During China’s National Day holiday, a new trend emerged among travelers, from the Great Wall to Tokyo’s bustling streets: many tourists were wearing not sunglasses but AI glasses. Some used them to capture scenic moments, others to translate menus or identify historical artifacts. The appeal was simple yet powerful: keeping your hands free while staying connected.
However, before AI glasses can evolve into the next “super device” like smartphones, they must overcome several major hurdles.

Who’s Buying AI Glasses?
Major tech brands are betting heavily on this new category. Market forecasts put China’s AI glasses shipments at 2.9 million units in 2025, a staggering 121% year-on-year increase. But who are the real buyers?
One outdoor enthusiast shared her experience—she uses AI glasses primarily for photography. “When cycling, I want to capture sunsets instantly. Pulling out a phone ruins the moment. With AI glasses, I just look, and it records,” she said. For her, instant capture matters more than photo quality.
Others are embracing AI glasses for health management. A user on a diet program described how his glasses help count calories and suggest meals in the company cafeteria. “I just tell them how much protein or carbs I want,” he explained. “The system isn’t perfect; it sometimes misidentifies dishes. But overall it works fine.”
Despite the enthusiasm, users have voiced some common complaints, with battery life and comfort topping the list. Most models last 8–9 hours under light use, but continuous recording or audio playback cuts that to anywhere from 40 minutes to 3 hours. For a wearable device, that is a serious limitation.
Comfort remains subjective. Like regular glasses, fit varies by face shape. Current AI glasses often feature bulky frames that can feel heavy or slip easily. Some users have also reported lost videos, failed voice activation, or audio leakage.

Surprisingly, one emerging user group is the visually impaired. Some AI glasses now feature a “Be My Eyes” mode, where AI can describe surroundings in detail via voice feedback—identifying obstacles, locating objects, or guiding navigation. One user praised this feature but wished the glasses could also read text using OCR technology, a function still missing in many models.
While not perfect, this innovation shows how AI technology can enhance accessibility, offering a glimpse of human-centered progress.
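To give a sense of what such a text-reading feature might involve under the hood, here is a minimal sketch. It assumes two widely used open-source libraries, pytesseract for OCR and pyttsx3 for offline text-to-speech; the helper function and file name are hypothetical illustrations, not drawn from any shipping AI glasses product.

```python
# Hypothetical sketch: reading printed text aloud from a captured camera frame.
# Assumes the open-source pytesseract (Tesseract OCR) and pyttsx3 (offline TTS)
# libraries; illustrative only, not based on any actual AI glasses software.
from PIL import Image
import pytesseract
import pyttsx3

def read_text_aloud(image_path: str) -> str:
    """Extract text from a captured frame and speak it back to the wearer."""
    frame = Image.open(image_path)             # frame saved by the glasses camera
    text = pytesseract.image_to_string(frame)  # run OCR on the frame
    text = " ".join(text.split())              # collapse whitespace for smoother speech
    if text:
        engine = pyttsx3.init()                # offline text-to-speech engine
        engine.say(text)
        engine.runAndWait()
    return text

if __name__ == "__main__":
    print(read_text_aloud("menu_photo.jpg"))   # e.g. a photo of a restaurant menu
```

On a real pair of glasses, a pipeline like this would run on-device or on a paired phone, with the live camera feed taking the place of the saved image file.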
AI Glasses Aren’t a New Idea
AI glasses may seem futuristic, but the concept isn’t new. Over a decade ago, a major tech company launched smart glasses capable of display projection, translation, and voice control. The idea was groundbreaking—but the execution failed. The glasses cost nearly $1,500, caused eye strain, and lacked must-have features that justified daily use.
Today, the landscape looks different. Display technology has improved, offering higher resolution and wider fields of view. Costs have dropped—one high-end model’s core components are now 35% cheaper than in 2023. With some AI glasses now available for under $150, they’ve become accessible “tech toys,” even for casual users.
But this market isn’t just about hardware anymore. It’s about ecosystem positioning. Leading brands are pushing “human-car-home” integration, linking AI glasses to a broader smart ecosystem—phones, vehicles, and home devices—all managed through one platform.
Given that, by common estimates, the head takes in more than 80% of our sensory input, AI glasses are well positioned to become the next major human-computer interface, collecting contextual data that smartphones can’t.

This explains why both hardware makers and software developers are racing to claim the “entry point.” Whether it’s using glasses to make payments, browse short videos, or access AI assistants, everyone wants a piece of the future interface.
Can AI Glasses Replace Smartphones?
A famous tech theory suggests that future devices will move toward “de-physicalization”—one product replacing many. The smartphone already replaced cameras, MP3 players, and wallets. Could AI glasses follow the same path?
Some previous attempts, like XR headsets, aimed to replace laptops but failed due to bulkiness and poor battery life. AI glasses face similar challenges—they complement smartphones but don’t yet replace them. In essence, they remain an additional gadget, not an essential one.
Still, innovation continues. A recent model introduced visual interaction and gesture control through a paired wristband, letting users scroll feeds or play music with hand movements. The concept is promising, expanding both what the glasses can do and how often people reach for them, but integration barriers remain. Hardware manufacturers and app developers are competing to own the same ecosystem, making collaboration difficult.
So, will something even more advanced come next? Perhaps—a future where a coin-sized chip enables direct VR immersion, as imagined in science fiction. But for now, AI glasses must focus on improving usage frequency, comfort, and ecosystem integration.
Without those improvements, they risk becoming another niche gadget, like the once-beloved Kindle—useful, but forgotten.

Conclusion
This holiday season, AI glasses became the talk of the town. From travelers recording adventures to visually impaired users gaining newfound independence, their potential is undeniable. Yet, beneath the excitement lie unresolved issues—short battery life, bulky design, and limited utility.
For now, AI glasses remain more of an “extra device” than a smartphone replacement. But with rapid advancements in AI and display technology, they might still evolve into the next must-have gadget—if they can truly merge function, comfort, and ecosystem into one seamless experience.