Meta Aria Gen 2 smart glasses look like a serious upgrade over the Ray-Ban Meta

What you need to know

  • Meta announced Aria Gen 2 glasses on Thursday, a research-focused design for “machine perception, egocentric and contextual AI, and robotics.”
  • These glasses have SLAM and eye-tracking cameras, force-canceling speakers, microphones that can distinguish the wearer's voice from others, continuous heart-rate monitoring, and GNSS location tracking.
  • They weigh 75g, lighter than the 98g Orion but heavier than the 50g Ray-Ban Metas, and last 6–8 hours on a charge with no apparent external wire or battery pack.
  • Academic and commercial research labs can start testing Aria Gen 2 in early 2026, but they won’t ship to consumers.

As we wait impatiently for Meta's next generation of AR and smart glasses, the company is tantalizing us with its new Aria Gen 2 glasses. Very few people will ever get to try them, but they sound technically impressive and make us excited for the inevitable consumer version.

A follow-up to Meta's original Aria glasses introduced in 2020, the Aria Gen 2 glasses have no AR holographic capabilities, but they are crammed full of RGB, 6DOF SLAM, and eye-tracking cameras, plus GNSS tracking, to interpret the world around you along with your gaze and actions.

On the Meta blog, you can see a pair of videos of the Aria Gen 2 in action, including an impressive demo of a visually impaired woman using the familiar “Hey Meta” command to have Aria Gen 2 guide her with audio cues to the produce section of a grocery store to find apples. It looks like a next-gen version of the Be My Eyes accessibility tool for Ray-Ban Meta smart glasses.

The deconstructed Aria Gen 2 smart glasses components (Image credit: Meta)

“Project Aria from the outset was designed to begin a revolution around always-on, human-centric computing,” says Richard Newcombe, VP of Meta Reality Labs Research. The first-gen version was recently used to capture visual data for a robot, helping it learn to perform human tasks by translating what we normally see and hear into data a robotic AI can use.
