The Blind Truths: 6½ Things Every Builder of AI Glasses Needs to Know

Most “glasses for the blind” completely miss the point. Not because people don’t care — but because they don’t understand what blind people really need. These truths are based on my lived experience, my experiments with AI wearables, and my vision for what comes next.

If you’re serious about building the future of blindness, read on.

👓 Blind Truth #1: Glasses Are Just the Mount

Everyone’s building “glasses for the blind” — but most don’t seem to understand what glasses really are.

Why glasses? Because they’re the best place to put a camera (or two). That’s it.

  • Camera(s) near the eyes
  • Speakers near the ears
  • Comfortable, stylish, and able to hold whatever lenses the user needs (or none at all)

Everything else — the smarts, the computer vision, the natural language model — can live somewhere else. In the cloud. In your pocket. It’s not about the hardware. It’s about giving blind people Sight as a Service.

And yet, so many people keep reinventing the same underpowered, clunky wearable because they miss this basic design truth.

🕶️ Blind Truth #2: The Glasses Don’t Have to Be Smart — You Are

Everyone’s obsessed with making “smart glasses for the blind,” but here’s the real trick: the glasses don’t have to be that smart. You already carry the brains around with you.

  • You already own a smartphone.
  • That phone already runs the apps, connects to services, and has your preferences and accounts.
  • So the glasses? They just need to be a sleek, wearable access point.

Take Meta Ray-Bans. They’re not “for blind people.” They’re just a great camera-and-audio front-end for services like Be My AI, Aira, ChatGPT, or Seeing AI. They delegate the thinking to the phone or the cloud.

This is probably the way to go:

  • 👉 Assume the user has a smartphone.
  • 👉 Assume the user already uses AI tools.
  • 👉 Make the glasses a great input/output layer for those tools.

Let’s stop stuffing underpowered CPUs into heavy, clunky glasses. It’s not about self-contained gadgets. It’s about seamless connection to powerful services.

Sight as a Service. Again.
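
To make that concrete, here’s a minimal sketch of the division of labour, in Python. Everything in it is hypothetical: capture_photo, speak, and the describe endpoint are stand-ins for whatever camera, text-to-speech, and vision service a real product would actually use. The point is how little work the glasses themselves need to do.

```python
# Hypothetical "thin glasses" loop: the frames only capture and speak.
# All of the understanding is delegated to a service on the phone or in the cloud.

import requests

# Placeholder URL for whatever Sight-as-a-Service API the phone talks to.
VISION_ENDPOINT = "https://example.com/describe"

def capture_photo() -> bytes:
    """Stand-in for the glasses' camera: return one JPEG frame as raw bytes."""
    with open("snapshot.jpg", "rb") as f:
        return f.read()

def speak(text: str) -> None:
    """Stand-in for the glasses' speakers (or the phone's text-to-speech)."""
    print(f"[spoken] {text}")

def describe_image(image: bytes) -> str:
    """Hand the image to a remote vision service and return its description."""
    response = requests.post(VISION_ENDPOINT, files={"image": image}, timeout=30)
    response.raise_for_status()
    return response.json()["description"]

if __name__ == "__main__":
    # The entire job of the glasses, end to end: look, ask, listen.
    speak(describe_image(capture_photo()))
```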

🤖 Blind Truth #3: It’s All About the Model, Baby

Once your glasses have a camera and a speaker — and once they’re talking to something smart — the real question becomes:

What model are they talking to?

If your glasses connect via your phone (which makes total sense), then you’re playing in Apple or Google’s playground. That means:

  • You follow their rules.
  • You use their permissions.
  • You’re limited (or empowered!) by their frameworks.

But regardless of where the computing happens — phone, cloud, custom box — the crucial step is this:

📸 Get the image into a vision-language model.

That’s where the magic happens. And the models are improving fast. Yesterday’s alt text is today’s scene description is tomorrow’s situational awareness feed.

Right now, it’s still image-by-image — but we’re just a breath away from real-time video flowing into a model. An always-on vision companion. That’s not five years out. That’s months.
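
For builders, that whole step can be surprisingly short. Here’s a sketch using the OpenAI Python SDK as one example of a vision-language model behind an API; any model that accepts an image would slot in the same way, and the prompt wording and file name are just illustrations.

```python
# Sketch: push a single frame into a vision-language model and read back a description.
# Uses OpenAI's chat completions API as one example; prompt and model choice are illustrative.

import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def describe_frame(jpeg_path: str,
                   prompt: str = "Briefly describe this scene for a blind pedestrian.") -> str:
    with open(jpeg_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

print(describe_frame("street.jpg"))
```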

So here’s the real question for blind users (and builders):

🧠 What model do you want to trust with your sight?

Do you pick a platform? A provider? A vibe?

Because at the end of the day, the model your glasses use will shape what you know, what you notice, what you see.

🧭 Blind Truth #4: The Hard Part Comes Next

Once you’ve got a camera and a speaker, and a pipeline to a vision-language model, you’ve done the easy bit.

The hard part?

  • ➡️ Figuring out what the model should actually say.
  • ➡️ Figuring out how blind people want to receive that information.
  • ➡️ Figuring out how to describe a world you can’t see in a way that’s actually useful.

This is where most inventors — even well-meaning ones — hit a wall. Because this isn’t about object recognition. This is about insight. Glance. Awareness. Safety. Vibe.
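
To show where the effort actually sits: the plumbing below is trivial, but choosing these profiles, wording them, and knowing when each one is wanted is the part that has to be co-designed with blind users. The profile names and prompts here are invented for illustration, not a recommendation.

```python
# Hypothetical description "profiles". Writing the code takes minutes;
# getting the categories and the wording right is the real, co-designed work.
DESCRIPTION_PROFILES = {
    "glance": "In one short sentence, what is directly in front of me?",
    "scene_summary": "Describe this space in under 40 words: layout, exits, obstacles, people.",
    "mood": "What is the atmosphere here: busy, calm, formal, casual?",
    "safety": "Mention only things that could be a hazard to a blind pedestrian. If nothing, say 'clear'.",
}

def build_prompt(profile: str) -> str:
    # Unknown requests fall back to a quick glance.
    return DESCRIPTION_PROFILES.get(profile, DESCRIPTION_PROFILES["glance"])
```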

And to be honest? As a blind person, I don’t trust anyone to get this right unless they’ve lived it — or are listening very, very closely to those who have.

You can’t just “brainwave” your way into this part. You have to work. You have to test. You have to co-design. And you have to be humble enough to admit you don’t know what a blind user needs from their second look, or their scene summary, or their sense of a room’s mood.

We’re not just talking about tools anymore. We’re talking about trust, judgement, independence.

The stuff that makes us feel safe — or not.

🧑‍🦯 Blind Truth #5: Blind People Aren’t Experts in Blindness

One of the biggest mistakes developers make is this:

They find one blind person (maybe two, maybe ten), show them a demo, get a thumbs-up… and assume they’ve cracked it.

But here’s the truth:

🧠 Blind people are experts in themselves — in their version of blindness, and in how they live with it.

They are not automatically experts in every form of blindness. They are not automatically UX designers or rehab specialists. And they definitely don’t represent the whole community.

Because “blindness” is not one experience. It’s a spectrum:

  • From people who’ve never had sight, and never missed it
  • To someone losing vision in their 70s after a fully sighted life
  • To low vision folks who still rely heavily on what sight they have
  • To tech power users with voice commands and custom gestures
  • To people who’ve never used a screen reader in their life

🚫 Don’t rush off and build something just because one or two blind people loved it.

You might get burned when others don’t see the same value — and you might feel hurt, confused, or rejected.

Instead, widen your lens. Stay curious. Test across the spectrum.

And most importantly: keep asking, What kind of blind person is this for?

Because there’s no such thing as “the blind user.” There’s only people — individuals — with different needs, different tools, and different goals.

🌊 Blind Truth #6: We’re Not Vision-First — But That Could Change

Here’s something that outsiders don’t often get:

Most blind people don’t live their lives waiting for vision. We’re not “sight-deprived.” We’re adapted. We’ve learned how to navigate the world without looking.

I’ve got Meta Ray-Bans. They’re great. But honestly? I forget to use them. Because I’ve got 40 years of not-looking behind me. Looking was never part of the plan.

That’s not resistance. That’s habit. That’s life.

But here’s the twist:

If the glasses ever get good enough — good enough to be always on, always accurate, always useful — they could trigger a phase transition.

A tipping point.

Not “blind people using glasses,” but something deeper:

📈 A step-function shift where the experience of blindness changes.

Where we become, in some ways, functionally partially sighted. Not because we see — but because we’re seeing enough to behave differently.

That won’t happen because of marketing. It won’t happen because of hype. It’ll happen when the tech is so good we stop forgetting to use it.

That’s the bar.

💡 One More Thing: Control Is the Real Interface

If we do get good enough glasses, and if the camera is reliable, and if the model understands what it’s seeing, and if it knows how to describe it…

There’s still one last thing that will decide whether blind people actually use it:

🕹️ Control.

Not just what the system says — but when, how, and whether it says anything at all.

Different blind users will want different things:

  • Constant updates. A running commentary.
  • Silence unless something changes.
  • Just a quick glance when they ask for it.

That means this technology needs to be more than smart. It needs to be respectful. Customisable. Context-aware.
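
As a rough sketch of what giving the user the final say could look like in code: the three modes mirror the list above, and the names and logic are invented for illustration only.

```python
from enum import Enum

class Verbosity(Enum):
    COMMENTARY = "commentary"   # constant updates, a running commentary
    ON_CHANGE = "on_change"     # silence unless something changes
    ON_DEMAND = "on_demand"     # a quick glance only when asked

def should_speak(mode: Verbosity, scene_changed: bool, user_asked: bool) -> bool:
    """The user's chosen mode, not the model, decides whether anything gets said."""
    if mode is Verbosity.COMMENTARY:
        return True
    if mode is Verbosity.ON_CHANGE:
        return scene_changed or user_asked
    return user_asked  # ON_DEMAND: only on an explicit request
```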

It needs to give you, the user, the final say — because trust isn’t just about accuracy.

Trust is about control.

And that might be the real interface of the future.

1 thought on “The Blind Truths: 6½ Things Every Builder of AI Glasses Needs to Know”

  1. Well written, Charlie! I think Martin, the PiccyBot guy, has a great feature where you can choose from different AIs depending on your needs. At this point in development each one has its strengths and weaknesses, but I am hoping this will become somewhat more of a commodity in the next little while so we don’t need to choose which AI we want to use. I also think your point about verbosity and context was super important. Making the AI understand your immediate need, whether that’s navigation, a scene description, or face recognition, is important, but it’s undoubtedly hard to do all three at the same time.
