Yacine Achiakh is the co-founder of Wisear, a company pioneering non-invasive neural interfaces that fit in your ears and read your brain, eye, and facial muscle activity to control any device hands-free and voice-free.
In this interview, we dive into the technology behind neural interfaces, the challenges of building AI-powered wearables, and the launch date and price that left me speechless.
Before getting into the interview, I wanted to quickly introduce you to today’s sponsor Gracia AI. Gracia AI is the only app that allows you to experience Gaussian Splatting volumetric videos on a standalone headset, either in VR or MR.
It is a truly impressive experience, and I recommend trying it out right now on your Quest or PC-powered headset.
Interview with Yacine Achiakh
How do Wisear’s earbuds actually work?
Yacine Achiakh: What we do is embed small sensors in the earbuds, which capture electrical activity from your brain, eyes, and facial muscles.
Your body is like a big battery, and every movement, thought, or muscle activation generates a specific electrical signal. Our sensors detect these signals and then use embedded AI algorithms to interpret them in real time. This allows you to control your smartphone, laptop, or even an AR/XR headset without using your hands or voice.
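To make that capture-then-interpret loop concrete, here is a minimal sketch of what such a pipeline could look like. To be clear, the gesture labels, sampling rate, and sensor API below are my own assumptions for illustration, not Wisear’s actual implementation.

```python
import numpy as np

# Hypothetical gesture labels; Wisear's actual label set is not public.
GESTURES = ["idle", "jaw_clench", "glance_left", "glance_right"]

def classify_window(window: np.ndarray) -> str:
    """Toy stand-in for the embedded AI model: maps a short window of
    multi-channel electrical activity to a gesture label."""
    # A real system would run a trained neural network here; this just
    # derives a label from per-channel signal energy for illustration.
    energy = np.abs(window).mean(axis=1)
    return GESTURES[int(energy.argmax()) % len(GESTURES)]

def stream_loop(sensor, on_gesture, fs=250, window_s=0.2):
    """Continuously read sensor samples and fire a callback per gesture."""
    n = int(fs * window_s)              # samples per analysis window
    while True:
        window = sensor.read(n)         # assumed shape: (channels, n)
        gesture = classify_window(window)
        if gesture != "idle":
            on_gesture(gesture)         # e.g. trigger a click on the device
```

The key point the sketch illustrates is that everything runs in a tight loop on-device, so a detected signal can translate into a control action in real time.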
Can you tell us about these sensors?
Yacine Achiakh: Our sensors are made from conductive polymers, similar in shape to the tips of regular earbuds. But unlike standard polymers, they conduct electricity and capture neural signals.
Technically, you could place these sensors anywhere on your body and capture electrical activity. For example, Meta’s wristband interface (from their acquisition of CTRL-Labs) detects signals from the hand. Similarly, by placing our sensors near the head, we capture brain and facial muscle activity.
The great thing is that it works for everyone. Despite individual differences, we all have brains, facial muscles, and nerves that function in a predictable way. This allows us to train our AI models to work universally, without requiring extensive calibration per user.
How does Wisear’s approach compare to other neurotech solutions like Neuralink?
Yacine Achiakh: The quest to measure electrical activity from the brain and body has been going on for over two centuries. Today, there are two main approaches:
Invasive Neural Interfaces: These involve surgically implanting electrodes into the brain (like Neuralink). This provides very precise, neuron-level data but is a highly complex and risky approach.
Non-Invasive Neural Interfaces: These place sensors outside the body, capturing broader brain activity without surgery. While less precise than invasive methods, they are practical, safe, and scalable.
Wisear falls into the non-invasive category, but our key innovation is miniaturization. Traditional EEG headsets are bulky and impractical. We spent five years optimizing our technology to fit into a regular pair of earbuds, making it comfortable, portable, and easy to use.
So what exactly can you do with Wisear’s earbuds?
Yacine Achiakh: The goal is to provide hands-free, voice-free control for AR/XR devices, smartphones, and more. We’ve broken this down into three key interaction types:
Selection: This replaces a mouse click or button press. We use facial muscle contractions, such as a subtle jaw clench, to trigger an action.
Navigation: We detect eye gestures, like glancing left or right, to scroll through apps or control music (e.g., skipping a song on Spotify).
Silent Speech: This is a more advanced feature we’re working on, where we capture subvocalized speech (moving your lips without sound) and convert it into text or commands.
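As a rough illustration of how those three interaction types could map onto device actions, here is a small hypothetical dispatch table; none of these identifiers come from Wisear’s SDK.

```python
# Hypothetical mapping from detected gestures to the interaction types
# described above. All names are invented for illustration.
ACTIONS = {
    "jaw_clench":   lambda dev: dev.click(),           # Selection
    "glance_left":  lambda dev: dev.previous_track(),  # Navigation
    "glance_right": lambda dev: dev.next_track(),      # Navigation
    # Silent speech would add entries that emit text instead of clicks.
}

def on_gesture(gesture: str, device) -> None:
    handler = ACTIONS.get(gesture)
    if handler:
        handler(device)
```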
How difficult is it for new users to learn these controls?
Yacine Achiakh: If you want people to adopt a new human-computer interface, it has to be intuitive and fast to learn.
Take the Apple Vision Pro, for example. Apple designed the gesture-based UI so that new users instantly understand how to interact with it. We approached Wisear’s earbuds the same way: there is no need for calibration, and onboarding takes under two minutes.
How do you prevent false triggers or accidental selections?
Yacine Achiakh: That’s a huge challenge in any neural interface. We prevent accidental activation in two key ways:
Distinct gestures – A jaw clench is not something you do unintentionally. It’s distinct from talking, chewing, or yawning, which prevents false triggers.
Contextual activation for eye gestures – Eyes are meant for exploration, not just control. So instead of always being active, eye gestures are contextually triggered (e.g., only while navigating a menu or answering a call).
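A simple way to picture that contextual gating: keep eye gestures disabled unless the app is in a state where they make sense, while the always-distinct jaw clench stays always-on. The sketch below uses invented names and is not Wisear’s code.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Tracks whether the UI is in a state where eye gestures make sense."""
    menu_open: bool = False
    incoming_call: bool = False

    def eye_gestures_active(self) -> bool:
        return self.menu_open or self.incoming_call

def handle_gesture(gesture: str, ctx: Context, device) -> None:
    # Jaw clenches are distinct enough to stay always-on...
    if gesture == "jaw_clench":
        device.click()
    # ...while eye gestures only fire in the right context, so ordinary
    # glancing around never triggers an accidental action.
    elif gesture.startswith("glance_") and ctx.eye_gestures_active():
        device.navigate(gesture)
```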
Can you show a real-time demo of how Wisear works?
Yacine Achiakh: Check out the live demo of Yacine playing Tetris with his eyes in the full interview (the video will start at the precise timestamp 😉).
Who is your target market?
Yacine Achiakh: Our long-term goal is to make Wisear’s neural interface universal, but for our first product launch (end of 2025), we’re targeting the accessibility market. Many people with limited hand mobility struggle to use smartphones, computers, or AR/VR devices. Our technology enables them to control devices effortlessly, unlocking new levels of independence. That said, anyone can buy and use Wisear’s earbuds—they will work for all consumers, just like a pair of AirPods.
What’s the price and release date for Wisear’s earbuds?
Yacine Achiakh: We’re launching in Q4 2025, and the price will be $250, in line with premium earbuds like the AirPods Pro.
That’s it for today. Don’t forget to subscribe to the newsletter if you found this interesting!