The Listening Machine: Exploring the Benefits and Risks of Always-On Audio in AI Wearables

At CES, a small yellow bracelet caught my attention; at first glance it looked like a fitness tracker. In fact, the wearable, Bee AI's Pioneer, was quietly recording everything around its wearer. Unlike a traditional recorder app that simply stores audio, the device processes conversations, generating personalized to-do lists and summaries of in-person chats.

Just days before the trade show, the founder of a new company, Omi, shared details about its upcoming product, which was officially revealed at CES. So what does Omi do? It records everything in the environment to create an activity log, then uses AI to extract actionable insights and tasks from that information, functioning much like a personal assistant. While Omi's wearable can be worn around the neck, it is most effective when placed on the forehead, near the temple. The device contains an electroencephalogram (EEG) sensor, and Omi claims that if the wearer mentally focuses on speaking to the device, it will recognize the intent and activate to listen for commands.

This is the new reality we are entering: AI-powered wearables that constantly record the world around us. Voice assistants began as tools in speakers and smartphones, then quickly expanded to wrists and faces. Those devices required active engagement, either a tap or a wake word, to start listening. The next generation of hardware assistants, such as the upcoming Friend pendant, instead operates continuously in the background, passively absorbing everything it hears.

Despite their innovative capabilities, the hardware for these wearables is often quite affordable—Bee AI’s device costs just $50, and Omi’s wearable is priced at $89. The real value, however, lies in the software, which typically comes with a subscription. This software taps into multiple large language models to analyze conversations, providing detailed insights and transforming the way users interact with their day-to-day activities.

BEE AI

Bee AI was founded by Maria de Lourdes Zollo and Ethan Sutin, both former members of Squad—a company Sutin founded that enabled media screen sharing in video chats, allowing people to watch movies or YouTube videos together remotely. Squad was later acquired by X (formerly Twitter), where both Zollo and Sutin briefly worked on Twitter Spaces. Zollo also has experience working at Tencent and Musical.ly, which eventually became TikTok.

Sutin first had the idea for a personal AI assistant back in 2016, when chatbots were becoming popular, but the technology at the time wasn't advanced enough. Now things have changed. Bee AI launched its platform in beta last February, with an active community providing valuable feedback, and the company started selling its Pioneer hardware just over a week ago. The "Bee" name reflects the concept of ambient computing: an AI that buzzes around, absorbing and processing information. While Bee AI's hardware is not necessary to use the service (it's also available through an iPhone app), Zollo emphasizes that the wearable offers a more immersive experience, as it can record continuously throughout the day. An Android version of the app is expected by the end of the month.

The wearable itself is simple in design, featuring two microphones for noise isolation. Sutin says that if the wearer can hear a conversation in a busy environment, the wearable should be able to pick up both parties' voices clearly. The device can be worn as a wristband or clipped to a shirt, with a central "Action" button. Pressing this button once mutes the microphones; pressing it again reactivates them. Holding the button triggers actions, such as processing the current conversation or activating the "Buzz" AI assistant. Since there's no speaker on the wearable, responses are played through the phone. When the mic is muted, a red LED lights up. When it's recording, however, there is no visible indicator, which raises potential questions about recording-consent laws in different US states, even though the device doesn't store audio.

Bee AI doesn't yet process conversations locally on the phone, due to battery limitations, though Sutin notes that edge processing technology is improving. For now, the data is processed in the cloud, with Bee AI using a mix of commercial and open-source models, including OpenAI's ChatGPT and Google's Gemini, alongside some of the company's own models. The platform's target audience is people who "talk a lot for a living." For individuals who aren't engaged in constant conversation, there may not be much for the wearable to record unless they actively engage with it. But because it records throughout the day, it can surface valuable insights from past conversations. While the accuracy of the AI's understanding of who's speaking can vary, it can differentiate between voices and organize conversation transcripts accordingly. Users can assign names to the speakers and ask Bee AI to forget information if desired.
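The workflow described above amounts to a simple pipeline: the cloud returns a diarized transcript whose segments carry anonymous speaker IDs, and the user can then attach real names. Here is a minimal sketch of that relabeling step; the segment format, field names, and speaker IDs are all hypothetical, not Bee AI's actual data model:

```python
def relabel_speakers(segments, names):
    """Replace anonymous diarization speaker IDs with user-assigned names.

    IDs without an assigned name are left unchanged.
    """
    return [
        {**seg, "speaker": names.get(seg["speaker"], seg["speaker"])}
        for seg in segments
    ]

# Illustrative diarized transcript with anonymous speaker labels.
transcript = [
    {"speaker": "SPEAKER_00", "text": "Remember to take a picture for Mike."},
    {"speaker": "SPEAKER_01", "text": "Sure, I'll do it tomorrow."},
]

# The user has only identified the first voice so far.
labeled = relabel_speakers(transcript, {"SPEAKER_00": "Editor"})
```

Keeping the mapping separate from the transcript also makes "forget this person" easy to honor: deleting the name entry reverts the segments to an anonymous ID.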

In the app, users can review summaries of their conversations, with a map showing the locations where they occurred. The standout feature is the “To-Dos” tab, where tasks are automatically generated based on conversations. For example, after discussing taking a photo of a product with an editor, Bee AI created a to-do reminding the user to “Remember to take a picture for Mike.” While many of these to-dos may not be relevant, when they are, it feels almost magical. Bee AI also integrates with Gmail, Google Calendar, and Google Contacts, allowing users to request summaries of emails or upcoming calendar events, although this functionality wasn’t tested during the review.
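Bee AI uses large language models to pull action items like "Remember to take a picture for Mike" out of transcripts. As a rough illustration of the idea (not Bee AI's actual method), the same extraction can be approximated with a keyword heuristic; the cue phrases here are assumptions chosen for the example:

```python
# Hypothetical cue phrases that often introduce an action item.
ACTION_CUES = ("remember to", "don't forget to", "i'll", "can you")

def extract_todos(lines):
    """Return the action-item phrases found in a list of transcript lines."""
    todos = []
    for line in lines:
        lowered = line.lower()
        for cue in ACTION_CUES:
            idx = lowered.find(cue)
            if idx != -1:
                # Keep the text from the cue onward, minus trailing punctuation.
                todos.append(line[idx:].rstrip(".?!"))
                break
    return todos

conversation = [
    "Remember to take a picture for Mike.",
    "The weather was great at CES this year.",
    "Can you send me the summary tomorrow?",
]

todos = extract_todos(conversation)
```

An LLM-based extractor replaces the fixed cue list with a prompt, which is why it can catch implicit commitments a keyword match would miss, and also why its output is sometimes irrelevant.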

Bee AI operates on a freemium model: basic memory recall and summarization features are available with just the hardware, but to access advanced features, including third-party integrations, users must subscribe for $12 per month.

OMI

Nikita Shevchenko’s entrepreneurial journey began at the age of 14 when he started mining cryptocurrency. By 18, he had already sold his first company. His latest venture is Omi, a unique wearable device that can be worn either as a pendant or attached to the side of your forehead with medical tape. If you choose the latter (and don’t mind the quirky look), Shevchenko claims that Omi can sense when you’re thinking about speaking to it, allowing the device to engage with you simply by your thoughts.

Although I haven’t had the chance to try it, Shevchenko explains that Omi is trained to recognize specific brainwaves associated with focusing on talking to it. This means instead of needing a wake word, you can just think about interacting with the device. However, this brainwave activation is only for when you actively want to engage with the device. For the rest of the time, Omi functions as a wearable microphone, recording and processing your conversations throughout the day, similar to Bee AI. With this ability, Omi can transcribe, summarize, add events to your calendar, and even translate.

Processing occurs on the paired phone and in the cloud, so Omi is not a stand-alone piece of hardware, unlike products such as the Humane Ai Pin. Shevchenko notes that Omi is open source, though it currently relies on ChatGPT. A standout feature of Omi is its marketplace for third-party apps. These "apps" are more like mods or skills that deepen Omi's integration with everyday services. For instance, one app stores all conversation summaries in a Google Drive folder at the end of the day. Developers can publish their apps to Omi's store and choose to make them free or charge for them. Since Shevchenko shipped 5,000 early units to developers last year, there are already dozens of apps available.

One of Shevchenko’s long-term goals for Omi is to enable users to create AI clones of themselves. These clones would be capable of interacting with followers, answering questions, and even being published for others to interact with, potentially earning users some extra income. This concept is already being explored through Omi’s Personas platform, where users can create AI clones of popular Twitter personalities to chat with.

In contrast to the Bee AI wearable, Omi features a light that signals when it's recording and processing conversations, giving bystanders at least a measure of implied consent. The battery lasts up to three days, and, like Bee AI, Omi creates task lists based on your conversations. Every night, it sends a personalized action plan for the next day. It even offers mentorship; for example, after a job interview, it could provide a summary with suggestions for improvement.

Looking further into the future, Shevchenko envisions Omi being able to “read the brain” to understand your thoughts. Unlike Neuralink, which requires a brain implant, Shevchenko’s goal is to achieve this by adding more electrodes to the head. Although he has already demonstrated a more complex version of this system that can construct two words, there is still a long way to go before it reaches full functionality.

Omi is available to order now for $89, with shipments expected in a few weeks.

HumanPods

If you're intrigued by the idea of an AI-powered wearable that's always listening but have concerns about privacy, it's reassuring to know that not all new AI wearables adopt the "always-on" model.

Natura Umana, a spin-off from the team behind the Swiss accessory brand Rolling Square, has created a pair of wireless earbuds called HumanPods. While these earbuds do have microphones, they don’t continuously listen. Instead, you need to double-tap the earbud to activate the onboard AI.

Like the Omi and Bee AI Pioneer devices, HumanPods are meant to be worn throughout the day, though their battery lasts only a single day. Unlike traditional in-ear earbuds, they hang on the outside of your ears rather than sealing the ear canal, and during a brief testing session at CES they felt quite comfortable. The system also uses multiple large language models, but the key difference is that these earbuds are designed for you to talk to the AI directly, rather than having it passively listen to your surroundings.

HumanPods feature several AI avatars that you can chat with, each specializing in different areas. For example, Athena is a fitness and health AI persona that, when connected to your fitness apps and devices, can suggest workouts based on your health data, such as sleep patterns and heart rate. Another AI persona, Hector, is an “AI therapist.” I had a conversation with Hector about the stress of CES, and he recommended ways to make the event less overwhelming, such as focusing on just a few companies to engage with. However, as company founder Carlo Edoardo Ferraris points out, Hector is not a licensed therapist and comes with disclaimers.

Ferraris envisions a future where there’s an AI persona for every need, with a marketplace where people can publish their own personas—such as an AI therapist created by a mental health startup.

The earbuds are expected to launch in the first quarter of this year, with Android support either launching simultaneously or by the second quarter. While a firm price hasn’t been set, it is expected to be around $100, with a subscription fee for access to premium features.

These AI wearables come after the failed launch of the Humane Ai Pin, one of 2024’s most anticipated tech products. Shevchenko, the creator of Omi, believes his company is in a better position than Humane, thanks to its launch with a variety of apps to expand the capabilities of its AI assistant. Ferraris believes Natura Umana’s wearable will have a better chance of success, noting that wireless earbuds are a more familiar and straightforward technology, and the app is designed similarly to a messaging app—something people are already comfortable with.

While it’s true that many wearables overpromise and underdeliver, these “always-listening” AI devices may eventually become more useful as the technology matures. A microphone that’s always on could quickly become the new normal, though the privacy concerns surrounding such devices are sure to generate debate. What remains to be seen is whether those privacy alarms will be loud enough to slow this march toward an always-listening future.
