Facebook says it’s getting closer to eliminating one of the tech world’s biggest problems: namely how ridiculously long it’s taken me to type this sentence.
OK, so it only took me about 30 seconds, but the words formed in my head in a fraction of that time. This problem of human latency is a key hurdle for tech giants like Google and Facebook that are looking for new ways to grow by shoving ever-more petabytes of data into our brains, and vice versa.
Two years ago, Facebook announced it was working on a non-invasive wearable device that would allow users to type by imagining themselves speaking the words. The hope is that such a device can be used as an input interface for augmented reality glasses.
As part of this effort, Facebook has been funding a team of researchers at the University of California, San Francisco (UCSF) working to help patients with neurological damage speak again by detecting imagined speech in real time.
The team published its results in the latest issue of Nature Communications, and although the patients it worked with each had implanted electrodes measuring brain activity, the demonstrated ability to decode a small set of words and phrases in real time represents a significant breakthrough.
Facebook hopes the work of the UCSF team will serve as a proof of concept to inform the development of the non-invasive wearable it dreams of pairing with AR glasses.
“We’re a long way away from being able to get the same results that we’ve seen at UCSF in a non-invasive way,” reads a Facebook blog post detailing its efforts. “It could take a decade, but we think we can close the gap.”
Karen Panetta, IEEE Fellow and Dean of Graduate Engineering at Tufts University, agrees that Facebook’s ambitions are feasible.
“If we can now measure signals in the brain via implantable devices, then we can transmit those signals outside of the brain.”
Facebook thinks a promising way to make the leap from “reading minds” via wired electrodes to a wireless system is to measure changes in oxygen levels in the brain using infrared light, much as a pulse oximeter at a doctor’s office does.
“This could work, though I am afraid that the rates (timing) of oxygenation processes are much lower than the actual rate at which speech is produced,” Josep Jornet, a professor of electrical engineering at the University at Buffalo, told me. “Certainly more work is needed, but this is what research is about and should be promoted.”
Todd Richmond, an IEEE member and Director of the Tech and Narrative Lab at the Pardee RAND Graduate School in Santa Monica, says “having a viable capability in the lab” to wirelessly send brain signals to a computer could be less than five years away.
“It will likely take longer to move from the lab to commercial deployment for a variety of reasons,” he adds.
Richmond thinks the first hurdles will be solving technical problems to make the system lighter, smaller, faster and, essentially, more practical. Next comes the process of refining the user experience to make brain interfaces a necessity rather than a novelty.
“The third set of developments will be around improving accuracy, efficacy, and safety,” he explains. “Like any consumer product, we’ll need to sort out what agencies are looking at what aspects of how devices impact humans, both individually and from a societal level.”
I’ve covered science, technology, the environment and politics for outlets including CNET, PC World, BYTE, Wired, AOL and NPR. I currently produce the Warm Regards podcast and I’ve written e-books on Android and Alaska.
I began covering Silicon Valley for the now defunct Business 2.0 Magazine in 2000, but when the dot-com bubble burst, I found myself manning a public radio station in the Alaskan Bush for three years.
Upon returning to the lower 48, I covered politics, energy and the environment as a freelancer for National Public Radio programs and spent time as an online editor for AOL and Comcast. For the past 7 years, I’ve returned to focusing on the world of technology.