Now the answer is in, and it’s not close at all. Four years after announcing a “crazy and surprising” project to create a “silent speech” interface that would use optical technology to read thoughts, Facebook is shelving the project, saying consumer brain reading is still a long way off.
In a blog post, Facebook says it’s ending the project and will instead focus on an experimental wrist controller for virtual reality that reads muscle signals in the arm. “While we still believe in the long-term potential of head-mounted optical [brain-computer interface] technologies, we’ve decided to focus our immediate efforts on a different neural interface approach that has a nearer-term path to market,” the company says.
Facebook’s brain-typing project took the company into uncharted territory, including funding brain surgeries at a California hospital and building prototype helmets that could shoot light through the skull, and it sparked fierce debate over whether tech companies should have access to brain information. Ultimately, the company seems to have decided that the research won’t lead to a product any time soon.
“We got a lot of hands-on experience with these technologies,” says Mark Chevillet, the physicist and neuroscientist who until last year headed the silent-speech project but recently switched roles to study how Facebook handles elections. “That’s why we can say confidently that, as a consumer interface, a head-mounted optical silent-speech device is still a long way out. Probably longer than we anticipated.”
Mind reading
The reason for the buzz around brain-computer interfaces is that companies see mind-controlled software as a major breakthrough, as important as the computer mouse, the graphical user interface, or the swipe screen. What’s more, researchers have already demonstrated remarkable results when electrodes are placed directly in the brain to tap individual neurons. Paralyzed patients with such “implants” have deftly moved robotic arms and played video games or typed via mind control.
Facebook’s goal was to turn such findings into a consumer technology anyone could use, which meant a helmet or headset you could put on and take off. “We never had any intention of making a neurosurgical product,” Chevillet says. Given the social giant’s many regulatory problems, CEO Mark Zuckerberg had once said that the last thing the company should do is crack open skulls. “I don’t want to see congressional hearings about this,” he joked.
In fact, as brain-computer interfaces advance, serious new concerns are emerging. What would it mean if big tech companies knew what people were thinking? In Chile, legislators are even considering a human-rights bill to protect brain data, free will, and mental privacy from technology companies. Given Facebook’s poor track record on privacy, the decision to discontinue this research may have a side benefit: putting some distance between the company and growing concerns about “neurorights.”
Facebook’s project was aimed specifically at a brain controller that could mesh with its ambitions in virtual reality; it bought Oculus VR for $2 billion in 2014. Chevillet says the company took a two-pronged approach to get there. First, it needed to determine whether a thought-to-speech interface was even possible. For that, it sponsored research at the University of California, San Francisco, where a researcher named Edward Chang placed electrode pads on the surface of people’s brains.
While implanted electrodes read data from single neurons, this technique, called electrocorticography, or ECoG, measures fairly large groups of neurons at once. Chevillet says Facebook hoped it might also be possible to detect equivalent signals from outside the head.
The UCSF team has made surprising progress and reports today in the New England Journal of Medicine that it used these electrode pads to decode speech in real time. The subject was a 36-year-old man the researchers call “Bravo-1,” who lost the ability to form intelligible words after a severe stroke and can only grunt or moan. In their report, Chang’s group says that with the electrodes on the surface of his brain, Bravo-1 has been able to form sentences on a computer at a rate of about 15 words per minute. The technology involves measuring neural signals in the part of the motor cortex associated with Bravo-1’s efforts to move his tongue and vocal tract as he imagines speaking.
To reach that result, Chang’s team asked Bravo-1 to imagine saying one of 50 common words nearly 10,000 times, feeding the patient’s neural signals into a deep-learning model. After training the model to match words with neural signals, the team was able to identify the word Bravo-1 was thinking of saying 40% of the time (chance performance would be about 2%, or 1 in 50). Even so, his sentences were full of errors: “Hello, how are you?” might come out as “How are you hungry?”
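As a rough illustration of what this classification step involves, here is a minimal sketch in Python. It is not the UCSF team’s actual pipeline: the feature size, network shape, and synthetic data are all assumptions, standing in for features extracted from motor-cortex ECoG recordings.

```python
# Minimal sketch of the word-classification step described above.
# Shapes, names, and data are illustrative assumptions, not the UCSF pipeline:
# we pretend each imagined-word attempt yields a fixed-size feature vector.
import torch
import torch.nn as nn

N_WORDS = 50          # vocabulary of common words
N_FEATURES = 256      # assumed size of the per-attempt neural feature vector
N_TRIALS = 10_000     # roughly the number of imagined attempts reported

# Synthetic stand-in data; real inputs would be features computed from
# recordings made while the patient imagines speaking each word.
features = torch.randn(N_TRIALS, N_FEATURES)
labels = torch.randint(0, N_WORDS, (N_TRIALS,))

model = nn.Sequential(
    nn.Linear(N_FEATURES, 128),
    nn.ReLU(),
    nn.Linear(128, N_WORDS),  # one logit per candidate word
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Train the model to match neural features to the word being attempted.
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()

# At decode time, the highest-scoring word is the prediction. With random
# data this stays near chance (about 1/50 = 2%, the baseline the study
# cites); the real neural signals carried enough structure to reach 40%.
predicted_word = model(features[:1]).argmax(dim=-1)
print(predicted_word.item())
```

The real system decodes from continuous neural recordings rather than tidy fixed-size vectors, which is part of what makes the problem hard.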
But the scientists improved performance by adding a language model, a program that judges which word sequences are most likely in English. This increased the accuracy to 75%. With this cyborg approach, the system could predict that Bravo-1’s phrase “I deserve my nurse” actually meant “I love my nurse.”
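To make that combination concrete, here is a minimal sketch of the rescoring idea, with invented classifier probabilities and a toy bigram table standing in for a real English language model; the study’s actual decoder is more sophisticated.

```python
import math

# Hypothetical per-word probabilities from the neural classifier for the
# next position in a sentence (values invented for illustration).
classifier_probs = {"hungry": 0.40, "you": 0.35, "nurse": 0.25}

# Toy bigram language model: P(next word | previous word). A real system
# would use a model trained on large amounts of English text.
bigram_lm = {
    ("are", "you"): 0.30,
    ("are", "hungry"): 0.05,
    ("are", "nurse"): 0.001,
}

def rescore(prev_word, candidates):
    """Pick the word maximizing log P(neural) + log P(language model)."""
    best_word, best_score = None, -math.inf
    for word, p_neural in candidates.items():
        p_lm = bigram_lm.get((prev_word, word), 1e-6)  # smoothing floor
        score = math.log(p_neural) + math.log(p_lm)
        if score > best_score:
            best_word, best_score = word, score
    return best_word

# The classifier alone would output "hungry"; combined with the language
# prior, "you" wins after "are" -- the kind of correction that lifted
# accuracy from 40% to 75% in the study.
print(rescore("are", classifier_probs))  # -> "you"
```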
As remarkable as the result is, there are more than 170,000 words in English, so performance would fall off outside Bravo-1’s restricted vocabulary. That means the technique, while potentially useful as a medical aid, is nowhere near what Facebook had in mind. “We see applications in clinical assistive technology in the foreseeable future, but that’s not where our business is,” says Chevillet. “We’re focused on consumer applications, and there’s a very long way to go for that.”
Optical failure
Facebook’s decision to stop reading brains isn’t a shock to researchers who study these techniques. “I can’t say I was surprised, because they had hinted that they were looking at a short time frame and were going to reevaluate things,” says Marc Slutzky, a professor at Northwestern University. “Just speaking from experience, the goal of decoding speech is a big challenge. We’re still a long way from a practical, all-encompassing solution.”
Still, Slutzky says the UCSF project is an “impressive next step” that demonstrates both the remarkable possibilities and some of the limits of brain-reading science. “It remains to be seen whether you can decode free-form speech,” he says. “A patient saying ‘I want a glass of water’ and a patient saying ‘I want my medicine’ are two different things.” If AI models could be trained for longer periods of time and on more than one person’s brain, he says, they could improve rapidly.
While the UCSF research was under way, Facebook was also paying other centers, such as the Applied Physics Lab at Johns Hopkins, to figure out how to pump light through the skull to read neurons noninvasively. Like MRI, these techniques rely on sensing reflected light to measure the amount of blood flow to brain regions.
It’s these optical techniques that remain the bigger hurdle. Despite recent advances, including some made by Facebook, they cannot pick up neural signals at sufficient resolution. Another problem, says Chevillet, is that the blood flow these methods detect occurs five seconds after a group of neurons fires, making it too slow to control a computer.