Neural signals translated into speech

Columbia University’s Nima Mesgarani is developing a computer-generated speech method for those who are unable to talk.

How brain signals translate to speech sounds varies from person to person, so computer models must be trained individually. Training works best with data collected during open-skull surgeries, such as operations to remove brain tumors, or when electrodes are implanted to pinpoint the origin of seizures before surgery.

Data is fed into neural networks, which process patterns by passing information through layers of computational nodes. The networks learn by adjusting connections between nodes. In the study, networks were exposed to recordings of speech that a person produced or heard and data on simultaneous brain activity.
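As a toy illustration of that learning rule (hypothetical code, not the study's architecture), the sketch below "trains" a one-layer network by nudging its connection weights whenever its prediction of an audio feature from simulated neural activity is wrong:

```python
import random

random.seed(0)

# Hypothetical, minimal illustration (not the study's architecture): a
# one-layer "network" learns by adjusting its connection weights so that
# its prediction of an audio feature from simulated neural activity improves.
N = 8                                                  # simulated electrode channels
TRUE_W = [0.5, -0.2, 0.1, 0.4, -0.3, 0.2, 0.0, 0.3]   # unknown brain-to-audio map

def sample():
    """One training pair: neural activity and the audio feature it encodes."""
    x = [random.gauss(0, 1) for _ in range(N)]
    y = sum(wi * xi for wi, xi in zip(TRUE_W, x))
    return x, y

def train(steps=2000, lr=0.05):
    w = [0.0] * N
    for _ in range(steps):
        x, y = sample()
        err = sum(wi * xi for wi, xi in zip(w, x)) - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]  # adjust connections
    return w

w = train()
mse = sum((wi - ti) ** 2 for wi, ti in zip(w, TRUE_W)) / N
print(f"mean squared weight error after training: {mse:.6f}")
```

A real decoder uses deep networks over spectrogram features, but the principle of error-driven weight adjustment is the same.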

Mesgarani’s team used data from five epilepsy patients. The network analyzed recordings from the auditory cortex as participants heard recordings of stories and people naming digits from zero to nine. The computer then reconstructed spoken numbers from neural data alone.

Click to view Science magazine’s sound file of the computer reconstruction of brain activity.


Join ApplySci at the 10th Wearable Tech + Digital Health + Neurotech Silicon Valley conference on February 21-22 at Stanford University — Featuring: Zhenan Bao – Christof Koch – Vinod Khosla – Walter Greenleaf – Nathan Intrator – John Mattison – David Eagleman – Unity Stoakes – Shahin Farshchi – Emmanuel Mignot – Michael Snyder – Joe Wang – Josh Duyan – Aviad Hai – Anne Andrews – Tan Le – Anima Anandkumar – Pierrick Arnal – Shea Balish – Kareem Ayyad – Mehran Talebinejad – Liam Kaufman – Scott Barclay

Alzheimer’s detected by AI 6 years before diagnosis

In a recent study, Jae Ho Sohn and UCSF colleagues used AI to analyze FDG-PET scans, which measure glucose metabolism in the brain, to detect early-stage Alzheimer’s disease roughly six years before clinical diagnosis.

The algorithm was trained on PET scans from patients who were eventually diagnosed with Alzheimer’s disease, mild cognitive impairment (MCI), or no disorder. It identified 92% of patients who developed Alzheimer’s disease in the first test set and 98% in the second, an average of 75.8 months before diagnosis.



Fingertip wearable measures disease-associated grip strength

IBM researchers are studying grip strength, which is associated with the effectiveness of Parkinson’s drugs, cognitive function in schizophrenia, cardiovascular health, and mortality in the elderly.

To better understand these markers, Steve Heisig, Gaddi Blumrosen and colleagues have developed a prototype wearable that continuously measures how a fingernail bends and moves.

The project began as an attempt to capture the medication state of Parkinson’s patients, but soon expanded to measure tactile sensing of pressure, temperature, surface textures, and other indicators of various diseases. Nail bending was measured throughout the day, and AI was used to analyze the data for disease associations.

The system consists of strain gauges attached to the fingernail and a small computer that samples strain values, collects accelerometer data and communicates with a smart watch. The watch runs machine learning models to rate bradykinesia, tremor, and dyskinesia.
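As a hedged sketch of how such a rating might be derived (IBM's models and features are not public; the sample rate and frequency band here are assumptions), tremor can be flagged by the fraction of accelerometer signal power in the 4-6 Hz band typical of parkinsonian tremor:

```python
import math

# Hedged sketch (IBM's models and features are not public; the sample rate
# and frequency band are assumptions): score tremor by the fraction of
# accelerometer signal power in the 4-6 Hz band typical of parkinsonian tremor.
FS = 50  # assumed sample rate, Hz

def band_power(signal, lo, hi):
    """Power in [lo, hi] Hz via a naive discrete Fourier transform."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        if lo <= k * FS / n <= hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
            im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
            power += (re * re + im * im) / n
    return power

def tremor_score(signal):
    """Fraction of total signal power that falls in the tremor band."""
    total = band_power(signal, 0.5, FS / 2) + 1e-12
    return band_power(signal, 4.0, 6.0) / total

t = [i / FS for i in range(200)]
tremor_like = [math.sin(2 * math.pi * 5 * x) for x in t]  # 5 Hz oscillation
voluntary = [math.sin(2 * math.pi * 1 * x) for x in t]    # slow movement
```

A 5 Hz oscillation scores near 1, while slow voluntary movement scores near 0; a deployed model would combine such features with strain-gauge data rather than threshold one band.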

The work is also informing the development of a device, modeled on the structure of the fingertip, that could help quadriplegics communicate.

Click to view IBM video



Wearable haptic feedback/stimulation band to address Parkinson’s symptoms

Microsoft has filed a patent application for a wearable band that delivers haptic stimulation when wrapped around limbs or joints. It is meant to alleviate Parkinson’s symptoms, including tremors and muscle stiffness.

Haptic actuators are distributed across the band and driven at a “duty cycle” that responds to data from wearable sensors, including accelerometers, gyroscopes, heart-rate sensors, and electromyography sensors, as well as from tablets or phones.
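A minimal sketch of the duty-cycle idea, assuming a simple proportional control law (the patent does not specify one):

```python
# Hypothetical sketch (the patent does not specify a control law): drive each
# actuator at a duty cycle proportional to the tremor amplitude reported by
# the wearable's motion sensors, clamped to a safe range.
MIN_DUTY, MAX_DUTY = 0.0, 0.8  # fraction of each cycle the actuator is on

def duty_cycle(tremor_amplitude, gain=2.0):
    """Map sensed tremor amplitude (arbitrary units) to an actuator duty cycle."""
    return max(MIN_DUTY, min(MAX_DUTY, gain * tremor_amplitude))

# Stronger tremor -> longer vibration bursts, capped at MAX_DUTY.
for amp in (0.0, 0.1, 0.3, 0.6):
    print(amp, duty_cycle(amp))
```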

One example describes stylus sensors communicating with a wrist-worn device to detect involuntary motion while writing; the actuators would then be driven to counteract that motion. The wearable itself could also detect the motion of the actuators.

The patent describes stimulation “provided through the vibration of two or more actuators within the wearable device. In various examples, the wearable device may additionally comprise a second channel for the provision of therapeutic stimulation, such as an audio channel (e.g. the wearable device may additionally comprise a speaker or buzzer).”

The sensors could be integrated into a patch on a shoulder or other joint, or into clothing.



3D-printed, Bluetooth-controlled ingestible capsule delivers drugs, senses environment

MIT’s Bob Langer and Giovanni Traverso have developed a 3D-printed, wirelessly controlled, ingestible capsule that can deliver drugs, sense environmental conditions, or both. It can reside in the stomach for a month. Data is sent to a user’s phone, and instructions from the phone are sent to the device. The sensor could also communicate with other wearable and implantable devices, and send the combined data to a doctor.

The technology could improve drug delivery in conditions where drugs must be taken over a long period. It can also sense infections, allergic reactions, or other events, and then release a drug accordingly.



DARPA-developed closed loop therapies for neuropsychiatric illness

Led by Justin Sanchez, DARPA’s SUBNETS program develops responsive, adaptable, personalized closed-loop therapies for neuropsychiatric illness that incorporate recording and analysis of brain activity with near-real-time neural stimulation to correct or mitigate brain dysfunction.

The technology detects ongoing dynamic changes in brain activity associated with fluctuations in mood, and uses the data to deliver precisely timed therapeutic stimulation.

The premise is that brain function and dysfunction — rather than being relegated to distinct anatomical regions of the brain — play out across distributed neural systems. By understanding what healthy brain activity looks like across these sub-networks, compared to unhealthy brain activity, and identifying predictive biomarkers that indicate changing state, DARPA is developing interventions that maintain a healthy brain state within a normal range of emotions. 

Three recent papers show that decoding technology can predict changes in mood from recorded neural signals; a brain sub-network appears to contribute to depression, especially in those with anxiety; and moderate to severe depression symptoms can be alleviated using open-loop neural stimulation delivered to the orbitofrontal cortex to modulate a sub-network that contributes to depression. 

This work is inspired by Sanchez’s commitment to finding better treatments for the millions of veterans who suffer from neuropsychiatric illness; treatment advances have been limited by the lack of a mechanistic understanding of how these illnesses manifest in the brain.

Together, these findings supply key discoveries and technologies toward the SUBNETS goal: a closed-loop system that detects mood-related changes in brain activity and uses them to deliver precisely timed stimulation that improves brain function in people living with neuropsychiatric illness.



EEG identifies cognitive motor dissociation

Nicholas Schiff and Weill Cornell colleagues have developed an EEG-based method for measuring the delay in brain processing of continuous natural speech in patients with severe brain injury. The results correlated with fMRI-derived evidence, which is commonly used to identify the capacity to perform cognitively demanding tasks. EEG can be used for long periods, and is cheaper and more accessible than fMRI.
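The delay measurement can be illustrated with a toy cross-correlation (the study's actual pipeline is more involved): find the lag at which a simulated neural trace best matches the speech envelope that drove it.

```python
import random

# Toy illustration (the study's pipeline is more involved): estimate the
# brain's processing delay as the lag at which a simulated neural trace
# best correlates with the speech envelope that drove it.
def best_lag(envelope, trace, max_lag):
    def corr_at(lag):
        return sum(envelope[i] * trace[i + lag] for i in range(len(envelope) - lag))
    return max(range(max_lag + 1), key=corr_at)

random.seed(1)
envelope = [random.gauss(0, 1) for _ in range(500)]  # simulated speech envelope
TRUE_DELAY = 12                                      # delay in samples
trace = [0.0] * TRUE_DELAY + envelope[:-TRUE_DELAY]  # delayed neural response

print("estimated delay:", best_lag(envelope, trace, 30))
```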

This type of monitoring can identify patients with severe brain injury who have preserved high-level cognition despite showing limited or no consciousness.

According to Schiff: “This approach may be a more effective and efficient method for initially identifying patients with severe brain injuries who are very aware but are otherwise unable to respond, a condition called cognitive motor dissociation.”



Thought-controlled tablets

The BrainGate/Brown/Stanford/MGH/VA consortium has published a study describing three tetraplegic patients who were able to control an off-the-shelf tablet with their thoughts. They surfed the web, checked the weather, and shopped online. A musician played part of Beethoven’s “Ode to Joy” on a digital piano interface.

The BrainGate BCI included a small implant that detected and recorded signals associated with intended movements produced in the motor cortex. Neural signals were routed to a Bluetooth interface that worked like a wireless mouse, which was paired to an unmodified tablet.
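A hypothetical sketch of such a point-and-click loop (BrainGate's decoder is far more sophisticated): integrate decoded 2-D velocities into a cursor position, and register a click when the cursor dwells nearly still.

```python
# Hypothetical sketch of a point-and-click loop (BrainGate's decoder is far
# more sophisticated): integrate decoded 2-D velocities into a cursor
# position, and register a click when the cursor dwells nearly still.
DWELL_TICKS = 5  # consecutive low-motion ticks that count as a click

def run(decoded_velocities, dwell_speed=0.01):
    x = y = 0.0
    still = 0
    clicks = []
    for vx, vy in decoded_velocities:
        x += vx
        y += vy
        if abs(vx) < dwell_speed and abs(vy) < dwell_speed:
            still += 1
            if still == DWELL_TICKS:
                clicks.append((round(x, 2), round(y, 2)))
        else:
            still = 0
    return clicks

# Move toward a target, then hold still to click on it.
vels = [(0.2, 0.1)] * 10 + [(0.0, 0.0)] * 6
print(run(vels))
```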

Participants made up to 22 point-and-click selections per minute while using several apps, and typed up to 30 characters per minute with standard email and text interfaces.

The researchers believe that the technology can open new lines of communication between brain disorder patients and their caregivers.

Click to view BrainGate video



Thought-controlled television

Samsung and EPFL researchers, including Ricardo Chavarriaga, are developing Project Pontis, a BCI system meant to allow the disabled to control a TV with their thoughts.

The prototype uses a 64-sensor headset plus eye tracking to determine when a user has selected a particular movie. Machine learning builds a profile of the videos a user is interested in, enabling future content suggestions. The user ultimately makes a selection using eye tracking. The team is now working on a system that relies on brain signals alone, for users who cannot control their eyes or other muscles reliably.

Click to view Samsung video



Sensor patch monitors blood oxygen levels anywhere in the body

Ana Claudia Arias and Berkeley colleagues have developed a flexible, adhesive sensor that maps blood-oxygen levels over large areas of skin, tissue, and organs, making it possible to monitor wound healing in real time, or oxygen levels in transplanted organs. It can also continuously monitor blood oxygen in diabetes, respiratory diseases, and sleep apnea.

The device is made of an array of alternating red and near-infrared organic LEDs and organic photodiodes, printed on bendable plastic that molds to the body. Unlike fingertip oximeters, which measure oxygen levels at a single point, it detects blood-oxygen levels at nine points in a grid and can be placed anywhere on the skin.
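Optical oximeters of this kind generally estimate saturation from the ratio of pulsatile (AC) to steady (DC) absorbance at the red versus infrared wavelengths. A generic ratio-of-ratios sketch for the nine-point grid follows; the calibration constants are textbook approximations, not the Berkeley device's calibration:

```python
# Generic ratio-of-ratios pulse-oximetry estimate for one grid point. The
# linear calibration (110 - 25*R) is a textbook approximation, not the
# Berkeley device's calibration; the AC/DC inputs are illustrative.
def spo2(red_ac, red_dc, ir_ac, ir_dc):
    """Estimate oxygen saturation (%) from red and near-infrared signals."""
    r = (red_ac / red_dc) / (ir_ac / ir_dc)  # ratio of pulsatile fractions
    return 110.0 - 25.0 * r

# Nine-point grid: each red/IR pixel pair yields its own local estimate.
grid = [[spo2(0.02, 1.0, 0.05, 2.0) for _ in range(3)] for _ in range(3)]
print(grid[0])
```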

