Robot helps autistic kids engage

Georgia Tech professor Ayanna Howard is using interactive robots to help autistic kids engage with others, socially and emotionally. Her company, Zyrobotics, is commercializing this technology.

In a study, 18 kids between the ages of 4 and 12, five of whom had autism, interacted with two robots that expressed 20 emotional states, including boredom, excitement, and nervousness. As the children heard, saw, smelled, tasted, and touched in different scenarios, the robots showed them appropriate responses. The result was increased engagement when the robots interacted with the participants at the sensory stations.


Join ApplySci at the 12th Wearable Tech + Digital Health + Neurotech Boston conference on November 14, 2019 at Harvard Medical School featuring talks by Brad Ringeisen, DARPA – Joe Wang, UCSD – Carlos Pena, FDA – George Church, Harvard – Diane Chan, MIT – Giovanni Traverso, Harvard | Brigham & Women's – Anupam Goel, UnitedHealthcare – Nathan Intrator, Tel Aviv University | Neurosteer – Arto Nurmikko, Brown – Constance Lehman, Harvard | MGH – Mikael Eliasson, Roche – David Rhew, Samsung

Join ApplySci at the 13th Wearable Tech + Neurotech + Digital Health Silicon Valley conference on February 11-12, 2020 at Stanford University featuring talks by Zhenan Bao, Stanford – Rudy Tanzi, Harvard – David Rhew, Samsung – Carla Pugh, Stanford – Nathan Intrator, Tel Aviv University | Neurosteer

Tiny fiber optic sensor monitors blood flow in real-time

John Arkwright and Flinders University colleagues have developed a tiny, low-cost, fiber-optic sensor to monitor blood flow through the aorta in real time. The goal is continuous monitoring during prolonged intensive care and surgical procedures. Current blood flow measurement, using ultrasound or thermodilution, is intermittent, typically taken only every 30 minutes.

The device is inserted through a small aperture in the skin, into the femoral artery, when heart function is compromised. Its size allows it to be used in the tiny blood vessels of infants. Very young babies are particularly susceptible to sudden drops in blood pressure and oxygen delivery to vital organs.



New electrodes and brain signal analysis for smaller, lower-power, wireless BCI

Building on his prior brain-controlled prosthetic work, Stanford’s Krishna Shenoy has developed a simpler way to study brain electrical activity, which he believes will lead to tiny, low-power, wireless brain sensors that would bring thought-controlled prosthetics into much wider use.

The method involves decoding neural activity in aggregate instead of “spike sorting.” Spike sorting must be done for every neuron in every experiment, consuming thousands of research hours. Future brain sensors, with 1,000 or more electrodes (up from roughly 100 today), would take a neuroscientist 100 hours or more of hand spike sorting for every experiment.

In the study, the researchers used statistical theory to uncover patterns of brain activity when several neurons are recorded on a single electrode. An electrode originally designed to pick up brain signals in mice was used to record the brain signals of rhesus monkeys. Hundreds of neurons were recorded at the same time, and the recordings accurately portrayed the monkeys' brain activity without spike sorting.
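The post does not spell out the statistical method, but the core idea of skipping spike sorting can be illustrated with a toy example: decode directly from per-electrode threshold-crossing counts, each a mixture of several neurons. The numpy sketch below simulates such counts and fits a simple linear decoder; the data, the mixing model, and the ridge decoder are assumptions for illustration, not the Stanford approach itself.

```python
import numpy as np

rng = np.random.default_rng(0)

n_electrodes, n_bins = 100, 2000
t = np.linspace(0, 40 * np.pi, n_bins)
velocity = np.column_stack([np.sin(t), np.cos(t)])   # synthetic 2-D movement signal

# Each electrode picks up a mixture of nearby neurons; we only count
# threshold crossings per time bin, with no spike sorting step.
mixing = rng.normal(scale=0.5, size=(2, n_electrodes))
rates = np.exp(1.0 + velocity @ mixing)              # Poisson firing rates per electrode
counts = rng.poisson(rates)                          # unsorted multiunit counts

# Linear (ridge) decoder straight from the aggregate counts to movement.
X = counts - counts.mean(axis=0)
Y = velocity - velocity.mean(axis=0)
W = np.linalg.solve(X.T @ X + 1e-2 * np.eye(n_electrodes), X.T @ Y)

decoded = X @ W
r = np.corrcoef(decoded[:, 0], Y[:, 0])[0, 1]
print(f"decoded-vs-true correlation (x component): {r:.2f}")
```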

The team believes that this work will ultimately lead to neural implants with simpler electronics that track more neurons, more accurately than before.



Blood-brain barrier recreated inside organ chip with pluripotent stem cells

Clive Svendsen, Gad Vatine, and colleagues at Cedars-Sinai and Ben-Gurion University of the Negev have recreated the blood-brain barrier outside of the body, using induced pluripotent stem cells, for the first time. In a study, the recreated barrier functioned as it would in the individual who provided the cells to make it. This could facilitate a new understanding of brain disease and help predict which drugs will work best for an individual.

The stem cells were used to create the neurons, blood-vessel linings, and support cells that comprise the blood-brain barrier. These were placed inside organ chips, which recreate the body's microenvironment, with the natural physiology and mechanical forces that cells experience.

The living cells formed a functioning unit of a blood-brain barrier that acts as it does in the body, including blocking the entry of certain drugs. Significantly, when this blood-brain barrier was derived from the cells of patients with Huntington's disease or Allan-Herndon-Dudley syndrome, a rare congenital neurological disorder, the barrier malfunctioned in the same way that it does in patients with these diseases.

This is the first time that induced pluripotent stem cells have been used to generate a functioning blood-brain barrier, inside an organ chip, that displayed a characteristic defect of the individual patient's disease.



App optimizes meditation length to improve attention and memory

Adam Gazzaley and UCSF colleagues have developed a focus-driven digital meditation program that improved attention and memory in healthy adults in a recent study.

MediTrain tailors meditation session length to participant abilities and challenges users to increase session time. Subjects showed significant benefits within six weeks. On their first day, they focused on their breath for an average of 20 seconds; after 30 days, they were able to focus for an average of six minutes.

According to Gazzaley: “We took an ancient experiential treatment of focused meditation, reformulated it and delivered it through a digital technology, and improved attention span in millennials, an age group that is intimately familiar with the digital world, but also faces multiple challenges to sustained attention.”

At the end of each segment, participants were asked whether they paid continuous attention for the allotted time. The app assigned slightly longer meditation periods to those who said yes, and shorter ones to those who said no. The team believes that this user participation contributed to the app's usefulness.
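A minimal sketch of that adaptive loop, assuming a simple multiplicative staircase; the step sizes, bounds, and the next_segment_seconds helper are hypothetical, not MediTrain's actual parameters:

```python
# Hypothetical adaptive "staircase" like the one described above: segments grow
# when the user reports sustained focus and shrink when they do not.

def next_segment_seconds(current, stayed_focused,
                         step_up=1.1, step_down=0.8,
                         min_s=10, max_s=360):
    """Return the duration of the next meditation segment (assumed rule)."""
    nxt = current * step_up if stayed_focused else current * step_down
    return max(min_s, min(max_s, nxt))

# Example: a user who mostly keeps focus climbs from ~20 s toward minutes.
duration = 20.0
for stayed in [True, True, False, True, True, True]:
    duration = next_segment_seconds(duration, stayed)
    print(f"next segment: {duration:.0f} s")
```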



Sensor glove identifies objects

In a Nature paper, MIT researchers describe a tactile-sensing glove, STAG, that accurately identified objects, including a soda can, scissors, a tennis ball, a spoon, a pen, and a mug, 76 percent of the time.

The tactile sensing data could be used in combination with traditional computer vision and image-based datasets to give robots a more human-like understanding of interacting with objects. The dataset also captured cooperation between regions of the hand during interactions, which could be used to customize prosthetics.

Similar sensor-based gloves typically cost thousands of dollars and contain around 50 sensors. The STAG glove costs approximately $10 to produce.
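As a rough illustration of identifying objects from tactile data, the sketch below classifies synthetic pressure "frames" with a nearest-centroid rule. The 16x16 grid, the fake_grasp generator, and the classifier are all assumptions; the actual MIT system trains a neural network on recorded STAG pressure maps.

```python
import numpy as np

rng = np.random.default_rng(1)
GRID = 16                      # pretend the glove yields a 16x16 pressure map (assumed)
OBJECTS = ["soda can", "scissors", "tennis ball", "mug"]

def fake_grasp(obj_idx):
    """Simulate a noisy pressure frame with an object-specific contact pattern."""
    base = np.zeros((GRID, GRID))
    base[obj_idx::4, :] = 1.0          # crude, object-dependent contact stripes
    return (base + 0.3 * rng.random((GRID, GRID))).ravel()

# "Train": average a few grasps per object into a centroid template.
templates = np.stack([
    np.mean([fake_grasp(i) for _ in range(20)], axis=0) for i in range(len(OBJECTS))
])

# "Test": nearest-centroid prediction for a new grasp of a tennis ball.
frame = fake_grasp(OBJECTS.index("tennis ball"))
pred = int(np.argmin(np.linalg.norm(templates - frame, axis=1)))
print("predicted object:", OBJECTS[pred])
```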




Study: Noninvasive BCI improves function in paraplegia

Miguel Nicolelis has developed a non-invasive system for lower-limb neurorehabilitation.

Study subjects wore an EEG headset to record brain activity and detect movement intention. Eight electrodes were attached to each leg, stimulating muscles involved in walking. After training, patients used their own brain activity to send electric impulses to their leg muscles, imposing a physiological gait. With a walker and a support harness, they learned to walk again and increased their sensorimotor skills. A wearable haptic display delivered tactile feedback to the forearms, providing continuous proprioceptive walking feedback.
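A highly simplified sketch of that closed loop, with placeholder stubs standing in for the EEG headset, the intention classifier, the stimulation electrodes, and the haptic display; none of these functions correspond to the actual hardware or algorithms used in the study:

```python
import random

def read_eeg_window():
    """Stub: return one window of EEG features from the headset."""
    return [random.gauss(0, 1) for _ in range(8)]

def detect_intention(features, threshold=1.0):
    """Stub classifier: 'step intention' if mean feature magnitude exceeds a threshold."""
    return sum(abs(f) for f in features) / len(features) > threshold

def stimulate_leg(side):
    print(f"stimulating {side} leg electrodes")

def haptic_feedback(side):
    print(f"haptic cue on {side} forearm")

side = "left"
for _ in range(5):                                     # a few iterations of the gait loop
    if detect_intention(read_eeg_window()):
        stimulate_leg(side)
        haptic_feedback(side)
        side = "right" if side == "left" else "left"   # alternate legs
```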

The system was tested on two patients with chronic paraplegia. Both were able to move with less dependency on walking assistance, and one displayed motor improvement. Cardiovascular capacity and muscle volume also improved.




Deep learning mammography model detects breast cancer up to five years in advance

MIT CSAIL professor Regina Barzilay and Harvard/MGH professor Constance Lehman have developed a deep learning model that can predict breast cancer, from a mammogram, up to five years in the future. The model learned the subtle breast tissue patterns that lead to malignant tumors from the mammograms and known outcomes of 90,000 MGH patients.

The goal is to individualize screening and prevention programs.

Barzilay said that “rather than taking a one-size-fits-all approach, we can personalize screening around a woman’s risk of developing cancer.  For example, a doctor might recommend that one group of women get a mammogram every other year, while another higher-risk group might get supplemental MRI screening.”

The algorithm accurately placed 31 percent of all cancer patients in its highest-risk category, compared to 18 percent for traditional models.
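A hedged sketch of how such a figure can be computed, treating the "highest-risk category" as the top decile of predicted risk (an assumption; the paper defines its own categories) and using entirely synthetic scores and outcomes:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
risk_score = rng.random(n)                                      # model's predicted risk (synthetic)
developed_cancer = rng.random(n) < (0.01 + 0.08 * risk_score)   # synthetic outcomes

cutoff = np.quantile(risk_score, 0.9)                           # top 10% = "highest-risk" (assumed)
in_top = risk_score >= cutoff

# Share of women who developed cancer that the model had placed in the top group.
share = developed_cancer[in_top].sum() / developed_cancer.sum()
print(f"eventual cancers placed in the highest-risk group: {100 * share:.1f}%")
```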

Lehman hopes to change screening strategies from age-based to risk-based. “This is because before we did not have accurate risk assessment tools that worked for individual women.”

Current risk assessments, based on age, family history of breast and ovarian cancer, hormonal and reproductive factors, and breast density, are only weakly correlated with breast cancer. This has led many organizations to believe that risk-based screening is not possible.

Rather than manually identifying the patterns in a mammogram that drive future cancer, the algorithm deduced patterns directly from the data, detecting abnormalities too subtle for the human eye to see.

Lehman said that “since the 1960s radiologists have noticed that women have unique and widely variable patterns of breast tissue visible on the mammogram. These patterns can represent the influence of genetics, hormones, pregnancy, lactation, diet, weight loss, and weight gain. We can now leverage this detailed information to be more precise in our risk assessment at the individual level.”

The MIT/MGH model is equally accurate for white and black women, unlike prior models. Black women have been shown to be 42 percent more likely to die from breast cancer due to a wide range of factors that may include differences in detection and access to health care.

Barzilay believes the system could, in the future, determine from mammograms whether patients are at greater risk for cardiovascular disease or other cancers.


Professor Constance Lehman will discuss this technology at ApplySci’s 12th Wearable Tech + Digital Health + Neurotech Boston conference on November 14, 2019 at Harvard Medical School 

Atrial fibrillation-detecting ring

Eue-Keun Choi and Seoul National University colleagues have developed an atrial fibrillation-detecting ring, with similar functionality to AliveCor and other watch-based monitors. The researchers claim that its performance is comparable to medical-grade pulse oximeters.

In a study, Soonil Kwon and colleagues analyzed data from 119 patients with AF who underwent simultaneous ECG and photoplethysmography before and after direct-current cardioversion. A total of 27,569 photoplethysmography samples were analyzed by an algorithm built on a convolutional neural network. Rhythms were then interpreted with the wearable ring.

The convolutional neural network's diagnostic accuracy was 99.3% for AF and 95.9% for sinus rhythm. The wearable device's accuracy was 98.3% for sinus rhythm and 100% for AF after filtering out low-quality samples.
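The post does not describe the network's architecture. For illustration only, here is a small 1-D convolutional classifier of the general kind used for fixed-length PPG segments, written in PyTorch; the layer sizes, segment length, and sampling rate are hypothetical assumptions, not the Seoul National University model.

```python
import torch
import torch.nn as nn

class PPGClassifier(nn.Module):
    def __init__(self, segment_len=750):            # e.g. 30 s at 25 Hz (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (segment_len // 16), 2),  # two classes: AF, sinus rhythm
        )

    def forward(self, x):                            # x: (batch, 1, segment_len)
        return self.head(self.features(x))

model = PPGClassifier()
fake_ppg = torch.randn(8, 1, 750)                    # a batch of synthetic PPG segments
print(model(fake_ppg).shape)                         # -> torch.Size([8, 2])
```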

According to Choi: “Deep learning or [artificial intelligence] can overcome formerly important problems of [photoplethysmography]-based arrhythmia diagnosis. It not only improves diagnostic accuracy in great degrees, but also suggests a metric how this diagnosis will be likely true without ECG validation. Combined with wearable technology, this will considerably boost the effectiveness of AF detection.”



AI detects depression in children’s voices

University of Vermont researchers have developed an algorithm that detects anxiety and depression in children’s voices with 80 per cent accuracy, according to a recent study.

Standard diagnosis involves a 60-90 minute semi-structured interview of the child and their primary caregiver with a trained clinician. AI could make diagnosis faster and more reliable.

The researchers used an adapted version of the Trier Social Stress Task, which is intended to cause feelings of stress and anxiety in the subject. Seventy-one children between the ages of three and eight were asked to improvise a three-minute story, and were told that they would be judged based on how interesting it was. The researcher remained stern throughout the speech, and gave only neutral or negative feedback, to create stress. After 90 seconds, and again with 30 seconds left, a buzzer would sound and the judge would tell them how much time was left.

The children were also diagnosed using a structured clinical interview and parent questionnaire.

The algorithm analyzed statistical features of the audio recordings of each child's story and related them to the diagnosis. The algorithm diagnosed children with 80 per cent accuracy. The middle phase of the recordings, between the two buzzers, was the most predictive of a diagnosis.

Eight audio features were identified. Three stood out as highly indicative of internalizing disorders: low-pitched voices, repeatable speech inflections and content, and a higher-pitched response to the surprise buzzer.
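For illustration, the final classification step might look like the sketch below: a few per-recording features fed to a standard classifier with cross-validation. The feature values are synthetic and only loosely mirror the three features named above; the study's actual features and model are not specified in this post.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 71                                         # matches the study's cohort size
labels = rng.integers(0, 2, size=n)            # 1 = internalizing disorder (synthetic)

# Three assumed per-recording features, weakly separated by label for illustration.
mean_pitch_hz   = 230 - 15 * labels + rng.normal(0, 10, n)    # lower pitch if label = 1
inflection_rep  = 0.4 + 0.2 * labels + rng.normal(0, 0.1, n)  # more repetitive inflection
buzzer_pitch_hz = 250 + 25 * labels + rng.normal(0, 15, n)    # higher-pitched buzzer response
X = np.column_stack([mean_pitch_hz, inflection_rep, buzzer_pitch_hz])

clf = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(clf, X, labels, cv=5)
print(f"cross-validated accuracy on synthetic data: {scores.mean():.2f}")
```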

