MRI, algorithm predict autism before behavioral symptoms appear

UNC’s Heather Hazlett has published a study showing that an overgrowth in brain volume, detected by MRI scans during the first year of life, forecasts whether a child at high risk will go on to develop autism.

The goal is to give parents the opportunity to intervene long before behavioral symptoms become obvious, which usually occurs between ages 2 and 4.

The study was small, and further research is needed before the approach can be developed into a diagnostic tool.

Two groups were studied: 106 high-risk infants, each with an older sibling with autism, and 42 low-risk infants with no family history. MRI measurements of overall brain volume, and of the surface area and thickness of the cerebral cortex in certain regions, were taken at set times between 6 and 24 months. Infants later diagnosed with autism showed an overgrowth of cortical surface area compared with the typically developing infants.

The researchers then developed an algorithm that predicted autism based on these brain measurements. It correctly identified approximately 80% of the 15 high-risk infants who would later meet the criteria for autism at 24 months. Using the algorithm, the team also accurately predicted which babies would not develop autism by age 2.
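
As a rough illustration of this kind of prediction step, the sketch below trains a simple classifier on MRI-derived features. It is a minimal sketch only: the feature set, data, and model are assumptions for illustration, not the published pipeline.

```python
# Minimal sketch of predicting a later autism diagnosis from early-life
# MRI features. Features, data, and model are illustrative placeholders,
# not the study's actual method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical features per infant: surface-area growth 6-12 months,
# brain volume at 12 and 24 months, mean cortical thickness.
X = rng.normal(size=(148, 4))        # 106 high-risk + 42 low-risk infants
y = rng.integers(0, 2, size=148)     # 1 = met autism criteria at 24 months

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f}")
```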


Join ApplySci at Wearable Tech + Digital Health + NeuroTech Boston – Featuring Roz Picard, Tom Insel, John Rogers and Nathan Intrator – September 19, 2017 at the MIT Media Lab

Dopamine sensor tracks single neurons

MIT’s Michael Strano has developed a carbon-nanotube-based detector that can track dopamine secretion by single cells. Using arrays of more than 20,000 sensors, the team monitored dopamine secretion from single neurons, allowing them to better understand dopamine dynamics.

Unlike most other neurotransmitters, dopamine can exert its effects beyond the synapse: not all dopamine released into a synapse is taken up by the target cell, so some of the chemical diffuses away and affects other nearby cells. Tracking this dopamine diffusion in the brain has proven difficult. Neuroscientists have tried using specialized electrodes, but only about 20 of the smallest electrodes can be placed near any given cell.

The carbon nanotube sensors used in this study are coated with a DNA sequence that makes the sensors interact with dopamine. When dopamine binds to the carbon nanotubes, they fluoresce more brightly, allowing the researchers to see exactly where the dopamine was released. The researchers deposited more than 20,000 of these nanotubes on a glass slide, creating an array that detects any dopamine secreted by a cell placed on the slide.
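
The readout step lends itself to simple image analysis. Below is a hedged sketch, with made-up array sizes and thresholds, of how brightening pixels in such an array could be grouped into putative release sites:

```python
# Sketch: locate dopamine release hotspots as clusters of sensors whose
# fluorescence rises well above baseline. Sizes, noise, and threshold
# are illustrative assumptions, not the study's analysis.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
baseline = rng.random((200, 200))          # fluorescence before release
frame = baseline + 0.05 * rng.random((200, 200))
frame[100:104, 50:54] += 1.0               # simulated release site

delta = frame - baseline                   # brightening = dopamine binding
hotspots = delta > delta.mean() + 3 * delta.std()

labels, n = ndimage.label(hotspots)        # group adjacent bright sensors
centers = ndimage.center_of_mass(hotspots, labels, list(range(1, n + 1)))
print(f"{n} putative release site(s) at:", centers)
```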

“We have falsified the notion that dopamine should only be released at these regions that will eventually become the synapses,” Strano says. “This observation is counterintuitive, and it’s a new piece of information you can only obtain with a nanosensor array like this one.”


Join ApplySci at Digital Health + NeuroTech Boston – September 19, 2017 at the MIT Media Lab

Toward a speech-driven auditory Brain Computer Interface

University of Oldenburg student Carlos Filipe da Silva Souto is in the early stages of developing a brain-computer interface that can tell a user whom he or she is listening to in a noisy room. Wearers could focus on specific conversations and tune out background noise.

Most BCI studies have focused on visual stimuli, which typically outperform auditory stimuli, possibly because of the larger cortical surface devoted to the visual system. As researchers further optimize classification methods for auditory systems, performance is expected to improve.
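
One common technique in this area (a sketch of the general idea, not necessarily this project’s method) is envelope-based auditory attention decoding: correlate a feature extracted from the listener’s EEG with the speech envelope of each candidate talker and pick the best match. All signals below are synthetic.

```python
# Envelope-correlation sketch for auditory attention decoding.
import numpy as np

def attended_talker(eeg_feature, envelopes):
    """Index of the speech envelope most correlated with the EEG feature."""
    corrs = [np.corrcoef(eeg_feature, env)[0, 1] for env in envelopes]
    return int(np.argmax(corrs))

rng = np.random.default_rng(2)
fs, seconds = 64, 30
env_a = np.abs(rng.normal(size=fs * seconds))      # talker A speech envelope
env_b = np.abs(rng.normal(size=fs * seconds))      # talker B speech envelope
eeg = env_a + 0.5 * rng.normal(size=fs * seconds)  # EEG tracking talker A

winner = attended_talker(eeg, [env_a, env_b])
print("listener attends talker:", "A" if winner == 0 else "B")
```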

The goal is for visually impaired or paralyzed people to be able to use natural speech features to control hearing devices.

ApplySci’s 6th Digital Health + NeuroTech Silicon Valley – February 7-8, 2017 @ Stanford | Featuring: Vinod Khosla – Tom Insel – Zhenan Bao – Phillip Alvelda – Nathan Intrator – John Rogers – Roozbeh Ghaffari – Tarun Wadhwa – Eythor Bender – Unity Stoakes – Mounir Zok – Sky Christopherson – Marcus Weldon – Krishna Shenoy – Karl Deisseroth – Shahin Farshchi – Casper de Clercq – Mary Lou Jepsen – Vivek Wadhwa – Dirk Schapeler – Miguel Nicolelis

AI assistant addresses specific needs of seniors

ElliQ is an AI assistant that intuitively interacts with seniors to support independent living.

The NLP-based system enables users to make video calls, play games, and use social media. Music, TED talks, audiobooks, and other content are recommended after machine learning tools analyze user preferences (or caregiver input is received). Physical activity is suggested after a long period of sitting is detected. Medication reminders can be scheduled.
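
Two of these behaviors reduce to simple rules. The sketch below is illustrative only; the thresholds, schedule format, and function names are assumptions, not ElliQ’s implementation.

```python
# Toy rules for two ElliQ-style behaviors: an activity nudge after
# prolonged sitting, and scheduled medication reminders.
from datetime import datetime, timedelta

SITTING_LIMIT = timedelta(hours=1)   # assumed threshold

def activity_nudge(last_movement, now):
    if now - last_movement > SITTING_LIMIT:
        return "You've been sitting for a while. How about a short walk?"
    return None

def medication_reminder(schedule, now):
    due = schedule.get(now.strftime("%H:%M"))
    return f"Time to take your {due}." if due else None

now = datetime(2017, 1, 15, 9, 0)
print(activity_nudge(now - timedelta(hours=2), now))
print(medication_reminder({"09:00": "blood pressure medication"}, now))
```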

The robot is meant to act as a companion, addressing loneliness, which is epidemic among the elderly. It could be further enhanced if memory triggers, anxiety-reducing content, and custom instructions about activities of daily living were incorporated.

3D-bioprinted human skin can replace animal testing, potentially be used to treat burns

José Luis Jorcano at Universidad Carlos III de Madrid has developed a 3D bioprinter capable of replicating the structure of skin. The human-like skin it produces includes an epidermal layer that protects against the environment, and a collagen-producing dermis that provides elasticity and strength.

The bioink contains human plasma, along with primary human fibroblasts and keratinocytes obtained from biopsies.

Currently, 100 cm² of printed skin can be produced in 35 minutes.

Sensor, data, and AI-driven primary care

Forward has brought advanced technology to well-care.

Patients/members are integrated into the practice with a baseline screening via body scans and blood and genetic tests. They are then given consumer and medical wearables, which work with proprietary algorithms, for continuous monitoring (and access to data), personalized treatment, and emergency alerts. Exam rooms display all of this data during doctor visits.
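
A minimal sketch of how such monitoring might drive an emergency alert, assuming (this is not Forward’s disclosed design) that each member’s baseline screening yields a personal normal range for each vital sign:

```python
# Flag a wearable reading that falls far outside the member's own
# baseline range, established at the initial screening. Thresholds
# and values are illustrative assumptions.
def alert_if_abnormal(reading, baseline_mean, baseline_std, n_sigma=4):
    if abs(reading - baseline_mean) > n_sigma * baseline_std:
        return f"ALERT: reading {reading} is outside the member's baseline"
    return None

hr_mean, hr_std = 62.0, 6.0          # resting heart rate from the screening
for hr in (64, 70, 61, 118):         # simulated wearable stream
    msg = alert_if_abnormal(hr, hr_mean, hr_std)
    if msg:
        print(msg)
```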

Ongoing primary care, including continuous health monitoring, body scans, gynecology, travel vaccinations, medication, nutrition guidance, blood tests, and skin care, is included in the fee-based system.

Forward investor Vinod Khosla will be interviewed by ApplySci’s Lisa Weiner Intrator on stage at Digital Health + NeuroTech at Stanford on February 7th at 4:15pm.

Machine learning tools predict heart failure

Declan O’Regan and MRC London Institute of Medical Sciences colleagues believe that AI can predict when pulmonary hypertension patients require more aggressive treatment to prevent death.

In a recent study, machine learning software automatically analyzed moving images of a patient’s heart, captured during an MRI. It then used image processing to build a “virtual 3D heart,” replicating how 30,000 points in the heart contract during each beat. The researchers fed the system data from hundreds of previous patients. By linking the data and models, the system learned which attributes of a heart, such as its shape and structure, put an individual at risk of heart failure.

The software was developed using data from 256 patients with pulmonary hypertension. It correctly predicted which patients would still be alive after one year 80% of the time; the figure for doctors is 60%.
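
The prediction step can be pictured as supervised learning over per-patient motion features. The sketch below is an assumption-laden stand-in (random placeholder data, a generic classifier), not the published pipeline:

```python
# Train a classifier to predict one-year survival from heart-motion
# features (e.g., contraction summarized over thousands of mesh points).
# Data and model here are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(256, 50))       # 256 patients x 50 motion features
y = rng.integers(0, 2, size=256)     # 1 = alive at one year (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
```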

The researchers want to test the technology in other forms of heart failure, including cardiomyopathy, to see when a pacemaker or other form of treatment is needed.

Consumer wearable + medical monitor track exercise’s impact on glucose

Consumer wearables can complement medical devices by integrating activity data into a disease management strategy.

Fitbit movement data will now be used with a Medtronic diabetes management tool, with the goal of helping users predict the impact of exercise on glucose levels.

Diabetics can monitor glucose continuously for 6 days with Medtronic’s iPro2 system. Fitbit data will be integrated into the iPro2 myLog app. Users will no longer need to log daily activity on paper, and the information can easily be shared with physicians and caregivers.
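
The integration itself amounts to aligning two time series. A hedged sketch of the idea (not Medtronic’s or Fitbit’s actual API; column names are invented):

```python
# Align step counts with CGM glucose readings by timestamp so exercise
# can be inspected next to the glucose response.
import pandas as pd

steps = pd.DataFrame({
    "time": pd.to_datetime(["2017-01-10 09:00", "2017-01-10 10:00"]),
    "steps": [3200, 450],
})
glucose = pd.DataFrame({
    "time": pd.to_datetime(["2017-01-10 09:05", "2017-01-10 10:05"]),
    "mg_dl": [140, 112],
})

# Match each glucose reading to the most recent step count within 30 min.
merged = pd.merge_asof(glucose, steps, on="time",
                       tolerance=pd.Timedelta("30min"))
print(merged)
```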

Multiple wearable sensors predict illness

Stanford’s Michael Snyder has published the results of a health-wearable study in which 2 billion measurements were taken from 60 subjects. He concludes that such devices can be used to predict illness.

Continuous biosensor data, plus blood chemistry, gene expression, and other tests, were included. Participants wore 1-7 commercial wearables, which collected more than 250,000 measurements per day. Weight, heart rate, blood oxygen level, skin temperature, sleep, walking, biking, running, calories expended, acceleration, and exposure to gamma rays and X-rays were analyzed.

To individualize the process, both baseline and illness values were established for each person. It was possible to monitor deviations from normal and to associate those deviations with environmental, illness, or other factors that affect health. Deviation patterns correlated with specific health problems. (The lead author was able to detect Lyme disease in himself during the study.) Algorithms that spot these patterns could be used for future diagnostics or research.
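
The core idea, deviation from a personal baseline, is easy to sketch. Values and thresholds below are invented for illustration; this is not the study’s algorithm:

```python
# Establish a personal baseline for a signal, then flag new values that
# deviate strongly from it.
import numpy as np

def flag_deviations(values, baseline, n_sigma=3):
    mu, sigma = np.mean(baseline), np.std(baseline)
    return [i for i, v in enumerate(values) if abs(v - mu) > n_sigma * sigma]

rng = np.random.default_rng(4)
baseline_skin_temp = rng.normal(33.0, 0.2, size=60)   # healthy period
new_days = [33.1, 32.9, 34.4, 33.0]                   # index 2 looks feverish
print("deviant days:", flag_deviations(new_days, baseline_skin_temp))
```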

Alexa solidifies NLP’s role in smart homes, cars. Is senior care next?

Amazon’s Alexa is the deserved star of CES. Lights, thermostats, air purifiers, cars, refrigerators, other appliances, and baby monitors are examples of interfaces solidifying a natural-voice-processing-driven future.

Amazon now has the opportunity to enhance the lives of those aging in place, though its development of senior-focused applications is lagging. Alexa has the ability to provide the social interaction, health monitoring, and memory triggers that many seniors need to live independently. If caregivers were able to create customized questions and answers, specific user needs could be better addressed.
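
As a thought experiment only (this is not Amazon’s Alexa Skills API), caregiver-authored question/answer pairs could be matched against a senior’s transcribed question:

```python
# Toy matcher for caregiver-defined Q&A, applied after speech-to-text.
def answer(question, caregiver_qa):
    q = question.lower().strip("?!. ")
    for pattern, reply in caregiver_qa.items():
        if pattern in q:
            return reply
    return "I'm not sure. Shall I call a family member?"

qa = {
    "where are my keys": "Your keys are in the bowl by the front door.",
    "when is my appointment": "Your doctor's appointment is Tuesday at 10 am.",
}
print(answer("Where are my keys?", qa))
```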
