Small, foam hearable captures heart data

In a small study, Danilo Mandic of Imperial College London has shown that his hearable can be used to capture heart data. The device detected the heart pulse by sensing the dilation and constriction of tiny blood vessels in the ear canal, using the mechanical part of its electro-mechanical sensor. The hearable is made of foam and molds to the shape of the ear. The goal is a comfortable and discreet continuous monitor that will enable physicians to receive extensive data. In addition to the device’s mechanical sensors, Mandic, a signal processing expert, claims that its electrical sensors detect brain activity that could be used to monitor sleep, epilepsy, and drug delivery, and for personal authentication and cyber security.
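
The signal-processing step here is essentially pulse detection on a slowly varying, pressure-like waveform from the ear canal. Below is a minimal, illustrative Python sketch of that idea: band-pass filter the in-ear signal around cardiac frequencies and count peaks to estimate heart rate. The sampling rate, filter band, and thresholds are assumptions for illustration, not parameters from the Imperial College device.

```python
# Illustrative sketch only: estimate heart rate from an in-ear mechanical
# (pressure-like) waveform by band-pass filtering and peak detection.
# The sampling rate, filter band, and thresholds are assumptions, not
# parameters from the Imperial College device.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def heart_rate_from_ear_signal(signal, fs=250.0):
    """Return an estimated heart rate (bpm) from a raw ear-canal signal."""
    # Keep only typical cardiac frequencies (0.7-3.5 Hz, roughly 42-210 bpm).
    b, a = butter(2, [0.7 / (fs / 2), 3.5 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    # Detect pulse peaks, enforcing a ~0.3 s refractory gap between beats.
    peaks, _ = find_peaks(filtered, distance=int(0.3 * fs),
                          prominence=np.std(filtered))
    if len(peaks) < 2:
        return None
    inter_beat_intervals = np.diff(peaks) / fs     # seconds per beat
    return 60.0 / inter_beat_intervals.mean()      # beats per minute

# Quick check with a synthetic 1.2 Hz (72 bpm) pulse plus noise:
t = np.arange(0, 30, 1 / 250.0)
demo = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size)
print(heart_rate_from_ear_signal(demo))            # ~72
```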

Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University. Speakers include:  Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld – Bob Knight – Phillip Alvelda – Paul Nuyujukian –  Peter Fischer – Tony Chahine – Shahin Farshchi – Ambar Bhattacharyya – Adam D’Augelli

Registration rates increase November 24th, 2017

 

Closed-loop control of drug delivery across the blood-brain barrier


Tao Sun, Nathan McDannold, Eric Miller, and their Brigham & Women’s and Tufts colleagues have developed a controller that offers a finer degree of control in penetrating the blood-brain barrier for drug delivery. The technology has so far been tested only in rats; if proven effective, it could improve the safety of the procedure in humans.

As in earlier work, the team uses focused ultrasound and microbubbles, but it can now listen to the echoes for instantaneous feedback on microbubble oscillation stability, providing fast, real-time control and analysis.

Microbubbles can help temporarily open the blood-brain barrier without incision or radiation, but can destabilize and collapse, damaging the brain’s critical vasculature.

The team used a rat model to develop a closed-loop controller. Sensors placed on the outside of the brain acted as microphones, enabling researchers to listen to ultrasound echoes bouncing off the microbubbles and determine their stability. The ultrasound input was then tuned instantly to stabilize the bubbles, excite them enough to open the barrier, and deliver a predefined drug dose, while maintaining safe ultrasound exposure.

The approach was tested in healthy rats as well as in an animal model of glioma brain cancer. Further research is needed to adapt the technique for humans. Clinical trials are now underway in Canada.
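
Conceptually, the loop is: measure the echo, score microbubble stability, and adjust the ultrasound pressure toward a target. The sketch below illustrates that closed-loop idea with a simple proportional update; the stability metric, setpoint, gain, and interfaces are hypothetical placeholders, not the published controller.

```python
# Illustrative sketch of the closed-loop idea: score microbubble stability from
# the echo spectrum and nudge the ultrasound pressure toward a target.
# The stability metric, setpoint, gain, and interfaces are hypothetical
# placeholders, not the published controller.
import numpy as np

def stability_score(echo, fs, fundamental_hz):
    """Crude stability proxy: broadband echo energy relative to energy at the
    transmit fundamental (a higher ratio suggests less stable oscillation)."""
    spectrum = np.abs(np.fft.rfft(echo)) ** 2
    freqs = np.fft.rfftfreq(len(echo), 1 / fs)
    at_fundamental = spectrum[np.argmin(np.abs(freqs - fundamental_hz))]
    return spectrum.sum() / (at_fundamental + 1e-12)

def control_step(pressure, echo, fs, fundamental_hz,
                 setpoint=5.0, gain=0.05, p_min=0.1, p_max=1.0):
    """One proportional update of the ultrasound pressure (arbitrary units)."""
    error = setpoint - stability_score(echo, fs, fundamental_hz)
    # Bubbles quieter than the target -> raise pressure; too unstable -> lower it.
    return float(np.clip(pressure + gain * error, p_min, p_max))

# Example: one control iteration on a synthetic echo sampled at 2 MHz.
fs, f0 = 2_000_000, 500_000.0
echo = np.sin(2 * np.pi * f0 * np.arange(512) / fs) + 0.1 * np.random.randn(512)
print(control_step(pressure=0.5, echo=echo, fs=fs, fundamental_hz=f0))
```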


Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University. Speakers include:  Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld – Bob Knight – Phillip Alvelda – Paul Nuyujukian –  Peter Fischer – Tony Chahine – Shahin Farshchi – Ambar Bhattacharyya – Adam D’Augelli

Registration rates increase November 24th, 2017

AI detects pneumonia from chest X-rays


Andrew Ng and Stanford colleagues used AI to detect pneumonia from X-rays with accuracy similar to that of trained radiologists. The CheXNet model analyzed 112,120 frontal-view X-ray images of 30,805 unique patients released by the NIH (ChestX-ray14).

The deep learning algorithm also detected 14 diseases, including fibrosis, hernias, and masses, with fewer false positives and negatives than the NIH benchmark research.
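
CheXNet is built on a 121-layer DenseNet that outputs one probability per finding. Below is a minimal, illustrative PyTorch sketch of that kind of multi-label setup: a DenseNet-121 backbone with a 14-output sigmoid head. Training details, weights, and the data pipeline from the Stanford work are not reproduced; this only shows the model shape.

```python
# Illustrative PyTorch sketch of a CheXNet-style model: a DenseNet-121 backbone
# with a 14-output sigmoid head, one output per ChestX-ray14 finding.
# Training, weights, and the data pipeline from the Stanford work are omitted.
import torch
import torch.nn as nn
from torchvision import models

NUM_FINDINGS = 14  # pneumonia, fibrosis, hernia, mass, etc.

class ChestXRayClassifier(nn.Module):
    def __init__(self, num_findings=NUM_FINDINGS):
        super().__init__()
        self.backbone = models.densenet121()  # randomly initialized backbone
        in_features = self.backbone.classifier.in_features
        # Replace the ImageNet head with a multi-label head.
        self.backbone.classifier = nn.Linear(in_features, num_findings)

    def forward(self, x):
        # One sigmoid per finding: each disease is predicted independently.
        return torch.sigmoid(self.backbone(x))

model = ChestXRayClassifier()
dummy_batch = torch.randn(2, 3, 224, 224)   # two frontal X-rays as RGB tensors
print(model(dummy_batch).shape)             # torch.Size([2, 14])
```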


Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University. Speakers include:  Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld – Bob Knight – Phillip Alvelda – Paul Nuyujukian –  Peter Fischer – Tony Chahine – Shahin Farshchi – Ambar Bhattacharyya – Adam D’Augelli

Registration rates increase today, November 17th, 2017

Researchers claim to improve human memory with implanted electrodes


In a small study, USC’s Dong Song demonstrated the efficacy of an implantable “memory prosthesis.” Dr. Song presented his work at the Society for Neuroscience conference in Washington this week.

Twenty volunteers had the device implanted at the same time as electrodes for epilepsy treatment, a procedure they had already planned.

The “prosthesis” collected brain activity data during tests designed to exercise short-term memory and working memory. The researchers then identified the patterns associated with optimal memory performance and used them to stimulate the brain during later tests.

They claimed that the procedure improved short-term memory by approximately 15 percent, and working memory by 25 percent. When the brain was stimulated randomly, performance worsened.
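
A loose sketch of the pattern-selection idea, in Python: learn which recorded activity patterns accompany successful recall, then average the most “success-like” trials into a stimulation template. This is an illustration of the concept on synthetic data only, not Dr. Song’s actual model or stimulation code.

```python
# Loose, synthetic-data illustration of the pattern-selection idea: learn which
# recorded activity patterns accompany successful recall, then average the most
# "success-like" trials into a stimulation template. Not the USC team's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 32))     # 200 trials x 32 neural features (synthetic)
recalled = rng.integers(0, 2, size=200)   # 1 = item recalled, 0 = forgotten (synthetic)

clf = LogisticRegression(max_iter=1000).fit(features, recalled)

# Rank the successful trials by the model's confidence and average the top ones
# into a template of "optimal memory performance" activity.
success = features[recalled == 1]
scores = clf.predict_proba(success)[:, 1]
template = success[np.argsort(scores)[-20:]].mean(axis=0)
print(template.shape)   # (32,) -- the pattern stimulation would aim to evoke
```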


Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University. Speakers include:  Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld – Bob Knight – Phillip Alvelda – Paul Nuyujukian –  Peter Fischer – Tony Chahine – Shahin Farshchi – Ambar Bhattacharyya – Adam D’Augelli

Registration rates increase November 17th, 2017

Optogenetic technique controls single neurons


MIT’s Ed Boyden and Paris Descartes University’s Valentina Emiliani have developed a new optogenetic technique that, combined with newly developed opsins, stimulates individual cells with precise control over both the timing and location of activation.

This will allow the study of how individual cells, and connections among those cells, generate specific behaviors such as initiating a movement or learning a new skill.

The study’s lead authors are Or Shemesh from MIT and Dimitrii Tanese and Valeria Zampini from CNRS.


Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University. Speakers include:  Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld – Bob Knight – Phillip Alvelda – Paul Nuyujukian –  Peter Fischer – Tony Chahine – Shahin Farshchi

Registration rates increase – November 17th, 2017

 

Video: John Rogers on soft electronics for the human body


Recorded at ApplySci’s Wearable Tech + Digital Health + Neurotech Boston conference on September 19th at the MIT Media Lab.


Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University. Speakers include:  Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld – Bob Knight – Phillip Alvelda – Paul Nuyujukian –  Peter Fischer – Tony Chahine

Registration rates increase today – November 10th, 2017

Silicon probes record hundreds of neurons simultaneously


Neuropixels, developed by HHMI’s Tim Harris, are electrodes that record brain activity from hundreds of neurons simultaneously. Previously, it was not possible to measure the joint activity of individual neurons distributed across brain regions: recording methods could either resolve the activity of individual neurons or monitor multiple brain regions, but not both.

Researchers from UCL, the Allen Institute for Brain Science, and IMEC collaborated on the study. The team is now developing a four-shank probe with a smaller base for chronic recordings, and optrodes that combine recording with optical stimulation for optogenetic experiments.


Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University. Speakers include:  Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld – Bob Knight – Phillip Alvelda – Paul Nuyujukian –  Peter Fischer – Tony Chahine

Registration rates increase November 10th.

Phone camera measures wound depth, severity


AutoDepth by Swift Medical uses a phone’s camera to assess a wound’s depth and severity. Its algorithms process dynamic changes over time, and depth can indicate whether a wound is healing properly.

The system is noninvasive and widely accessible to clinicians. In addition to gauging the wound healing process, it can be used to measure the progression of pressure ulcers, or to analyze skin moles, where volume, depth, and surface texture are considered.


Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University. Speakers include:  Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld – Bob Knight – Phillip Alvelda – Paul Nuyujukian –  Peter Fischer

Registration rates increase November 10th.

AI detects suicidal thoughts from brain scans in small study


David Brent and colleagues at the University of Pittsburgh and Carnegie Mellon used machine learning to identify suicidal thoughts in subjects based on fMRI scans.

In a recent study, 18 suicidal participants and 18 members of a control group were presented with three lists of 10 words each, related to suicide, positive affect, or negative affect. Previously mapped neural signatures showing brain patterns of emotions like “shame” and “anger” were incorporated.

Five brain locations and six words distinguished the suicidal patients from the controls. Using those locations and words, a machine-learning classifier was trained and was able to identify 15 of the 17 suicidal patients and 16 of the 17 control subjects.

The suicidal patients were divided into those who had attempted suicide (9) and those who had not (8). Another classifier was then able to identify 16 of the 17 patients.
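
The evaluation described above amounts to per-subject features (activation for a handful of discriminating words at a handful of brain locations) scored with a simple classifier under leave-one-out cross-validation. The sketch below shows that setup on synthetic data; the feature layout, classifier choice, and data are illustrative assumptions, not the published analysis.

```python
# Illustrative sketch of the evaluation: per-subject activation features for a
# few discriminating words at a few brain locations, scored with leave-one-out
# cross-validation. Feature layout, classifier, and data are assumptions.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(1)
n_subjects = 34                      # e.g., 17 ideators + 17 controls
n_features = 5 * 6                   # 5 brain locations x 6 discriminating words
X = rng.normal(size=(n_subjects, n_features))    # synthetic activations
y = np.array([1] * 17 + [0] * 17)                # 1 = ideator, 0 = control

accuracy = cross_val_score(GaussianNB(), X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {accuracy:.2f}")  # ~chance on random data
```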

If healthy people and those with suicidal thoughts are proven to have such different reactions to words, this work could impact therapy and, it is our hope, help prevent the loss of lives.


Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University. Speakers include:  Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld – Bob Knight – Phillip Alvelda – Paul Nuyujukian –  Peter Fischer

Weather, activity, sleep, stress data used to predict migraines


Migraine Alert by Second Opinion Health uses machine learning to analyze weather, activity, sleep, and stress data to determine whether a user will have a migraine headache.

The company claims that the algorithm is effective after 15 episodes are logged. They have launched a multi-patient study with the Mayo Clinic, in which subjects use a phone and a Fitbit to log this lifestyle data. The goal is prediction, earlier treatment, and, hopefully, symptom reduction. No EEG-derived brain activity data is incorporated into the prediction method.
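
In practice, this is a standard tabular-prediction setup: each day becomes a feature vector of weather, activity, sleep, and stress logs, and a classifier outputs next-day migraine risk. Here is a minimal sketch of that idea on synthetic data; the features, model choice, and numbers are illustrative assumptions, not Second Opinion Health’s algorithm.

```python
# Illustrative sketch of migraine prediction from daily logs: each day is a
# feature vector of weather, activity, sleep, and stress, and a classifier
# outputs next-day risk. Features, model, and data are assumptions, not
# Second Opinion Health's algorithm.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
# Columns: barometric pressure change, step count, hours slept, stress (1-5)
days = rng.normal(size=(60, 4))                   # ~2 months of synthetic daily logs
migraine_next_day = rng.integers(0, 2, size=60)   # synthetic labels

model = GradientBoostingClassifier().fit(days[:45], migraine_next_day[:45])
risk = model.predict_proba(days[45:])[:, 1]       # predicted risk for held-out days
print(np.round(risk, 2))
```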


Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University. Speakers include:  Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld – Bob Knight – Phillip Alvelda – Paul Nuyujukian –  Peter Fischer