Atrial fibrillation-detecting ring

Eue-Keun Choi and Seoul National University colleagues have developed an atrial fibrillation-detecting ring with functionality similar to AliveCor and other watch-based monitors. The researchers claim performance comparable to that of medical-grade pulse oximeters.

In a study, Soonil Kwon and colleagues analyzed data from 119 patients with AF who underwent simultaneous ECG and photoplethysmography before and after direct-current cardioversion. 27,569 photoplethysmography samples were analyzed by an algorithm built on a convolutional neural network, and rhythms were then interpreted with the wearable ring.

The convolutional neural network diagnosed AF with 99.3% accuracy and sinus rhythm with 95.9% accuracy. After filtering low-quality samples, the wearable device's accuracy was 100% for AF and 98.3% for sinus rhythm.
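
As an illustration of how such per-rhythm accuracy figures are computed, the fraction of correctly labeled samples can be taken per class. The labels below are hypothetical, not the study's data:

```python
def per_class_accuracy(true_labels, predicted_labels, target):
    """Fraction of samples whose true class is `target` that were labeled correctly."""
    pairs = [(t, p) for t, p in zip(true_labels, predicted_labels) if t == target]
    return sum(1 for t, p in pairs if t == p) / len(pairs)

# Hypothetical per-sample rhythm labels ("AF" or "SR"), for illustration only.
truth = ["AF", "AF", "AF", "AF", "SR", "SR", "SR", "SR", "SR"]
preds = ["AF", "AF", "AF", "SR", "SR", "SR", "SR", "SR", "AF"]

af_acc = per_class_accuracy(truth, preds, "AF")  # 3 of 4 AF samples correct
sr_acc = per_class_accuracy(truth, preds, "SR")  # 4 of 5 SR samples correct
```

Reporting accuracy per rhythm class, as the study does, avoids the distortion a single pooled accuracy figure would suffer when the two classes are imbalanced.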

According to Choi: “Deep learning or [artificial intelligence] can overcome formerly important problems of [photoplethysmography]-based arrhythmia diagnosis. It not only improves diagnostic accuracy in great degrees, but also suggests a metric how this diagnosis will be likely true without ECG validation. Combined with wearable technology, this will considerably boost the effectiveness of AF detection.”


Join ApplySci at the 12th Wearable Tech + Digital Health + Neurotech Boston conference on November 14, 2019 at Harvard Medical School and the 13th Wearable Tech + Neurotech + Digital Health Silicon Valley conference on February 11-12, 2020 at Stanford University

AI detects depression in children’s voices

University of Vermont researchers have developed an algorithm that detects anxiety and depression in children’s voices with 80 per cent accuracy, according to a recent study.

Standard diagnosis involves a 60-90 minute semi-structured interview of the child and their primary caregiver by a trained clinician. AI could make diagnosis faster and more reliable.

The researchers used an adapted version of the Trier Social Stress Task, which is intended to induce stress and anxiety in the subject. 71 children between the ages of three and eight were asked to improvise a three-minute story and told that they would be judged on how interesting it was. To create stress, the researcher remained stern throughout the speech and gave only neutral or negative feedback. After 90 seconds, and again with 30 seconds left, a buzzer sounded and the judge announced how much time remained.

The children were also diagnosed using a structured clinical interview and parent questionnaire.

The algorithm analyzed statistical features of the audio recordings of each child's story and related them to the diagnosis, identifying children with 80 per cent accuracy. The middle phase of the recordings, between the two buzzers, was the most predictive of a diagnosis.
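
The study's exact feature set is not listed here, but the phase-based analysis can be sketched as splitting each recording at the two buzzer times (90 seconds in, and 30 seconds before the end of the three-minute story) and summarizing each phase separately. Function names and the summary statistics are illustrative:

```python
import statistics

def phase_stats(pitch_track, frame_rate_hz, buzzer_times_s=(90.0, 150.0)):
    """Split a per-frame pitch track into the three task phases
    (before the first buzzer, between buzzers, after the second buzzer)
    and summarize each with simple statistics."""
    b1 = int(buzzer_times_s[0] * frame_rate_hz)
    b2 = int(buzzer_times_s[1] * frame_rate_hz)
    phases = {"pre": pitch_track[:b1],
              "mid": pitch_track[b1:b2],
              "post": pitch_track[b2:]}
    return {name: {"mean": statistics.mean(seg), "stdev": statistics.pstdev(seg)}
            for name, seg in phases.items() if seg}
```

Under this framing, the study's finding is that statistics computed on the "mid" segment carried the most diagnostic signal.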

Eight audio features were identified. Three stood out as highly indicative of internalizing disorders: a low-pitched voice, repeatable speech inflections and content, and a higher-pitched response to the surprising buzzer.



Personalized, gamified, inhibitory control training for weight loss

Evan Forman, Michael Wagner, and Drexel colleagues have developed Diet DASH, a brain training game meant to inhibit sugar-eating impulses. A recent study using the game examined the impact of highly personalized and/or gamified inhibitory control training on weight loss, using repeated, at-home training.

The trial randomized 109 overweight, sweet-eating participants, who attended a workshop on why sugar is bad for their health. The training was customized to focus on the sweets that each participant enjoyed, and difficulty was adjusted according to how well they resisted. Participants played the game for a few minutes every day for six weeks, and then once a week for two weeks.

In the game, players moved quickly through a grocery store with the goal of putting healthy food in a cart while refraining from choosing sweets. Points were awarded for choosing healthy items. Half of the participants lost as much as 3.1 percent of their body weight over the eight-week study.
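
The study reports only that difficulty tracked how well each participant resisted sweets; one generic way to implement such an adaptive rule is a simple staircase, sketched below. All thresholds, step sizes, and bounds here are hypothetical, not taken from Diet DASH:

```python
def adjust_difficulty(level, resisted_sweets, total_sweets,
                      target=0.8, step=1, min_level=1, max_level=10):
    """Simple staircase: raise difficulty when the player resists sweets
    more often than the target rate, lower it when they fall short."""
    rate = resisted_sweets / total_sweets
    if rate > target:
        level += step
    elif rate < target:
        level -= step
    return max(min_level, min(max_level, level))
```

A staircase like this keeps each player near a fixed success rate, so the game stays challenging without becoming discouraging.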


Study: Blood + spinal fluid test detects Alzheimer’s 8 years before symptoms

Klaus Gerwert at Ruhr-Universität Bochum has developed a blood + CSF test that he claims can detect Alzheimer’s disease 8 years before the onset of symptoms. The goal is early-stage therapy that achieves better results than current treatment protocols.

To reduce false positive results from the initial study, the researchers first used a blood test to identify high-risk individuals. For participants flagged in this first step, they then measured a dementia-specific biomarker, tau protein. This second analysis was carried out on cerebrospinal fluid drawn from the spinal canal, an invasive procedure that the team is working to eliminate from the next phase of research. If both biomarkers were positive, the presence of Alzheimer’s disease was deemed highly likely.

According to Gerwert: “Through the combination of both analyses, 87 of 100 Alzheimer’s patients were correctly identified in our study. And we reduced the number of false positive diagnoses in healthy subjects to 3 of 100. Now, new clinical studies with test participants in very early stages of the disease can be launched. Recently, two major promising studies have failed, especially Crenezumab and Aducanumab – not least because it had probably already been too late by the time therapy was taken up. The new test opens up a new therapy window.”
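
Gerwert's figures correspond to a sensitivity of 87% and a false-positive rate of 3%. A short Bayes' rule calculation shows how these translate into the probability that a positive combined test is correct; the 10% prevalence used below is an assumed value for illustration, not a figure from the study:

```python
def positive_predictive_value(sensitivity, false_positive_rate, prevalence):
    """Bayes' rule: P(disease | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = false_positive_rate * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Figures from the quote: 87/100 sensitivity, 3/100 false positives.
# The 10% prevalence is an assumption for illustration only.
ppv = positive_predictive_value(0.87, 0.03, 0.10)  # roughly 0.76
```

This is why driving the false-positive rate down matters so much for a screening test: at low disease prevalence, even a small false-positive rate can otherwise swamp the true positives.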

Researcher Andreas Nabers added: “Once amyloid plaques have formed, it seems that the disease can no longer be treated. We are now conducting in-depth research to detect the second biomarker, namely tau protein, in the blood, in order to supply a solely blood-based test in future.”



Embryo stem cells created from skin cells

Yossi Buganim from The Hebrew University of Jerusalem has discovered a set of genes that can transform murine skin cells into all three of the cell types that comprise the early embryo: the embryo itself, the placenta and the extra-embryonic tissues, such as the umbilical cord.

Buganim and colleagues discovered a combination of five genes that, when inserted into skin cells, reprogram the cells into the three early embryonic cell types: iPS cells, which create the fetus; placental stem cells; and stem cells that develop into other extra-embryonic tissues. The transformation takes about one month.

To uncover the molecular mechanisms that are activated during the formation of these cell types, the researchers analyzed changes to the genome structure and function inside the cells when the five genes are introduced. They discovered that during the first stage, skin cells lose their cellular identity and then slowly acquire a new identity of one of the three early embryonic cell types, and that this process is governed by the levels of two of the five genes.

This discovery may enable creation of entire human embryos out of human skin cells, without the need for sperm or eggs. It will also impact the modeling of embryonic defects and the understanding of placental dysfunctions.  It could address fertility problems by creating human embryos in a petri dish.



Thought generated speech

Edward Chang and UCSF colleagues are developing technology that translates signals from the brain into synthetic speech. The research team believes that the sounds would be nearly as sharp and natural as a real person’s voice. Sounds made by the human lips, jaw, tongue and larynx would be simulated.

The goal is a communication method for those unable to speak due to disease or paralysis.

According to Chang: “For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual’s brain activity.”

Berkeley’s Bob Knight has developed related technology that uses high-frequency broadband (HFB) activity to decode imagined speech, with the goal of a BCI for the treatment of disabling language deficits. He described this work at the 2018 ApplySci conference at Stanford.



Voice-detected PTSD

Charles Marmar, Adam Brown, and NYU colleagues are using AI-based voice analysis to detect PTSD with 89 per cent accuracy, according to a recent study.

PTSD is typically determined by bias-prone clinical interviews or self-reports.

The team recorded standard diagnostic interviews of 53 Iraq and Afghanistan veterans with military-service-related PTSD, as well as 78 veterans without the disorder. The recordings were fed into voice software to yield 40,526 speech-based features captured in short spurts of talk, which were then sifted for patterns.
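
The paper's actual pattern-sifting method is not described here; as a hedged sketch, one simple way to screen a large feature set like this is to rank features by the standardized difference of group means, then pass only the top-ranked features to a classifier:

```python
import statistics

def rank_features(features_pos, features_neg):
    """Rank feature indices by a simple standardized mean difference between
    the PTSD-positive and PTSD-negative groups (larger = more separating)."""
    n_features = len(features_pos[0])
    scores = []
    for j in range(n_features):
        pos = [row[j] for row in features_pos]
        neg = [row[j] for row in features_neg]
        pooled_sd = statistics.pstdev(pos + neg) or 1.0  # guard against zero spread
        score = abs(statistics.mean(pos) - statistics.mean(neg)) / pooled_sd
        scores.append((score, j))
    return [j for _, j in sorted(scores, reverse=True)]
```

A univariate screen like this is crude (it ignores feature interactions), but it illustrates how tens of thousands of speech features can be reduced to a tractable, discriminative subset.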

The program linked less clear speech and a lifeless, metallic tone with PTSD. While the study did not explore the disease mechanisms behind PTSD, the team believes that traumatic events change brain circuits that process emotion and muscle tone, affecting a person’s voice.



Trigeminal nerve stimulation to treat ADHD

NeuroSigma has received FDA clearance for its forehead patch, which stimulates the trigeminal nerve during sleep to treat ADHD. The device won CE Mark approval in Europe in 2015.

The approval was based on a study of 62 subjects. Over four weeks, subjects who received the treatment showed a 31.4% decrease in ADHD-RS scores, versus an 18.4% decrease in the control group.
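
The reported percentages are reductions from baseline ADHD-RS scores. A minimal sketch of the arithmetic follows; the raw scores below are hypothetical, chosen only to reproduce reductions of roughly the reported sizes:

```python
def percent_decrease(baseline, followup):
    """Percent reduction from baseline score (positive = symptom improvement)."""
    return (baseline - followup) / baseline * 100

# Hypothetical ADHD-RS scores, for illustration only; not the study's raw data.
active = percent_decrease(34.1, 23.4)   # roughly a 31.4% decrease
control = percent_decrease(33.7, 27.5)  # roughly an 18.4% decrease
```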

The FDA’s Carlos Pena said: “This new device offers a safe, non-drug option for treatment of ADHD in pediatric patients through the use of mild nerve stimulation, a first of its kind.”

Trigeminal nerve stimulation is also being studied in epilepsy and PTSD.



3D printed, vascularized heart, using patient’s cells, biological materials

Tel Aviv University professor Tal Dvir has printed a 3D vascularized engineered heart, including cells, blood vessels, ventricles and chambers, using a patient’s own cells and biological materials.

A biopsy of fatty tissue was taken from patients, and its cellular and acellular materials were separated. While the cells were reprogrammed to become pluripotent stem cells, the extracellular matrix was processed into a personalized hydrogel that served as printing “ink.” After being mixed with the hydrogel, the cells were efficiently differentiated into cardiac or endothelial cells to create patient-specific, immune-compatible cardiac patches with blood vessels and, subsequently, an entire heart.

Dvir believes that this “3D-printed thick, vascularized and perfusable cardiac tissues that completely match the immunological, cellular, biochemical and anatomical properties of the patient” reduces the risk of implant rejection.

The team now plans on culturing the printed hearts and “teaching them to behave” like hearts, then transplanting them in animal models.



MRI-detected intracellular calcium signaling

Alan Jasanoff and MIT colleagues are using MRI to monitor calcium activity at a much deeper level in the brain than previously possible, to show how neurons communicate with each other.  The research team believes that this enables neural activity to be linked with specific behaviors.

To create their intracellular calcium sensors, the researchers used manganese as a contrast agent, bound to an organic compound that can penetrate cell membranes and contains a calcium-binding chelator.

Once inside the cell, if calcium levels are low, the calcium chelator binds weakly to the manganese atom, shielding the manganese from MRI detection. When calcium flows into the cell, the chelator binds to the calcium and releases the manganese, which makes the contrast agent appear brighter in an MRI.
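
This binding-and-release mechanism can be caricatured as a toy saturation model, in which the MRI-visible signal tracks the fraction of manganese displaced by calcium. The dissociation constant and baseline below are illustrative placeholders, not measured values:

```python
def mri_signal(ca_conc_um, kd_um=1.0, baseline=0.2):
    """Toy model: the fraction of manganese released (and hence MRI-visible)
    grows with calcium concentration as calcium displaces manganese from the
    chelator. kd_um (dissociation constant, micromolar) and baseline are
    illustrative values, not from the study."""
    released = ca_conc_um / (ca_conc_um + kd_um)  # saturating binding curve
    return baseline + (1 - baseline) * released
```

The key qualitative behavior matches the description above: at low calcium the signal sits near its shielded baseline, and it brightens monotonically as calcium rises and frees the manganese.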

The technique could also be used to image calcium’s role in activating immune cells, or in diagnostic brain or heart imaging.

