Algorithm predicts low blood pressure during surgery

UCLA’s Maxime Cannesson has developed an algorithm that, in a recent study, predicted an intraoperative hypotensive event 15 minutes before it occurred in 84 percent of cases, 10 minutes before in 84 percent of cases, and five minutes before in 87 percent of cases.

The goal is early identification and treatment to prevent complications such as postoperative heart attack, acute kidney injury, or death.

The algorithm is based on recordings of the rise and fall of arterial blood pressure during each heartbeat, including episodes of hypotension. For each heartbeat, the researchers derived 3,022 individual features from the arterial pressure waveforms, producing more than 2.6 million bits of information. They then identified which combinations of these features, occurring together, predict hypotension.
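
A rough sketch of how such a pipeline can look, assuming synthetic beats and scikit-learn’s GradientBoostingClassifier (the study’s actual feature set and model are far richer and are not public in this post):

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def beat_features(beat: np.ndarray) -> list[float]:
    """A few illustrative per-beat waveform features (the study used 3,022)."""
    return [
        beat.max(),                   # systolic peak
        beat.min(),                   # diastolic trough
        beat.max() - beat.min(),      # pulse pressure
        beat.mean(),                  # approximate mean arterial pressure
        np.argmax(beat) / len(beat),  # relative time-to-peak
    ]

# Toy data: 1,000 simulated beats; labels (hypotension within 15 min)
# are random placeholders, not clinical data.
rng = np.random.default_rng(0)
beats = [80 + 40 * np.abs(np.sin(np.linspace(0, np.pi, 100)))
         + rng.normal(0, 2, 100) for _ in range(1000)]
X = np.array([beat_features(b) for b in beats])
y = rng.integers(0, 2, size=1000)

clf = GradientBoostingClassifier().fit(X, y)
print(f"hypotension risk for one beat: {clf.predict_proba(X[:1])[0, 1]:.2f}")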

Cannesson said that the research “opens the door to the application of these techniques to many other physiological signals, such as EKG for cardiac arrhythmia prediction or EEG for brain function” and “could lead to a whole new field of investigation in clinical and physiological sciences and reshape our understanding of human physiology.”


Join ApplySci at the 9th Wearable Tech + Digital Health + Neurotech Boston conference on September 24, 2018 at the MIT Media Lab. Speakers include: Rudy Tanzi – Mary Lou Jepsen – George Church – Roz Picard – Nathan Intrator – Keith Johnson – Juan Enriquez – John Mattison – Roozbeh Ghaffari – Poppy Crum – Phillip Alvelda – Marom Bikson

REGISTRATION RATES INCREASE FRIDAY, JUNE 15TH

Weather, activity, sleep, stress data used to predict migraines

Migraine Alert by Second Opinion Health uses machine learning to analyze weather, activity, sleep, and stress data to determine whether a user will have a migraine headache.

The company claims that the algorithm is effective after 15 episodes are logged. It has launched a multi-patient study with the Mayo Clinic, in which subjects use a phone and a Fitbit to log this lifestyle data. The goal is prediction, earlier treatment, and, hopefully, symptom reduction. No EEG-derived brain activity data is incorporated into the prediction method.
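
A minimal sketch of what such a per-user predictor could look like, assuming scikit-learn and invented feature names (Second Opinion Health has not published its model):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# One row per logged day: [barometric pressure change (hPa),
# step count, hours slept, self-reported stress (0-10)]
days = rng.normal([0, 8000, 7, 4], [5, 2500, 1.2, 2], size=(60, 4))
migraine = rng.integers(0, 2, size=60)  # placeholder labels

# The company reports usefulness after ~15 logged episodes; here we
# simply fit on whatever history exists.
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(days, migraine)
today = [[6.0, 3000, 5.5, 8]]  # pressure swing, low sleep, high stress
print(f"migraine probability: {model.predict_proba(today)[0, 1]:.2f}")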


Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University. Speakers include: Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld – Bob Knight – Phillip Alvelda – Paul Nuyujukian – Peter Fischer

Machine learning for early Alzheimer’s diagnosis

Anant Madabhushi and Case Western Reserve colleagues have used machine learning to diagnose Alzheimer’s disease from multimodal patient data in a small study. The goal is early intervention, which could potentially extend independence.

149 patients were analyzed using a Cascaded Multi-view Canonical Correlation (CaMCCo) algorithm, which integrates MRI scans, features of the hippocampus, brain glucose metabolism rates, proteomics, genomics, and measures of mild cognitive impairment (MCI).

Parameters that distinguish healthy from unhealthy subjects were selected first. From the unhealthy subjects’ variables, the algorithm then selected those that best distinguish mild cognitive impairment from Alzheimer’s disease.
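
The cascade itself can be sketched with two ordinary classifiers, one per stage; this stands in for CaMCCo’s canonical-correlation fusion, which is considerably more involved, and uses synthetic placeholder data:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(149, 20))                 # fused multimodal features
y = rng.choice(["healthy", "mci", "ad"], size=149)

# Stage 1: healthy vs. unhealthy
stage1 = LogisticRegression().fit(X, y == "healthy")

# Stage 2: MCI vs. Alzheimer's, fit only on unhealthy subjects
unhealthy = y != "healthy"
stage2 = LogisticRegression().fit(X[unhealthy], y[unhealthy] == "ad")

def predict(x: np.ndarray) -> str:
    if stage1.predict(x.reshape(1, -1))[0]:
        return "healthy"
    return "ad" if stage2.predict(x.reshape(1, -1))[0] else "mci"

print(predict(X[0]))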

This is an admirable attempt to diagnose a disease which currently has no cure.  ApplySci hopes that we will soon be able to combine early detection with a truly effective treatment.  Millions around the world are waiting.


Join ApplySci at Wearable Tech + Digital Health + NeuroTech Boston on September 19, 2017 at the MIT Media Lab – featuring  Joi Ito – Ed Boyden – Roz Picard – George Church – Nathan Intrator –  Tom Insel – John Rogers – Jamshid Ghajar – Phillip Alvelda – Michael Weintraub – Nancy Brown – Steve Kraus – Bill Geary – Mary Lou Jepsen

Registration rates increase Friday, August 18th.


ANNOUNCING WEARABLE TECH + DIGITAL HEALTH + NEUROTECH SILICON VALLEY – FEBRUARY 26-27, 2018 @ STANFORD UNIVERSITY

Machine learning tools predict heart failure

Declan O’Regan and MRC London Institute of Medical Sciences colleagues believe that AI can predict when pulmonary hypertension patients require more aggressive treatment to prevent death.

In a recent study, machine learning software automatically analyzed moving images of a patient’s heart captured during an MRI. It then used image processing to build a “virtual 3D heart”, replicating how 30,000 points in the heart contract during each beat. The researchers fed the system data from hundreds of previous patients. By linking the data and models, the system learned which attributes of a heart, its shape and structure, put an individual at risk of heart failure.

The software was developed using data from 256 patients with pulmonary hypertension. It correctly predicted those who would still be alive after one year 80% of the time. The figure for doctors is 60%.
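
As a loose illustration of that kind of supervised risk prediction (not the study’s actual model, which built full 3D heart meshes), a classifier over per-patient motion features might look like this, with synthetic data standing in for the real measurements:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
# One row per patient: summary statistics of how ~30,000 heart-wall
# points move during a beat (invented here).
X = rng.normal(size=(256, 12))
alive_at_one_year = rng.integers(0, 2, size=256)

model = RandomForestClassifier(n_estimators=200, random_state=0)
accuracy = cross_val_score(model, X, alive_at_one_year, cv=5).mean()
print(f"cross-validated accuracy: {accuracy:.2f}")  # study reported ~80%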

The researchers want to test the technology in other forms of heart failure, including cardiomyopathy, to see when a pacemaker or other form of treatment is needed.


ApplySci’s 6th Digital Health + NeuroTech Silicon Valley – February 7-8, 2017 @ Stanford | Featuring: Vinod Khosla – Tom Insel – Zhenan Bao – Phillip Alvelda – Nathan Intrator – John Rogers – Roozbeh Ghaffari – Tarun Wadhwa – Eythor Bender – Unity Stoakes – Mounir Zok – Sky Christopherson – Marcus Weldon – Krishna Shenoy – Karl Deisseroth – Shahin Farshchi – Casper de Clercq – Mary Lou Jepsen – Vivek Wadhwa – Dirk Schapeler – Miguel Nicolelis


Wearable + cloud analysis tracks Huntington’s disease progression

In the latest pharma/tech partnership, Teva and Intel are developing a wearable platform to track the progression of Huntington’s disease. There is no cure for the disease, which causes a breakdown of nerve cells in the brain, resulting in a decline in motor control, cognition, and mental stability.

The technology can be used to assess the effectiveness of drugs that treat symptoms, and in developing new drugs. (Teva will use it in a mid-stage study.)

Wearable sensors (in this case a watch, but the concept could progress to patches) continuously measure patient functioning. Data is wirelessly transmitted to the cloud, and algorithms score motor symptom severity.
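
Neither company has published the scoring algorithm; as a crude stand-in, a severity score could be derived from band-limited power in the wrist accelerometer signal, as in this sketch (the frequency band and simulated signal are assumptions):

import numpy as np

def severity_score(accel: np.ndarray, fs: float = 50.0) -> float:
    """Fraction of signal power in an assumed 3-8 Hz involuntary-movement band."""
    accel = accel - accel.mean()
    power = np.abs(np.fft.rfft(accel)) ** 2
    freqs = np.fft.rfftfreq(len(accel), d=1 / fs)
    band = (freqs >= 3) & (freqs <= 8)
    return float(power[band].sum() / power.sum())

# Simulated 10 s of wrist acceleration containing a 5 Hz involuntary movement
t = np.linspace(0, 10, 500)
accel = 0.3 * np.sin(2 * np.pi * 5 * t) + np.random.default_rng(4).normal(0, 0.1, 500)
print(f"severity score: {severity_score(accel):.2f}")  # the cloud side would aggregate these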

In recent months, ApplySci has described similar pharma/device/big data/machine learning alliances, including:

  • Sanofi and Verily’s Onduo platform to treat diabetes
  • GlaxoSmithKline and Verily targeting electrical signals in the body
  • A Verily and Novartis smart contact lens glucose monitor
  • A Biogen/Alphabet MS study

ApplySci looks forward to the use of brain-monitoring wearables to enable users to see and address neurodegenerative changes before symptoms appear.


ApplySci’s 6th Wearable Tech + Digital Health + NeuroTech Silicon Valley – February 7-8, 2017 @ Stanford | Featuring: Vinod Khosla – Tom Insel – Zhenan Bao – Phillip Alvelda – Nathan Intrator – John Rogers – Mary Lou Jepsen – Vivek Wadhwa – Miguel Nicolelis – Roozbeh Ghaffari – Unity Stoakes – Mounir Zok

Algorithm detects depression in speech

USC researchers are using machine learning to diagnose depression based on speech patterns. During interviews, SimSensei detected reductions in vowel expression that might be missed by human interviewers.

The depression-associated speech variations have been documented in past studies. Speech in depressed patients can be flat and monotonous, with reduced variability. Reduced speech output, a slower articulation rate, longer pauses, and variable switching-pause duration have also been observed.

The method was tested on 253 adults, measuring “vowel space,” which was significantly reduced in subjects who reported symptoms of depression and PTSD. The researchers believe that vowel space reduction could also be linked to schizophrenia and Parkinson’s disease, which they are investigating.
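
Vowel space is commonly quantified as the area of the polygon spanned by a speaker’s vowel formants; the sketch below computes that area from illustrative (F1, F2) values, leaving formant extraction itself (e.g., with Praat) aside:

import numpy as np

def vowel_space_area(formants: dict[str, tuple[float, float]]) -> float:
    """Shoelace area of the polygon formed by (F1, F2) vowel points."""
    pts = np.array(list(formants.values()))
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# Corner vowels in Hz; depressed speech tends toward a smaller,
# more centralized polygon. Values are textbook-style examples.
healthy = {"i": (270, 2290), "a": (730, 1090), "u": (300, 870)}
reduced = {"i": (350, 2000), "a": (650, 1150), "u": (380, 950)}
print(vowel_space_area(healthy), ">", vowel_space_area(reduced))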


Join us at ApplySci’s 6th WEARABLE TECH + DIGITAL HEALTH + NEUROTECH conference – February 7-8, 2017 @ Stanford University

Machine learning model enables robotic hand to learn autonomously

Vikash Kumar and University of Washington colleagues have developed a simulation model that allows robotic hands to learn from their own experience while performing dexterous manipulation. Human direction is not required.

A recent study incorporated the model while a robotic hand attempted several tasks, including rotating an elongated object. With each try, the hand became better able to spin the tube. Machine learning algorithms helped it model the basic physics involved and plan the actions it should take to best complete the task.
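
The learn-a-model-then-plan loop can be caricatured in a few lines. This toy sketch assumes nothing about the actual UW system and replaces the hand’s physics with a made-up linear function:

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)

def true_rotation(action: np.ndarray) -> float:
    """Stand-in for unknown physics: how far the tube spins for an action."""
    return float(action @ np.array([0.5, -0.2, 0.8]) + rng.normal(0, 0.05))

actions, outcomes = [], []
model = LinearRegression()
for trial in range(50):
    candidates = rng.uniform(-1, 1, size=(64, 3))
    if trial < 5:
        a = candidates[0]  # explore at first
    else:
        # ...then pick the candidate the learned model predicts is best
        a = candidates[np.argmax(model.predict(candidates))]
    actions.append(a)
    outcomes.append(true_rotation(a))
    model.fit(np.array(actions), np.array(outcomes))

print(f"rotation on final trial: {outcomes[-1]:.2f}")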



Wearable Tech + Digital Health NYC – June 7, 2016 @ the New York Academy of Sciences

NeuroTech NYC – June 8, 2016 @ the New York Academy of Sciences

AI on a chip for voice, image recognition

Horizon Robotics, led by Yu Kai, Baidu’s former deep learning head, is developing AI chips and software to mimic how the human brain solves abstract tasks, such as voice and image recognition. The company believes that this will provide more consistent and reliable services than cloud-based systems.

The goal is to enable fast and intelligent responses to user commands, without an internet connection, to control appliances, cars, and other objects. Health applications are a logical next step, although not yet discussed.
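
To make the on-device idea concrete, here is an entirely hypothetical 8-bit-quantized layer of the sort an edge AI chip executes locally, with no cloud round trip; the weights are random placeholders:

import numpy as np

rng = np.random.default_rng(6)
W = rng.normal(size=(16, 4))                  # trained weights (float)
scale = np.abs(W).max() / 127.0
W_q = np.round(W / scale).astype(np.int8)     # int8 weights stored on-chip

def classify_on_device(features: np.ndarray) -> int:
    """Integer matmul, dequantized at the end; no network required."""
    logits = (features.astype(np.int32) @ W_q.astype(np.int32)) * scale
    return int(np.argmax(logits))

mic_features = rng.integers(-128, 127, size=16, dtype=np.int8)
print("predicted command id:", classify_on_device(mic_features))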


Wearable Tech + Digital Health San Francisco – April 5, 2016 @ the Mission Bay Conference Center

NeuroTech San Francisco – April 6, 2016 @ the Mission Bay Conference Center


Machine learning for faster stroke diagnosis

MedyMatch uses big data and artificial intelligence to improve stroke diagnosis, with the goal of faster treatment.

Patient CT images are scanned and immediately compared with hundreds of thousands of other patients’ results. Almost any deviation from a normal CT is quickly detected.

With current methods, medical imaging errors can occur when emergency room radiologists miss subtle aspects of brain scans, leading to delayed treatment. Fast detection of stroke can prevent paralysis and death.

The company claims that it can detect irregularities more accurately than a human can. Findings are presented as 3D brain images, enabling a doctor to make better informed decisions. The cloud-based system allows scans to be uploaded from any location.
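
MedyMatch’s method is proprietary. A heavily simplified stand-in for “compare against many priors and flag deviations” is an outlier score over features of normal scans, as sketched here with placeholder features and an assumed threshold:

import numpy as np

rng = np.random.default_rng(7)
normal_scans = rng.normal(size=(100_000, 32))  # features of prior normal CTs
mu, sigma = normal_scans.mean(axis=0), normal_scans.std(axis=0)

def deviation_score(scan_features: np.ndarray) -> float:
    """Largest absolute z-score of any feature vs. the normal population."""
    return float(np.abs((scan_features - mu) / sigma).max())

new_scan = rng.normal(size=32)
new_scan[5] += 8.0                             # a hyperdense region, say
score = deviation_score(new_scan)
print("flag for radiologist review" if score > 4.0 else "looks normal",
      f"(score={score:.1f})")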



Machine learning analysis of doctor notes predicts cancer progression

Gunnar Rätsch and Memorial Sloan Kettering colleagues are using AI to find similarities between cancer cases. Rätsch’s algorithm has analyzed 100 million sentences taken from the clinical notes of about 200,000 cancer patients to predict disease progression.

In a recent study, machine learning was used to classify patient symptoms, medical histories, and doctors’ observations into 10,000 clusters. Each cluster represented a common observation in medical records, including recommended treatments and typical symptoms. Connections between clusters were mapped to show their interrelationships. In another study, algorithms were used to find hidden associations between written notes and patients’ gene and blood sequencing.
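
A scaled-down sketch of the clustering step, assuming a TF-IDF embedding and k-means via scikit-learn (the post does not describe the study’s actual pipeline):

from sklearn.cluster import MiniBatchKMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Illustrative note sentences; the real system used ~100 million
# sentences and 10,000 clusters.
sentences = [
    "patient reports persistent fatigue",
    "fatigue noted, worsening over two weeks",
    "started cisplatin, tolerating well",
    "continue cisplatin per protocol",
    "mri shows no new lesions",
    "imaging stable, no progression on mri",
]

X = TfidfVectorizer().fit_transform(sentences)
km = MiniBatchKMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
for label, sentence in zip(km.labels_, sentences):
    print(label, sentence)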

