Category Archives: Machine Learning

Machine learning tools predict heart failure

Declan O’Regan and MRC London Institute of Medical Sciences colleagues believe that AI can predict when pulmonary hypertension patients require more aggressive treatment to prevent death.

In a recent study, machine learning software automatically analyzed moving images of a patient’s heart captured during an MRI. It then used image processing to build a “virtual 3D heart,” replicating how roughly 30,000 points in the heart contract during each beat. The researchers fed the system data from hundreds of previous patients. By linking the data and models, the system learned which attributes of a heart’s shape and structure put an individual at risk of heart failure.
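The linked-data idea (per-point motion features in, survival risk out) can be caricatured with a toy supervised model. Everything below, from the reduced point count to the synthetic data and the single-feature threshold rule, is an illustrative assumption, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# 300 mesh points stands in for the study's ~30,000; data is synthetic.
n_patients, n_points = 200, 300

labels = rng.integers(0, 2, n_patients)  # 1 = survived one year (synthetic)
# Stand-in assumption: survivors contract more strongly on average.
motion = rng.normal(1.0, 0.2, (n_patients, n_points)) + 0.5 * labels[:, None]

# Link motion to outcome: mean contraction as a single risk feature,
# thresholded at the midpoint between the two class means.
feature = motion.mean(axis=1)
threshold = (feature[labels == 1].mean() + feature[labels == 0].mean()) / 2
predicted = (feature > threshold).astype(int)

accuracy = (predicted == labels).mean()
print(f"1-year survival prediction accuracy: {accuracy:.2f}")
```

On this cleanly separated synthetic data the threshold rule is near-perfect; the point is only the shape of the pipeline, not the number.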

The software was developed using data from 256 patients with pulmonary hypertension. It correctly identified the patients who would still be alive after one year 80% of the time; the figure for doctors is 60%.

The researchers want to test the technology in other forms of heart failure, including cardiomyopathy, to see when a pacemaker or other form of treatment is needed.

Click to view MRC London video.

ApplySci’s 6th Wearable Tech + Digital Health + NeuroTech Silicon Valley – February 7-8, 2017 @ Stanford | Featuring: Vinod Khosla – Tom Insel – Zhenan Bao – Phillip Alvelda – Nathan Intrator – John Rogers – Roozbeh Ghaffari – Tarun Wadhwa – Eythor Bender – Unity Stoakes – Mounir Zok – Sky Christopherson – Marcus Weldon – Krishna Shenoy – Karl Deisseroth – Shahin Farshchi – Casper de Clercq – Mary Lou Jepsen – Vivek Wadhwa – Dirk Schapeler – Miguel Nicolelis


Wearable + cloud analysis track Huntington’s disease progression

In the latest pharma/tech partnership, Teva and Intel are developing a wearable platform to track the progression of Huntington’s disease. There is no cure for the disease, which causes a breakdown of nerve cells in the brain, resulting in a decline in motor control, cognition, and mental stability.

The technology can be used to assess the effectiveness of drugs that treat symptoms, and in developing new drugs. (Teva will use it in a mid-stage study.)

Wearable sensors (in this case a watch, but the concept could progress to patches) continuously measure patient functioning. Data is wirelessly transmitted to the cloud, and algorithms score motor symptom severity.
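The scoring step of such a sensor-to-cloud pipeline might look something like the following sketch, which scores severity as the fraction of accelerometer power in a movement-disorder frequency band. The sampling rate, band edges, and scoring rule are all assumptions for illustration, not Intel's algorithm.

```python
import numpy as np

FS = 50  # sampling rate in Hz (an assumed value)

def motor_severity_score(accel: np.ndarray, fs: int = FS) -> float:
    """Fraction of spectral power in the 3-7 Hz band (assumed tremor band)."""
    spectrum = np.abs(np.fft.rfft(accel - accel.mean())) ** 2
    freqs = np.fft.rfftfreq(accel.size, d=1 / fs)
    band = (freqs >= 3) & (freqs <= 7)
    total = spectrum.sum()
    return float(spectrum[band].sum() / total) if total else 0.0

t = np.arange(0, 10, 1 / FS)
steady = np.sin(2 * np.pi * 1.0 * t)                 # smooth 1 Hz movement
tremulous = steady + 2 * np.sin(2 * np.pi * 5.0 * t)  # added 5 Hz component

print(motor_severity_score(steady), motor_severity_score(tremulous))
```

The smooth signal scores near zero while the tremulous one scores high, which is the shape of behavior a cloud-side severity scorer would need.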

In recent months, ApplySci has described similar pharma/device/big data/machine learning alliances, including:

  • Sanofi and Verily’s Onduo platform to treat diabetes
  • GlaxoSmithKline and Verily targeting electrical signals in the body
  • A Verily and Novartis smart contact lens glucose monitor
  • A Biogen/Alphabet MS study

ApplySci looks forward to the use of brain-monitoring wearables to enable users to see and address neurodegenerative changes before symptoms appear.



Algorithm detects depression in speech

USC researchers are using machine learning to diagnose depression, based on speech patterns. During interviews, SimSensei detected reductions in vowel expression that might be missed by human interviewers.

The depression-associated speech variations have been documented in past studies. Depressed patients’ speech can be flat and monotonous, with reduced variability. Reduced speech output, a slower articulation rate, longer pauses, and more variable switching-pause durations were also observed.

The method was tested on 253 adults, measuring their “vowel space,” which was significantly reduced in subjects who reported symptoms of depression and PTSD. The researchers believe that vowel space reduction could also be linked to schizophrenia and Parkinson’s disease, which they are investigating.
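"Vowel space" is commonly quantified as the area of the polygon that the corner vowels span in the F1-F2 formant plane, which can be computed with the shoelace formula. The formant values below are generic illustrative numbers, not data from the USC study.

```python
def polygon_area(points):
    """Shoelace formula for the area of a polygon given (F1, F2) vertices."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2

# Corner vowels /i/, /a/, /u/ as (F1, F2) in Hz -- illustrative values only.
healthy = [(300, 2300), (800, 1300), (350, 900)]
# Reduced articulation shrinks the triangle toward its center.
reduced = [(420, 1900), (700, 1400), (450, 1150)]

print(polygon_area(healthy), polygon_area(reduced))
```

A smaller area for the second speaker is the kind of reduction the algorithm would flag.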



Machine learning model enables robotic hand to learn autonomously

Vikash Kumar and University of Washington colleagues have developed a simulation model that allows robotic hands to learn from their own experiences while performing dexterous manipulation. Human direction is not required.

A recent study incorporated the model while a robotic hand attempted several tasks, including rotating an elongated object. With each try, the hand became better able to spin the tube. Machine learning algorithms helped it model the basic physics involved, and plan the actions it should take to best complete the task.
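The learn-then-plan loop can be caricatured in one dimension: the hand does not know how motor commands map to the tube's rotation, so it fits a dynamics model to its own trial data and plans with that model. The single gain parameter and noise level are illustrative assumptions, not the UW simulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# The true command-to-rotation gain is hidden from the learner; it only
# drives the simulator below.
TRUE_GAIN = 0.7  # rotation per unit command (illustrative)

def try_action(cmd: float) -> float:
    """Simulated trial: rotation produced by a command, with sensor noise."""
    return TRUE_GAIN * cmd + rng.normal(0, 0.05)

# 1) Gather experience: random commands and the rotations they produced.
cmds = rng.uniform(-1, 1, 50)
obs = np.array([try_action(c) for c in cmds])

# 2) Fit a one-parameter dynamics model by least squares.
gain_hat = (cmds @ obs) / (cmds @ cmds)

# 3) Plan: pick the command the learned model predicts will hit the target.
target = 0.5
planned_cmd = target / gain_hat

print(f"estimated gain {gain_hat:.3f}, planned command {planned_cmd:.3f}")
```

More experience tightens the model, and better models yield better plans, which is the improvement-per-try behavior the study describes.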

Click to view University of Washington video


Wearable Tech + Digital Health NYC – June 7, 2016 @ the New York Academy of Sciences

NeuroTech NYC – June 8, 2016 @ the New York Academy of Sciences

AI on a chip for voice, image recognition

Horizon Robotics, led by Yu Kai, Baidu’s former deep learning head, is developing AI chips and software to mimic how the human brain solves abstract tasks, such as voice and image recognition. The company believes that this will provide more consistent and reliable services than cloud-based systems.

The goal is to enable fast and intelligent responses to user commands, without an internet connection, to control appliances, cars, and other objects. Health applications are a logical next step, although not yet discussed.


Wearable Tech + Digital Health San Francisco – April 5, 2016 @ the Mission Bay Conference Center

NeuroTech San Francisco – April 6, 2016 @ the Mission Bay Conference Center


Machine learning for faster stroke diagnosis

MedyMatch uses big data and artificial intelligence to improve stroke diagnosis, with the goal of faster treatment.

Patient CT scans are immediately compared with hundreds of thousands of other patients’ results. Almost any deviation from a normal CT is quickly detected.

With current methods, medical imaging errors can occur when emergency room radiologists miss subtle aspects of brain scans, leading to delayed treatment. Fast detection of stroke can prevent paralysis and death.

The company claims that it can detect irregularities more accurately than a human can. Findings are presented as 3D brain images, enabling a doctor to make better informed decisions. The cloud-based system allows scans to be uploaded from any location.
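One simple way to realize "compare a scan against many normals and flag deviations" is a voxel-wise reference model. This sketch uses synthetic arrays and z-score thresholding, purely as an assumption about how such a comparison could work; it is not MedyMatch's proprietary method, and real registration and normalization are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

# Build a voxel-wise reference (mean and spread) from many "normal" slices.
normal_scans = rng.normal(40, 5, (500, 32, 32))  # 500 synthetic normal slices
mean_ref = normal_scans.mean(axis=0)
std_ref = normal_scans.std(axis=0)

# A new scan with a simulated hyperdense region (e.g. a bleed).
new_scan = rng.normal(40, 5, (32, 32))
new_scan[10:14, 10:14] += 30

# Flag voxels that deviate strongly from the reference model.
z = np.abs(new_scan - mean_ref) / std_ref
anomaly = z > 4
print("flagged voxels:", int(anomaly.sum()))
```

The flagged region could then be rendered as a highlighted overlay, analogous to the 3D brain images the company presents to doctors.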



Machine learning analysis of doctor notes predicts cancer progression

Gunnar Rätsch and Memorial Sloan Kettering colleagues are using AI to find similarities between cancer cases. Rätsch’s algorithm has analyzed 100 million sentences taken from the clinical notes of about 200,000 cancer patients to predict disease progression.

In a recent study, machine learning was used to classify patient symptoms, medical histories, and doctors’ observations into 10,000 clusters. Each cluster represented a common observation in medical records, including recommended treatments and typical symptoms. Connections between clusters were mapped to show inter-relationships. In another study, algorithms were used to find hidden associations between written notes and patients’ gene and blood sequencing.
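The clustering step can be sketched at toy scale: embed note sentences as bag-of-words vectors and group similar observations with k-means. The sentences, vocabulary, k = 2, and fixed initialization here are stand-ins for the study's 100 million sentences and 10,000 clusters, not its actual method.

```python
import numpy as np
from collections import Counter

# Toy clinical-note sentences (invented for illustration).
sentences = [
    "patient reports severe fatigue",
    "severe fatigue noted on exam",
    "tumor size reduced after chemotherapy",
    "chemotherapy reduced tumor size",
    "fatigue improving with rest",
    "tumor progression despite chemotherapy",
]

# Bag-of-words embedding, normalized so dot products act like cosine similarity.
vocab = sorted({w for s in sentences for w in s.split()})
X = np.array([[Counter(s.split())[w] for w in vocab] for s in sentences], float)
X /= np.linalg.norm(X, axis=1, keepdims=True)

# Tiny k-means (k=2), seeded with one sentence from each apparent topic.
centers = X[[0, 2]]
for _ in range(10):
    labels = np.argmax(X @ centers.T, axis=1)   # assign by similarity
    centers = np.array([X[labels == k].mean(axis=0) for k in (0, 1)])

print(labels)
```

The fatigue-related and tumor/chemotherapy-related sentences land in separate clusters, mirroring how each real cluster came to represent one common observation.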



Genetic disease patients identified via machine learning

Stanford’s Nigam Shah and Joshua Knowles are using machine learning to search for people with familial hypercholesterolemia, a genetic disorder that causes high levels of LDL cholesterol in the blood.

Only about 10 percent of people with the disorder are aware of it, and it is often diagnosed only after a cardiac event, whose risk can be dramatically reduced with early treatment. (Men with the disorder have a 50 percent chance of having a heart attack by age 50; women have a 30 percent chance by age 60.)

Using electronic health records, the researchers identified 120 people from Stanford’s network known to have FH, along with others who have high LDL but not the genetic disorder.

Algorithms then learned to spot people with FH by analyzing those records, keying on cholesterol levels, age, and prescribed drugs, and then searched the health record data for undiagnosed FH.
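The record-screening idea can be illustrated with a hand-written stand-in rule over the same feature types (LDL level, age, prescriptions). The thresholds below are illustrative assumptions, not the trained Stanford classifier.

```python
def fh_risk_flag(ldl_mg_dl: float, age: int, on_statin: bool) -> bool:
    """Flag records resembling familial hypercholesterolemia: very high LDL
    at a young age, or persistently high LDL despite statin therapy.
    Thresholds are illustrative, not clinically validated."""
    if ldl_mg_dl >= 190 and age < 40:
        return True
    if ldl_mg_dl >= 160 and on_statin:
        return True
    return False

patients = [
    {"ldl": 210, "age": 32, "statin": False},  # young, very high LDL
    {"ldl": 170, "age": 55, "statin": True},   # high LDL despite treatment
    {"ldl": 140, "age": 60, "statin": False},  # common, non-genetic pattern
]
flags = [fh_risk_flag(p["ldl"], p["age"], p["statin"]) for p in patients]
print(flags)
```

The learned model plays the same role as this rule, but with thresholds and feature weights fitted from the 120 confirmed cases rather than written by hand.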



Machine learning based cancer diagnostics

Search engine giant Yandex is using its advanced machine learning capabilities to detect cancer predisposition.

Yandex Data Factory has partnered with AstraZeneca to develop the RAY platform, which analyzes DNA testing results, generates a report about patient genome mutations, and provides treatment recommendations and side effect information. Testing will begin next month.

The two companies have signed a cooperation agreement to launch big data projects in epidemiology, pathologic physiology, diagnostics, and treatment of diseases (focused on contagious diseases, cancer, endocrinology, cardiology, pulmonology, and psychiatry).


Mobile hyperspectral “tri-corder”

Tel Aviv University’s David Mendlovic and Ariel Raz are turning smartphones into hyperspectral sensors, capable of identifying the chemical components of objects from a distance.

The technology, being commercialized by Unispectral and Ramot, improves camera resolution and noise filtering, and is compatible with smartphone lenses.

The new lens and software let in much more light than current smartphone camera filter arrays, and the software keeps the image resolution sharp as the camera zooms in. Once the camera has acquired an image, the data is sent to a third party, which processes and analyzes it to determine the material compounds present and the amount of each component, then sends the information back to the smartphone.

Unispectral is in talks with smartphone makers, auto makers, and security organizations to serve as third-party analyzers. To analyze the data from camera images, a partner will need a large database of hyperspectral signatures.
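The third-party analysis step presumably amounts to matching a measured spectrum against that signature database. The sketch below does so with cosine similarity over made-up Gaussian band profiles; the band layout, materials, and matching rule are all assumptions about how such a lookup could work.

```python
import numpy as np

bands = np.linspace(400, 1000, 31)  # wavelength bands in nm (assumed layout)

def signature(center: float, width: float) -> np.ndarray:
    """Toy Gaussian band profile standing in for a real material signature."""
    return np.exp(-((bands - center) / width) ** 2)

# Made-up reference library; a real analyzer would hold measured signatures.
library = {
    "chlorophyll-like": signature(550, 60),
    "water-like": signature(970, 40),
    "plastic-like": signature(730, 80),
}

def identify(measured: np.ndarray) -> str:
    """Return the library entry whose signature best matches (cosine similarity)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(library, key=lambda name: cos(measured, library[name]))

# A noisy measurement close to the chlorophyll-like profile.
sample = signature(560, 65) + np.random.default_rng(3).normal(0, 0.02, bands.size)
print(identify(sample))
```

Estimating the amount of each component would extend this from nearest-signature lookup to fitting a mixture of signatures, which is why a large database matters.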

Wearable Tech + Digital Health NYC 2015 – June 30 @ the New York Academy of Sciences. Register now and save $300.