Category Archives: AI

AI assistant addresses specific needs of seniors

ElliQ is an AI assistant that intuitively interacts with seniors to support independent living.

The NLP-based system enables users to make video calls, play games, and use social media. Music, TED talks, audiobooks, and other content are recommended after machine learning tools analyze user preferences (or caregiver input is received). Physical activity is suggested after a long period of sitting is detected. Medication reminders can be scheduled.
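
ElliQ's actual recommendation model is not public; as a rough illustration, preference-weighted ranking over tagged content might work along these lines (the tag names, catalog, and scoring rule are hypothetical):

```python
from collections import Counter

# Hypothetical sketch: rank content items by overlap with tags the user
# has engaged with. Not ElliQ's actual method.

def recommend(user_tags, catalog, top_n=2):
    """Score each item by how many of its tags the user has engaged with."""
    weights = Counter(user_tags)  # engagement counts act as learned preferences
    scored = [
        (sum(weights[t] for t in item["tags"]), item["title"])
        for item in catalog
    ]
    scored.sort(reverse=True)
    return [title for _, title in scored[:top_n]]

catalog = [
    {"title": "TED talk on memory", "tags": ["science", "talk"]},
    {"title": "Jazz playlist",      "tags": ["music"]},
    {"title": "Audiobook: mystery", "tags": ["fiction", "audio"]},
]
user_tags = ["music", "music", "talk"]  # listened to music twice, one talk
print(recommend(user_tags, catalog))  # ['Jazz playlist', 'TED talk on memory']
```

Caregiver input could be folded in by simply adding extra tags to `user_tags`, which is one way a system like this could blend learned and stated preferences.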

The robot is meant to act as a companion, addressing loneliness, an epidemic among the elderly. It could be further enhanced by incorporating memory triggers, anxiety-reducing content, and custom instructions about activities of daily living.

ApplySci’s 6th  Digital Health + NeuroTech Silicon Valley  –  February 7-8 2017 @ Stanford   |   Featuring:   Vinod Khosla – Tom Insel – Zhenan Bao – Phillip Alvelda – Nathan Intrator – John Rogers – Roozbeh Ghaffari –Tarun Wadhwa – Eythor Bender – Unity Stoakes – Mounir Zok – Sky Christopherson – Marcus Weldon – Krishna Shenoy – Karl Deisseroth – Shahin Farshchi – Casper de Clercq – Mary Lou Jepsen – Vivek Wadhwa – Dirk Schapeler – Miguel Nicolelis

Sensor, data, and AI-driven primary care

Forward has brought advanced technology to well-care.

Patients/members are integrated into the practice with a baseline screening via body scans and blood and genetic tests. They are then given consumer and medical wearables, which work with proprietary algorithms, for continuous monitoring (and access to data), personalized treatment, and emergency alerts. Physical exam rooms display all of the data during doctor visits.

Ongoing primary care, including continuous health monitoring, body scans, gynecology, travel vaccinations, medication, nutrition guidance, blood tests, and skin care, is included in the fee-based system.

Forward investor Vinod Khosla will be interviewed by ApplySci’s Lisa Weiner Intrator on stage at Digital Health + NeuroTech at Stanford on February 7th at 4:15pm.

Machine learning tools predict heart failure

Declan O’Regan and MRC London Institute of Medical Sciences colleagues believe that AI can predict when pulmonary hypertension patients require more aggressive treatment to prevent death.

In a recent study, machine learning software automatically analyzed moving images of a patient's heart, captured during an MRI. It then used image processing to build a "virtual 3D heart," replicating how 30,000 points in the heart contract during each beat. The researchers fed the system data from hundreds of previous patients. By linking the data and models, the system learned which attributes of a heart's shape and structure put an individual at risk of heart failure.
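
The MRC model itself is not detailed above, but the general idea, reducing thousands of per-point motion measurements to summary features and mapping them to a risk score, can be sketched. The feature choice and coefficients below are hypothetical, not the published model:

```python
import math

# Illustrative sketch only: summarize per-point contraction amplitudes and
# apply a logistic risk score. Weights and bias are invented for illustration.

def risk_of_failure(contractions, w_mean=-4.0, w_std=2.0, bias=1.0):
    """Map per-point contraction amplitudes to a risk score in (0, 1).

    Weak overall contraction (low mean) and uneven motion (high spread)
    both push the score up."""
    n = len(contractions)
    mean = sum(contractions) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in contractions) / n)
    z = bias + w_mean * mean + w_std * std
    return 1.0 / (1.0 + math.exp(-z))

healthy = [0.9, 1.0, 1.1, 1.0]   # strong, uniform contraction
weak    = [0.2, 0.6, 0.1, 0.5]   # weak, uneven contraction
print(risk_of_failure(healthy) < risk_of_failure(weak))  # True
```

A real model would of course be trained on outcomes rather than hand-set, which is precisely what feeding the system hundreds of prior patients accomplishes.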

The software was developed using data from 256 patients with pulmonary hypertension. It correctly predicted those who would still be alive after one year 80% of the time. The figure for doctors is 60%.

The researchers  want to test the technology in other forms of heart failure, including cardiomyopathy, to see when a pacemaker or other form of treatment is needed.

Click to view MRC London video.

Sensors + robotics + AI for safer aging in place

IBM and Rice University are developing MERA, a Watson-enabled robot meant to help seniors age in place.

The system comprises a Pepper robot interface, Watson, and Rice's CameraVitals project, which calculates vital signs by recording video of a person's face. Vitals are measured multiple times each day. Caregivers are informed if the camera and/or accelerometer detect a fall.
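
CameraVitals' method is not spelled out above, but camera-based vital signs generally rely on remote photoplethysmography: subtle periodic color changes in facial skin track the pulse, so the dominant frequency of the per-frame mean skin color estimates heart rate. A toy version over a synthetic signal, not real video frames:

```python
import math

# Toy sketch of the rPPG idea: find the dominant frequency of a per-frame
# "green channel" signal and convert it to beats per minute.

def heart_rate_bpm(signal, fps):
    """Return the dominant frequency of `signal` in beats per minute."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [x - mean for x in signal]
    best_k, best_power = 1, 0.0
    for k in range(1, n // 2):           # discrete Fourier transform, bin k
        re = sum(centered[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(centered[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    return best_k * fps / n * 60.0       # bin -> Hz -> beats per minute

# Synthetic "green channel" pulsing at 1.2 Hz (72 bpm), sampled at 30 fps.
fps, seconds = 30, 10
frames = [100 + 2 * math.sin(2 * math.pi * 1.2 * t / fps) for t in range(fps * seconds)]
print(round(heart_rate_bpm(frames, fps)))  # 72
```

A production system would add face tracking, band-pass filtering to the plausible pulse range, and motion-artifact rejection; the frequency-analysis core is the same.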

Speech to Text, Text to Speech, and Natural Language Classifier APIs are being studied to enable answers to health-related questions, such as "What are the symptoms of anxiety?" or "What is my heart rate?"
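
The Watson APIs named above are not reproduced here; as a stand-in for the classification step, a hypothetical keyword router shows the distinction the system must draw between general knowledge questions and queries about the user's own live sensor data:

```python
# Hypothetical keyword-based stand-in for intent classification, not
# Watson's Natural Language Classifier: route a question either to a
# knowledge base or to live sensor readings.

SENSOR_TERMS = {"heart rate", "blood pressure", "temperature"}

def route(question):
    q = question.lower()
    # Questions about "my" vitals go to the robot's own sensors.
    if "my" in q.split() and any(term in q for term in SENSOR_TERMS):
        return "sensor_query"
    return "knowledge_query"

print(route("What is my heart rate?"))             # sensor_query
print(route("What are the symptoms of anxiety?"))  # knowledge_query
```

A trained classifier replaces the keyword lists with learned decision boundaries, but the routing structure around it is the same.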

The company believes that sensors plus cognitive computing can give clinicians and caregivers insights that enable better care decisions. It will soon test the technology in partnership with Sole Cooperativa to monitor the daily activities of seniors in Italy.

Click to view IBM video


AI speech, text, image, identity analysis for healthcare, self-driving cars

Baidu Brain, developed by Baidu and Nvidia, is a robot that simulates the human brain using advanced algorithms, computing power, and data analysis. It can be used for voice recognition and synthesis, image recognition, natural language processing, and user profiling. Potential healthcare uses include disease recognition, treatment tracking, and rehabilitation progress monitoring.


AI identifies cancer after doctor misdiagnosis, used to personalize care

IBM Watson detected a rare form of leukemia in a patient in Japan, after comparing genetic changes with a database of 20 million research papers. She had been misdiagnosed by doctors for months, and received the wrong treatment for her cancer type.

IBM has created Watson partnerships with 16 US health systems and imaging firms to identify cancer, diabetes, and heart disease. It has just announced a similar partnership with 21 hospitals in China.

While AI systems still occasionally make mistakes, the use of the technology for diagnosis and personalized treatment, with therapies suggested based on their predicted likelihood of success, is growing rapidly.


Algorithm to help computers reason, imagine like humans?

Vicarious is developing a neural network algorithm designed to process data in a way similar to the human brain, giving it a computer "imagination." The company considers its work superior to current deep learning approaches, claiming it includes more features that appear in biology, including the ability to envision what learned information should look like in various scenarios.

In an MIT Technology Review interview, co-founder Dileep George said that “imagination could help computers process language by tying words, or symbols, to low-level physical representations of real-world things. In theory, such a system might automatically understand the physical properties of something like water, for example, which would make it better able to discuss the weather.”

The company has not yet announced any tangible products. If successful, it could develop applications for assistive robots that could change the lives of the disabled.


Wearable Tech + Digital Health NYC – June 7, 2016 @ the New York Academy of Sciences

NeuroTech NYC – June 8, 2016 @ the New York Academy of Sciences

AI on a chip for voice, image recognition

Horizon Robotics, led by Yu Kai, Baidu's former deep learning head, is developing AI chips and software to mimic how the human brain solves abstract tasks, such as voice and image recognition. The company believes that this will provide more consistent and reliable services than cloud-based systems.

The goal is to enable fast and intelligent responses to user commands, without an internet connection, to control appliances, cars, and other objects. Health applications are a logical next step, although not yet discussed.


Wearable Tech + Digital Health San Francisco – April 5, 2016 @ the Mission Bay Conference Center

NeuroTech San Francisco – April 6, 2016 @ the Mission Bay Conference Center

Machine learning for faster stroke diagnosis

MedyMatch uses big data and artificial intelligence to improve stroke diagnosis, with the goal of faster treatment.

Patient CT images are scanned and immediately compared with hundreds of thousands of other patients' results. Almost any deviation from a normal CT is quickly detected.

With current methods, medical imaging errors can occur when emergency room radiologists miss subtle aspects of brain scans, leading to delayed treatment. Fast detection of stroke can prevent paralysis and death.

The company claims that it can detect irregularities more accurately than a human can. Findings are presented as 3D brain images, enabling a doctor to make better informed decisions. The cloud-based system allows scans to be uploaded from any location.
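
MedyMatch's algorithm is proprietary; the deviation-from-normal comparison described above can be illustrated with a toy z-score check against a population of normal scans (all regions, values, and thresholds below are hypothetical):

```python
# Toy sketch of deviation-from-normal detection, not MedyMatch's algorithm:
# compare each region of a new scan against the mean and spread of prior
# normal scans, and flag regions lying far outside the normal range.

def flag_anomalies(scan, normals, z_threshold=3.0):
    """Return indices of regions whose intensity is more than z_threshold
    standard deviations from the normal population mean."""
    flagged = []
    for i in range(len(scan)):
        values = [n[i] for n in normals]
        mean = sum(values) / len(values)
        var = sum((v - mean) ** 2 for v in values) / len(values)
        std = var ** 0.5 or 1.0          # guard against zero spread
        if abs(scan[i] - mean) / std > z_threshold:
            flagged.append(i)
    return flagged

# Region 2 of the new scan is far brighter than in the normal population.
normals = [[10, 11, 10, 9], [9, 10, 11, 10], [10, 9, 10, 11]]
scan = [10, 10, 40, 10]
print(flag_anomalies(scan, normals))  # [2]
```

A real system works on registered 3D volumes with vastly more reference data, but the principle of flagging statistical outliers against a normal population is the same.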


DeepMind Health identifies complication risks

Google has announced DeepMind Health, which creates non-AI-based apps to identify patients' complication risk. AI is expected to be integrated in the future. Acute kidney injury is the group's initial focus, with testing by the UK National Health Service and the Royal Free Hospital London.

The initial app, Streams, quickly alerts hospital staff to critical patient information. One of Streams' designers, Chris Laing, said that "using Streams meant I was able to review blood tests for patients at risk of AKI within seconds of them becoming available. I intervened earlier and was able to improve the care of over half the patients Streams identified in our pilot studies."
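
Streams' exact alerting logic is not given above. NHS AKI detection is commonly based on the ratio of a patient's current serum creatinine to their baseline, which a minimal sketch might encode as follows (thresholds simplified; not Streams' actual implementation):

```python
# Simplified creatinine-ratio AKI staging, loosely modeled on NHS-style
# detection rules. Thresholds are illustrative.

def aki_stage(current_creatinine, baseline_creatinine):
    """Stage an acute kidney injury alert from the rise over baseline."""
    ratio = current_creatinine / baseline_creatinine
    if ratio >= 3.0:
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5:
        return 1
    return 0          # no alert

print(aki_stage(180, 100))  # 1: creatinine at 1.8x baseline triggers a stage-1 alert
print(aki_stage(95, 100))   # 0: no alert
```

An app like Streams adds the hard part around such a rule: pulling results the moment the lab posts them, and pushing the alert to the right clinician's device.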

The company plans to integrate patient treatment prioritization features based on the Hark clinical management system.

