Category Archives: AI

EEG + AI assists drivers in manual and autonomous cars


Nissan’s Brain-to-Vehicle (B2V) technology will enable vehicles to interpret signals from a driver’s brain.

The company describes two aspects of the system, prediction and detection, both of which depend on the driver wearing EEG electrodes:

Prediction: By detecting brain signals that indicate the driver is about to move, such as turning the steering wheel or pressing the accelerator pedal, B2V can begin the action more quickly.

Detection: When the car is in autonomous mode and driver discomfort is detected, AI tools change the driving configuration or style.

Lucian Gheorghe, an innovation researcher at Nissan, said that the system can use augmented reality to adjust what the driver sees, and can turn the wheel or slow the car 0.2 to 0.5 seconds faster than the driver.
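Nissan has not published implementation details, but the prediction step presumably rests on the slow readiness potential that precedes voluntary movement. A minimal sketch of that idea, with hypothetical sampling rate, channel, and threshold values:

```python
from scipy.signal import butter, filtfilt

FS = 250             # Hz; hypothetical EEG sampling rate
RP_THRESHOLD = -5.0  # microvolts; would be calibrated per driver

def bandpass(x, lo=0.1, hi=5.0, fs=FS):
    """Isolate the slow cortical drift where the readiness potential lives."""
    b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def movement_imminent(eeg_cz, window_s=0.5):
    """Flag an upcoming movement when the mean amplitude over motor
    cortex (hypothetical channel Cz) drifts below threshold in the
    most recent half second."""
    recent = bandpass(eeg_cz)[-int(window_s * FS):]
    return recent.mean() < RP_THRESHOLD
```

In a real system this trigger would feed the vehicle controller, which begins the steering or braking action the fraction of a second before the driver's muscles do.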


Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University. Speakers include:  Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld – Bob Knight – Phillip Alvelda – Paul Nuyujukian –  Peter Fischer – Tony Chahine – Shahin Farshchi – Ambar Bhattacharyya – Adam D’Augelli – Juan-Pablo Mas – Michael Eggleston – Walter Greenleaf – Jacobo Penide – David Sarno

Registration rates increase today – January 5th.

Robots visualize actions and plan without human instruction


Sergey Levine and UC Berkeley colleagues have developed robotic learning technology that enables robots to visualize how different behaviors will affect the world around them, without human instruction. This ability to plan across various scenarios could improve self-driving cars and robotic home assistants.

Visual foresight allows robots to predict what their cameras will see if they perform a particular sequence of movements. The robot can then learn to perform tasks without human help or prior knowledge of physics, its environment, or the objects in it.

The deep learning technology is based on dynamic neural advection: models that predict how pixels in an image will move from one frame to the next, based on the robot's actions. This video-prediction-based control has enabled robots to perform increasingly complex tasks.
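The published models are considerably richer, but the core idea of dynamic neural advection can be sketched as a network that predicts a per-pixel flow field from the current frame and the robot's action, then warps the frame along that flow. Every architectural detail below is illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelAdvectionSketch(nn.Module):
    """Toy action-conditioned frame predictor: encode the frame, mix in
    the action, predict a per-pixel (dx, dy) flow field, warp the frame."""
    def __init__(self, action_dim=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.action_fc = nn.Linear(action_dim, 32)
        self.flow_head = nn.Conv2d(32, 2, 3, padding=1)

    def forward(self, frame, action):
        feat = self.encoder(frame)
        feat = feat + self.action_fc(action)[:, :, None, None]
        flow = self.flow_head(feat)              # B x 2 x H x W
        return self.warp(frame, flow)

    @staticmethod
    def warp(frame, flow):
        """Bilinear resampling of the frame along the flow field."""
        b, _, h, w = frame.shape
        ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        base = torch.stack((xs, ys), dim=-1).float()        # H x W x 2
        grid = base + flow.permute(0, 2, 3, 1)              # B x H x W x 2
        grid = 2 * grid / torch.tensor([w - 1, h - 1]) - 1  # normalize to [-1, 1]
        return F.grid_sample(frame, grid, align_corners=True)
```

At planning time, the robot scores candidate action sequences by how closely their predicted frames match a goal image, and executes the best one.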

Click to view UC Berkeley video


Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University. Speakers include:  Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld – Bob Knight – Phillip Alvelda – Paul Nuyujukian –  Peter Fischer – Tony Chahine – Shahin Farshchi – Ambar Bhattacharyya – Adam D’Augelli – Juan-Pablo Mas – Michael Eggleston – Walter Greenleaf

AI detects pneumonia from chest X-rays


Andrew Ng and Stanford colleagues used AI to detect pneumonia from chest X-rays with accuracy similar to that of trained radiologists. The CheXNet model analyzed 112,120 frontal-view X-ray images of 30,805 unique patients released by the NIH (the ChestX-ray14 dataset).

The deep learning algorithm also detected 14 diseases, including fibrosis, hernia, and masses, with fewer false positives and negatives than the NIH benchmark research.
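The CheXNet paper describes a DenseNet-121 trained with per-label binary cross-entropy; a minimal sketch of that architecture with torchvision (training loop omitted):

```python
import torch.nn as nn
from torchvision import models

def build_chexnet(num_labels=14):
    """DenseNet-121 pretrained on ImageNet, with the classifier replaced
    by a 14-way sigmoid head: one independent probability per
    ChestX-ray14 disease label."""
    net = models.densenet121(weights="IMAGENET1K_V1")
    net.classifier = nn.Sequential(
        nn.Linear(net.classifier.in_features, num_labels),
        nn.Sigmoid())
    return net

# Training uses per-label binary cross-entropy (nn.BCELoss), since a
# single X-ray can show several conditions at once.
```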


Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University. Speakers include:  Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld – Bob Knight – Phillip Alvelda – Paul Nuyujukian –  Peter Fischer – Tony Chahine – Shahin Farshchi – Ambar Bhattacharyya – Adam D’Augelli

Registration rates increase today, November 17th, 2017.

AI detects bowel cancer in less than 1 second in small study


Yuichi Mori and Showa University colleagues have used AI to identify bowel cancer by analyzing polyps found during colonoscopy in less than a second.

The system compares a magnified view of a colorectal polyp against a library of 30,000 endocytoscopic images. The researchers reported 86% accuracy in a study of 300 polyps.
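This post does not describe the classifier itself; one plausible sketch of the comparison step is a nearest-neighbor vote over feature embeddings of the reference library, where every name and number below is hypothetical:

```python
import numpy as np

def classify_polyp(query_emb, library_embs, library_labels, k=15):
    """Vote among the k nearest reference images. library_embs is an
    (N, D) array of embeddings for the ~30,000 endocytoscopic images;
    library_labels holds 0 (non-neoplastic) / 1 (neoplastic)."""
    dists = np.linalg.norm(library_embs - query_emb, axis=1)
    nearest = np.argsort(dists)[:k]
    score = library_labels[nearest].mean()
    return score >= 0.5, score   # prediction and a crude confidence
```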

Mori said that while the team continues testing the technology, it will also focus on creating a system that can automatically detect polyps.

Click to view Endoscopy Thieme video


Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University. Speakers include:  Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld – Bob Knight – Phillip Alvelda

Machine learning improves breast cancer detection


MIT’s Regina Barzilay has used AI to improve breast cancer detection and diagnosis. Machine learning tools predict if a high-risk lesion identified on needle biopsy after a mammogram will upgrade to cancer at surgery, potentially eliminating unnecessary procedures.

In current practice, when a mammogram detects a suspicious lesion, a needle biopsy is performed to determine if it is cancer. Approximately 70 percent of the lesions are benign, 20 percent are malignant, and 10 percent are high-risk.

Using a method known as a random-forest classifier, the AI model led to 30 percent fewer surgeries than the strategy of operating on every lesion, while diagnosing more cancerous lesions (97 percent vs. 79 percent) than the strategy of operating only on traditional "high-risk" lesions.

Trained on information about 600 high-risk lesions, the technology looks for data patterns that include demographics, family history, past biopsies, and pathology reports.
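A minimal sketch of such a pipeline in scikit-learn, with synthetic stand-ins for the real lesion records:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

# Hypothetical stand-in for the ~600 lesion records: rows are lesions,
# columns are encoded features (age, family history, prior biopsies,
# pathology-report terms, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))
y = rng.random(600) < 0.1          # ~10% upgrade to cancer at surgery

model = RandomForestClassifier(n_estimators=500, class_weight="balanced",
                               random_state=0)
probs = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]

# Operate only above a probability threshold tuned so that nearly all
# true cancers are still sent to surgery.
recommend_surgery = probs >= 0.05
```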

MGH radiologists will begin incorporating the method into their clinical practice over the next year.


Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University, featuring: Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld

Detecting dementia with automated speech analysis


WinterLight Labs is developing speech-analysis algorithms to detect and monitor dementia and aphasia. A one-minute speech sample is used to measure the lexical diversity, syntactic complexity, semantic content, and articulation changes associated with these conditions.
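WinterLight's feature set is proprietary; the lexical side of such an analysis can nonetheless be sketched from a transcript, with every feature below illustrative rather than the company's:

```python
import re

def speech_features(transcript: str) -> dict:
    """Illustrative lexical measures from a one-minute transcript."""
    words = re.findall(r"[a-z']+", transcript.lower())
    n = max(len(words), 1)
    return {
        "word_count": len(words),
        "type_token_ratio": len(set(words)) / n,   # lexical diversity
        "mean_word_length": sum(map(len, words)) / n,
        "filler_rate": sum(w in {"um", "uh", "er"} for w in words) / n,
    }
```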

Clinicians currently conduct similar tests by interviewing patients and writing their impressions on paper.

The company believes that their automated system could inform clinical trials, medical care, and speech training.

If the platform could be used with mobile phones, the potential for widespread early detection is obvious. Unfortunately, detection, even early detection, does not at this point translate into a cure. ApplySci looks forward to the day when advanced neurodegenerative disease monitoring will be used to track progress toward healthy brain functioning.


Join ApplySci at Wearable Tech + Digital Health + NeuroTech Boston on September 19, 2017 at the MIT Media Lab – featuring  Joi Ito – Ed Boyden – Roz Picard – George Church – Nathan Intrator –  Tom Insel – John Rogers – Jamshid Ghajar – Riccardo Sabatini – Phillip Alvelda – Michael Weintraub – Nancy Brown – Steve Kraus – Bill Geary – Mary Lou Jepsen


ANNOUNCING WEARABLE TECH + DIGITAL HEALTH + NEUROTECH SILICON VALLEY – FEBRUARY 26 -27, 2018 @ STANFORD UNIVERSITY –  FEATURING:  ZHENAN BAO – JUSTIN SANCHEZ – BRYAN JOHNSON – NATHAN INTRATOR – VINOD KHOSLA

AI-driven, music-triggered brain state therapy for pain, sleep, stress, gait


The Sync Project has developed a novel, music-based, non-pharmaceutical approach to treating pain, sleep, stress, and Parkinson’s gait issues.

Recent studies showed that Parkinson's patients improved their gait when listening to a song with the right beat pattern, and that post-surgery patients self-administered one-third as much morphine after listening to an hour of music.

Lifestyle applications include Unwind, an app that detects one's heartbeat and responds with relaxing music (customized by machine learning tools) to aid sleep, and the Sync Music Bot, which uses Spotify to deliver daily music to enhance work, relaxation, and exercise.
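How Unwind maps heartbeat to music is not public; a hypothetical sketch of the entrainment idea is to start near the listener's heart rate and ease the tempo down a little on each update:

```python
def target_tempo(heart_rate_bpm: float, step: float = 0.97,
                 floor: float = 50.0) -> float:
    """Start the music near the listener's heart rate, then ease the
    tempo down a few percent per update, bounded below, so the
    heartbeat can entrain toward a calmer rhythm."""
    return max(heart_rate_bpm * step, floor)
```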

With further clinical validation, this non-invasive therapy could replace drugs for better, targeted, personalized interventions.


Join ApplySci at Wearable Tech + Digital Health + NeuroTech Boston on September 19, 2017 at the MIT Media Lab – featuring  Joi Ito – Ed Boyden – Roz Picard – George Church – Nathan Intrator –  Tom Insel – John Rogers – Jamshid Ghajar – Phillip Alvelda – Michael Weintraub – Nancy Brown – Steve Kraus – Bill Geary – Mary Lou Jepsen – Daniela Rus

Registration rates increase Friday, July 14th

AI assistant addresses specific needs of seniors


ElliQ is an AI assistant that intuitively interacts with seniors to support independent living.

The NLP-based system enables users to make video calls, play games, and use social media. Music, TED talks, audiobooks, and other content are recommended after machine learning tools analyze user preferences (or caregiver input is received). Physical activity is suggested after a long period of sitting is detected. Medication reminders can be scheduled.

The robot is meant to act as a companion, to address loneliness, which is an epidemic amongst the elderly. It could be further enhanced if memory triggers, anxiety-reducing content, and custom instructions about activities of daily living were incorporated.

ApplySci’s 6th Digital Health + NeuroTech Silicon Valley – February 7-8, 2017 @ Stanford | Featuring: Vinod Khosla – Tom Insel – Zhenan Bao – Phillip Alvelda – Nathan Intrator – John Rogers – Roozbeh Ghaffari – Tarun Wadhwa – Eythor Bender – Unity Stoakes – Mounir Zok – Sky Christopherson – Marcus Weldon – Krishna Shenoy – Karl Deisseroth – Shahin Farshchi – Casper de Clercq – Mary Lou Jepsen – Vivek Wadhwa – Dirk Schapeler – Miguel Nicolelis

Sensor, data, and AI-driven primary care


Forward has brought advanced technology to well-care.

Patient/members are integrated into the practice with a baseline screening via body scans and blood and genetic tests. They are then given consumer and medical wearables, which work with proprietary algorithms, for continuous monitoring (and access to data), personalized treatment, and emergency alerts. Physical exam rooms display all of the data during doctor visits.

Ongoing primary care, including continuous health monitoring, body scans, gynecology, travel vaccinations, medication, nutrition guidance, blood tests, and skin care, is included in the fee-based system.

Forward investor Vinod Khosla will be interviewed by ApplySci’s Lisa Weiner Intrator on stage at Digital Health + NeuroTech at Stanford on February 7th at 4:15pm.

ApplySci’s 6th Digital Health + NeuroTech Silicon Valley – February 7-8, 2017 @ Stanford | Featuring: Vinod Khosla – Tom Insel – Zhenan Bao – Phillip Alvelda – Nathan Intrator – John Rogers – Roozbeh Ghaffari – Tarun Wadhwa – Eythor Bender – Unity Stoakes – Mounir Zok – Sky Christopherson – Marcus Weldon – Krishna Shenoy – Karl Deisseroth – Shahin Farshchi – Casper de Clercq – Mary Lou Jepsen – Vivek Wadhwa – Dirk Schapeler – Miguel Nicolelis


Machine learning tools predict heart failure


Declan O’Regan and MRC London Institute of Medical Sciences colleagues believe that AI can predict when pulmonary hypertension patients require more aggressive treatment to prevent death.

In a recent study, machine learning software automatically analyzed moving images of a patient's heart captured during an MRI. It then used image processing to build a "virtual 3D heart," replicating how 30,000 points in the heart contract during each beat. The researchers fed the system data from hundreds of previous patients. By linking the data and models, the system learned which attributes of a heart's shape and structure put an individual at risk of heart failure.

The software was developed using data from 256 patients with pulmonary hypertension. It correctly predicted those who would still be alive after one year 80% of the time. The figure for doctors is 60%.
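The study's method is more sophisticated, but the shape of such a pipeline can be sketched: represent each patient by the motion of the ~30,000 heart-wall points, reduce the dimensionality, and fit a one-year survival classifier. All data below is synthetic:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Synthetic stand-in: one row per patient, one column per heart-wall
# point (e.g., its radial displacement over the cardiac cycle).
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 30_000))
y = rng.random(256) < 0.7           # alive at one year (stand-in labels)

model = make_pipeline(PCA(n_components=50), LogisticRegression(max_iter=1000))
model.fit(X, y)
one_year_survival_prob = model.predict_proba(X)[:, 1]
```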

The researchers want to test the technology in other forms of heart failure, including cardiomyopathy, to determine when a pacemaker or other treatment is needed.

Click to view MRC London video.

ApplySci’s 6th Digital Health + NeuroTech Silicon Valley – February 7-8, 2017 @ Stanford | Featuring: Vinod Khosla – Tom Insel – Zhenan Bao – Phillip Alvelda – Nathan Intrator – John Rogers – Roozbeh Ghaffari – Tarun Wadhwa – Eythor Bender – Unity Stoakes – Mounir Zok – Sky Christopherson – Marcus Weldon – Krishna Shenoy – Karl Deisseroth – Shahin Farshchi – Casper de Clercq – Mary Lou Jepsen – Vivek Wadhwa – Dirk Schapeler – Miguel Nicolelis