Phone camera measures wound depth, severity


AutoDepth by Swift Medical uses a phone’s camera to assess a wound’s depth and severity. Algorithms track how the wound changes over time, and depth can indicate whether it is healing properly.

The system is noninvasive and could be widely accessible to clinicians. In addition to gauging wound healing, it can be used to measure the progression of pressure ulcers, or to analyze skin moles, where volume, depth, and surface texture are considered.
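As a rough illustration of the healing-trend idea described above, the sketch below fits a simple line to a series of depth measurements and flags whether the wound appears to be shrinking. The readings, sampling days, and threshold are hypothetical; Swift Medical has not published the details of its algorithm.

```python
# Hypothetical sketch: flag whether successive wound-depth measurements trend downward.
# The data, interval, and threshold are illustrative, not Swift Medical's actual method.

def healing_trend(depths_mm, days):
    """Fit a least-squares line to depth vs. time and return the slope (mm/day)."""
    n = len(depths_mm)
    mean_t = sum(days) / n
    mean_y = sum(depths_mm) / n
    num = sum((t - mean_t) * (y - mean_y) for t, y in zip(days, depths_mm))
    den = sum((t - mean_t) ** 2 for t in days)
    return num / den

depths = [6.2, 5.8, 5.1, 4.5, 4.0]   # example depth readings in mm
days = [0, 3, 7, 10, 14]             # days since first measurement

slope = healing_trend(depths, days)
print("improving" if slope < -0.05 else "not improving", f"({slope:.2f} mm/day)")
```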


Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University. Speakers include:  Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld – Bob Knight – Phillip Alvelda – Paul Nuyujukian –  Peter Fischer

Registration rates increase November 10th.

AI detects suicidal thoughts from brain scans in small study


David Brent and colleagues at the University of Pittsburgh and Carnegie Mellon used machine learning to identify suicidal thoughts in subjects based on fMRI scans.

In a recent study, 18 suicidal participants and 18 members of a control group were presented with three lists of 10 words each, relating to suicide, to positive concepts, or to negative concepts. Previously mapped neural signatures showing brain patterns of emotions such as “shame” and “anger” were incorporated.

Five brain locations and six words distinguished suicidal participants from controls. Using those locations and words, a machine-learning classifier was trained and was able to identify 15 of 17 suicidal participants and 16 of 17 control subjects.

The suicidal participants were then divided into those who had attempted suicide (9) and those who had not (8). A second classifier correctly identified 16 of the 17 participants.
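For readers curious how such a classifier is evaluated on a small sample, here is a minimal sketch of leave-one-out classification on per-subject fMRI features. The synthetic feature matrix, the choice of a naive Bayes model, and the pipeline are assumptions for illustration, not the published study’s exact method.

```python
# Illustrative sketch only: leave-one-out classification of per-subject fMRI features.
# The features, classifier choice, and pipeline are assumptions, not the study's method.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)

# Hypothetical features: mean activation for 6 concepts x 5 brain regions per subject.
X = rng.normal(size=(34, 30))          # 34 subjects, 30 features
y = np.array([1] * 17 + [0] * 17)      # 1 = suicidal-ideation group, 0 = control

scores = cross_val_score(GaussianNB(), X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy on synthetic data: {scores.mean():.2f}")
```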

If healthy people and those with suicidal thoughts are shown to have such different neural reactions to words, this work could inform therapy and, it is our hope, prevent loss of life.



Weather, activity, sleep, stress data used to predict migraines


Migraine Alert by Second Opinion Health uses machine learning to analyze weather, activity, sleep, and stress data to determine whether a user will have a migraine headache.

The company claims that the algorithm becomes effective after 15 episodes are logged. It has launched a multi-patient study with the Mayo Clinic, in which subjects use a phone and a Fitbit to log this lifestyle data. The goal is prediction, earlier treatment, and, hopefully, symptom reduction. No EEG-derived brain activity data is incorporated into the prediction method.
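As a rough sketch of the prediction idea, the example below trains a simple per-user model on daily lifestyle and weather features and scores today’s data for next-day migraine risk. The feature names, model choice, and values are illustrative assumptions; Second Opinion Health’s actual algorithm has not been published.

```python
# Hypothetical sketch: predict next-day migraine risk from daily lifestyle/weather features.
# Feature names, model, and data are illustrative, not Second Opinion Health's algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Each row is one day: [barometric-pressure change, step count, hours slept, stress score]
X = rng.normal(size=(60, 4))                 # ~60 logged days for one user (synthetic)
y = (rng.random(60) < 0.25).astype(int)      # 1 = migraine occurred the next day

model = LogisticRegression().fit(X, y)

today = np.array([[1.2, -0.5, -1.0, 0.8]])   # today's (standardized) features
print(f"predicted migraine risk: {model.predict_proba(today)[0, 1]:.0%}")
```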



Video: Ed Boyden on technologies for analyzing & repairing the brain


Recorded at ApplySci’s Wearable Tech + Digital Health + Neurotech conference on September 19th at the MIT Media Lab.



AI detects bowel cancer in less than 1 second in small study


Yuichi Mori and Showa University colleagues have used AI to identify bowel cancer by analyzing colonoscopy-derived polyp images in less than a second.

The system compares a magnified view of a colorectal polyp with 30,000 endocytoscopic images. The researchers claimed 86% accuracy in a study of 300 polyps.
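The comparison-against-a-library idea can be sketched as a nearest-neighbor search over precomputed image features, as below. The features, distance metric, and vote size are assumptions for illustration; the published system’s internals are not described here.

```python
# Illustrative sketch of the "compare against a reference library" idea: find the most
# similar stored endocytoscopic images by feature distance and vote on the label.
# The features, distance metric, and k are assumptions, not the published system.
import numpy as np

rng = np.random.default_rng(2)

library_features = rng.normal(size=(30_000, 128))     # precomputed image features (synthetic)
library_labels = rng.integers(0, 2, size=30_000)      # 1 = neoplastic, 0 = non-neoplastic

def classify(query, k=10):
    """Label a query feature vector by majority vote of its k nearest library images."""
    dists = np.linalg.norm(library_features - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return int(library_labels[nearest].mean() >= 0.5)

query = rng.normal(size=128)                          # features of the magnified polyp view
print("neoplastic" if classify(query) else "non-neoplastic")
```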

While continuing to test the technology, Mori said the team will focus on creating a system that can automatically detect polyps.

Click to view Endoscopy Thieme video



3D neuron reconstruction reveals electrical behavior


Christof Koch and Allen Institute colleagues have created 3D computer reconstructions of living human brain cells using discarded surgical tissue. Because the tissue is still alive when it reaches the lab, the virtual cells capture electrical signals in addition to cell shape and anatomy.

This is the first time that scientists have been able to study the electrical behavior of living brain cells in humans.

Koch believes that this will enhance our understanding of how brain diseases, including Alzheimer’s and schizophrenia, impact the behavior of brain cells.

The institute has captured electrical data from 300 living neurons taken from 36 patient brains, and 100 cells have been reconstructed in 3D. Genetic information about some of the cells will eventually be added to the database.
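To make the shape of such a database concrete, here is a minimal sketch of a record that combines the three data types described above: 3D morphology, electrophysiology, and (eventually) genetic information. The field names and file formats are hypothetical, not the Allen Institute’s actual schema.

```python
# Hypothetical record combining morphology, electrophysiology, and future genetic data.
# Field names are illustrative, not the Allen Institute's actual schema.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ReconstructedNeuron:
    cell_id: str
    donor_id: str                                   # which patient brain it came from
    morphology_file: str                            # path to the 3D reconstruction
    spike_traces_mv: list = field(default_factory=list)  # recorded voltage traces
    transcriptome: Optional[dict] = None            # gene-expression data, added later

cell = ReconstructedNeuron(
    cell_id="H17-001",
    donor_id="donor-36",
    morphology_file="reconstructions/H17-001.swc",
    spike_traces_mv=[[-70.0, -69.5, 12.3, -72.1]],
)
print(cell.cell_id, "has transcriptome data:", cell.transcriptome is not None)
```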

Click to view the  Allen Institute video.



Machine learning improves breast cancer detection


MIT’s Regina Barzilay has used AI to improve breast cancer detection and diagnosis. Machine learning tools predict if a high-risk lesion identified on needle biopsy after a mammogram will upgrade to cancer at surgery, potentially eliminating unnecessary procedures.

In current practice, when a mammogram detects a suspicious lesion, a needle biopsy is performed to determine if it is cancer. Approximately 70 percent of the lesions are benign, 20 percent are malignant, and 10 percent are high-risk.

Using a method known as a random-forest classifier, the AI model resulted in 30 percent fewer surgeries than the strategy of always operating, while diagnosing more cancerous lesions (97 percent vs. 79 percent) than the strategy of operating only on traditional “high-risk” lesions.

Trained on information about 600 high-risk lesions, the technology looks for data patterns that include demographics, family history, past biopsies, and pathology reports.
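A random forest on this kind of tabular lesion data can be sketched as follows. The feature columns and values are hypothetical placeholders standing in for the demographics, family history, biopsy history, and pathology-report features described above; this is not the MIT/MGH model itself.

```python
# Minimal sketch of a random-forest classifier on tabular lesion data. Features and
# labels are synthetic placeholders, not the actual MIT/MGH training set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

# Columns: [age, family history (0/1), number of prior biopsies, pathology risk score]
X = rng.normal(size=(600, 4))             # 600 high-risk lesions (synthetic stand-ins)
y = np.zeros(600, dtype=int)
y[:60] = 1                                # 1 = lesion upgraded to cancer at surgery

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

new_lesion = rng.normal(size=(1, 4))
risk = model.predict_proba(new_lesion)[0, 1]
print("recommend surgery" if risk > 0.05 else "consider surveillance", f"(risk={risk:.2%})")
```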

Massachusetts General Hospital radiologists will begin incorporating the method into their clinical practice over the next year.



Teleneurology for remote, underserved populations


Neurodegenerative disease cases have, unfortunately, far outpaced the number of neurologists able to diagnose and treat patients, particularly in rural areas. A recent study highlighted 20 states that are or will become “dementia neurology deserts.”

Teleneurology is being introduced to fill the gap.

The American Academy of Neurology has announced a new curriculum to train students and providers to use video conferencing, sensors, and text and image communication tools to connect with patients. Five training areas, developed by the University of Missouri, are the focus: technology; legal and ethical issues; “webside” manner; privacy; and, of course, neurology expertise.

The University of Texas, Vanderbilt University, and Tufts Medical Center are already using teleneurology.



Robot “patients” for medical research, training


In addition to companion robots with increasingly human-like faces, “patient” robots are being developed to stand in for babies and adults when testing medical equipment and procedures.

Yoshio Matsumoto and AIST colleagues created a robotic skeletal structure of the lower half of the body, with 22 movable joints. Its skeleton is made of metal, and its skin, fat, and muscles of silicone. Embedded sensors measure pressure on various parts of the lower body. It is being used to develop hospital beds with a reduced risk of pressure wounds.
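One way such embedded pressure readings could inform bed design is sketched below: warn when any sensed region stays above a pressure threshold for too long. The region names, threshold, and duration are illustrative assumptions, not AIST’s actual system.

```python
# Hypothetical sketch: flag pressure-injury risk when a sensed region stays above a
# pressure limit too long. Thresholds and region names are illustrative, not AIST's.
from collections import defaultdict

PRESSURE_LIMIT_MMHG = 32        # commonly cited capillary-closure pressure (assumption)
MAX_MINUTES_OVER_LIMIT = 120    # illustrative duration threshold

minutes_over_limit = defaultdict(int)

def update(readings_mmhg):
    """Accumulate time over the limit per region; return regions needing repositioning."""
    alerts = []
    for region, pressure in readings_mmhg.items():
        if pressure > PRESSURE_LIMIT_MMHG:
            minutes_over_limit[region] += 1
            if minutes_over_limit[region] >= MAX_MINUTES_OVER_LIMIT:
                alerts.append(region)
        else:
            minutes_over_limit[region] = 0
    return alerts

# One simulated minute of sensor data from the robot's lower body.
print(update({"sacrum": 45.0, "left_heel": 28.0, "right_heel": 51.0}))
```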

Waseda University’s Hiroyuki Ishii has developed a robot baby for practicing breathing protocols after birth. If a non-breathing newborn does not respond to sensory stimulation, a tube is inserted into the trachea. This happens in 1-2% of newborns and poses obvious risks. The robot is designed to train medical staff to properly insert the tube into the baby’s windpipe.

The use of robots as personal assistants and to test procedures will increase rapidly as advanced sensors are built into the devices.



Prosthetic “skin” senses force, vibration


Jonathan Posner, with University of Washington and UCLA colleagues, has developed a flexible sensor “skin” that can be stretched over prostheses to determine force and vibration.

The skin mimics the way a human finger responds to tension and compression, as it slides along a surface or distinguishes among different textures. This could allow users to sense when something is slipping out of their grasp.

Tiny, electrically conductive liquid-metal channels are placed on both sides of a prosthetic finger. As the finger slides across a surface, the channels on one side compress while those on the other side stretch, as in a natural limb. As the channel geometry changes, so does its electrical resistance, and differences in resistance correlate with force and vibration.
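A rough way to picture the readout: one channel’s resistance rises as it stretches while the opposite channel’s falls as it compresses, so the pair’s average change can track overall pressure and their difference can track sliding. The sketch below illustrates that mapping with made-up calibration constants; it is not the published sensor’s actual decoding.

```python
# Illustrative sketch of decoding two channel resistances into force estimates.
# Baseline resistance and gains are hypothetical calibration constants.
R_BASELINE_OHM = 1.50        # nominal channel resistance at rest (assumption)
FORCE_GAIN = 20.0            # newtons per ohm of common-mode change (assumption)
SHEAR_GAIN = 12.0            # newtons per ohm of differential change (assumption)

def decode(r_top_ohm, r_bottom_ohm):
    """Estimate normal and shear force from the two channel resistances."""
    common = (r_top_ohm + r_bottom_ohm) / 2 - R_BASELINE_OHM
    differential = r_top_ohm - r_bottom_ohm
    return FORCE_GAIN * abs(common), SHEAR_GAIN * differential

normal_n, shear_n = decode(r_top_ohm=1.58, r_bottom_ohm=1.44)
print(f"normal force ~{normal_n:.2f} N, shear ~{shear_n:.2f} N")
```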

The researchers believe the sensor skin will make it easier for users to open a door, use a phone, shake hands, or lift packages.

