Ghazal Ghazaei and Newcastle University colleagues have developed a deep-learning-driven prosthetic hand and camera system that allows wearers to reach for objects automatically. Current prosthetic hands are controlled via a user’s myoelectric signals, which requires learning, practice, concentration, and time.
A convolutional neural network was trained with images of 500 graspable objects and taught to recognize the grip needed for each. Objects were grouped by size, shape, and orientation, and the hand was programmed to perform four grasps to accommodate them: palm wrist neutral (to pick up a cup); palm wrist pronated (to pick up a TV remote); tripod (thumb and two fingers); and pinch (thumb and first finger).
The hand’s camera takes a picture of the object in front of it, assesses its shape and size, picks the most appropriate grasp, and triggers the corresponding series of hand movements within milliseconds.
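The capture–classify–grasp pipeline described above can be sketched as follows. This is an illustrative reconstruction, not the Newcastle group's code: a simple rule over invented size/shape features stands in for their trained convolutional network.

```python
# Illustrative sketch of the camera-driven grasp-selection pipeline.
# The real system uses a convolutional neural network; a hand-written
# rule over (hypothetical) object geometry stands in for it here.

GRASPS = ["palm wrist neutral", "palm wrist pronated", "tripod", "pinch"]

def classify_grasp(width_cm, height_cm, lying_flat):
    """Map rough object geometry to one of the four programmed grasps."""
    if width_cm < 2 and height_cm < 2:
        return "pinch"                  # small items: thumb + first finger
    if width_cm < 5:
        return "tripod"                 # medium items: thumb + two fingers
    if lying_flat:
        return "palm wrist pronated"    # e.g. a TV remote on a table
    return "palm wrist neutral"         # e.g. an upright cup

def pick_up(width_cm, height_cm, lying_flat):
    """Assess the object, pick a grasp, and trigger the movement."""
    grasp = classify_grasp(width_cm, height_cm, lying_flat)
    return f"triggering {grasp} grasp"

print(pick_up(8, 12, False))  # upright cup -> palm wrist neutral grasp
```

In the actual system the geometry features would come from the camera image, and the grasp label would drive a pre-programmed motor sequence.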
In a small study of the technology, subjects picked up and moved objects with an 88 per cent success rate.
The work is part of an effort to develop a bionic hand that senses pressure and temperature, and transmits the information to the brain.
Join ApplySci at Wearable Tech + Digital Health + NeuroTech Boston on September 19, 2017 at the MIT Media Lab. Featuring Joi Ito – Ed Boyden – Roz Picard – George Church – Nathan Intrator – Tom Insel – John Rogers – Jamshid Ghajar – Phillip Alvelda
Google has developed an algorithm that it claims can detect diabetic retinopathy in photographs. The goal is to improve the quality and availability of screening for, and early detection of, the common and debilitating condition.
Typically, highly trained specialists are required to examine photos, to detect the lesions that indicate bleeding and fluid leakage in the eye. This obviously makes screening difficult in poor and remote locations.
Google developed a dataset of 128,000 images, each evaluated by 3-7 specially trained doctors, and used it to train a neural network to detect referable diabetic retinopathy. Performance was tested on two clinical validation sets of 12,000 images. The majority decision of a panel of 7 or 8 ophthalmologists served as the reference standard. The results showed that the accuracy of the Google algorithm was on par with that of the physicians.
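As a rough illustration of the evaluation described above, the sketch below scores an algorithm's output against a majority-vote panel reference. The data and threshold logic are invented, not Google's protocol, and sensitivity/specificity stand in for the paper's full accuracy analysis.

```python
# Hypothetical sketch: score algorithm predictions against the
# majority decision of an ophthalmologist panel (invented data).

def majority_reference(panel_votes):
    """Reference label = majority decision of the panel (1 = referable)."""
    return sum(panel_votes) > len(panel_votes) / 2

def sensitivity_specificity(predictions, panels):
    """Sensitivity and specificity versus the panel reference standard."""
    refs = [majority_reference(p) for p in panels]
    tp = sum(1 for pred, ref in zip(predictions, refs) if pred and ref)
    tn = sum(1 for pred, ref in zip(predictions, refs) if not pred and not ref)
    pos = sum(refs)
    neg = len(refs) - pos
    return tp / pos, tn / neg

# Toy example: 4 images, each graded by 7 doctors (1 = referable DR).
panels = [[1, 1, 1, 1, 0, 0, 0],
          [0, 0, 0, 1, 0, 0, 0],
          [1, 1, 1, 1, 1, 1, 0],
          [0, 0, 1, 0, 0, 0, 0]]
preds = [True, False, True, False]
print(sensitivity_specificity(preds, panels))  # (1.0, 1.0)
```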
ApplySci’s 6th Wearable Tech + Digital Health + NeuroTech Silicon Valley – February 7-8 2017 @ Stanford | Featuring: Vinod Khosla – Tom Insel – Zhenan Bao – Phillip Alvelda – Nathan Intrator – John Rogers – Roozbeh Ghaffari –Tarun Wadhwa – Eythor Bender – Unity Stoakes – Mounir Zok – Krishna Shenoy – Karl Deisseroth – Shahin Farshchi – Casper de Clercq – Mary Lou Jepsen – Vivek Wadhwa – Dirk Schapeler – Miguel Nicolelis
UMass professor Hava Siegelmann used fMRI data from tens of thousands of patients to understand how thought arises from brain structure. This resulted in a geometry-based method meant to advance the identification and treatment of brain disease. It can also be used to improve deep learning systems, and her lab is now creating a “massively recurrent deep learning network.”
Siegelmann found that cognitive function and abstract thought exist as an agglomeration of many cortical sources, from those close to sensory cortices to those far deeper along the brain’s connectome. Her data-driven analyses defined a hierarchically ordered connectome, revealing a related continuum of cognitive function.
Siegelmann claims that “with a slope (geometrical algorithm) identifier, behaviors could now be ordered by their relative depth activity with no human intervention or bias.”
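Siegelmann's "slope identifier" is described only at a high level; as a loose illustration under that reading, the sketch below fits a least-squares slope to activity ordered by connectome depth, so that behaviors can be ranked by their relative depth without manual labeling. The depth ordering and activity values here are invented.

```python
# Loose illustration: order behaviors by the slope of their activity
# profile across a depth-ordered connectome (invented data).

def slope(ys):
    """Least-squares slope of ys against their index (shallow -> deep)."""
    n = len(ys)
    mean_x = (n - 1) / 2
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(ys))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

# Activity from shallow (near-sensory) to deep regions, per behavior.
behaviors = {
    "visual tracking":   [0.9, 0.7, 0.4, 0.2],  # shallow-weighted
    "abstract planning": [0.2, 0.4, 0.7, 0.9],  # deep-weighted
}

# Negative slope = shallow-weighted; positive slope = deep-weighted.
ranked = sorted(behaviors, key=lambda b: slope(behaviors[b]))
print(ranked)  # shallow-weighted behaviors come first
```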
Wearable Tech + Digital Health San Francisco – April 5, 2016 @ the Mission Bay Conference Center
NeuroTech San Francisco – April 6, 2016 @ the Mission Bay Conference Center
Wearable Tech + Digital Health NYC – June 7, 2016 @ the New York Academy of Sciences
NeuroTech NYC – June 8, 2016 @ the New York Academy of Sciences