MIT CSAIL’s Daniela Rus has developed an EEG/EMG robot control system based on brain signals and finger gestures.
Building on the team’s previous brain-controlled robot work, the new system detects in real time whether a person notices a robot’s error. Muscle activity measurement then allows hand gestures to select the correct option.
According to Rus: “This work, combining EEG and EMG feedback, enables natural human-robot interactions for a broader set of applications than we’ve been able to do before using only EEG feedback. By including muscle feedback, we can use gestures to command the robot spatially, with much more nuance and specificity.”
The researchers used a humanoid robot from Rethink Robotics, while a human controller wore electrodes on her or his head and arm.
Human supervision increased the robot’s selection of the correct target from 70 to 97 per cent.
The goal is a system that can be used by people with limited mobility or language disorders.
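As a rough illustration of the supervisory loop described above, the sketch below assumes two hypothetical decoders, detect_error_potential (EEG) and classify_gesture (EMG); the actual CSAIL pipeline is not published in this form, so treat this as a schematic only.

```python
# Hypothetical stand-ins for the EEG and EMG decoders described above; the real
# system classifies error-related potentials and forearm muscle activity.
def detect_error_potential(eeg_window):
    """Return True if the EEG window suggests the supervisor noticed an error."""
    return eeg_window["errp_score"] > 0.5

def classify_gesture(emg_window):
    """Map a burst of muscle activity to one of the candidate targets."""
    return max(emg_window, key=emg_window.get)

def supervise(robot_choice, eeg_window, emg_window):
    """One pass of the supervisory loop: keep the robot's choice unless the EEG
    flags an error, in which case the EMG gesture selects the correct target."""
    if detect_error_potential(eeg_window):
        return classify_gesture(emg_window)   # human corrects the robot spatially
    return robot_choice

# Toy example: the robot picks the wrong target and the supervisor corrects it.
eeg = {"errp_score": 0.8}                        # strong error-related signal
emg = {"left_target": 0.1, "right_target": 0.9}  # gesture toward the right target
print(supervise("left_target", eeg, emg))        # -> right_target
```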
Click to view CSAIL video
Join ApplySci at the 9th Wearable Tech + Digital Health + Neurotech Boston conference on September 24, 2018 at the MIT Media Lab. Speakers include: Rudy Tanzi – Mary Lou Jepsen – George Church – Roz Picard – Nathan Intrator – Keith Johnson – Juan Enriquez – John Mattison – Roozbeh Ghaffari – Poppy Crum – Phillip Alvelda – Marom Bikson
Sergey Levine and UC Berkeley colleagues have developed robotic learning technology that enables robots to visualize how different behaviors will affect the world around them, without human instruction. This ability to plan across varied scenarios could improve self-driving cars and robotic home assistants.
Visual foresight allows robots to predict what their cameras will see if they perform a particular sequence of movements. The robot can then learn to perform tasks without human help or prior knowledge of physics, its environment or what the objects are.
The deep learning technology is based on dynamic neural advection models, which predict how pixels in an image will move from one frame to the next, based on the robot’s actions. Robotic control built on this kind of video prediction has enabled increasingly complex tasks.
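A rough sketch of the planning idea, not Berkeley’s implementation: sample candidate action sequences, use a learned predictor to foresee where a tracked pixel ends up, and keep the sequence that brings it closest to the goal. Here the predictor advects a single pixel rather than whole image frames, and the toy model is a stand-in for a trained dynamic-neural-advection network.

```python
import numpy as np

def plan_push(start_pixel, goal_pixel, predictor, horizon=5, n_candidates=256, seed=0):
    """Sampling-based planner in the spirit of visual foresight: score candidate
    action sequences by where the learned predictor foresees the tracked pixel
    ending up, and return the sequence that lands closest to the goal pixel."""
    rng = np.random.default_rng(seed)
    best_cost, best_plan = np.inf, None
    for _ in range(n_candidates):
        actions = rng.uniform(-1.0, 1.0, size=(horizon, 2))  # planar arm motions
        pixel = np.asarray(start_pixel, dtype=float)
        for a in actions:
            pixel = predictor(pixel, a)                      # one-step foresight
        cost = np.linalg.norm(pixel - np.asarray(goal_pixel, dtype=float))
        if cost < best_cost:
            best_cost, best_plan = cost, actions
    return best_plan, best_cost

# Toy predictor: assume each unit of action moves the tracked pixel ~3 pixels.
toy_model = lambda pixel, action: pixel + 3.0 * action
best_plan, cost = plan_push(start_pixel=(40, 40), goal_pixel=(55, 25), predictor=toy_model)
print(f"best plan ends {cost:.1f} pixels from the goal")
```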
Click to view UC Berkeley video
Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University. Speakers include: Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld – Bob Knight – Phillip Alvelda – Paul Nuyujukian – Peter Fischer – Tony Chahine – Shahin Farshchi – Ambar Bhattacharyya – Adam D’Augelli – Juan-Pablo Mas – Michael Eggleston – Walter Greenleaf
In addition to robots with increasingly human-like faces being used as companions, “patient” robots are being developed to test medical equipment and procedures on babies and adults.
Yoshio Matsumoto and AIST colleagues created a robotic skeletal structure of the lower half of the body, with 22 moveable joints. Its skeleton is made of metal, and its skin, fat and muscles of silicone. Embedded sensors measure pressure on various parts of the lower body. It is being used to develop hospital beds with a reduced risk of pressure wounds.
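A hedged sketch of how readings from such embedded sensors might be screened for pressure-wound risk. The threshold, duration, and region names below are illustrative assumptions, not AIST’s published values.

```python
# Illustrative screening of embedded pressure-sensor readings for sustained load.
PRESSURE_LIMIT_MMHG = 32   # commonly cited capillary-closure ballpark (assumption)
SUSTAINED_MINUTES = 120    # flag regions loaded above the limit for two hours

def at_risk_regions(log):
    """`log` maps a body region to per-minute pressure readings (mmHg).
    Returns regions where pressure stayed above the limit for too long."""
    flagged = []
    for region, readings in log.items():
        longest, run = 0, 0
        for p in readings:
            run = run + 1 if p > PRESSURE_LIMIT_MMHG else 0
            longest = max(longest, run)
        if longest >= SUSTAINED_MINUTES:
            flagged.append(region)
    return flagged

print(at_risk_regions({"sacrum": [45] * 180, "heel": [20] * 180}))  # -> ['sacrum']
```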
Waseda University’s Hiroyuki Ishii has developed a robot baby for practicing breathing protocols after birth. If non-breathing babies do not respond to sensory stimulation, a tube is inserted into the trachea. This happens in 1-2% of newborns, and poses obvious risks. The robot is designed to train medical staff to properly insert the tube into the baby’s windpipe.
The use of robots as personal assistants, and to test procedures, will increase rapidly as advanced sensors are built into the devices.
Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University, featuring: Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld
Ghazal Ghazai and Newcastle University colleagues have developed a deep learning driven prosthetic hand + camera system that allows wearers to reach for objects automatically. Current prosthetic hands are controlled via a user’s myoelectric signals, requiring learning, practice, concentration and time.
A convolutional neural network was trained with images of 500 graspable objects and taught to recognize the grip needed for each. Objects were grouped by size, shape, and orientation, and the hand was programmed to perform four different grasps to accommodate them: palm wrist neutral (to pick up a cup); palm wrist pronated (to pick up the TV remote); tripod (thumb and two fingers); and pinch (thumb and first finger).
The hand’s camera takes a picture of the object in front of it, assesses its shape and size, picks the most appropriate grasp, and triggers a series of hand movements, within milliseconds.
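A minimal sketch of the grasp-selection step in PyTorch, assuming a small CNN that maps a camera frame to one of the four grasp classes named above. The architecture, input size, and preprocessing are illustrative, not Newcastle’s.

```python
import torch
import torch.nn as nn

GRASPS = ["palm wrist neutral", "palm wrist pronated", "tripod", "pinch"]

class GraspNet(nn.Module):
    """Tiny stand-in for the convolutional classifier described above:
    camera frame in, one of the four grasp classes out."""
    def __init__(self, n_classes=len(GRASPS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, frame):
        return self.classifier(self.features(frame).flatten(1))

def select_grasp(model, frame):
    """Pick the grasp the network considers most appropriate for the object in view."""
    model.eval()
    with torch.no_grad():
        logits = model(frame.unsqueeze(0))
    return GRASPS[int(logits.argmax())]

# Untrained toy example on a random 64x64 "camera frame".
print(select_grasp(GraspNet(), torch.rand(3, 64, 64)))
```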
In a small study, subjects picked up and moved objects with an 88 per cent success rate.
The work is part of an effort to develop a bionic hand that senses pressure and temperature, and transmits the information to the brain.
Join ApplySci at Wearable Tech + Digital Health + NeuroTech Boston on September 19, 2017 at the MIT Media Lab. Featuring Joi Ito – Ed Boyden – Roz Picard – George Church – Nathan Intrator – Tom Insel – John Rogers – Jamshid Ghajar – Phillip Alvelda
Professor Ravinder Dahiya, at the University of Glasgow, has created a robotic hand with solar-powered graphene “skin” that he claims is more sensitive than a human hand. The flexible, tactile, energy autonomous “skin” could be used in health monitoring wearables and in prosthetics, reducing the need for external chargers. (Dahiya is now developing a low-cost 3-D printed prosthetic hand incorporating the skin.)
Click to view University of Glasgow video
Join ApplySci at Wearable Tech + Digital Health + NeuroTech Boston – Featuring Roz Picard, Tom Insel, John Rogers, Jamshid Ghajar and Nathan Intrator – September 19, 2017 at the MIT Media Lab
Georgia Tech’s Ayanna Howard has developed Darwin, a socially interactive robot that encourages children to play an active role in physical therapy.
Specifically targeting children with cerebral palsy (who are involved in current studies), autism, or traumatic brain injury, the robot is designed to function in the home, supplementing services provided by clinicians. It engages users as their human therapist would, monitoring performance and providing motivation and feedback.

In a recent experiment, motion trackers monitored user movements as Darwin offered encouragement, and demonstrated movements when they were not performed correctly. Researchers claimed that, with the exception of one case, the robot’s impact was “significantly positive.”
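As a schematic of that feedback loop, the sketch below compares a tracked joint angle against a target range of motion and decides whether the robot encourages or demonstrates. The joint, tolerance, and coaching actions are assumptions for illustration, not the study’s protocol.

```python
# Illustrative therapy feedback loop: compare tracked joint angles against a
# target range and decide whether the robot encourages or demonstrates.
TARGET_KNEE_EXTENSION_DEG = 160   # assumed target range of motion
TOLERANCE_DEG = 10

def coach(rep_angles):
    """`rep_angles` holds the peak knee-extension angle (degrees) reported by the
    motion tracker for each repetition; return a coaching action per rep."""
    actions = []
    for angle in rep_angles:
        if angle >= TARGET_KNEE_EXTENSION_DEG - TOLERANCE_DEG:
            actions.append("encourage")    # movement performed correctly
        else:
            actions.append("demonstrate")  # robot shows the movement again
    return actions

print(coach([165, 140, 158]))  # -> ['encourage', 'demonstrate', 'encourage']
```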
Darwin is still evolving (pun intended) and has not yet been commercialized.
At MIT, Newman Lab researcher Hermano Igo Krebs has been using robots for gait and balance neurorehabilitation in stroke and cerebral palsy patients since 1989. Krebs’s technology continues to be incorporated into Burke Rehabilitation hospital treatment plans.
Join ApplySci at Wearable Tech + Digital Health + NeuroTech Boston – Featuring Roz Picard, Tom Insel, John Rogers and Nathan Intrator – September 19, 2017 at the MIT Media Lab
IBM and Rice University are developing MERA, a Watson-enabled robot meant to help seniors age in place.
The system comprises a Pepper robot interface, Watson, and Rice’s CameraVitals project, which calculates vital signs by recording video of a person’s face. Vitals are measured multiple times each day. Caregivers are informed if the camera and/or accelerometer detect a fall.
Speech to Text, Text to Speech and Natural Language Classifier APIs are being studied to enable answers to health-related questions, such as “What are the symptoms of anxiety?” or “What is my heart rate?”
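A rough sketch of how a transcribed question might be routed in the spirit of that pipeline (speech-to-text, then intent classification, then an answer). The intent labels and handlers below are assumptions; the real system uses Watson services, which are not reproduced here.

```python
# Illustrative routing of a transcribed question to either the user's own vitals
# (e.g. from camera-based estimation) or a general health answer.
def classify_intent(question):
    """Crude stand-in for a natural-language classifier."""
    q = question.lower()
    if "my" in q and ("heart rate" in q or "breathing" in q):
        return "personal_vital"
    return "general_health"

def answer(question, latest_vitals):
    if classify_intent(question) == "personal_vital":
        return f"Your last measured heart rate was {latest_vitals['heart_rate']} bpm."
    return "Looking that up in the health knowledge base..."

vitals = {"heart_rate": 72}   # e.g. most recent camera-derived measurement
print(answer("What is my heart rate?", vitals))
print(answer("What are the symptoms of anxiety?", vitals))
```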
IBM believes that sensors plus cognitive computing can give clinicians and caregivers insights that enable better care decisions. The company will soon test the technology in partnership with Sole Cooperativa, monitoring the daily activities of seniors in Italy.
Click to view IBM video
ApplySci’s 6th Wearable Tech + Digital Health + NeuroTech Silicon Valley – February 7-8 2017 @ Stanford | Featuring: Vinod Khosla – Tom Insel – Zhenan Bao – Phillip Alvelda – Nathan Intrator – John Rogers – Roozbeh Ghaffari – Tarun Wadhwa – Eythor Bender – Unity Stoakes – Mounir Zok – Krishna Shenoy – Karl Deisseroth – Shahin Farshchi – Casper de Clercq – Mary Lou Jepsen – Vivek Wadhwa – Dirk Schapeler – Miguel Nicolelis
Julie Shah and MIT CSAIL colleagues have developed a robot to assist labor nurses. The AI-driven assistant learns from staff how the unit works, and is then able to make care recommendations, including scheduling and patient movement.
Labor nurses attempt to predict the arrival and length of labor, and which patients will require a C-section. A smart robot that has observed thousands of these decisions could be a useful resource, though whether a machine can replace human judgment in stressful situations remains to be proven.
Click to view CSAIL video
Daniela Rus and MIT, University of Sheffield, and Tokyo Institute of Technology colleagues have developed an ingestible origami robot designed to patch wounds, deliver medicine or remove foreign objects from a person’s stomach.
The tiny robot, made of pig intestines, can unfold itself from a swallowed capsule. Steered by a doctor using external magnetic fields, the “microsurgeon” crawls across the stomach wall, and propels itself using a “stick-slip” motion.
Click to view MIT video
Wearable Tech + Digital Health NYC – June 7, 2016 @ the New York Academy of Sciences
NeuroTech NYC – June 8, 2016 @ the New York Academy of Sciences