Category Archives: Robotics

Robot “patients” for medical research, training


In addition to robots with increasingly human-like faces being used as companions, “patient” robots are being developed to test medical equipment and procedures on babies and adults.

Yoshio Matsumoto and AIST colleagues created a robotic skeletal structure of the lower half of the body, with 22 movable joints. Its skeleton is made of metal, and its skin, fat and muscles of silicone. Embedded sensors measure pressure on various parts of the lower body. It is being used to develop hospital beds with a reduced risk of pressure wounds.
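
For a sense of how such embedded sensors might be used, here is a minimal sketch, assuming a small grid of pressure readings and the commonly cited ~32 mmHg capillary-closing threshold. The grid size, units, and function names are illustrative, not details from the AIST system.

```python
import numpy as np

# Hypothetical 8x8 grid of pressure readings (mmHg) from sensors
# embedded in the robot's silicone "skin" while it lies on a test bed.
CAPILLARY_CLOSING_PRESSURE = 32.0  # mmHg, commonly cited ischemia threshold

def flag_risk_regions(pressure_grid: np.ndarray,
                      threshold: float = CAPILLARY_CLOSING_PRESSURE):
    """Return (row, col, mmHg) for sensor cells exceeding the threshold."""
    risky = np.argwhere(pressure_grid > threshold)
    return [(int(r), int(c), float(pressure_grid[r, c])) for r, c in risky]

# Example: simulated readings with a pressure peak under the sacrum.
readings = np.random.uniform(5, 25, size=(8, 8))
readings[3:5, 3:5] = 45.0  # sustained load concentration
for row, col, mmhg in flag_risk_regions(readings):
    print(f"cell ({row},{col}): {mmhg:.1f} mmHg exceeds threshold")
```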

Waseda University’s Hiroyuki Ishii has developed a robot baby for practicing breathing protocols after birth. If a non-breathing newborn does not respond to sensory stimulation, a tube is inserted into the trachea. This happens in 1-2% of newborns, and poses obvious risks. The robot is designed to train medical staff to properly insert the tube into the baby’s windpipe.
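
The post doesn’t describe the robot’s internal logic, but a training scenario of this kind could be scripted roughly as follows; the state machine, response behavior, and tube-depth window below are purely hypothetical.

```python
from enum import Enum, auto

class BabyState(Enum):
    NOT_BREATHING = auto()
    BREATHING = auto()

class TrainingScenario:
    """Hypothetical scenario logic for a neonatal resuscitation trainer:
    the simulated newborn may respond to tactile stimulation; if not,
    the trainee must intubate, and sensors score the tube placement."""

    def __init__(self, responds_to_stimulation: bool):
        self.state = BabyState.NOT_BREATHING
        self.responds_to_stimulation = responds_to_stimulation

    def stimulate(self) -> BabyState:
        # The large majority of newborns respond to stimulation alone.
        if self.responds_to_stimulation:
            self.state = BabyState.BREATHING
        return self.state

    def intubate(self, tube_depth_cm: float, in_trachea: bool) -> str:
        # Embedded sensors would report whether the tube entered the
        # trachea (correct) or the esophagus (a common training error).
        if not in_trachea:
            return "Incorrect: tube in esophagus - reattempt"
        if not 8.0 <= tube_depth_cm <= 10.0:  # hypothetical depth window
            return "Tube in trachea but at the wrong depth"
        self.state = BabyState.BREATHING
        return "Correct placement - ventilation effective"
```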

The use of robots as personal assistants, and to test procedures, will increase rapidly as advanced sensors are built into the devices.


Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University, featuring: Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld

Deep learning driven prosthetic hand + camera recognize, automate required grips


Ghazal Ghazaei and Newcastle University colleagues have developed a deep learning driven prosthetic hand + camera system that allows wearers to reach for objects automatically. Current prosthetic hands are controlled via a user’s myoelectric signals, requiring learning, practice, concentration and time.

A convolutional neural network was trained with images of 500 graspable objects, and taught to recognize the grip needed for each. Objects were grouped by size, shape, and orientation, and the hand was programmed to perform four different grasps to accommodate them: palm wrist neutral (to pick up a cup); palm wrist pronated (to pick up the TV remote); tripod (thumb and two fingers); and pinch (thumb and first finger).

The hand’s camera takes a picture of the object in front of it, assesses its shape and size, picks the most appropriate grasp, and triggers a series of hand movements within milliseconds.
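
A minimal sketch of that capture-classify-grasp loop, assuming a small CNN with four output classes; the layer sizes, input resolution, and function names are illustrative assumptions, not the architecture from the Newcastle study.

```python
import torch
import torch.nn as nn

GRASPS = ["palm_wrist_neutral", "palm_wrist_pronated", "tripod", "pinch"]

class GraspNet(nn.Module):
    """Small CNN mapping a grayscale object image to one of four grasps."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(32 * 4 * 4, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def select_grasp(model: GraspNet, image: torch.Tensor) -> str:
    """Classify a single camera frame and return the grasp to trigger."""
    with torch.no_grad():
        logits = model(image.unsqueeze(0))  # add a batch dimension
    return GRASPS[int(logits.argmax(dim=1))]

# Example with a dummy 64x64 frame standing in for the hand's camera.
frame = torch.rand(1, 64, 64)
print(select_grasp(GraspNet(), frame))
```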

In a small study of the technology, subjects picked up and moved objects with an 88 percent success rate.

The work is part of an effort to develop a bionic hand that senses pressure and temperature, and transmits the information to the brain.


Join ApplySci at Wearable Tech + Digital Health + NeuroTech Boston on September 19, 2017 at the MIT Media Lab. Featuring Joi Ito – Ed Boyden – Roz Picard – George Church – Nathan Intrator – Tom Insel – John Rogers – Jamshid Ghajar – Phillip Alvelda

Solar powered, highly sensitive, graphene “skin” for robots, prosthetics


Professor Ravinder Dahiya, at the University of Glasgow, has created a robotic hand with solar-powered graphene “skin” that he claims is more sensitive than a human hand.  The flexible, tactile, energy autonomous “skin” could be used in health monitoring wearables and in prosthetics, reducing the need for external chargers. (Dahiya is now developing a low-cost 3-D printed prosthetic hand incorporating the skin.)

Click to view University of Glasgow video


Join ApplySci at Wearable Tech + Digital Health + NeuroTech Boston – Featuring Roz Picard, Tom Insel, John Rogers, Jamshid Ghajar and Nathan Intrator – September 19, 2017 at the MIT Media Lab


Robots support neural and physical rehab in stroke, cerebral palsy


Georgia Tech’s Ayanna Howard has developed Darwin, a socially interactive robot that encourages children to play an active role in physical therapy.

Specifically targeting children with cerebral palsy (who are involved in current studies), autism, or traumatic brain injury, the robot is designed to function in the home, supplementing services provided by clinicians. It engages users as their human therapist would, monitoring performance and providing motivation and feedback.

In a recent experiment, motion trackers monitored user movements as Darwin offered encouragement and demonstrated movements when they were not performed correctly. Researchers claimed that, with the exception of one case, the robot’s impact was “significantly positive.”
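
One plausible way such feedback could work, sketched under the assumption that the motion trackers report joint angles and the robot compares each repetition against a therapist-defined reference; the RMS-error rule and tolerance below are assumptions, not Darwin’s actual algorithm.

```python
import numpy as np

def evaluate_repetition(tracked_angles: np.ndarray,
                        reference_angles: np.ndarray,
                        tolerance_deg: float = 15.0) -> str:
    """Compare a tracked joint-angle trajectory (degrees) against the
    therapist-defined reference and decide how the robot responds."""
    rms_error = np.sqrt(np.mean((tracked_angles - reference_angles) ** 2))
    if rms_error <= tolerance_deg:
        return "encourage: 'Great job! Let's do another one.'"
    return "demonstrate: robot replays the correct movement"

# Example: an elbow-flexion repetition sampled at 10 points.
reference = np.linspace(0, 90, 10)                  # target flexion arc
tracked = reference + np.random.normal(0, 20, 10)   # noisy child movement
print(evaluate_repetition(tracked, reference))
```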

Darwin is still evolving (pun intended) and has not yet been commercialized.

At MIT, Newman Lab researcher Hermano Igo Krebs has been using robots for gait and balance neurorehabilitation in stroke and cerebral palsy patients since 1989. Krebs’s technology continues to be incorporated into Burke Rehabilitation Hospital treatment plans.


Join ApplySci at Wearable Tech + Digital Health + NeuroTech Boston – Featuring Roz Picard, Tom Insel, John Rogers and Nathan Intrator – September 19, 2017 at the MIT Media Lab

Sensors + robotics + AI for safer aging in place


IBM and Rice University are developing MERA — a Watson-enabled robot meant to help seniors age in place.

The system comprises a Pepper robot interface, Watson, and Rice’s CameraVitals project, which calculates vital signs by recording video of a person’s face. Vitals are measured multiple times each day. Caregivers are informed if the camera and/or accelerometer detect a fall.
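
CameraVitals’ method isn’t detailed in the post, but camera-based vital signs are commonly estimated via remote photoplethysmography: average the green channel over the face region, then find the dominant frequency in the physiological band. A generic sketch of that idea, not IBM/Rice’s actual algorithm:

```python
import numpy as np

def heart_rate_from_face_video(green_means: np.ndarray, fps: float) -> float:
    """Estimate heart rate (bpm) from the mean green-channel intensity of
    a face region over time - the basic remote-photoplethysmography idea."""
    signal = green_means - green_means.mean()          # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)  # Hz
    band = (freqs >= 0.7) & (freqs <= 4.0)             # ~42-240 bpm
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0

# Example: synthetic 30 s clip at 30 fps with a 1.2 Hz (72 bpm) pulse.
fps, t = 30.0, np.arange(0, 30, 1 / 30.0)
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.2, t.size)
print(f"estimated HR: {heart_rate_from_face_video(trace, fps):.0f} bpm")
```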

Speech to Text, Text to Speech and Natural Language Classifier APIs are being studied to enable answers to health-related questions, such as “What are the symptoms of anxiety?” or “What is my heart rate?”
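
Conceptually, the system has to decide whether a question is about the user’s own sensor readings or is a general health query to hand off to a language service. The toy router below illustrates that split; the keyword rules are invented for illustration, not IBM’s classifier.

```python
def route_question(question: str, latest_vitals: dict) -> str:
    """Toy intent router: questions about the user's own readings are
    answered from sensor data; general health questions would be sent
    to a natural-language service (Watson, in IBM's design)."""
    q = question.lower()
    if "my heart rate" in q:
        return f"Your heart rate is {latest_vitals['heart_rate']} bpm."
    if "symptoms of" in q:
        topic = q.split("symptoms of", 1)[1].strip(" ?")
        return f"[forward to NL classifier: symptom lookup for '{topic}']"
    return "[forward to NL classifier: general query]"

vitals = {"heart_rate": 72}
print(route_question("What is my heart rate?", vitals))
print(route_question("What are the symptoms of anxiety?", vitals))
```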

IBM believes that sensors plus cognitive computing can give clinicians and caregivers insights to enable better care decisions. The technology will soon be tested in partnership with Sole Cooperativa to monitor the daily activities of seniors in Italy.

Click to view IBM video


ApplySci’s 6th Wearable Tech + Digital Health + NeuroTech Silicon Valley – February 7-8, 2017 @ Stanford | Featuring: Vinod Khosla – Tom Insel – Zhenan Bao – Phillip Alvelda – Nathan Intrator – John Rogers – Roozbeh Ghaffari – Tarun Wadhwa – Eythor Bender – Unity Stoakes – Mounir Zok – Krishna Shenoy – Karl Deisseroth – Shahin Farshchi – Casper de Clercq – Mary Lou Jepsen – Vivek Wadhwa – Dirk Schapeler – Miguel Nicolelis

Robotic hand exoskeleton for stroke patients


ETH professor Roger Gassert has developed a robotic exoskeleton that allows stroke patients to perform daily activities by supporting motor and somatosensory functions.

His vision is that “instead of performing exercises in an abstract situation at the clinic, patients will be able to integrate them into their daily life at home, supported by a robot.” He observes that existing exoskeletons are heavy, rendering patients unable to lift their hands; patients also have difficulty feeling objects and exerting the right amount of force. To address this, the palm of the hand is left free in the new device.

Gassert’s Kyushu University colleague Jumpei Arata developed a mechanism for the finger featuring three overlapping leaf springs. A motor moves the middle spring, which transmits the force to the different segments of the finger through the other two springs. The fingers thus automatically adapt to the shape of the object the patient wants to grasp.
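
To put rough numbers on such a mechanism: a rectangular cantilever leaf spring has tip stiffness k = 3EI/L³, with area moment of inertia I = wt³/12. The dimensions in this sketch are hypothetical, chosen only to show the scale of forces a thin spring can transmit, not measured from the actual device.

```python
# Cantilever leaf-spring stiffness: k = 3*E*I / L^3, with the area moment
# of inertia of a rectangular cross-section I = w*t^3 / 12.

E_STEEL = 200e9  # Young's modulus of spring steel, Pa

def leaf_stiffness(length_m: float, width_m: float, thickness_m: float,
                   modulus_pa: float = E_STEEL) -> float:
    """Tip stiffness (N/m) of a rectangular cantilever leaf spring."""
    inertia = width_m * thickness_m**3 / 12.0
    return 3.0 * modulus_pa * inertia / length_m**3

# The motor deflects the middle spring; the two outer springs pass the
# resulting force on to the finger segments, letting the finger conform
# to the object's shape.
middle = leaf_stiffness(0.05, 0.008, 0.0004)   # 50 mm x 8 mm x 0.4 mm
deflection = 0.005                             # 5 mm of motor travel
force = middle * deflection
print(f"middle-spring stiffness: {middle:.0f} N/m, force: {force:.2f} N")
```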

To reduce the weight of the exoskeleton, motors are placed on the patient’s back and force is transmitted using a bicycle brake cable. ApplySci hopes that the size and weight of the motor can be reduced, allowing it to be integrated into the exoskeleton in its next phase.

Gassert wants to make the exoskeleton thought controlled, and is using MRI and EEG to detect, in the brain, a patient’s intention to move his or her hand, and to communicate this to the device.
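
A standard EEG marker of movement intention is event-related desynchronization: power in the 8-12 Hz mu band over motor cortex drops when a person prepares or imagines a movement. The sketch below detects that drop; the threshold, sampling rate, and single-channel setup are assumptions, not Gassert’s pipeline.

```python
import numpy as np

def mu_band_power(eeg_window: np.ndarray, fs: float) -> float:
    """Mean power in the 8-12 Hz mu band of a single-channel EEG window."""
    spectrum = np.abs(np.fft.rfft(eeg_window - eeg_window.mean())) ** 2
    freqs = np.fft.rfftfreq(eeg_window.size, d=1.0 / fs)
    band = (freqs >= 8.0) & (freqs <= 12.0)
    return float(spectrum[band].mean())

def detect_movement_intent(window: np.ndarray, baseline_power: float,
                           fs: float = 256.0, drop_ratio: float = 0.5) -> bool:
    """Flag intent when mu power falls well below the resting baseline
    (event-related desynchronization). The threshold is an assumption."""
    return mu_band_power(window, fs) < drop_ratio * baseline_power

# Example: resting EEG has a strong 10 Hz rhythm; during movement
# intention the rhythm attenuates, so mu power drops.
fs, t = 256.0, np.arange(0, 2, 1 / 256.0)
rest = np.sin(2 * np.pi * 10 * t) + np.random.normal(0, 0.3, t.size)
intent = 0.2 * np.sin(2 * np.pi * 10 * t) + np.random.normal(0, 0.3, t.size)
baseline = mu_band_power(rest, fs)
print(detect_movement_intent(intent, baseline))  # expected: True
```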


ApplySci’s 6th Wearable Tech + Digital Health + NeuroTech Silicon Valley – February 7-8, 2017 @ Stanford | Featuring: Vinod Khosla – Tom Insel – Zhenan Bao – Phillip Alvelda – Nathan Intrator – John Rogers – Mary Lou Jepsen – Vivek Wadhwa – Miguel Nicolelis – Roozbeh Ghaffari – Unity Stoakes – Mounir Zok – Krishna Shenoy

AI robot learns ward procedures, advises nurses


Julie Shah and MIT CSAIL colleagues have developed a robot to assist labor nurses. The AI-driven assistant learns how the unit works from people, and is then able to make care recommendations, including scheduling and patient movement.

Labor nurses attempt to predict the arrival and length of labor, and which patients will require a C-section. A smart robot that has observed thousands of these decisions could be a useful resource, though whether a machine can replace human judgment in stressful situations remains to be proven.
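
In the simplest framing, the robot is learning a mapping from patient state to the decision a human nurse actually made. A toy version of that idea with a decision tree follows; the features, labels, and data are invented for illustration, not MIT’s actual model.

```python
from sklearn.tree import DecisionTreeClassifier

# Features: [cervical dilation (cm), hours in labor, fetal distress (0/1)]
X = [
    [2, 1, 0], [4, 5, 0], [6, 9, 0], [9, 12, 0],
    [3, 8, 1], [5, 14, 1], [7, 18, 1], [4, 2, 1],
]
# Labels: the decision the human nurses made in each case.
y = ["wait", "wait", "assign_room", "assign_room",
     "escalate", "escalate", "escalate", "escalate"]

model = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(model.predict([[6, 10, 0]])[0])  # suggestion for a new patient
```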

Click to view CSAIL video

Tiny, ingestible robot can deliver medicine, patch wounds, remove objects


Daniela Rus and MIT, University of Sheffield, and Tokyo Institute of Technology colleagues have developed an ingestible origami robot designed to patch wounds, deliver medicine, or remove foreign objects from a person’s stomach.

The tiny robot, made of pig intestines, can unfold itself from a swallowed capsule. Steered by a doctor using external magnetic fields, the “microsurgeon” crawls across the stomach wall, and propels itself using a “stick-slip” motion.
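
Stick-slip locomotion exploits the gap between static and kinetic friction: a slow stroke stays below the static-friction limit (the body sticks), while a fast stroke exceeds it (the body slips forward). A toy simulation of that effect, with all parameters invented rather than measured from the MIT robot:

```python
import numpy as np

mass = 1e-3            # kg
mu_s, mu_k = 0.4, 0.3  # static and kinetic friction coefficients
g = 9.81
f_static = mu_s * mass * g   # force needed to break static friction
f_kinetic = mu_k * mass * g

def simulate(applied_force, dt=1e-4, steps=20000):
    """Integrate 1-D motion of a body under Coulomb friction."""
    x, v = 0.0, 0.0
    for i in range(steps):
        f = applied_force(i * dt)
        if v == 0.0 and abs(f) <= f_static:
            a = 0.0                              # stick: friction holds
        else:
            a = (f - np.sign(v if v else f) * f_kinetic) / mass
        v += a * dt
        if v < 0:                                # friction stops backsliding
            v = 0.0
        x += v * dt
    return x

# Sawtooth drive: weak slow pull, then a strong brief kick each 10 ms.
def drive(t):
    phase = (t % 0.01) / 0.01
    return 0.002 if phase < 0.9 else 0.01        # N

print(f"net displacement after 2 s: {simulate(drive) * 1000:.1f} mm")
```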

Click to view MIT video


Wearable Tech + Digital Health NYC – June 7, 2016 @ the New York Academy of Sciences

NeuroTech NYC – June 8, 2016 @ the New York Academy of Sciences

Machine learning model enables robotic hand to learn autonomously


Vikash Kumar and University of Washington colleagues have developed a simulation model that allows robotic hands to learn dexterous manipulation from their own experience, without human direction.

A recent study incorporated the model while a robotic hand attempted several tasks, including rotating an elongated object. With each try, the hand became better able to spin the tube. Machine learning algorithms helped it model the basic physics involved, and plan the actions it should take to best complete the task.
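
The loop described here (collect experience, fit a dynamics model, plan through it) can be sketched in a few lines. The linear model and random-shooting planner below are generic illustrations of model-based learning, not the controller from the UW study.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_dynamics(state, action):
    """Stand-in environment, unknown to the learner."""
    return 0.9 * state + 0.5 * action

# 1. Collect experience (state, action, next_state) with random actions.
states = rng.normal(size=200)
actions = rng.normal(size=200)
next_states = true_dynamics(states, actions)

# 2. Fit a linear dynamics model x' = a*x + b*u by least squares.
features = np.stack([states, actions], axis=1)
(a, b), *_ = np.linalg.lstsq(features, next_states, rcond=None)

# 3. Plan: random-shooting MPC toward a target state.
def plan_action(state, target, n_candidates=500, horizon=5):
    best_u, best_cost = 0.0, np.inf
    for _ in range(n_candidates):
        seq = rng.uniform(-1, 1, horizon)
        x = state
        for u in seq:
            x = a * x + b * u              # roll out the learned model
        cost = (x - target) ** 2
        if cost < best_cost:
            best_u, best_cost = seq[0], cost
    return best_u                          # execute first action, replan

print(f"learned model: x' = {a:.2f}x + {b:.2f}u")
print(f"action from x=0 toward target 1.0: {plan_action(0.0, 1.0):.2f}")
```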

Click to view University of Washington video


Wearable Tech + Digital Health NYC – June 7, 2016 @ the New York Academy of Sciences

NeuroTech NYC – June 8, 2016 @ the New York Academy of Sciences

Voice, image, language identifying robot responds to human dialogue


Hitachi’s EMIEW3 robot, designed to provide customer service in commercial environments, could be an ideal companion for the elderly or disabled. Its “remote brain” allows it to identify voices, images and language in its surroundings, which it can process despite background street noise. AI enables it to respond to human dialogue and avoid collisions. It is light enough to lift, can move at 6 km per hour, and can stand up on its own if knocked over. A cosmetic LED-light “beating heart” makes the robot seem more human.


Wearable Tech + Digital Health NYC – June 7, 2016 @ the New York Academy of Sciences

NeuroTech NYC – June 8, 2016 @ the New York Academy of Sciences