AI-optimized glioblastoma chemotherapy

Pratik Shah, Gregory Yauney, and MIT Media Lab researchers have developed an AI model that could make glioblastoma chemotherapy regimens less toxic while remaining effective. It analyzes current regimens and iteratively adjusts doses, seeking the lowest potency and frequency that still reduce tumor size.

In simulated trials of 50 patients, the machine-learning model designed treatment cycles that cut most doses to a quarter or half of their usual potency. It often skipped doses altogether, scheduling administrations twice a year instead of monthly.

Reinforcement learning was used to teach the model to favor behaviors that lead to a desired outcome. Temozolomide, and a combination of procarbazine, lomustine, and vincristine, administered over weeks or months, were studied.

As the model explored the regimen, it decided on an action at each planned dosing interval: it either administered or withheld a dose, and if it administered, it then decided whether the entire dose or only a portion was necessary. With each action, it queried a separate clinical model to see whether the mean tumor diameter shrank.

When full doses were given, the model was penalized, so it instead chose fewer, smaller doses. According to Shah, harmful actions were reduced in order to reach the desired outcome.
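This penalized action loop can be illustrated with a minimal sketch. Everything here is hypothetical: the tumor-response surrogate, the reward coefficients, and the greedy action choice are toy stand-ins for the published model, shown only to make the skip/reduce/full-dose decision structure concrete.

```python
def tumor_response(diameter, dose):
    # Toy surrogate for the clinical model: a higher dose shrinks the
    # tumor more, with slight regrowth between treatment cycles.
    return max(0.0, diameter * (1.02 - 0.1 * dose))

def reward(old_d, new_d, dose):
    # Reward shrinkage, penalize dose potency (a proxy for toxicity).
    return (old_d - new_d) - 0.5 * dose

# Candidate actions at each interval: skip, quarter, half, or full dose.
ACTIONS = [0.0, 0.25, 0.5, 1.0]

def greedy_episode(diameter=5.0, intervals=12):
    """Pick the reward-maximizing action at each planned dosing interval."""
    plan = []
    for _ in range(intervals):
        best = max(
            ACTIONS,
            key=lambda a: reward(diameter, tumor_response(diameter, a), a),
        )
        diameter = tumor_response(diameter, best)
        plan.append(best)
    return plan, diameter
```

Because the penalty term scales with dose, the greedy policy favors skipped or reduced doses whenever the marginal shrinkage does not justify a full dose, mirroring the behavior described above.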

The J. Craig Venter Institute’s Nicholas Schork said that the model offers a major improvement over the conventional “eye-balling” method of administering doses, observing how patients respond, and adjusting accordingly.


Join ApplySci at the 9th Wearable Tech + Digital Health + Neurotech Boston conference on September 24, 2018 at the MIT Media Lab. Speakers include: Rudy Tanzi – Mary Lou Jepsen – George Church – Roz Picard – Nathan Intrator – Keith Johnson – Juan Enriquez – John Mattison – Roozbeh Ghaffari – Poppy Crum – Phillip Alvelda – Marom Bikson – Ed Simcox – Sean Lane

Sensor could continuously monitor brain aneurysm treatment

Georgia Tech’s Woon-Hong Yeo has developed a proof-of-concept, flexible, stretchable sensor that can continuously monitor hemodynamics after a brain aneurysm when integrated with a stent-like flow diverter. Blood flow is measured via capacitance changes.

According to Pittsburgh professor Youngjae Chun, who collaborated with Yeo: “We have developed a highly stretchable, hyper-elastic flow diverter using a highly-porous thin film nitinol. None of the existing flow diverters, however, provide quantitative, real-time monitoring of hemodynamics within the sac of cerebral aneurysm. Through the collaboration with Dr. Yeo’s group at Georgia Tech, we have developed a smart flow-diverter system that can actively monitor the flow alterations during and after surgery.”

The goal is a batteryless, wireless device, extremely stretchable and flexible, that can be miniaturized enough to be routed through the tiny and complex blood vessels of the brain and then deployed without damage. According to Yeo, “It’s very challenging to insert such an electronic system into the brain’s narrow and contoured blood vessels.”

The sensor uses a micro-membrane made of two metal layers surrounding a dielectric material, and wraps around the flow diverter. The device is a few hundred nanometers thick, and is produced using nanofabrication and material transfer printing techniques, encapsulated in a soft elastomeric material.

“The membrane is deflected by the flow through the diverter, and depending on the strength of the flow, the velocity difference, the amount of deflection changes,” Yeo explained. “We measure the amount of deflection based on the capacitance change, because the capacitance is inversely proportional to the distance between two metal layers.”
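The parallel-plate relation Yeo describes can be sketched numerically. The geometry below (plate area, gap, dielectric constant) is illustrative only, not the device's actual dimensions; the point is that capacitance varies inversely with the gap, so flow-induced deflection registers as a capacitance increase.

```python
EPSILON_0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(area_m2, gap_m, relative_permittivity):
    """Parallel-plate capacitance: C = eps0 * eps_r * A / d."""
    return EPSILON_0 * relative_permittivity * area_m2 / gap_m

# Illustrative (not measured) membrane geometry: 1 mm^2 plates,
# 300 nm dielectric gap, eps_r ~ 3 for a polymer dielectric.
c_rest = capacitance(1e-6, 300e-9, 3.0)
c_deflected = capacitance(1e-6, 250e-9, 3.0)  # flow pushes the plates closer

# A stronger flow (larger deflection, smaller gap) reads as a
# larger capacitance, which is what the sensor reports.
assert c_deflected > c_rest
```

Calibrating the deflection-versus-flow relationship then turns the capacitance readout into a flow-velocity estimate.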

Because the brain’s blood vessels are so small, the flow diverters can be no more than five to ten millimeters long and a few millimeters in diameter. That rules out the use of conventional sensors with rigid and bulky electronic circuits.

“Putting functional materials and circuits into something that size is pretty much impossible right now,” Yeo said. “What we are doing is very challenging based on conventional materials and design strategies.”

The researchers tested three materials for their sensors: gold, magnesium and the nickel-titanium alloy known as nitinol. All can be safely used in the body, but magnesium offers the potential to be dissolved into the bloodstream after it is no longer needed.

The proof-of-principle sensor was connected to a guide wire in the in vitro testing, but Yeo and his colleagues are now working on a wireless version that could be implanted in a living animal model. While implantable sensors are being used clinically to monitor abdominal blood vessels, application in the brain creates significant challenges.

“The sensor has to be completely compressed for placement, so it must be capable of stretching 300 or 400 percent,” said Yeo. “The sensor structure has to be able to endure that kind of handling while being conformable and bending to fit inside the blood vessel.”



David Axelrod: VR in healthcare & the Stanford Virtual Heart | ApplySci @ Stanford

David Axelrod discussed VR-based learning in healthcare, and the Stanford Virtual Heart, at ApplySci’s recent Wearable Tech + Digital Health + Neurotech conference at Stanford:



Combined BCI + FES system could improve stroke recovery

Jose Millan and EPFL colleagues have combined a brain-computer interface (BCI) with functional electrical stimulation (FES) in a system that, in a study, enhanced the restoration of limb use after a stroke.

According to Millan: “The key is to stimulate the nerves of the paralyzed arm precisely when the stroke-affected part of the brain activates to move the limb, even if the patient can’t actually carry out the movement. That helps re-establish the link between the two nerve pathways where the signal comes in and goes out.”

Twenty-seven patients with similar lesions resulting in moderate to severe arm paralysis following a stroke participated in the trial. Half were treated with the dual-therapy approach and showed clinically significant improvements. A BCI enabled the researchers to pinpoint where electrical activity occurred in the brain when patients tried to extend their hands. Each time that activity was identified, the system stimulated the muscles controlling the corresponding wrist and finger movements.
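The timing logic at the heart of this dual therapy can be sketched as a simple closed loop. Both functions below are hypothetical stand-ins (the real system decodes motor intent from EEG and drives FES hardware); the sketch only shows the contingency: stimulation fires exactly when intent is decoded.

```python
def detect_motor_intent(eeg_window, threshold=0.8):
    """Hypothetical EEG decoder: True when the classifier's confidence
    that the patient is attempting hand extension exceeds the threshold."""
    return eeg_window["extension_probability"] > threshold

def run_session(eeg_stream, stimulate):
    """Trigger FES only at the moment motor intent is decoded, so the
    stimulation coincides with the brain's outgoing motor command."""
    stim_count = 0
    for window in eeg_stream:
        if detect_motor_intent(window):
            stimulate()  # activate wrist/finger extensor muscles
            stim_count += 1
    return stim_count

# Four windows with decoder confidences 0.2, 0.9, 0.5, 0.95:
stream = [{"extension_probability": p} for p in (0.2, 0.9, 0.5, 0.95)]
fired = run_session(stream, stimulate=lambda: None)  # fires twice
```

The control group's random stimulation corresponds to calling `stimulate()` without the `detect_motor_intent` gate, which is exactly the contrast the study used to isolate the BCI's contribution.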

The control group received FES only, and had their arm muscles stimulated randomly. This allowed the scientists to understand how much additional motor function improvement could be attributed to the BCI system.



Tony Chahine on human presence, reimagined | ApplySci @ Stanford

Myant‘s Tony Chahine reimagined human presence at ApplySci’s recent Wearable Tech + Digital Health + Neurotech conference at Stanford:



Thought, gesture-controlled robots

MIT CSAIL’s Daniela Rus has developed an EEG/EMG robot control system based on brain signals and finger gestures.

Building on the team’s previous brain-controlled robot work, the new system detects, in real-time, if a person notices a robot’s error. Muscle activity measurement enables the use of hand gestures to select the correct option.

According to Rus: “This work, combining EEG and EMG feedback, enables natural human-robot interactions for a broader set of applications than we’ve been able to do before using only EEG feedback. By including muscle feedback, we can use gestures to command the robot spatially, with much more nuance and specificity.”

The researchers used a humanoid robot from Rethink Robotics, while a human supervisor wore electrodes on his or her head and arm.

Human supervision increased correct target selection from 70 to 97 percent.
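The supervision pipeline described above can be sketched as two hypothetical decoders feeding one decision rule: an EEG classifier flags error-related activity when the observer notices a mistake, and an EMG gesture decoder supplies the correction. All names, thresholds, and channel layouts below are illustrative assumptions, not CSAIL's implementation.

```python
def detect_error_potential(eeg_epoch, threshold=0.7):
    """Hypothetical ErrP classifier: flags when the observer's EEG
    indicates they noticed the robot choosing the wrong target."""
    return eeg_epoch["errp_score"] > threshold

def classify_gesture(emg_burst):
    """Hypothetical EMG decoder: map a muscle-activity burst to a
    left or right corrective gesture."""
    if emg_burst["left_channel"] > emg_burst["right_channel"]:
        return "left"
    return "right"

def supervise(robot_choice, eeg_epoch, emg_burst):
    """If the human's EEG flags an error, redirect the robot to the
    gesture-selected target; otherwise keep the robot's own choice."""
    if detect_error_potential(eeg_epoch):
        return classify_gesture(emg_burst)
    return robot_choice

# A flagged error plus a strong left-channel EMG burst overrides "right":
corrected = supervise("right",
                      {"errp_score": 0.9},
                      {"left_channel": 0.8, "right_channel": 0.1})  # "left"
```

Gating the gesture decoder on the EEG error signal is what lets the human intervene only when needed, rather than steering continuously.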

The goal is a system that can be used by people with limited mobility or language disorders.




Phillip Alvelda: More intelligent; less artificial | ApplySci @ Stanford

Phillip Alvelda discussed AI and the brain at ApplySci’s recent Wearable Tech + Digital Health + Neurotech Silicon Valley conference at Stanford:



Bob Knight on decoding language from direct brain recordings | ApplySci @ Stanford


Berkeley’s Bob Knight discussed (and demonstrated) decoding language from direct brain recordings at ApplySci’s recent Wearable Tech + Digital Health + Neurotech Silicon Valley conference at Stanford:



Join ApplySci at the 10th Wearable Tech + Digital Health + Neurotech Silicon Valley conference on February 21-22, 2019 at Stanford University

Nathan Intrator on epilepsy, AI, and digital signal processing | ApplySci @ Stanford

Nathan Intrator discussed epilepsy, AI and digital signal processing at ApplySci’s Wearable Tech + Digital Health + Neurotech Silicon Valley conference on February 26-27, 2018 at Stanford University:



EEG determines SSRI effectiveness in depression

UT Southwestern researchers are using EEG to determine whether an SSRI would effectively treat a person’s depression.

Part of the EMBARC project, the study tracked 300 depressed patients given an eight-week course of an SSRI or a placebo, with EEG recordings taken before and after the trial. Higher theta activity in the rostral anterior cingulate cortex (rACC) before treatment corresponded with greater response to the antidepressant.
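The biomarker here is band power in the theta range (roughly 4-8 Hz) of an EEG trace. A minimal sketch of extracting it, assuming a single-channel recording and a simple FFT-based spectral estimate (the study's actual source localization and statistics are far more involved):

```python
import numpy as np

def theta_power(eeg, fs):
    """Total spectral power in the 4-8 Hz theta band of a 1-D EEG trace
    sampled at fs Hz, via a plain FFT periodogram."""
    freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2 / len(eeg)
    band = (freqs >= 4) & (freqs <= 8)
    return psd[band].sum()

# Synthetic check: a 6 Hz oscillation carries far more theta power
# than a 20 Hz oscillation of the same amplitude.
fs = 256
t = np.arange(0, 10, 1.0 / fs)
theta_like = np.sin(2 * np.pi * 6 * t)
beta_like = np.sin(2 * np.pi * 20 * t)
```

In the study's framing, a pre-treatment measure like this (localized to the rACC) is the candidate predictor of SSRI response.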

EMBARC director Madhukar Trivedi hopes that the EEG test, combined with his previous blood-biomarker-guided drug choice work, will dramatically improve accuracy in predicting whether common antidepressants will work for a patient.

