The system uses an electroencephalography (EEG) monitor to record brain activity, allowing it to detect when the user notices an error in a robot's object-sorting task. Machine-learning algorithms then let the system classify these brain waves in 10 to 30 milliseconds.
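The paper itself does not publish its classifier, but the idea of spotting an error-related brain signal in a short EEG window can be illustrated with a minimal matched-filter sketch. Everything here is hypothetical: the `classify_errp` function, the template, and the threshold are illustrative stand-ins, not the researchers' actual method.

```python
import numpy as np

def classify_errp(window, template, threshold=0.5):
    """Decide whether one EEG window contains an error-related
    potential (ErrP) by correlating it with a known ErrP template.
    A real system would use a trained classifier; this is only a
    sketch of the per-window decision."""
    # Z-score both signals so the correlation is scale-invariant
    w = (window - window.mean()) / (window.std() + 1e-9)
    t = (template - template.mean()) / (template.std() + 1e-9)
    score = float(np.dot(w, t) / len(w))  # normalized correlation in [-1, 1]
    return score > threshold, score

# Hypothetical 48-sample ErrP template and two test windows
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, np.pi, 48))
noisy_errp = template + 0.1 * rng.standard_normal(48)  # template + noise
noise_only = 0.1 * rng.standard_normal(48)             # noise alone

print(classify_errp(noisy_errp, template)[0])  # True: ErrP detected
print(classify_errp(noise_only, template)[0])  # False: no ErrP
```

Because the decision is a single dot product over a short window, it runs in microseconds on commodity hardware, which is consistent with the millisecond-scale budget the article describes.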
“Imagine being able to instantaneously tell a robot to do a certain action, without needing to type a command, push a button or even say a word,” Daniela Rus, CSAIL director and the paper’s senior author, said in a news release. “A streamlined approach like that would improve our abilities to supervise factory robots, driverless cars and other technologies we haven’t even invented yet.”
Get the full story on our sister site, Medical Design & Outsourcing.