Robots Get AI from Startup
Startup Embodied Intelligence announced software to embed machine-learning capabilities into industrial robots.
SAN JOSE, Calif. — In the next few months, industrial robots will learn how to do their jobs by watching humans, using software from a startup that debuts today. The neural-network program from Embodied Intelligence will also let robots improve their performance over time.
The work marks a step toward a future in which robots will understand the visual world. Today, human experts typically train factory-floor robots to repeat motions in a relatively slow two-step process that sometimes requires humans to write custom software.
“Instead of programming each procedure, we demo it — it doesn’t require an expert … the robot learns from trial and error,” said Peter Chen, a co-founder and chief executive of the company.
“Our robot software is not restricted to fixed motions. Today, robots do the same mechanical tasks over and over. Our software gives robots the ability to really see through their cameras and make adjustments.”
In addition to training robots faster and more cheaply, the software also opens the door to teaching new tasks. For example, the system could teach a robot how to thread a wire through a mechanical part, something most computer-vision systems cannot do given the complexity of tracking and programming for a flexible object.
The startup uses a VR link over industrial Ethernet to teach a robotic arm by imitation. (Image: Embodied Intelligence)
The startup uses virtual reality headsets to train robots. It currently uses the HTC Vive headset and its motion controller, although any VR headset will do.
“You see what the robot sees, you make decisions based on what the robot sees … and the robot imitates it,” he said.
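As a rough sketch of that teleoperation loop (not Embodied Intelligence's actual API; `headset`, `camera`, and `arm` here are hypothetical interfaces), the operator's tracked controller pose can be streamed to the arm while every observation/action pair is recorded as a demonstration:

```python
# Hypothetical teleoperation loop, for illustration only: the operator's
# tracked VR controller drives the robot arm, and each observation/action
# pair is logged as a demonstration for later imitation learning.
import time

def record_demonstration(headset, camera, arm, duration_s=180.0, hz=30.0):
    """Stream VR controller poses to the arm while logging demo data.

    `headset`, `camera`, and `arm` are assumed interfaces, not a real API.
    """
    demo = []                                 # (camera_image, target_pose) pairs
    period = 1.0 / hz
    t_end = time.time() + duration_s
    while time.time() < t_end:
        pose = headset.get_controller_pose()  # 6-DoF pose of the Vive controller
        image = camera.read()                 # the frame the robot "sees"
        arm.move_to(pose)                     # the arm imitates the operator
        demo.append((image, pose))
        time.sleep(period)
    return demo
```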
Chen was one of three Berkeley researchers who earlier this year published results from experiments teaching robots 10 basic tasks using machine learning and a VR connection. “With a three-minute demo in VR, robots solved all tasks that previously might have required a PhD in writing algorithms,” he said.
The approach uses the same deep-neural-network techniques that web giants such as Google and Facebook apply to image recognition and other tasks. The VR demos act as training data, setting up the neural-network policies that the robots later refine while running inference tasks.
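In deep-learning terms, that two-stage recipe resembles behavioral cloning: fit a visuomotor policy network to the demonstrated image/action pairs, then keep improving it on the job. The PyTorch sketch below shows the supervised-imitation stage, assuming demos are stored as (image, 6-DoF action) pairs; it is a minimal illustration, not the company's actual architecture.

```python
# Minimal behavioral-cloning sketch (PyTorch): fit a convolutional policy
# to recorded VR demonstrations, mapping camera images to arm actions.
import torch
import torch.nn as nn

class VisuomotorPolicy(nn.Module):
    def __init__(self, action_dim=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, action_dim),   # e.g. a 6-DoF end-effector command
        )

    def forward(self, image):
        return self.net(image)

def train(policy, loader, epochs=10, lr=1e-4):
    """Supervised imitation: minimize MSE between predicted and demonstrated actions."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for images, actions in loader:    # yields (B,3,H,W) images, (B,6) actions
            opt.zero_grad()
            loss = loss_fn(policy(images), actions)
            loss.backward()
            opt.step()
```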
The company currently builds its own Linux x86 servers using up to eight high-end Nvidia GPUs for training and one for inference work.
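That eight-GPUs-for-training, one-for-inference split maps naturally onto data-parallel training. The following sketch, reusing the hypothetical VisuomotorPolicy class from above, shows one common way to express it in PyTorch; the device counts mirror the article, but the code is illustrative rather than the company's stack.

```python
# Illustrative device split: replicate the model across the training GPUs,
# then deploy a single copy on one GPU for low-latency inference.
import torch
import torch.nn as nn

n_gpus = torch.cuda.device_count()            # up to eight on the training box
train_model = VisuomotorPolicy().to("cuda:0")
if n_gpus > 1:
    train_model = nn.DataParallel(train_model, device_ids=list(range(n_gpus)))

# ... training loop runs here ...

# Unwrap the trained weights and pin them to a single inference GPU.
weights = (train_model.module if isinstance(train_model, nn.DataParallel)
           else train_model).state_dict()
infer_model = VisuomotorPolicy().to("cuda:0")
infer_model.load_state_dict(weights)
infer_model.eval()
with torch.no_grad():
    action = infer_model(torch.randn(1, 3, 224, 224, device="cuda:0"))
```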
“In the beginning, we will provide this as a service for users who come to us with their specs … that will help us perfect our platform,” he said. “At some point, we will license the software to systems integrators.”