AI Startup Raises $400 Million to Train Robotic Arms

Robotics and AI start-up Physical Intelligence (PI, pronounced as ‘π’) recently raised $400 million at a valuation of $2 billion to train robots in a humanlike manner. Notable investors include Jeff Bezos and OpenAI.

Based in San Francisco, the company focuses on developing a general-purpose “robot brain” that can be utilized across multiple robotic systems, reducing the need for customized operating software for each robot. This innovation aims to make robotics more scalable and versatile in applications like logistics, healthcare, and manufacturing.

“PI is part of a larger trend in robotics start-ups gaining significant funding for developing advanced systems.”

So far, we have only dealt with machines that work with precision and accuracy but without humanlike ingenuity, which makes robotic actions completely mechanical and entirely command-dependent. For a robot to complete even a small task, a clear set of instructions, with every nitty-gritty detail, has to be fed to its command interface. Physical intelligence is a revolutionary concept in which robots run a large number of simulations of physical scenarios and feed their motion and sensor data to a master AI model, which develops its own understanding of each such scenario.
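To make that loop concrete, here is a minimal, purely illustrative sketch: several simulated scenarios each produce motion and sensor data, which is pooled to fit one shared "master" model. The simulation and the model below are toy stand-ins (a noisy linear relationship and a least-squares slope), not PI's actual system.

```python
import random

def simulate_scenario(seed):
    """Toy stand-in for a physics simulation: returns (sensor, motion) pairs.

    In a real system these would be camera/joint readings and motor commands;
    here we fabricate a simple linear relationship so the example runs anywhere.
    """
    rng = random.Random(seed)
    data = []
    for _ in range(50):
        sensor = rng.uniform(-1.0, 1.0)
        motion = 2.0 * sensor + rng.gauss(0, 0.01)  # the "right" response is 2x
        data.append((sensor, motion))
    return data

def train_master_model(scenarios):
    """Fit one shared model (a least-squares slope through the origin) on data
    pooled from all simulated scenarios, mirroring the idea of a single
    'robot brain' learned from many scenarios at once."""
    pooled = [pair for scenario in scenarios for pair in scenario]
    num = sum(x * y for x, y in pooled)
    den = sum(x * x for x, _ in pooled)
    return num / den

scenarios = [simulate_scenario(seed) for seed in range(10)]
model = train_master_model(scenarios)
print(round(model, 2))  # recovers a value close to 2.0 from the pooled data
```

The point of the sketch is only the data flow: many simulations feed one model, and the model, not any single robot, ends up holding the learned behaviour.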


When connected to the target robots, the model passes this humanlike understanding of the physical world on to them, enabling them to behave in a humanlike manner, apparently without any special, time-consuming training at all.


For example, if you have a robot named Alice, you simply say, “Alice, the washroom tap is leaky.” Alice would then use its LLM to make a sensible action plan, one that involves not just fixing the tap but also wiping the floor dry.
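A hedged sketch of how such a report might flow through an LLM-backed planner. The `call_llm` function below is a hypothetical stub standing in for a real model API, and its canned reply and the simple parsing are purely illustrative:

```python
def call_llm(prompt):
    """Hypothetical stand-in for a real LLM API call; a deployed robot would
    query an actual model here. Returns a canned numbered plan for the demo."""
    return ("1. Shut off the water supply\n"
            "2. Replace the tap washer\n"
            "3. Wipe the floor dry")

def plan_from_report(report):
    """Ask the (stubbed) LLM for an action plan and parse it into steps."""
    prompt = f"You are a household robot. Make a step-by-step plan: {report}"
    reply = call_llm(prompt)
    steps = []
    for line in reply.splitlines():
        line = line.strip()
        if line and line[0].isdigit():  # keep only numbered plan lines
            steps.append(line.split(".", 1)[1].strip())
    return steps

steps = plan_from_report("Alice, the washroom tap is leaky")
print(steps)  # a list of actions, including the wipe-up step a human would add
```

Note that the cleanup step comes from the model's general knowledge of leaky taps, not from any explicit instruction in the report, which is exactly the kind of common-sense planning the article describes.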


The idea behind this big revolution is actually quite simple. The large language models (LLMs) used to build interactive AI systems such as ChatGPT, BLOOM, and Anthropic's Claude can also train robotic arms when combined with actual demonstrations from the physical world. Although LLMs do not interact with the physical world themselves, they can teach robots about it, i.e. about objects and scenarios, through the available training data. This is as if a human being were made to understand the whole world only by reading a book about it. The intelligence that normally comes from actual interaction with the world can be developed in a robot artificially by feeding the experience of those interactions through the AI training model.
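One way to picture that combination is as a prior learned from text that a handful of physical demonstrations then refines. The numbers and the weighted-average rule below are illustrative assumptions, not a real training algorithm:

```python
def language_prior():
    """Hypothetical 'book knowledge': a rough guess at a control gain,
    distilled from text rather than from physical experience."""
    return 1.5

def learn_from_demonstrations(prior, demos, prior_weight=2.0):
    """Blend the text-derived prior with real demonstrations, echoing the
    article's idea of combining LLM knowledge with physical-world data.

    Each demo is an observed (input, output) pair; the gain estimate is a
    weighted average of the prior and the demonstration evidence."""
    demo_gain = sum(y / x for x, y in demos) / len(demos)
    n = len(demos)
    return (prior_weight * prior + n * demo_gain) / (prior_weight + n)

# The true gain in the "real world" is 2.0; the prior guessed 1.5.
demos = [(1.0, 2.0), (2.0, 4.0), (0.5, 1.0)]
estimate = learn_from_demonstrations(language_prior(), demos)
print(round(estimate, 2))  # pulled from the 1.5 prior toward the observed 2.0
```

With only three demonstrations the estimate already moves most of the way from the book-derived guess to the physically observed value, which is the economy the approach promises: a little real interaction goes a long way when a strong prior is already in place.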


Once robots become intelligent enough to take actions derived from human experience of doing things in particular ways, interacting with them will feel much like dealing with humans.


Controlling robots will then be exciting, productive, creative, and easy, with no need to understand robotics science at all.