
When Rodney Brooks talks about robotics and AI, you have to listen. Currently the Panasonic Professor of Robotics Emeritus at MIT, he has co-founded three key companies: Rethink Robotics, iRobot, and his current endeavor, Robust.ai. Brooks also ran MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) for 10 years, beginning in 1997.
He also loves to speculate about the future of AI and keeps a scorecard on his blog tracking how well his predictions pan out.
He knows what he's talking about, and he thinks it may be time to pump the brakes on the massive hype around generative AI. Brooks thinks it's an impressive technology, but it may not be as capable as many people suggest. “I’m not saying LLMs aren’t important, but we have to be careful about how we evaluate them,” he told TechCrunch.
The problem with generative AI, he said, is that while it can do certain tasks perfectly, it can’t do everything a human can do, and humans tend to overestimate its capabilities. “When a human sees an AI system do a task, they immediately generalize that to similar things and estimate the capabilities of the AI system. It’s not just about performance, it’s about competence,” Brooks said. “And they’re often very, very optimistic because they’re using models of individual performance on the task.”
The problem, he added, is that generative AI is not human or human-like, and that trying to give it human capabilities is flawed. He said people see it as so capable that they want to use it for applications that don't even make sense.
Brooks points to his latest company, Robust.ai, which builds warehouse robotics systems, as an example. Someone recently suggested to him that it would be cool and efficient to have an LLM tell the warehouse robots where to go. In his assessment, however, that is not a reasonable use case for generative AI and would actually slow things down. It is much simpler to connect the robots to the data stream coming from warehouse management software.
“If you have to get 10,000 orders in two hours, you have to optimize for that. Language doesn’t help. It just slows you down,” he said. “We have massive data processing, massive AI optimization technology and planning, and that’s how we get orders done quickly.”
Another lesson Brooks has learned about robots and AI is not to try to do too much. The technology should solve a solvable problem, one into which a robot can be easily integrated.
“Automation makes sense in areas that have already been cleaned up. My company, for example, does pretty well in warehouses, which are actually fairly constrained. Even in those big buildings, the lighting doesn’t change. There’s no stuff on the floor, because people pushing carts would bump into it. There are no floating plastic bags. And it’s not really in the interest of the people working there to be malicious to the robot,” he said.
Brooks also stresses that robots and humans need to work together. So rather than making robots that look like humans, his company designed its robots around the practical demands of warehouse operations; the result looks like a shopping cart with handles.
“So the form factor that we use is not a walking humanoid, even though I’ve built and delivered more humanoids than anybody else. It looks like a shopping cart,” he said. “It has handlebars, so if there’s a problem with the robot, a person can grab the handlebars and do whatever they want.”
After many years, Brooks realized that it was important to make technology accessible and purpose-built. “I always try to make technology easy for people to understand. That way, you can deploy it at scale and always see the business case. The return on investment is also very important.”
Still, Brooks says we have to accept that there will always be hard-to-solve outliers when it comes to AI that could take decades to fix. “If we don’t carefully constrain how AI systems are deployed, we’re always going to have a long list of special cases that take decades to discover and fix. Ironically, all those fixes are done by AI.”
Brooks added that there is a misconception, fueled largely by Moore’s Law, that technology always grows exponentially: if ChatGPT 4 is this good, imagine what ChatGPT 5, 6, and 7 will be like. He sees a flaw in that logic, because despite Moore’s Law, technology doesn’t always grow exponentially.
He uses the iPod as an example. Over a few iterations, its storage doubled from 10GB all the way up to 160GB. If that trajectory had continued, we would have had an iPod with 160TB of storage by 2017, but of course we didn't. The models sold in 2017 actually came with 256GB or 160GB because, as he points out, no one really needed more than that.
Brooks acknowledges that LLMs could come in handy at some point when it comes to home robots that can perform specific tasks, especially as the population ages and there is a shortage of workers to care for them. But even that can come with its own challenges, he says.
“People say, ‘Oh, if you use large language models, you’ll be able to do things that robots can’t do.’ The problem isn't there. The problem with being able to do something is about control theory and all kinds of other hardcore mathematical optimization,” he said.
Brooks explains that this could eventually lead to robots with useful language interfaces for people in care situations. “It’s not useful in a warehouse to tell individual robots to go out and get one item for one order, but in aged care in the home it could be useful if people could tell the robots things,” he said.