The frustration and anguish of trying and failing to piece together Ikea furniture may seem like an exercise in humiliation for you, but know this: The particleboard nightmare may one day lead to robots that aren’t so stupid.
In recent years, roboticists have been finding that building Ikea furniture is actually a great way to teach robots how to handle the chaos of the real world. One group of researchers coded a simulator in which virtual robot arms used trial and error to put chairs together. Others managed to get a different set of robot arms to construct Ikea chairs in the real world, though it took them 20 minutes. And now, a helpful robot can assist a human in assembling an Ikea bookcase by predicting what part they’ll want next and handing it over.
“It’s one of these things that’s easy to try—even if we break a couple of bookcases in the lab, it’s not a big deal,” says University of Southern California roboticist Stefanos Nikolaidis, coauthor on a recent paper describing the research, which was presented in May at the International Conference on Robotics and Automation. “It’s pretty cheap. And it’s also something that we all have to do at some point in our life.”
Nikolaidis and his colleagues began by studying how different people construct an Ikea bookcase. Instead of providing them with that instruction sheet of pictographs, they had the subjects improvise the order in which they configured the supporting boards for the frame, as well as the shelf inserts. (That’s an important distinction, because the bigger research question for this experiment isn’t about building furniture—more on that in a second.) Based on these results, the researchers could group people into types, or preferences. Some would attach all the shelves to one of the frames, for instance. Others would attach a single shelf to both frames at once. These are known as action sequences.
They then had subjects do the assembly again, this time with a robot arm nearby to grab pieces for them. The researchers would log which pieces (shelves or supports) the person began with, establishing a pattern for the robot to key into. “Let’s say that you come in and you put the first shelf,” says Nikolaidis. “OK, the robot doesn’t know that much. Then you pick the second shelf. And now you start putting the third shelf. Well, it’s very, very likely that you belong to that group of users that assembled all six shelves in a row. It’s very, very unlikely that you would then suddenly change your preference.” Once the robot knows a person’s preference, it’ll hand them the part that it knows people like them had previously chosen next. The experiments showed that the robot could quickly and accurately adapt to a human’s style in this way, successfully handing off the right components.
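The inference Nikolaidis describes can be sketched as prefix matching over stored action sequences: the robot compares the actions it has seen so far against the sequences of each known user type, then hands over whatever part the matching types did next. The sketch below is purely illustrative — the user-type names, the sequences, and the voting rule are assumptions for demonstration, not the model from the paper.

```python
from collections import Counter

# Hypothetical action sequences from earlier participants, grouped by
# assembly "preference" (user type). These are illustrative, not real data.
USER_TYPES = {
    "all_shelves_first": ["shelf"] * 6 + ["support"] * 2,
    "alternating": ["shelf", "support", "shelf", "support",
                    "shelf", "support", "shelf", "shelf"],
}

def matching_types(observed):
    """Return the user types whose sequences begin with the observed actions."""
    return [t for t, seq in USER_TYPES.items()
            if seq[:len(observed)] == observed]

def next_part(observed):
    """Predict the next part to hand over, given the actions seen so far."""
    candidates = matching_types(observed)
    if not candidates:
        return None  # no stored preference matches; the robot would wait
    # Vote among matching types on what each one did next.
    votes = Counter(USER_TYPES[t][len(observed)] for t in candidates
                    if len(USER_TYPES[t]) > len(observed))
    return votes.most_common(1)[0][0] if votes else None
```

After three shelves in a row, only the "all shelves first" type still matches, so the sketch predicts another shelf — the same logic as Nikolaidis’s example, where two or three consistent actions are enough to pin down a user’s group.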
Think of it like the way AI researchers develop an image-recognition algorithm: If you want to detect cats, you feed a neural network oodles of images of felines. Because it has previously seen so many examples, the algorithm can then generalize. If you show it a picture of a cat it’s never seen before, it can draw on its previous knowledge to confirm it is indeed parsing a furry four-legged mammal with a crappy attitude.
This robot is doing the same, only instead of using a bank of static images, it’s drawing on examples of sequences, the order in which the humans pieced together shelves and supports, based on their preferences. “The robot knows that the next action that it should do is handing you the next shelf, with very, very high certainty,” says Nikolaidis.
In the end, though, this research isn’t about developing highly specialized robots that come to your house and help you build bookcases. Nor is it about developing machines that can do complex tasks like this on their own. It’s about teaching robots how to collaborate with humans without driving them even more insane than people already get when building Ikea furniture.
Despite all the hoopla about robots arriving to steal our jobs, the reality is that you’re more likely to have a machine work with you than replace you outright. For the time being—and probably for quite some time in the future—people are just going to be way better at certain tasks. No machine can replicate the dexterity of the human hand or come anywhere close to solving problems like we do. What robots are good at is brute work. Think of an automotive assembly line: Robot arms heft car doors into place, but the fine detail work requires a human touch.
In fact, Ikea furniture turns out to be a useful proving ground for roboticists because it’s a sort of miniaturized version of an auto factory. “In the Ikea example, how can the robot deliver tools to you?” says Nikolaidis. “We want to do this in a way that is efficient and productive, right? We want the robot to be able to anticipate what tool a worker is going to need, and deliver the tool to them.”
His team’s idea is to generalize this system to other situations in which a human and robot might collaborate—like maybe for an aircraft mechanic who wants a machine to grab a wrench for them. In robotics this is known as human-robot interaction, or HRI, the quest to get people and machines to collaborate, exploiting their respective skills, instead of making them compete with one another.
There’s a long way to go, though. In these experiments, the researchers told the robot what the human was doing, instead of the robot detecting those steps itself. (The team is now working on a machine vision system in which the robot could actually watch the human and figure things out on its own.)
And there are all kinds of nuances to consider with this kind of cooperation too. A robot has to safely hand over a screwdriver instead of jamming it through someone’s hand, for instance. It has to stay out of the way at all times, yet be close enough to help. “There’s also physical effort,” says Nikolaidis. “So for example, I’m more likely to pick up something that is near me than something that’s far away. If you change a little bit how tools are placed or objects are placed like that, you will see also different variability.”
A robot has to somehow adapt to these kinds of uncertainties to remain useful instead of becoming a burden. “When developing robots, it’s important to recognize that one size doesn’t fit all—cooks, nurses, mechanics, surgeons, almost all workers do their jobs in different ways,” says UC Berkeley roboticist Ken Goldberg, who wasn’t involved in this new research. “So robots that adapt to human preferences will be more appealing and helpful. This paper proposes an interesting new way to learn these preferences.”
Now if you wouldn’t mind, helpful robot, please hand me the dreaded Ikea Allen wrench.