Robots Going Meta
A flying robot is zipping along above mountainous terrain on a mission: it’s looking for a lost hiker. It’s also small, and its range is limited by its battery, which is critically low – it will need to return to base to recharge. How does it decide which tasks to prioritize? Should it take a thermal image, move to a different location, or try to pick up an object on the ground?
The National Science Foundation has awarded a $1.2 million grant to ECE’s Ryan Williams, an assistant professor, to help robots solve these dilemmas independently. For years, Williams has been developing teams of robots that can interact with each other and with humans to survey and analyze large tracts of land. Now he is pushing his work a step further to make robots more capable of higher-level self-awareness and decision-making.
Past research in robotics has focused primarily on designing robots to perform tasks such as plotting a course through an environment. “Think of the Roomba,” says Williams. “It’s capable of mapping and localizing itself. That allows it to do a (somewhat poor) job of cleaning floors by solving mapping problems.”
But if a robot has multiple functions, it has to be able to decide which functions take priority to increase its chances of achieving its goal, while considering its energy and time constraints.
“It’s called computational awareness, because we want to allow robots to balance the computational loads associated with autonomy,” explains Williams. “Humans are very good at this. We can navigate an environment while making high-level decisions.”
To illustrate his point, he uses the example of commuting to work: we determine when the road ahead is busy, dangerous, or slow, and automatically decide when to focus on the path ahead, change the radio station, check the GPS, or think about work. “We humans anticipate increased cognitive load very well,” observes Williams. “We want to embody this in robots.”
While Williams focuses on the robots’ higher-level decision-making related to mapping, Haibo Zeng, an assistant professor of ECE and the other principal investigator on the NSF grant, is working with Jia Bin Huang, an assistant professor of ECE, to develop innovative methods for profiling computation on embedded robot hardware. These methods help the robots analyze data and estimate how long processes will take, improving their ability to schedule tasks.
“If I know how long decisions typically take and how many I need to complete, I can schedule them to guarantee results by a deadline,” notes Williams.
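The idea Williams describes here is a classic real-time scheduling problem: given estimated runtimes and deadlines for each onboard decision, order them so every one finishes in time. A minimal sketch of one standard approach, earliest-deadline-first (EDF) scheduling, is below; the task names, durations, and deadlines are invented for illustration and are not from the project.

```python
# Hypothetical sketch: schedule onboard decisions by earliest deadline
# first (EDF) and check whether all of them can finish on time on a
# single processor. Task names and timing numbers are invented.

def edf_schedule(tasks):
    """Order tasks by deadline and check single-processor feasibility.

    tasks: list of (name, duration_s, deadline_s) tuples.
    Returns (order, feasible): task names in EDF order, and whether
    every task finishes by its deadline when run back to back.
    """
    order = sorted(tasks, key=lambda t: t[2])  # earliest deadline first
    elapsed = 0.0
    feasible = True
    for _, duration, deadline in order:
        elapsed += duration
        if elapsed > deadline:  # this task would miss its deadline
            feasible = False
    return [name for name, _, _ in order], feasible

# Invented example: three decisions a small aerial robot might face.
tasks = [
    ("plan_return_path", 2.0, 10.0),
    ("thermal_scan", 1.5, 4.0),
    ("update_map", 3.0, 9.0),
]
order, feasible = edf_schedule(tasks)
print(order, feasible)  # thermal scan runs first; all deadlines met
```

EDF is provably optimal for this single-processor setting: if any ordering meets all deadlines, the earliest-deadline-first ordering does too, which is what makes the deadline guarantees Williams mentions possible once runtimes are known.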
One of Williams’ other collaborators on the project, Changhee Jung, a professor of electrical and computer engineering at Purdue University, is designing ways for their robots to be more aware of issues affecting their reliability. For example, after the Fukushima Daiichi nuclear disaster, robots inspecting and cleaning up the site struggled to operate effectively. “Radiation creates issues of hardware reliability,” Williams explains. “It can flip bits in memory and corrupt data. The robots at Fukushima failed precisely because of the influence of radiation.”
To address this issue, Williams and Jung are coming up with new methods for robots to map out factors in their environment that will affect their computational reliability, evaluate the level of the threat on a spectrum, and decide when to avoid and when to accept risks as they try to achieve their objective.
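One simple way to frame the “when to accept risk” decision described above is as an expected-value calculation: weigh a region’s mission value against the estimated probability that its environment (radiation, in the Fukushima example) causes a hardware failure. The sketch below is a hypothetical illustration of that framing, not the project’s actual method; all names and numbers are invented.

```python
# Hypothetical illustration: score each region by expected payoff,
# trading mission value against the estimated probability of a
# radiation-induced hardware failure. All values are invented.

def expected_value(value, failure_prob, loss):
    """Expected payoff of entering a region: the gain if the hardware
    survives, minus the cost of losing the robot, weighted by the
    estimated failure probability."""
    return (1 - failure_prob) * value - failure_prob * loss

# (mission value, estimated failure probability, cost of losing robot)
regions = {
    "reactor_hall": (10.0, 0.7, 20.0),  # high value, but high risk
    "loading_dock": (4.0, 0.1, 20.0),   # modest value, low risk
}
for name, (value, p_fail, loss) in regions.items():
    ev = expected_value(value, p_fail, loss)
    decision = "enter" if ev > 0 else "avoid"
    print(name, round(ev, 2), decision)
```

Under these invented numbers, the high-value reactor hall is still avoided because the expected loss dominates, while the lower-value loading dock is worth entering; mapping such scores over the environment is one way to place threats “on a spectrum” as the text describes.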
Building Human-Robot Trust
This latest research project intersects with Williams’ ongoing work on two other efforts. With NSF funding, he is also working with interdisciplinary teams to develop robots for search and rescue missions (finding people lost in rugged terrain) and precision grazing (determining when livestock should change pastures).
For now, the robots designed for these projects are programmed to avoid humans entirely in order to remain reliably safe. Because they are autonomous, says Williams, there is no guarantee that they could approach humans safely.
“Computational awareness puts us on the road towards improved safety,” Williams explains, because it helps robots determine how and under what circumstances they can safely interact with humans.
Improving robots’ ability to interact safely with humans is critical for diversifying the tasks they can perform. As robots become increasingly present in our lives, their primary role will be to enhance humans’ efforts by taking on mundane or dangerous tasks. “It’s going to be very hard to convince [human] practitioners to give up control. I agree with them. It needs to be a mixture of humans and robots working together,” Williams concludes. But for that to happen, robots will have to come with safety guarantees.
Williams’ research involves understanding humans and how they will learn to trust robotic partners. Ensuring robots can approach and interact with humans safely would obviously be a major step towards building the confidence necessary for autonomous systems to take on more tasks.
While engineers work towards a future in which robots can interact with humans in closer physical proximity, Williams is also helping society understand the value of robots by determining tasks that can already be turned over to them.
His new project will make it easier for robots to decide, for example, which terrain to explore first when searching for lost people by computing which areas would be most difficult or dangerous for human search teams to enter. In mountainous areas, terrain might be too steep for easy access. “We would have robots go to a treacherous region, take a thermal snapshot and report back: ‘I think there’s a person here. It’s worth your time to deploy people into this ravine.’”
For robots to make these decisions and coordinate with humans, they also need to know when and how to propose solutions that, while based on a sound computational analysis of a situation, might run counter to human intuition. “It might be counterproductive to recommend solutions that are mathematically optimal if they are not intuitive. We are working on a path recommender that takes into consideration the trust building that needs to happen between autonomous systems and humans, so that later the human trusts the autonomy.”
Building confidence in robotics through concrete projects with an immediate and positive impact on humans is central to Williams’ research vision. With a large interdisciplinary team of specialists and students, he is pushing the frontier of robotic capabilities by demonstrating how robots can solve real-world challenges.
“I want to see physical robots that deploy in their environment and are adapting to computational needs,” observes Williams. “I think it would be lovely to have robots that actually adapt to their environment, improve safety, and have tangible benefits to people.”