In the not-too-distant future, our smart cars, bridges, buildings, and cities will all be talking to each other, says ECE Assistant Professor Ryan Williams. This highly interconnected infrastructure will rely on sensors, distributed intelligence, and robotic teams.
"In the coming decades, we'll have teams of fully autonomous, distributed robots," says Williams. But how will large-scale autonomous systems coordinate and compute in a distributed or decentralized way?
This is the question that Williams poses in his research and strives to answer in his Laboratory for Coordination at Scale (CASLab).
Williams and graduate students Anand Bangad and Siddharth Bhal are designing algorithms to carry out multi-robot computation, where a system composed of many robots autonomously assigns and distributes computational tasks. In this setup, the network could detect when the workload is heavier in some sectors, then dynamically redistribute tasks to complete the job as efficiently as possible.
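The dynamic rebalancing described above can be pictured with a toy diffusion-style scheme: each robot compares its workload with its network neighbors and hands a task to a lighter-loaded peer. This is an illustrative sketch only, not CASLab's actual algorithm; the network, loads, and handoff rule are all assumptions.

```python
# Toy sketch of diffusion-style load balancing among networked robots.
# (Hypothetical illustration; not the CASLab algorithm.)

def balance_step(loads, neighbors):
    """One round: every robot offers one task to its lightest-loaded
    neighbor, but only if that neighbor carries strictly less work."""
    transfers = []
    for robot, load in enumerate(loads):
        if not neighbors[robot]:
            continue
        lightest = min(neighbors[robot], key=lambda n: loads[n])
        if loads[lightest] < load - 1:
            transfers.append((robot, lightest))
    new_loads = list(loads)
    for src, dst in transfers:
        new_loads[src] -= 1  # task leaves the overloaded robot
        new_loads[dst] += 1  # and arrives at the lighter-loaded one
    return new_loads

# Four robots in a line network; robot 0 starts with most of the work.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
loads = [8, 2, 1, 1]
for _ in range(10):
    loads = balance_step(loads, neighbors)
print(loads)  # → [4, 3, 3, 2]
```

After a few rounds the workload evens out with no central server: each robot acts only on its neighbors' loads, which is the decentralized flavor Williams describes.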
"I'm intrigued by the idea of an autonomous network doing its own computation, as opposed to relying solely on a centralized server," says Williams.
In the flurry and excitement of robot design and development, we commonly assume that whatever robots are sensing is worth sensing, Williams says. One arm of his research explores this assumption, questioning if it is the best use of resources. Williams wants to know what is appropriate to sense and when to sense it.
"Quadrotors can only fly for 30 minutes at most, and they often have low-power embedded computation," says Williams. "So asking the question of when you should use your resources is very important."
Williams and doctoral student Jun Liu have been applying tenets of decision theory to develop multi-scale planning algorithms. The algorithms equip teams of robots with the ability to autonomously decide what to observe when, then self-deploy and report back to further refine the plan.
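One hedged way to picture the "what to observe, and when" decision is an expected-value-of-information test: sense only when the anticipated drop in uncertainty outweighs the energy cost of the observation. This sketch is illustrative, not the Williams-Liu planning algorithm; the Bernoulli belief, the threshold rule, and the cost weight are all assumptions.

```python
# Illustrative sketch: decide whether sensing is worth the battery.
# (Hypothetical model; not the multi-scale planner from the article.)
import math

def entropy(p):
    """Shannon entropy (bits) of a Bernoulli belief with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def should_sense(belief, expected_entropy_after, energy_cost, weight=1.0):
    """Sense only when the expected uncertainty reduction outweighs
    the (weighted) energy cost of taking the observation."""
    info_gain = entropy(belief) - expected_entropy_after
    return info_gain > weight * energy_cost

# Highly uncertain belief: a cheap observation pays off.
print(should_sense(0.5, 0.2, 0.3))   # → True
# Nearly certain belief: not worth the battery.
print(should_sense(0.95, 0.2, 0.3))  # → False
```

The same comparison, run on board each robot, lets a team spend its limited flight time only where observations actually refine the plan.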
This distributed computing and decision-making would lend itself to any number of applications, including environmental modeling. For example, in the first phase of an oil spill, high-intensity monitoring is necessary. As the spill is transported by currents and becomes more predictable, monitoring shifts into a phase that's longer-lived but less resource-intensive. Meanwhile, the spill's surface area grows, and its impact on wildlife unfolds at different timescales.
"It would be highly inefficient to constantly send out big teams of autonomous surface and air vehicles to track slowly evolving impact," says Williams. "In this design, expert-informed models and decision theory guide deployment."
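The oil-spill example suggests deployment intensity that tapers as the spill's dynamics become predictable. A toy model might capture this as an exponentially decaying team size with a small long-lived floor; the decay form and every parameter here are purely illustrative assumptions, not an expert-informed model.

```python
# Toy phase-based deployment schedule, loosely modeled on the
# oil-spill example. (Hypothetical; parameters are assumptions.)
import math

def deployment_size(day, initial_team=10, decay=0.3, minimum=1):
    """Vehicles to deploy on a given day: intense early monitoring
    tapering to a small, long-lived presence."""
    size = initial_team * math.exp(-decay * day)
    return max(minimum, round(size))

print([deployment_size(d) for d in (0, 5, 20)])  # → [10, 2, 1]
```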
As autonomous drone capabilities expand, researchers grapple with new challenges in modeling and in security.
For example, multi-robot teams that can adapt to their environment are being vetted for infrastructure inspection: drones that can inspect bridges, ships, or tanks. "The model that dictates how a quadrotor behaves is significantly different when it's flying near a bridge versus in an open field," says Williams.
However, a flexible model exposes a security risk. How does one secure a system that learns? Williams and doctoral student Remy Wehbe have begun investigating.
"It's a very interesting question, and one that's relevant, given the recent advances in deep learning," says Williams. "Autonomous systems that learn and adapt are absolutely the future."
Introducing Ryan Williams
- Joined ECE Fall 2016
- Ph.D., electrical engineering, University of Southern California, '14
- B.S., computer engineering, Virginia Tech, '05
- Viterbi Fellowship at the University of Southern California