Reliable, energy-efficient heterogeneous networks-on-chip

Professor Paul Ampadu joined ECE in the fall of 2016.

Professor Paul Ampadu dreams of putting more than 1,000 electronic/photonic/nanomechanical cores on a single chip. He wants to shrink large data centers, like Amazon server warehouses, into an integrated system with the same processing capability that might fit in the palm of your hand.

Ampadu seeks to realize this vision by developing reliable, energy-efficient, heterogeneous Networks-on-Chip (NoC), putting a twist on the conventional System-on-Chip (SoC) design.

A dual-core sensor chip fabricated by the Ampadu group.

SoCs are typically bus-based, using a single lane of communication to connect all of the components on the chip—microcontroller, memories, timing sources, etc. While this approach works well when there are only a few systems to arbitrate, says Ampadu, issues arise as the number of cores increases.

"The big disadvantage with a bus system is that it is quite unscalable, with worsening latency and deadlocks as the number of cores increases," says Ampadu. "When everybody tries to access a bus at the same time, it can lead to nightmares of delay and arbitration inefficiency."

Transforming System-on-Chip design

Ampadu's research focuses on re-engineering the non-scalable SoC design into a more structured, regular, and modular network through which components interact with one another.

With a mesh network-on-a-chip topology, all nodes can access resources without a shared bus. One node (core) can interact with any other via a series of simple hops.
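
One way to picture those hops is the sketch below, which routes a packet through a 2D mesh using dimension-order (XY) routing, a common deadlock-free scheme for mesh NoCs; the article does not specify which routing algorithm the group actually uses. The hop count between two cores is simply the Manhattan distance between them, so it grows modestly even as the mesh scales to hundreds of cores.

```python
# Minimal sketch of hop-based communication in a 2D mesh NoC.
# Assumes dimension-order (XY) routing, a common deadlock-free scheme for
# meshes; this is an illustration, not the group's specific router design.

def xy_route(src: tuple[int, int], dst: tuple[int, int]) -> list[tuple[int, int]]:
    """Return the sequence of (x, y) nodes a packet visits, traveling
    first along the X dimension and then along Y."""
    x, y = src
    path = [src]
    while x != dst[0]:                     # correct the X coordinate first
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:                     # then correct the Y coordinate
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

route = xy_route((0, 0), (3, 2))
print(route)                   # [(0,0), (1,0), (2,0), (3,0), (3,1), (3,2)]
print(len(route) - 1, "hops")  # 5 hops: the Manhattan distance |dx| + |dy|
```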

"A design like this is scalable in the same way the internet is," says Ampadu. "Even if the world population increases to 20 billion, people will still be able to get online."

While hopping brings scalability into reach, each hop adds latency. But, Ampadu explains, performance is measured by how many tasks the system can process in a set unit of time.

"As soon as a node passes off a task, it's ready for the next one," says Ampadu. Even with the slight latency increase, "we can accomplish more in the same amount of time."

Ampadu, his Ph.D. student Venkata Sai Kiran Adigopula, and his master's students Shamit Bansal, Vinidhra Sivakumar, Sathyapriya Subramani, and Vigneshwaran Venugopal Kalaiarasi are also researching topics in many-core task allocation, approximate computing and storage, and joint reliability/security for many-core chips.

In the footsteps of Claude Shannon

Ampadu's approach has been shaped by the perspective shift Claude Shannon introduced in the 1940s, which prioritizes preserving information over perfecting the physical medium through which it is transferred.

"We do the same thing," says Ampadu. "We focus on coding information so that we can transmit and receive it reliably."

In this way, his research team can worry less about the unreliable transistors and interconnects that accompany today's aggressively scaled nanotechnologies and instead focus on signal-processing and information-theoretic techniques for handling information. The group has modeled a reliable, energy-efficient 1,000-core chip using this approach.

"It works beautifully," says Ampadu.

Introducing Paul Ampadu

  • Joined ECE Fall 2016 as Professor

  • Associate Professor, University of Rochester, 2010-2016

  • Assistant Professor, University of Rochester, 2004-2010

  • Dr. Martin Luther King Jr. Visiting Associate Professor, Massachusetts Institute of Technology (MIT), 2011-2013

  • Ph.D., electrical and computer engineering, Cornell University, 2004

  • M.S., electrical engineering, University of Washington, 1999

  • B.S., electrical engineering, Tuskegee University, 1996

  • National Science Foundation CAREER Award, 2010

  • Black Engineer of the Year Special Recognition Award, 2010

  • IEEE Circuits and Systems Society Board of Governors, elected member, 2010-2016