As hardware and applications evolve, so must operating systems.
Operating systems may be the workhorses of computer engineering, but rapidly changing demands from both hardware and applications keep them an active research area.
"There are a lot of changes these days," says ECE's Changwoo Min, who recently joined the department as an assistant professor. He describes recent advances that challenge operating systems, including parallel and multicore computing, nonvolatile memory, machine learning, and stronger computer security. Each, he says, comes with its own demands and possibilities.
"Operating systems often don't receive the same scrutiny or hype as other systems, but they must support all the advances," he says.
The multicore challenge
One advance that impacts operating systems is the use of multicore platforms, which have sustained regular performance gains as single-core clock speeds reach the limits of silicon.
Modern multicore processors have up to 76 physical cores and 304 hardware threads, Min says, and researchers expect that number to rise with parallel machine learning workloads and other advances.
The essence of multicore scalability and performance is coordination, he says: to make a design correct, efficient, and scalable, designers must reason through every possible combination of conditions. That, according to Min, is the fun part.
Timing is critical in operating systems. Many operating systems were originally designed for single-core processors, so their schedulers can throw off the rhythm and upset the performance of parallel applications on multicore processors, according to Min. "Coordination is all about ordering," he says. "Which event happens first, and which event happens later."
Although the typical method for ordering operations is the timestamp, "timestamp generation itself can be a scalability bottleneck," Min says. He proposes to cut down on this time cost by using a physical timestamp. He is also designing a lockless scheduler implementation with fewer context switches and more efficient scheduling decisions.
Nonvolatile memory advantages
With the emergence of nonvolatile memory, storage devices have become several orders of magnitude faster, says Min. However, users are unable to take full advantage of the increased speeds because the operating system has become the bottleneck. Min and his collaborators have been analyzing the behavior of widely deployed file systems and identifying aspects of file system design that must be brought up to speed.
For example, "all operations on a directory are sequential regardless of read or write," says Min. "A file cannot be concurrently updated even if there is no overlap in each update. Moreover, we should revisit the core design of file systems for many-core scalability."
Machine learning demands
New applications incorporate big data and machine learning, and application scalability is a critical aspect for multicore processing, says Min. But the techniques (like task placement and data sharing) used to achieve scalability can misbehave in operating system subsystems, which interact and share data structures among themselves. Min sees this as an opportunity to figure out the data requirements and customize the operating system itself.
One way to do this is with a hardware accelerator, such as a graphics processing unit (GPU). For applications like big data and machine learning that require a lot of computation, the operating system should be able to recognize these types of operations and send them to an accelerator for processing. Min likes to think of these new operating system architectures as "a new customer requiring something completely different."
Rising security concerns
Security concerns are rising, and Min is working to minimize operating system vulnerabilities. "It may be a small bug in the software or hardware, but the consequences are far from small in the operating system," he says.
He is working with ECE's Binoy Ravindran on software re-randomization, a defense shaped by studying how adversaries attack existing infrastructure via software.
One traditional attack is for the adversary to determine the system's memory contents and analyze them to find a piece of code that can be exploited. To thwart these attacks, "we just change the memory," explains Min. "We randomly change the contents of the memory while maintaining the semantics of the code and data."
Min and Ravindran are also collaborating to extend the security infrastructure of the Popcorn Linux project.
"It's a dynamic and challenging area," says Min. "New requirements, new security vulnerabilities, and new definitions pop up every day."