Multicore requires OS rework, Windows architect advises

Microsoft architect Dave Probert sees tomorrow's OS kernel acting more like a hypervisor

With chip makers continuing to increase the number of cores they include on each new generation of their processors, perhaps it's time to rethink the basic architecture of today's operating systems, suggested Dave Probert, a kernel architect within the Windows core operating systems division at Microsoft.

The current approach to harnessing the power of multicore processors is complicated and not entirely successful, he argued. The key may not be in throwing more energy into refining techniques such as parallel programming, but rather rethinking the basic abstractions that make up the operating systems model.

Today's computers don't get enough performance out of their multicore chips, Probert said. "Why should you ever, with all this parallel hardware, ever be waiting for your computer?" he asked.

Probert made his presentation on Wednesday at the University of Illinois at Urbana-Champaign's Universal Parallel Computing Research Center.

Probert is on the team working on the next generation of Windows, though he said the ideas in this talk did not represent any actual work his team is doing for Microsoft. In fact, he noted that many of the other architects on the Windows kernel development team don't even agree with his views.

For the talk, he set out to define what a new operating system, if designed from scratch, would look like today. He concluded it would be quite different from Windows or Unix.

Today's typical desktop computer runs multiple programs at once, playing music while the user writes an e-mail and surfs the Web, for instance.

"Responsiveness really is king," he said. "This is what people want."

The problem with being responsive, he noted, is "how does the OS know [which task] is the important thing?" You don't want to wait for Microsoft Word to start because the antivirus program chose that moment to begin scanning all your files.

Most OSes have some priority scheduling to avoid these bottlenecks, but it is still crude. (Probert even suggested telemetry, in the form of a "This Sucks!" button on each computer, that a user could push whenever he or she gets frustrated with the computer's pokiness. The resulting data could be compiled to give OS developers a better idea of how to improve scheduling.)
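To make the idea concrete, here is a minimal sketch, in Python, of crude priority scheduling with a telemetry nudge layered on top. Nothing here is from an actual Windows implementation; the `Task`, `frustration_boost`, and `pick_next` names are all hypothetical, standing in for Probert's "This Sucks!" button feeding back into the scheduler.

```python
class Task:
    """A runnable task with a dynamic priority (lower value = runs first)."""
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority

def frustration_boost(task, amount=5):
    # Telemetry signal: the user pressed the hypothetical "This Sucks!"
    # button while waiting on this task, so raise its priority.
    task.priority = max(0, task.priority - amount)

def pick_next(tasks):
    # Crude priority scheduling: always run the highest-priority task.
    return min(tasks, key=lambda t: t.priority)

word = Task("word", priority=10)
antivirus = Task("antivirus-scan", priority=8)  # batch work, yet it runs first

tasks = [word, antivirus]
assert pick_next(tasks) is antivirus   # the scan wins; Word feels sluggish

frustration_boost(word)                # user telemetry nudges the scheduler
assert pick_next(tasks) is word        # now the interactive task runs first
```

The point of the sketch is the feedback loop: without the telemetry signal, a static priority scheme has no way to learn which task the user actually cares about at that moment.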

As they began adding multiple processor cores, chip makers took a "Field of Dreams" approach to multicore chips: Build them and hope the application programmers would write programs for them. The problem is today's desktop programs don't use the multiple cores efficiently enough, Probert said.

To get the full benefit from multiple cores, developers need to use parallel programming techniques. Parallel programming remains a difficult discipline to master, and it hasn't seen much use outside of specialized scientific programs such as climate simulators.
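The basic fork/join pattern behind most parallel programming looks roughly like the following Python sketch: split the work into chunks, farm the chunks out to workers, and combine the partial results. (Threads are used here only to keep the example self-contained; CPU-bound work in Python would typically use processes instead, because of the global interpreter lock.)

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker computes its piece of the overall result independently.
    return sum(x * x for x in chunk)

data = list(range(1000))
chunks = [data[i:i + 250] for i in range(0, len(data), 250)]

# Fork/join: hand each chunk to a worker, then combine the partial results.
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))

assert total == sum(x * x for x in data)
```

Even in this toy form, the pattern shows why the discipline is hard: the programmer, not the OS, must decide how to split the work, how many workers to use, and how to merge the results, and real programs rarely decompose this cleanly.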

Perhaps a better way to deal with multiple cores is to rethink the way operating systems handle these processors, Probert said. "Really, the question is not how do we do parallel, but what do we do with all these transistors?"

The current architecture of operating systems is based on a number of different abstractions, he explained.

In the early days of computing, one program was run on a single CPU. When we wanted multiple programs to run on a single processor, the CPU time was sliced up into processes, giving each application the illusion that it was running on a dedicated CPU.
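The time-slicing idea can be sketched as a round-robin loop: each program runs for a fixed quantum on the one real CPU, then is preempted and sent to the back of the line. This is an illustrative simulation, not any real scheduler; `run_time_sliced` and its job names are invented for the example.

```python
from collections import deque

def run_time_sliced(jobs, quantum):
    """Round-robin: each job runs for at most `quantum` units, then yields.

    `jobs` maps a job name to its remaining work; returns completion order.
    """
    queue = deque(jobs.items())
    finished = []
    while queue:
        name, remaining = queue.popleft()
        remaining -= quantum                 # the job's slice on the one real CPU
        if remaining > 0:
            queue.append((name, remaining))  # preempted; back of the line
        else:
            finished.append(name)
    return finished

# Three programs share one CPU, each behaving as if it had the CPU to itself.
order = run_time_sliced({"email": 3, "music": 1, "browser": 2}, quantum=1)
assert order == ["music", "browser", "email"]
```

Each job only ever sees itself running; the interleaving is invisible to it. That invisibility is exactly the "dedicated CPU" illusion the process abstraction provides.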

The idea of the process was an abstraction, and it wouldn't be the last one. Once the OS started juggling multiple programs, it needed a protected space of its own, free from user and program interference. Thus was born kernel mode, separate from user mode, the space in which the programs run. In effect, kernel mode and user mode abstracted the CPU into two CPUs, Probert said.

With all these virtual CPUs, however, come struggles over who gets the attention of the real CPU. The overhead of switching between all these CPUs starts to grow to the point where responsiveness suffers, especially when multiple cores are introduced.

But with Intel and AMD predicting that the core count of their products will continue to multiply, the OS community may be safe in jettisoning abstractions such as user mode and kernel mode, Probert argued.

"With many-core, CPUs [could] become CPUs again," he said. "If we get enough of them, maybe we can start to hand them out" to individual programs.

In this approach, the operating system would no longer resemble the kernel mode of today's OSes, but rather act more like a hypervisor. A concept from virtualization, a hypervisor acts as a layer between the virtual machine and the actual hardware.

The programs, or runtimes as Probert called them, would themselves take on many of the duties of resource management. The OS would assign an application a CPU and some memory, and the program itself, using metadata generated by the compiler, would know best how to use those resources.

Probert admitted that this approach would be very hard to test out, as it would require a large pool of existing applications. But the work could prove worthwhile.

"There is a lot more flexibility in this model," he said.

Joab Jackson

IDG News Service