Multicore vs the Programmers

The processors, Moore’s law and all that

So far, the increase in computing power has come from faster processors with increasingly complex instruction sets and longer pipelines. Some extra increase, mainly in servers, has come from multiprocessing. Fine, except that both the deeper pipelines and the higher clock speeds were running into a power and heat problem. Dynamic power scales roughly with voltage squared times frequency, and since the voltage has to rise along with the clock, the consumed power (produced heat) grows much faster than the clock itself. There's an increasingly steep wall there.

Then, more recently (the last year or two?), both AMD and Intel embarked on the road of multicore, where each core is actually slower than the fastest single-core processors of today. As you increase the number of cores, power consumption and heat go up only linearly, that is, roughly in step with the increase in computing power.

But (there's always a but): if we want the increase in computing power to continue, software that classically didn't care, or didn't need to care, now has to use an increasing number of cores to tap the increased availability of computing power. Put another way, the continued increase in computing power comes in the form of more cores, *not* anymore as more power per core.

So, why should my apps care? They work just fine now, don’t they?

Yes, they do. But as everyone knows, apps steadily consume more and more power just to stay useful to about the same degree as before. For instance, a typical accounting package may take 2 seconds to post a new invoice today, and took the same 2 seconds five years ago, in the version that was current then, on machines that were current then. Why? Because just as machines get more powerful, display resolutions go up, the frameworks the apps are built on get more sophisticated (read: slower), the apps themselves do more (the invoice is more complex today than it was five years ago), more sophisticated input devices are used, and so on. IOW, the advancing power of the machine is consumed by the advancing power needs of the apps, while the business advantage of using the app remains about the same.

Take your average accounting package. It's single-threaded, I'm sure. Now, let it remain single-threaded in the future, and what happens? Well, your average desktop still gains power, doubling every 18 months, but now that's chiefly done by increasing the number of cores. Each core maybe gains just 50% per 18-month cycle. Meanwhile, the frameworks grow even more obese, the display surface grows disgustingly, and your app becomes a kitchen sink of little features. Your single-threaded app now effectively becomes slower with each iteration. After 18 months, that one invoice takes 3 seconds to post. After another 18 months, maybe 5 seconds, and so on. Sooner or later you're going to realize you'll have to find a way to get that invoice posted using more than one core, or you'll be history.

Ok, I buy it, but how can I post an invoice using multiple cores?

The CPU vendors have tried to make the CPUs parallelize instruction streams with some success (the pipeline idea) and even reorder instructions. You can only take this so far, and no really new performance tricks will come from it, I'm sure. It's exhausted. The compiler vendors have tried to create parallelizable (what a word…) machine code. Very limited success. The framework people have tried. Not much came of that. This leaves us, the architects and developers, to do it. And it may go like this:

To get an invoice posted, you need the line items, the customer, the customer's current credit, the stock situation, etc. Classically, the line items would be entered, the price and availability would be retrieved for each item, and the sums computed. Then the customer's credit line would be compared to the total. The invoice number would be reserved. The whole enchilada would be written to disk and then sent to the printer. For instance. It's a very linear process and tough to parallelize.
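For concreteness, here's a minimal Java sketch of that linear flow. Every name in it (fetchCustomerCredit, reserveInvoiceNumber, and so on) is a made-up stand-in, and the point is simply that each call blocks until the previous step is done:

```java
// Hypothetical sketch of the classic linear flow: every step waits for the
// previous one, so only one core is ever busy. All service names are made up.
public class SequentialInvoicePosting {

    public static void main(String[] args) {
        double[] lineItems = {120.0, 75.5, 310.0};

        double total = 0;
        for (double item : lineItems) {                      // price + availability, item by item
            total += item;
        }
        double creditLimit = fetchCustomerCredit("C-42");    // blocks until the credit service answers
        if (total > creditLimit) {
            throw new IllegalStateException("credit denied");
        }
        int invoiceNumber = reserveInvoiceNumber();          // blocks again
        writeToDisk(invoiceNumber, total);                   // then persist
        sendToPrinter(invoiceNumber);                        // and only then print
    }

    static double fetchCustomerCredit(String customerId) { return 1_000.0; }
    static int reserveInvoiceNumber()                     { return 4711; }
    static void writeToDisk(int no, double total)         { System.out.println("saved invoice " + no); }
    static void sendToPrinter(int no)                     { System.out.println("printed invoice " + no); }
}
```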

Now, if you changed the *business* procedures and the general architecture to allow you to create invoices tentatively, like for instance:
– sum up the entered line items
– at the same time, retrieve the customer credit info
– at the same time, reserve and retrieve an invoice number
– at the same time, issue a pick list for the warehouse

Then, when the customer credit info comes back, you can:
– approve the invoice, or
– deny the invoice and have it reversed
– at the same time, issue a restocking list if the pick list was already issued

…etc… I hope you get the idea, since I’m tiring of this example.
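For the curious, here's a minimal Java sketch of that fan-out using CompletableFuture. The services are again invented stand-ins, with sleeps standing in for real round trips, and the credit check decides whether the tentative invoice gets approved or reversed:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical parallel posting: the independent steps run concurrently and
// the outcome is decided once the credit info and the total are both in.
public class ParallelInvoicePosting {

    static double sumLineItems(List<Double> lineItems) {
        sleep(200);                                   // pricing lookups
        return lineItems.stream().mapToDouble(Double::doubleValue).sum();
    }

    static double fetchCustomerCredit(String customerId) {
        sleep(300);                                   // credit-service round trip
        return 1_000.0;
    }

    static int reserveInvoiceNumber() {
        sleep(100);                                   // number-sequence service
        return 4711;
    }

    static void issuePickList(List<Double> lineItems) {
        sleep(150);                                   // warehouse system
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Double> lineItems = List.of(120.0, 75.5, 310.0);

        // Fan out the independent steps so each can occupy its own core.
        CompletableFuture<Double> total   = CompletableFuture.supplyAsync(() -> sumLineItems(lineItems), pool);
        CompletableFuture<Double> credit  = CompletableFuture.supplyAsync(() -> fetchCustomerCredit("C-42"), pool);
        CompletableFuture<Integer> number = CompletableFuture.supplyAsync(ParallelInvoicePosting::reserveInvoiceNumber, pool);
        CompletableFuture<Void> pickList  = CompletableFuture.runAsync(() -> issuePickList(lineItems), pool);

        // When the credit info comes back, approve or reverse the tentative invoice.
        CompletableFuture<String> outcome = total.thenCombine(credit, (sum, limit) ->
                sum <= limit ? "approved" : "denied, reverse and restock");

        pickList.join();
        System.out.printf("Invoice %d: %s (total %.2f)%n", number.join(), outcome.join(), total.join());
        pool.shutdown();
    }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```

The plumbing isn't the point; the point is that the independent steps now keep several cores busy at once, and the decision is taken whenever the slowest piece of information arrives.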

The gist of the above is that to construct scalable business apps using multithreading, you have to adapt both your business procedures and your high-level architecture and design. It entails both distributed computing and asynchronous messaging to a very large degree, something that current business apps lack almost entirely.
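To make "asynchronous messaging" a little less abstract, here's a toy in-process sketch where a blocking queue stands in for a message broker. The names and the single request/reply pair are my own invention, not anybody's real protocol:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal asynchronous messaging: the order-entry side posts a request and
// keeps working; a separate consumer (standing in for a remote credit service
// behind a broker) answers on another queue.
public class AsyncMessagingSketch {

    record CreditRequest(String customerId, double amount) {}
    record CreditReply(String customerId, boolean approved) {}

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<CreditRequest> requests = new LinkedBlockingQueue<>();
        BlockingQueue<CreditReply> replies = new LinkedBlockingQueue<>();

        // The "credit service" consumer, running on its own thread (core).
        Thread creditService = new Thread(() -> {
            try {
                CreditRequest req = requests.take();
                replies.put(new CreditReply(req.customerId(), req.amount() <= 1_000.0));
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        creditService.start();

        // The order-entry side fires the request and carries on (issues the
        // pick list, reserves the invoice number, ...) before waiting for the reply.
        requests.put(new CreditRequest("C-42", 505.5));
        System.out.println("pick list issued, invoice number reserved...");
        CreditReply reply = replies.take();
        System.out.println("credit " + (reply.approved() ? "approved" : "denied, compensate"));
        creditService.join();
    }
}
```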

Ok, I got this far, so what’s the frickin’ problem?

The problem is that having parallel, asynchronous, and distributed processes running requires both what we call "compensating transactions" and a contention-free data model. Neither of these is particularly simple to handle, and neither is particularly well taught or even widely known. On top of that, you'll get race conditions.
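Race conditions deserve a concrete illustration; this little demo (mine, not from any real package) has two threads bumping the same unsynchronized counter, and updates simply vanish:

```java
// Two threads increment a shared counter without synchronization.
// The read-modify-write is not atomic, so updates are lost.
public class RaceConditionDemo {
    static int invoicesPosted = 0;   // shared, unprotected state

    public static void main(String[] args) throws InterruptedException {
        Runnable worker = () -> {
            for (int i = 0; i < 100_000; i++) {
                invoicesPosted++;    // not atomic: read, add, write back
            }
        };
        Thread a = new Thread(worker);
        Thread b = new Thread(worker);
        a.start(); b.start();
        a.join();  b.join();
        // Almost never prints 200000, and the result differs from run to run,
        // which is exactly why such bugs are so hard to reproduce.
        System.out.println("invoicesPosted = " + invoicesPosted);
    }
}
```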

“Compensating transactions” require you to design all transactions so there exists a compensating transaction for each. (An issue for another thread, some other day, if anyone would really like to hear it.)
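As a rough illustration of what that can look like in code (my sketch, not a prescribed pattern), each forward step registers its compensation, and on failure the compensations run in reverse order:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical compensating-transaction sketch for the invoice example:
// every forward step pushes an "undo" action; if a later step fails,
// the undos run in reverse order to unwind the tentative invoice.
public class CompensatingTransactions {

    public static void main(String[] args) {
        Deque<Runnable> compensations = new ArrayDeque<>();
        try {
            reserveInvoiceNumber();
            compensations.push(CompensatingTransactions::releaseInvoiceNumber);

            issuePickList();
            compensations.push(CompensatingTransactions::issueRestockingList);

            checkCustomerCredit();   // suppose this step fails
        } catch (RuntimeException denied) {
            // Roll the tentative invoice back by running compensations in reverse.
            while (!compensations.isEmpty()) {
                compensations.pop().run();
            }
            System.out.println("invoice reversed: " + denied.getMessage());
        }
    }

    static void reserveInvoiceNumber() { System.out.println("invoice number reserved"); }
    static void releaseInvoiceNumber() { System.out.println("invoice number released"); }
    static void issuePickList()        { System.out.println("pick list issued"); }
    static void issueRestockingList()  { System.out.println("restocking list issued"); }
    static void checkCustomerCredit()  { throw new RuntimeException("credit denied"); }
}
```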

Now, all these techniques have to be learned by the current generation of instant programmers, who are taught to drag-and-drop themselves an application in zero time, and who get no support from their environments (computing or business) for anything multiprocessing beyond the primitives, which have been there for ages.

On top of that, there are no tools (or exceedingly few and primitive tools) to debug and test distributed, multithreaded, and asynchronous messaging apps.

And on top of that (it’s getting to look like a real heap), bugs in distributed systems are, as I already intimated, exceedingly difficult to reproduce.

IOW, you're shoving a raft of advanced techniques into the lap of a generation of programmers who aren't prepared in any way to handle this stuff. So, what do you think will happen?
