A few years ago, I noted that Moore's law (transistor density on chips doubling every 18 months) had expired except under a looser definition, and that that was about to end as well. So.. now it has. Graphics processors, general-purpose massively parallel processors, and DSPs indeed bravely held the torch for a moment, and are still all the rage as we (at last, kicking and screaming) adapt code to run not just in a few neatly organized threads, but in huge bazooka blasts of thousands to millions of simultaneous actions. The value of accomplishing more with fewer instructions is fading; it's becoming more about figuring out how to flatten an action into as many small things as can happen at the same time. Often by doing a lot of work that ends up nowhere, just in case – go ahead and start preparing the intricate 3D view in front of you before checking whether there's a solid colored wall in front of it, because hey, by the time that's determined, it's too late. Start processing what you'll do if the voice activation detects "Yes", and also "No", and "Cancel". Wasteful? Well, certainly explore other options, but really, during the next 1/800 sec cycle, processor-days of calculations can happen – the only alternative is to idle. Of course you'd rather put them toward a solid goal you'll have defined in a few seconds, but they'll still be there then too. There is, quite literally, more power than we can figure out what to do with.
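To make the "start everything, keep the winner" idea concrete, here's a minimal sketch in Python. The handler names and return strings are made up for illustration – stand-ins for whatever expensive preparation each branch would really do:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical handlers for each possible voice command -- placeholders
# standing in for expensive preparation work.
def prepare_yes():
    return "confirmed: proceeding"

def prepare_no():
    return "declined: staying put"

def prepare_cancel():
    return "cancelled: rolling back"

def speculative_dispatch(recognized_word):
    """Start preparing every possible response in parallel, then keep
    only the branch the recognizer actually settles on."""
    handlers = {"yes": prepare_yes, "no": prepare_no, "cancel": prepare_cancel}
    with ThreadPoolExecutor() as pool:
        # Kick off all branches before we know which one we'll need.
        futures = {word: pool.submit(fn) for word, fn in handlers.items()}
        # By the time recognition settles, the winner is likely done;
        # the losing branches simply get thrown away.
        return futures[recognized_word].result()

print(speculative_dispatch("yes"))  # -> confirmed: proceeding
```

Two of the three branches are pure waste – which is exactly the point: the cycles were going to idle anyway.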
Of course we *can* idle and save power. That's useful too. And we can throw raw power at stuff. Try every conceivable angle, not just the one or two we have time for. Throw in some AIs – boy, do they gobble up calculations by the trillions. It's a good problem to have, and thanks to Moore, we're used to having it. But now that we double every four years, down from the earlier two.. things are about to change again.
CMOS isn't quite up against physics yet, but another five years will take care of that. We can stack layers harder, fight the cooling issues. But after that, I think we're about to shift toward the less physical again. Everything used to be about physical action, movement, but then software completely dominated as the way to steer it. Which relied on hardware upping the ante. The next phase will be to take a deep breath and figure out what we actually have here, and how to organize it, at scale. We kind of quit optimizing anything that wasn't at the very lowest levels, because there ceased to be much of a point – hardware is fast and cheap – and then it became downright counterproductive. Elegant, fast code doesn't scale, nor can you trivially apply it to everything the way you can with hardware. It's hard to maintain, it breaks, and it becomes unwieldy at more than a linear pace. Keep it modular, dot all the i's and cross the t's and then dot the t's and cross the i's, because you can always buy faster hardware, but you can't buy fixing fragile design.
Well, you're about to not be able to buy faster hardware either, so buying tighter design is going to be hip again. Hardware already went through this. If you open old devices, they're very modular: lots of components, nice and neat, easy to reuse. There are even little squares of PCB sectioned off for each little module (so cute!). Now there's a half dozen ICs, if that. The speed of light is constant, power input is only sort of scaling.. you can't split RAM, CPU, LED driver, RF receiver, and so on into their own parts, no matter how neat it looks. You can barely even segment them into their own regions of the actual silicon wafer (still cute!). It all needs to meld, and post-68000 (late 80s, early 90s) it's all machine optimized – no more scatterbrained humans who can't keep track of a billion parts at once.
That, I think, is the kind of talent we'll be aiming at less cloak-and-dagger goals: optimizing away the separations, but without sacrificing stability.
Or, y'know, perhaps they'll invent something better than CMOS, or turn these newfangled AIs onto writing better AIs and kick off the singularity. Either way.