What Year 2000 bug?
I’ve been hearing all about the Year 2000 bug these days. In a little over a year, civilization as we know it will collapse into utter chaos. Or, depending on who you ask, nothing at all will happen and nobody needs to worry about a thing, because it’s all taken care of. Or anyway, it will all be taken care of by the time it matters. It’s the second group that scares me.
But what is going to happen? I mean, this is something that is getting a lot of press these days and nobody really seems to know how well, if at all, our institutions have prepared for this piece of programmers’ short-sightedness.
Well, it seems that, like everything else in this life, the Devil has taken residence in the details. A friend of mine has already been bitten by the Y2K bug. When he went to make a credit-card purchase, his card was declined. Why? The computer thought it had expired. The expiration date on his card is sometime in 2000, or as it says on the card, «00».
The bank that issued him the card doesn’t have a Y2K problem. They’ve already addressed it and have no plans to spend any more money on this particular bugbear. Unfortunately, the problem doesn’t end at that bank’s mainframes. It extends out to the millions of terminals across the country that business owners and consumers count on to make their transactions. Make no mistake: many of those terminals are already Y2K-compliant, but nobody really knows how many out there aren’t. Again, the problem is that the bank doesn’t recognize that there is a problem.
Furthermore, there’s the self-fulfilling-prophecy theory. People panicking about losing all their money, or even prudently stocking up on cash in case there’s a problem with their accounts while the banks fix things, will create a cash shortage by demanding actual money, not numbers in a book. The vast majority of the money in the world is not printed on paper but floats around electronically. If there’s not enough paper money to go around, it’s 1929 all over again.
Be all this as it may, I doubt that anyone really knows whether this is a problem of disastrous proportions. Those who most loudly decry the end of the digital world are those who are in a position to make money from it. There is no doubt that a small group of COBOL programmers have found their skills to be marketable once again. Probably more marketable than they were when they built the programs that they are now being hired to fix.
There is no question that there will be some problems stemming from the Y2K bug. And there’s no question that the reason we got into this mess is that computing probably made the jump from academic to commercial too fast. It’s one thing for an experimental system to be designed with no thought to the future. It’s quite another for the systems on which our economy depends to have been designed with such an apparent lack of foresight.
The Y2K problem is a barely significant symptom of a much larger disease afflicting the computer industry. As schoolchildren we were told that haste makes waste, advice we are all too quick to ignore. In a bid to please stockholders, corporate America rarely looks six months into the future. Long-term benefit is a forgotten term. Corporate raiders destroy the future of companies in exchange for a better stock price today, and are hailed as heroes, because the beneficiaries have the voices and the victims do not.
If you bring a better mousetrap to the table, it had better be ready to go right out of the box. Management asks «what’s in it for me today?» and the software industry proves it speaks corporate lingo by answering a question with a question: «where do you want to go today?»
Computer Science theorists may enjoy the «luxury» of developing ideas, but any idea or new tool brought into an industrial forum has a limited time in which it must mature. «Yes,» the businessman says, «computerized accounting looks as though it does have many benefits. You’ve shown me all the pieces to convince me that it can be done. But we won’t pay for it unless you can deliver within the hour.»

Precious few companies in today’s market bother with research and development. Those that do maintain conservative growth and are not the darlings of the market. Historically, technology products that require greater development time lose in the market, either because they forfeit the precious FTM (first to market) status or because they are released prematurely with too many problems. One example that comes to mind is OpenDoc, scrapped when it could not compete with the already finished, although far less functional and useful, OLE, which was later renamed OLE2 and then ActiveX. Perhaps by now ActiveX has become as powerful as the OpenDoc architecture, but it is far more resource-hungry and inefficient, and it was not designed to be used by anyone but professional programmers. It is not an empowering and useful technology, but one that takes the power of computers farther out of the hands of the end users.
In this environment, is it any wonder that «academic» issues such as the longevity of the software architecture of the products one develops are left by the wayside? Why would a project manager, knowing their job was at stake, waste time and resources making certain that a product would still work ten, fifteen or twenty years down the line? By that time nobody could be held accountable for sloppy work anyhow. Even without threats to one’s livelihood, it is well-known that one can make ten times as much money from a job done in half the time. Who can be blamed if the extra resources don’t get put into making the product stable after delivery? Who would dare admit that the product still had faults to be fixed after the sale has gone through?
Whatever happens after midnight of December 31, 1999 will be the result of our own impatience and our own greed. It’s easy to blame the greed of the fatcat corporations, but the investors demanding greater and greater returns include ordinary people all the way down the economic ladder. Anyone with a 401K is a participant. Anyone who buys software or hardware from companies that don’t do their own research and development is an accessory. And those who make their technology decisions based on immediacy rather than technical merit, even a family buying a computer bundled with software, are holding the smoking gun to the problems we’ll encounter as the clock strikes midnight a little more than a year from now.
Here’s the real kicker, though. The «Year 2000 Bug» is sexy and easy to understand. Journalists can describe it halfway accurately, and even someone not versed in computer science can grasp the basics of the problem given a short, if simplified, description. It fits with what we understand from looking at digital clocks and auto odometers. Yet the Y2K problem isn’t something that must be addressed at the level of the operating system; it mostly affects applications and embedded systems.
I want to know what will happen roughly two billion seconds after January 1, 1970. «The Year 2038 Bug» has no sexy ring for journalists to latch onto, and explaining base-two arithmetic to the general populace is not as easy as pointing at a digital clock. The media and the general public won’t want to hear about it, and especially if Y2K turns out to cause minimal problems, no resources will be put into fixing it. Y2K has gotten a lot of press, yet most companies waited until the problem was less than two years away before beginning to address it. 2038 will be a tougher problem, because it involves both the applications’ handling of time and the operating systems’ handling of time. My guess is that no one will try to fix it until six months before disaster hits.
Unless of course, a few airplanes fall out of the sky this time. Maybe then we’ll learn our lesson. All this supposed progress and still we need people to die before we’ll learn a lesson?
I hope I’m wrong.