For a generation of us, the Y2K Bug was more than just a joke. We are the ones who can remember what it was like in the days leading up to the turn of the century, when nobody was sure what to make of this obstacle that the so-called "experts" were so certain we'd face. It is now 2009, and there is a generation of you out there who were probably too young at the time to remember any of this. So just what was the Y2K Bug?

Take a moment from reading this article to reflect on all of the computers you use in your everyday life. Aside from the obvious personal computer at home, work, or the public library (otherwise, you wouldn't be reading this), there are the computers you use every day without thinking much about them. Perhaps you are a student enrolled in a large high school or college, and your schedules are all generated electronically. When you go out to the grocery store, there are machines involved in every exchange of money, from the ATM to the cash register to the credit card reader. You are constantly surrounded by machines with embedded microchips regulating their functions: coffee makers, CD players, televisions, automobiles, hospital equipment, and so-called "smart" versions of various other everyday objects. Now imagine that every such computer and computer-related component stopped working. What if all of them, all over the world, were to suddenly cease operating at the exact same time?
The immediate response is to dismiss such an event as an impossibility, and while that may be true today, back in the late '90s no one was so certain. How could such a thing happen in the first place? To answer that question, one needs to look back at the history of computers and their programming.
In the early days of industrial computing, back when a single computer filled an entire room and stored data on magnetic tape, space in a system's memory was limited. That space was filled with binary digits known as bits, each representing either a zero or a one. Memory was at such a premium that programmers had to cut corners wherever possible to make sure a program had enough of it to perform its basic function. One popular shortcut was to drop the leading digits of the date, cutting the year 1983 down to just 83. This piece of programming methodology became a de facto standard over time and saw continued use long after advances in technology eliminated the need to scrimp on memory. Indeed, because technology was advancing so fast, many programmers assumed their programs would fall out of use in as little as five years' time, so the range of dates a program could recognize was essentially a non-issue.
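As a rough sketch of what that shortcut looked like, here is a hypothetical record layout in C (my choice of language for illustration; most business systems of the era actually used COBOL or assembly), storing only the last two digits of the year and hard-coding the century on output:

```c
#include <stdio.h>

/* A hypothetical legacy-style record: only the last two digits of
   the year are stored, with the "19" prefix simply assumed. */
struct date_record {
    int yy;   /* 83 means 1983 */
    int mm;
    int dd;
};

int main(void) {
    struct date_record invoice = { 83, 6, 15 };   /* June 15, 1983 */

    /* The full year is reconstructed by hard-coding the century. */
    printf("Invoice date: %02d/%02d/19%02d\n",
           invoice.mm, invoice.dd, invoice.yy);
    return 0;
}
```

Two digits instead of four saved space in every stored record, and those savings added up quickly when memory and tape capacity were measured in kilobytes.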
Unfortunately, the programmers guessed wrong. Constant upgrades were expensive, so many companies continued to run outdated software even as the use of computers spread and microchips permeated every aspect of our society. These legacy programs did their jobs well, even though newer systems were on the market; newer wasn't always better, especially when you only needed a few of the functions offered by a particular software package. (For example, how many of you really use Microsoft Word 2007's built-in translator?) Since such programs recognized only the last two digits of the year in all calculations involving dates, it was believed they would behave unpredictably or simply stop working when 1999 rolled over into 2000. In the best-case scenario, the computer would continue to function but would incorrectly assume the year was 1900 instead of 2000, returning wrong results whenever a calculation involving dates came up.
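To make that failure concrete, here is a minimal sketch in C (again my own illustration, not code from any real system; the function and variable names are hypothetical) of how two-digit date arithmetic breaks at the century rollover:

```c
#include <stdio.h>

/* Hypothetical illustration of two-digit date arithmetic failing
   at the century rollover. */
static int years_elapsed(int start_yy, int current_yy) {
    return current_yy - start_yy;
}

int main(void) {
    int opened_yy = 95;   /* an account opened in 1995 */

    /* In 1999 the math works out: 99 - 95 = 4. */
    printf("Account age in 1999: %d years\n",
           years_elapsed(opened_yy, 99));

    /* In 2000 the two-digit year wraps to 00, and the same
       calculation yields 0 - 95 = -95: the system effectively
       believes it is 1900, not 2000. */
    printf("Account age in 2000: %d years\n",
           years_elapsed(opened_yy, 0));
    return 0;
}
```

A negative account age is the mild outcome; a program that feeds a value like that into billing, interest, or scheduling logic can fail in far less obvious ways.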