This entry is part 12 in the series Economics In RPGs

The Sydney Olympic Games were one heck of a good reason for a party, more than a decade in preparation. So it only makes sense to illustrate this article about the 1990s with this image of the opening ceremonies. The image is courtesy of Wikipedia and is considered to be in the US Public Domain – see the image page.

As usual in this series, I’ve decided to just push on from where we left off, without the preambles and without a synopsis. You should read Chapter 8.1 before starting this continuation of the article to get the most out of it. But you can dive right in if you want to – at your own risk.

One word of warning: I was in the IT industry in the period in question, and know an awful lot about it as a result. That creates a tendency to waffle on (which I have tried to fight against) and to disappear down (tangentially-relevant) rabbit holes (which I have actively tried to resist). That can have two outcomes: either I have skirted over something superficially that deserved greater attention on behalf of those less-informed, or I have delved too deeply into things that seem relevant to me, but which may not be so important in the eyes of others. It’s even possible to fail in both ways at the same time. If there’s anything that’s unclear, use this as a starting point and guideline for your own research.

The Digital Age, Third Period 90s–00s

The final decade of any century is always going to be a compound of conclusions and precursors signposting the beginning of the new. This is doubled and even tripled when we're talking about the end of a millennium. It was doubly true here in Australia and in Sydney as we ramped up not only for Y2K (like everyone else) but to host the Sydney Olympics; the city was obsessed with going the extra mile to make those games a spectacular success.

Sidebar: The Best Games In History

The reasons for this are an illustrative case study in how many influences can come together to create an irresistible trend. In this series, we’re primarily concerned with influences on the economy, and I’ll touch on that again at the end of this sidebar.

First, there was the inevitable desire to show our city and our nation off to the world. The economic benefits in terms of tourism were expected to last long beyond the games and even beyond the interval to the 2004 games, and in fact this was the outcome. It took COVID to bring them to an end, and now that most countries have reopened, some residual benefit undoubtedly lingers. But there was an increased emphasis on this because Australians are well aware of the tyranny of distance – we are a long way from anywhere else, especially from the US, Canada, and Europe, and needed to sell the nation as being worth that extra effort and expense.

Second, there was a dollop of inter-city rivalry. Melbourne had hosted the Olympics back in 1956, and while Sydneysiders had supported those games wholeheartedly, we wanted to seriously up the ante. Such friendly rivalries are a common element in Australian society – there are those who think that Australians approach everything as though it were a friendly game, and there's an element of truth in that.

Third, the preceding games had not been the greatest success. We were acutely aware that the then-president of the IOC had failed to deliver his usual pronouncement of 'the best games in history' over those games, and that earning that accolade would only enhance the perception of the city and of the event. What's more, everyone overseas knew that Australia felt that way – so there was an expectation. (The thing with expectations is that you either fail to live up to the hype, and are viewed as diminished as a consequence, or you meet or even exceed expectations, and gain added reputational luster as a result. The higher the expectation, the easier it is to fall flat. But if expectations are already going to be sky-high and you pull it off, the world is your oyster. We wanted to ensure that the hype, however elevated it was, would be seen as understatement afterwards – which demanded our going the extra mile to make visitors feel welcome. We did, and it worked.)

Fourth, this would be either the culminating sporting event of the century, of the millennium even, or the launchpad for those of the century and millennium to come. Either way, there was extra pressure to get it ‘better than right’.

Fifth, there was widespread community support for the Games. Making the event a success was a point of pride for almost everyone in the city – to the point where some of the volunteers and officials took time off work (at their own expense) to attend the two prior games and learn what to do, and what not to do. And they were so successful that they were actively recruited, to pass those lessons on, by a number of subsequent games and comparable events.

Sixth, Australia had developed an international reputation for hosting big events "better than anyone else", starting with the Formula 1 in Adelaide. The number of times teams, drivers, and officials reported that 'the Australian temporary facilities are better than those of many of the permanent tracks that we go to' started that perception, and it's been built on every year since.

Seventh, Australians as a group tend to be perceived – more accurately than not – as sports-mad. We regularly punch above our weight in sporting events, and pride ourselves on always making an opposition earn their victories, no matter how great the mismatch may be on paper. We successfully translated that into a perception of the games as a whole being a sporting event in which other host cities, past and future, were our rivals.

And, finally, there were the other economic and social benefits. These were estimated to be in the billions of Australian dollars pre-Games, and more billions in the course of the actual event. Afterwards, not only were there the anticipated tourism benefits, but the games infrastructure was designed and intended to be economically and socially productive. Other games had made such claims, and failed to deliver, and those lessons were harshly scrutinized as our plans moved forward. The result: the stadium is still in regular use, the Olympic Village is now a residential suburb, there was a marquee event designed to step into the Olympic aftermath to keep a positive view of the location active (which it successfully did for many years), and the games turned a significant profit for the state and the country as a whole.

One example of how this was achieved can be considered indicative: there was collaboration with tourism providers to produce integrated tour packages that either culminated in, or kicked off with, attendance at the Games. Australia may have been a once-in-a-lifetime destination because of the distance involved, but a lot of work was done to maximize the bang that people got for their buck, on the assumption (and I think it was calculated at the time) that every happy tourist would generate 2-point-something more in future years – and that would mean more work, and more money, for everyone.

So there were eight factors contributing to set the standard for our hosting. Some of them were more significant early on, to be largely supplanted and left behind as a culture of excellence took hold. But they were all pushing in the same direction. Other events have done their best to emulate the success, and achieved it at least somewhat, but there were factors – like the fourth item on my list – that they could not replicate.

In economics, there are often four factors pulling one way and three pulling in the other, creating an unstable tension with an overall (and temporary) trend that can be reversed by a quite small change or event. When the dice eventually all line up, the result is a tsunami of events, all but unstoppable; the most you can hope for is to cushion the impact. More often than not, these events are negative in nature; positive examples require a lot of hard work on every front. But sometimes you can aim for the stars, and hit the mark.

The end of one era is the beginning of another

Getting back to the main point, the timing means that everything that happens in the last decade of a century or a millennium is viewed either as a culmination – “everything has been leading to this” – or as the harbinger of the future. There is considerable truth to the concept that – rightly or wrongly – anything that can’t be characterized as the first is automatically assumed to be the second, which shapes policies – sometimes, when it shouldn’t.

There is therefore an inherent turbulence built into social and economic systems and perceptions at the start of a new millennium or century – but this is often masked and temporarily delayed from public perception by a sense of having made the milestone. Everyone relaxes afterwards, at least for a while, and feels secure – when perhaps they shouldn’t. The seeds of the First World War were undoubtedly laid by the Empires of the 19th century, and by their relationships and treaties. Intended to create peace and respect between the dominant powers of their day, no-one foresaw that they would lead instead to war on an unprecedented scale.

We have the benefit of hindsight, and so can see – looking back – the significance of the milestones marked off in the critical decade, whereas they were largely unappreciated or underappreciated or misinterpreted at the time. The lesson some people – including scholars – fail to incorporate into their perceptions and theses is that the same is true of EVERY time period you study. Seeing the forest for the trees is always a challenge, often not possible until you look back from some remove. At the time, critical events are just ‘stuff that happens’.

    Beginning: Invasion Of The PCs

    The 'stuff that was just happening' at the beginning of the ultimate decade of the 20th century was the tsunami of adoption of the Personal Computer. No-one, and I mean no-one, foresaw how big this movement would be, or what impact it would have.

      Office Computing

      Computers had been in big business for a while, but they cost so much – to purchase, to install, and to operate – that they were completely out of reach for even moderate-sized businesses. To change that, all three of these elements had to change.

      The PC, in any of its incarnations, solved the first problem, purchase price, to at least some extent. The IBM product, and its clones, solved it better than the Apple Mac, and the price difference became a leading cause of tribal contention at the time – were Macs worth the extra expense? Those in the pro-Apple tribe said yes, those in the IBM tribe said no, and there was a very narrow group caught in the middle that said "yes – for now." The latter were shouted down by everyone but were proven right in the end.

      Both solved the installation problem almost completely – to the point where such problems were viewed as unusual exceptions when they arose. You didn’t need a dedicated computer room to house these devices, you didn’t need expensive air-conditioning systems, etc – you simply plugged them into a wall socket or power-board and sat them on a desk. Job done. Very occasionally, a domestic power supply would not be stable enough for reliable operations, but these were so rare that they were viewed as a failure on the part of the electrical supplier. In effect, the definition of ‘reliable power’ was rewritten to meet the needs of the newly-ubiquitous technology.

      Nevertheless, such problems continued to crop up occasionally throughout the decade, slowly becoming less and less frequent as electricity providers caught up.

      At the start, the IBM product had no GUI (Graphical User Interface – i.e. mouse and pointer). It was a text-based system that was perceived as less user-friendly than the Mac. GUIs are inherently easier to learn, more intuitive. Early versions of Windows, and then Windows 95, changed that – at least partially – and Windows 98 took the best selling point of the Mac and welded it to the best of the IBM product to create a world-beater. Apple responded with better color and high-resolution graphics, but the writing was largely on the wall.

      But Windows had already taken the business world by storm long before Windows 95. There are three major legs of business application: what became known as desktop publishing, database entry and retrieval, and spreadsheets. The Macs had an undoubted edge in the desktop publishing arena – WYSIWYG (‘What you see is what you get’, in other words what you saw onscreen looked like the eventual printed page). Microsoft and the IBM-clones had the edge in the other two, and there was a lot of debate about which was more important, most of which missed the point; it wasn’t that the spreadsheets and databases themselves were more important, it was what you could do with an application built on top of these, using them as a ‘back-end’. Point-of-sale systems, inventory management systems, accounting systems, etc. What was more, these applications could then output information to template ‘forms’ in the word processors to produce invoices and sales records and what-have-you, integrating everything into one unified suite of products. As the potential for these integrated systems emerged, customer after customer was won over – especially since the price was so much better. WYSIWYG was nice, but not critical to operations; everything else permitted better management of resources and potentially greater profitability.

      Cautionary Tales

      Before moving on, let me address one of the biggest mistakes that people made at the time, and continued to make right through the 2000s. Computerizing an operation did not make for less work for staff, or less expense, but that was always the #1 reason offered for doing so. Unscrupulous salespeople took full advantage of this misperception, promising the earth, then washing their hands when it didn’t materialize.

      Computerizing an operation permits better management of the operation, if you structure the implementation to give you essential information in a timely manner. Interpreting that information is always up to management; the computer can't do it for you. You pay for this control by requiring (generally) more work from staff, and possibly even additional staff.

      You can't control the cost of implementation; what you can control is getting value for money. Toe-in-the-water implementations are doomed to inevitable failure; the more whole-hog you go, the greater the bang that you get for the buck, and the more likely the implementation is to be a success.

      In the mid-90s, I saw an estimate that 80% of computer implementations were disasters waiting to happen, and half of the rest had already been disasters that had been survived and learned from. Only one-in-ten computerizations had been successful, and half of those (or more) were down to blind luck. Bear these facts in mind when dealing with any business operation in such a time period.

      Back to the topic at hand – the invasion of the PCs into office spaces. I’ve covered two of the three problems that needed to be overcome, showing that one was only partially solved by Mac but fully solved by the IBM-clone; that both solved the second successfully; which brings me to the third, ease of operation. And, in fact, I’ve touched on a lot of aspects of this third problem, along the way. It’s worth remembering that prior to the PC, staff needed specialist training to operate computer equipment, often very expensive training. The new desktop computers did away with that need almost completely – or so it might seem at first glance.

      At first, the Macintoshes and Apple-IIs were comparable to, if not better than, the IBM-PC in the user-interface stakes. The problem was that there was so little software available for business applications on the Apples. The IBMs may have had a menu-based system, with keyboard shortcuts that needed to be memorized, but they had the software ready-to-go. And they had training available for using those applications, providing an instant productivity boost in terms of using the new system. Apples looked prettier (on-screen, at least) but had none of this infrastructure to make them business-friendly. They were selling raw beef and expecting the customer to do the cooking – which is fine at home, but not what you expect at a four-star restaurant.

      An example of benefits

      GMs running games in this era need to understand what made a computerized workflow successful, but this is really hard to pin down unless you were there at the time, because glossy promotional materials that promised the earth, coupled with a certain rose-tinted hindsight, obscure the truth. So here's a practical, if fictional, example.

      The computer gets brought in to manage sales and inventory for a small general store (I'll be using a 'corner store' in the Australian sense of the term, but it should be pretty close to similar retail operations the world over). This store has been operating at a profit for a number of years, but the profit margins are shrinking and the customers are starting to dry up. Instead of relying on general impressions of what's selling when, a year's worth of actual sales being recorded permits the owner to determine that some products sell better in certain seasons (which he already knew) but that there are a couple of spikes in demand because of holidays and the like. As a result, he is able to reduce the inventory that he is carrying of those products, stocking more of them when demand is about to increase; instead of $100,000 in stock, he now only has to keep $80,000 worth for most of the year, but at key points, he needs to carry $110,000 worth – explaining why he experiences difficulty in paying his bills at such times of year.

      That gives him $20,000 for most of the year to invest in products in greater demand – he uses half of it for that purpose, puts half of what's left towards those difficult bills, and intends to use the remaining $5,000 for promotional activities – sales and discounts and so on. Using the computer's sales records, he can determine week-to-week which promotions work (overall revenues go up) and which don't. So he grows his entire business by 10%. His staff are somewhat disgruntled, because they have to work harder, but their jobs are actually more secure because they know how the new system works – except for those who don't take the time to understand it. But he can now afford to give them a 2-4% pay rise, and still take more than half the extra earnings as profits. What's more, he's able to better cope with local changes in buying habits – stocking products that the nearby supermarket doesn't, for example, with a small overlap of products that might be needed unexpectedly or when the supermarket is closed.

      History shows that this won’t be enough in the long term to fight off the supermarkets, but this store won’t be amongst the early casualties.
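
      For GMs who want to put hard numbers in front of players (say, a PC who owns or audits such a business), the arithmetic above is easy to turn into a reusable sketch. The figures below are the hypothetical ones from the example, not real data, and the function names are my own invention.

```python
# A minimal sketch of the corner-store example above.
# All figures are the hypothetical ones from the example, not real data.

baseline_stock = 100_000   # stock carried year-round before computerization
offpeak_stock  = 80_000    # stock needed for most of the year, per the sales records
peak_stock     = 110_000   # stock needed around the holiday demand spikes

freed_capital = baseline_stock - offpeak_stock                # $20,000 freed up off-peak
new_products  = freed_capital / 2                             # $10,000 into higher-demand lines
bills_buffer  = (freed_capital - new_products) / 2            # $5,000 towards the peak-season bills
promotions    = freed_capital - new_products - bills_buffer   # $5,000 for sales and discounts

print(f"Peak-season stock requirement: ${peak_stock:,.0f}")
print(f"Freed working capital: ${freed_capital:,.0f}")
print(f"  New product lines:   ${new_products:,.0f}")
print(f"  Bills buffer:        ${bills_buffer:,.0f}")
print(f"  Promotions:          ${promotions:,.0f}")

def promotion_worked(revenue_with: float, revenue_without: float) -> bool:
    """The crude week-to-week test from the example: did overall takings go up?"""
    return revenue_with > revenue_without

print(promotion_worked(11_000, 10_000))   # True  - keep this promotion running
print(promotion_worked(9_500, 10_000))    # False - drop it and try something else
```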

      Home Computing

      Here’s a simple thought experiment: If the average small business has two-to-four employees plus the owner, what’s the more important market: the domestic computer, or the business computer package? The latter costs twice as much, maybe even three times as much, but we’re talking about 3-5 times as many home systems for every business installation. So the home systems market is the more important, right?

      Wrong. People are strongly resistant to learning to do things two different ways. If your workplace is using a Windows installation, you are twice as likely (or more) to prefer a Windows installation at home, too. And businesses like to encourage this, because it means that any experience or additional familiarity you acquire through using your home system makes you more productive with the business systems and vice-versa, so they start including 'familiarity with' in their job vacancy requirements. And salesmen cotton on quickly, and start discounting lower-end home systems, because if four of a business' staff already have familiarity with System X, it makes it more likely that the business will buy System X.
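
      For the record, here's the naive arithmetic that makes the 'home market wins' answer so tempting – a back-of-the-envelope sketch using only the ranges quoted in the thought experiment above. It's exactly this unit-and-revenue counting that the familiarity effect overturns.

```python
# Back-of-the-envelope version of the thought experiment above.
# The ranges are the ones quoted in the text; they are illustrative, not survey data.

business_price_multiple = (2, 3)   # a business package costs 2-3x a home system
home_units_per_business = (3, 5)   # owner plus 2-4 staff = 3-5 potential home systems

for multiple in business_price_multiple:
    for units in home_units_per_business:
        home_revenue = units * 1.0        # home systems priced at 1 unit each
        business_revenue = 1 * multiple   # one business package per installation
        print(f"{units} home systems vs 1 business package at {multiple}x the price: "
              f"home {home_revenue:.0f} vs business {business_revenue:.0f}")

# By raw revenue the home market looks bigger in every case - which is the trap:
# it was workplace adoption that decided WHICH home system people bought,
# so the business sale effectively came first.
```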

      On top of that, there are all those employees of businesses who had not computerized – and at the start of the decade, that was the majority. If you can capture that market segment, too…

      Computers had already started infiltrating the domestic market for entertainment purposes, but the rise of the business PC massively accelerated the process, and overwhelmingly, it was an IBM-clone that became the baseline home system.

      Bulletin Boards to Early Networks

      So you have a shiny new computer at home. It’s just going to sit there until YOU do something with it.

      A computer is a tool; you have to use it for something. There are a number of application areas that emerged at much the same time as PCs started infiltrating homes in a big way – games, art, writing/publishing, personal finances, well-being, automation, and communications. To these were soon added music and video. All of these were primitive to start with, and developed at different paces. Initially, a lot of development went into the computer games sector, because it made the best and fastest returns. In most of the other areas, you had one or perhaps two programs (if any), while you might have fifty or a hundred different games.

      At different times in the history of the PC, other applications came to the fore to (temporarily) dominate the landscape – the desktop publishing craze of the mid-90s, for example. No-one really expected communications to emerge from the pack as one of the most important to users' day-to-day lives. E-mail grew and grew in importance, slowly gaining ground and not relinquishing it until the late 2010s. But it started far simpler, as bulletin boards.

      Describing these for anyone who hasn't used one is phenomenally difficult, because describing the limitations takes far longer than the subject is worth. A smaller, simpler Reddit is probably the closest brief description that can be made, but it only tells half the story. Early examples had no subject differentiation, for example – everything went into one vast list of threads that had to be manually selected. There was no search function. There was no internet – you had to dial a specific number to get access to the bulletin board hosted at that number. To go to a different one, you had to hang up and dial a different number, establishing a new connection. You discovered new sites through word-of-mouth recommendations from other users.

      Over the course of the decade, the bulletin boards evolved into the internet, and spun off into chat rooms (which evolved into social media), and connections began to be extended into all sorts of other areas – when it became possible to email a fax machine, for example. And then along came the search engine and internet protocols so that one window could be used to "browse" from one site to another, all on related topics. Online trade was still in its infancy by the end of the decade, with people just barely beginning to grasp the potential.

      The history of this development is one milestone after another at lightning-fast pace, and WAY beyond the scope of this series. But some awareness of 'the state of the art' is necessary to properly simulate any point in history, and it's generally better described, from a modern perspective, by listing all the things that you couldn't do – with the knowledge that within two years, the list would have changed.

      Microsoft: The first Mega-corp

      The success of Windows made Microsoft the first in the modern generation of mega-corporations, a term unashamedly stolen from Cyberpunk. They didn’t just lead the industry, they were dominant. And they used this power to do some unsavory things in the corporate sense.

      Serious attempts were made to apply anti-monopoly legislation against them, but these generally fell foul of the fact that this was the product that everyone wanted. There were times when regulators were able to restrict and restrain the monolith – the browser wars come to mind, for example – but these were drops in the ocean. Even today, the monopolistic powers of mega-corps like Google are at best restrained by legislative authority.

      History shows us that these powers will persist until some fundamental shift in the technological foundation creates an opportunity for a new player to supplant the existing power base; the smartphone brought Apple back into prominence, but Google's Chrome has become even more ubiquitous. To the extent that anyone can be said to have won the Browser Wars, Google has perhaps the best claim to the 'trophy'.

      Here’s a very common potted business history: Someone writes an app that adds useful functionality to the then-current generation of Windows. It starts selling like hot cakes. Microsoft do one of three things: (1) decide it’s a flash in the pan, and not worth the effort of doing anything about; (2) buy the rights to the killer-app and bundle it into the next generation of Windows; or (3) develop their own version or buy the rights to a rival product and enhance it, then bundle that with the next generation of Windows (if not sooner, through an update). In two out of three cases, a prosperous entity within the IT universe vanishes – and the next version of Windows becomes that little bit more profitable, having something to sell and market to existing customers.

    Middle: Skirting Eco-disaster

    Who remembers the hole in the ozone layer? This was an eco-disaster in the making in the mid-80s, and in 1987 it led to a ban on CFCs and other chemicals known to cause ozone depletion.

      The ban came into effect in 1989. Ozone levels stabilized by the mid-1990s and began to recover in the 2000s.

      — Wikipedia, Ozone Depletion

      Recovery is projected to continue over the next century, and the ozone hole was expected to reach pre-1980 levels by around 2075. In 2019, NASA reported that the ozone hole was the smallest ever since it was first discovered in 1982.

      — Same source

    When considering the “Ozone Hole” (actually a misnomer), it has to be remembered that when the ban was instituted, no-one knew how quickly or completely the damage would be repaired. As the populated nation most closely affected, Australia paid particular attention to the situation for the next couple of years, until other policy priorities began to distract the government of the day.

    This is perhaps the most widely-recognized environmental disaster that was narrowly avoided in this era. It is far from the only one. Another one to assume prominence in Australia was Soil Erosion.

    In the US, acid rain was perhaps a bigger concern, threatening their aquaculture-based industries. This led to the 1990 Clean Air Act. How close to disaster was this? You can judge by the 1992 closure of all eastern seaboard fishing grounds because there had been insufficient recovery of stock.

    The 1990s saw increased awareness of the hazards of oil spills, soil and water contamination, toxic waste dumping, and chemical accidents.

    And then there’s Asbestos…

    Asbestos

      The use of asbestos in new construction projects has been banned for health and safety reasons in many developed countries or regions, including the European Union, the United Kingdom, Australia, Hong Kong, Japan, and New Zealand.

      A notable exception is the United States, where asbestos continues to be used in construction such as cement asbestos pipes.

      The 5th Circuit Court prevented the EPA from banning asbestos in 1991 because EPA research showed the ban would cost between US$450 and 800 million while only saving around 200 lives in a 13-year time-frame, and that the EPA did not provide adequate evidence for the safety of alternative products.

      — Wikipedia, Asbestos

      Before the ban, asbestos was widely used in the construction industry in thousands of materials. Some are judged to be more dangerous than others due to the amount of asbestos and the material’s friable nature.

      Sprayed coatings, pipe insulation, and Asbestos Insulating Board (AIB) are thought to be the most dangerous due to their high content of asbestos and friable nature. Many older buildings built before the late 1990s contain asbestos.

      — Same source

    The Australian Experience
    Asbestos and the diseases that it causes were far more prominent in Australia even than in countries that took strong action with respect to the construction material. We had as many asbestos-related fatalities here as did the UK, despite having only 1/3 the population, and there was ongoing litigation on behalf of miners throughout the 80s that kept the issue returning to the front pages.

      Western Australia’s center of blue asbestos mining was Wittenoom. The mine was run by CSR Limited (a company that had been the Colonial Sugar Refinery).

      — Wikipedia, Asbestos and the law

    The 1990 single Blue Sky Mine by environmentally and socially-aware rock group Midnight Oil exemplified the anger and resentment that was felt – not so much over the issue itself, but with the way those deemed responsible sought to dodge culpability.

    James Hardie Industries

      The main manufacturer of asbestos products was James Hardie, which set up a minor fund for its workers, then transferred operations to the Netherlands where it would be out of reach of the workers when the fund expired.

      — Wikipedia, Asbestos and the law

    Just to finish the story: In 2001, James Hardie separated two of its subsidiaries from the parent company to create the Medical Research and Compensation Foundation (MRCF), essentially an inadequately-funded dumping ground for the company's asbestos liabilities.

      Then CEO of James Hardie, Peter McDonald, made public announcements emphasizing that the MRCF had sufficient funds to meet all future claims and that James Hardie would not give it any further substantial funds.

      …The net assets of the MRCF were $293 million, mostly in real estate and loans, and exceeded the ‘best estimate’ of $286 million in liabilities which had been estimated in an actuarial report commissioned by James Hardie.

      — Wikipedia, James Hardie Industries

    The 2004 Jackson report (see below) later found that

      …this ‘best estimate’ was ‘wildly optimistic’ and the estimates of future liabilities was ‘far too low’.

      — Same source

    James Hardie then moved all of their operations to the Netherlands in an attempt to isolate the rest of the company from these liabilities.

    Such tactics created outrage here, and cemented public opinion firmly against what had been one of the most successful and respected companies in the country. But the story kept getting worse:

      Shortly after the move, an actuarial report found that James Hardie asbestos liabilities were likely to reach $574 million.

      The MRCF sought extra funding from James Hardie and was offered $18 million in assets, an offer the MRCF rejected.

      The estimate of asbestos liabilities was promptly revised to $752 million in 2002 and then $1.58 billion in 2003.

      — Same source

    James Hardie was dragged to the negotiating table kicking and screaming by the findings of the Jackson Report cited above, but eventually promised to set up a compensation fund – then stalled and delayed for another two years.

      It was not until November 2006, after the federal government had created ‘black hole’ tax legislation, which made the contributions of James Hardie into the voluntary fund tax deductible, and had granted the voluntary fund tax-exempt status, that James Hardie finalized the compensation deal.

      — Same source

    The saga continues!
    But the story still wasn’t over.

      In February 2007 every member of the 2001 board and some members of senior management were charged by the Australian Securities & Investments Commission (ASIC) with a range of breaches of the Corporations Act 2001 including breach of director’s duties by failing to act with care and diligence.

      ASIC also undertook investigations into possible criminal charges against the company’s executives but in September 2008 the Commonwealth Director of Public Prosecutions decided there was insufficient evidence and charges were not pursued.

      In 2009, the Supreme Court of New South Wales found that directors had misled the stock exchange in relation to James Hardie’s ability to fund claims. They were also banned from serving as board members for five years. Former chief executive Peter Macdonald was banned for 15 years and fined $350,000 for his role in forming the MRCF and publicizing it.

      — Same source

    The former directors other than Macdonald appealed, but the ruling against seven of them was upheld.

    Circling back to relevance
    At the start of the decade, there was a general perception that asbestos claims were related to the mining of the raw material and the manufacture of products that used it, and that the finished material itself was stable and safe to use.

      Asbestos cement, genericized as fibro, fibrolite (short for “fibrous (or fibre) cement sheet”) or AC sheet, is a building material in which asbestos fibres are used to reinforce thin rigid cement sheets.

      — Wikipedia, Asbestos Cement

    Since WWII, this product had been massively popular for quick and easy construction of homes and other structures. While it was used world-wide to some extent because of its resilience and affordability, it became a ubiquitous construction material in Australia and New Zealand through this period. I spent most of my youth living in Fibro-based houses. And, so long as it remained intact, those who saw no danger were correct.

    Damage or demolition, however, tore and shattered the sheets and other forms created using the material, releasing dangerous levels of asbestos fibre into the air and onto surfaces, where workmen could pick it up simply by touching them.

    Over the course of the decade, as this came to light, asbestos removal (abatement) and remediation measures became mandatory, and often expensive. Mitigating the exposures involved tents to confine the dust, high-quality masks and environmental suits for the workforce. At one point, continual wetting of the sources was thought to be necessary. The water itself that was used had to be captured and cleaned, because otherwise you were simply spreading the fibers around.

    Home renovations were a big thing in the Australia of the 90s and have stayed that way, led by TV shows such as Our House (not to be confused with movies or the US TV series). Our House (Australian) ran from 1993 to 2001, and arguably it would have continued if not for the untimely death of host and former Skyhooks front-man, ‘Shirley’ Strachan.

    It was far from the only one, though – "Better Homes And Gardens" (a TV series modeled on the Australian version of the American magazine) has been running for twenty-eight seasons, and a game-show-styled renovations reality program, "The Block", has appeared onscreen for a total of 19 seasons – two in 2003-2004, and the rest after a 6-year break. (The 19th season is currently airing.)

    So Asbestos abatement is a particularly big deal here, but is important elsewhere, too.

    Bottom line: Environmental considerations will crop up frequently and unexpectedly throughout the decade, but especially the latter half. That was when the general public started becoming aware of Climate Change.

    Middle: Rise Of The Smaller Device

    Smaller computing devices had been around for years, but exploded in popularity in the 90s.

      Laptops

      Laptops – if you can call the early attempts that – had been around almost as long as the computer itself. They weren't really portable until the Epson HX-20 of 1981, but this had only a small LCD screen and not the full "portable PC" experience. Displays reached 640×480 (VGA) resolution by 1988 (Compaq SLT/286), and color screens started becoming a common upgrade in 1991 (Wikipedia, Laptop). So these were the first generations of what would be recognized as a modern laptop.

      PDAs

      Parallel to these developments was the rise of the PDA – the first example of which was the Psion Organiser of 1984, but the category didn't really take off until the Psion Series 3 of 1991. And then they seemed to explode for the rest of the decade.

      If, as is arguably the case, the laptop evolved into the iPad, it was a merger between the iPad and the PDA that became the iPhone – the beginnings of the now ubiquitous smartphone. It can also be argued that e-book readers are a development of PDAs, with some technology from the iPad incorporated (bigger screens, for example).

      Pagers

      Before all of these was the Pager. These were first developed in the 50s and 60s, became popular in the 1980s, and were ubiquitous amongst certain professions and tiers of management throughout the 1990s. For a lot of the 2000s, they were still preferred over more capable devices by some government groups because they were perceived as more resilient services in the event of natural or man-made emergencies. They were also widely used in restaurants and medical facilities like hospitals.

      In Japan, more than ten million pagers were active in 1996 (Wikipedia, Pager). It’s a measure of the decline of the technology that on 1 October 2019, the last Japanese provider of pager services ceased operating. Everywhere, they are now being phased out, a technology that has reached its sunset.

      But in the 1990s, they were a ubiquitous presence. Some people had – and routinely used – more than one.

    Understanding what is popular at any given point in the era, what it can be used for, and its limitations and fallibilities, is essential to properly depicting the period (does everyone remember the school bullies and the Apple Newton from an early episode of The Simpsons?).

    End: Hope Fails

    Although it had been possible for most of the decade to balance good news against the occasional piece of bad, there were some historical developments that could not be dismissed so easily. Four developments in particular would create tensions – some of which would only be inflamed by subsequent decades.

    The end of Glasnost and Rise of Yeltsin

    Much of the following paraphrases content from the Wikipedia page for Mikhail Gorbachev.

    The Russian word from which Glasnost derives has long been used to mean “openness” and “transparency”. In the mid-1980s, it was popularized by Mikhail Gorbachev as a political slogan for increased government transparency in the Soviet Union within the framework of perestroika and the word entered the English language with that definition, especially in relation to the Soviet Union and Russian Federation.

    Under Gorbachev the ice not only thawed, it seemed to shatter; every time he had a summit with a Western leader, there was a positive outcome for both Soviet citizens and those in the West.

    To those who knew what to watch for, Gorbachev was straddling a fine line between hardliners and even more progressive elements, and in time this led to an attempted coup in August of 1991. In less than three days, the coup leaders had realized that they lacked sufficient popular support to continue, and had stood down. Boris Yeltsin emerged as a popular figure for standing up against the coup, giving a memorable speech atop a tank.

    Gorbachev pledged to reform the Soviet Communist Party but faced aggressive criticism from Yeltsin for having appointed many of the coup members to their positions of authority in the first place. His attempts at compromising between the two factions were now held to have been a mistake, and his authority evaporated; he had been pushed off that fine line by the conservative hard-liners, and was lost. Just two days after his return, he resigned as general secretary of the Communist Party and called on the Central Committee to dissolve.

    After the coup, the Supreme Soviet indefinitely suspended all Communist Party activity, effectively ending communist rule in the Soviet Union, and the collapse followed at breakneck speed. Many (who should have known better) celebrated a final victory in the Cold War, but most considered the resulting instability a greater threat than a Soviet Union under Gorbachev would have been. Yeltsin, now wielding greater authority than Gorbachev, stated that he would veto any idea of a unified state, instead favoring a confederation with little central authority. The referendum in Ukraine on 1 December, in which roughly 90% voted for secession from the Union, was a fatal blow; Gorbachev had expected Ukrainians to reject independence.

    Without Gorbachev’s knowledge, Yeltsin met with Ukrainian president Leonid Kravchuk and Belarusian president Stanislav Shushkevich in Belovezha Forest, near Brest, Belarus, on 8 December and signed the Belavezha Accords, which declared the Soviet Union had ceased to exist and formed the Commonwealth of Independent States (CIS) as its successor. Gorbachev was furious but impotent to preserve the Soviet Union, as one state after another ratified the new political structure. Forced to accept the fait accompli, Gorbachev announced that he would resign when the CIS became a reality.

    Gorbachev reached a deal with Yeltsin that called for Gorbachev to formally announce his resignation as Soviet president and Commander-in-Chief on 25 December, before vacating the Kremlin by 29 December. On the 26th, the Soviet of the Republics, the upper house of the Supreme Soviet of the Soviet Union, formally voted the country out of existence.

    Few people knew what to expect from Yeltsin and this new political entity, the CIS. And unpredictability always lends itself to uncertainty and doubt. The question that needed to be answered was how sincere Yeltsin was in his past proposed reforms, and how much of a political opportunist he had been.

    The fall of Yeltsin & the Rise of Putin

    As it happened, Yeltsin was sincere, but fell victim to the perpetual enemy of the idealist – wishful thinking. His policies were insufficiently robust and had too great a tendency to assume that things would always work out the way he thought they would. Or at least, that’s how many people came to see him after the fact.

    On at least two occasions, he survived attempts to impeach him, signaling the renewed presence of hardliners within the government ranks. In 1998, the Prosecutor General of Russia, Yuri Skuratov, opened a bribery investigation against Mabetex, a Swiss construction firm that held many contracts with the Russian Government, accusing its Chief Executive Officer Behgjet Pacolli of bribing Yeltsin and his family. Swiss authorities issued an international arrest warrant for Pavel Borodin, the official who managed the Kremlin's property empire. Stating that bribery was a common business practice in Russia, Pacolli confirmed in early December 1999 that he had guaranteed five credit cards for Yeltsin's wife, Naina, and two daughters, Tatyana and Yelena. Yeltsin resigned a few weeks later, on 31 December 1999, appointing Vladimir Putin as his successor.

    By some estimates, his approval ratings when leaving office were as low as 2%. Polling also suggests that a majority of the Russian population were pleased by Yeltsin’s resignation.

    — Paraphrased from the Wikipedia page for Boris Yeltsin.

    Yeltsin had come to be seen as a reasonably amiable drunkard by many in the West. Few recognized the empowerment of the Oligarchs for the potential threat that it became. There was considerable angst over an every-man-for-himself attitude amongst the military and ex-military, in particular the potential for the black market sale of nuclear weapons. This plot element features strongly in a number of Hollywood movies of the time. But Yeltsin had only sown the seeds; they would flower under Putin.

    Putin was a former intelligence officer who was generally prepared to play the long game.

      Following Yeltsin’s resignation, Putin became acting president and, in less than four months, was elected to his first term as president. He was subsequently reelected in 2004. Due to constitutional limitations of two consecutive presidential terms, Putin served as prime minister again from 2008 to 2012 under Dmitry Medvedev. He returned to the presidency in 2012, following an election marked by allegations of fraud and protests, and was reelected in 2018. In April 2021, after a referendum, he signed into law constitutional amendments that included one allowing him to run for reelection twice more, potentially extending his presidency to 2036.

      — Wikipedia, Vladimir Putin.

    At first, Putin's ascension seemed a good thing, bringing stability to an unstable situation. But rather than bringing the oligarchs into line, he encouraged them, indebted each of them to himself, then played one off against the others. At the same time, he blatantly rewrote the rules, as described above. He has now become obsessed with the notion of recreating the Soviet Union (by force, since there is no other way); step one was to have been the annexation of Ukraine. Part of that obsession is that 2036 time limit – there must be some reason why he can't simply rewrite the rules again. So he's bet the farm on a Ukrainian assimilation – and appears to be losing.

    But the seeds of the current whirlwind were sown way back then, I think.

    AIDS and the Death of Free Love

    Flower Power was all the idealism and hope of a generation in one package, and nothing was more symbolic of it than the free love movement. Made possible by the contraceptive pill, a hedonist expression of women's liberation, it was starry-eyed idealism at its most extreme. It's more than a little ironic that Flower Power withered and died before the sub-movement that it engendered; the impact of the Vietnam War was too great a cross for it to bear. The irony stems from the fact that the horror of seeing the actual fighting and the atrocities of war made peace seem all that much more desirable. The world might have been a very different place if the flower power movement had post-dated that conflict.

    But the legacy remained – sexual liberation – and its descendant movements remain with us today in the struggle for LGBT rights and recognition.

    The 1990s came close to killing what remained, however, as a disease first recognized in the previous decade seemed to target the promiscuous, and especially the gay community: AIDS, caused by HIV. It's not my intention to delve too deeply into this story; what I am more concerned with is the sense of despair that it engendered. At first, it was thought to be a disease that only afflicted gay men, but slowly it seemed to spread to those addicted to intravenous narcotics, and then to the general public.

    There was something close to a public panic, fueled by suspicion and paranoia, and which gave rise to massive levels of disinformation. I remember someone asking me if you could get it from giving someone a haircut, or shaking their hand. It didn’t quite reach the level where “breathing the same air” was suspicious and to be avoided, but any other form of contact was deemed “dangerous” by some.

    We've learned a lot, and learned to do a lot better, since those days, and AIDS is no longer a death sentence. In fact, there are indications that a full cure is not far away – the legacy of the bucket-loads of money that were eventually targeted at the disease. But, at the time, there was mortal fear in being anywhere near a potential victim, and any number of people who had done nothing wrong were made outcasts by the more fearful and intolerant elements of society.

    The imminence of Y2K

    Finally, we have Y2K, sometimes described as the Armageddon That Never Came (or other, equally-colorful terms). The “Millennium Bug” is now considered a non-event by the general public, in the same chicken-little vein of predictions of Armageddon by Planetary Alignment or the 2012 panic, or any number of other similar events.

    Unfortunately, there is a qualitative difference – the Y2K problem was very real, and the potential for disaster if nothing was done about it may have been a worst-case scenario, but it was otherwise equally real. To me, it's ironic that the 'non-event' was used to cast aspersions on the credibility of Climate Change – ironic, because the reason that Y2K wasn't a disaster is that a lot of people put in a lot of very hard and sometimes tedious work making sure that it wasn't a disaster, and that – a lot of very hard and sometimes tedious work – is exactly what is required to mitigate Climate Change.
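
    For anyone who never had to deal with it, the core of the problem was the decades-old habit of storing years as only two digits to save scarce memory and storage. Here's a minimal sketch of how that habit breaks at the rollover, along with the 'windowing' work-around that much of the remediation effort applied; the function names and figures are mine, purely for illustration.

```python
# A minimal sketch of the classic two-digit-year bug, assuming a legacy record
# format that stores only the last two digits of the year (as countless systems did).

def age_in_years(birth_yy: int, current_yy: int) -> int:
    """Naive age calculation on two-digit years - the kind of code that broke."""
    return current_yy - birth_yy

# Through 1999, this looks fine:
print(age_in_years(65, 99))   # 34: a customer born in 1965, calculated in 1999

# At the rollover, 2000 is stored as 00 and everything goes wrong:
print(age_in_years(65, 0))    # -65: the same customer, now "minus 65" years old
                              # (or "100 years overdue", depending on the sign convention)

# One common remediation, 'windowing', pivoted two-digit years around a cutoff
# instead of expanding every stored record to four digits:
def expand_year(yy: int, pivot: int = 30) -> int:
    """Interpret yy as 20yy if below the pivot, otherwise as 19yy."""
    return 2000 + yy if yy < pivot else 1900 + yy

print(expand_year(99))  # 1999
print(expand_year(0))   # 2000
```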

    But for those who claim that it was a non-event, consider the following list of actual events and consequences, from the same source cited above:

    • Before 2000:
      • Late 1998: Commonwealth Edison reported a computer upgrade intended to prevent the Y2K glitch caused them to send the village of Oswego, Illinois an erroneous electric bill for $7 million.
      • 1 January 1999: taxi meters in Singapore stopped working, while in Sweden, incorrect taxi fares were given.
      • Midnight, 1 January 1999: at three airports in Sweden, computers that police used to generate temporary passports stopped working.
      • February 8, 1999: while testing Y2K compliance in a computer system monitoring nuclear core rods at Peach Bottom Nuclear Generating Station, a technician, instead of resetting the time on the external computer meant to simulate the date rollover, accidentally changed the time on the operations computer. This computer had not yet been upgraded, and the date change caused all the computers at the station to crash. It took approximately seven hours to restore all normal functions, during which time workers had to use obsolete manual equipment to monitor plant operations.
      • November 1999: approximately 500 residents in Philadelphia received jury duty summonses for dates in 1900.
      • December 1999: in the United Kingdom, a software upgrade intended to make computers Y2K compliant prevented social services in Bedfordshire from finding if anyone in their care was over 100 years old, since computers failed to recognize the dates of birth being searched.
      • Late December 1999: Telecom Italia (now Gruppo TIM), Italy’s largest telecom company, sent a bill for January and February 1900. The company stated this was a one-time error and that it had recently ensured its systems would be compatible with the year rollover.
      • 28 December 1999: 10,000 card swipe machines issued by HSBC and manufactured by Racal stopped processing credit and debit card transactions. This was limited to machines in the United Kingdom, and was the result of the machines being designed to ensure transactions had been completed within four business days; from 28 to 31 December they interpreted the future dates to be in the year 1900. Stores with these machines relied on paper transactions until they started working again on 1 January.
      • 31 December, at 7:00 pm EST, Virginia, USA: as a direct result of a patch intended to prevent the Y2K glitch, computers at a ground control station in Fort Belvoir, Virginia crashed and ceased processing information from five spy satellites, including three KH-11 satellites. The military implemented a contingency plan within 3 hours by diverting their feeds and manually decoding the scrambled information, from which they were able to produce a limited dataset. All normal functionality was restored at 11:45 pm on 2 January 2000.
    • 1 January, 2000:
      • Australia: bus ticket validation machines in two states failed to operate.
      • Japan: machines in 13 train stations stopped dispensing tickets for a short time.
      • Japan: the Shika Nuclear Power Plant in Ishikawa reported that radiation monitoring equipment failed at a few seconds after midnight. Officials said there was no risk to the public, and no excess radiation was found at the plant.
      • Japan: at 12:02AM, the telecommunications carrier Osaka Media Port found date management mistakes in their network. A spokesman said they had resolved the issue by 02:43 and that it did not interfere with operations.
      • Japan: NTT Mobile Communications Network (NTT Docomo), Japan’s largest cellular operator, reported that some models of mobile telephones were deleting new messages received, rather than the older messages, as the memory filled up.
      • South Korea: at midnight, 902 Ondol heating systems and water heating failed at an apartment building near Seoul; the Ondol systems were down for 19 hours and would only work when manually controlled, while the water heating took 24 hours to restart.
      • South Korea: two hospitals in Gyeonggi Province reported malfunctions with equipment measuring bone marrow and patient intake forms, with one accidentally registering a newborn as having been born in 1900, four people in the city of Daegu received medical bills with dates in 1900, and a court in Suwon sent out notifications containing a trial date for 4 January 1900.
      • South Korea: a video store in Gwangju accidentally generated a late fee of approximately 8 million won (approximately $7,000 US dollars) because the store’s computer determined a tape rental to be 100 years overdue. South Korean authorities stated the computer was a model anticipated to be incompatible with the year rollover, and had not undergone the software upgrades necessary to make it compliant.
      • Hong Kong: police breathalyzers failed at midnight.
      • China: In Jiangsu, taxi meters failed at midnight.
      • Egypt: three dialysis machines briefly failed.
      • Greece: approximately 30,000 cash registers, amounting to around 10% of the country's total, printed receipts with dates in 1900.
      • Denmark: the first baby born on 1 January was recorded as being 100 years old.
      • France: the national weather forecasting service, Meteo-France, said a Y2K bug made the date on a webpage show a map with Saturday’s weather forecast as “01/01/19100”. Additionally, the government reported that a Y2K glitch rendered one of their Syracuse satellite systems incapable of recognizing onboard malfunctions.
      • Germany: at the Deutsche Oper Berlin, the payroll system interpreted the new year to be 1900 and determined the ages of employees’ children by the last two digits of their years of birth, causing it to wrongly withhold government childcare subsidies in paychecks. To reinstate the subsidies, accountants had to reset the operating system’s year to 1999.
      • Germany: a bank accidentally transferred 12 million Deutsche Marks (equivalent to $6.2 million) to a customer and presented a statement with the date 30 December 1899. The bank quickly fixed the incorrect transfer.
      • Italy: courthouse computers in Venice and Naples showed an upcoming release date for some prisoners as 10 January 1900, while other inmates wrongly showed up as having 100 additional years on their sentences.
      • Norway: a day care center for kindergartners in Oslo offered a spot to a 105 year old woman because the citizen's registry only showed the last two digits of citizens' years of birth.
      • Spain: a worker received a notice for an industrial tribunal in Murcia which listed the event date as 3 February 1900.
      • Sweden: the main hospital in Uppsala, a hospital in Lund, and two regional hospitals in Karlstad and Linkoping reported that machines used for reading electrocardiogram information failed to operate, although the hospitals stated it had no effect on patient health.
      • UK: In Sheffield, a Y2K bug that was not discovered and fixed until 24 May caused computers to miscalculate the ages of pregnant mothers, which led to 154 patients receiving incorrect risk assessments for having a child with Down syndrome. As a direct result two abortions were carried out, and four babies with Down syndrome were also born to mothers who had been told they were in the low-risk group.
      • Brazil: at the Port of Santos, computers which had been upgraded in July 1999 to be Y2K compliant could not read three-year customs registrations generated in their previous system once the year rolled over. Santos said this affected registrations from before June 1999 that companies had not updated, which Santos estimated was approximately 20,000, and that when the problem became apparent on 10 January they were able to fix individual registrations, “in a matter of minutes”. A computer at Viracopos International Airport in Sao Paulo state also experienced this glitch, which temporarily halted cargo unloading.
      • Jamaica: in the Kingston and St. Andrew Corporation, 8 computerized traffic lights at major intersections stopped working. Officials stated these lights were part of a set of 35 traffic lights known to be Y2K non-compliant, and that all 35 were already slated for replacement.
      • USA: the US Naval Observatory, which runs the master clock that keeps the country’s official time, gave the date on its website as 1 Jan 19100.
      • USA: the Bureau of Alcohol, Tobacco, Firearms and Explosives could not register new firearms dealers for 5 days because their computers failed to recognize dates on applications.
      • USA: 150 Delaware Lottery racino slot machines stopped working.
      • USA: In New York, a video store accidentally generated a $91,250 late fee because the store computer determined a tape rental was 100 years overdue.
      • USA: In Tennessee, the Y-12 National Security Complex stated that a Y2K glitch caused an unspecified malfunction in a system for determining the weight and composition of nuclear substances at a nuclear weapons plant, although the United States Department of Energy stated they were still able to keep track of all material. It was resolved within three hours, no one at the plant was injured, and the plant continued carrying out its normal functions.
      • USA: In Chicago, for one day the Chicago Federal Reserve Bank could not transfer $700,000 from tax revenue; the problem was fixed the following day. Additionally, another bank in Chicago could not handle electronic Medicare payments until January 6, during which time the bank had to rely on sending processed claims on diskettes.
      • USA: In New Mexico, the New Mexico Motor Vehicle Division was temporarily unable to issue new driver’s licenses.
      • USA: The campaign website for United States presidential candidate Al Gore gave the date as 3 January 19100 for a short time.
      • USA: Godiva Chocolatier reported that cash registers in its American outlets failed to operate. They first became aware of and determined the source of the problem on 2 January, and immediately began distributing a patch. A spokesman reported that they restored all functionality to most of the affected registers by the end of that day and had fixed the rest by noon on 3 January.
      • USA: The credit card companies MasterCard and Visa reported that, as a direct result of the Y2K glitch, for weeks after the year rollover a small percentage of customers were being charged multiple times for transactions.
      • USA: Microsoft reported that, after the year rolled over, Hotmail e-mails sent in October 1999 or earlier showed up as having been sent in 2099, although this did not affect the e-mail’s contents or the ability to send and receive e-mails. [To me, this sounds like an error in the Y2K patch.]

    …and there are about as many problems again that took effect after January 1. Some of the problems involved February 29 – the rule is that a century year (one ending in 00) is not a leap year unless it is divisible by 400, which 2000 is, so 2000 WAS a leap year, catching out software that only knew the simpler 'no leap year in 00 years' rule. And there were a number that took place on 31 Dec, 2000, or Jan 1, 2001, also often leap-year related. And there have even been a couple of significant errors come to light in the years since – the destruction of NASA's Deep Impact spacecraft has been blamed on a time tagging error, for example.
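
    For reference, here's the full Gregorian rule in a minimal sketch:

```python
# The Gregorian leap-year rule that tripped up some Y2K-era software:
# every 4th year is a leap year, EXCEPT century years (ending in 00),
# UNLESS the century year is divisible by 400 - which 2000 is.

def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap_year(1900))  # False - a century year not divisible by 400
print(is_leap_year(2000))  # True  - the case that caught out lazy implementations
print(is_leap_year(1996))  # True  - an ordinary leap year
print(is_leap_year(2100))  # False - the next century year to watch out for
```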

    In most cases, these are minor errors, though my heart goes out to the UK mothers who had abortions because of a miscalculation of the risk of Down syndrome. But there are enough of them, and serious enough here and there – the Japanese nuclear power plant, for example – to show what could have happened.

    Y2K was NOT a non-event. But we survived it, and rolled into the new millennium.

Wow, I can’t believe how much space and time it took to get through all that! There’s absolutely no time to take this article further. So, next week: the 2000s and (hopefully) beyond!


