This article is dedicated in memory of Dr. William (Bill) Bail in appreciation for his decades-long commitment to national security and software engineering excellence. Thank you for priceless years of guidance, wisdom, and mentorship.
In November 1998, I authored a magazine cover article titled “19 Infosecurity Predictions for 99” for Information Security Magazine’s (an ICSA publication) Crystal Ball Forecast edition for 1999. The World Wide Web (WWW) was in its infancy, as were many of today’s readers of this article, but the field of information/computer security was already vibrant and rapidly expanding.
Given that an entire generation of humanity has gone by since I first made these predictions (25 years, 1998 to 2023), it is entertaining, sometimes enlightening, and even a bit concerning to reflect on the predictions I made, whether or not they proved accurate, and how many of these cybersecurity challenges still need to be addressed today.
For example, here is one of those predictions: "Change Will Be Constant - As organizations evolve into virtual businesses, new opportunities – and, consequently, new threats – will continue to crop up daily. The challenges of secure e-commerce will continue to test the limits of technology; remote partners will require an increasingly higher level of access to critical business applications and information; hackers will develop new tools and techniques for breaking through today’s security barriers; savvy end-users will explore new ways to use (and abuse) their privileges."  Seems obvious today (in 2023), but in 1998 there were far fewer security threats - we didn't know what we were in for.
To set the context for these predictions, here is a visualization of what the Internet looked like back then.
The Internet sure has changed a lot since 1998.
(Yes, the image to the left is of the ARPANET from 1973 – quite literally the grandparent of today’s Internet. In 1998, people looked back 25 years to 1973 and reflected upon images like this one.)
Thank you to David Newbury for sharing that image via his Twitter account, and to his father for saving the original document for posterity.
Below is a list of each of the predictions from 1998. 
Understand that considerable “editorial craftsmanship” went into the exact wording of the original article and its predictions in order to fit a tight word-count and the spacing of printed pages (around graphics). As a result, the wording of some of these predictions is relatively pithy and provocative, and without the benefit of the fuller narrative (I cannot quote the entire article due to copyright restrictions), a few of them do not immediately appear to be security-related. But they all most definitely have substantial security ramifications.
"19 Infosecurity Predictions for 99" 
(And now you know why I’m the guy you do not want to get stuck talking to at a cocktail party.)
The details and evidence for or against each prediction can be found below.
Fortunately, interested readers have an independent source with which to assess the outcomes of these predictions: Dr. Michel E. Kabay has been a diligent custodian and archivist of much of what has transpired in the field of computer/information security from 1997 through today via his “Infosec Year In Review” – an outstanding compendium of computer-security-related news article summaries from around the world.
As my prior colleague, the infamous and incomparable computer security expert and evangelist Dr. Dan Geer, once stated: “What I want is to predict the future. I want it for reasons that are no doubt emotionally clear, but I also want it because of my own definition of security: The absence of unmitigable surprise…. There is never enough time. I thank you for yours.” 
(This one was a “softball pitch” to get warmed up. They get better; trust me.)
In 1996, Microsoft released its Distributed Component Object Model (DCOM) which enabled software components to communicate with each other across networks, including the Internet. By 1998, software companies in the industry were beginning to ship large numbers of applications built using Microsoft’s DCOM, enabling those applications to interact with other applications’ components inside and outside of companies’ firewalls.
“One widely criticized aspect of the DCOM model, is that there is no absolute way of addressing an object instance – everything is done through object interfaces. As such, it can be difficult to manage a large set of worker object instances or temporarily disconnect and reconnect at a later time. Another problem DCOM is facing is that currently there is no good solution to the problem of keeping track of possibly thousands of objects spread over thousands of computers on the network. The user has to supply the network address of the host machine for the server object, or that address must be hard-coded in the client application itself.” – Tom Markiewicz, 1998 (Introduction to DCOM)
Additionally, advanced object-oriented languages like Java (initially released in 1995, with Java 2 following in 1998) were quickly gaining popularity, and features such as “object serialization” (added to Java in 1997), combined with the nascent CORBA brokers (v2.2 was released in February 1998), were enabling the sharing of business objects across networks and even between business partners.
Ok, so “millions of moving parts” did not quite come to fruition, but there were “thousands” of application components and objects inside any business’s network in 1999. And the same is certainly true even today, as we are presently discovering through organizational efforts to compile Software Bills of Material (SBOMs).
The image is of Sun Microsystems’ homepage @ www.sun.com in December 1998
In 1996, a Microsoft employee created the first virtual private network (VPN), then known as the Point-to-Point Tunneling Protocol (PPTP). (Apologies to SIPP from the U.S. Navy in 1992, SwIPe from Columbia University and AT&T in 1993, and IPsec from Trusted Information Systems in 1994.) In 1998, when I was publishing this prediction, companies were already adopting the use of VPNs. By 1999, the specification for PPTP was published, which facilitated the creation of other VPN offerings.
“As this technology was expensive in its infancy, the first VPNs were generally only used by businesses exposed to several threats that could compromise the confidentiality of their information or steal critical data when they were still using open internet connections. This special security was also needed so that remote users or operators could use the institution's files without running the risk of them escaping.” – “The First-Generation VPNs of the 1990s”
Knowing that companies were already trying to establish secured business-to-business channels over which to share business components and objects (see prediction #1 above), it was easy to recognize that any given company would want to terminate their incoming VPN connections in varying locations within their own corporate networks. A company’s supply chain partners’ VPN connections might terminate near its procurement application (so that it could place digital purchase orders directly with suppliers), whereas the same company’s customers’ VPNs might terminate near its sales applications (so that it could digitally receive purchases).
In 1998, VPN technologies were relatively unsophisticated by today’s standards. Gauntlet from Trusted Information Systems was one of the more advanced VPNs. It was easy to improperly configure a VPN (see details in prediction #3) and mistakenly bridge together networks, allowing traffic to flow freely between a company’s various business partners – which enabled them to see each other’s network traffic and enabled network intrusions to jump across corporate firewalls and along business supply chains. To complicate matters, in 1998 there was essentially no international body of law that could regulate liability for company-to-company security intrusions across national borders.
(Why is that image so blurry? Because 800 x 600 was the typical screen resolution in 1998. It’s here for nostalgic “effect”.)
By 1999, security-savvy companies were quickly establishing “numerous security perimeters” through which they could safely manage their Internet traffic separately with each business partner using VPNs that terminated in different network locations (sometimes even in specific geographies) within their own firewall. There’s more to “security perimeters” but the VPN discussion here should suffice to paint the picture as it was evolving in 1998 to 1999.
In 1998, firewalls were very elementary by today’s standards; they were basically either “packet filters”, “proxy server” firewalls (also known back then as “application-layer” firewalls), or “stateful inspection” firewalls. Packet filters were significantly faster at handling incoming network traffic. But proxy servers added substantially more security functionality.
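To make that distinction concrete, here is a toy sketch in Python of how a 1998-era packet filter decided (the rules and addresses are invented for illustration): it matches only on header fields such as protocol, destination port, and source address, and never inspects payload content, which is precisely the gap that application-layer proxies addressed.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_port: int
    protocol: str  # "tcp" or "udp"

# Each rule is (protocol, dst_port, allowed_src_prefix).
# Anything not explicitly allowed is dropped (default-deny).
RULES = [
    ("tcp", 25, ""),       # SMTP from anywhere
    ("tcp", 80, ""),       # HTTP from anywhere
    ("tcp", 23, "10.0."),  # telnet only from the internal net
]

def filter_packet(pkt: Packet) -> bool:
    """Return True if any rule allows the packet through."""
    for proto, port, src_prefix in RULES:
        if (pkt.protocol == proto and pkt.dst_port == port
                and pkt.src_ip.startswith(src_prefix)):
            return True
    return False
```

Note that a hostile payload riding inside an allowed HTTP connection sails straight through a filter like this; only a proxy that understood the application protocol could catch it.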
By 1999, leading firewall vendors such as Check Point and Network Associates began offering hybrid firewalls capable of both packet filtering and proxying for individual applications. (Network Associates changed its name to McAfee in 2004.) More advanced topological firewall configurations, such as support for DMZ capabilities, were not introduced until the early to mid-2000s (which also helps explain prediction #2’s need for “numerous security perimeters” in 1999).
“Increasingly, customers are finding that firewalls are blocking legitimate traffic and are keeping end users from accessing key applications. But firewall suppliers are having a tough time keeping up with the demand for new capabilities. One challenge is that the growth of remote access and electronic commerce has boosted the number of people trying to get into a network. In addition, those inside the firewall are looking to interact more with the outside world through technologies such as Internet telephony, audio streaming, and multimedia conferencing. They also want workgroup or database access….
“Not long ago, firewalls supported only a handful of standard applications, such as FTP, SMTP, telnet, and the World Wide Web. As users asked for Oracle and Microsoft database support, or pointed to new proprietary voice- or data-conferencing products they wanted to use, some firewall vendors upgraded their products. For instance, many vendors now support Progressive Networks' streaming protocols RealAudio and RealVideo.
“‘The hot requirements now are IP telephony, fax and the conferencing protocols H.323 and T.120,’ says Ray Suarez, product marketing manager at Axent Technologies, which sells the Raptor firewall. Axent is also hearing demands that its firewall support a proprietary voice and fax product from Clarent.
“One vendor, Check Point Technologies, went gung-ho with its Firewall-1 product by supporting almost 300 applications, including several security services from Security Dynamics and Axent.
“But there's always some unique or cutting-edge application not supported by any firewall. Because opening a port is considered a bit risky, a few firewall vendors offer tool kits and similar means to let the user prepare a custom proxy for an application-layer firewall or stateful inspection custom code….
“And Network Associates, which markets Trusted Information Systems' Gauntlet firewall, a product gained when Network Associates acquired the company, soon plans to release a proxy development tool kit. At present, the tool kit is used internally at Network Associates by a software-design team service that builds custom proxies for users by assignment.
“A recent custom project involved designing a proxy for the Internet Inter-ORB Protocol (IIOP), the data-exchange mechanism defined in the Common Object Request Broker Architecture. Using this new proxy, IIOP-based applications can be filtered through the Gauntlet firewall.” – Burned by firewalls (CNN and IDC) September 1998
Today, in 2023, every desktop operating system offers a built-in personal firewall, and companies require very specific configurations of these personal firewalls on every one of their employees’ work computers. In fact, with the shift to work-from-home sometimes leading to employees using their own computers, the use of personal firewalls is often mandatory for remote employees as an additional protection for the business information processing happening on those “remote” computers.
But back in 1998, there was no such thing as a “personal firewall”. Firewalls were only set up at companies’ network perimeters on the Internet. The use of home computers had not yet given rise to the need for “personal firewalls”, and it wasn’t until October 2001 that Microsoft shipped Windows XP with the first limited “Internet Connection Firewall” to protect a personal computer. A few minor security companies did release “personal firewall” offerings during 1999, but those offerings were rendered obsolete when companies like Microsoft and Apple began shipping their own “personal firewall” products by 2001.
“When I first heard of the personal firewall concept, it didn't make sense to me. Administrators used firewalls to secure enterprise networks. And knowing that firewalls are complex and expensive devices, companies assigned dedicated administrators to maintain the firewalls. Why would an individual user need one? How would someone without a background in transport protocols understand a firewall well enough to implement it?
"But as the Internet grew, it changed the structure of corporate business. Full-time high-speed access now lets more people work at home in a virtual-office capacity. Small remote offices leverage their Internet connections for connectivity to corporate offices. Yet many organizations still implement Internet connectivity via a simple router without protection. As the number of possible intrusion points to a network grows in step with the Internet, the time has come for Signal 9 Solutions' ConSeal PC FIREWALL for Windows NT….
"ConSeal PC FIREWALL is a low-cost personal firewall with capabilities comparable to those of more expensive corporate firewall products. This package lets you define rules specifying the traffic you will accept on your computer. By building rules based on protocol, IP address, service, direction of travel, and interface, you completely control what type of traffic you will let in and out of your system. A quick and easy installation requires only that you install the product's driver service in the Network applet of Control Panel under the Services tab. After you load the driver service, ConSeal PC FIREWALL is ready to protect your system.” – ITPro Today, May 1999
In 1998, business partners would share network traffic across early VPN offerings. Each VPN connection would be hand-crafted by the networking security administrators for the two companies on either end of a given VPN. And sharing encryption keys between companies was very much a manual process.
The IPsec protocol for sharing encryption keys was in its infancy, having been published in 1995 via RFC 1825 through RFC 1829. “In 1998, these documents were superseded by RFC 2401 and RFC 2412…. In addition, a mutual authentication and key exchange protocol Internet Key Exchange (IKE) was defined (via RFCs 2407, 2408, and 2409) to create and manage security associations. In December 2005, new standards were defined in RFC 4301 and RFC 4309 which are largely a superset of the previous editions with a second version of the Internet Key Exchange standard IKEv2.” 
The creation of IKE in 1998 was the enabler that resulted in “securing business partner access” becoming “easier”. Setting up and maintaining VPNs with one’s business partners became a lot simpler with IKE.
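At IKE’s core is a Diffie–Hellman exchange, which is what let two companies derive a shared key without ever transmitting it. A toy sketch (the modulus here is a small Mersenne prime chosen purely for demonstration; real IKE uses standardized 1024-bit and larger MODP groups and adds mutual authentication on top):

```python
import secrets

# Publicly agreed parameters: prime modulus p and generator g.
# 2**127 - 1 is a Mersenne prime; fine for a demo, far too small for real use.
p = 2**127 - 1
g = 3

def dh_keypair():
    """Generate a random private exponent and its public value g^priv mod p."""
    private = secrets.randbelow(p - 2) + 2
    public = pow(g, private, p)
    return private, public

# Each side generates a keypair and exchanges only the public value.
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()

# Both sides derive the same shared secret without it crossing the wire.
a_shared = pow(b_pub, a_priv, p)
b_shared = pow(a_pub, b_priv, p)
assert a_shared == b_shared
```

An eavesdropper sees only `g`, `p`, and the two public values; recovering the shared secret from those is the discrete logarithm problem, which is why automating this exchange (rather than couriering keys between administrators) made partner VPNs so much easier to stand up.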
“Currently, (in November 2001) some of the fastest-growing security challenges are those related to the growth of business-to-business (B2B) interactions and upcoming Web services…. During the past two years (1999 & 2000), as enterprises have opened their Internet-based communication and commerce channels to customers, partners, employees, and affiliates, [numerous] Internet- and Intranet-based portals have emerged. These portals have thousands or even millions of Web visitors requiring access to a wide range of content and applications.” – “Safe and Sound – A Treatise on Internet Security”, RBC Capital Markets, November 2001
The Melissa virus spread around the world (March 1999) via email in merely hours and the Chernobyl virus was first detected in 1998, but its payload was not triggered to be executed until April 26, 1999.
“Viruses and security holes actually caused real damage during the (1999) year, not the mere hype we’d seen before. Everyone knew Melissa, but that wasn’t even the year’s worst bug…. ‘Exploding e-mail’ - Security historian and virus industry cynic Rob Rosenberger in August (1999) quietly unveiled a technique to bring e-mail servers to a crashing halt…. Rosenberger created files that violated established protocol: COM files of zero length, Zipped files with no content, and other techniques. To the server, these methods don't make a difference, but many anti-virus and content scanners freeze when they scan such a file. The problem: When the scanners die, they take the servers with them. While several industry insiders had problems with the way Rosenberger announced the flaws -- and the fact that he targeted anti-virus software, few disputed the efficacy of the techniques.” – “The biggest computer bugs of 1999!”, ZDNET, December 1999
In 1998, Java applets and browser plug-ins containing malicious code were just starting to show up on the Internet. And because “personal firewalls” were not yet available (see prediction #4), there was little defense against these new and emerging threats. Users were turning off their browsers’ features in order to prevent these types of attacks. While the concept of “sandboxing” applications (running them in separate environments inside an operating system) was already available in Unix operating systems, it wasn’t until 2001 that browser vendors began introducing “sandboxes” into their browsers; and the early attempts at containing intrusive malware inside of “sandboxes” left a lot to be desired. To the left is the Monthly Rate of Virus Infections Per 1,000 PCs…
“Consumers' privacy also became a big issue in information security this year (1999). ‘Privacy was always related to information security,’ said Jason Catlett, president of pro-privacy Junkbusters Corp. ‘But I think 1999 was the year when lots of people really started taking Internet privacy personally. We have moved our lives into a place that isn't very secure or private, so everyone's understandably feeling kind of nervous and uncomfortable.’" – “The biggest computer bugs of 1999!”, ZDNET, December 1999
Full disclosure: Fred Cohen had already made the claim in 1987 that there is no algorithm that can perfectly detect all possible viruses. My 1998 prediction focused on intruders (implying an active human attacker). And I had based my prediction upon a number of professional papers published in the 1990s.
In 1998, the state-of-the-art for Intrusion Detection Systems (IDS) was to use your company’s firewall logs to detect intrusion attempts. Very few IDSs had been deployed, and most that did exist were co-located with a company’s firewall at the edge of its corporate network. These came to be known as “network-based intrusion detection” systems. With all the external connectivity (see prediction #2) and moving application components (see prediction #1), it quickly became impossible to detect intruders coming from all these new “directions” (sources of entry). So “host-based intrusion detection” (or “agent-based”) systems quickly arrived on the market.
“By the early 2000s, IDS started becoming a security best practice. Prior to then, firewalls had been very effective for countering the threat landscape of the 1990s. Firewalls process traffic quickly as they have no ‘deep packet inspection,’ meaning they have no visibility into the content and context of network traffic. Firewalls only have the ability to react based on port, protocol and/or IP addresses. In the early 2000s new threats like SQL injections and cross site scripting (XSS) attacks were becoming popular and these attacks would pass right by the firewall. Hence the real beginning of putting the IDS into use. The popularity of IPS (Intrusion Prevention Systems) would come later.” 
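The signature-matching approach behind those early IDS products can be sketched in a few lines of Python. The patterns below are illustrative toys, not a real rule set (production systems such as Snort ship thousands of rules with full packet and payload context):

```python
import re

# Illustrative signatures only: real IDS rule sets are far larger and
# match on protocol state and payload context, not just substrings.
SIGNATURES = {
    "sql-injection": re.compile(r"('|%27)\s*or\s+1\s*=\s*1|union\s+select", re.I),
    "xss": re.compile(r"<\s*script", re.I),
    "path-traversal": re.compile(r"\.\./\.\./"),
}

def scan_line(log_line: str):
    """Return the names of all signatures that fire on one log line."""
    return [name for name, pattern in SIGNATURES.items()
            if pattern.search(log_line)]
```

This also illustrates why such matching had to move onto hosts: the web-server log line being scanned here arrives over port 80, which a port-based firewall has already waved through.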
To the left are the estimated market shares (by vendor) for 2000, from International Data Corp (IDC)…projected to be $234 million total (worldwide) in 2000…
In 1998, the rise in the use of personal computers, the Internet for commerce, web browsers for early commerce, etc. had already opened Pandora’s (proverbial) Box regarding computer security. (See  for newspaper article summaries from 1998.) New vulnerabilities were appearing so quickly in 1999, that the MITRE Corporation created the concept of CVEs® (Common Vulnerability and Exposures) and launched the CVE® repository  in an attempt to catalog all the new security issues introduced by new technologies.
“With the advent of the Internet, we have entered a digital age in which information continues to gain ground as the currency of choice. Similar to pure money, true value of information (the new currency) will remain unrealized unless it is allowed to flow in a secure, trusted, and controlled environment – uncontaminated. Currently, the infrastructure to achieve this value creating flow of information centers on the Internet and is supported by numerous intranets and extranets distributed across thousands of organizations all over the globe. Even though these information and communication networks have seen tremendous innovation during the past five years, they continue to remain in a state of perennial modification due to their multi-layered, open standards-based architectures, heterogeneity of hardware and software components, and lack of built-in security capabilities. While encouraging innovation, the flexible architectures invite new security risks with every iteration of software and hardware upgrades, producing new waves of security vulnerabilities that simply append to their predecessors and are never completely fixed. The end result is that the current information infrastructure is riddled with more security holes than Swiss cheese.” – “Safe and Sound – A Treatise on Internet Security”, RBC Capital Markets, November 2001
Certainly, I was correct with this prediction because it sadly remains true even today in the 2020s (two decades later). The SolarWinds 2020 attack , and the Log4j’s “Log4Shell” exploit  are just two recent examples of legitimate technologies unknowingly and unintentionally “opening more security holes”. Even sadder is the fact that over the history of CVEs being cataloged, the growth of CVEs continues to increase geometrically over time. The overall state of security vulnerabilities and exposures is not improving. Software development teams need to focus far more effort on the security of their products and services because cyber liability laws are being established that will hold their companies legally responsible for security breaches.
This prediction was more focused on the use of audit trails as legal evidence when attempting to prosecute cyber criminals. In 1998, it was common practice to just keep a computer system’s audit logs on the computer system itself. But advanced attackers learned how to modify those audit logs on the computers which they had compromised, thereby erasing their footprints and most evidence of their intrusions.
By early 1999, companies were beginning to use their distributed systems management tools (such as BMC’s Patrol, HP’s OpenView, and IBM/Tivoli’s TME) to capture system log file entries and copy those entries off to external log sinks. Audit log entries stored outside of compromised systems could be used as legal evidence that a crime had been committed.
But it wasn’t until 2007 that SCAP (Security Content Automation Protocol) was established to provide a common format and structure for security-related audit log entries. Today, there are many commercial and open-source tools available for operating audit log sinks – even the major cloud vendors all offer cloud services for centralizing audit logs.
“In many real-world applications, sensitive information must be kept in log files on an untrusted machine. In the event that an attacker captures this machine, we would like to guarantee that he will gain little or no information from the log files and to limit his ability to corrupt the log files. We describe a computationally cheap method for making all log entries generated prior to the logging machine’s compromise impossible for the attacker to read, and also impossible to modify or destroy undetectably.”
– “Secure audit logs to support computer forensics”, ACM Transactions on Information and System Security, Volume 2, Issue 2, pp. 159–176 (1 May 1999)
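The tamper-evidence half of that paper’s idea can be sketched with a simple hash chain: each entry’s digest covers all previous entries, so altering or deleting an earlier record breaks every subsequent link. (This sketch shows tamper-evidence only; the paper additionally encrypts entries under an evolving key so that entries written before a compromise become unreadable to the attacker.)

```python
import hashlib

def append_entry(chain, message):
    """Append a log entry whose digest covers all prior entries."""
    prev_digest = chain[-1][1] if chain else b"\x00" * 32
    digest = hashlib.sha256(prev_digest + message.encode()).digest()
    chain.append((message, digest))

def verify_chain(chain):
    """Recompute every link; returns True only if nothing was altered."""
    prev_digest = b"\x00" * 32
    for message, digest in chain:
        expected = hashlib.sha256(prev_digest + message.encode()).digest()
        if digest != expected:
            return False
        prev_digest = digest
    return True
```

Shipping the latest digest off to an external log sink (as companies began doing with their systems-management tools) is what makes the scheme useful as evidence: the attacker can rewrite the local log, but cannot make the rewritten chain match the digest already held elsewhere.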
In 1998, very few companies could boast about having 1 million users (customers, employees, or both combined) under their management. Netflix (founded in 1997) was still in the DVD rental business (i.e., no online customer accounts for streaming until 2007). Amazon was on a very steep growth curve, having announced in 1997 that it had 1,500,000 customer accounts. And AOL claimed to have reached 5 million users by 1996. But most of today’s largest companies (from a user community perspective) had yet to be founded: Tencent (founded in November 1998 – the same month my predictions were published), Alibaba (1999), PayPal (2000), Facebook (2004), Instagram (2010), etc.
As e-commerce was blossoming in the late 1990s, user community sizes were growing, and companies were about to exhaust their home-grown approaches to (customer) user account management. Vendors with security offerings began to launch federated (LDAP compliant) directory products and services during 1999.
Today, it is not unusual for the largest social media companies to support billions of user accounts. Facebook (now META) claims to have 2.963 billion active monthly users (in 2023).  Similarly, Instagram reached 2 billion active monthly users in Q3 2021. 
The image is of AOL's homepage @ www.aol.com on 11 December 1998
“1999 will be the year the traditional corporate walls crumble. To remain competitive, companies will need to have the ability to do business electronically with partners, remote employees and customers. With all these users entering the enterprise from different points of entry, it will be nearly impossible to determine where one company ends and the next begins.” 
“The hidden reality here is that these electronic business applications provide external access deep within a company’s computing infrastructure. This type of access is unprecedented in the industry and is becoming the major driving factor of the security market.” 
In 1998, SOAP was created as an object-access protocol (an outgrowth of XML-RPC) and submitted to the IETF in September of that year. By 1999, the earliest “web services” movement was underway as companies began fielding web services to which their supply chain partners could connect. SOAP 1.0 was formally released in December 1999. And by 2000, Roy Fielding had published the REST “architectural style” as part of his doctoral dissertation.
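The XML-RPC model survives essentially unchanged in Python’s standard library, which makes the 1999-era pattern easy to demonstrate today. The endpoint and procedure names below are invented for illustration; a supplier exposes one remote procedure, and a partner invokes it over HTTP with an XML-encoded payload:

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# A hypothetical supply-chain endpoint exposing one remote procedure.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda sku, qty: f"PO-0001:{sku}x{qty}", "place_order")
port = server.server_address[1]  # ephemeral port chosen by the OS
threading.Thread(target=server.serve_forever, daemon=True).start()

# The partner's client calls the procedure as if it were local.
client = ServerProxy(f"http://127.0.0.1:{port}")
confirmation = client.place_order("WIDGET-42", 10)
server.shutdown()
```

Every such endpoint is external access reaching deep into a company’s computing infrastructure, which is exactly the security implication the prediction was pointing at.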
From a broader perspective today in 2023, it is easy to spot the foundational need arising in 1999 for today’s concept of “zero trust” architecture.
“A U.S. government survey…found that e-commerce was dominated by business-to-business transactions in 1999…. While the research for the 1999 survey was conducted during a time when e-commerce was entering its heyday, the business-to-business findings in the study--conducted by the U.S. Census Bureau, part of the Commerce Department--took government analysts by surprise. ‘The B2B component was larger than many of us expected,’ said Thomas Mesenbourg, assistant director of economic development at the Census Bureau. ‘The size of manufacturing e-commerce was also larger than many expected.’ Mesenbourg said the Census Bureau just released figures on 1999 because ‘we started focusing in on e-commerce seriously just 2 years ago. We wanted to provide baseline statistics starting from when e-commerce was just becoming significant.’” – “(US) Commerce Department: 1999 was a good year for B2B”, ZDNET, March 2001
Time-relevant observations published by the press in 2021 regarding the evolution of Single Sign-On (SSO)…
“While it wasn’t called SSO in the early days, Microsoft created AD which allowed users to simply log in to their Windows devices and subsequently be able to access anything on their network that was Windows-based. However, as web applications emerged in the early 2000s, another generation of SSO solutions emerged to help users authenticate to non-Windows-based resources — these solutions are often referred to as IDaaS or Identity-as-a-Service.” 
“Single sign-on (SSO) solutions have been gaining traction in the market since the early 2000s when web-based applications started to populate the workspace and users needed an efficient, secure way to authenticate to them.” 
By 2001, Wall Street players had taken notice of the SSO market opportunity…
“The Internet has made it mandatory that enterprise applications be Web-based and available to thousands of customers, partners, and suppliers. Consequently, several vendors centered on providing SSO for Web-based applications have emerged in recent years. Netegrity dominates this segment; some of the other players are Oblix, RSA/Securant, Entrust/Encommerce, Novell, IBM/Tivoli Systems, and OpenNetwork Technologies. Major players in the host and legacy-based SSO markets include NEC Corp., Vasco, Hewlett-Packard, and others…. We believe the overall market will grow into a $2.5 billion per year opportunity by 2005, representing a 13% CAGR from its current level of $1.4 billion per year (in 2001). Notably, the Web-based SSO market is expected to grow from $200 million in 2000 (actuals) to $1.2 billion by 2005, a 44% CAGR.” – “Safe and Sound – A Treatise on Internet Security”, RBC Capital Markets, November 2001
Enough said. I nailed it.
In 1999, anybody could log into any Microsoft Hotmail account using the password “eh” (seriously). 
So, predicting in 1998 that “certificate-based authentication” would be adopted in 1999 was a rather bold (and very optimistic) statement.
VeriSign, Inc. was a leading vendor of digital certificates in 1999 thru 2002 and beyond (NASDAQ: VRSN). In the years following my prediction made in 1998, VeriSign’s annual revenues grew explosively: 1999 = $305.587m; 2000 = $646.683m; 2001 = $980.436m; and 2002 = $1,412.758m.
While it appears that my prediction was correct, there is more to the story…
X.509 and digital certificates had been around since 1988. By 1995, the PKIX Working Group (of the Internet Engineering Task Force) had been established to work on major challenges inhibiting the commercial adoption and use of X.509 certificates. I predicted “in 1999, the growing popularity of digital certificates and soft tokens will continue to erode interest in hardware-based authentication solutions (e.g., token cards and smart cards).” 
Well, that didn’t entirely come true during 1999. The PKIX Working Group didn’t publish its accomplishments (their standards) in IETF RFC 3280 until April 2002.  And even though the use of digital certificates for certificate-based authentication did rise through the early 2000s, interest in hardware-based authentication solutions has certainly NOT “eroded” – in fact, the market for smart cards is estimated to reach $30.71B (US) by 2030. 
(Let’s count that one as a “foul ball” – not quite a “swing and a miss” because I was at least correct about the use of certificates for authentication increasing, as reflected by the VeriSign revenues.)
“While 98% of companies use passwords for authentication today (June 1999), in two years’ time, 56% expect to be using digital certificates. That startling finding has emerged from a Forrester Report called A Digital Certificate Road Map. The report, based on interviews with 50 of the Global 2500, found that interoperability and the sheer number of options are the chief concerns barring the immediate adoption of certificates. It urges public key infrastructure (PKI) pioneers Entrust, Cybertrust and VeriSign to back the PKIX standard effort with open source certificate code. Certificates are no panacea, Forrester warns. They aren’t cheap, they lack interoperability and they have weak application support. The bottom line for users is to roll certificates out in stages. Suppliers should get them now. Employees can wait until Windows upgrades are rolled out in 2001. For now, customers will have to make do with passwords. When public certificates mature, however, Forrester expects them to take on frequent flyer properties, enabling firms to single out their top customers for special treatment. The study concludes with these predictions: certificate-related services will boom”. – “The Rise and Rise of Digital Certificates”, Tech Monitor (Rachel Chalmers), 27 June, 1999
In 1997, computer security companies were doing “market roll-ups”, acquiring smaller security companies and working to integrate their collective offerings into suites of integrated security platforms.
For example, in 1997, McAfee Associates (under CEO Bill Larson) was renamed Network Associates through a merger of McAfee Associates, Network General, PGP Corporation, and Helix Software. McAfee Corp. became a public company on the NASDAQ again in 2020 but was taken private once more in 2022 – and has never quite lived up to the vision of offering an “integrated security strategy” such as I had predicted.
Around that same time, Platinum Technology (under co-founder Andrew Filipowski) had acquired the security products from OpenVision in an attempt to build a similar cohesive portfolio of security offerings – but Computer Associates (CA) quickly acquired Platinum Technology, and the OpenVision products were largely “shelved” by Computer Associates. Prior to being acquired by Broadcom, Inc., CA had long since shifted its attention away from its security product offerings to focus on its cloud computing strategy.
Symantec also went on an acquisition spree over the years – picking up many security-focused companies along the way, such as Axent Technologies (in 2000) and Blue Coat (more recently, in 2016). But by 2019, Symantec’s enterprise security business had been acquired by Broadcom, Inc.
Today, Broadcom, Inc. (through its many acquisitions and some of its own organic product development) is one of the few examples of a commercial vendor that offers a comprehensive (and partially integrated) suite of security products and services.
On the whole though, it is safe to assert that my prediction for “integrated security strategies gaining popularity” is NOT reflected in the corporate evolutions of the major security vendors in the market in 1999.
(That one was a “swing and a miss!” - That’s two strikes against me.)
“When Network Associates Chief Executive Bill Larson spent more than $2.4 billion to buy a slew of other software companies over two years, he was looking forward and thinking big. After a disastrous first half of 1999, though, investors are waiting to see whether he's ahead of his time.
“Larson wanted to transform Network Associates from a leader in antivirus software into a powerhouse for network security and desktop management. Using the success of Microsoft as a model, Larson planned to build a one-stop shop that could take on Computer Associates and other big rivals by meeting all of a corporation's system needs.
“The plan entailed acquiring a wide range of products and selling them as a package to the loyal customer base that had made the company's forerunner, McAfee Associates, a leader in the antivirus business. There was a big problem, however. Those customers didn't want a one-stop shop.
“‘The market is not where they thought it would be,’ says Brian Goodstadt, an equity analyst with Standard & Poor's Research Group who rates Network Associates stock a ‘hold.’ ‘It wasn't ready for suites.’” – “A buying binge backfires on Network Associates”, Forbes, 9 September 1999
Anyone remember Windows 98? Windows 98 SE was released in May 1999. There were ten (10) CVEs published for Windows 98/98 SE during 1999. And another twelve (12) published during 2000. Fortunately, Microsoft released XP in 2001. But in XP’s first year (2001) it racked up ten (10) CVEs of its own. And during 2002, XP tallied up 35 CVEs. 
Hmm… Ok, so maybe I wasn’t referring to Microsoft operating systems being hardened.
During those same years (1999 thru 2002), Apple’s MacOS racked up a total of 39 CVEs itself. 
Hmm… Ok, so Apple wasn’t doing much better at hardening its operating system.
Hey Linux! Help me out here.
Ah, nope – from 1999 thru 2002, there were three hundred forty (340) CVEs documented for the Linux operating system. (Ouch. That’s a fair bit of “hardening” going on there, right?) Well… SELinux was released to the open source community by the NSA in 2000, but it wasn’t integrated into the upstream Linux kernel until 2003.  Obviously, the security of the Linux kernel after that time has been substantially improved.
(Third “foul ball.” That one even bounced off my batting helmet. Fortunately, I can’t strike out on a 3rd foul ball in baseball.)
“’They certainly don't have a very secure environment. There are so many holes in the Microsoft environment that any [worthy] hacker... is going to figure out how to break in,’ says Anne Thomas, a senior analyst at the Patricia Seybold Group in Boston.
“‘It's the dominant operating system out there, so it's going to attract the attention. On the other hand, Windows has extremely sloppy security,’ says Bruce Schneier, author of Applied Cryptography and a founder and chief technology officer of Counterpane Internet Security, a provider of managed security services in Minneapolis, Minn.
“What often upsets people is that Microsoft hasn't learned from the mistakes made in older operating systems, notes Jon McCown, technical director of network security at the International Computer Security Association in Reston, Va. Categories of attack that are well understood are cropping up in Windows. ‘They're doing a forthright job of addressing them, but there's a concern about what we don't know about yet; what's still in the operating system or in the servers that will become an issue.’
“But hacker Space Rogue, a member of the L0pht Heavy Industries, summed up what he and others see as Microsoft's security challenges. ‘Windows has three strikes against it, as I see it. Popular OS, weak security, easy-to-use, oh, and it is made by MS, the company everyone loves to hate.’" – “Microsoft: Bad security, or bad press?”, CNN.com, 28 September 1999
To be specific, my “authorization” prediction was referring to “access control” and I had stated that “technologies…require major modifications to existing applications, and therefore are considerably more difficult to implement in larger organizations. Historically, this security area has been the slowest to materialize, and 1999 will be no different.” 
“Granting, modifying, and canceling access rights to various applications is a seemingly never-ending task and highly labor intensive for system administrators. Every time an employee is added, deleted, or moved within an organization, a sequence of changes affecting the entitlement repositories is triggered. The system administrator has to update entitlements for each of the applications using separate tools. Needless to say, this is an extremely time-consuming process and is certainly not a top priority of the system administrators…. These issues become maintenance nightmares as the number of users and the number of applications grow.” – “Safe and Sound – A Treatise on Internet Security”, RBC Capital Markets, November 2001
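One common answer to the entitlement-maintenance burden described above is role-based access control (RBAC): entitlements are granted to roles rather than to individual users per application, so a single personnel change propagates everywhere at once. Here is a minimal sketch in Python – all of the role, user, and application names are hypothetical, purely for illustration:

```python
from collections import defaultdict

# Entitlements are attached to roles, not to individual users.
# Each entitlement is an (application, permission) pair.
role_entitlements = {
    "loan_officer": {("loan_app", "approve"), ("crm", "read")},
    "teller": {("teller_app", "post"), ("crm", "read")},
}
user_roles = defaultdict(set)

def grant_role(user, role):
    user_roles[user].add(role)

def revoke_role(user, role):
    user_roles[user].discard(role)

def permissions(user):
    # Union of entitlements across all of the user's roles
    roles = user_roles[user]
    return set().union(*(role_entitlements[r] for r in roles)) if roles else set()

# One change to the user's role updates every application's entitlements:
grant_role("alice", "loan_officer")
assert ("loan_app", "approve") in permissions("alice")
revoke_role("alice", "loan_officer")
assert permissions("alice") == set()
```

The design point is exactly the one the RBC analysts raise: without the role indirection, each personnel change forces a separate update in every application's entitlement repository.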
Clearly, my prediction was true for 1999, and continues to be true even today (2023). Authorization and access control remain the most challenging and invasive security controls to implement and administer.
Allow me to offer an example to illustrate the magnitude of the challenge.
“In April of 2015, IT staffers within the United States Office of Personnel Management (OPM), the agency that manages the government's civilian workforce, discovered that some of its personnel files had been hacked. Among the sensitive data that was exfiltrated were millions of SF-86 forms, which contain extremely personal information gathered in background checks for people seeking government security clearances, along with records of millions of people's fingerprints. The OPM breach led to a Congressional investigation and the resignation of top OPM executives, and its full implications—for national security, and for the privacy of those whose records were stolen—are still not entirely clear.” 
By August 2017, the US Government Accountability Office (GAO) had performed an assessment of the OPM response to the 2015 security breach and published its findings.
“We (GAO) reported that OPM, one of four agencies reviewed, had implemented numerous controls to protect selected systems, but access controls had not always been implemented effectively…. In addition, we issued a restricted version of the May 2016 report that identified vulnerabilities specific to each of the two systems we reviewed and made recommendations to resolve access control weaknesses in those systems.” 
“In two prior reports, we made numerous recommendations to enhance the agency’s information security program and to resolve access control weaknesses in those systems. To date, these recommendations remain open.” 
This example illustrates that implementing granular authorization and access control is hard to accomplish, even with the weight and resources of the US government behind such an effort.
I am proud to say that, in the late 2010s, I was part of the leadership of the development team that designed and prototyped the replacement for this OPM system.
In 1998, cryptography was being leveraged in business-to-business commerce (largely via VPNs, EDI, and the use of shared secrets for encryption). But cryptography still required a fairly manual effort to install, implement, and maintain.
Security for the early online commerce sites was largely based upon username/password pairs, which were sometimes transmitted in clear text across the Internet. (SSL had existed since 1995, but so had its vulnerabilities: version 1.0 was never released because of serious security flaws, and version 2.0 was quickly replaced by 3.0 for the same reason.) Hampered by the lack of cryptographic security for consumer transactions, “U.S. retail e-commerce sales accounted for 0.5 percent ($15 billion) of total sales ($2,868 billion).” But all that was about to change.
Consumer-related security standards and products began to proliferate in 1999. Transport Layer Security (TLS) 1.0 was defined in 1999, around the same time that the IEEE published 802.11b and introduced WEP security. These, along with many other cryptographic evolutions, formed a stable basis upon which to build modern e-commerce businesses.
Napster launched in June 1999, sharing (sometimes pirated) music online. Apple released iMovie in 1999, and the iPod (for carrying downloaded music) was released on 23 October 2001. Even Disney Interactive Media Group was founded in 1999.
Additional advancements in the field of applied cryptography would arrive shortly thereafter. By 2002, the W3C had released XML-Encryption which would be used to secure more advanced e-commerce transactions (those that leveraged SOAP). XML-Encryption provided a standard and easy means by which to encrypt content using one-time symmetric keys (which were fast) and then encrypt the one-time symmetric key with a public-key/private-key pair – enabling the now-encrypted one-time symmetric key to be attached to the encrypted content and delivered across the Internet to the receiving parties.  And elliptic curve cryptography algorithms (created in 1985) would enter widespread use by 2004 and 2005 – which helped to secure the content for some of the earliest streaming services.
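The hybrid scheme described above – encrypt the content with a fast one-time symmetric key, then wrap that key with the recipient's public key – can be sketched in a few lines. This is strictly a toy illustration: the RSA parameters are deliberately tiny and a hash-based stream cipher stands in for the real algorithms (XML-Encryption used standards such as 3DES/AES with RSA key transport), so none of this is fit for actual use:

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: SHA-256 in counter mode (illustration only)
    out = bytearray()
    for offset in range(0, len(data), 32):
        pad = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        chunk = data[offset:offset + 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

# Toy RSA key pair built from tiny primes (illustration only)
p, q, e = 61, 53, 17
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)  # private exponent

# 1. Generate a one-time symmetric key and encrypt the content with it (fast)
content = b"order: 100 widgets"
session_key = os.urandom(16)
ciphertext = keystream_xor(session_key, content)

# 2. Wrap the one-time key with the recipient's public key, byte by byte,
#    so it can travel across the Internet alongside the ciphertext
wrapped_key = [pow(b, e, n) for b in session_key]

# Recipient: unwrap the session key with the private key, then decrypt
recovered_key = bytes(pow(c, d, n) for c in wrapped_key)
plaintext = keystream_xor(recovered_key, ciphertext)
assert plaintext == content
```

The point of the split is performance: symmetric ciphers handle bulk content cheaply, while the expensive public-key operation is applied only to the short one-time key.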
(The image is of Apple’s homepage @ www.apple.com in October 1999)
First, here’s a synopsis of what the Y2K Bug was (because many readers today were not in the professional workforce at that time and won’t even know what “Y2K” refers to.)
“The ‘Millennium Bug,’ or Y2K as it is commonly known, arose when experts in the early days of computing cut the code that designated the year from four digits to two, which changed ‘1998’ to ‘98’, in order to save data space on hard drives, according to National Geographic. But with the new millennium approaching, experts realized that computers might not see ‘00’ as the year ‘2000’. Instead, they feared, it could be interpreted as ‘1900’.
“This presented a slew of problems for any computer-based system that depended on the date in calculations. Banks that issued interest rates on a daily basis could suddenly see a loan plummet to a rate for minus 100 years, the magazine explained. Flights, too, could be disrupted since airlines kept flight schedules and records on computers.
“Then-President Bill Clinton, while addressing the Y2K problem in 1998, urged businesses to do their part to prepare and update their computer codes.
“‘Now, this is not one of the summer movies where you can close your eyes during the scary parts,’ he said. ‘Every business, of every size, with eyes wide open, must face the future and act.’
“The night did not go off entirely without a hitch, though.
“In all, preparation for Y2K cost the U.S. upwards of $100 billion, the Washington Post reported in November 1999, though many have since credited Y2K with creating new jobs and highlighting the importance of information technology employees.” 
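The two-digit-year ambiguity at the heart of Y2K survives in modern libraries as a "pivot year" heuristic. For example, Python's `%y` directive follows the POSIX convention of mapping 00–68 to the 2000s and 69–99 to the 1900s – a guess that only works because the pivot was chosen after the fact:

```python
from datetime import datetime

# POSIX pivot-year rule: two-digit years 00-68 parse as 20xx, 69-99 as 19xx
print(datetime.strptime("98", "%y").year)  # 1998
print(datetime.strptime("00", "%y").year)  # 2000 (not 1900)
```

A 1998-era system with no such pivot rule would have read “00” as 1900 – exactly the failure mode the quoted article describes for interest calculations and flight schedules.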
Certainly, the hysteria around “Y2K” evaporated as 1999 passed, and the market’s attention to, and appetite for, all things security-related did grow to overtake the “Y2K” hype. (Dr. Michel Kabay’s “Infosec Year in Review” is an excellent historical perspective on the rise of interest in information security.) But it would take until 2018 before worldwide spending on information security would top $100 billion ($114 billion, actually).
Now we can turn our collective attention toward the "Epochalypse" in 2038, but we've got some time to get ready for that.
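The 2038 “Epochalypse” is simple arithmetic: systems that store Unix time in a signed 32-bit integer overflow one second after 2³¹ − 1 seconds past the 1970 epoch. A quick Python check shows exactly when that happens:

```python
from datetime import datetime, timezone

# A signed 32-bit time_t counts seconds since 1970-01-01 UTC
# and can hold at most 2**31 - 1 = 2,147,483,647 seconds.
rollover = datetime.fromtimestamp(2**31 - 1, tz=timezone.utc)
print(rollover)  # 2038-01-19 03:14:07+00:00
```

One second later, a 32-bit `time_t` wraps to a negative number – which naive code interprets as a date in December 1901.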
Artwork © 2001, Ward Cunningham (www.agilemanifesto.org)
The age of “Agile” was about to be launched in February 2001 with the publication of the Agile Manifesto. But software development companies around the world were already adopting agile development practices (without yet having the “agile” moniker as a reference name). Back in 1986, Harvard Business Review had already published “The New New Product Development Game” by Hirotaka Takeuchi and Ikujiro Nonaka, which Jeff Sutherland (co-author of the Agile Manifesto and co-creator of Scrum) has credited as influencing the creation of the Scrum agile development methodology (the poster-child methodology for “agile” now in the 2020s).
By 1998, I myself had already spent well over a decade working for software companies (e.g., Oracle, OpenVision) and technology companies (e.g., GTE) at which my teams were performing nightly software builds (compilations), daily automated code inspections, and weekly automated testing of our software code. (It was humbling and enlightening to get such timely and critical feedback regarding my software code.) We didn’t know we were being “agile” because there wasn’t a name for our type of software development yet (e.g., non-waterfall). The closest concept in 1998 was Dr. Barry Boehm’s “spiral model of software development” from 1986. 
Based upon this prior exposure to rapidly building and releasing software products, and upon my years as a computer security expert (CISSP, etc.), I made this prediction:
“As organizations evolve into virtual businesses, new opportunities – and, consequently, new threats – will continue to crop up daily. The challenges of secure e-commerce will continue to test the limits of technology; remote partners will require an increasingly higher level of access to critical business applications and information; hackers will develop new tools and techniques for breaking through today’s security barriers; savvy end-users will explore new ways to use (and abuse) their privileges…the list goes on.” 
In hindsight, and with a compendium of published news articles from 1999 and beyond as evidence, it is safe to conclude that this last prediction was a “home run”.
Foote, Steven, “19 Infosecurity Predictions for 99”, Information Security Magazine, November 1998, https://www.infosecuritymag.com/ (Note: the original publisher appears to have been acquired by TechTarget, Needham, MA)
Tim Berners-Lee released the first browser, called “WorldWideWeb”, in 1990. https://www.w3.org/People/Berners-Lee/WorldWideWeb.html Netscape released its Mosaic browser (soon to be renamed Navigator) in late 1994. The browser was the most advanced available and was an instant success. Its rich functionality piqued the interest of millions of people, launching explosive growth in browser users worldwide.
 Dr. Michel E. Kabay’s contact homepage: http://www.mekabay.com/contact.htm
 Kabay, Michel E, Dr., “Infosec Year in Review”, https://www.mekabay.com/iyir/index.htm
“Exclusive .NET Developer’s Journal ‘Indigo’ Interview with Microsoft’s Don Box”, Dotnet.sys-con.com, http://dotnet.sys-con.com/node/45908
“History of Web Services”, InformIT, June 21, 2002, https://www.informit.com/articles/article.aspx?p=27295
Fruhlinger, Josh, “The OPM hack explained: Bad security practices meet China’s Captain America”, CSO Online, February 2020, https://www.csoonline.com/article/3318238/the-opm-hack-explained-bad-security-practices-meet-chinas-captain-america.html
US Government Accountability Office (GAO), “OPM Has Improved Controls, but Further Efforts Are Needed”, GAO-17-614, August 2017, https://www.gao.gov/assets/gao-17-614.pdf
“Electronic Data Interchange: Concepts and Effects”, 29 July 1997, https://web.archive.org/web/20160103124500/https://www.isoc.org/inet97/proceedings/C5/C5_1.HTM
 Boehm, B (August 1986). "A Spiral Model of Software Development and Enhancement". ACM SIGSOFT Software Engineering Notes. 11 (4): 14–24.
Copyright © 2023 Phenomenati - All Rights Reserved.