The concept of Cyber Entropy™ refers to the uncontrolled growth of all aspects of an organization within the “cyber” domain. This is most obvious with the proliferation of core business systems and enabling information technologies (e.g., recent adoption trends for GenAI services); however, less obvious are the cascading effects which impact business concepts such as expectations, opportunity, obligations, critical dependencies, and risk. Left unaddressed, the entropy of the environment leads to increased complexity, inefficiency, poor decision-making, and negative consequences to the business.
The most effective way to tame such ever-expanding complexity is to decompose it into more tractable components and apply disciplined management to bring order to each of those topic areas. For this discussion, we’ll use the following decomposition of core topics:
Network entropy refers to the unmanaged growth of network infrastructure, including physical and virtual networks, switches, routers, wireless access points, segmentation, gateways, perimeters, tunnels, overlays, etc. Compounding this is the continued expansion of interconnected networks; most alarmingly, connecting “operational technology” (OT) networks like those supporting Industrial Control Systems (ICS) or Supervisory Control and Data Acquisition (SCADA) environments with traditional corporate IT backbones. Most startups today forego traditional physical perimeter-based LANs, CANs or WANs completely in favor of establishing virtual (software-defined) WANs (SD-WAN) tunneled over the internet. However connectivity is established and provisioned, it is clearly a fundamental business enabler; but left unmanaged it will expose the organization to malicious actors.
Here, one example of a critical administrative control is the creation and maintenance of detailed network diagrams. These help to institutionalize the knowledge of the company’s environment. Examples of technical controls that can support this effort include network topology mapping, network traffic analysis tools, and external scanning and penetration testing. However, knowledge of one’s networks will not build and maintain itself – investment in staff and time cannot and should not be avoided if this information is to be kept accurate and relevant. Further, these diagrams must include external networks (e.g., remote segments, hosted, cloud based, vendor provided), as those become more and more prevalent. Ideally, each network will be categorized or classified based upon the types of information assets that will exist on or transit them. Finally, every business asset should have a well-documented, cradle-to-grave Lifecycle – including individual networks and the devices which create them.
This type of entropy is likely most obvious with the proliferation of familiar IT devices, including both physical and virtual IT equipment, such as servers, storage devices, networking equipment, and end-user technology like computers and mobile devices. Not to be ignored is the emergence of network-connected Internet of Things (IoT) devices from smart controls like thermostats, to connected appliances and “smart” assistants, to wearable personal devices, to home/office security systems, to smart/connected robots and automobiles. Within a business, such sprawl of devices often occurs due to decentralized purchasing, lack of inventory management, and inadequate device/asset lifecycle management. And the result is an ever-expanding “attack surface” for the business.
The administrative control of disciplined Asset Management is unavoidable, even for small to medium businesses. If a device (physical or virtual, machine or ‘container’) is important enough to the business to attach it to a corporate network, then it should be deliberately managed and accounted for to secure it, optimize its performance, and maximize its value. An example of a technical control here would be continuous Asset Discovery to inform the knowledge base of assets. But again, investment in staff time is unavoidable. Ideally, each device will be categorized or classified based upon the types of information assets that will exist on them. And finally, worth stating again, every business asset such as a physical or virtual device should have a well-documented, cradle-to-grave Asset Lifecycle that includes planned obsolescence and secure disposal.
The entropy concept with software refers to the uncontrolled adoption of software components (commercial, open source, and internally developed) across an organization. A profound example of this is the rampant adoption of Generative AI services (e.g., ChatGPT, Bard, Copilot, etc.) which most organizations experienced in 2023. Unmanaged adoption is just the beginning; software or “application” entropy also involves the proliferation of redundant, outdated, or underutilized applications within an organization, usually resulting from a lack of centralized software governance, decentralized purchasing, willingness to permit “shadow” IT, and insufficient software license management.
Trends such as the broad adoption of GenAI services, open source software, and industry obsession with "micro-services" have accelerated this software entropy in recent years. But so have the existence of browser-plugins (e.g., the “browser” has become the end-user’s “operating system” as the platform where they install personal software) and software available “as-a-service” (SaaS) where software lives remotely but portions of it are pulled into the browser (temporarily) to execute on the end-user’s device. Compounding this, more and more technology today is “software-defined” which blurs the lines of exactly what is considered “software” vs “hardware”. All of which only further expands your "attack surface".
Taming this software entropy is even more challenging than with networks and devices. But it begins with administrative controls including a well-crafted Acceptable Use Policy, deliberate software and application rationalization, centralized acquisition and license management, well-documented deployments, and deliberate application or Software Lifecycle management. Technical controls such as automated daily scans of software resident on each device can be a “force multiplier”. And as with networks and devices, each software component or application (whether hosted internally or externally) should be categorized or classified based upon the types of information assets that it will have access to and manage.
Directly related to device and software entropy is the concept of deployment entropy. Given the prolific use of terms such as “cloud” and “edge” to refer to the relative location of hardware and software, the topic of deployment approach is worth addressing here separately.
Deployment entropy with respect to “cloud” assets, occurs when an organization lacks visibility and control over resources centrally deployed in some “cloud” (e.g., “public” or “private”, it is still in an external datacenter). This can include the proliferation of virtual networks, virtual machines, containers, storage instances, and software-as-a-service (SaaS) subscriptions across multiple cloud providers; leading to increased costs, security vulnerabilities, and difficulty in managing and securing the cloud environment.
Deployment entropy with respect to “edge” assets, refers to the lack of visibility and control over decentralized resources which are deployed at or very near to the remote or end-user device; such as real-time analysis of videos on dedicated devices attached to cameras on a manufacturing assembly line.
Unsurprisingly, bringing order to this chaos requires the same discipline applied to networks, devices and software entropy… both administrative and technical controls including investment in centralized acquisition, continuous discovery, regular maintenance of key documentation (e.g., network diagrams), deliberate asset and configuration management, classification of “cloud” and “edge” assets based upon the information they process, and attention to the full Deployment Lifecycle from instantiation through obsolescence and secure disposal.
No contemporary organization is an island. Over time, all develop dependencies on external parties to provide commodity goods and services which are not unique or critical to the competitive differentiation of their own business. This is particularly true with commodity “cyber” components such as hardware, software, and general business systems.
The concept of vendor entropy here describes the growing disorder in an organization’s vendor landscape, which challenges its ability to stay informed about market dynamics and to analyze the potential risks and opportunities associated with vendor selection, procurement strategies, and supply chain management. High vendor entropy usually results from undisciplined acquisition (reference again the example of unchecked usage of GenAI services), often fueled by the search for greater efficiencies (better pricing) or perceived-to-be-unique offerings.
While high vendor entropy can increase flexibility and responsiveness, it introduces complexities in managing vendor relationships, evaluating options, and monitoring/auditing vendors to ensure performance and security levels are maintained. On the other hand, low vendor entropy does provide stability and consistency, but it may limit the ability to adopt market innovations, potentially result in higher costs, and/or create single points of failure in critical dependencies for the business. Either way, an organization must deliberately balance the Risks inherent in its growing dependencies on external vendors.
Larger organizations understand this challenge and commit full-time staff to the ongoing needs of Third Party Risk Management (TPRM). Smaller to medium sized businesses are well served by a focus on vendor consolidation, if not performed internally then by an external service provider. Whoever is responsible for this Vendor Management, it should include classification based upon the information each vendor will access or process, and attention to the full Vendor Lifecycle from onboarding to retirement.
Perhaps the most familiar and most taxing topic in this list, Data Entropy (aka Data Sprawl) involves the exponential growth of information within an organization, including structured and unstructured data that is received, processed, copied, shared, and stored across various internal and external systems and repositories (addressed above). This entropy most often results from inadequate data governance, lack of data classification, reluctance to develop a data “architecture” (e.g., what is it, where is it, for how long, where did it come from, where is it going to, etc.), decentralized data storage practices, and inconsistent data replication, backup and archiving procedures.
The first administrative control that can help reduce the sense of chaos here is one most organizations do not invest in – building a Data Inventory. This is not as tedious or time-consuming as most believe it to be. Each functional department already has a good idea of which business topics it is responsible for, and from that should easily be able to draft a short list of the data/information they handle – e.g., Sales and Marketing handle or manage Information about Target Markets, Products/Services, Leads/Prospects, Customers, Contacts, Inquiries (RFI, RFP, RFQ), Proposals/Quotes, Contracts/Agreements, etc. This need not be a long list for any functional organization. Further, the functional department leads should be able to perform some basic Data Classification of their business information using a small list of approved categories (e.g., Public, Copyrighted, Customer Confidential, Company Confidential, Restricted [PI/PII/PHI/PCI], etc.). The types of information captured in this Data Inventory should then be easily mapped to the company’s core Business Systems (e.g., CRM, ERP, HRIS, KM, TPM, CI/CD, etc.). Most small to medium sized businesses should be able to build such a Data Inventory using a simple spreadsheet. The trick is having the discipline to maintain it.
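As a concrete illustration, such a Data Inventory can be modeled as a handful of structured records validated against the approved classification categories. The following is a minimal sketch in Python; the entry names, departments, and system labels are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field

# Approved classification categories (labels drawn from the discussion above)
CATEGORIES = {"Public", "Copyrighted", "Customer Confidential",
              "Company Confidential", "Restricted"}

@dataclass
class DataInventoryEntry:
    information_type: str          # e.g., "Leads/Prospects"
    owner_department: str          # functional department responsible
    classification: str            # must be one of the approved categories
    business_systems: list = field(default_factory=list)  # systems that process it

    def __post_init__(self):
        # Reject entries that invent their own classification labels
        if self.classification not in CATEGORIES:
            raise ValueError(f"Unapproved classification: {self.classification}")

inventory = [
    DataInventoryEntry("Leads/Prospects", "Sales", "Company Confidential", ["CRM"]),
    DataInventoryEntry("Contracts/Agreements", "Legal", "Customer Confidential", ["ERP"]),
]

# Simple query: which business systems handle confidential information?
confidential_systems = sorted({s for e in inventory
                               if "Confidential" in e.classification
                               for s in e.business_systems})
print(confidential_systems)  # ['CRM', 'ERP']
```

A spreadsheet serves the same purpose; the value of even this tiny model is that an unapproved classification is rejected at entry, which is precisely the discipline needed to keep the inventory trustworthy.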
Understanding what data the organization has, and which business systems process it, the next highly useful administrative control is Data Flow Diagrams (DFD) – simple pictures which explain in layman’s terms how a specific type of information flows into, through, and potentially out of the organization. These DFDs can be hugely useful in assessing and communicating the Inherent Risks to information, and determining where and how to deploy Access Controls (policies, procedures, and enforcement technologies) to protect sensitive information assets. Perhaps not obvious, but Data Loss Prevention (DLP) programs rely heavily on Data Classification, Data Labeling, and approved Data Flows if they are to successfully prevent data loss.
Finally, once again worth stating, every business asset… including information assets… should have a well-documented, cradle-to-grave Information Lifecycle that minimally includes Data Ownership, Data Classification, Labeling and Handling, Data Protection, Data Retention and secure Destruction.
Very closely related to Data Entropy, and directly reliant upon a well maintained Data Inventory, is the concept of Access Entropy. Here, this term refers to the level of disorder or poor discipline exercised in granting and revoking access permissions or privileges, very likely impacting the security (confidentiality, integrity, and availability) of sensitive information within an organization. Such entropy is at its worst where the principles of “Need to Know” and “Least Privilege” are considered too burdensome, e.g., in startups and other organizations with a high risk tolerance and “need for speed” to get a product or service to market. Even where privileges or permissions are explicitly granted to individuals, they are often never revisited or revoked, leading to excessive privilege accumulation / aggregation. The consequences of such Access Entropy (e.g., privilege creep) include the risk of unauthorized access, data breaches, extortion, and even data destruction when exploited by malicious actors or abused by authorized users; potentially leading to operational degradation, reputational harm, revenue losses, and legal liabilities for the business.
To mitigate this type of entropy, organizations need to enforce proper administrative access control and privilege management practices, and supporting technical controls (e.g., Privileged Access Management platforms). This includes aspects of the Access Lifecycle such as implementing the principle of Least Privilege – granting users with a demonstrable “Need to Know” only the minimum permissions necessary to perform their job functions, regularly reviewing and auditing access privileges, immediately revoking privileges when no longer required, and maintaining a strong access control framework.
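As one hedged example of the regular-review step in that Access Lifecycle, privilege creep can be surfaced by comparing each grant’s last exercised date against a staleness threshold. The users, privileges, dates, and 90-day window below are hypothetical illustrations, not recommended values:

```python
from datetime import date, timedelta

# Hypothetical access grants: (user, privilege, date privilege was last exercised)
grants = [
    ("alice", "db.admin", date(2024, 1, 5)),
    ("bob", "repo.write", date(2024, 6, 1)),
    ("carol", "billing.read", date(2023, 9, 12)),
]

REVIEW_DATE = date(2024, 6, 30)
STALE_AFTER = timedelta(days=90)  # assumed policy: flag if unused for 90+ days

def stale_grants(grants, as_of, threshold):
    """Return (user, privilege) pairs not exercised within the threshold window."""
    return [(user, priv) for (user, priv, last_used) in grants
            if as_of - last_used > threshold]

# Candidates for revocation at the next access review
print(stale_grants(grants, REVIEW_DATE, STALE_AFTER))
```

A real Privileged Access Management platform automates this review continuously; the point of the sketch is that stale privileges are detectable only if grants and their usage are recorded in the first place.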
As businesses expand and evolve, their reliance and potentially critical dependence on information and IT infrastructure and solutions tends to increase. This can include reliance on the networks, hardware, software applications, databases, cloud services, and other IT resources discussed above. The problems arise when these dependencies on and expectations of IT are allowed to grow and evolve implicitly, unplanned, undocumented, and without adequate resources or management. This type of Business Entropy, the unchecked sprawl of the organization’s Dependencies upon and Expectations of the “cyber” infrastructure supporting and enabling the business, can lead to risky decision-making, poor business performance, missed business objectives, and even operational failures due to unplanned disruptions or outages of systems or suppliers. As the business evolves, its critical dependencies on information and enabling technologies must be explicitly documented, and must continuously inform the investments required to ensure that the Resiliency of IT infrastructure stays well aligned to the needs of the business.
Perhaps the best defense against this specific type of entropy are administrative controls such as Business Impact Analyses (BIA). Each BIA is designed to expose, document, and characterize the organization’s dependencies upon its critical assets. A BIA can be as simple as the Data Inventory (discussed earlier) updated to reflect the Classification and Criticality of each type of information based upon the business function(s) it supports. Criticality of an information category then exposes the expectations of Resiliency for the Systems that process or store that information. Here is where disaster recovery and business continuity plans can remain well informed with explicitly documented expectations of Recovery Time Objective or RTO (maximum time to restoration) and Recovery Point Objective or RPO (maximum amount of data loss that can be tolerated). As this business entropy continues to evolve, so will the nature of critical dependencies.
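One way to sketch how a BIA ties Criticality to Resiliency expectations is a simple lookup that assigns RTO/RPO targets per criticality tier. The tier names and hour values below are illustrative assumptions only, not recommended targets:

```python
# Map criticality tier to recovery expectations (hours); values are illustrative
RESILIENCY_TIERS = {
    "Critical":  {"rto_hours": 4,  "rpo_hours": 1},
    "Important": {"rto_hours": 24, "rpo_hours": 8},
    "Routine":   {"rto_hours": 72, "rpo_hours": 24},
}

# A BIA row extends the Data Inventory with the criticality of each information type
bia = [
    {"information": "Customer Orders",      "system": "ERP", "criticality": "Critical"},
    {"information": "Marketing Collateral", "system": "CMS", "criticality": "Routine"},
]

# Derive the explicit RTO/RPO expectation for each system from its criticality
for entry in bia:
    entry.update(RESILIENCY_TIERS[entry["criticality"]])

print(bia[0]["system"], bia[0]["rto_hours"], bia[0]["rpo_hours"])  # ERP 4 1
```

The design point is direction of flow: criticality of the information drives the resiliency target of the system, never the other way around, so disaster recovery plans inherit explicit, documented expectations.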
Resiliency is not cheap, so risk-informed tradeoffs will need to be explicitly made (and regularly revisited) to ensure the organization continues to meet its commitments within the practical constraints of time and money. Perhaps less obvious is the point that each such dependency should itself have a well-documented Dependency Lifecycle, including considerations for contingencies (in case of technology failures, or system outages) and eventual retirement.
For contemporary businesses like those driven by the promise of digital transformation, the concept of Opportunity Entropy refers to the expansive range of possibilities and potential growth avenues that arise from internal R&D efforts, external M&A targets, or leveraging technology and digitization to enhance business operations, to create new products and services, or to tap into emerging markets. Consideration of new Opportunities is of course fundamental to maintaining the agility, adaptability, and competitiveness of any business. But too often, pursuit of such new business Opportunities is not well Risk-informed.
As discussed under “Business Entropy” above, decisions regarding whether or not to pursue a new opportunity should be well-informed through a Due Diligence process that considers not just the additional Resources required, but the new Dependencies on IT infrastructure, Expectations of Resiliency, new Obligations (discussed next) and potential new Risks to all stakeholders involved. While not the most exciting part of planning a new endeavor, the Business Impact Analyses (BIA) discussed above can help to proactively inform otherwise unchecked enthusiasm. Another guardrail that will help keep the planning for a new Opportunity on track is the idea that every opportunity should have a documented Opportunity Lifecycle, informing what success looks like as well as what off-ramps should be put in place to mitigate Risks as those materialize. Most important here is to acknowledge that the administrative control of disciplined decision-making is “Not about saying No”, but rather is “About saying Yes, responsibly.”
Critical to this concept of Cyber Entropy is the acknowledgment that the organization is subject to an ever-growing set of professional Obligations. Here the business must embrace the expanding scope and complexity of responsibilities and requirements that organizations must comply with in their planning and decision-making processes, their ongoing operations, and even with their interactions with staff, clients, consumers, partners, and vendors. In recent years, the expansion of these Obligations has intensified due to several factors, including: globalization, technological advancements, threat actor activity, introduction of new regulations (e.g., international, extra-territorial Privacy laws), and evolving societal expectations. Businesses are increasingly being held accountable for their actions, and non-compliance or unethical behavior can result in reputational damage, legal consequences, financial penalties, even personal criminal liabilities, and most importantly a loss of Trust.
To effectively manage the unrelenting growth of these Obligations, organizations should establish robust Governance, Risk, and Compliance (GRC) programs, that help the business to regularly monitor and adapt to changes in laws and regulations, by developing and maintaining comprehensive internal policies and procedures, conducting employee training on legal and ethical standards, and fostering a culture of ethical decision-making and accountability. Like each of the topics discussed earlier, there should be an overall Inventory of Obligations, as well as explicit Obligation Lifecycles documented to help inform the ongoing compliance with each critical obligation, including when the obligation becomes effective and subsequently ineffective, an obligation Owner, to whom the obligation exists, the penalties for lack of compliance, and whether or not the organization is currently in compliance. By proactively addressing the inherent Entropy of their legal, regulatory, contractual, and ethical Obligations, businesses can mitigate Risks, contribute to sustainable and responsible business practices, protect their reputation, and build Trust with stakeholders.
Any discussion regarding Entropy in the Vulnerability landscape has to start with those in information technology. Here there are an ever-growing number of vulnerabilities that can be exploited by malicious actors or inadvertently expose sensitive information. These vulnerabilities can arise from various sources, such as software bugs, coding errors, misconfigurations of storage or virtualized environments, insecure protocols, or weaknesses in hardware components. The continuous expansion of these vulnerabilities increases the complexity of managing and securing business systems, as well as the difficulty in predicting and addressing potential security risks. That said, beyond vulnerabilities in technologies, most organizations also wrestle with vulnerabilities in their people (e.g., lack of awareness, discipline, responsibility, accountability) and their processes (e.g., lack of consistency, checks, audits, etc.) as well.
It is important for organizations to continuously manage and reduce the entropy of their vulnerability landscape by implementing industry best practices such as: rigorous and continuous Vulnerability Scanning and Patch Lifecycle Management, conducting regular Security Audits and assessments, providing regular and mandatory Security Awareness Training, and fostering a culture of security responsibility and accountability. Furthermore, because it is neither practical nor feasible to attempt to address all vulnerabilities in one’s environment, each organization should develop a strategy for prioritization based upon the Criticality of the vulnerable Resource, and the Likelihood and potential Impact (based upon the categories of information at Risk) should the vulnerability be exploited.
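The prioritization strategy just described can be sketched as a naive scoring function that multiplies resource Criticality by the Likelihood and Impact of exploitation, each on an assumed 1–5 scale. Real programs typically combine CVSS scores with business context, so treat this only as an illustration of the idea; the findings and ratings are hypothetical:

```python
def priority_score(criticality, likelihood, impact):
    """Naive prioritization: product of 1-5 ratings; higher scores are patched first."""
    return criticality * likelihood * impact

# Hypothetical findings: (description, criticality, likelihood, impact)
findings = [
    ("CVE-A on customer-facing ERP server", 5, 4, 5),
    ("CVE-B on isolated test VM",           2, 3, 2),
]

# Rank findings by score, highest first, to build the remediation queue
ranked = sorted(findings, key=lambda f: priority_score(*f[1:]), reverse=True)
print([(f[0], priority_score(*f[1:])) for f in ranked])
```

Even this crude model captures the essential point: a modest flaw on a critical, exposed system can outrank a severe flaw on an isolated one.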
Entropy in the Cyber Threat Landscape needs no explanation to those directly involved with digital forensics and incident response (DFIR) on a regular basis. The cyber threat landscape is constantly evolving, with new capabilities and malicious tactics, techniques, and procedures (TTP) emerging literally daily, making it nearly impossible to predict, detect, and defend against all potential cyber threats. For all practical purposes, there is no way for even the largest of organizations to bring order and control to the chaos and entropy in the Threat Landscape. The best that can be achieved is to stay continuously informed and use that knowledge to inform one’s own defenses.
Those larger organizations that can afford to focus some resources on tracking threats usually begin by cultivating and fusing many sources of Cyber Threat Intelligence (CTI) for Indicators of Attack (IoA), Indicators of Behaviors (IoB), and Indicators of Compromise (IoC) to inform their internal “Threat Hunting” procedures and technologies. While such knowledge management is critical for effective defense, the sheer volume of cyber threat intelligence available today will quickly overwhelm most threat hunting teams. What is needed then is an ability to down select and fuse only the threat intelligence that is most relevant to your actual business and the markets you serve. Smaller to medium sized organizations are best served by engaging a Managed Security Service Provider (MSSP) who can afford to maintain the continuous Threat Intelligence gathering and Threat Hunting competencies centrally and apply those across many companies in your market sector(s).
The one area of Threat Entropy that is within the power of any organization to control is that of the Insider Threat. Such threats can be known, evaluated, and appropriately addressed for those organizations willing to make the investment in relevant administrative controls (e.g., background checks) and technical controls (e.g., User/Entity Behavior Monitoring and Analytics or UBA/UEBA) in a formal Insider Threat Program.
Risk is not a problem to be solved. It is a characteristic of performing any task or function to achieve an important objective or outcome, while protecting the interests of all potential stakeholders. Further, Risk is never static, but dynamically evolves as the operational environment where the task/function is performed continues to evolve. Herein lies the inherent Entropy of Risk, which ignored or left unchecked can eventually lead to a range of potential negative consequences for a business.
To effectively reduce or control this entropy of Risk, organizations need to adopt a comprehensive and proactive approach to Risk Management; including investments in Risk Identification that documents specific Risk Scenarios (e.g., specific Threats, that might target and exploit specific Vulnerabilities, resulting in specific Consequences), qualitative and quantitative Risk Assessment, Risk Evaluation (against one’s Risk Tolerance – in terms of Risk Appetite or Risk Thresholds), Risk Treatment decision making (e.g., to reject, accept, mitigate and/or transfer the inherent Risk), Risk Mitigations, and continuous monitoring, review and updates to Risk Management strategies.
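A minimal sketch of the quantitative side of this process: each documented Risk Scenario is assigned an annualized loss expectancy (likelihood per year × impact), which is then evaluated against an explicit Risk Threshold to drive a Treatment decision. All scenario names, probabilities, and dollar figures below are hypothetical:

```python
def expected_loss(likelihood_per_year, impact_usd):
    """Simple annualized loss expectancy (ALE): likelihood x impact."""
    return likelihood_per_year * impact_usd

RISK_THRESHOLD_USD = 50_000  # hypothetical Risk Threshold expressing Risk Appetite

# Hypothetical Risk Scenarios: (name, annual likelihood, impact in USD)
scenarios = [
    ("Ransomware via phishing", 0.3, 400_000),
    ("Lost unencrypted laptop", 0.5,  20_000),
]

# Evaluate each scenario against the threshold to inform Risk Treatment
decisions = [(name, "treat" if expected_loss(p, impact) > RISK_THRESHOLD_USD else "accept")
             for name, p, impact in scenarios]
print(decisions)
```

The numbers matter far less than the discipline: writing the scenarios, the threshold, and the resulting decisions down is what keeps Risk Treatment explicit and revisitable as the environment evolves.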
One practical yet powerful concept that has proven to be effective at bringing order to the chaos of Risk Entropy is that of Risk Level Agreements™ (RLA) between an executive team and an organization’s board of directors. These agreements are based on specific Risk Scenarios that have been qualitatively and quantitatively assessed, include matrices of recommended Controls, and establish Cost/Benefit Analyses to help inform Risk Treatment decisions and investments.
In summary, Cyber Entropy is the amalgamation of many concurrent and potentially destructive market forces continuously pulling on the resources of every organization.
While the resulting complexity can seem overwhelming, addressing Cyber Entropy is possible with some discipline and deliberate investment in both staff time and supporting technologies, to implement effective governance, standardization, and control mechanisms across many levels of the organization.
Risk is high. Decisions are complex.
Effective strategy demands informed, objective tradeoffs based on experience.
Our team can help you develop a practical way forward for securing your Organization.
Copyright © 2024 Phenomenati - All Rights Reserved.