Message in a Bottle

Dr. Sybe Izaak Rispens
28 min read · Jan 11, 2024

Why the SEC’s recent legal complaint against SolarWinds has made good Cyber Security Risk Management significantly harder

CISOs now have personal liability on their desk. Some may rethink their career plans. © wernerwerke

The legal complaint that the U.S. Securities and Exchange Commission (SEC) filed against software company SolarWinds Corporation on October 31st, 2023, is worrisome.¹ The complaint not only states that the company that witnessed the largest supply chain attack of the century is liable for fraud and internal control failures but also that its top security executive is to be held personally liable.²

The SEC has recently filed similar legal complaints against various publicly traded companies. In those previous cases, however, the SEC argued that the organisations had assessed their risks with large gaps to reality; no individual was held personally liable for what the SEC considers “deceiving investors.”
In August 2021, for example, the SEC asserted that Pearson plc had characterised a data breach as an “unlikely risk” despite the company having already experienced a real cyber intrusion.³ Another example is the allegations the SEC raised against Blackbaud, Inc., in March 2023, for misleading disclosures about a ransomware attack that impacted more than 13,000 customers.⁴ In both cases, no employees faced any charges.
In the SolarWinds filing, however, the SEC makes the case that the organisation and the Chief Information Security Officer (“CISO”) structurally and willfully misled investors about the company’s cyber security practices. SolarWinds CISO Tim Brown, the filing noted, should be held personally liable for fraud and misconduct, including false and misleading statements and omitting material information, due to recklessness or negligence in not knowing about the dire state of SolarWinds’s security posture.

For the cyber security domain, this is a game changer. It is the first time the SEC has made such a legal case against any information security executive over their role in a company’s allegedly inadequate disclosures regarding cyber security risks.

The claim is framed as a case of fraud. To make that case, the SEC’s complaint discloses details that were previously unknown. This puts parts of the attack itself in a new perspective, especially questions about who knew what, when, and how this information was publicly disclosed (for an extensive reconstruction of the hack, see Wired’s long piece of May 2023).⁵

Yet, framing the issue as a case of fraud also raises questions. For example, what happens when legal experts evaluate cyber security risks? Are realities from the cyber domain easily distorted and taken out of context in a legal setting? On a more strategic level, what impact does this legal case have on future cyber security risk management? The unintended consequence of the SEC’s action may be that sound cyber risk management becomes harder, and concerns around the quality of cyber risk management programs may even increase.

My overall aim here is to take this case as a starting point for more generic questions such as:
• How can we collectively manage cyber risk better?
• What should the role of the CISO look like?
• And how can we contribute to solving one of the most significant global risks that humanity faces, one that has the potential to wreak havoc on a foundational pillar of modern society?⁶

Details of the SEC complaint

The SEC complaint states that between 2018 and 2021, SolarWinds concealed the organisation’s poor cyber security practices and heightened cyber security risks.⁷ It states that in public security assessments, SolarWinds, and Mr. Brown in particular, materially misled investors. The organisation claimed to have robust cyber security practices in place when, in fact, there were significant gaps.

For years, the SEC’s complaint states, SolarWinds kept poor cyber security practices, including failure to consistently maintain a secure development lifecycle for the software it produced, failure to enforce the use of strong passwords on all systems, and failure to remedy identity and access control problems which had persisted for years.⁸

Even if SolarWinds had not experienced any material cyber event, the SEC argues, the company would still be liable because of the wide gap between its public statements about cyber risk and its factual risk exposure and cyber security management practices.

For example, Mr Brown made public statements in podcasts and blog posts showcasing the company’s cyber security practices and hygiene as if the organisation were a model student of the NIST cyber security risk management framework.⁹

These public statements are seen as “deceptive” because, at the time of publication, Mr Brown already knew and acknowledged internally that the organisation had severe cyber security issues. Also, the security culture at SolarWinds was significantly below the level portrayed in his public statements.

While Mr. Brown published his blog posts, SolarWinds’ infosec personnel had grown increasingly cynical about the company’s security culture. For example, it took a senior security expert to explain in management meetings that a default password “password” is a poor security practice.¹⁰ (Mind you, this is the password that attackers ultimately used to hack SolarWinds). A freshly hired junior security engineer had initially observed this, yet apparently, it took a senior person to talk truth to power. It also indicates the level to which SolarWinds’ management lacked an understanding of fundamental cyber security principles.

The company also omitted material aspects of its security process maturity assessment. Such assessments typically use a “maturity level” scale, a standardised and normalised self-assessment method combining a theoretical framework developed at Carnegie Mellon University with industry best practices. It consists of five maturity levels, ranging from “1-Initial” to “5-Optimizing”, each representing a stage in an organisation’s capability to manage and improve its cyber security risk effectively.¹¹ And on that scale, SolarWinds’ score left much to be desired.

SolarWinds’ assessment evaluated around 300 standardised cyber security controls, defined in the NIST risk management framework (NIST 800–53).¹² In September 2019, the company had around six per cent of the NIST framework controls in place.¹³ As a result, the overall maturity level of the self-assessment was the lowest possible.

Another assessment, conducted a year later, showed significant progress: in 2021, the company concluded that 40% of the recommended NIST controls were in place, which still left 60% of them to be implemented.
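To make the arithmetic behind these figures concrete, here is a minimal Python sketch. The mapping from implementation rate to a 1–5 maturity level is an assumption for illustration; neither NIST nor the complaint publishes these exact cut-offs.

```python
# Illustrative sketch of maturity self-assessment arithmetic.
# The rate-to-level thresholds below are hypothetical, not NIST's.

def implementation_rate(implemented: int, total: int) -> float:
    """Fraction of catalogue controls assessed as being in place."""
    if total <= 0:
        raise ValueError("catalogue must contain at least one control")
    return implemented / total

def maturity_level(rate: float) -> int:
    """Map an implementation rate to a coarse 1-5 maturity level."""
    for cutoff, level in [(0.9, 5), (0.7, 4), (0.5, 3), (0.25, 2)]:
        if rate >= cutoff:
            return level
    return 1  # "1-Initial": the lowest possible score

# Figures cited in the complaint: ~300 controls, ~6% in place in 2019,
# 40% in place at the later assessment.
rate_2019 = implementation_rate(18, 300)   # 0.06
rate_2021 = implementation_rate(120, 300)  # 0.40
```

Under any plausible set of thresholds, a six per cent implementation rate lands at the lowest level, which is the point the complaint makes.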

Mr. Brown did not mention this in his mandatory periodic cyber security risk reporting to the SEC. The security process framework was stated as generally aligned with the NIST cyber security risk management framework, and cyber security risks were described in the quarterly reporting only in generic and theoretical terms. The SEC claims that these cyber risk reports were materially misleading because of this omission, which violates the securities laws mandating that companies disclose all known and material risks.

And then one of those risks became reality. On December 12th, 2020, SolarWinds was informed by the CEO of security company Mandiant that it had been hacked. Mandiant announced it would go public with this massive breach, which it had dubbed “Sunburst”, within 24 hours. This timeframe allowed SolarWinds to publish an announcement first.¹⁴ The timeline was not negotiable.
SolarWinds’s mandatory incident report described, again, all vulnerabilities in hypothetical terms, although Mandiant had unequivocally shown that the company had been severely compromised. The company knew at this time that the issue had been actively exploited against customers of SolarWinds’ “Orion” software — a platform designed for network and systems management, offering features such as performance monitoring, fault detection, and configuration management.¹⁵

The SEC claims that Mr. Brown, as the CISO of the organisation, aided and abetted this violation. The complaint states that he approved all public statements regarding cyber security incidents, practices, and risks, and he should have been aware that these statements should always be carefully crafted for any publicly traded organisation, especially for supply chain companies whose software products are ultimately deeply ingrained in customers’ systems.¹⁶ The SEC now seeks a permanent bar for Mr. Brown from holding a senior management position and a monetary sanction.

The legal filter

At first glance, the SEC has a strong, reasonable, and necessary case against SolarWinds. But closer inspection leads to an important question: does translating cyber risk to a legal domain by non-subject matter experts cause a dichroic shift that turns blue into red and green into orange?

For example, the SEC complaint cites personal messages from security operations personnel highly frustrated by the lack of basic security in the company as “proof” of a bad security culture. Yet, as most people in the cyber domain know, this frustration is not uncommon.
Security teams occupy one of the most high-stress positions in an organisation. Cyberthreats never stop, so there is an expectation of being available 24/7/365. Too often, cyber security teams do not receive adequate internal support and are blamed when there are system failures or performance issues that they did not cause. Then there is the unpredictable work environment, the constant threat evolution landscape, the increasing complexity of attacks and rising attacker sophistication, and the mounting burden of security alerts upon alerts. This can easily lead to low morale, high turnover rates, emotional exhaustion, stress, and, indeed, cynicism.

Security professionals are somewhat like veterinarians, who are often medically able to help but unable to administer care because the owners of pets are not willing or able to afford the necessary treatments and surgeries. This is called the “caring-killing paradox”: doctors who devote themselves to treating pets must euthanise many of their patients because of costs.¹⁷ This leads to high levels of emotional suffering in veterinarians to such an extent that suicide rates among veterinarians are staggering.

Similarly, security professionals are often capable of, yet hindered in improving things. The necessary resources are not given, or management is unwilling to decide on even bare spending levels for cyber security. Many security professionals feel disconnected, disinterested, and emotionally detached from their jobs and organisations.

What about the senior engineer who needed to tell management that “password” is not a good idea? From a legal perspective, this may prove an appalling level of cyber competence at higher management levels. However, the example is not an exception. Research shows that the number of directors at S&P 500 companies with cyber security competencies is at a staggeringly low rate of around two percent.¹⁸

This has enormous implications for oversight effectiveness.¹⁹ It means leadership often cannot comprehend cyber risk reports, has a lack of understanding of key cyber security metrics, fails to ask even the most basic relevant questions, overly relies on the CISO for cyber guidance, cannot understand the implications of cyber security metrics for strategic business goals, and lacks fundamental awareness about cyber regulations and compliance requirements.
Most damaging, directors and board members often show no interest in cybersecurity risk management at all. This can lead to a fundamental flaw in cyber security governance. In practice, a CISO does not simply consult leadership on cyber security matters but also has to train and coach on cyber security concepts. Sometimes, even the process of cyber security oversight itself needs to be explained by the CISO, which can result in circular governance between the board and management, as the terms of oversight are primarily dictated by the supposed subjects of that oversight.²⁰

So, suppose the CISO at SolarWinds participated in “filtering” reports to the board to obfuscate potentially damaging information. How much of that is the liability of one particular individual, especially when this individual is subordinate to a top-down structure that is itself unhelpful to cyber governance?

This is how things related to cyber security work in most organisations, making it a structural issue. One thing contributing to this is the unwillingness of organisations to have people on the board with expertise on a particular topic. They want board members to bring broad business experience so they can discuss anything that arises in the organisation. People with a broader view are seen as more beneficial than people with narrow technology experience. As a result, most boards lack the cyber security literacy needed to detect information filtering in cyber risk reporting.
With such levels of cyber competency at the top of organisations, it’s no wonder that the popularity of the password “password” went up from number 7 in 2019 to number 1 in 2022.²¹

What about the self-assessed cyber security maturity level at SolarWinds? Organisations do these assessments voluntarily, often using structured approaches such as the one suggested by NIST. But these assessments are, contrary to what the SEC complaint suggests, not objective science.

For example, how does one interpret the effectiveness of a discretionary access control? On paper, the process that describes how system owners decide who has access to a resource and what actions they can perform may be correct. Processes and tools guide system owners in doing their periodic access rights revalidation. Metrics allow for monitoring the whole thing. Periodic reporting is established. Details are continuously improved upon.

At an initial glance, this is a mature process. Yet, do system owners have enough contextual knowledge to decide whether individuals are entitled to specific permissions? This critically depends on the owner’s understanding of the user’s role and responsibilities. Given the available information for the user and the asset, is such a level of understanding even possible? Is there a four-eye principle in place that validates the decision-making? What is the data quality used to assess the risks associated with granting or denying people access to a resource? Do assessors record such levels of uncertainty in their decision-making? And why is discretionary access control necessary in the first place? Would it not be much better to implement a role-based access model?
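The difference between the two access models in question can be sketched in a few lines of Python. All resource, user, role, and permission names below are hypothetical, chosen only to illustrate the contrast.

```python
# Discretionary access control (DAC): each resource owner keeps an ad-hoc
# grant list, so correctness depends on every owner's individual judgement.
dac_grants = {
    "billing-db": {"alice": {"read", "write"}, "bob": {"read"}},
}

def dac_allowed(resource: str, user: str, action: str) -> bool:
    return action in dac_grants.get(resource, {}).get(user, set())

# Role-based access control (RBAC): permissions attach to roles, users to
# roles. Entitlement reviews then validate a handful of roles instead of
# thousands of individual owner decisions.
role_permissions = {
    "accountant": {("billing-db", "read")},
    "billing-admin": {("billing-db", "read"), ("billing-db", "write")},
}
user_roles = {"alice": {"billing-admin"}, "bob": {"accountant"}}

def rbac_allowed(resource: str, user: str, action: str) -> bool:
    return any((resource, action) in role_permissions.get(role, set())
               for role in user_roles.get(user, set()))
```

The two functions answer the same question, but the revalidation burden differs: under DAC, every entry in every owner’s grant list must be justified; under RBAC, only the role definitions and role assignments need review.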

Thus, the criteria for evaluating the discretionary access control are open to interpretation. Do you look at formal compliance and see if everyone sticks to the policies and guidelines, or do you ask what the purpose of a control is? Also, an organisation with a low or high self-assessed maturity level can mean different things, depending on how you view it. Do organisations with low levels automatically have a terrible security posture, as the SEC complaint implies? Or is it the other way around? Are organisations that self-assess high levels of control effectiveness the ones to be more critical of because high marks may mean that the self-assessment itself may be broken?
One of the systemic issues is that system owners are often overly confident. This is a common phenomenon, as many cognitive biases and organisational factors can cause overconfidence. Engineers, especially the smart and competent ones, tend to overestimate their abilities and underestimate the risks involved: expert bias. Sometimes, engineers may be inclined to seek information that confirms their pre-existing beliefs about the effectiveness of controls while neglecting or downplaying contradictory evidence: confirmation bias. I have seen many first-hand examples of excellent engineers convinced that adverse events are not likely to happen to their systems, leading to gravely underestimated probabilities of cyber risks: optimism bias. And engineers who have experienced success in the past may become complacent and overly confident in their abilities: past success bias.

How organisations can handle such biases depends on the security culture. If there’s a culture that discourages dissenting opinions, or if there is low-risk awareness, or high performance pressure in production lines, individuals may become overly confident in their control measures. Even worse, the security culture in some organisations sees risk assessments as a formality. In this case, nobody even notices specific threats or vulnerabilities, or management has explicitly directed staff to play down cyber risks. Sometimes, it is even considered good practice to explicitly target the lowest possible levels of risk after mitigation because “this is what auditors expect”.

Imagine what most organisations will do with their control effectiveness and maturity self-assessments after reading the SEC’s complaint against SolarWinds. The unintended consequence of the complaint may be that low residual risk levels are seen as a legal necessity, no matter what the technical realities are. What happens if legal teams dictate the outcome of integrating the cyber capability maturity model?

The net effect of the SEC complaint may be that the gap between compliance and cyber risk management will grow wider. To be meaningful, maturity assessments must be conducted in a safe space where every contributor can be as honest, open, and transparent as possible. There should be room for debate, uncertainty, self-awareness, and a critical reflection on the outcome. People should be safe to argue constructively, disagree, and compromise.

If these conditions are not given, and anything anyone says can be used against them in court, the net result will be “compliance theatre”. This will lead to resource allocation, both financial and human, to ensure compliance-related activities are prioritised to meet the threshold for avoiding penalties. These resources are no longer available for implementing a robust security strategy that can make things safer. So, the SEC’s bold actions may in fact worsen the cyber security posture of organisations, and may even lead to more, not less breaches.²¹ᵇ

The role of the CISO

Regarding the role of the CISO, regulators now need to be more precise about what the role entails.

In theory, the CISO oversees developing and implementing comprehensive information security policies and practices to safeguard sensitive data. The CISO assesses and mitigates cyber security risks, ensuring compliance with regulations and industry standards. To do this, the CISO must collaborate with stakeholders to establish a robust cyber security framework, conduct regular risk assessments, and implement measures to protect against cyber threats. This role extends beyond technical aspects, encompassing strategic leadership to fortify the organisation’s resilience against evolving security challenges.

In practice, however, most CISOs lack strategic leadership and organisational leverage. The targets of a CISO are often subordinated to cost, performance, efficiency, and speed to market. Most CISOs cannot make budget decisions, either because they don’t have a sufficiently sizeable independent budget in the first place or because of long procurement or management approval processes. The result for their organisations is slow response and increased risk. The SEC complaint ignores this reality.

In many organisations, CISOs are not invited to participate in strategic decision-making. They don’t have a big say in the organisation’s cyber security budget. CISOs are often only involved in evaluating the cyber risk of large new projects when the decision on whether the projects should be pursued, how and when, has already been made. The same goes for mergers and acquisitions or new business strategies. CISOs are often only marginally involved in the development process and the software development life cycle to avoid any obstacles to market speed. Security is mostly seen as hindering development efficiency, leaving CISOs often sidelined.

When companies are required to disclose information on significant events that are considered material to investors (in the U.S., in an 8-K filing, which the SEC complaint against SolarWinds states has been untruthful), or to periodically report risk factors (in the U.S., in a 10-K form), it is mostly the legal department that drives the reporting. Usually, only the bare minimum of detail on past breaches or potential risks is reported. CISOs often do not even see such reports before they are filed.

Security professionals are somewhat like veterinarians, who are often medically able to help, but unable to administer care because the owners of pets are not willing or able to afford the treatments and surgeries that are needed. This is called the “caring-killing paradox”: doctors who devote themselves to treating pets must euthanise many of their patients because of costs. © wernerwerke

The reporting lines of CISOs vary widely across organisations. Often, the CISO’s reporting line fails to provide a strong voice on the executive leadership team to advocate appropriately for security. Many CISOs must accept that the person representing them at the executive level has little influence over the CEO or CFO. This all but dooms any cyber security programme, as most suggestions will be disregarded — even if they have merit — as long as they are perceived as colliding with business plans.

If CISOs are now held personally liable for these systemic failures, this will have significant unintended consequences for managing cyber risk, not just in the U.S. and not just for publicly traded companies. The SEC’s action sends ripples through the industry, including closely held businesses. For example, suppose a company is a third-party service provider to a publicly traded company. In that case, any material data breach or attack may require significant changes in how the company responds and discloses.

The first immediate consequence may be an unprecedented talent exodus. The talent shortage has always been an issue in the cyber security domain, as senior CISO candidates often lack the right level of technical knowledge, don’t have the right management experience, or aren’t the right cultural fit. CISOs now do not just face the task of prevention and responding to material incidents. They also need to navigate the organisational, political, and legal waters when reporting up the command chain while at the same time juggling the requirements of official regulatory disclosures of risks and material incidents. Personal liability is not improving things here.

If CISOs now realise that most executives continue to structurally put profits ahead of security, while at the same time, their role demands high levels of complicity in misleading the organisation, the public, lawmakers, and regulators about cyber risks, many may choose to move on. Not every CISO is born a whistleblower²².

Seasoned CISOs may opt for early retirement. Organisations may choose to reframe or redefine the role of the CISO. Microsoft announced at the beginning of December that a “major shake-up” of its cyber security management structure had been decided. The SEC ruling is not mentioned in the announcement. Still, since the changes take effect almost immediately, there are probably more drivers for the move than the “rapidly evolving threat landscape”. The changes include that Microsoft’s current CISO, Bret Arsenault, will shift — after 14 years — to a newly created role called “Chief Security Advisor”.²³

This means that the role of CISO, which has existed for some 20 years as the person dedicated to security at an organisation, now needs fundamental re-clarification and recalibration.²⁴ This includes the responsibilities of the role, its reporting lines, and its methodology.

As for the role, many questions still need to be answered. Is it a technical role? Is it a management role? Is it a legal and regulatory role? Is it an executive role? Is it an advisory role? Suppose the CISO role evolves in the direction of a Chief Financial Officer or Chief Compliance Officer. Will organisations then be willing to give the role more operational and strategic authority, or will regulators mandate it?

What about reporting lines? Should the CISO report directly to the CEO, to the Chief Operations Officer (COO), to the Chief Financial Officer (CFO), or to the Chief Information Officer (CIO)? Will the CISO be a full key executive team member, collaborating with other leaders and having the authority and budget to manage the company’s cyber exposure?

As for methodology, unlike the CFO, CISOs currently don’t have widely accepted, objective metrics to report on. Most CISOs are unsure what to measure in the first place and often don’t even know how to measure it. The industry is still focused on the risk matrix, which uses ordinal scales to estimate the probability and impact of a given risk based on expert intuition.

Not all risks are created equal © wernerwerke

Various versions of scores and risk matrices are endorsed and promoted by several prominent organisations, including the National Institute of Standards and Technology (NIST), the International Organization for Standardization (ISO), the Open Web Application Security Project (OWASP), and MITRE. Yet scientific research, including that of Nobel Prize winners, has demonstrated unequivocally, for almost as long as the role of the CISO has existed, that risk matrices do not work.

There is no evidence that scoring cyber risk in a risk matrix improves judgment. On the contrary, there is overwhelming evidence in published research that risk matrices as a method are no better than alchemy, astrology, witchcraft, or bloodletting. They are worse than useless.²⁵ Risk matrices add noise and error to the judgment process and promote the appearance of rationality while providing no measurable improvement in estimating risk. They are “compliance theatre”.
It is time that organisations and regulators finally recognise and address this. It is time that quantitative, probabilistic methods are mandated as the primary methodology for measuring, analysing, and reporting cyber risk.²⁶
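To illustrate what a quantitative, probabilistic approach can look like, here is a minimal Monte Carlo sketch in the spirit of FAIR-style analyses: event frequency is sampled from a Poisson distribution and per-event loss from a lognormal distribution, instead of assigning ordinal scores. All parameter values are hypothetical.

```python
import math
import random

def simulate_annual_losses(event_rate, loss_median, loss_sigma,
                           trials=20_000, seed=1):
    """Monte Carlo annual-loss simulation: Poisson event counts,
    lognormal loss per event (parameterised by its median)."""
    rng = random.Random(seed)
    mu = math.log(loss_median)

    def poisson(lam):
        # Knuth's algorithm; adequate for the small rates used here.
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    losses = []
    for _ in range(trials):
        n = poisson(event_rate)
        losses.append(sum(rng.lognormvariate(mu, loss_sigma)
                          for _ in range(n)))
    return losses

def exceedance_probability(losses, threshold):
    """Fraction of simulated years whose total loss exceeds `threshold`."""
    return sum(1 for x in losses if x > threshold) / len(losses)

# Hypothetical parameters: on average 0.5 material incidents per year,
# median loss of $2M per incident.
losses = simulate_annual_losses(event_rate=0.5, loss_median=2e6,
                                loss_sigma=1.0)
```

Unlike a “high/medium/low” cell in a matrix, the output supports falsifiable statements such as “the probability of annual losses exceeding $5M is X%”, which can be compared against insurance limits, budgets, and actual outcomes.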

Regulatory bodies play a crucial role here. They must incentivise research and analyses, and synthesise the excellent academic work that already exists, to understand what improved cyber risk measurement requires. They should engage with the major organisations behind international standards and frameworks, and with industry experts, including CISOs and scientists, to gather insights and define a proper risk management methodology based on sound scientific principles. These new methodologies should then be transposed into globally binding rules.

Transformative ideas needed

What needs to be done? The first and most transformative change is defining the problem correctly, which means defining it holistically.²⁷ Stakeholders from different countries, industries, sectors, and their regulators must cooperate more to foster global collaboration and address shared challenges.
In cyberspace, stakeholders acting in their self-interest deplete a shared resource: the availability of secure systems. If, as in the case of SolarWinds, an organisation that is a crucial part of the global software supply chain focuses primarily on its business goals, an attack on that organisation will ultimately be to the detriment of everyone in the cyber domain. The negative consequences impact everyone globally.
This is called the “tragedy of the commons”. In the cyber domain, the tragedy of the commons has reached a scale only comparable to global climate change. Therefore, this problem must be addressed before any substantial improvement can happen.

Solving the tragedy of the commons is, by its very nature, non-trivial. In cyberspace, the set of stakeholders is disparate. It covers Fortune 500 companies, SMEs, lone hackers, gangs, organised criminal organisations, and nation-state actors. These actors share one common space where everything is just a few milliseconds away. Yet their contexts and incentives are fundamentally incompatible: one actor’s main driver may be making a profit while another aims at destroying enemy nations.

In cyberspace, actors behave in self-interest and prioritise short-term gains over long-term sustainability. There is unclear ownership of “cybersecurity,” not just on the level of organisations but also on national and international levels. We are lightyears away from establishing and enforcing unified principles for cybersecurity on a global scale.
Achieving coordinated cooperation between stakeholders is not just a classic collective action problem, in which one actor fears that others will not contribute their fair share, leading to a lack of collective effort, but it is also a problem of domains.

Take, for instance, the fundamental divides between the societal domains of organisations and policymakers. Organisations often navigate diffuse regulatory structures and nested cybersecurity frameworks by gladly exploiting their loopholes, ambiguities, or inefficiencies. Many organisations won’t hesitate when there is an opportunity to take advantage of differences in cybersecurity treatment across jurisdictions or to take a minimalist approach to compliance.
Around the world, one of the most persistent and widespread complaints made by organisations about government regulation has been that it is too slow, too constraining, too cumbersome, and imposes too many unnecessary costs. No wonder cybersecurity programs in many organisations aim to technically comply with the law’s letter while enabling the business to forge ahead with technical breakthroughs, short times to market, beating the tough competition, and maximising profits.

Regulators tend to respond to this pressure for more flexibility, speed, innovation-friendliness and efficiency by taking a gradualist approach to reform. This usually means that they focus on discrete, process-oriented problems. The SEC complaint is a textbook example of this: to improve cyber security in publicly traded companies, a classical “ex-post” approach is being chosen. The main aim is to apply existing processes for regulators to assess and respond to cyber security incidents after they have occurred.
The SEC’s filing penalises organisations and individuals for security breaches. This may seem rational from the regulators’ perspective — taking bold enforcement steps against hacked organisations and encouraging improved cyber security practices. A forcefully imposed “command-and-control” regulation may seem completely logical and sensible, yet, as outlined previously, it may have large, unintended consequences for cyber security.
One such consequence may be that security levels at publicly traded companies may worsen due to an increased focus on compliance and the negative impact on the talent pipeline in cybersecurity management.

This will not bring us any nearer to reducing cyber security risk. What is needed is not an ex-post approach that holds organisations accountable for cyber security in the short term but a much more strategic, long-term, “ex-ante” approach, in which policymakers focus on preventive measures and genuinely devote their best people and resources to improving cybersecurity risk management.
We need transformative ideas that yield substantive, global, and enduring societal benefits for this.²⁸ This includes asking fundamental, unsettling questions about the nature of policy-making — for example, how much can an “ex-post” attitude contribute to improving a domain? — as well as creating new models of sustainable cyber security²⁹.

Second, we need to prioritise principles over rules. Rule-based regulations are prescriptive and specific, providing detailed guidelines and explicit instructions on how organisations should manage cyber risk. For example, most organisations and regulators focus on NIST 800–53, the catalogue of hundreds of security controls meant to guide organisations in implementing specific security measures. The SEC complaint against SolarWinds amplifies this focus on rules by using low control implementation rates as a central legal argument.

Among the many problems and limitations of rule-based regulation are rigidity and inflexibility: NIST’s control catalog, for example, requires role-based access control (RBAC) for improving access enforcement³⁰. In most cases, this is an excellent control, yet when compliance is the leading implementation goal, there are many ways of being compliant without improving security levels. There may also be situations where access requirements are highly dynamic and context-dependent. In such cases, implementing RBAC will be a struggle.
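The RBAC limitation described above can be sketched in a few lines. The roles, permissions, and the “incident mode” condition below are entirely hypothetical; the point is only that a static role check can be fully compliant on paper while ignoring context that actually matters for security.

```python
# Illustrative sketch (hypothetical roles and resources): a static RBAC
# check compared with a context-aware check of the same request.

ROLE_PERMISSIONS = {
    "engineer": {"read:source", "write:source"},
    "auditor": {"read:logs"},
}

def rbac_allows(role: str, permission: str) -> bool:
    """Static RBAC: the decision depends only on the role."""
    return permission in ROLE_PERMISSIONS.get(role, set())

def context_allows(role: str, permission: str, *, incident_mode: bool) -> bool:
    """Context-aware check: the same role may be denied when the
    environment changes, e.g. during an active security incident."""
    if incident_mode and permission.startswith("write:"):
        return False  # freeze writes while an incident is ongoing
    return rbac_allows(role, permission)

# A purely compliance-driven audit only ever sees the first check.
assert rbac_allows("engineer", "write:source") is True
assert context_allows("engineer", "write:source", incident_mode=True) is False
```

A checkbox audit of the first function passes; whether the second, context-dependent behaviour exists is exactly what rule-based compliance tends not to measure.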
Next, rules are, by definition, backward-looking: they are designed to handle known scenarios. Yet the cyber security posture of most organisations is highly dynamic; unforeseen vulnerabilities and attacks surface almost daily. Open, honest assessments of the controls by risk owners and subject matter experts may therefore deviate significantly from the legally desired compliant state.

Then, if you make rules, you create rebels. The SEC’s complaint against SolarWinds shows this: organisations seek to exploit NIST or ISO framework loopholes, which generally leads to technically complying with the rules while deviating from the intended meaning.
Furthermore, technological change in cyber security is so rapid that rule-based regulations often fail to catch up. Regulators may audit compliance with controls that made sense some years ago but that, due to (primarily cloud-based) technical innovation, are now antiquated, irrelevant, or even harmful because they enable rather than reduce new threats and vulnerabilities. Rule-based regulations also rarely leverage the expertise within industries.

One of the biggest concerns of CISOs worldwide after the SEC’s complaint against SolarWinds is that their expertise is not being heard. CISOs hope and expect the SEC to start a dialogue with cybersecurity subject matter experts, or face an industry filled with individuals who mindlessly focus on rules.

So, we must shift the focus from rule-based governance of cyber security risk to principle-based governance. The question then is: what kind of principles make sense?
NIST’s guiding principle is that organisations must adopt a risk-based security approach. Its risk management framework offers a systematic process for identifying, assessing, mitigating, and continuously monitoring security risks.

This is a how-to principle, not a grounding principle for reasoning and problem-solving. It allows cybersecurity to be broken down into operational steps for individual organisations and guides them in assessing and managing their security measures based on the potential risks they face. However, it is blind to fundamental problems in the cyber domain, for example those that lie outside the scope of any individual organisation.
In other words, it is not a foundational concept. It can and should be broken down further into other principles.
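To make the “how-to” character of the risk-based approach concrete, the identify–assess–mitigate–monitor loop can be reduced to a minimal sketch. The risks, likelihoods, and loss figures below are invented for illustration and are not drawn from any standard.

```python
# Minimal sketch of a risk-based prioritisation step: score hypothetical
# risks by annual expected loss and rank them for mitigation.
# All names and numbers are illustrative assumptions.

risks = [
    {"name": "unpatched VPN gateway", "likelihood": 0.3, "impact": 500_000},
    {"name": "phishing of finance team", "likelihood": 0.6, "impact": 120_000},
]

def assess(risk: dict) -> float:
    """Score a risk by its annual expected loss (likelihood x impact)."""
    return risk["likelihood"] * risk["impact"]

# Mitigate the highest-ranked risks first, then monitor and re-assess.
for risk in sorted(risks, key=assess, reverse=True):
    print(f'{risk["name"]}: expected annual loss {assess(risk):,.0f}')
```

Useful as an operational recipe inside one organisation, this loop says nothing about risks that arise between organisations, which is precisely the blind spot noted above.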

We need new, freshly created first principles for the cyber domain. There may be no need to reinvent the wheel here. The principles may be closely related to those from other regulated domains, for example, accounting.
The SEC itself applies such foundational principles in financial matters in the form of the Generally Accepted Accounting Principles (GAAP). The five principles of GAAP are universal enough to be meaningful in other domains, and they can and should indeed be adopted for cyber security. Let’s look at each and see how this could be done.

1. Objectivity: A company’s financial statement should be based on objective evidence, and so should cyber security reporting. There are ample opportunities for quantification of cyber risk.
Objective evidence in the cyber security domain is tangible, verifiable information that can be used to support or demonstrate the effectiveness of cyber security controls and practices, or compliance with security policies.
There are plentiful sources of evidence in cyber security that promote objectivity: tons of engineering metrics cover data categories ranging from network traffic and log data to auto-generated timelines of events. Vulnerability scans quantify weaknesses in systems and applications. Patch quality can be measured. Penetration test reports provide hard data on how well systems and applications withstand simulated attacks. Access control records offer insight into how well joiners, movers, and leavers are provisioned, de-provisioned, and revalidated. Encryption keys’ secure generation, distribution, and storage can be measured and analysed.

The most crucial step to applying the principle of objectivity to the cyber domain is to define a standardised set of objective key performance indicators (KPIs) for cyber security that all organisations in all markets must provide. In an unprecedented international effort, regulatory bodies should define such KPIs and ensure they are mandatory globally. Stakeholders in the cyber domain should be mandated to acquire minimum competence levels in methodologies that allow for objective measurement in cyber security.³¹
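As an illustration of what such an objective, standardised KPI could look like, consider “median days from vulnerability disclosure to patch deployment”. The metric choice and all records below are hypothetical; the point is that the number is computed from verifiable data rather than from self-assessment.

```python
from datetime import date
from statistics import median

# Hypothetical patch records: (vulnerability published, patch deployed).
patch_records = [
    (date(2023, 9, 1), date(2023, 9, 8)),
    (date(2023, 9, 5), date(2023, 10, 2)),
    (date(2023, 9, 20), date(2023, 9, 23)),
]

def median_days_to_patch(records) -> float:
    """One candidate KPI: median days between disclosure and deployment."""
    return median((deployed - published).days for published, deployed in records)

print(median_days_to_patch(patch_records))  # 7
```

Because the inputs are timestamps from patch management systems, two independent auditors computing this KPI from the same records must arrive at the same number, which is the essence of the objectivity principle.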

2. Materiality: In the financial domain, something is material if it “would affect the decision of a reasonable individual”³². The current discussions and uncertainty about what “material” means in the cyber security domain, and what the SEC expects organisations to report on and what not, can and should be eliminated by providing a universal definition of materiality.

Materiality in the cyber domain needs to be defined in terms of its impact on the four main protection goals: confidentiality, integrity, availability, and authenticity.
How to do this objectively still needs to be worked out; subject matter experts and regulatory bodies should align on the topic internationally. There is ample research available.

Note that I intentionally mention four protection goals here and not, as most frameworks currently do, three. German regulators in particular have insisted for years that authenticity cannot be subsumed under integrity, and I agree. Authenticity, or provenance, must be treated as a separate protection goal. Together with integrity, it is going to define cybersecurity in the next decade — probably even more so than confidentiality and availability. It will have a massive impact on “materiality”.
Yet such details would need to be analysed by an independent international body. The outcome of this alignment process should be a unified, normalised, and normative framework for “materiality.”
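One way such a unified framework could operationalise materiality is to rate an incident’s impact on each of the four protection goals and apply an agreed threshold. The 0–3 scale and the threshold below are illustrative assumptions, not a standard; as argued above, the actual framework still has to be defined internationally.

```python
from dataclasses import dataclass

# Hypothetical materiality scoring sketch: rate an incident's impact
# (0-3) on each of the four protection goals. Scale and threshold are
# illustrative assumptions only.

@dataclass
class ImpactAssessment:
    confidentiality: int
    integrity: int
    availability: int
    authenticity: int

    def is_material(self, threshold: int = 3) -> bool:
        """Material if any one goal is severely hit, or the combined
        impact reaches the threshold."""
        scores = [self.confidentiality, self.integrity,
                  self.availability, self.authenticity]
        return max(scores) >= 3 or sum(scores) >= threshold

breach = ImpactAssessment(confidentiality=2, integrity=0,
                          availability=1, authenticity=0)
print(breach.is_material())  # True: combined impact 3 reaches the threshold
```

Whatever the eventual definition looks like, the benefit of this shape is that “material or not” becomes a reproducible function of assessed impact rather than a judgment call made afresh by each issuer.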

3. Consistency: This principle also makes much sense in the cyber domain. You don’t want organisations to use one definition, principle, or method in one quarter and another in the next. Consistency in cyber security risk management promotes stability, accuracy, compliance, and adaptability. It is a necessary (but not sufficient) condition for maintaining a robust security posture in the face of evolving cyber threats.

4. Conservatism: Just as the conservatism principle in the financial domain avoids overstatement of assets and income, in the cyber domain it avoids overconfidence: potential cyber security dangers and losses are recognised early, preferably before they materialise. To address the tragedy of the commons, such considerations should not be limited to the organisation’s own scope but must also reflect the possible impact of the organisation’s actions on supply chains, for example through go-to-market strategies, purchasing choices, or vendor selection.

5. Cost Constraint: This principle is so obvious that it almost seems a truism. But just as in financial reporting, cyber risk reporting should not impose more costs than the value of the actionable knowledge gained by it; for example, you don’t want to spend more on measuring the cyber risk exposure of a particular system than the cost of losing that system.
There are costs associated with obtaining and presenting information in the cyber domain, and there are practical limitations to the resources available for preparing cyber security statements. Risk identification and aggregation can be incredibly resource-intensive, involving extensive data analyses. AI, in particular large language models, can make risk identification and aggregation more granular while providing more insight.
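The cost-constraint comparison above can be written down directly. The figures are made up; the sketch only shows the decision rule of weighing measurement cost against the expected loss the measurement could inform us about.

```python
# Minimal sketch of the cost-constraint principle, with invented numbers:
# only measure a risk if the measurement costs less than the expected
# annual loss it helps to manage.

def worth_measuring(measurement_cost: float,
                    loss_if_compromised: float,
                    annual_probability: float) -> bool:
    """Compare the cost of quantifying a risk with its expected loss."""
    expected_annual_loss = loss_if_compromised * annual_probability
    return measurement_cost < expected_annual_loss

# Spending 50,000 to quantify a system whose expected annual loss is
# 200,000 * 0.10 = 20,000 violates the principle; spending 5,000 does not.
print(worth_measuring(50_000, 200_000, 0.10))  # False
print(worth_measuring(5_000, 200_000, 0.10))   # True
```

In practice both inputs are uncertain, which is why the principle argues for proportionate, not maximal, measurement effort.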

The outcome of this kind of first-principles thinking will be that the context for cyber security management changes significantly. Instead of focusing on rules and the constant open invitation to move toward “compliance theatre”, it will focus on breaking down the hard problems we face in cyber security. It will make organisations and their investors want to understand the core of their security problems. It will lead stakeholders to question assumptions, whether their own or those made by everyone else in the cyber domain (which also includes this article).

We need the cyber community to challenge conventional thinking and encourage creative solutions collectively.
What is needed is a new culture of cyber security management that embraces curiosity, learning, continuous improvement, and independent thought.

Next to defining the problem correctly and moving from rules to principles, the third necessary change is surprisingly concrete and simple.
It is to provide a clear, universal, international definition or guidance on allocating accountability for cyber security in organisations. The National Association of Corporate Directors (NACD) has provided excellent input for this in their “2023 Director’s Handbook on cyber-risk oversight”³³. It states that for decades, cyber risk was considered part of information technology risk, and its oversight was largely delegated to engineering and security teams within an organisation. But corporate leaders now must see cyber risk for what it is: a strategic enterprise risk. Corporate leaders own cyber risk.³⁴

Outlook

In the vast and fast-changing ocean of cyber security, CISOs now find themselves in a position where they need to navigate without clarity on their authority, accountability, and methodology. The SEC complaint against SolarWinds has complicated the role significantly by making people in the cyber domain personally liable for decades of systemic issues in cybersecurity risk management.

To be sure, it is laudable that the SEC finally addresses the problems of cyber security. Today, cyber attackers increasingly exploit vendors that are essential for the global supply chain. This way, attacks spread to other organisations that depend on those vendors. This means that the consequences of even one single attack, as in the case of SolarWinds, can be devastating.³⁵

Organisations must understand how important it is to address serious known cybersecurity deficiencies. And it is good that executives now know that making materially false and misleading risk disclosures is a severe offence. It is a big step forward that organisations learn that internal control failures for cyber security will lead to regulatory scrutiny and high fines.

Now is the moment to also address the structural issues in solving the global challenges within the cyber domain.

References

(1) SEC Charges SolarWinds and Chief Information Security Officer with Fraud, Internal Control Failures
(2) Rispens, Sybe, Why the World Needs a Software Bill Of Materials Now, 2021.
(3) SEC, Press Release: SEC Charges Pearson plc for Misleading Investors About Cyber Breach [Last accessed: 16. November 2023]
(4) SEC Charges Software Company Blackbaud Inc. for Misleading Disclosures About Ransomware Attack That Impacted Charitable Donors [Last accessed: 16. November 2023]
(5) Zetter, Kim, “The Untold Story of the Boldest Supply-Chain Hack Ever”, in: Wired, May 2, 2023 [Last accessed: 1. November 2023]
(6) World Economic Forum, “Global Cybersecurity Outlook 2023”, https://www3.weforum.org/docs/WEF_Global_Security_Outlook_Report_2023.pdf
(7) SEC, Case 1:23-cv-09518, https://www.sec.gov/files/litigation/complaints/2023/comp-pr2023-227.pdf. (From here on abbreviated as “SEC, 2023”) [Last accessed: 15. November 2023]
(8) SEC, 2023, p. 3
(9) NIST 800–53, https://csrc.nist.gov/Projects/Risk-Management
(10) SEC, 2023, par. 78.
(11) https://www.hhs.gov/sites/default/files/cybersecurity-maturity-model.pdf
(12) NIST, “NIST SP 800–53 Rev. 5 Security and Privacy Controls for Information Systems and Organizations”, https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final
(13) The NIST Control Catalog and Control Baselines in Spreadsheet Format: https://csrc.nist.gov/CSRC/media/Publications/sp/800-53/rev-5/final/documents/sp800-53r5-control-catalog.xlsx
(14) See Zetter, Kim, “The Untold Story of the Boldest Supply-Chain Hack Ever”, Wired, 2023, https://www.wired.com/story/the-untold-story-of-solarwinds-the-boldest-supply-chain-hack-ever/
(15) SEC, 2023, par. 17.
(16) SEC, 2023, par. 54.
(17) Arluke, Arnold, Regarding Animals, Temple University Press, 2010
(18) Sloan, Rob, “How Much Cybersecurity Expertise Do Boards Really Have?”, https://www.wsj.com/articles/how-much-cybersecurity-expertise-do-boards-really-have-69f5cb0a [Last accessed: 2. December 2023]
(19) Inexpert Supervision: Field Evidence on Boards Oversight of Cybersecurity
(20) Lowry, Michelle; Vance, Anthony; Vance, Marshall, “Inexpert Supervision: Field Evidence on Boards’ Oversight of Cybersecurity”, 2021.
(21) https://s1.nordcdn.com/nord/misc/0.78.0/nordpass/top-200-2023/200-most-common-passwords-en.pdf

(21b) It may be an ill omen that the SEC’s own Twitter account was compromised on 10 January 2024. The Guardian, “SEC says ‘compromised’ account to blame for tweet approving Bitcoin ETF”, Wed 10 Jan 2024. https://www.theguardian.com/technology/2024/jan/09/sec-twitter-account-hacked-bitcoin-etf-not-approved [Last accessed: 10. January 2024]
(22) Such as Peiter Zatko, the former CISO of Twitter. NPR, “Here’s why the Twitter whistleblower’s testimony to Congress will be crucial”, September 12, 2022, https://www.npr.org/2022/09/12/1122441128/frances-haugen-facebook-meta-twitter-whistleblower-mudge [Last accessed: 8. December 2023]
(23) https://www.darkreading.com/cybersecurity-operations/microsoft-is-getting-new-ciso-in-new-year [Last accessed 10. December 2023]
(24) Ware, L. The Evolution of the Chief Security Officer. CIO Magazine, Feb. 23, 2017. http://www.csoonline.com/csoresearch/report35.html [Last accessed: 9. December 2023]
(25) Cox Jr., L. A., “What’s Wrong with Risk Matrices?”, Risk Analysis 28, no. 2 (2008): 497–512.
(26) Hubbard, Douglas W.; Seiersen, Richard. How to Measure Anything in Cybersecurity Risk, Wiley, 2022.
(27) Allen, Brian; Bapst, Brandon, Building a Cyber Risk Management Program: Evolving Security for the Digital Age, O’Reilly, Boston, 2024.
(28) Coglianese, Cary; Ellis, Jim. “Achieving Regulatory Excellence”, Brookings Institution Press, 2017, p. x.
(29) Coglianese, Cary (ed.), “Regulatory Breakdown: The Crisis of Confidence in U.S. Regulation”, University of Pennsylvania Press, 2012.
(30) NIST, “NIST Special Publication 800–53, Revision 5, Security and Privacy Controls for Information Systems and Organizations”, AC-3(7), 2022
(31) Hubbard, Douglas W., The Failure of Risk Management: Why It’s Broken and How to Fix It (1st edition), Wiley, 2009; Hubbard, Douglas W.; Seiersen, Richard, How to Measure Anything in Cybersecurity Risk (2nd edition), Wiley, 2023.
(32) https://www.sec.gov/rule-release/33-8350
(33) NACD, “2023 Director’s handbook on cyber-risk oversight”, 2023
(34) NACD, “2023 Director’s handbook on cyber-risk oversight”, 2023
(35) Madnick, Stuart E., “The Continued Threat to Personal Data: Key Factors Behind the 2023 Increase”, Apple, December 2023.

Dr. Sybe Izaak Rispens

PhD on the foundations of AI, ISO27001 certified IT-Security expert. Information Security Officer at Trade Republic Bank GmbH, Berlin. Views are my own.