The Value of People in Cybersecurity
The destructive nature of Taylorism in Cybersecurity management
Critical infrastructures are a foundation of modern society. They ensure an uninterrupted supply of energy, food, healthcare, money, and other vital resources. Cyber security aims to keep the IT systems powering these essential services safe. Yet, despite its critical role in ensuring the stability of a nation, cyber security management practices are still built upon a 19th-century mindset of bureaucracies and organizational effectiveness. Such aging foundations are crumbling under the heavier process demands of modern society. With each crack, a new set of attack vectors emerges.
The effort to address cyber security vulnerabilities drives a steep upward spending trend. According to 2022 estimates, global annual spending on security-related products and services has reached US$146 billion.¹ Yet despite this increase in cybersecurity spending, data breaches continue to multiply. According to the 2022 Cyberthreat Defense Report by CyberEdge Group, more than 80% of all surveyed organizations experienced a successful cyberattack in the past year. Furthermore, one in three survey respondents fell victim to six or more successful attacks in that period.² If the aviation industry operated at this failure rate, airplanes would be falling out of the sky every day.
Cybercrime is projected to cost the world US$10.5 trillion annually by 2025, a 300% increase compared to a decade ago.³ The World Economic Forum already rates cyber security as the second biggest global risk after climate change.⁴
Why are we so bad at managing cyber security? The primary issue is that we think about IT security management through an engineering lens over a century old.
Performance of organizations
A standard KPI mantra tracks the performance of most cyber security programs: know your inventory, know your access and identities, know your third parties, know your encryption, know your vulnerabilities, know your endpoints, patch your software, and train your people. However, one aspect is notably absent from this KPI set: culture, the implicit shared assumptions within a group of people that form the foundation of collective problem-solving strategies.
Most cyber security metrics don’t measure “culture” or the impact of any human element. This is odd because culture determines the success or failure of teams, departments, and even entire organizations. The criticality of culture became painfully evident during the late ’80s and early ’90s when three-quarters of large corporate mergers and restructuring projects failed. Even today, a poor cultural fit and lack of trust are the most decisive factors for merger failures.⁵
The costly failures of mergers taught management theorists the importance of culture in business success. Slogans like Culture Eats Strategy for Breakfast drive the point home. No matter how meticulously planned an organization’s strategy is, it will fail if the people executing it do not nurture an appropriate culture for reaching its goals. The same principle applies to cyber security strategies. Even the best security technology and policies will eventually fail if your cyber security strategy isn’t integrated with organizational culture.
Productive workplace culture
Seemingly little of the almost half-century of knowledge about planning, implementing, and managing an efficient and productive workplace culture has been adapted to the security domain. If the subject gets any attention, it’s because of hard-working individuals desperately advocating for change. In the young corpus of cyber security literature, most of the focus is on technical issues like encryption, networks, access control, protocols, and so on. Discussions about culture are either absent or redefined in technical terms, such as the context for incorporating controls into additional safety measures. For example, in his legendary “Security Engineering,” Ross Anderson notes there’s a “socio-technical context of technology.” However, his acknowledgment of a “risk culture” and the challenges of getting its subtleties right is confined to a single sentence.⁶ Practitioners in the security industry continue to ignore the cultural lessons from mergers and acquisitions and even seem to avoid the topic altogether. Instead, most current thinking about security focuses on adopting standards and on the management team’s responsibility for enforcing its expectations.
For many years, the classical workforce structure largely suppressed these issues, but the pandemic suddenly forced them to the surface, triggering a rapid rise in workforce surveillance software. Not only did employees suddenly become evaluated with “productivity scores,” but they also started getting monitored for “unsafe behavior,” which included falling victim to cyber crimes. Even with more people regularly working in the office again, eight out of ten U.S. organizations still use software to track individual worker productivity, and some of these solutions also offer “security monitoring.”⁷ Organizations that don’t monitor their workers with corporate spyware attempt to enforce exemplary cybersecurity practices by other means: either through a definition of “good security” decided by upper management or through security frameworks like ISO. Yet, in practice, enforcing requirements often results in a security posture that seems exemplary on the surface but, in reality, rests on a hollow foundation. At best, levels of compliance are raised without improving system security. At worst, it leads to a long list of conflicts: between management and engineers, engineers and security teams, superiors and employees, first and second lines of defense, organizations and regulators, and so on.
Iron fist
It’s no wonder many Chief Information Security Officers (CISOs) feel compelled to rule over a company’s cybersecurity standards with an iron fist, using tactics like humiliation, micromanagement, and the fear of failure to improve organizational security. This mindset is driven by the pressure to produce effective short-term results amid emerging critical threats and ever-increasing gaps in an organization’s cyber defenses. As a result, there is little interest in analyzing systemic security issues, questioning the usefulness of data produced by new analytic tools, or developing strategies that yield long-term results. This short-termism influences ongoing investments in additional analysis tools (such as junkware and spyware), resulting in a continued reliance on KPIs that are more precisely wrong than approximately right. This deteriorating efficiency profile can only be remedied by replacing the cyber security management engine driving its decline.
In cyber security, “management” is usually synonymous with “scientific management.” A common opinion amongst cybersecurity professionals is that the pathway towards a good state of information, network, and operational security is paved with suitable algorithms, data solutions, and optimal workflows.
A machinist-turned-manager-engineer
The roots of scientific management go back to the nineteenth century. Frederick Winslow Taylor (1856–1915), an American machinist-turned-manager-engineer,⁸ formulated this management model based on his observations in the steel industry. Taylor postulated that operational waste could be eliminated through the standardization of best practices, or rather, the enforcement of managerial preferences. Taylor’s ideas on industrial efficiency shaped most of the manufacturing industry in the 20th century.⁹ In his main book, “The Principles of Scientific Management,” published in 1911 and now seen as one of the most influential management books of the 20th century, Taylor proposed that managers should become “scientific” to eliminate inefficiencies in the workplace.¹⁰ Management was tasked with finding the “laws [of workers] as exact and clearly defined as the fundamental principles of engineering.”¹¹ By setting workflow efficiency as the ultimate north star metric, the effective governance of an organization became a technical problem best solved scientifically. Moreover, this practice made any task open to the clinical optimization efforts of the management team.
For example, suppose one artisan worker could make fifty light bulbs in a day and another only twenty. A manager could then determine a universal optimal workflow by dissecting and quantifying each step of the manual production process. In other words, the objective rule of scientific analysis would replace the expertise and experience of the foreman. Taylor’s “enlightened despotism” (an authoritarian style of leadership, or even a kind of Freudian paternalism) proposed that corporate governance should be free from disputes over values, because a management elite should make decisions based on scientific calculations of economic rationality. Employees were turned into management objects, obliged to obey and perform specific roles: cogs in a great corporate machine, greased and tuned by clockwork managers.
Dark anthropology
The anthropological assumptions in Taylorism are deeply pessimistic. Workers are perceived as inherently slow, stubborn, stupid, and selfish. They are filled with evil impulses, driven by greed, envy, and anger — the personification of ignorance, incompetence, and incapability combined. In Taylor’s own words: “a man suited to handling pig iron is too stupid properly to train himself”.¹² To make workers understand even the most basic scientific standards, knowledge was chopped into pieces small enough for children to understand and then hammered into workers relentlessly. According to Taylor, the quality of any production workflow involving humans would eventually trend downwards unless workers are continuously barked at.
Such anthropological assumptions still influence current cyber security thinking. For example, users of information systems are routinely regarded as the weakest links in a cybersecurity program. The term “human factor” is often used as a subdued reference to “human stupidity.” The assumption is that humans dislike and even continuously rebel against best cybersecurity practices, like installing software updates on time. To fight this natural tendency, users require constant supervision and monitoring by real-time security systems. In Taylorist thinking, employees are still considered flawed machines: working animals with limited intellectual capacity, easily steered by simple rewards and punishments.
Let’s now see how these 19th-century anthropological assumptions materialize in two essential components of cyber security management: training and awareness, and incident management.
Training and Awareness
If humans are the weakest components of a cybersecurity machine, what implications does this have for training and awareness programs?
Though training solutions might seem modernized, their development stems from a conviction that users cannot be trusted. As a result, many training solutions include gamification to “trick” users into completing training material. Some also include AI solutions capable of detecting moods to quantify comprehension levels.
What does training material against “phishing” look like? Phishing is a common cyber threat in which victims are lured into clicking links, for example in email messages, leading to malicious websites that steal sensitive data. Email as the initial entry point accounts for more than half of all successful ransomware infections.¹³
Trust but verify
Does the training to prevent this attack cater to a competent audience? Does it involve informative explanations of how to recognize, prevent and mitigate these threats? Or is it more of a sermon in which the Russian proverb “Doveryai, no proveryai” seems to be the guiding principle?
The proverb, which translates to “trust but verify,” was popularized in the 1980s by then United States president Ronald Reagan. The cyber community adopted it around 2000, ignoring the paradox that any trust requiring verification is, in fact, no trust at all. This is probably why Taylorian cyber security leadership eagerly embraced it. It helped hammer home the message that everyone must abide by corporate security policies and follow security procedures. Today, one can still find evidence of a Trust but Verify mentality throughout the cyber security domain.
The National Institute of Standards and Technology (NIST), for example, incorporates it implicitly in its 2020 publication “Workforce Framework for Cybersecurity.”¹⁴ The document follows a strictly formal approach to workforce cybersecurity, linking tasks (which in a phishing context would be “avoiding clicking on malicious links”) to knowledge (“what are phishing attacks?”) and skills (“how to recognize a phishing attempt”). Following the framework proposed by NIST, the design of a typical phishing training and awareness program may start with defining workforce competencies and mechanisms for observing and measuring learning progress. The actual training material could be delivered through conventional methods or through modern forms of training, such as game-based training or context-based micro-trainings.¹⁵ Testing may involve sending staff simulated phishing attacks to their email, LinkedIn, Twitter, or other social media inboxes and tracking responses.¹⁶ Failing such a test typically results in punishment, such as additional mandatory training, or firmer consequences, like negative performance reviews.
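To make the mechanics concrete, here is a minimal sketch (in Python) of how such a program is often wired together. The record structure, field names, and follow-up rules are illustrative assumptions of mine, not something prescribed by NIST SP 800-181.

```python
from dataclasses import dataclass


@dataclass
class PhishingCompetency:
    """A task-knowledge-skill triple in the spirit of the workforce framework."""
    task: str        # e.g. "avoid clicking on malicious links"
    knowledge: str   # e.g. "what are phishing attacks?"
    skill: str       # e.g. "how to recognize a phishing attempt"


@dataclass
class SimulationResult:
    """Outcome of one simulated phishing email sent to one employee."""
    employee_id: str
    clicked_link: bool
    reported_email: bool


def follow_up(result: SimulationResult) -> str:
    """The typical carrot-and-stick follow-up logic described above."""
    if result.clicked_link:
        return "assign additional mandatory training"  # or firmer consequences
    if result.reported_email:
        return "no action required"                    # the desired behavior
    return "include in the next awareness campaign"


print(follow_up(SimulationResult("emp-042", clicked_link=True, reported_email=False)))
```

Notice how naturally the punitive branch writes itself: the Taylorist assumptions are baked into the data model before a single simulated email is sent.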
19th-century Taylorism against 21st-century cyber attackers
Most current anti-phishing strategies in both industry and research follow a training model similar to NIST’s recommended process: create detailed recipient profiles and behavioral models of phishing victims,¹⁷ implement solutions that flag suspicious messages to the user,¹⁸ and introduce new intervention strategies for reducing individual phishing susceptibility.¹⁹ Will all of this be sufficient to effectively reduce the impact of one of the most prevalent security threats to contemporary organizations? I doubt it. This is, ultimately, 19th-century Taylorism contending with 21st-century cyber attackers. Cyber attackers are creative, agile, and unscrupulous, and, in some cases, well funded. They’re constantly sharpening their skills and can turn complex, state-of-the-art attack techniques into automated attack software at a frighteningly fast pace.
As part of a defensive strategy, natural language processing and AI-based tools may improve the efficiency and effectiveness of phishing attack identification, but their accuracy is far from perfect. It’s much easier for attackers to circumvent such detection solutions than it is to establish a reliable defense that doesn’t generate too many false positives. The fundamental point is this: more models and better technical tools may lead to incremental improvements, but as long as the old anthropological assumptions about users remain, any improvements will be merely cosmetic. Instead, we need to establish a training environment that doesn’t encourage blind procedural obedience or behavioral modifications dependent on barking managers and invasive algorithms.
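To see why the detection tooling mentioned above is brittle, consider a deliberately naive sketch of a keyword-based phishing filter. The term list and scoring are illustrative assumptions, not any vendor’s actual detection logic, but the failure modes they expose are the real ones: a routine IT notice trips the filter, while a lightly reworded lure sails past it.

```python
# A deliberately naive phishing scorer; the term list and scoring are illustrative.
SUSPICIOUS_TERMS = ("verify your account", "urgent", "password", "click here")


def phishing_score(message: str) -> float:
    """Return the fraction of suspicious terms present in the message."""
    text = message.lower()
    return sum(term in text for term in SUSPICIOUS_TERMS) / len(SUSPICIOUS_TERMS)


# False positive: a legitimate IT announcement scores as maximally suspicious.
print(phishing_score("Urgent: click here to verify your account password"))    # 1.0

# False negative: a reworded lure avoids every keyword and scores zero.
print(phishing_score("Kindly confirm your credentials via the portal below"))  # 0.0
```

Real products use far richer models than this, but the asymmetry is the same: attackers only need to step around the features the model knows about, while defenders pay for every false alarm.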
To effectively reduce the risk of phishing attacks, we need to develop a training environment that encourages curiosity amongst users. The type of curiosity that leads to interesting questions about the complete scope of phishing attacks. Questions about:
- The cybercriminals: “How come the number of unique phishing sites detected worldwide has gone up fourfold in just two quarters?”²⁰
- Their strategies: “To increase success rates, why don’t cybercriminals pay native speakers to write their phishing emails?”
- Their technological sophistication: “Does this payload really create a mini virtual machine on an iPhone!?”
- The broader context of phishing attacks: “How do attackers apply capitalist market principles of efficiency?”
Why do we need such users? Because curious users have the necessary growth mindset to cope with phishing attackers’ innovative and dynamic strategies. Cultivating these mindsets shifts the focus of cybersecurity from processes and systems to people, people with the capacity to improve an entire organization’s resilience to phishing attacks. Can you imagine the incident response improvements in such a workplace? Organizations would address each phishing attempt within minutes through coordinated responses, not within days or months.
Happy people
A business with employees genuinely enthusiastic about cybersecurity seems more like a CISO’s fantasy than a feasible reality, right? But that’s because we are conditioned to view behavioral improvement through a Taylorist lens. Antiquated ideas should no longer determine how we think about the ways people learn, behave, solve problems, and grow. A workplace entirely reliant on algorithms to achieve best cybersecurity practices strips employees of their autonomy and dignity, making them feel like the senseless archetypes that Taylorism made of them.
An overwhelming amount of recent scientific evidence demonstrates that curious, happy people work faster and perform better.²¹ In addition, interested individuals at work are more intrigued than frustrated when trying to understand, appreciate, and extract the unique value of new situations, like cyber-attacks. They are also more flexible in dealing with complex problems such as phishing attacks and ransomware.²²
Curiosity improvement program
A few years ago, I was involved in a curiosity improvement program run by the German chemical and pharmaceutical giant Merck.²³ Merck identified that to deliver the research quality necessary to address global healthcare challenges (like pandemics, or diseases such as cancer, Alzheimer’s, and diabetes), all 85,000 employees in over 66 countries would need to start questioning the status quo and thinking beyond the limits of their own areas of expertise. My insights from this project are fourfold, and I find them all relevant to the cyber security domain.
Firstly, for curiosity to become the guiding principle for action, it is essential to shift the corporate culture away from nineteenth-century assumptions about the human condition toward modern conceptions of people. In some organizations, this shift will require courage, boldness, and tenacity, not just among senior management but also in classical engineering domains that tend to be blinkered by Taylorist, data-driven approaches ironically perceived as “modern strategies.” This shift is not a cosmetic change to existing rules and processes; it’s about radically embracing a continuous learning paradigm as a foundational strategy. It includes moving away from a testosterone-driven “know-it-all” culture to an oxytocin-driven “learn-it-all” culture. There are probably multiple ways to approach this, but in my opinion, positive psychology is a good starting point. According to studies about what makes life most worthwhile, curious people differ from their non-curious counterparts in terms of job satisfaction, work engagement, commitment, readiness for change, learning agility, and idea generation.²⁴
What does this mean for cyber security? Since many organizations root cyber programs, anti-phishing initiatives, and software solutions in outdated thinking about people, there is little data on this, and there may not be a definite answer. But a solution could involve the following approach.
Before launching your next phishing awareness campaign, reflect on the larger organizational context of such an action. Is your organization managed by some form of Taylorism? It can take effort to answer this question honestly because modern Taylorism is often well disguised.²⁵ For example, state-of-the-art behavioral software tools that quantify employee phishing susceptibility or monitor worker behavior are often built upon the same negative anthropological assumptions you seek to root out of your organization.
Does your workplace show signs of a culture of fear? This is one of those organizational questions that nobody likes to address. If your employees have become accustomed to being persuaded or coerced into desired behavior by management or algorithms, nobody will want to answer it honestly. Yet it’s essential to understand workplace perceptions on the floor, because they will determine how your campaign and its metrics fit into the broader operations of your organization. One approach to planning your next phishing awareness campaign is to start with a cultural assessment of your organization. Look out for signs of a fear culture. Ask questions like:
- Are all incentives based on results and outcomes only?
- Are any top leaders showing signs of fear-inducing behavior, like passive aggressiveness?
- Is fear-inducing behavior confronted?
- Is “corporate censorship” the norm, with “pre-meetings” where new ideas are “cleaned up” and aligned to what top management wants to hear before being presented?
- Is communication multi-directional, or does it only flow one way, from the top down?
- Are abrupt leadership changes to strategies, priorities, or goals causing hesitancy in proactive action?
Answers to such questions are essential to an effective cyber security awareness campaign. One cannot solve the critical issue of phishing attacks by implementing a Stalinist workplace culture powered by state-of-the-art spyware.
There is another rather practical aspect to be taken into account here. On the one hand, a carrot-and-stick cyber training program is straightforward to plan, set up, and monitor. It will generate metrics with which most leadership teams are familiar.
Curiosity, on the other hand, is far more intangible. Its scope is much broader, and the outcomes of curiosity-improving actions are harder to pin down. Can curiosity levels even be measured? A training and awareness program based on interest and curiosity instead of warnings and fear-mongering isn’t planned and steered as easily as classical awareness-raising programs. But if you genuinely want to develop a workplace where an active, questioning mindset is encouraged, the status quo can be challenged, and employees proactively develop new cyber skills, then curiosity needs to become a primary metric in every cyber training program. Initiating this change requires thinking and operating outside corporate silos. Curiosity-improving awareness campaigns should, ideally, involve cross-domain and cross-sector planning. This may mean involving domains usually unrelated to (and often uninterested in) cyber security, such as people support, recruiting, marketing, or corporate strategy.
In most organizations, the problem space and allocated resources of a “security awareness campaign” are too narrow to address such cultural issues. But for awareness campaigns to have any potency, they need to be ingrained in a corporate culture that encourages coordinated problem-solving through meaningful human relationships.
Two forms of curiosity
The second lesson from my work on the Merck project is that curiosity comes in two forms. One is straightforward; the other is slightly harder to convey. The first form is primarily intellectual. Questions like “How to end cardiovascular diseases?”, “How to make lung cancers less lethal?”, and “How to avoid the next pandemic?” appeal to a curious mindset because they resonate with the hard problems faced by humankind.
The second form of curiosity is more practical. Lab technicians and engineers who make new products or services display this curiosity. It’s a “quieter” driver because there are seemingly no big questions, and the problems influencing this form of interest are often too small to identify if you’re not directly involved in the subject.
Resources for a curious mind
Communicating this type of curiosity requires tremendous effort. You could achieve it by broadening the scientific context of the problem or by describing past unsuccessful attempts at solving it. Engineers, it seems, are often unaware of both. Though the scientific context is the bedrock of all engineering education, it’s typically taken for granted. Also, engineers usually have limited awareness of history, which is unfortunate since history is rich with resources for a curious mind. Questions like “How was this solved previously?”, “Why wasn’t this problem approached differently before?”, and “What were the milestones leading to the breakthrough discovery?” force engineers to broaden the problem context, inspiring solutions for future innovations.
Given that it’s more challenging to convey this practical type of curiosity, which applies to most cyber topics, it may be a tough sell to use “curiosity” as a driver for raising levels of awareness. This is mainly because, in the cyber domain, most problems involve a system’s invisible, non-functional aspects: the small steps leading to increased system reliability, improved levels of security, or better compliance. At first glance, this may all be of limited interest to a curious mindset. So, to create a curiosity-driven awareness campaign on security-related topics, you will need people with strong communication skills and sound technical backgrounds. These people must be capable of diving deep into technical problems and have the knowledge and skillset to direct the progress and unearth the beauty of non-functional problem-solving activities.
Multi-year effort
My third learning from the curiosity improvement program is that it needs to be planned and set up as a multi-year, cross-domain effort. You may even need to reach out to people outside your organization. Its development shouldn’t just focus on creating content for curious minds and the processes necessary for embedding curiosity as a strategic factor in an organization. For instance, new methods for measuring staff curiosity levels had to be developed from scratch in the Merck project. The required academic research was organized as a multi-year research project led by Dr. Todd Kashdan, a psychology professor at George Mason University.²⁶ The measurement involved surveying thousands of people within and outside the organization about four workplace factors.
I’d like to mention these factors here because I believe they can be helpful for creating ideas for alternative metrics that measure (the absence of) cyber security competencies.
(1) Deprivation Sensitivity — Recognition of a gap in knowledge and the subsequent pondering of abstract or complex ideas for reducing this gap. Curious people experience a sense of tremendous relief when such an issue is solved.
(2) Joyous Exploration — Gaining great pleasure from recognizing and seeking out new knowledge and information at work.
(3) Distress Tolerance — Willingness to embrace the anxiety and discomfort from exploring new, unfamiliar, and uncertain situations.
(4) Openness to People’s Ideas — Valuing the ideas and perspectives of others and intentionally aiming for a diversity of approaches.²⁷
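For readers who want to experiment with such alternative metrics, here is a minimal sketch of turning per-employee survey answers into scores along these four dimensions. The survey items and the 1-to-5 agreement scale are illustrative assumptions of mine, not Kashdan’s validated instrument.

```python
from statistics import mean

# Hypothetical survey items grouped by curiosity dimension (illustrative only).
DIMENSION_ITEMS = {
    "deprivation_sensitivity": ["I keep working on a problem until the knowledge gap is closed"],
    "joyous_exploration": ["I enjoy digging into unfamiliar security topics"],
    "distress_tolerance": ["Uncertain, unfamiliar situations at work energize me"],
    "openness_to_ideas": ["I deliberately seek out perspectives that differ from my own"],
}


def dimension_scores(responses: dict) -> dict:
    """Average the 1-to-5 ratings belonging to each curiosity dimension."""
    return {
        dimension: mean(responses[item] for item in items)
        for dimension, items in DIMENSION_ITEMS.items()
    }


# One respondent's hypothetical ratings on a 1-to-5 agreement scale.
answers = {item: 4 for items in DIMENSION_ITEMS.values() for item in items}
print(dimension_scores(answers))
```

Tracked over time and across teams, scores like these gauge what employees are capable of becoming rather than how often they fail, which is the kind of shift in metrics this piece argues for.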
The success of initiatives that cultivate curiosity is often noticeable and sometimes even becomes measurable outside the project’s original scope. For example, because of its curiosity campaign, Merck significantly improved its reputation amongst job seekers. As a result, the quantity and quality of the company’s job applicants have improved dramatically in recent years, far above industry standards.²⁸ Since the global cyber workforce needs an influx of around three million professionals to defend critical global assets effectively, and given how difficult it already is to find capable cybersecurity specialists, the cyber industry’s urgent need for curiosity-driven campaigns is evident.²⁹
Curiosity is hard to fake
My fourth learning from the curiosity improvement program is that curiosity is hard to fake. You can fabricate “cultural alignment” with industry-agnostic buzzwords like “innovation”, “disruption”, “creativity”, and so on. But genuine curiosity requires an arsenal of good people, inspiring ideas, excellent technical competence, and a knack for mind-boggling details. As a side effect, natural curiosity as an organizational driver may lead to metrics that focus on measuring employee capabilities instead of measuring employee failures.
Incident management
If you want to find evidence of anthropological assumptions in an organization’s culture, pay careful attention to its incident handling processes. Incidents squeeze organizations, forcing the underlying mechanisms driving desired employee actions to the surface. Incidents lead to large losses, like the spectacular $81m Bangladesh Bank heist of 2016.³⁰ Incidents lead to data breaches, like the recent Shields Health Care Group breach that exposed the sensitive health records of tens of thousands of patients.³¹ And incidents lead to mass disruptions, like Microsoft, Amazon, and Google periodically suffering from global cloud service outages.³² How do organizations handle such incidents with regard to the treatment of the employees involved? Many organizations regard the human factor as a problem to be solved in good Taylorian fashion: if you can control the human factor, you can suppress the problematic and malicious outcomes it drives.
In a recent series of publications from the German Federal Office for Information Security entitled “the human factor,” there’s the usual lip service to the statement that “we need to treat people as part of the solution and not the problem.” But what follows is, by and large, a Taylorian view of the human condition and how to control it. One case study describes how a service engineer unwittingly infects corporate systems with malware because he (as is to be expected!) uses company hardware for private enjoyment, like listening to music.³³ The German regulator also considers “curiosity” a hostile factor because it may drive clicks on malicious hyperlinks. With curiosity cast as the enemy of good cybersecurity, strategies for improving security hygiene primarily focus on imposing interventions designed to constrain, control, and eradicate an employee’s propensity to compromise cyber security.³⁴
Overall, organizations seeking to steer their workforce into compliance with procedures and processes, or trying to change unwanted behavior, often resort to fear, retraining, and shaming. This is especially the case if there is little or no improvement in controls and “human failure” remains a primary cybersecurity issue year after year. Examples of poor employee actions driving such a metric include consistently poor password management, failing to comply with even rudimentary data recovery mechanisms, or acting against security policies. In a Taylorian worldview, such behavior is assumed to be driven by laziness, stupidity, self-interest, or some other combination of flawed human propensities and, therefore, must be addressed sternly.³⁵
Witch hunt
The use of Taylorian tactics against employees involved in cyber incidents is now so commonplace that we’ve grown accustomed to such responses and even anticipate them.³⁶ A 2019 survey found that almost half of all organizations punish their employees for cyber security incidents.³⁷ Sanctions against employees who do not comply with rules include informing line managers, decreasing access privileges, locking computers until appropriate training has been completed, and naming and shaming the guilty parties.³⁸ A machinist-CISO knows how to conduct a witch hunt, complete with yelling, complaining, and publicly exposing guilty employees as examples of cyber stupidity.
It’s easy to blame others when things go wrong, but is it also effective? Having identified the people involved in an incident, most organizations use five strategies.
Goal Setting
The first is combining the fear of negative consequences with goal-setting.³⁹ Goal setting, in this context, involves the development of an “action plan” designed to “guide” a person or group toward a specific cyber security goal in good Taylorian style. Employees are held accountable for reaching the desired goal within a strict deadline. Repercussions follow if this deadline isn’t met or if there are discrepancies between the desired goal and the final state of progress.
Exclusion
The second strategy is exclusion. Exclusion may operate at the level of individual employees, who are barred from certain privileges, such as system access or social groups. It may also operate at a systemic level, excluding a group of workers, or all humans, from systems as much as possible. The motivation behind such technological restrictions is to automate anything and everything in order to reduce the human error associated with incidents.⁴⁰ Indeed, minimizing the human factor in processes is sometimes an effective security strategy. But an overreliance on automation solutions is likely to exacerbate the very human errors these expensive systems are designed to eradicate.
By pushing humans further away from modern solutions, the “mental model gap” (the difference between what people think a system does and how it actually works) widens. The result is a self-reinforcing situation: more automation leads to more marginalization of operators, which leads to less understanding of how systems work, which leads to more human error and, ultimately, more severe security incidents.⁴¹ The global outage of all of Facebook’s apps in 2021 demonstrated the detrimental effects of this phenomenon. A routine maintenance task led to a command that unintentionally took down all connections of the company’s backbone network.⁴² This outage was unanticipated by the system engineers who designed Facebook’s automated maintenance processes. So what caused the knowledge gap? The likely answer is that some solution architect based the automation tools on incorrect assumptions about system dependencies. In this instance, reducing human involvement didn’t minimize system errors; on the contrary, it exacerbated them. So, removing responsibility from the human actor and not permitting users to be part of the solution does not automatically reduce error, except, perhaps, for bulk operations or the most trivial tasks.⁴³
Enforce policies
The third strategy to control and constrain human error is to enforce policies. Policies are hierarchical, top-down, prescriptive, and binding instructions to ensure the human component in organizations behaves securely. This is what regulators prefer and what organizations are held accountable for. Such regulations can create a culture of “checkbox security,” in which good cybersecurity is measured by how well an organization aligns with the minimal security expectations of auditors. Of course, nobody in cyber security wants “checkbox security,” not even machinist-CISOs. Yet it is surprising how little thought is spent on the organizational transformation required to avoid it.
Standards bodies, such as the National Institute of Standards and Technology (NIST) and the International Organization for Standardization (ISO), have been massively influenced by Taylorism, especially between the 1960s and the 1980s. For example, the ISO 27000 series was based partly on an information security policy manual developed by the Royal Dutch/Shell Group in the late 1980s and early 1990s, and it inherited the Taylorism ingrained in the large corporate structures of the time. In addition, the Big Four consulting organizations audit against these standards. Thus, Taylorism is firmly institutionalized in the cyber security domain.
I’m not arguing against policies here, but I think it is good to be mindful of how they are implemented. Enforcing rules in a command-and-control manner results in declining morale and the repetition, not the eradication, of the same errors.⁴⁴
Education and training
The fourth strategy is education and training. Governing bodies usually make this a mandatory component of effective security management, yet it is most commonly delivered in good Taylorist style. Imagine the potential positive impact on an organization’s security posture if awareness training included a mandatory component for the management team addressing the detrimental effects of public shaming, ruling by fear, an inability to work through disagreements, and demanding employee time at will (such as habitually calling employees outside of work hours).
Even as just a fun thought experiment, it’s clear that such a training component would have a significant positive impact on an organization’s learning culture.
Root Cause Analysis
The fifth strategy to control and constrain human error is to use root cause analyses. In many cases (and most standard frameworks make this a mandatory requirement), such an analysis is carried out after an incident to identify the person responsible for, or the main reason behind, the security event.⁴⁵
There’s so much Taylorism embedded in root cause analyses that I hardly know where to start.
The binary approach to causality is the most significant Taylorian influence. If a cyber risk materializes as an incident, there must, so the assumption goes, be a linear and sequential path back to the one cause that triggered it. Direct alignment is the only acceptable relationship between the events leading up to the final incident, like a perfectly stacked row of dominoes tipping over. According to this theory, if event A triggers event B, event B will not occur if event A does not happen. This myopic understanding of causality leads to simplistic views of how modern, complex IT systems work. For instance, the selection of an “initiating event” or “root cause” is shaped by multiple factors that, in most cases, have little to no relevance to the problem. The event labeled as “initiating” is often just the first point in the linear trajectory where it’s assumed corrective action should have begun. As a NASA policy states, the “root cause” is “the first causal action or failure that could have been controlled systematically either by policy/practice/procedure or individual adherence to policy/practice/procedure.”⁴⁶
Hindsight bias is also a factor in the selection of root causes. Knowing the outcome of a chain of events often leads to the belief that there was a single trigger event and that this trigger can be identified with a high degree of certainty.⁴⁷ Under hindsight bias, if the series of steps between the initial trigger and the final outcome doesn’t follow what’s regarded as the most efficient process trajectory, the additional steps stand out as “mistakes” or “errors,” even if the reasons behind their inclusion were perfectly rational.
Other issues are assigned as “root causes” simply because doing so avoids embarrassment. The real reasons (impossible time constraints, systemic issues with working hours, or the debilitating stress caused by some managers’ barked orders) are hidden behind more socially acceptable answers. Another reason the actual root causes remain hidden can be political pressure in the workplace. If, for example, it’s culturally inappropriate to question upper management’s strategic or technical decisions, their involvement is either completely removed from consideration or barely investigated as a potential cause of a cyber incident.
The adoption of the “five whys” method within root cause analysis indicates how deeply Taylorism is ingrained in the cyber domain. The idea of asking five “why” questions to get to a root cause was initially developed by Sakichi Toyoda as a tool for understanding why new product features or manufacturing techniques were needed. This means the orientation of time in the successive why questions was, in its original intention, directed towards the future, because it referred to planned events. Reversing the time orientation in this model is only plausible if you firmly believe, as Taylorism does, in a direct and linear type of causality. With such a simplistic version of causality, it’s difficult, if not impossible, to reflect the multitude of interdependencies in modern, complex IT systems, including their socio-technical dimensions and nonlinear causal relationships.
Human Failure
Another Taylorian fallacy in root cause analyses is that the leading cause identified at the root of most incidents is “human failure.” This is the “anthropological bias” of Taylorism.
Since Herbert William Heinrich wrote in the 1930s that “man’s failure” is the root of around 90% of all industrial incidents, this statistic has been mindlessly repeated and reproduced everywhere.⁴⁸ The tradition continues to this day. For example, the World Economic Forum stated in 2020 that 95% of cybersecurity issues could be traced to human errors like the use of weak passwords, an inability to identify phishing scams, and a general “lack of understanding.”⁴⁹ The consequence of such figures is the belief that improving cyber security needs to begin with human error; once that is addressed, all the other components will naturally fall into line.
Yet almost all existing statistics identifying “human failure” as the “root cause” of cyber incidents are skewed by anthropological bias. This bias makes human error socially, strategically, politically, and ideologically acceptable as the “root cause” of every cyber problem. Jens Rasmussen, one of the leading system safety and human factors researchers, once suggested that people are often identified as root causes because most incident analysts find it hard to identify the underlying, systemic causes of human error.⁵⁰ We need to add something here: this is not because the analysts are failing, i.e., because they cannot “see through the human error,” but because of the anthropological bias clouding their reasoning. Blaming humans is part of our culture, and it prevents further analysis of the factors that precede and contribute to human error.
As long as cyber security strategies focus on human beings’ mental, moral, and anthropological deficiencies, the only conceivable remedy will be more supervision, more governance structures, and more automated monitoring systems. We will continue to need “zero-tolerance” policies, definitions of “red-flag activities,” and programs that “motivate” and “guide” people toward more secure behavior.
Beyond Taylor’s tracks
It has been more than 70 years since Peter Drucker wrote his “Practice of Management,” in which he stated that the human resource is one of the most valuable and the least efficiently used. He stressed the value of “knowledge workers” and added some of the first considerations about the implications of automation. Of course, the technology available at the time was just the beginning of our modern computer era (the first experiments with “electronic brains” had only just started in the 1940s and 1950s and involved room-sized, card-based systems). Yet many of Drucker’s observations and considerations are still valuable today.
For example, he thought that automation, if applied correctly (to enable rather than to monitor), would not reduce but increase the value of people. He foresaw that we would depend more on the work of people who invent and design the machines that automate laborious tasks.⁵¹ He spoke highly of IBM and its decision to abandon Taylorist ideals. At the time, the corporation removed fear as a motivator for desirable conduct from the workplace, while understanding that removing fear alone does not establish a motivating culture. Drucker regarded IBM’s management style as the best illustration of his principles of abandoning output maximization, eliminating performance-based compensation, and not treating workers as commodities (in the 1930s, in the middle of the Great Depression, IBM introduced lifetime employment).
If all of this has been thought of before and written down as eloquently and compellingly as Drucker did more than half a century ago, why does Taylorism still dominate the cyber domain?
Humans as failed computers
One reason that comes to mind is that the information technology industry habitually views human beings as failed computers or robots. If you do that, it is natural to dream of automated cyber security systems in which human beings are merely “dumb” parts that need to be kept in check with “smart” algorithms.
Another reason is that modern Taylorism supports production strategies focused on maximum output. Technology companies like Google, Amazon, and Apple, along with many technology startups, continuously ship new products and features at extraordinary speed. Because every step of most production processes can now be optimized, Taylorist leaders find it hard to resist the idea that humans can be lumped into the same category as tech solutions.
A more fundamental issue is that people assume that the strategies that work for productivity also work for security. In reality, however, this is a Taylorist fallacy. Moreover, it usually leads to systematic neglect and catastrophic management of cyber security risks.
For example, the recent assessment by Peiter (“Mudge”) Zatko, the former security chief of Twitter, shows how Taylorism at Twitter has led to a security disaster. The most concerning aspect of the assessment is not just its findings (which are, frankly, the usual security shortfalls of fast-paced digital organizations: poor access control, insufficient security processes, arrogant managers ignoring compliance requirements, sloppy software development lifecycles, and so on), but the fact that the security chief appointed to address these issues was compelled to file a whistle-blower complaint with the U.S. Securities and Exchange Commission.⁵² Taylorist organizations will, hopefully, learn a compelling lesson from this event: you can ignore, mislead, overrule, or even fire your cyber security people for challenging your harmful management style, but you will never be able to suppress the truth.
Another reason the cyber domain is still dominated by Taylorism is that the engines of Taylorism have no reverse gear. When scientific management is applied to a new domain, a chain reaction of cascading alignment is initiated. First, behavioral policies are introduced, followed by surveillance mechanisms to ensure compliance. At this point, management has assumed too much power to want to turn back, and the intoxicating potential of greater control pushes it deeper into Taylorism. More behavioral policies are introduced, along with security standards, metrics, enforced compliance controls, and so on. Taylorism is so firmly ingrained in the cyber security domain, and enforced by so many governing bodies (from regulators to auditors to security frameworks), that there seems to be no simple way out.
Strive for change
Yet it’s worthwhile to strive for change. Overcoming Taylorism means, for example, employees who instinctively reach out to security teams for assistance. It leads to management teams and system owners who can finally articulate cyber risks, sparking meaningful, curiosity-driven discussions about cybersecurity improvements.
This detoxifying chain reaction will continue to spread across every avenue of the business, driving the natural flourishing of a healthy cybersecurity culture, all without a machinist-CISO brute-forcing every incremental security improvement.
References
(1) Data source: https://www.statista.com/outlook/tmo/cybersecurity/worldwide
(2) CyberEdge Group, 2022 Cyberthreat Defense Report
(3) Cybercrime magazine, “Global Cybersecurity Spending To Exceed $1.75 Trillion From 2021–2025”, https://cybersecurityventures.com/cybersecurity-spending-2021-2025/
(4) https://www3.weforum.org/docs/WEF_The_Global_Risks_Report_2022.pdf
(5) Statista, “Main factors for failure of M&A deals according to M&A practitioners worldwide 2021”, https://www.statista.com/statistics/1295764/factors-for-manda-failure-worldiwide/
(6) Anderson, Ross, Security Engineering, Wiley, Cambridge, 3rd edition, 2020, pp. 1002–1003.
(7) Jodi Kantor and Arya Sundaram, “The Rise of the Worker Productivity Score”, New York Times, August 14, 2022, https://www.nytimes.com/interactive/2022/08/14/business/worker-productivity-tracking.html
(8) Kanigel, Robert, The One Best Way: Frederick Winslow Taylor and the Enigma of Efficiency, Viking Adult, 1997
(9) F. W. Taylor, Principles of Scientific Management, p. 128. For the popularity of Taylor’s philosophy, see Haber, Sam. Efficiency and Uplift: Scientific Management in the Progressive Era. Chicago: University of Chicago Press, 1964.
(10) Taylor, Frederick, “The Principles of Scientific Management”, Andesite Press, 2015.
(11) See: Taneja, Sonia; Pryor, Mildred Golden; Toombs, Lesley, ‘Frederick W. Taylor’s Scientific Management Principles: Relevance and Validity’, 16, 3, 2011, pp. 60–78.
(12) Taylor, Frederick Winslow. The Principles of Scientific Management, p. 47.
(13) 54% of all ransomware infections are delivered through phishing. Statistics provided by Statista, Joseph Johnson, Sep 9, 2021, Phishing — statistics & facts; See also: Statista, Most common delivery methods and cybersecurity vulnerabilities causing ransomware infections according to MSPs worldwide as of 2020
(14) Special Publication 800–181 Revision 1, 2020, https://csrc.nist.gov/publications/detail/sp/800-181/rev-1/final
(15) Kävrestad, Joakim, Furnell, Steven, et al., Evaluation of Contextual and Game-Based Training for Phishing Detection, Future Internet 2022, 14, 104
(16) For a recent overview of phishing testing and simulation software, see this page by McAfee.
(17) Watters, Paul, Why Do Users Trust The Wrong Messages? A Behavioural Model of Phishing
(18) Diri, Banu, Sahingoz, Koray, “NLP Based Phishing Attack Detection from URLs”, 2018, DOI: 10.1007/978-3-319-76348-4_59
(19) Zhuo, Sijie, et al., “SoK: Human-Centered Phishing Susceptibility”, arXiv:2202.07905v1 [cs.CR] 16 Feb 2022.
(20) Statista, Number of unique phishing sites detected worldwide from 3rd quarter 2013 to 1st Quarter 2021
(21) A large-scale study (one million U.S. army soldiers analysed by researchers for nearly a decade) found that well-being leads to outstanding job performance. See: Lester, Paul, et al., “Happy Soldiers are Highest Performers”, Journal of Happiness Studies
(22) Hooydonk, Stefaan van, The Workplace Curiosity Manifesto: How Curiosity Helps Individuals and Workspaces Thrive in Transformational Times, pp. 110–111. New Degree Press, 2022.
(23) The project has been planned as a long-term campaign and is still ongoing: https://www.merckgroup.com/en/company/curiosity.html. See also the bi-annual Merck State of Curiosity reports, 2016, 2018, 2020.
(24) Peterson, Christopher, A Primer in Positive Psychology, Oxford University Press, Oxford, 2006
(25) Waring, Stephen P., Taylorism Transformed, The University of North Carolina Press, 1991.
(26) Kashdan Todd B., David Disabato, Fallon Goodman, and Patrick McKnight. The Five-Dimensional Curiosity Scale Revised (5DCR): Briefer Subscales While Separating Overt and Covert Social Curiosity. Personality and Individual Differences, April 2020. https://doi.org/10.1016/j.paid.2020.109836.
(27) Merck, Curiosity Report 2020 Our Company Results, https://www.merckgroup.com/press-releases/2021/jan/en/Curiosity-Report-2020-Factsheet-EN.pdf
(28) Merck KGaA, Group Communication, “State of Curiosity Report 2018”. https://www.merckgroup.com/company/en/State-of-Curiosity-Report-2018-International.pdf.
(29) Bureau of Labor Statistics, U.S. Department of Labor, Occupational Outlook Handbook, https://www.bls.gov/ooh/home.htm
(30) The BBC World Service has a great series about the heist: https://www.bbc.com/news/stories-57520169
(31) https://shields.com/notice-of-data-security-incident/; the example is a random pick from the endless stream of data leakages, listed on websites such as https://firewalltimes.com/recent-data-breaches/
(32) Amazon: https://www.zdnet.com/article/amazon-heres-what-caused-major-aws-outage-last-week-apologies/, Google: https://status.cloud.google.com/incident/zall/20013, Microsoft: https://www.zdnet.com/article/microsofts-azure-ad-authentication-outage-what-went-wrong/
(33) Bundesamt für Sicherheit in der Informationstechnik, Sicherheits-Faktor Mensch.
(34) V. Zimmermann and K. Renaud. Moving from a human-as-problem to a human-as-solution cybersecurity mindset. International Journal of Human-Computer Studies, 131:169–187, 2019.
(35) J. Reason. Human error: models and management. British Medical Journal, 320(7237):768–770, 2000.
(36) A somewhat extreme punitive action: suing your employee who falls for an email scam: https://www.bbc.com/news/uk-scotland-glasgow-west-47135686.
(37) Helpnet Security. “4 in 10 organizations punish staff for cybersecurity errors”, 2019. https://www.helpnetsecurity.com/2020/08/05/4-in-10-organizations-punish-staff-for-cybersecurity-errors, retrieved 12–07–2022.
(38) See also: W. Presthus and K. F. Sønslien. “An analysis of violations and sanctions following the GDPR”. International Journal of Information Systems and Project Management, 9(1):38–53, 2021.
(39) K. Thomson, J. Van Niekerk, Combating information security apathy by encouraging prosocial organisational behaviour, Information Management & Computer Security 20 (1), 2012, pp. 39–46.
(40) This is what companies like Cisco or Verizon advise doing. Cisco, “Cisco 2018 annual cybersecurity report”, https://www.cisco.com/c/en/us/products/security/security-reports.html; S. Widup, M. Spitler, D. Hylender, G. Bassett, “2018 Verizon Data Breach Investigations Report”, http://www.verizonenterprise.com/de/DBIR/
(41) Leveson, Nancy G., Engineering a Safer World: Systems Thinking Applied to Safety, MIT Press, Cambridge, Massachusetts, 2011.
(42) Facebook engineering blog, “More details about the October 4 outage”, https://engineering.fb.com/2021/10/05/networking-traffic/outage-details/
(43) Dekker, Sidney. The Field Guide to Understanding Human Error. London: Ashgate, 2006.
(44) David Marquet, a former Navy captain of a nuclear-powered submarine, gives a fascinating first-person account of this phenomenon in his book “Turn the Ship Around!”. L. D. Marquet, David Covey, Turn the Ship Around! A True Story of Turning Followers into Leaders, Gildan Media, LLC, 2013.
(45) J. P. Bagian, J. Gosbee, C. Z. Lee, L. Williams, S. D. McKnight, D. M. Mannos, The veterans affairs root cause analysis system in action, The Joint Commission Journal on Quality Improvement 28 (10), 2002, pp. 531–545.
(46) As defined in: NASA Procedures and Guidelines document NPG 8621 Draft 1.
(47) Tversky, A.; Kahneman, D. “Availability: A heuristic for judging frequency and probability”. Cognitive Psychology. 5 (2), 1973, pp. 207–232, doi:10.1016/0010-0285(73)90033-9.
(48) Heinrich, H. W., Industrial accident prevention (4th edition). New York: McGraw-Hill Book Company, 1959, page 31. Heinrich’s statistics were firmly debunked in the 1980s by Nestor Roos and his team, yet the 90% figure for human failure keeps returning to the present day. See: Roos, Nestor, Industrial accident prevention: a safety management approach. New York: McGraw-Hill, 1980.
(49) World Economic Forum, “After reading, writing and arithmetic, the 4th ‘r’ of literacy is cyber-risk”, https://www.weforum.org/agenda/2020/12/cyber-risk-cyber-security-education, retrieved 21–07–2022
(50) Rasmussen, Jens. “Human error and the problem of causality in analysis of accidents”. In: Human Factors in Hazardous Situations, ed. D. E. Broadbent, J. Reason, and A. Baddeley, Oxford: Clarendon Press, 1990, pp. 1–12.
(51) Drucker, Peter F. The Practice of Management, Harper Collins, 1951, p. 256.
(52) Menn, Joseph; Dwoskin, Elizabeth; Zakrzewski, Cat, “Former security chief claims Twitter buried ‘egregious deficiencies’”, The Washington Post, August 23, 2022.
Updates
28–09–2022: minor updates to the references