Cyber Security Buzzwords #1: “Zero Trust”

Dr. Sybe Izaak Rispens
Jun 28, 2021

A terrible name for a bright idea

A person wearing a black robe and sitting at the front of a courtroom on a judge’s bench must be trustworthy. Or is it a case of homo homini lupus? (Sorry, dear wolves, but that’s how it’s seen in Aesop’s fable.) Image: wernerwerke

In 2010, John Kindervag¹, a principal analyst on the ICT security and risk team at Forrester, introduced the term “Zero Trust.” His paper, “No More Chewy Centers: Introducing The Zero Trust Model Of Information Security,” outlines the idea that the concept of “trust” is hard, if not impossible, to implement in IT infrastructures². There is no way to verify the trustworthiness of a piece of information based on the physical properties of the infrastructure used to transport that information from a sender to a receiver. Therefore, we should not try to create digital trust based on the idea that we can build computer networks with a “hard” shell and a “soft” center, i.e., networks in which we distrust everything outside a digital fence and trust everything within (the “chewy center”).

This observation is so accurate that it almost seems like a triviality: if you try to add attributes to the information in the data layer, like “trust,” based upon information from layers below the data layer — i.e., the transportation layer or the protocol layer — you are in big, big trouble.

It’s one of those many examples where human intuitions on security go awry when projected onto technical systems. In the realm of human interactions, it makes perfect sense to take something like location (transportation layer) or physical appearance (protocol layer) into account when it comes to trust.

Courthouse

For example, suppose you are inside a courthouse and hear something from a person wearing a black robe and sitting at the front of a courtroom on a judge’s bench; it is perfectly sensible to trust that this person is indeed a judge and that what this person says carries the authority of a judge. So the alligator-brain math goes like this: she looks like a judge, speaks like a judge, and sits in the judge’s seat. Ergo, she’s a judge!

Yet, in computer systems, this does not work. There are no context attributes that can be trusted. Nothing from the transportation or protocol layer has any relevance for any qualitative attribute in the data layer. For example, when a technical system receives a message from another system, it is trivial for an adversary to modify the part of the message that contains the information necessary for transporting the message from sender to receiver. Just like it is easy to write a random sender address on a paper envelope. This is hacking the transportation layer. Similarly, it’s easy to siphon information from one protocol to another protocol. That’s hacking the protocol layer.
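
To make this concrete, here is a minimal sketch (in Python, with a made-up message format) of why a sender field on the transportation layer proves nothing: it is just bytes that anyone along the path can write.

```python
# Minimal sketch: the "sender" field of a message is attacker-controlled
# metadata, so it can never justify trust in the payload.
# The message format here is hypothetical.

import json

def forge_message(payload: str, claimed_sender: str) -> bytes:
    """Build a message whose 'sender' field says whatever we want."""
    return json.dumps({"sender": claimed_sender, "payload": payload}).encode()

# An adversary can claim to be any "trusted" internal host:
msg = forge_message("transfer 10,000 EUR", claimed_sender="10.0.0.5 (backup server)")

# A receiver that trusts the envelope instead of a cryptographic signature
# cannot tell this apart from a genuine internal message.
print(json.loads(msg))
```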

Therefore, a secure system designer assumes that the network is always hostile. Meaning: external and internal threats exist on the network at all times. Trust decisions are never made based upon network locality. The overall design principle is: all nodes in a network (devices, users, information flows) must be authenticated and authorized.
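
In code, that principle means every request is authenticated and authorized on its own merits and never waved through because of where it comes from. A minimal sketch, with a made-up token format and policy table:

```python
# Minimal sketch of the "network is always hostile" principle: every request
# is authenticated and authorized individually; network locality (source IP,
# subnet, port) never enters the decision. Tokens and policy are made up.

VALID_TOKENS = {"tok-alice": "alice", "tok-bob": "bob"}   # stand-in for real authentication
POLICY = {("alice", "read:reports"), ("alice", "write:reports"), ("bob", "read:reports")}

def handle_request(token: str, action: str, source_ip: str) -> str:
    identity = VALID_TOKENS.get(token)        # authenticate the caller
    if identity is None:
        raise PermissionError("authentication failed")
    if (identity, action) not in POLICY:      # authorize this specific action
        raise PermissionError(f"{identity} may not {action}")
    # source_ip is deliberately ignored: coming from "inside" proves nothing.
    return f"{identity} performed {action}"

print(handle_request("tok-bob", "read:reports", source_ip="10.0.0.5"))
```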

Even though most engineers probably know all of this, it’s remarkable how much trust even seasoned engineers are still willing to place in information simply because it is received from a “known” machine. It’s a known machine because it sits inside the firewall or the domain, or has a specific IP address. Or the message is trusted because it arrived on a particular port, in a protocol that we understand.

From a “zero trust” subject to a “smart trust” infrastructure

There may have been a need for Kindervag’s provocative title and catchy phrase ten years ago, and even though I still see engineers make this mistake every day, it is nevertheless an incredibly misleading name. The “zero” points at the wrong thing: at the inventing subject, the engineers who put trust in network locality when thinking about information architecture, whereas the focus should be on the system itself. Today, the best “zero trust” network designs have trust everywhere, except in the heads of their designers, because they no longer mix the data layer with the transportation and protocol layers.

The main goal of such networks is to create trust between actors within the network automatically. These systems generate interactional trust on a massive scale. It works somewhat like this: the sender does not send information directly to a receiver but first contacts a trust broker. The trust broker sets up the encryption keys for both parties. These keys are temporarily valid and are only used for that one interaction. The trust broker sends the disposable key to the sender and receiver, and from that moment onward, they send the actual information directly to each other. It’s irrelevant what network this encrypted message is sent on or what protocol that transport is based on. The information has been properly encrypted and signed, so the message’s confidentiality, authenticity, and integrity are guaranteed.
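
A minimal sketch of that flow, assuming a hypothetical in-process broker and using symmetric authenticated encryption from Python’s cryptography package (a real broker would authenticate both parties and deliver keys over separately protected channels):

```python
# Sketch of a trust-broker handshake: the broker hands both parties a
# disposable key for one interaction; the encrypted message then travels
# over any network or protocol without relying on either for trust.

from cryptography.fernet import Fernet

class TrustBroker:
    """Issues one-time symmetric keys to a (sender, receiver) pair."""
    def issue_session_key(self, sender_id: str, receiver_id: str) -> bytes:
        # In reality the broker would verify both identities first and
        # send the key to each party over a protected channel.
        return Fernet.generate_key()

broker = TrustBroker()
session_key = broker.issue_session_key("alice", "bob")

# The sender encrypts; Fernet provides confidentiality and integrity.
ciphertext = Fernet(session_key).encrypt(b"quarterly figures, draft 3")

# The receiver decrypts with the same disposable key; the underlying
# network and protocol never enter the trust decision.
plaintext = Fernet(session_key).decrypt(ciphertext)
assert plaintext == b"quarterly figures, draft 3"
```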

The broker’s responsibility is to produce a numeric assessment of the riskiness of allowing a particular action in the network. It calculates a “trust score” for users based on temporal attributes (access outside that user’s normal activity window is more suspicious), geographical attributes (access from an unusual location), and so on. Based on that score, trust brokers dynamically adjust policies and access to match the riskiness of activity on the network.
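
A toy version of such a scoring decision, with made-up attributes, weights, and thresholds, just to show the shape of the logic:

```python
# Toy trust-score calculation; the attributes, weights, and thresholds are
# invented for illustration and not taken from any real product.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    hour_of_day: int       # 0-23, in the user's usual timezone
    country: str           # where the request appears to come from
    usual_hours: range     # the user's normal activity window
    usual_country: str

def risk_score(req: AccessRequest) -> float:
    """Higher score = riskier request."""
    score = 0.0
    if req.hour_of_day not in req.usual_hours:
        score += 0.4       # access outside the normal activity window
    if req.country != req.usual_country:
        score += 0.5       # access from an unusual location
    return score

def decide(req: AccessRequest) -> str:
    score = risk_score(req)
    if score >= 0.8:
        return "deny"
    if score >= 0.4:
        return "require step-up authentication"
    return "allow"

req = AccessRequest("alice", hour_of_day=3, country="BR",
                    usual_hours=range(8, 19), usual_country="DE")
print(decide(req))  # -> "deny": odd hour and odd location together
```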

Encryption

Yet, for this to work, we must trust encryption. But crypto can be broken, so how do we trust the trust broker? This can partly be achieved by something called a “chain of trust,” in which one issuer of cryptographic keys builds upon certificates obtained from so-called Certificate Authorities (CAs). But, of course, CAs can be compromised. And almost all of the big CAs have been hacked in the past decade³. As I outlined in a previous article, attacks on the supply chain of trust are very worrying.
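
Conceptually, validating such a chain looks like the sketch below; the signature check is a stub, because the point here is the structure of the chain, not the cryptography (real validation also covers expiry, revocation, name constraints, and more):

```python
# Conceptual sketch of a chain of trust: each certificate must be signed by
# the issuer above it, and the chain must end at a root we already trust.

from dataclasses import dataclass

@dataclass
class Certificate:
    subject: str
    issuer: str

def signature_is_valid(cert: Certificate, issuer: Certificate) -> bool:
    # Stub: a real implementation verifies cert's signature against the
    # issuer's public key.
    return cert.issuer == issuer.subject

def chain_is_trusted(chain: list[Certificate], trusted_roots: set[str]) -> bool:
    for cert, issuer in zip(chain, chain[1:]):
        if not signature_is_valid(cert, issuer):
            return False
    return chain[-1].subject in trusted_roots

leaf = Certificate(subject="broker.example.net", issuer="Example Intermediate CA")
intermediate = Certificate(subject="Example Intermediate CA", issuer="Example Root CA")
root = Certificate(subject="Example Root CA", issuer="Example Root CA")

print(chain_is_trusted([leaf, intermediate, root], {"Example Root CA"}))  # True
# If any CA in the chain is compromised, this check still passes quietly,
# which is exactly why CA breaches are so worrying.
```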

Because of the far-reaching control of the trust brokers over the smart network’s behavior, it’s necessary to verify all encryption thoroughly. If that is done correctly, the idea is that it is possible to create transactional trust for a short time between two parties who do not otherwise trust each other (or the broker).

It’s not perfect, but it’s much better than anything else we have, and trust brokers are getting better and better. Thanks to machine learning and artificial intelligence, they can autonomously adjust their behavior to known and unknown attacks by malicious actors. And trust brokers are ramping up to massive speed and scale.

It’s exciting to see that the EU has invested heavily in an open-source cloud project called “Gaia-X”⁴. Gaia-X went public two weeks ago, and it has all of the smart trust principles outlined above firmly embedded in its architecture. This leads to a secure, federated system that meets the highest standards of digital sovereignty⁴ᵇ.

There are even start-ups with a business model built on smart trust, like the Norwegian video chat company “whereby.com.” Unlike most existing video chat software companies, Whereby uses no central server that passes all information from sender to receiver. Instead, Whereby’s server acts solely as a trust broker between the people who want to see and talk to each other online. Whereby creates disposable encryption keys, sends them to the senders and receivers, and then takes its hands off the call. The encrypted information is transmitted directly between the peers. No need for centralized data.

These architectural ideas solve many security and privacy problems, thanks to a strict separation between data, transport, and protocol layers.

So, let’s finally drop the “zero.” And preferably also do away with “trust.” It’s an anthropocentric idea that, in the realm of ICT and security risk management, is in most cases misleading, because it blurs the picture with human intuitions about trust⁵.

References

(1) https://go.forrester.com/speakers/john-kindervag/
(2) http://crystaltechnologies.com/wp-content/uploads/2017/12/forrester-zero-trust-model-information-security.pdf
(3) Mahsa Moosavi, "Certificate Authorities: Measurements and Validation Procedures", 2021, https://users.encs.concordia.ca/~clark/projects/opc/main.pdf
(4) https://www.gaia-x.eu/sites/default/files/2021-05/Gaia-X_Architecture_Document_2103.pdf

(4b) The grand ideas seem to have been better than reality; apparently, Gaia-X is plagued by chaos and infighting.

(5) Dieter Gollmann, "Why Trust is Bad for Security", Electronic Notes in Theoretical Computer Science 157 (2006), 3–9.

Revision History

30–10–2021: Added link to Gollmann’s article "Why Trust is Bad for Security", which makes the same point as this article.

21–11–2021: Added the sad link 4b, to “chaos and infighting” at Gaia-X. Sigh…


Dr. Sybe Izaak Rispens

PhD on the foundations of AI, ISO27001 certified IT-Security expert. Information Security Officer at Trade Republic Bank GmbH, Berlin. Views are my own.