Apple Platform Security and Corporate Cyber Responsibility

In February 2021, Apple quietly published two impressive documents that raise the bar for IT security strategies: the annual update of the Apple Platform Security guide and the new Security Certifications and Compliance Center.
These documents are exemplary reads, both because of what they do well and also because of what is missing.
Platform Security
Apple Platform Security describes the security-related aspects of the Apple ecosystem, covering the hardware and software that keep iPhones, iPads, Apple Watches, Macs and iCloud services secure¹.
One of the most interesting aspects of the 200 pages of documentation is that it all seems to be guided by a famous quote by computer scientist Alan Kay from 1982: “People who are really serious about software should make their own hardware”². Steve Jobs was fond of this quote, and Apple has been using it for decades to argue that its products are better because the company only runs its software on its own hardware³.
From a functional perspective, that strategy has worked out remarkably well. If you compare Apple’s latest operating system with current versions of Windows or Unix variants (both of which run on any kind of compatible hardware), or iOS with Android (which also runs on hardware provided by many different vendors), there is a clear qualitative difference: in general, Apple products are sleeker, cause fewer issues, have a lower total cost of ownership and a substantially longer half-life (the other day, I refurbished a 15-year-old iPod as a present for our daughter’s 5th birthday, and it works flawlessly, both in terms of hardware and software and in terms of our daughter being able to understand how everything works, and she loved the device instantly).
Yet, you could argue that the functional differences between the various operating systems are gradually decreasing, and that it’s therefore nice for a vendor to own its own hardware in order to make great software (which also has a lot of other business implications that may be good for the company), but that it’s not a necessity for making great software. In the end, functional quality is subject to personal preferences, and the question “is it necessary to own hardware in order to make great software” will always remain somewhat in the realm of religious zeal.
Security is invisible
That is different from a non-functional point of view. When it comes to things like security, reliability, performance, maintainability, scalability and also energy efficiency, there is much to say for the argument that owning the hardware is necessary in order to boost things from good to great. People who are really serious about security should really make or at least own their own hardware. There is no other choice — the only degrees of freedom left are in the implementation of this strategy.
One option, requiring large amounts of resources, is the way Apple is doing things right now: the company designs its own microprocessors. This has been a long-term strategic move that started over a decade ago with a custom Apple chip called the “A4”, which was first used in iPhones and iPads. Later, Apple also integrated custom silicon into computers such as Mac minis and MacBooks, in the form of the “T1” and “T2” co-processors that sat next to the main microprocessor, which was still made by Intel. The main non-functional role of these chips was to take care of security-related tasks such as authentication and encryption.
In the fall of last year, Apple introduced its first full-blown CPU, the “M1”, which is able to replace a third-party CPU altogether. From what we know so far from lab analyses and user tests, this really is the great CPU that Apple claims it to be: it is remarkably fast and provides a lot more processing power per watt than anything else on the market at the moment.
These functional aspects of the M1 can be objectively measured and tested⁴. The advantages of the new hardware are even verifiable by regular users: the new Macs open applications fast, everything feels snappy, and due to the low power consumption, a noisy fan is not needed anymore. Apple’s new laptops wake up instantly, work silently and will keep working days on end on a single charge of the battery. Anyone can notice that.
But none of this matters for non-functional aspects, such as security. There is no direct way to know how good or bad the M1 is in terms of security.
Indirectly, this important quality can be gauged by three means. One could evaluate the documentation Apple provides. One could monitor the work of hackers who try to crack the new hardware, and track known vulnerabilities listed in the central database of the National Institute of Standards and Technology (NIST)⁵. Finally, one could note that Apple, for the first time in the company’s history, announced that its M1-equipped computers fall under a certification according to the international standard for information security management, ISO/IEC 27001.
Documentation
What does the Apple documentation say about the security of the new M1, and the role it plays in overall Apple platform security? The list of topics is long, and it takes a deep breath to dive into all aspects. But it’s worthwhile looking at the list of topics in full, as it gives a good high-level overview.
The list starts with hardware security and biometrics. This is the custom hardware, like the chips and CPUs that Apple makes, and that forms the foundation for all security on Apple devices. It makes sure that basic authentication, such as Touch ID and Face ID, work securely, so that only legitimate users can log on to a device.
From there we move to system security. This is a combination of hardware and software that makes sure a device can start up safely. For instance, it guarantees that the device will execute only an untampered version of the operating system and, if required, starts automatic software updates. Next is the hardware and software needed to keep data secure at rest, in transit and in use. Related to this are several services: mechanisms that give access to security-related capabilities such as identification, password management, payments, communications and finding or remotely wiping lost devices.
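The startup guarantee mentioned above rests on a chain of trust: each boot stage measures the next one and refuses to hand over control if the measurement does not match. Here is a minimal Python sketch of that idea (a toy model for illustration only, not Apple’s actual implementation; the stage names and contents are made up):

```python
import hashlib

def digest(blob):
    """SHA-256 measurement of a firmware image."""
    return hashlib.sha256(blob).hexdigest()

# Hypothetical firmware images for two boot stages.
bootloader = b"second-stage loader"
kernel = b"operating system kernel"

# Trust anchors: the digests each preceding stage expects of the next.
expected = {
    "bootloader": digest(bootloader),
    "kernel": digest(kernel),
}

def verified_boot(images):
    """Walk the chain; abort at the first stage whose measurement
    does not match the recorded expectation."""
    for stage, blob in images.items():
        if digest(blob) != expected[stage]:
            return False  # tampered image: refuse to boot
    return True

# Untouched images boot; a single flipped byte in the kernel does not.
assert verified_boot({"bootloader": bootloader, "kernel": kernel})
assert not verified_boot({"bootloader": bootloader, "kernel": kernel + b"!"})
```

The crucial design point is that the first link of the chain (the boot ROM) is immutable hardware, so the expectations themselves cannot be rewritten by an attacker.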
Finally, there is application security: all services of the operating system together provide a safe app ecosystem, ensuring that apps can run in parallel securely, without compromising platform integrity.
Of course, for all of this, encryption is key. Good encryption depends on good hardware, a good architecture, use of up-to-date encryption standards and a good implementation of those standards. (I am going to write a separate article on encryption; sign up below if you want to be notified when it is published.) For now it suffices to say that for encryption to work properly, all four aspects need to be insanely tightly integrated in order to prevent data from slipping through the cracks.
Developers, Hackers, Vulnerabilities and Bug Bounty programs
Apple’s Platform Security does not stop at the perimeter of Apple-made systems and processes. Outside of the company, the integrity of its computers is improved by the efforts of application developers, researchers and hackers.
Apple provides tools and frameworks for secure development of apps. This is important, because with Apple’s own security levels going up and the hardware chain of trust becoming harder to break, attackers often switch their attention to the weaker links in the chain. The myriad developers that make apps for iOS or macOS can’t be expected to have the same level of security skills as Apple employees. Therefore, Apple offers supporting tools for developers, mainly bundled in Xcode, the integrated development environment needed for building applications that run on Apple devices.
Also included in this scope are security researchers and hackers: anyone who shares critical issues, and the techniques used to exploit them, with Apple is rewarded. For example, anyone who can demonstrate a zero-click kernel code execution (an attack that executes arbitrary code on a target system without any user interaction) receives one million US dollars. Successful attacks on radio communication channels, such as Bluetooth or WiFi, earn a bounty of several hundred thousand US dollars. Apple’s bug bounty program came relatively late: it was launched in 2016 as an invitation-only program, several years after Google, Microsoft, Facebook and Airbnb started their bug bounty programs, and comparable companies such as Sony and Spotify had already been running programs on bug bounty platforms such as “YesWeHack”, “Bugcrowd” or “HackerOne”⁵. Even the European Commission launched its own bug bounty initiative for popular open source projects, called “EU-FOSSA 2”, in 2019.⁶
Certifications
Apple first certified its Information Security Management System (ISMS) in 2016. This makes parts of the organisation compliant with the international ISO/IEC 27001 standard for Information Security Management Systems⁷. The scope of the certificate reads: “for the infrastructure, development, and operations supporting the products and services: iCloud, iMessage, FaceTime, Siri, Apple Push Notification Service, Apple School Manager, iTunes U, Schoolwork, Apple Business Manager, Apple Business Chat and Managed Apple IDs”. The certificates have been issued by the British Standards Institution (BSI), and the current certificates are valid through 2022, when it is time for the planned regular review by external auditors and, provided they have no big findings, renewal of the certificates.⁸
Random Numbers
On a high level, all of this looks very good. There is a clear corporate vision and high commitment to continue raising the bar for security. The hardware chain of trust in particular is a sound foundation for security. Apple does not mindlessly settle for “industry standards” or “best practices”, and does not hide behind ISO compliance.
Yet, a lack of detail in the platform security documentation often leaves the reader with many questions unanswered. For example, the fact that the new M1 microprocessor has a true hardware random number generator is great. Random numbers are used in many cryptographic algorithms and protocols. They are essential for generating good session keys and private keys, and a good remedy against replay attacks. The weaker the random numbers, the easier it is to break into a cryptographic system⁹. When used for cryptographic purposes, random numbers must be as unpredictable as possible and have good statistical properties.
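How much unpredictability matters is easy to demonstrate. In the following Python sketch (standard library only), a seeded pseudo-random generator can be replayed exactly by anyone who knows or guesses the seed, while the operating system’s cryptographically secure generator, exposed via the `secrets` module, is designed to resist exactly that:

```python
import random
import secrets

# A seeded PRNG like `random` is fully deterministic: an attacker who
# recovers the seed can reproduce every "random" value. Weak seeds are
# exactly how early SSL session keys were broken.
rng = random.Random(42)
attacker_rng = random.Random(42)  # the attacker guesses the seed
assert rng.getrandbits(128) == attacker_rng.getrandbits(128)

# For keys and nonces, use the OS CSPRNG instead, which continuously
# mixes in entropy (including hardware sources, where available).
session_key = secrets.token_bytes(16)  # a fresh 128-bit key
assert len(session_key) == 16
```

The lesson generalises: a hardware generator like the M1’s ultimately feeds this kind of CSPRNG, which is why its quality matters so much.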
And this is where the limitations of the Apple documentation become apparent: there are myriad ways a hardware random number generator can fail¹⁰. The bad thing is, they fail silently: there is no way of looking at a number the generator has produced and knowing how random it really is.
Apple engineers surely have found strategies to meet the challenges of true random number generation. In lab tests, they probably used high quality standardised tests that measure various aspects of the true random number generator’s operation. It is quite likely that smart solutions have been found for the many hardware related issues of random number generation. Yet, we don’t know for sure, as there is no detailed technical information available.
For issues such as random number generation, this might be overcome by creating lab-style testing software that tests the randomness of the M1’s generator. NIST, for instance, has a great statistical test suite for random numbers that tests things like entropy levels and evaluates discrete Fourier transforms of number series¹¹.
For many other aspects of security it may not be possible, or only with extremely high effort, including building complex hardware and software, to create adequate test environments.
Radical Transparency

In such cases, high-quality, in-depth technical information from the vendor is essential. And that is something Apple’s Platform Security documentation does not provide at the moment. We cannot understand the critical security issues of Apple’s platform security in the breadth and depth necessary to validate its overall quality, let alone assess its vulnerabilities and risks.
What is needed is radical transparency. That means: sharing detailed information about the people, processes, systems and external factors that together determine Apple’s platform security.
What does that mean for Apple?
First, Apple needs to shift its corporate culture from its present preference for total secrecy to radical transparency. This is probably the most daunting task, not just because “culture eats strategy for breakfast”, as Peter Drucker famously put it, but also because Apple’s current business model critically depends on secrecy¹². The most prominent aspect of that is that secrecy is used as a marketing halo around its consumer products. For the functional aspects of running a business like Apple’s, that strategy still seems to pay off well, but for security, it’s a dead end. It does not make sense to communicate a grand corporate vision on IT security if the culture does not support transparency about how this security goal is implemented.
Now, in theory, it would not be necessary to completely banish secrecy from Apple’s culture. It would suffice if good, detailed, expert technical information were shared with the international security community. That way, Apple’s Platform Security could be evaluated in order to improve the efforts to analyse, deter, mitigate and eliminate threats and vulnerabilities.
Just like the Platform Security documentation was silently published, and Apple’s bug bounty program was set up with hardly any publicity, the transparency needed for Apple’s security platform could also be introduced without further fanfare. But in practice, it seems impossible to achieve a cultural shift with such a narrow scope. Either you change your corporate culture and have it affect all of your business, or you limit the change to a specific domain and see it fail. This may be a bitter pill to swallow, but people who are really serious about security risk management should live radical transparency.
Second, Apple must leave behind the idea that good cyber security is something that can be pulled off entirely by one company or a small group of companies. IT security can never be achieved by relying on the invisible hand of the market. As with most forms of security that affect society at large, regulation is essential. The environment, finance, food, healthcare, aviation and logistics: all of these sectors have reached levels of safety and security that are acceptable to society at large only with regulatory oversight. Regulation is also an essential instrument for managing the risk that compromised ICT poses to our society.
By “regulation”, I don’t mean the old-school, slow, cumbersome regulatory settings that impose high costs and impede innovation. What is needed are forms of regulatory oversight where integrity, excellence and stellar competence of the regulators will steer the IT sector at large toward building more secure products and services. The goal is that regulators will define the necessary rules for the entire sector that push it towards more security and more resilience.
What can Apple’s role be in this? The most obvious thing is that it can contribute to a global paradigm shift in cyber security thinking. This paradigm shift must put the center of gravity outside individual organisations and look at global security issues. ICT security needs organisations that are socially accountable — to themselves, their stakeholders, and the public.
Corporate Cyber Responsibility (CCR)

What we need, in other words, is Corporate Cyber Responsibility (CCR). CCR is the way in which businesses consistently create shared value in our digital society, through transparent development, good governance, stakeholder responsiveness and systemic cyber security improvements. This requires companies like Apple to become conscious of the impact they have on all aspects of society, including economic, social, and political.
A first step on the path of Corporate Cyber Responsibility could be to fund think-tanks aimed at solving the many issues with regulatory excellence in the field of IT. Agencies lack the resources — money, people, methods, knowledge, ideas — that are necessary to accomplish even basic work, let alone such new thinking. See for example the Irish Data Protection Commission, which is currently under fire because it is so terribly underfunded and conservative in its approach that it is no match for digital behemoths like Facebook and Google, threatening the core ideas of the GDPR¹³.
Corporate Cyber Responsibility demands that a company like Apple lead the way and make sure that the current best ideas on how to achieve regulatory excellence graduate from the stage of academic exercises to that of mainstream corporate and government policies.¹⁴
References
(1) https://support.apple.com/en-gb/guide/security/welcome/web
(2) https://iguchijp.medium.com/people-who-are-really-serious-about-software-should-make-their-own-hardware-48a2765633ba
(3) The grand vision formulated in the Platform Security Guide is also reminiscent of Jobs’s time at Apple: “Every Apple device combines hardware, software, and services designed to work together for maximum security and a transparent user experience in service of the ultimate goal of keeping personal information safe. Apple devices protect not only the device and its data but the entire ecosystem, including everything users do locally, on networks, and with key internet services.”
(4) See the interesting work by Daniel Lemire, computer science professor at the University of Quebec: https://lemire.me/blog/2021/03/24/counting-cycles-and-instructions-on-the-apple-m1-processor/; Github: https://github.com/lemire/Code-used-on-Daniel-Lemire-s-blog/tree/master/2021/03/24. See also the work by Dougall Johnson: https://github.com/dougallj/applecpu
(5) https://hackerone.com/directory/programs?active_only=false&order_direction=ASC&order_field=started_accepting_at
(6) https://joinup.ec.europa.eu/sites/default/files/custom-page/attachment/2020-06/EU-FOSSA%202%20-%20D3.1%20Bug%20Bounties%20Summary%20Final_0.pdf
(7) https://support.apple.com/en-gb/guide/security/welcome/web
(8) https://support.apple.com/en-us/HT210897. The links on this page lead to the wrong certificates on the bsi page. The correct links can be found here: https://verifeyedirectory.bsigroup.com, eg: https://verifeyedirectory.bsigroup.com/Profile/APPLE_-0047421507-006
(9) I. Goldberg and D. Wagner. Randomness and the Netscape Browser. Dr. Dobbs Journal, January 1996.
(10) https://link.springer.com/content/pdf/10.1007%2F3-540-36400-5_32.pdf
(11) https://csrc.nist.gov/projects/random-bit-generation. The C# source code can be found here. Question: Is there anyone interested in porting NIST’s C# libraries to Swift? Please contact me at info@rispens.de
(12) This interesting investigative reporting in the New York Times shows how Apple is participating in and assisting with Chinese censorship and surveillance — for Chinese citizens, Apple’s Platform security is rendered nearly worthless, because Apple agreed on handing private encryption keys to the Chinese government.
(13) Facebook dominates Irish data protection investigations: https://www.dataprotection.ie/en/news-media/press-releases/data-protection-commission-publishes-2020-annual-report
(14) This is a great book on the subject and we need much more thinking like this: Cary Coglianese, “Achieving Regulatory Excellence”, Brookings Institution Press. 2017.