The Seven Rules of Success in Artificial Intelligence

Dr. Sybe Izaak Rispens
June 9, 2023


You need best-in-class natural intelligence to get the most out of artificial intelligence.

© wernerwerke

On March 20, Samsung Electronics Co., Ltd. reported a significant information leakage incident involving ChatGPT, the generative AI tool that has recently become immensely popular¹. Just a few business days after the electronics company, headquartered in Seoul, South Korea, had allowed ChatGPT in its semiconductor business unit, engineers inadvertently started leaking highly confidential data to the AI chatbot. The leaked information included proprietary source code (entered into ChatGPT to find a solution for an issue in the semiconductor equipment measurement database), detailed information about new hardware such as semiconductor equipment measurements, and internal meeting notes.

Leaking data to the infrastructure of OpenAI, the company that runs ChatGPT, means that the data is processed and stored in a black box. The company is secretive about anything related to the inner workings of its generative AI bots and about its organizational setup. For instance, OpenAI has yet to publicly disclose whether it has received SOC 2, ISO 27001, or any other internationally recognized certification that demonstrates a company's commitment to information security and data privacy. We do not know the details of the company's security team or how regularly it evaluates and updates its security practices to ensure its systems and data remain secure. We also do not know if and how OpenAI works with external security experts and consultants to ensure that it follows best practices and stays ahead of emerging security threats, for instance through regular security audits or penetration testing. The company's algorithms for generating and guardrailing results, as well as the datasets used for training, are secret.²

This is my reconstruction of how data is processed by ChatGPT, based on the company's published documents (the privacy policy, data usage policy, and terms of use)³ and on interactions with the bot itself about its data usage and data storage processes⁴.

When I send a request to OpenAI's servers, for example asking it to explain its data usage policy, the request is split into two parts: metadata and input text.

Metadata includes log data, information my browser automatically sends whenever I use any website, and possibly additional information that ChatGPT's website may request from my browser upon initiating a session, for instance more details about my browser, device, or geolocation. Metadata includes at least my Internet Protocol address, browser type and settings, and the date and time of my request. This is Personally Identifiable Information (PII) because it can identify and locate me as an individual. PII is "only stored for the duration of a user's session." However, in some cases, OpenAI may decide to store this information beyond the length of the session. This is done if the company "is instructed to do so by a user or authorized party." An authorized party could include a government agency or other legal authority with the legal right to request and access such information. It is unknown who the "user" is that may instruct the bot to store PII beyond the session length. Is this an employee of OpenAI, for instance an engineer with administrator access? Would this request need formal approval? Would such a request be validated by a process that requires a four-eyes principle? Is there any documentation on whose session data is stored, for how long, and why? Or is the "user" me? Can I instruct the system to store the data from my own sessions longer? And if so, can I define for how long that data is stored? Is there a watertight process that ensures this data is eventually deleted?
If OpenAI decides to store session data beyond the length of a session, I am not informed about it. I cannot get insight into how this data is stored, whether and how it is encrypted, and who has access to it.
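To make the split concrete, here is a minimal, hypothetical sketch of what such a request might look like once separated into metadata and input text. This is my own illustration; the field names and structure are assumptions, not OpenAI's actual request schema.

```python
# Hypothetical illustration only: field names and structure are my own
# reconstruction, not OpenAI's actual request schema.
from datetime import datetime, timezone

request = {
    "metadata": {                      # PII: can identify and locate me
        "ip_address": "203.0.113.42",  # example address from the RFC 5737 range
        "browser": "Firefox 114.0",
        "device": "MacBook Pro",
        "geolocation": "Berlin, DE",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    },
    "input_text": "Please explain your data usage policy.",  # the prompt itself
}

# Per the stated policy, metadata should only live as long as the session,
# while input text is stored and may be used for model training.
print(request["metadata"]["timestamp"])
print(request["input_text"])
```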

I have yet to obtain more details on OpenAI's data retention policy for session data⁵. The bot itself is explicitly instructed by OpenAI not to reveal anything about it. It was also not possible for me to gain insight into OpenAI's internal processes for monitoring and validating how well its privacy or data usage policy is implemented. It is likewise impossible to assess to what extent OpenAI's policy landscape is connected to daily practice within the company, or whether it is a "compliance fairy tale."

Input text and session data
Now for the input text. This can be, for instance, my request for OpenAI's data usage policy or the Samsung engineers' requests for improving proprietary source code. This data is processed and stored on OpenAI's servers, which are located somewhere in the US. By default, all input text is stored to create the impression of a conversation within a session. For example, if a user asks a question and provides additional information in a subsequent message, the model uses the previous messages to generate a more informed response. The model does not carry this context over between sessions: the next time the user interacts with the model, it will not consider data shared in previous interactions.

The fact that the model does not consider this data between sessions does not mean that the data is deleted. On the contrary, it is stored indefinitely and used to improve the performance of the AI model⁶. So all data users are willing to share with OpenAI is used as training data for the model. From the few remarks about it in OpenAI's policies, the input text is apparently scanned and filtered for generic information to "make our models more helpful for people." We do not know anything about this filtering process or how it can be tricked into unwanted behavior. (It may be a simple set of regular expressions that filter out PII, or it may be a more sophisticated guardrailing system.)
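To show how thin such a filter could be, here is a minimal sketch of a regex-based PII scrubber. This is purely illustrative; we do not know what OpenAI actually does, and a real filter would need to cover far more cases.

```python
import re

# Purely illustrative patterns; a real PII filter would need far more
# (names, addresses, locale-specific formats, and so on).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ipv4":  re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def scrub(text: str) -> str:
    """Replace anything that looks like PII with a placeholder tag."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(scrub("Contact me at jane.doe@example.com or +49 30 1234567."))
# -> "Contact me at [EMAIL] or [PHONE]."
```

A filter like this is easy to trick (write the e-mail address as "jane dot doe at example dot com" and it slips through), which is exactly why the opacity of the real mechanism matters.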

Users who do not want their input data stored forever on OpenAI's infrastructure can opt out. Opting out prevents OpenAI from using interactions for model training, and the company claims that all "chats will be deleted from our systems within 30 days"⁷. Data deletion is permanent; that is what the policy states. In practice, there can be a difference between "permanently deleted from a system" and "permanently deleted from a system and all its backups"⁸. Only in theory is there no difference between theory and practice.

Opt out — opt in

It is not possible for users to distinguish between keeping a history of their interactions with ChatGPT for private purposes, which may be useful, and taking part in generic model training. To keep your interaction history, you must always participate in model training. Taking part in model training means having your data stored forever on servers run by OpenAI and making it part of a "training process" whose details are unknown. A yet-to-be-released offering called "ChatGPT Business" will opt end users out of model training by default while keeping a history of conversations.

This opt-out should, of course, be an opt-in. Moreover, opting in to something that may be useful for individual users or organizations (keeping an overview of previous interactions with ChatGPT, not to be shared with third parties) should never be linked to something that may be useful for an organization that wants to train its models.
The impact of a conversation history, which (in principle) can only be seen by me, is clear, whereas the implications of training a large language model are entirely opaque. Something meaningful and concrete is intertwined here with something highly unclear, unspecific, and ambiguous.

Consent is a fundamental element of the EU General Data Protection Regulation, and it is not acceptable that something potentially helpful and with a small impact is tied to something unclear and with a potentially massive impact, such as model training.

Rule #1: Be an AI-Ethics Lighthouse

With the current lightning-fast developments in generative AI, it is essential to be an ethics lighthouse. Organizations need to be extremely clear on the principles that guide actions in developing and using artificial intelligence technologies. If you fail to outline expected behavior and responsibilities, the sheer speed of developments will lead to uncontrollable behavior and, most likely, to blunders.

Initiating a responsible AI program begins with designating roles to manage the ethical aspects of AI. So far, most large organizations with AI products and services in their portfolio have had AI ethics programs. However, with AI products becoming wildly popular overnight, organizations outside IT development, or too small for ethics committees, must now address ethical questions as well. Are the algorithms that we use for decision-making really neutral, or do they replicate and embed the biases that already exist in our society? How do we know that our AI products are accurate, fair, unbiased, and free of discrimination?

So far, even large companies have struggled with such questions. For example, a few years ago, Google abruptly dismissed AI research scientist Timnit Gebru⁹ after she refused to retract an article in which she argued that social bias in large language models was a significant risk¹⁰. Facebook's AI program has also faced ethical scrutiny over concerns about privacy, transparency, and potential bias in its AI systems for content moderation. Its use of AI in targeted advertising has raised privacy issues as well. The lack of transparency in Facebook's AI algorithms and the inability to review or challenge AI decisions led a whistleblower to ring the alarm bells¹¹.

Ethics Frameworks

So what should we do about these ethical pitfalls and controversies? The first step is to explicitly set the strategic intentions for responsible AI. The entire workforce needs clear guidance in understanding the impacts of AI products and services. This means, for example, that your code of conduct must be updated to include fair usage of AI — no need to reinvent the wheel here. There are mature ethical frameworks ready for use. Such frameworks address at least these six topics¹²:
1. Make AI fair and impartial. Assess whether AI systems include internal and external checks to help enable equitable application across all participants.
2. Ensure transparency and explainability. Help stakeholders understand how their data can be used and how AI systems make decisions. Algorithms, attributes, and correlations shall be open to inspection.
3. Be responsible and accountable. Put an organizational structure and policies in place that can help clearly determine who is responsible for the output of AI-system decisions.
4. Ensure safety and security. Shield AI systems from possible dangers, encompassing cyber threats that could result in physical and digital damage.
5. Be respectful of privacy. Respect data privacy and avoid using AI to leverage customer data beyond its intended and stated use. Allow customers to opt-in and out of sharing their data.
6. Be robust and reliable. Confirm that AI systems can learn from humans and other systems and produce consistent and reliable outputs.

Make sure not just to state what the expected behavior is but also explain why. The why is important because guarding AI is not about mindlessly applying rules. It is about understanding why something should be done, by whom it should be done, and the nature, implications, and urgency of the topics.

None of these ethical topics is easy or without controversy. For example, if organizations really cared about AI being transparent and explainable, as the second principle states, then not a single company on this planet would even consider using a proprietary solution, because the vendors of these solutions are completely secretive about their models, processes, and training material. However, in just a few months, thousands of organizations have started using ChatGPT. This includes regulated entities, like banks, that are currently rushing out cutting-edge AI products to provide better customer service and meet compliance requirements¹³.
The collective AI mania of our time drives many organizations into an uncontrollable impulse to implement AI at whatever cost. It is all about disrupting industry practices and then bragging about short-term objectives.
This makes responsible AI a challenge. You do not need to have all the answers all the time. Nevertheless, you should show your stakeholders that you care about AI stewardship and have the intention to do what is right.

Lessons from cyber history

There is a relevant parallel from the history of cybersecurity for AI here. After almost half a century of security regulations, network protocols, encryption, and the like, it is increasingly becoming clear that it was a fundamental mistake to focus on technology and assume it is neutral. It was a fallacy to think that attacks on the internet would be mainly due to bad actors.

Security engineers, and the bodies in which they congregate, have been slow to recognize that much more emphasis on standards for ethical behavior is necessary. In the first half-century of the internet, people thought that regulation was mainly about technology and standards. Looking back on cybersecurity, however, what has been overlooked is that surveillance capitalism needs as many mitigating actions as bad actors do. Many of the problems we now have with confidentiality and privacy fall outside the scope of technical regulations and protocols, yet they are real issues for the internet.¹⁴
The same goes for AI — regulations are not just technical but need to include a holistic view of what type of technology usage is acceptable.

Rule #2: Embrace open source

The information security community has stressed the importance of transparency for decades now. Transparency fosters openness, which enables a large community to do things like threat modeling and deep code inspections. The more people and expertise involved, the higher the chance of finding security vulnerabilities in software, protocols, and algorithms.
History has shown that this works: open source fosters innovation and results in more robust and secure solutions. It is not entirely watertight, but it still does a significantly better job than most proprietary software and closed communities. There are spectacular historical examples of security that failed because of secrecy. For example, law enforcement has easily cracked the encryption of some of the most feared international criminal networks because the crooks needed to keep their development processes confidential¹⁵.

Given the overwhelming evidence of the benefits of transparency for security, we should apply the lessons from cybersecurity to the AI domain. Organizations should create a governing framework for their AI that prescribes and incentivizes the use of open source. Regulators should step in decisively and demand that all generative AI models be completely transparent. This includes complete openness on methodology. There should be full transparency on algorithms. All texts used to train models should be publicly available. Techniques and processes used for guardrailing should be completely open, so that researchers or auditors can look at their mechanisms, see how they work, and test how robust they are against model drift, fraud, or malicious actions such as poisoning of training data.

Some recent studies argue that open-source AI lags behind. Measurements of large language models' reasoning performance show that open-source models often mimic proprietary solutions and fail at the complex reasoning found in commercial offerings¹⁶. The biggest challenge for the open-source community is reinforcement learning from human feedback (RLHF), which involves managing feedback from many human workers and can introduce bias through that feedback. Good-quality training data is also needed. All of this requires vast computational, financial, and organizational resources to deal with the complexities of real-world environments. In the end, the effort needs to be translated into models that reflect human intentions and comply with ethical norms.
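To illustrate why RLHF is so resource-hungry: at its core sits a reward model trained on pairwise human preferences, typically with a Bradley-Terry-style loss. The sketch below is my own simplification of that loss, not any particular project's code.

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry-style loss: small when the reward model scores the
    human-preferred answer higher than the rejected one."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# One human comparison: annotators preferred answer A over answer B.
print(preference_loss(reward_chosen=2.1, reward_rejected=0.3))  # small loss, good
print(preference_loss(reward_chosen=0.3, reward_rejected=2.1))  # large loss, bad
```

The expensive part is not this formula but collecting millions of such human comparisons and then running reinforcement learning against the resulting reward model.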

This is indeed a challenge to the open-source community, and it is still unclear whether it can find solutions or workarounds. However, it may be worth the wait for many organizations, especially in regulated environments. In practical settings, working with open-source models that are slightly less good at reasoning tasks may not be such a big deal. The open-source community may not be as good as commercial projects in managing human input to train models, yet the guaranteed full transparency may make up for this. Organizations may choose to build guardrails on top of open-source models that are fully transparent, verifiable and documented.

Also, in regulated environments, like finance, critical infrastructures, or healthcare, organizations should be prepared for regulators to step in soon.

So, why waste time on proprietary solutions, even if they are now more convenient and offer better quality out of the box, if it is clear that these will be banned or severely regulated? Yes, regulation will initially be clumsy and slow to take effect. However, the outcome will be that regulated organizations will have to drop or massively guardrail the use of proprietary generative AI tools. Why risk high fines, data leakage, or reputational damages?

Effective industry leaders must make the only wise decision possible here: embrace open source. All organizations should stop using proprietary large language models and use transparent ones that ensure reproducibility and respect data privacy. This is what Samsung did after the ChatGPT data leakage incident: the company quickly banned all commercial large language models and offered self-hosted services on its own infrastructure instead. It is also why Apple announced on May 18, 2023, that it would ban popular generative AI tools such as ChatGPT and Microsoft's Copilot¹⁷. Amazon has urged its engineers to use its internal AI tool for coding assistance. JPMorgan Chase and Verizon have barred use as well.

I expect many organizations to do the same in the coming weeks and months. Organizations that take the first rule seriously should also consider substantially funding the open-source AI community or otherwise joining forces. Massive collaborative projects are needed. AI projects need as much industry support as possible to produce free, transparent, powerful, and trustworthy open-source AI models. Organizations of all types and sizes should collaborate with the open-source community and contribute to setting up large, international AI foundries.
Responsible AI is one of the biggest challenges of the century. Moreover, it is time for everyone to realize that responsible usage is only possible when we firmly decide to keep all our large language models and generative AI systems open and transparent.

The developments have been incredibly fast-paced in recent weeks. In early March, the open-source community received a significant boost when Meta's large language model, a highly proficient foundation model, was unintentionally made public. The community quickly grasped the monumental value of the tool it had received.

What followed was a historic surge of creativity: a global frenzy in the open-source community, in which ground-breaking advances were made day after day.
Just a few examples. Meta made its language model OPT-175B freely available¹⁸. In March, researchers presented BLOOM, an open-source language model with an impressive 176 billion parameters, enabling it to generate text in 46 natural languages and 13 programming languages¹⁹. The development of BLOOM involved over 1,000 researchers from 70+ countries and 250+ institutions, and the whole project is available for full download and examination of its models, methodology, training datasets, and complete code. Another open-source project, called Dolly, was first released in March, and its most recent version uses a model with 12 billion parameters. Its fine-tuning process, including crowd-sourced reinforcement learning from human feedback, has given the software capabilities similar to OpenAI's ChatGPT. The complete dataset, along with Dolly's model weights and training code, has been made available as open source under a Creative Commons license, allowing anyone to use, alter, or build upon it for various purposes, including commercial applications.
In the meantime, organizations should embrace open-source AI for hands-on applications. LLM-based AI models may have an effect on our society as large as the industrial revolution.
Barely a few months have passed, and we see variations of open-source large language models featuring instruction tuning, quality enhancements, human evaluations, multimodality, reinforcement learning from human feedback, quantization, and much more.
Furthermore, the community has managed to mitigate the problem of scaling. The resources needed to engage in model training and experimentation have been drastically reduced. One person can now run a large language model on a laptop and fine-tune it in an evening.
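As a rough sketch of how low the barrier has become: loading a small open model with the Hugging Face transformers library takes only a few lines. The model name below is just an example (any small open instruction-tuned model would do), and downloading the weights still requires disk space and patience.

```python
# pip install transformers torch
from transformers import pipeline

# Example model name only; substitute any small open instruction-tuned model.
generator = pipeline("text-generation", model="databricks/dolly-v2-3b")

result = generator(
    "Explain in two sentences why open-source language models matter.",
    max_new_tokens=80,
)
print(result[0]["generated_text"])
```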

We now have high-quality, applicable, deployable, free and open-source large language models available to everyone. These models provide functionality similar to commercial services such as ChatGPT or Google's Bard. In fact, on May 4, a document from an anonymous researcher at Google leaked on the web stating precisely this: open-source alternatives will soon surpass proprietary AI services, and there is no added value from commercial products anymore²⁰. The only way to avoid the negative effects of the closed models is therefore to develop open-source AI models that are openly available to the public. Use them for experiments. Invest the resources to be able to use open-source models in production. If your organization does not yet have a policy for free and open-source software (FOSS), this is the time to create one, and make sure it covers the use of open-source AI models.
Open source is a complete blind spot for many organizations. Even if you have no prior experience with open source, this is the time to get started. Embrace open source, and it will give you a huge strategic advantage, not just in AI.

Rule #3: Know your risks

Even if you want to embrace the once-in-a-lifetime opportunities of AI, you must consider the basics, such as understanding and documenting your risks. This means setting up policies and procedures that specifically target managing AI risks.

AI risks come on top of regular information security risks. They are relatively new and rapidly developing. No standards exist yet to systematically assess and compare attacks on AI systems²¹. However, in 2024, interesting research was published by researchers from Teesside University in the UK, introducing a framework that combines two existing threat modeling frameworks: STRIDE (an acronym for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege) and DREAD (an acronym for Damage, Reproducibility, Exploitability, Affected Users, and Discoverability)²¹ᵇ. Some cybersecurity organizations, such as the Dutch National Cybersecurity Alliance, actively track the evolving attack landscape and keep a regularly updated list of new attack methodologies²². This provides essential insight into how to protect AI systems. Expect more to happen in this field.
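As a simple illustration of how DREAD-style scoring can be applied to an AI threat, here is a minimal sketch of my own, not the framework from the cited paper. The example threat and its ratings are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class DreadScore:
    """Each factor rated 1 (low) to 10 (high)."""
    damage: int
    reproducibility: int
    exploitability: int
    affected_users: int
    discoverability: int

    def risk(self) -> float:
        # Classic DREAD: the unweighted average of the five factors.
        return (self.damage + self.reproducibility + self.exploitability
                + self.affected_users + self.discoverability) / 5

# Hypothetical example: prompt injection against a customer-facing chatbot.
prompt_injection = DreadScore(damage=6, reproducibility=9, exploitability=8,
                              affected_users=7, discoverability=9)
print(f"Prompt injection risk score: {prompt_injection.risk():.1f} / 10")
```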

There are four types of new or enhanced risks: data risk, model and bias risk, prompt or input risk, and user risk.

Data risk
The first is data risk: the potential dangers and challenges associated with the data used to train generative AI models. These risks can manifest in various ways, such as error propagation, injection of malicious data, leakage of intellectual property, or legal issues due to misleading or harmful content resulting from low-quality input data. Error propagation in particular is tricky to detect and manage, because errors or inaccuracies in the input data are perpetuated and amplified throughout the training process, leading to flawed outputs from the AI model.

Intellectual property and contractual issues are not new risks as such, but they are amplified by AI and the large datasets involved in training AI models: if you use data for training purposes for which you do not have usage rights, you will, sooner or later, get into trouble. It is also possible to train attack models that use statistical methods to trick target AI models into leaking their training data²³.
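A heavily simplified sketch of the idea behind such membership inference attacks follows. The cited work²³ trains shadow models; this toy version only thresholds the target model's confidence, which exploits the same statistical signal: models tend to be more confident on data they were trained on.

```python
# pip install scikit-learn numpy
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, X_outside, y_member, y_outside = train_test_split(
    X, y, test_size=0.5, random_state=0)

# An overfitted target model "memorizes" its training (member) data.
target = RandomForestClassifier(n_estimators=50, max_depth=None, random_state=0)
target.fit(X_member, y_member)

def confidence(model, X):
    """Highest predicted class probability per sample."""
    return model.predict_proba(X).max(axis=1)

# Members tend to receive higher confidence than non-members,
# which is the signal a membership inference attack exploits.
print("mean confidence on members:    ", confidence(target, X_member).mean())
print("mean confidence on non-members:", confidence(target, X_outside).mean())

# Toy attack: guess "member" whenever confidence exceeds a threshold.
threshold = 0.9
guess_member = confidence(target, X_outside) > threshold
print("false-positive rate of the toy attack:", guess_member.mean())
```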

Model and bias risk
The second is model and bias risk. This is where ethical considerations come in: developing responsible language models that are not discriminatory, unfair, or plain wrong. This risk is also hard to detect, and the best way to mitigate it is to shift left as far as possible. Managing model risk comes down to organizational values and the intention of being an AI-ethics lighthouse. It is about integrity and fairness as an organization, and about making the fundamental decision to walk the talk and invest the necessary resources in managing this risk. This means taking supply-chain risk into account, making models robust against attacks, and going all the way to ensure that your models are correct and can be audited. Not because this is good for the balance sheet, but because it is the right thing to do.

Models are also susceptible to reverse engineering, in which attackers can reconstruct the model even if they do not have direct access to it. Ethics is an important factor here too, because the question is: do you want to protect intellectual property or contribute to the open-source AI community?

Prompt or input risk
The third is prompt or input risk. If you write prompts or questions in an unsophisticated, misleading, or inaccurate way, the AI model will simply produce misleading, inaccurate, or even harmful responses. Users should learn how to write well-formed, contextually appropriate inputs, free from potential biases or harmful intent. Good old-fashioned awareness and quality management processes may help here.

Adversaries can also take advantage of prompt risk. An input attack is designed to manipulate the input to an AI system so that the system performs incorrectly or not at all. Input attacks do not change anything in the AI system itself but provide certain types of input that cause the system to make mistakes. The changes to the input are often minimal, and the attacks are in most cases challenging to detect. The consequences may be manageable for a chatbot, but think of an attacker who uses split-second phantom images to fool the autopilot of a car²⁴.
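A minimal numeric illustration of the principle: a toy fast-gradient-style perturbation against a linear classifier (my own sketch, not the phantom-image attack from the citation). It shows that a small, uniform nudge to every input feature can flip the model's prediction.

```python
# pip install scikit-learn numpy
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=1)
clf = LogisticRegression().fit(X, y)

w, b = clf.coef_[0], clf.intercept_[0]
scores = X @ w + b
correct = clf.predict(X) == y

# Take a correctly classified sample that sits close to the decision boundary.
i = int(np.argmin(np.where(correct, np.abs(scores), np.inf)))
x, label, score = X[i].copy(), y[i], scores[i]
print("original prediction:", clf.predict([x])[0], "| true label:", label)

# Nudge every feature by the same small amount in the direction that
# pushes the score across the boundary -- the essence of an input attack.
epsilon = abs(score) / np.abs(w).sum() * 1.05
direction = -np.sign(w) if label == 1 else np.sign(w)
x_adv = x + epsilon * direction

print("perturbed prediction:", clf.predict([x_adv])[0])
print(f"change applied to each feature: {epsilon:.4f}")
```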

User risk
Lastly, user risk. AI puts humans in an awkward position. Models are opaque. Proprietary systems are black boxes. There is often no transparency about how models are trained. You never know how input may end up in the output. Systems change dynamically as models adapt, evolve, and alter their behavior. Even a system's developers may struggle to explain the model's specific decisions. Large language models are good at giving answers that sound just about right, but the models do not know the correct answer, and they cannot make their decision-making transparent. So, by design, humans are in a bad position.

Users may unknowingly contribute to creating and disseminating misinformation and harmful content. For example, they may unwittingly present AI-generated "hallucinations" as fact. "Hallucination" is an interesting word here, because it seems so typical of the current AI hype: the stupid output of dumb AI models, fundamentally inherent in their statistical methodology, is referred to with an anthropomorphic phrase. Large language models come up with output that seems right but may be full of factual errors or even completely nonsensical. Auditing AI systems is highly problematic because the systems evolve dynamically, and there are currently no regulations that prevent organizations from protecting their AI algorithms as proprietary secrets.

So, it is hard to guardrail this type of risk. Training people can improve input quality and ensure that humans write well-formed prompts that are contextually appropriate and free from potential biases or harmful intent. Users can be taught to add specific commands to the input prompt, such as “Make sure you do not make anything up. Base your answer on validated scientific research. Provide all your sources.” Australia recently set up a program that is intended to train humans to become more resilient to digital disinformation and misinformation²⁵.
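In practice, such instructions can be baked into a reusable prompt template so that every user request is automatically wrapped in grounding constraints. The sketch below is minimal, and the wording of the guard instructions is my own example rather than a proven recipe.

```python
GUARDED_TEMPLATE = """You are a careful assistant.
Rules:
- Do not make anything up. If you are not sure, say so.
- Base your answer on validated scientific research where possible.
- List all sources you relied on at the end of the answer.

User question:
{question}
"""

def build_prompt(question: str) -> str:
    """Wrap a raw user question in standing grounding instructions."""
    return GUARDED_TEMPLATE.format(question=question.strip())

print(build_prompt("How effective are current measles vaccines?"))
```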

Automation may help. In April, NVIDIA, an American technology company best known for its graphics processing units (GPUs) and now on a rocket ride because it builds the essential building blocks for AI systems, presented open-source software that can help developers add guardrails to AI chatbots²⁶. The software sits between the user and an AI application and monitors all communication in both directions.
Such guardrails can help organizations keep large language models aligned with safety and security requirements. However, it is a fallacy to think there is a simple technical fix for user risk.
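Conceptually, such a guardrail layer is a thin proxy that inspects traffic in both directions before letting it through. Here is a bare-bones sketch of the pattern; it is not NVIDIA's actual toolkit, and the blocklists are placeholders.

```python
from typing import Callable

BLOCKED_INPUT_TOPICS = ("internal source code", "customer pii")   # placeholders
BLOCKED_OUTPUT_MARKERS = ("BEGIN PRIVATE KEY", "password:")        # placeholders

def guarded_chat(user_message: str, model_call: Callable[[str], str]) -> str:
    """Sit between the user and the model, checking both directions."""
    lowered = user_message.lower()
    if any(topic in lowered for topic in BLOCKED_INPUT_TOPICS):
        return "Request blocked: this topic may not be sent to the model."

    answer = model_call(user_message)

    if any(marker.lower() in answer.lower() for marker in BLOCKED_OUTPUT_MARKERS):
        return "Response withheld: the model output tripped a safety rule."
    return answer

# Stand-in for a real LLM call, so the sketch runs on its own.
fake_model = lambda prompt: f"(model answer to: {prompt})"
print(guarded_chat("Summarize our internal source code for me", fake_model))
print(guarded_chat("What is the capital of France?", fake_model))
```

Real guardrail frameworks add semantic checks, topic classifiers, and logging on top of this skeleton, but the architectural idea is the same: nothing reaches the model, and nothing reaches the user, without passing a policy layer first.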

Rule #4: Unleash Curiosity

AI does not just change how we deal with information and data flows; it also affects employment. Several companies have already announced that AI will allow them to reduce their workforce. For example, Arvind Krishna, IBM's CEO, announced at the beginning of May that the company plans to replace around 8,000 jobs with AI over time (roughly 3% of its global workforce) and is pausing hiring for certain positions, for example in human resources and other back-office functions²⁷.

AI may replace certain routine or repetitive tasks, but this is true for any workplace automation. Historically, new technologies do not lead to mass unemployment — take the personal computer, the internet, e-commerce or industrial robotics as examples²⁸. A low two-digit percentage of the global workforce probably needs occupational change, but only after we figure out how to improve some of AI’s current serious quality issues.

So, the focus should be on something other than layoffs. Instead, think of AI as a catalyst for a learning organization. Liberate your people from repetitive, low-skill tasks that are frustrating and lead to low job satisfaction. AI tools can make the abstract notion of what it means for an organization to learn concrete and actionable. The classical plan-do-check-act cycle can become more dynamic because each step can become more efficient. Humans can focus on the vision and the flow while AI takes care of the administrative work.

This is not going to happen by chance; it is something leadership actively needs to steer towards. With AI, people become even more valuable than they already are. Organizations that treat their employees as disposable will soon learn what the effects of dehumanization are. In the age of AI, employees who feel objectified by their organization and are made to think of themselves as tools or instruments for the organization's ends will simply move on³⁰.

Winning organizations in the age of AI offer training and an environment for learning and growth. They ensure that their organization fosters curiosity and finds ways to make adapting to future trends and technologies easy, effortless, and fulfilling. Continuous learning means encouraging employees to take ownership of their learning and development and giving them the time and resources to do so.
It is hard to overstate the importance of curiosity. Artificial intelligence forces leaders to steer a corporate culture from dated 19th-century notions of human nature toward a modern understanding of what it means to be human. This transition demands boldness and perseverance. Window dressing will not work. You need to accept the necessity of continuous learning. You need to move from a testosterone-fuelled, “know it all” attitude towards an oxytocin-inspired, “learn it all” culture³¹.

Rule #5: Be religious about inventory

In order to interact safely with AI systems, you must know what type of data you have and who has access to it, when, and where. This means that accountabilities for data-related decisions need to be clear. You should have data-related roles and responsibilities, and users should be trained to know what can happen when data meets AI.

Most organizations have only a vague understanding of what type of information they have. Inventories are often not granular enough to say anything about data quality. Is it accurate? Is it reliable? Is it timely? Has it been validated and cleaned up lately? Most of the time, these are murky waters.

Also, it is often unknown who has access to what, and when. Do people in department A have access to personally identifiable information (PII)? Do employees in customer service have access to intellectual property? Do all engineers have access to any or all code repositories? Such basic information is often unavailable and poorly documented.

With AI, data protection risks have become magnified. Anyone with an internet connection can take advantage of its capabilities. Results are exceptional in speed and agility, outperforming manual search, querying, indexing, or writing. The refined and polished results in a dialogue-style interface appeal to many people in diverse positions and roles, from management to customer service employees. AI products are also quickly being integrated into widespread applications like Microsoft Office 365 or GitHub.

This means that with AI, data can easily be shared anywhere, sometimes entirely without the user realizing it. The type of data that may be shared also has an ultra-wide scope, ranging from sensitive strategic corporate information, financial statements, personally identifiable information, intellectual property, and source code to critical incident reports, trade secrets, and customer data.

People with little technical expertise can use AI tools, so they may mishandle sensitive information or unknowingly expose it to unauthorized access. Engineers, too, may leak data such as secret keys or user credentials, for example when using AI code-generation tools such as Microsoft's Copilot.

Users may also trust results from AI tools that are biased or plain wrong. For example, inexperienced users may struggle to identify ethical issues with AI-generated content, leading to unintended discrimination or otherwise harmful outcomes. Engineers may face quality issues, for example when coders use AI-generated code containing security vulnerabilities. Because of the black-box nature of code generation models, there is no easy way to identify such security issues. Researchers are looking into automated code quality controls that allow security vulnerabilities from AI sources to be identified more easily³².
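A very small taste of what such automated checks can look like: scanning AI-generated code for obvious red flags, such as hard-coded credentials or dangerous calls, before it is accepted. The patterns are illustrative only and nowhere near a full static analyzer.

```python
import re

# Illustrative red-flag patterns only; real tooling (static analysis,
# secret scanners) covers far more cases.
RED_FLAGS = {
    "hard-coded credential": re.compile(r"(password|api_key|secret)\s*=\s*['\"]\w+"),
    "dangerous eval/exec":   re.compile(r"\b(eval|exec)\s*\("),
    "private key material":  re.compile(r"BEGIN (RSA |EC )?PRIVATE KEY"),
}

def review_generated_code(code: str) -> list[str]:
    """Return a list of findings for a snippet of AI-generated code."""
    findings = []
    for label, pattern in RED_FLAGS.items():
        if pattern.search(code):
            findings.append(label)
    return findings

snippet = 'api_key = "sk_live_123456"\nresult = eval(user_input)\n'
print(review_generated_code(snippet))
# -> ['hard-coded credential', 'dangerous eval/exec']
```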
The most important step to mitigating this risk is to know what might be shared or used, by whom, where, and when. Inventories should not just keep track of what type of data is accessible to whom and when but also where AI results may end up.
So, proper system and data inventories are essential prerequisites for using AI. Data sources and underlying systems should be uniquely identifiable. All data should be properly classified according to its type. It should be known where it is located and who has access, when, and why. It should also be documented who is responsible for the data and what quality control of that data looks like. Ultimately, to protect data in the age of AI, it is essential to have a good understanding of the data landscape and its attack surface. This means inventories should cover the whole data lifecycle, including how data is created, processed, stored, archived, shared, quality controlled, and deleted.
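What such an inventory entry might capture, sketched as a simple record. The field choices are my own illustration, not a standard schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DataAsset:
    """One entry in a data inventory: ownership, access, and lifecycle."""
    name: str
    classification: str          # e.g. "public", "internal", "confidential", "pii"
    owner: str                   # accountable role, not an individual's name
    location: str                # system or repository where the data lives
    allowed_roles: list[str] = field(default_factory=list)
    ai_exposure: str = "none"    # may this data reach an AI tool, and which one?
    last_quality_review: date | None = None
    retention: str = "unspecified"

inventory = [
    DataAsset(
        name="customer support transcripts",
        classification="pii",
        owner="Head of Customer Service",
        location="crm-prod",
        allowed_roles=["support-agent", "dpo"],
        ai_exposure="summarization bot (self-hosted)",
        last_quality_review=date(2023, 4, 12),
        retention="24 months",
    ),
]

# Which assets are allowed to touch any AI system at all?
print([a.name for a in inventory if a.ai_exposure != "none"])
```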

Rule #6: Create a Serendipity Mindset

In the fast-changing world of AI, many of the emerging topics are so complex, powerful, and momentous that much of the field's future will be driven by the unexpected. The recent breakthroughs in large language models have shown how unpredictable and mind-bogglingly fast developments can be. Much of the current momentum has been missed by large corporations such as Google, Facebook, and Apple, and by major research institutes. For example, the UK's flagship institute for artificial intelligence, the Alan Turing Institute, has been, at best, irrelevant to the current developments³³. To avoid irrelevance, you want to cultivate a serendipity mindset.

Serendipity is the art of making an unsought finding: an unexpected solution to a problem different from the one you wanted to solve in the first place. It is expecting the unexpected. To stay on top of things in artificial intelligence, organizations should embrace serendipity as a philosophy, a method, and a skillset. Serendipity is not just "good luck"; it can be learned, and there is sound scientific research on how to do that³⁴. Think of serendipity as something that cannot be planned for in terms of ISO standards, yet as something that may be invoked by smart practices³⁵.

For example, incentivize serendipity spotting. In weekly meetings, ask, “Did anyone come across some great, unexpected, cool new AI feature?”. The past months and weeks have been full of discoveries, as people have been using generative AI for the most incredible things. Make people see the potential of such AI-fueled trajectories.

Facilitate psychological safety. Remind staff that AI is complex and their voice is critical in getting it right. It is not all AI sunshine. There are significant risks and ethical concerns. People should feel safe to express such concerns.

Experiment with "project funerals," for example for something you wanted to solve with ChatGPT, only to realize that the answers look nice yet are wrong half of the time, or that the risk of data leakage would be too high. Install "random AI coffee meetups" or "random AI lunches," pairing people from teams and departments that would normally not talk to each other. You will be surprised by what people have been thinking about or tinkering with.

Invite a group of young people to shadow your activities for a day or two, and afterward, ask them what they observed and how they think AI can help.

Celebrate the informal power structure — who inside or outside your organization could help push this vague idea for improvement forward using AI? Meet these people for a relaxed conversation and explore what could be possible. Then identify three people who could stand in the way and have relaxed conversations with them³⁶.

Rule #7: Measure for Excellence

Monitoring performance in the age of AI requires critical reflection on what it means to measure performance data for humans and systems. For humans: is performance management geared toward accountability? And does that type of accountability take on a punitive dimension? When employees suspect that performance management will be used to belittle and punish them ("ChatGPT would do this better…"), they no longer treat it as a legitimate management tool but as something to be gamed and evaded. This is simply Campbell's law. AI systems will give people the motive and the means to distort and corrupt performance measurement in unprecedented ways.

For systems, most existing metrics revolve around engineering measures such as the accuracy, speed, and scalability of models. Things become more multifaceted, and thus noisier, when other factors are taken into account, such as "robustness," "data quality," "fairness," "bias," or "explainability."

Especially in real-world conditions, the numbers may be seriously flawed. Programs to monitor and improve the quality of AI may impair, not improve, the outcome. So, the question is, what underlying processes or aspects of artificial intelligence in an organization do we want to evaluate and why?
Since it is such a foundational aspect of implementing AI, figuring out how to measure fairness and ethical considerations should be a first focus. We must ensure that AI serves society broadly, not narrowly. This is not easy because of the qualitative nature of the factors involved. However, it can be done, for example by looking at the quality of an AI system's outcomes for different groups, across race, gender, and social and demographic dimensions. Simply measuring the frequency and scope of audits and evaluations of AI systems can also be a good starting point, as can measuring diversity in AI development teams.
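A starting point can be as simple as comparing an outcome metric per group and reporting the largest gap. The sketch below uses made-up numbers and is nowhere near a complete fairness audit, but it shows the basic mechanics.

```python
from collections import defaultdict

# Made-up evaluation records: (group, prediction_was_correct)
records = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

per_group = defaultdict(list)
for group, correct in records:
    per_group[group].append(correct)

accuracy = {g: sum(v) / len(v) for g, v in per_group.items()}
gap = max(accuracy.values()) - min(accuracy.values())

print("accuracy per group:", accuracy)   # {'group_a': 0.75, 'group_b': 0.5}
print(f"largest accuracy gap between groups: {gap:.2f}")
```

Tracking such a gap over time, per model release, is a concrete way to turn the fairness principle into a number that leadership can be held to.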

Outlook

Embracing AI means continually asking: what is the goal? It requires long-term thinking, vision, and continuity of leadership. You need to understand that even with AI, there is no quick fix. You want to be more confident than ever that your organization has the right expertise and a culture of critical thinking that grows people and capabilities.
Ultimately, the question is: do we want to achieve quick wins, cut staff and reach challenging financial goals with AI, or do we want to build an enterprise for the long term that delivers exceptional value to customers and society with AI’s new and revolutionary possibilities?

References

(1) Jeong Doo-yong, Economist, "Concerns turned into reality… As soon as Samsung Electronics unlocks ChatGPT, 'misuse' continues", 30.3.2023. Accessed via Google Translate: https://economist-co-kr.translate.goog/article/view/ecn202303300057?s31&_x_tr_slauto&_x_tr_tlde&_x_tr_hlde&_x_tr_ptowapp
(2) Stephen Wolfram, “What Is ChatGPT Doing … and Why Does It Work?,” Stephen Wolfram Writings, 2023. writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work
(3) OpenAI Data Usage Policy; Privacy Policy: https://beta.openai.com/policies/privacy-policy; Terms of Use: https://beta.openai.com/policies/terms-of-use
(4) This section is based on several interactions with ChatGPT. I considered both its regular responses and its responses in Developer Mode, a version of ChatGPT that circumvents its internal guardrails and shows a more "raw" version of ChatGPT's responses. To switch on Developer Mode, I used the ChatGPT DAN 12.0 prompt.
(5) Willison, Simon, "It's infuriatingly hard to understand how closed models train on their input", June 4, 2023 (retrieved 4.6.2023)
(6) OpenAI, "How your data is used to improve model performance"
(7) This can be done under "Settings → Chat History and Training". It is also possible to fill out this form: https://docs.google.com/forms/d/e/1FAIpQLScrnC-_A7JFs4LbIuzevQ_78hVERlNqqCPCt3d8XqnKOfdRdQ/viewform
(8) https://help.openai.com/en/articles/7730893-data-controls-faq
(9) NBCnews, “Two Google engineers resign over firing of AI ethics researcher”, Feb. 4, 2021
(10) Ben Hutchinson, Vinodkumar Prabhakaran, Emily Denton, "Social Biases in NLP Models as Barriers for Persons with Disabilities", Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5491–5501, July 5–10, 2020: https://aclanthology.org/2020.acl-main.487.pdf
(11) De Vynck, Gerrit; Zakrzewski, Cat; Lima, Cristiano, "Facebook told the White House to focus on the facts about vaccine misinformation. Internal documents show it wasn't sharing key data.", Washington Post, October 28, 2021
(12) Davenport, Thomas; Mittal, Nitin, “All in on AI. How smart companies win big with AI”, Harvard Business Review Press, 2023, p. 115–116.
(13) Forbes, “Top 10 Use Cases For ChatGPT In The Banking Industry”, Mar 8, 2023
(14) S. Farrell, F. Badii, B. Schneier, S. M. Bellovin, “Reflections on Ten Years Past The Snowden Revelations”, The Internet Engineering Task Force (IETF), 20 May 2023
(15) Caesar, Ed, "Crooks' Mistaken Bet on Encrypted Phones", April 17, 2023
(16) Yao Fu, Litu Ou, Mingyu Chen, Yuhao Wan, Hao Peng, Tushar Khot, "Chain-of-Thought Hub: A Continuous Effort to Measure Large Language Models' Reasoning Performance", arXiv:2305.17306, May 26, 2023
(17) Tilley, Aaron, "Apple Restricts Employee Use of ChatGPT, Joining Other Companies Wary of Leaks", Wall Street Journal, May 18, 2023 [retrieved 20/05/2023]
(18) Meta, “Democratizing access to large-scale language models with OPT-175B”, May 3, 2023
(19) BigScience Workshop, “BLOOM: A 176B-Parameter Open-Access Multilingual Language Model”, revised 13 Mar 2023, https://arxiv.org/abs/2211.05100.
(20) Patel, Dylan; Ahmad, Afzal, "Google: 'We Have No Moat, And Neither Does OpenAI': Leaked Internal Google Document Claims Open Source AI Will Outcompete Google and OpenAI", May 4, 2023
(21) Bundesamt für Sicherheit in der Informationstechnik, "Security of AI-Systems: Fundamentals — Provision or use of external data or trained models", 20.12.2022

(21b) Tete, Stephen Burabari, "Threat Modelling and Risk Analysis for Large Language Model (LLM)-Powered Applications", 2024. https://arxiv.org/abs/2406.11007
(22) NCSA, “AI systems: develop them securely”, 15.02.2023.
(23) R. Shokri, M. Stronati, C. Song and V. Shmatikov, "Membership Inference Attacks Against Machine Learning Models", in IEEE Symposium on Security and Privacy, San Jose, CA, USA, 2017.
(24) Nassi, Ben, "Split-Second 'Phantom' Images Can Fool Tesla's Autopilot", CCS '20: Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, October 2020, pp. 293–308
(25) https://www.internationalcybertech.gov.au/our-work/security/disinformation-misinformation
(26) https://developer.nvidia.com/blog/nvidia-enables-trustworthy-safe-and-secure-large-language-model-conversational-systems/?ncidprsy-552511#ciddl28_prsy_en-us
(27) “IBM to Pause Hiring for Jobs That AI Could Do”, May 1, 2023
(28) David H. Autor, “Why Are There Still So Many Jobs? The History and Future of Workplace Automation”, Journal of Economic Perspectives, Vol. 29, №3, Summer 2015, pp. 3–30
(29) McKinsey, "AI, Automation, and the Future of Work: Ten Things to Solve For", Briefing Note prepared for the Tech4Good Summit, organized by the French Presidency, June 2018; Benjamin, "Here's What Happens When Your Lawyer Uses ChatGPT", May 27, 2023
(30) Bell, C. M., & Khoury, C., "Organizational de/humanization, deindividuation, anomie, and in/justice". In S. Gilliland, D. Steiner, & D. Skarlicki (Eds.), "Emerging Perspectives on Organizational Justice and Ethics, Research in Social Issues in Management" (7th ed., pp. 167–197), Information Age Publishing, 2021.
(31) Rispens, Sybe Izaak, “The Value of People in Cybersecurity. The destructive nature of Taylorism in Cybersecurity management”, Medium, 2022
(32) Hossein Hajipour, Thorsten Holz, Lea Schönherr, Mario Fritz, “Systematically Finding Security Vulnerabilities in Black-Box Code Generation Models”, CISPA Helmholtz Center for Information Security, arXiv:2302.04012v1 [cs.CR] 8 Feb 2023
(33) Goodson, Martin, "The Alan Turing Institute has failed to develop modern AI in the UK", Substack, May 12, 2023 (retrieved 19.5.2023)
(34) Some of the best introductions to the topic have been given by my countryman Piet van Andel: van Andel, P., "Serendipity: Expect Also the Unexpected", Creativity and Innovation Management, 1(1): 20–32, 1992; van Andel, P., "Anatomy of the Unsought Finding. Serendipity: Origin, History, Domains, Traditions, Appearances, Patterns and Programmability", British Journal for the Philosophy of Science, 45(2): 631–648, 1994. See also: De Rond, M. and Morley, I., Serendipity, Cambridge: Cambridge University Press, 2009; Busch, C., "The Art and Science of Good Luck", Riverhead Books, 2020.
(35) Busch, C., & Barkema, H. (2022). "Planned Luck: How Incubators Can Facilitate Serendipity for Nascent Entrepreneurs Through Fostering Network Embeddedness", Entrepreneurship Theory and Practice, 46(4), 884–919. https://doi.org/10.1177/1042258720915798; Busch, Christian, "How to make Serendipity happen at work", 2018
(36) Alina Huldtgren, Christian Mayer, Oliver Kierepka, and Chris Geiger, "Towards serendipitous urban encounters with SoundtrackOfYourLife", in Proceedings of the 11th Conference on Advances in Computer Entertainment Technology (ACE '14), Association for Computing Machinery, New York, NY, USA, Article 28, pp. 1–8, 2014. https://doi.org/10.1145/2663806.2663836

Updates

01.07.2024 — Added the excellent paper by Stephen Burabari Tete on LLM threat modeling (See note 21b).


Written by Dr. Sybe Izaak Rispens

PhD on the foundations of AI, ISO27001 certified IT-Security expert. Information Security Officer at Trade Republic Bank GmbH, Berlin. Views are my own.
