Governance of low-code application development

Dr. Sybe Izaak Rispens
27 min read · Mar 18, 2023

Five key strategies for the successful democratization of business app creation

© wernerwerke

Low-code platforms — software development environments that allow users to create applications with little or no coding experience — have unleashed a transformative power in organizations in recent years. Allowing non-developers to participate in the application development process through visual drag-and-drop tools has permitted businesses to automate their processes and develop software much faster and more cost-effectively.

No wonder the global market for low-code tools is expanding by double-digit percentages annually and is expected to be worth around 65 billion U.S. dollars by the year 2027.¹ Market research companies such as Gartner predict that low-code will account for more than three-quarters of all new business applications developed by organizations in the next three to five years.²

There is such colossal momentum in the market for low-code tools because organizations increasingly recognize that these tools are essential for keeping pace with the latest digital innovations. Low code is a strategic cornerstone for competing, thriving, and growing in a fast-paced online world. Employees and customers expect and demand rapid development of easy-to-use applications, and organizations that don’t find ways to automate paper-based or otherwise appallingly bureaucratic processes will have a significant competitive disadvantage against those who do. Low code increases process execution efficiency and makes data capture, processing, and delivery faster. It also reduces human error in manual processes, and in general, it can raise the quality of data entry significantly. It allows organizations to create custom solutions that scale and grow quickly. For example, with low-code business apps, it’s a snap to integrate the power of conversational A.I. tools like ChatGPT to make data entry much easier or do input validations on the fly.

Let’s put some numbers on this.

A recent survey among global organizations showed that low-code development is, on average, between 40 and 60 percent faster than traditional development.³ Depending on the task to be solved, it can be faster still. For example, a while ago, Schneider Electric started to invest big in low code. Soon, the organization was releasing three low-code business apps per month, with an average delivery time of just ten weeks.⁴ The U.S. Air Force also achieved significant successes with low-code tools, developing applications in highly business-critical and security-related knowledge domains not in years but in weeks.⁵

Low code allowed financial institutions to streamline and automate complex regulatory workflows. Things that usually took years to automate would now have an app in a matter of months. At some point, the Dutch Bank ABN Amro demonstrated that it is possible to develop a full-fledged customer-facing mobile app in a four-hour low-code hackathon session.⁶ Such an increase in development speed obviously reduces cost.
Companies using low-code tools not only need less time to develop applications, but they also need to hire fewer qualified I.T. developers, which can quickly save millions.⁷ Total cost reductions of low-code compared to traditional software development can go up to 90%.⁸ The data from organizations experimenting with low-code consistently show that low-code delivers on its promises.

Overall, the takeaway message is this: if a traditional enterprise app typically requires months, a dozen people, and millions of Euros to build and deploy, doing the same thing with low code brings these figures down to weeks, a handful of people, and a few thousand Euros.

Even with such a positive outlook, and with vendors of low-code frameworks standing in line to make you believe that low-code is the silver bullet of enterprise application development, the democratization of application development comes with its challenges. Picking the right vendor is already hard, as over two hundred low-code platforms are currently available, including offerings from prominent corporate players such as Microsoft, Google, IBM, and Siemens. Every framework has its niche, for example, workflow automation or generating forms for data entry. Some frameworks are open source, others proprietary.

Challenges of low-code

I see the most challenging problems with low-code in my domain, information security risk management (maybe this is my professional myopia, and there may be other significant issues that I am unaware of):

- Data leakage: The challenge of identifying data leakage or suspicious activity from applications created by end users. This is a big issue. Technically, with low-code, data leakage is almost impossible to detect, and the result can be a situation in which there is little to no control anymore over which business data is exposed to whom and when.
- Dark matter: There is often no inventory of low-code applications, so it becomes difficult to know how many applications exist within the organization, who built them, what data they can manage, and what potential risks they have. This leads to a new level of “shadow I.T.” or corporate “dark matter.”
- Quality control: Low-code, in general, affects information quality positively. For example, low code can automate repetitive tasks or make complex manual steps easier for human operators, or an A.I.-powered bot can continuously assist with plausibility checks. But given that low-code tools may use a broader range of data sources as input and push their results into a wider set of systems, any data quality issue can have a much larger impact on the organization than with any other business tool.
- Non-compliance: Citizen developers make software that works for them. They may not be aware of guidelines or procedures related to data usage or corporate network security. Or they may not know (or care much about) how to align their tools with security policies, quality standards, or regulatory requirements.
- Detection and tooling: Conventional security models and tools usually fail to catch the risks of low-code frameworks. This makes the risks hard to measure and quantify. There are just a handful of products on the market that allow for analyzing and understanding the risks of low-code applications, and these tools are still at an early stage.

These challenges leave organizations with considerable security and compliance gaps. Not all of these gaps are new, and it is worthwhile to look back at the history of personal computer usage in order to see what we can learn from previous mistakes.

Historical lessons from End User Computing

For decades, organizations witnessed (or should we instead say, “endured”?) something called “end-user computing” (EUC).

EUC means the autonomous use of information technology by knowledge workers outside the information systems department.⁹ It’s about organizational departments or individual employees who create solutions, usually spreadsheets, so things get done more quickly. For example, someone needed a quick fix for the shortcomings of a centralized application. Or essential reporting functionalities were missing in the corporate Enterprise Resource Planning solution, or there was a real or perceived gap between what the centralized software of an organization is capable of and the functionality needed to get the job done. There is a spreadsheet for everything!

It’s amazing what we now know about end users and their I.T. solutions since they were given personal computers. Already in 1985, with people still working on the legendary IBM 5150s, researchers at the University of Houston found that end users may not spend enough time on problem definition and diagnosis under the pressure of daily activities. They are then likely to proceed with solving a problem by creating a new spreadsheet. And, lo and behold, the new spreadsheet often solves the wrong problem. Or it creates more new problems than it solves.¹⁰

For instance, end users may spend an insane amount of time developing hugely complex spreadsheets, only to find out much later that there is existing software, perhaps an off-the-shelf product or a spreadsheet built by people in another department, that already performs the task.¹¹ Or users spend weeks or months of hard labor designing and developing a spreadsheet solution that an expert could have built in a fraction of that time using more efficient technology. Or end users get so involved in creating and maintaining their solutions that they get entirely sidetracked from their primary organizational responsibilities. (Sidenote: I myself have been a long-time member of the sidetrackers anonymous. Building my own writing and research tools in low-code platforms delayed my Ph.D. dissertation on the foundations of A.I. by at least several years.¹²) Or users create brilliant solutions without sufficient documentation, then leave the company or switch departments, which leads to a pile of Yet Another Useless End User Computing Solution.

One more problem to mention here, and then I stop. Some users keep adding new features to their solution, up to a point where the complexity of the solution outgrows the complexity of what needs to be solved. This is called “feature creep”. It’s not unique to end users; it’s something that is widespread in information technology.

Spreadsheet Quality Issues

What are the most pressing problems of EUC today? Lack of quality is a huge issue. Users who create spreadsheets without training and without a formal development and test process end up with solutions that inevitably contain errors, both unintentional and sometimes even intentional ones.¹³

Reading the results of spreadsheet error research is outright depressing. Real-world spreadsheets in use at large and reputable organizations, such as multinational insurance companies and billion-heavy financial institutions, typically consist of several thousand calculations, often linked in long, concatenated chains of formulae. Large-scale statistical sampling of such spreadsheets indicates that 2% to 5% of those thousands of calculations contain errors.
The Spreadsheet Engineering Research Project of Tuck School of Business, for example, found that almost half of all spreadsheet files contain significant errors and have the potential to lead to very serious mistakes.¹⁴ To give a historical example of the impact of such mistakes: in 2005, Eastman Kodak was forced to restate all of its published financial results due to a spreadsheet incorrectly calculating severance and pension-related termination benefits. High costs and reputational damage were the results. The total value at risk, just for the financial sector, due to spreadsheet errors is estimated at $12.1 billion annually.¹⁵
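The arithmetic behind these findings is worth spelling out. Assuming, as a simplification, that each formula has an independent chance of being wrong, even a small per-formula error rate makes an error-free spreadsheet of realistic size vanishingly unlikely. A minimal sketch:

```python
# Probability that a spreadsheet contains at least one formula error,
# assuming errors occur independently at a fixed per-formula rate.
# (A simplification; real errors cluster, but the point stands.)

def p_at_least_one_error(per_formula_rate: float, n_formulas: int) -> float:
    """Return the probability of >= 1 error among n independent formulas."""
    return 1.0 - (1.0 - per_formula_rate) ** n_formulas

# Even at the optimistic 2% per-formula error rate, a modest 200-formula
# spreadsheet is almost certain to contain at least one mistake:
for n in (50, 200, 1000):
    print(n, round(p_at_least_one_error(0.02, n), 3))  # 0.636, 0.982, 1.0
```

At a few thousand formulas per document, the probability of a fully correct spreadsheet is effectively zero, which is exactly what the sampling studies report.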

Now, to err is human, but to let humans use tools that allow error at such scale is insanity.

The human error rate in spreadsheets is not much different from the error rate in professional programming environments. Making mistakes is not due to the use of spreadsheets per se, nor to citizen developers being any less capable than professional programmers. The problem is more fundamental: spreadsheet applications are not built for managing human mistakes, whereas professional development pipelines are.
Quality controls are a natural part of any workflow, yet spreadsheet software ignores this. As a result, spreadsheet users tend to be insanely overconfident when estimating the quality of their work.¹⁶
This is the first important lesson from almost half a century of end-user computing for low-code applications: just as most spreadsheets have at least one and probably several incorrect bottom-line values, low-code applications will inevitably have errors.
But here lies the advantage of low-code platforms. Validating spreadsheets is slow, laborious work: manual reasonableness checks, input validations, peer reviews, in-depth reviews of user access, and expert audits of change and release processes, including tedious manual documentation and detailed testing of how well users actually follow restrictive EUC policies in their day-to-day work. Low-code environments, by contrast, can be set up so that most of these steps are automated and built into the framework.
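As a sketch of what "automated and built into the framework" could look like, the manual review steps can be recast as release checks that run on every deployment. All function and field names here are illustrative, not part of any real low-code platform:

```python
# Sketch: manual EUC review steps recast as automated release checks that
# a low-code platform could run on every deployment. The app is modeled
# as a plain dict; check names and fields are illustrative placeholders.

def check_has_owner(app: dict) -> bool:
    """Every app must have a documented owner."""
    return bool(app.get("owner"))

def check_inputs_validated(app: dict) -> bool:
    """Every form field must declare an input validation rule."""
    return all(f.get("validation") for f in app.get("form_fields", []))

RELEASE_CHECKS = [check_has_owner, check_inputs_validated]

def release_gate(app: dict) -> list:
    """Return names of checks the app fails; an empty list means releasable."""
    return [c.__name__ for c in RELEASE_CHECKS if not c(app)]

app = {"owner": "hr-ops", "form_fields": [{"name": "email", "validation": None}]}
print(release_gate(app))  # ['check_inputs_validated']
```

The point is not the specific checks but that they run automatically on every release, which is exactly the step spreadsheets never had.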

End users can easily integrate A.I. tools in low-code environments. Just as Microsoft incorporated A.I. tools across its full Office product offering this week, Copilot-style software add-ons can simply be integrated by end users in low-code tools. This will radically and fundamentally reshape how people use and view business software. People will expect low-code apps to auto-enter information or validate user input based on knowledge from powerful A.I. models.

Given the historical lessons of End User Computing, what rules can we deduce for low code governance?

Governing low-code Success Factor 1: Manage Human Error

Step one is to help citizen developers identify and manage human error throughout development. Most low-code frameworks are already a godsend compared to spreadsheets for two reasons. First, by definition, low-code aims at error avoidance. Instead of writing thousands of formulas or hundreds of lines of code, users can work with drag-and-drop interfaces. Visual ways of working are good at reducing simple mistakes, as they provide a clear and easy-to-understand representation of complex information. Low-code frameworks are also reasonably good at keeping people from hard-wiring overly rigid rules in their apps, because it is often easy to create additional flows that handle exceptions and edge cases. Yet errors still happen; thus, users must be assisted in identifying and addressing them.
Second, most low-code frameworks feature something akin to a “software development life cycle.” The idea is to make a standard approach from professional software development available to non-specialists: in principle, planning, designing, developing, testing, deploying, and maintaining software become logical steps in the workflow. But some tools are better at this than others. It is a challenge to make the steps from requirements to testing intuitive and easy, and many tradeoffs exist. So it is worthwhile to compare the products of different low-code vendors.

In professional DevOps environments, automated testing is integrated from the start. In low-code environments, however, testing is usually done ad hoc and iteratively, because the low-code framework allows for quick test runs. This is great for trying out ideas and concepts fast, but to build robust applications, low-code developers should be guided toward a more formal approach to testing. Implementing an agile version of good old quality assurance methods is a great way to do this.

The idea is to use low-code power to nudge users into creating ‘test plans’: step-by-step checklists to verify that their app functions as intended. This is a perfect use case for low code: rebuilding something that worked well for decades but was abandoned as a practice because it was a laborious, manual process. We can now use the power of low-code development, combined with plug-and-play A.I. models, to provide the modern, time-saving toolset that quality assurance requires.

For instance, low-code tools can be used to create “intelligent checklists.” These checklists are automatically updated each time a citizen developer applies a new change to the app. The idea is to turn something that used to be a tedious, manual testing process — slow, costly, and error-prone — into a splendid quality assurance tool.
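One way such an "intelligent checklist" might work, sketched in plain Python; the change types and verification steps below are invented for illustration, not taken from any particular platform:

```python
# Sketch of an "intelligent checklist": test items are derived from the
# app's change history, so every change a citizen developer makes
# automatically extends the checklist. All names are illustrative.

from dataclasses import dataclass, field

# Map change types to the verification steps they require (assumed rules).
CHECKS_BY_CHANGE = {
    "new_form_field":  ["Verify required/optional behaviour", "Check input validation"],
    "new_data_source": ["Confirm read-only access is sufficient", "Review data classification"],
    "new_workflow":    ["Walk through the happy path", "Test at least one failure path"],
}

@dataclass
class Checklist:
    items: list = field(default_factory=list)

    def apply_change(self, change_type: str) -> None:
        """Extend the checklist whenever the app is changed."""
        for step in CHECKS_BY_CHANGE.get(change_type, ["Manually review this change"]):
            if step not in self.items:  # avoid duplicate items
                self.items.append(step)

checklist = Checklist()
checklist.apply_change("new_form_field")
checklist.apply_change("new_data_source")
print(len(checklist.items))  # 4 open verification steps
```

In a real platform, the change events would come from the framework itself, and an A.I. model could phrase the steps in the app's own business vocabulary.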
However, even with tool-assisted or automated application testing, it is a fallacy to think there is a single technical fix for managing human error. In low-code development especially, human error usually happens at a higher abstraction level than in spreadsheets or common programming languages. Errors are less often simple mistakes, slips, and lapses, and more often flawed architectural choices and systemic failures.
For instance, users can connect to data sources that contain confidential information. Or a low-code application unintentionally creates so much traffic for a critical business service that it causes outages. Accounts can easily be shared among low-code applications, which can cause data leakage. Users can add vulnerable and untrusted components with the click of a button. And rarely do low-code users care about security logging and monitoring for failures.

Human error in low code can have a multitude of reasons. It can be a lack of knowledge, carelessness, too much deadline pressure, failing oversight, or a combination of all of these factors.

And this is a problem.

Low-code tools are much more powerful than spreadsheets. The potential for damage is also significantly higher than with traditional end-user computing applications. Low code can lead to devastating data breaches and loss of confidentiality, integrity, and availability on an unimaginable scale. So, the stakes for managing human failure are significantly higher.
The Open Web Application Security Project (OWASP) put together a list of the top ten security issues of low-code frameworks. The list gives insight into the most pressing problems: authorization misuse, data leakage, and secure communication failures.¹⁷

Thus, the power of low-code frameworks should be used to manage human error. This needs to be a collective, coordinated, A.I.-powered effort across teams and departments.

Governing low-code Success Factor 2: Handle Inventory

For spreadsheets, it is easy for anyone to create new files on the fly. So, in most organizations, nobody knows how many files are in use, who their owners are, what they do, and how critically departments or organizations depend on them. As a result, spreadsheets are largely invisible to corporate I.T. departments, information security management systems, management, and auditors. This is why spreadsheet applications are also called “shadow I.T.” or the “dark matter of organizations” because, like in astronomy, spreadsheets are used in every corner of the corporate universe, yet they do not appear to interact with any of the governing structures.
Thanks to data provided by one of the most prominent software vendors for automatic analysis of spreadsheet landscapes, we can put some numbers on the proliferation of spreadsheets in organizations: a midsize financial services company uses, on average, around 100,000 spreadsheets; government agencies, close to a million. In financial departments, each employee uses thousands of spreadsheets, with each document counting, on average, some 4,000 formulas.¹⁸
In surveys, more than half of all large corporations state that the term “Spreadsheet Hell” describes their reliance on spreadsheets. This is only in part due to the unbridled ingenuity of end users, who find their way around corporate rules. It is also caused by systemic issues in organizations: senior management usually stimulates spreadsheet usage through ad hoc reporting requests, and by not allocating enough resources to train a sufficient number of employees in the company’s internal control system or in more mature I.T. tools such as low-code platforms.

Regulators have also taken on the topic of dark matter. After financial reporting scandals at Enron and other major companies, the U.S. Congress passed the Sarbanes–Oxley Act (SOX) in 2002. The act mandates that publicly traded companies address the problem of spreadsheet management:¹⁹ they need to document, evaluate, and test internal controls for spreadsheets critical to financial reporting. Other regulations also took on EUC. For instance, the European Union’s General Data Protection Regulation (GDPR) considers personal data stored and analyzed in spreadsheets a high risk.

The reasoning is that any employee may send spreadsheets containing personally identifiable information to management, other departments, or even external parties without any oversight. Given the massive volumes of data in spreadsheets across organizations, it is impossible to manually identify, catalog, and classify all such data.

Thus, the main lesson from half a century of end-user computing for low-code governance here is: manage your inventory.

Fortunately, low-code tools are usually client-server based. This makes it unlikely that low code will proliferate the way spreadsheets did, because apps must run in a central place. However, that “central place” can basically be any cloud infrastructure that supports a low-code framework, so dark matter is still a risk to consider, and a centrally managed low-code repository is necessary.

This repository requires manual and automated actions. The manual steps are the responsibility of low-code app owners. They need to document, and regularly update, the rationale for why the solution exists, who owns it, what problems it tries to solve, which types of data are processed, and how things are implemented. If you don’t make this step simple enough, low-code users will simply ignore it. If you force them, they will bend the rules, probably with some clever low-code solution.
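A minimal sketch of what one record in such a central inventory could hold, with field names mirroring the documentation duties just listed; all names and the review threshold are illustrative assumptions:

```python
# Minimal sketch of a central low-code inventory record. Field names are
# illustrative; they mirror what app owners are asked to document.

from dataclasses import dataclass
from datetime import date

@dataclass
class LowCodeAppRecord:
    name: str
    owner: str
    rationale: str      # why the app exists / the problem it solves
    data_classes: list  # e.g. ["public", "internal", "pii"]
    last_reviewed: date

    def review_overdue(self, today: date, max_age_days: int = 180) -> bool:
        """Flag records whose documentation has gone stale."""
        return (today - self.last_reviewed).days > max_age_days

app = LowCodeAppRecord(
    name="leave-request-tracker",
    owner="hr-ops",
    rationale="Replaces e-mail based leave requests",
    data_classes=["internal", "pii"],
    last_reviewed=date(2022, 6, 1),
)
print(app.review_overdue(today=date(2023, 3, 18)))  # True: review is overdue
```

The staleness check is the automated half of the repository: it periodically nags owners, so the manual documentation duty cannot silently lapse.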

Thus, you want to ensure that the workflow for manually registering and regularly updating information about low-code solutions is smooth. The inventory app should deliver a seamless and uninterrupted flow of the identification, data collection, and documentation steps that users are required to make. This is the only way to get current, low-code landscape information.
Such an inventory app is probably the first low-code app any organization wants to create before doing anything else. With low code, it is possible to build business applications that do not have a crappy user interface. It is possible to build an inventory app that does not just administer processes and bureaucratic steps, but is a useful tool focused on users’ needs and goals.

For instance, instead of requiring users to fill in endless checklists about their low-code application, the inventory app can ease such documentation efforts with “intelligent checklists.” These checklists use clever algorithms, including machine learning or A.I., to adapt dynamically to the user’s needs. The app should already have gathered information on the system’s context or know what type of recent changes have been made and what needs to be documented. The goal is to only ask users to input the information humans must provide. If the inventory app is built in low code, creating such a level of sophistication is entirely feasible — even with notoriously understaffed internal tool teams that usually don’t have UX-design expertise.

Governing low-code Success Factor 3: Hard Wire Authenticity

One of the main advantages of low-code frameworks is integration: it is super easy for users to connect to internal or external databases or to call application programming interfaces (APIs) that let apps communicate with highly diverse systems. Connecting to internal and external systems is just a matter of clicking a button or a simple drag-and-drop exercise.

This ease of integrating data sources unearths a long-standing flaw in thinking about information security: it reveals the utter inappropriateness of a fundamental concept which has been widely held as the cornerstone of a solid and effective security program for decades.

The flawed idea is this: security is a matter of confidentiality, integrity, and availability.

To be sure, the so-called “CIA triad” has been flawed since the beginning, back in the early days of computer security in the 1970s. But now, with low code, problems that lingered for decades and only recently became an issue in conventional development turn into a tangible and imminent threat. The threat is the lack of assurance that data or information transmitted between systems truly comes from the source it claims to come from. This property is called “authenticity.”

The fallacy is the idea that authenticity is an automatic result of the requirement that information must be accurate, complete, and not altered or corrupted during storage or transmission. But this is plain wrong: except, perhaps, in the most ideal, theoretical, and abstract of all worlds, authenticity can never be derived from “integrity.”

The institutionalized, mindless, and uptight focus on the CIA triad has led to the nearly total neglect of the issue of authenticity. Standards bodies such as NIST and ISO have not been helpful here, as the CIA triad remains the cornerstone of most information security frameworks.²⁰

Authenticity has still not been added to the CIA triad, but it is now at least covered by something new, with a rather unfortunate name: “zero trust”.²¹

“Zero trust” is a bad way of saying that authenticity is key.

When you focus on authenticity, you move defenses from static, network-based perimeters to a focus on users, assets, and resources. For all those cyber security theoreticians and practitioners who still think authenticity is implicitly covered by something in the CIA triad: look at the immense effort organizations are willing to make to implement zero trust programs.

Hardwiring authenticity (or zero trust, if you insist) in low-code environments means: identifying and prioritizing assets, establishing strong authentication, micro-segmenting networks, monitoring and logging all activity, implementing a least-privilege access model, and, of course, implementing encryption. All low-code frameworks must be set up with authenticity in mind. This is partly something that expert engineers can prepare in a technical sense. Some low-code frameworks are better at implementing zero trust on a technical level than others.
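To illustrate one of these measures, a least-privilege gate for low-code data connectors can be as simple as a purpose-to-permission decision table. The mapping below is an assumed example policy, not a standard:

```python
# Sketch of a least-privilege gate for low-code data connectors: an app
# may only request the permissions its declared purpose justifies.
# The purpose-to-permission mapping is an assumed example policy.

ALLOWED_PERMISSIONS = {
    "reporting":  {"read"},
    "data_entry": {"read", "write"},
    "admin_tool": {"read", "write", "delete"},
}

def violates_least_privilege(purpose: str, requested: set) -> set:
    """Return the requested permissions the declared purpose does not justify."""
    return requested - ALLOWED_PERMISSIONS.get(purpose, set())

# A reporting app asking for write access gets flagged:
print(violates_least_privilege("reporting", {"read", "write"}))  # {'write'}
```

The table makes the trust decision explicit and auditable, which is the essence of the zero-trust posture described above.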

However, authenticity is not just technical implementation. It’s more of a design philosophy. It is all about this question: when can we trust assets, data, or users, especially when they are just one click away?

On a system level, corporate I.T. needs to think long and hard about how to embed low-code frameworks in a zero-trust environment. There is a lot of literature on zero trust, but little of it targets low-code usage. From a generic system design perspective, however, it is irrelevant whether zero trust is applied as a guiding principle to conventional or to low-code systems. NIST also has a guiding publication on implementing zero trust,²² which may be helpful here (although it needs translation into a language that non-experts can understand).

It is essential that citizen developers learn how to implement zero trust. This way of thinking is so basic that it must be trained; it is more important than coding experience. People who fail to understand the principles of authenticity, or who do not get the basics of how high-level design decisions affect authentication and authorization, should not be allowed to develop low-code tools. It is as simple as that. Organizations may want to introduce something like a driver’s license for low-code users to make sure this requirement is enforced.

Governing low-code Success Factor 4: Find the “g-spot” of governance

If anything sticks out from half a century of end-user computing, then it is this: end users’ ingenuity will find ways around governing frameworks.

They will either ignore the topic of compliance outright, do anything to be “compliant” with the minimum possible amount of effort, or will explore all options of “compliance theater.”

The primary goal of compliance theater is to create a fairy-tale-style policy and protocol landscape targeted at regulators and auditors. This landscape does not improve the quality of apps or reduce any real-world risks.

End users are brilliant at doing this.

This is not always out of malice. Most of the time, end users just look at problems differently than professional programmers or I.T. experts. They look at things more holistically and can stumble onto “solutions” without realizing what they are doing. They are mostly not as constrained by policies as I.T. people are, and they certainly don’t understand rules about computer usage in the same ways as I.T. professionals.

That makes them fantastic hackers of regulations.

With the power of low-code tools at their fingertips, they will test the rules as a sign of their independence and probably outsmart the governing framework and outsmart the designers of that framework.

So, creating a governing framework for low code may seem like squaring the circle. On the one hand, we want layman developers to be as autonomous as possible, free to solve problems in their knowledge space in ways that have never been possible before. On the other hand, we need to ensure that all apps align with the rules.

The scope of this compliance requirement is broad. It involves setting design criteria, building visual models, defining business logic, granting access to data, writing documentation, and creating rules for system integration. The challenge is finding the sweet spot between allowing citizen developers to move fast and break things and ensuring they do not become kamikaze pilots.

As low-code explicitly targets non-specialists, your standard formal and technical ISO-compliant-style policies are most likely unfit for purpose. Low-code governing documents should be written in plain, crisp, clear language. Document readability can be objectively measured nowadays, and the readability score of low-code policies and standards should be used as a key performance indicator.²³

Notice the by-product of low code as an organizational catalyst here: more documents in your governing landscape will likely benefit from a KPI that tracks readability.

What topics does a low-code governing framework need to cover?
First, it should set the risk appetite, for instance by classifying low-code application types by the level of business criticality, the kind of data processed in the app, or how much system integration or interference the app needs or may create. This needs to be spelled out. For example: we accept the risk of breaking things here, because we consciously decide that we want the speed of low code to boost experimentation and innovation; but we do not accept this risk there, because it might cost us our licenses or is simply too great a risk. You want to ensure that all stakeholders in low-code app development clearly understand what is allowed when, what is not, and why (don’t forget the why; end users won’t follow stupid rules).
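Risk appetite spelled out this way can even be encoded as an explicit decision table, so that "what is allowed when, and why" is unambiguous. A toy sketch with illustrative criteria:

```python
# Sketch of risk appetite as an explicit decision table: business
# criticality and data class decide whether citizen development is
# allowed, needs review, or is off-limits. Criteria are illustrative.

def risk_decision(business_critical: bool, handles_pii: bool) -> str:
    """Return the governance decision for a proposed low-code app."""
    if business_critical and handles_pii:
        return "forbidden: professional development and audit required"
    if business_critical or handles_pii:
        return "allowed with mandatory security review"
    return "allowed: experiment freely"

print(risk_decision(business_critical=False, handles_pii=False))
```

A real framework would use finer-grained classes, but the principle stands: the decision logic itself is documented, testable, and visible to every stakeholder.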

Governing low-code Success Factor 5: Enable logging and monitoring

With low-code successfully rolled out in an organization, including the governing framework and proper training material, and with end users solving their real-world problems with low-code apps, pressing operative questions arise: do we have any apps that over-share access credentials? Does this app really need write access to that database? How do our users apply the principle of least privilege, especially to personally identifiable information? Is any suspicious or malicious activity going on in our low-code landscape?

These questions can only be answered with measured facts and quantifiable data. So, we want a solution that creates visibility into how apps move sensitive or business-critical data between databases or endpoints.
With low code, there are two problems here: insufficient logging and insufficient data detection tools.

Logging in low-code development frameworks is challenging. The low-code platforms themselves may generate some technical logs, but these are usually just low-level system logs, unrelated to the business logic layers of apps. Because low-code platforms are explicitly designed to abstract away many underlying technical details, users are usually not bothered with logging. If there are any logs, they are mostly neither cohesive nor standardized. And because logging can have a significant impact on the performance of low-code applications, it is often done too sparsely to be of any use. Also, most low-code platforms have no integration with existing log management tools.

With logs of poor quality and not connected to centralized logging and analysis tools, there is a significant blind spot in monitoring the overall low-code landscape. Where there are no good logs, there is no security information and event management (SIEM).
The challenge, then, is to create an accurate, complete, timely, scalable, performant, and integrated logging solution for the low-code framework. Unfortunately, to date, this is an almost completely unsolved problem. Currently, only one startup on the market promises a solution to the missing logging and monitoring. Yet even if this product eases the pain for corporate I.T. and information security in getting complete visibility and control of the low-code landscape, many issues remain to be resolved.²⁴

I think most organizations will not be able to solve the logging challenge with off-the-shelf tooling and will need to accept that there is no simple “technical fix” for low-code logging and monitoring. It will likely be necessary to build individual logging infrastructures and define organization-wide standards that integrate with existing infrastructures and workflows. For logging to be meaningful, it also needs to be connected to business logic. Some of this can be automated by making it mandatory that every building block available to end users contains the required logging capabilities. However, logging cannot be hidden from end users entirely: for logs to be meaningful, citizen developers need to decide what to log, when, and how. So, training is necessary.
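To make the idea of building blocks with mandatory, business-aware logging concrete, here is a minimal sketch. The decorator, field names, and the example block are all hypothetical assumptions, not the API of any real low-code platform; the point is that every invocation emits one standardized, machine-readable record that a SIEM can ingest.

```python
# Hypothetical sketch: every low-code building block is wrapped so that
# each call emits one standardized JSON audit record (who, which block,
# which data class, outcome, duration). All names are illustrative.
import functools
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("lowcode.audit")

def audited_block(block_name: str, data_class: str):
    """Make logging mandatory: the platform only offers building blocks
    that are wrapped with this decorator."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, user: str = "unknown", **kwargs):
            start = time.time()
            outcome = "failure"
            try:
                result = func(*args, **kwargs)
                outcome = "success"
                return result
            finally:
                # One cohesive record per call, ready for SIEM ingestion.
                log.info(json.dumps({
                    "block": block_name,
                    "user": user,
                    "data_class": data_class,   # e.g. "pii", "financial"
                    "outcome": outcome,
                    "duration_ms": round((time.time() - start) * 1000),
                }))
        return wrapper
    return decorator

@audited_block("export-customer-list", data_class="pii")
def export_customers():
    return ["alice", "bob"]
```

Note that the citizen developer still has to make one meaningful decision per block, namely the data class, which is exactly the kind of judgment that training has to cover.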
Most off-the-shelf tools for network traffic monitoring are neither practical nor well suited for monitoring low-code environments. Standard network security monitoring tools will, in most cases, be unable to detect security threats specific to low-code environments, such as malware, unauthorized access, or data exfiltration. Let me give an example.
Network analyzers can capture and analyze the network traffic of a low-code framework in real time. Yet to make sense of the data flow, you need in-depth knowledge of what communication from the framework counts as normal. The low-level functioning of the framework itself is usually a “black box.” Every custom or third-party connector adds new, unfamiliar traffic patterns, and each new app changes those patterns again. So poor logging is a big issue for monitoring low-code frameworks, and solving it will be a formidable task for many organizations.
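Because “normal” differs per app and changes with every connector, monitoring has to maintain a baseline per app rather than one global profile. The following is a deliberately simplified sketch of that idea (the traffic capture itself and the baseline store are out of scope, and all names are assumptions): flag any destination an app has never contacted before.

```python
# Simplified, hypothetical sketch of per-app baselining: alert on any
# network destination an app has not been seen contacting before.
# Real deployments would capture traffic, persist baselines, and route
# alerts through human review; none of that is modeled here.
from collections import defaultdict

baseline: dict[str, set[str]] = defaultdict(set)

def observe(app: str, destination: str) -> bool:
    """Return True if this destination is new for the app, i.e. an
    alert candidate worth investigating."""
    if destination in baseline[app]:
        return False
    baseline[app].add(destination)  # in practice: queue for review first
    return True
```

Even this toy version shows the operational burden: every new app and every new connector legitimately triggers alerts until its baseline is learned and reviewed.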

Governing low-code Success Factor 6: Third-party management and lock-in prevention

A low-code framework, by definition, makes you dependent on its supplier. Many things are easy to implement because the vendor has already taken care of the common use cases; typically, that covers around 80% of your needs. But what happens with the last mile? What if you really need truly custom functionality for which the low-code framework has no standard click-and-play modules?

Often, the remaining 20% requires significant and potentially impossible customization. So before choosing a framework, you need to verify how well it supports adding non-out-of-the-box functionality. Most of your needs may be covered by calling a custom-built API, so if your framework allows calling APIs with ease, you are in good shape. Yet it may still be necessary to write truly custom code, and then you may be locked into a product-specific or even proprietary language. This has several disadvantages. Such niche scripting languages are often not powerful enough to close the gap to a fully customised solution. And the pool of developer talent for your low-code framework’s language may be very small.

Thus, you need to be aware of the limitations of the low-code framework, and in general you should avoid its particular scripting language to prevent lock-in. Make sure the framework lets you build 100% of your requirements using widely available programming languages, such as JavaScript or Python. Ideally, the low-code framework would be language agnostic and allow you to add custom code in whatever language is most convenient for your organisation to develop and maintain. You want to be able to recruit developers from the large pool of ubiquitous, open-source language developers.
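One common escape hatch is to keep the last 20% out of the framework entirely: put the custom logic behind a plain HTTP API in a mainstream language, which virtually any low-code framework can call with a generic “call API” block. The sketch below uses only the Python standard library; the endpoint, payload, and the IBAN check are illustrative assumptions.

```python
# Sketch of the "escape hatch" pattern: custom business logic lives
# behind a plain HTTP API, so the low-code framework only needs a
# generic API-call block and you avoid its proprietary language.
# Endpoint, payload shape, and the validation logic are assumptions.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def validate_iban(iban: str) -> bool:
    """Stand-in for truly custom business logic (deliberately simplified;
    a real IBAN check would verify country code and checksum)."""
    cleaned = iban.replace(" ", "")
    return cleaned.isalnum() and len(cleaned) >= 15

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Expects a JSON body like {"iban": "DE89 3704 0044 0532 0130 00"}
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length))
        result = {"valid": validate_iban(body.get("iban", ""))}
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(result).encode())

# To serve: HTTPServer(("localhost", 8080), Handler).serve_forever()
```

Because the low-code app only sees an HTTP endpoint, the logic behind it can be rewritten, scaled, or moved to another platform without touching any app, which is precisely the portability that lock-in prevention is about.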

Lock-in can also have financial consequences: the business model of low-code vendors is often built such that every point-and-click module carries a price tag, and the total cost can go up quickly.²⁵

Outlook

Overall, the benefits of low code are considerable: enhanced productivity, accelerated deployment, and a user-friendly development process with pre-designed templates and drag-and-drop interfaces.

High-performing employees will demand from organizations the democratization of business app development with low code. They will want easy access to platforms that are designed for both technical and non-technical users, enabling them to work jointly with developers, business analysts, and other stakeholders. Subject matter experts will want to be able to easily tap into the power of artificial intelligence and machine learning tools and build apps that will be able to automate and significantly improve many of the manual tasks in business processes.

Especially in the three-lines-of-defense model for information security and governance, I see huge potential for automating things with low code. There are so many laborious governance and risk management steps in the second and third lines of defense that, in hyperscaling organizations, these functions often drag their feet and struggle to keep up with the pace of the first line; the snail’s pace in itself becomes a liability.
In the second and third lines of defense, there are endless laborious tasks that can be automated with low code. To be the independent control function that oversees risk and monitors the first-line-of-defense controls, second-line functions need an army of Swiss army knives. Even if some of these tools come off the shelf, at some point there is still the need for a fully customizable layer on top of the tooling landscape, so that information can be orchestrated with great precision and at high speed.

Despite such advantages, low code is also not for the faint of heart. If you fail at governing its powers, chaos will be the result. And low code will never be a “technical fix” for all things an organization has failed to manage so far.

But when governed well, a common tool for thinking and developing is likely to be a massive catalyst for change. It will incentivize subject matter experts to work with professional programmers and software architects. In some organizations, this in itself is already nothing less than a cultural revolution.

With low code, end users are now able to create unique business tools. We are just at the beginning of a whole new era in custom business development. Low code plus A.I. will lead to fully customised “co-pilots” in business settings that will simply blow your mind.

References

(1) Brandessence Market Research And Consulting Private Limited, “At +26.1% CAGR, Low-code Development Platform Market size is Expected to reach 65.15 Billion by 2027, says Brandessence Market Research”, March 2021. See also Statista, Low-code development platform market revenue worldwide from 2018 to 2025
(2) Gartner, Magic Quadrant for Enterprise Low-Code Application Platforms
(3) Creatio, “The State of low-code/no code”, March 2021
(4) Bloomberg, “Low-Code Is the Future — OutSystems Named a Leader in the 2019 Gartner Magic Quadrant for Enterprise Low-Code Application”, 2019
(5) https://appian.com/why-appian/customers/all-customers/u-s-air-force.html
(6) “Modernizing the Customer Experience at ABN AMRO for Generations to Come”, 2019
(7) John Rymer, “The Forrester Wave: Low-Code Development Platforms, Q2 2016”, 2016
(8) Pathfinder report, “Intelligent Process Automation and the Emergence of Digital Automation Platforms”, 2018. https://www.redhat.com/cms/managed-files/mi-451-research-intelligent-process-automation-analyst-paper-f11434-201802.pdf
(9) Baskarada, Sasa, How Spreadsheet Applications Affect Information Quality, Journal of Computer Information Systems 51(3), July 2012.
(10) Alavi, M., & Weiss, I.R. Managing the Risks Associated with End-User Computing. Journal of Management Information, 2(3), 5–20, 1985.
(11) Jenne, S. E., Audits of End-User Computing. Internal Auditor, 53(6), 30–34, 1996.
(12) LaTeX, the leading software system for creating scientific documents, also came into the world by someone becoming seriously sidetracked. Its foundation, TeX, started as a side project of Donald Knuth; LaTeX, built on top of it, became almost a life obsession of Leslie Lamport. See: Lamport, Leslie, “LaTeX: A Document Preparation System”, Addison-Wesley, 1994.
(13) Because spreadsheets have no audit trails, no input validation, and no access control, fraud is easy. A few years ago, a trader at a major European bank was able to conduct a series of unauthorized trades leading to $691 million in losses. He was able to pull this off simply by editing the spreadsheets used to monitor his unit’s activities. See Kroeger, Jasper, “Managing the risks of using end user computing solutions”, in: Abbas Shahim, “Research in IT-auditing. A multidisciplinary view”, VU Amsterdam, 2019, p. 258.
(14) Kenneth R. Baker, Lynn Foster-Johnson, Barry Lawson, and Stephen G. Powell, “A Survey of MBA Spreadsheet Users”, 2012; Powell, Stephen G.; Baker, Kenneth R.; Lawson, Barry, “Errors in operational spreadsheets”, Journal of Organizational and End User Computing (JOEUC), vol. 21, no. 3, 2009, pp. 24–36.
(15) Chartis Research, “Quantification of End User Computing Risk in Financial Services”, https://www.chartis-research.com/operational-risk-and-grc/operational-risk/quantification-end-user-computing-risk-financial-services-1142, retrieved 20–01–2023.
(16) Panko, R. and R. Halverson (1997) Are Two Heads Better than One? (At Reducing Errors in Spreadsheet Modeling?) Office Systems Research Journal 15, 21–32.
(17) https://owasp.org/www-project-top-10-low-code-no-code-security-risks/
(18) Around 90% of all organisations rely heavily on spreadsheets that are of material importance in financial reporting. Hinh, J.; Lewicki, S. A., & Wilkinson, W. B., “How spreadsheets get us to Mars and beyond”, Proceedings of the Forty-Second Hawaii International Conference on System Science, IEEE, 2009; Cimcon, “Spreadsheets vs. information security. Assuring Information Security within End-User Controlled Applications”; K., MacRuairi, R., Clynch, N., Logue, K., Clancy, C. & Hayes, S., “Spreadsheets in financial departments: An automated analysis of 60,000 spreadsheets using the Luminous Map technology”, Proceedings of the 2011 European Spreadsheet Risks Interest Group, 2011.
(19) Panko, R. R. (2006, May). Spreadsheets and Sarbanes-Oxley: Regulations, risks, and control frameworks. Communications of the AIS, 17(9); PriceWaterhouseCoopers (2004). The Use of Spreadsheets: Considerations for Section 404 of the Sarbanes-Oxley Act. PriceWaterhouseCoopers.
(20) The only regulator that I know of that requires “authenticity” to be one of the four explicit protection needs of IT systems is the German financial regulator. Yet even in Germany, auditors subsume authenticity under integrity and routinely give organisations that apply standard controls for CIA a waiver.
(21) See also “Cyber Security Buzzwords #1: Zero Trust. A terrible name for a bright idea”, Medium, 2021.
(22) Scott Rose et al., “Zero Trust Architecture”, NIST Special Publication (SP) 800-207, NIST, 2020.
(23) Some of the most commonly used readability scores include the Flesch-Kincaid readability test, the Gunning Fog Index, and the Coleman-Liau Index. See: Zamanian, Mostafa; Heydari, Pooneh, “Readability of Texts: State of the Art”, in: Theory & Practice in Language Studies, vol. 2, no. 1, 2012.
(24) https://www.zenity.io/about-us/

(25) Scialli, Nick, “Why I’m skeptical of low-code”, 30.12.2023, https://nick.scialli.me/blog/why-im-skeptical-of-low-code/

Updates

31/12/2023: added section 6: Third party management and lock-in prevention


Dr. Sybe Izaak Rispens

PhD on the foundations of AI, ISO27001 certified IT-Security expert. Information Security Officer at Trade Republic Bank GmbH, Berlin. Views are my own.