adesso Blog

Generative Artificial Intelligence (GenAI) is more than just a buzzword – and for good reason: in contrast to traditional AI, which primarily analyses data, GenAI is able to independently develop new content such as texts, images or code, and even new AI models. Nevertheless, a (usually) human trigger in the form of a prompt is necessary. The generation itself runs without further manual intervention.

The rapid rise of transformer-based architectures such as GPT-4, MidJourney and GitHub Copilot and the associated ease of access and usability, as well as the often high speed of response generation, have greatly accelerated the spread of AI. It has never been so easy to generate code, write convincing texts, compose songs or summarise information from different sources – and for all user groups, because for the first time, in-depth technical expertise is no longer required to use AI.

However, as this technology becomes more widespread, questions are also being asked about ethical implications, security risks and governance measures. How can the enormous opportunities offered by generative AI be exploited while at the same time minimising the undeniable downsides?

1. Generative artificial intelligence: the opportunity is greater than the pure automation potential

The fascination of GenAI lies primarily in its versatility and ease of application. It enables companies to go far beyond mere automation and to tap into completely new value chains or even markets. Processes can be optimised or automated in record time, creative content can be generated in unimagined diversity, and customer interactions can be personalised not only according to individual preferences but even according to current mood. GenAI opens up new possibilities in almost all industries, for example

  • in detecting cyber attacks by analysing patterns in security alerts and log files from a wide range of systems simultaneously, thus identifying threats at an early stage.
  • in software development, where GenAI examines source code for vulnerabilities and known security holes,
  • or in banks, where it can be used to automatically generate complex reports and balance sheets.

A major advantage is the scalability of generative AI when it comes to automating business processes. Companies can easily generate content in different languages and cultural contexts for different markets without drastically increasing costs. Design and architecture also benefit, as photorealistic visualisations can be created faster than ever before.

Generating results using generative AI is quick and easy thanks to the widespread availability of the technology and the high speed of data access and information sharing. That is why it is important to adesso to provide its customers with targeted support as early as possible so that they can make the most of the potential offered by this technology. However, a certain scepticism is emerging among DACH companies in this regard: while the US and China are acting as pioneers and moving forward according to the motto ‘first to act, first to create facts’, companies in German-speaking countries are pursuing a more considered, long-term approach.

This reluctance is often due to concerns about regulatory uncertainties, intellectual property rights and data protection regulations. While the enormous potential of the technology is recognised, many companies are also facing significant legal and cultural challenges in integrating GenAI into their workflows. However, the EU AI Act, which came into force in August 2024, creates a legal framework here, offering companies in the EU better orientation and laying the foundation for fair competition.

In summary, generative AI offers both public authorities and companies the opportunity to simplify and accelerate highly repetitive tasks while maintaining a consistent level of quality. This makes it possible to optimise existing (business and public authority) processes, shorten throughput times and ultimately also minimise risks in day-to-day business. At the same time, the technology can act as a catalyst for ideas and inspiration. In order to build practical expertise in dealing with GenAI, to evaluate the technology realistically and to implement your own use cases reliably, proof of concepts (PoC) and pilot projects are valuable instruments in the first step.


We support you!

adesso supports you in identifying business processes, defining suitable technology-agnostic approaches and developing specific use cases. This is how we pave the way for a successful entry into the world of generative AI.

Contact us


2. The dark side: risks and challenges of GenAI

Opportunities and risks go hand in hand. GenAI offers enormous potential for productive applications, but at the same time carries the risk of misuse for fraudulent or harmful purposes. For example, deepfakes can be used in marketing campaigns to spread misleading promises or damage brand reputations. AI-generated phishing emails deceive employees and can lead to data leaks or financial losses. Cybercriminals also use GenAI to create malware or to tailor social engineering attacks to specific company structures and their employees.

A particularly relevant risk for companies is the manipulation of GenAI applications through so-called prompt injections. Say a chatbot is tricked into giving offensive or incorrect answers, revealing confidential company information or harming customers by means of a cleverly formulated prompt. Such manipulated responses can cause significant reputational damage, undermine customer trust and, in the worst case, lead to legal consequences.

But it is not only external threats that are problematic for companies – significant difficulties can also arise even when used properly. Bias, i.e. the prejudice of systems, is a well-known problem of Large Language Models (LLM). Many AI systems are also a black box for the operators (operators in the sense of the EU AI Act, i.e. the role or responsible party in whose name the technology is used, such as the person named in the imprint as responsible for the website) of this technology: they deliver results that even experts cannot always understand, or only with considerable effort. However, when an AI grants loans in the financial sector or decides on applications in the human resources department, transparency is essential.

In addition to the inherent weaknesses of the technology itself, targeted attacks from outside also pose a danger. ‘Adversarial attacks’ on GenAI manipulate the models by making small changes to input data that are barely perceptible to humans but can lead the AI to completely wrong conclusions. Such attacks can significantly influence decision-making in critical areas such as healthcare or the judiciary.

These examples illustrate that the risks of GenAI in a corporate context can be many and serious. These are therefore challenges that companies must actively address and monitor in their own interest.

A detailed examination of the risks can be found in the recently revised study by the BSI ‘Generative AI Models - Opportunities and Risks for Industry and Public Authorities’, Federal Office for Information Security.

3. Governance as a solution: How companies can use GenAI safely

In view of the many opportunities and risks of generative AI, a well-thought-out governance strategy is essential to use the technology safely and responsibly in the company. Effective GenAI governance encompasses several dimensions and must take into account the entire lifecycle of a generative AI model, which will be explained below.

The foundations for safe and ethical GenAI use are laid as early as the planning phase. This includes defining clear goals and use cases, assessing potential risks, and establishing ethical guidelines. Central to this phase is the definition of a governance framework.

In the development and training phase, companies that develop their own GenAI must focus on selecting suitable model architectures, implementing security mechanisms, and conducting tests. To ensure the security and reliability of the model, the implementation of a security architecture, the increase of model robustness and the execution of model tests are of great importance. Extensive tests, including Red Teaming, are essential here. For companies that use a model from a provider, this and the following step are omitted.

A validation and customisation phase is used to check the model performance and fine-tune it to specific requirements. It is important to increase the transparency and explainability of the model and to implement mechanisms for error detection and correction. Relevant measures in this phase include implementing model monitoring, promoting explainability and defining error handling. Establishing transparency is key here.

The subsequent operational phase involves the secure integration of the model into the IT infrastructure, the implementation of access controls and the ongoing monitoring of model performance and security. Crucial in this phase are the provision of a secure infrastructure, the implementation of access controls, the validation of inputs and outputs and the planning of incident response.

Central governance strategies and measures must be implemented across all phases of development to ensure that GenAI is used securely and responsibly. In addition to technical measures, this also includes raising employee awareness and providing training.

A sound understanding of the opportunities and risks of GenAI and the relevant security aspects is the basis for responsible use.

In this context, the importance of establishing a governance framework from the outset should be emphasised once again. A clearly defined governance framework sets out guidelines, processes and responsibilities for GenAI use within the company – ideally before the first implementation.


We support you!

adesso supports companies as a trusted partner in the holistic implementation of AI governance. Based on a comprehensive risk analysis and field-tested countermeasures, adesso helps you develop customised governance frameworks and gain practical expertise in handling generative AI.

Contact us

Conclusion

Generative AI models are tools that open up new horizons for companies, from optimising existing processes to tapping into new business areas. However, to establish this technology securely and sustainably in your company, AI governance is essential. Well-thought-out governance strategies and proactive measures such as intelligent filter mechanisms, comprehensive input validation and robust security architectures enable companies to effectively counter the risks of generative AI. This way, governance becomes an enabler of secure innovation, rather than a stumbling block, allowing companies to move forward boldly and make the most of the opportunities offered by the technology.

Building internal expertise in dealing with GenAI and working with experienced partners are crucial in this regard. adesso is at your side as a trusted partner.

Final remarks and note

This blog article was written with the help of artificial intelligence – what does that say about its intensity, content and quality?

Picture Maximilian Wächter

Author Dr. Maximilian Wächter

Max studied Cognitive Science at the University of Osnabrück and is thus a trained neurobiopsychologist. After graduating, he completed his doctorate in the field of human-machine interaction in the context of AI agents in neuroinformatics. Even then, he specialised in the decision-making of AI systems and the question of how exactly it can be ensured that AI systems really do behave as desired. So the leap to AI governance and regulation was not far. As a consultant, Max was involved in a variety of strategic consulting projects. Here he acted as a technical expert for the implementation of regulatory requirements, in particular in large projects to fulfil the legal requirements of the AI Act and the Data Act. Particularly in the automotive industry, he was involved in the development of large data provision platforms and AI governance initiatives. His professional focus is thus on the interface between artificial intelligence, regulatory frameworks and their strategic implementation in companies.

Picture Christian Hammer

Author Christian Hammer

As Head of AI Advisory, Christian Hammer manages AI projects at the interface of innovation, regulation and governance. His focus: to integrate generative AI securely, efficiently and sustainably into companies. He guides organisations through the opportunities and risks of this technology, develops practical AI governance strategies and navigates regulatory requirements. As the author of a book on digital transformation, he knows that progress without control is risky – which is why he combines technological excellence with strategic foresight. In this article, he highlights how companies can use GenAI profitably without falling into legal or ethical traps. If you don't control AI, it will control you – so it's high time to take a clear look at the opportunities and risks!