Generative AI is transforming business processes — but between efficiency promises and uncontrolled shadow use, a dangerous gap is widening. Here’s what companies need to know and do now.
Few technologies have swept through corporate corridors as quickly as generative AI. ChatGPT, Copilot, Le Chat — the tools are just a few clicks away, delivering texts, code, and analyses at lightning speed. But behind the impressive facade lurk risks that many companies still underestimate. A new white paper from the Platform for Learning Systems, written by experts from academia, industry, and security agencies, offers guidance: well-founded, practical — and backed by nine concrete recommendations.
Efficiency yes, but no sure thing
Generative AI refers to systems that generate new content based on massive datasets: texts, images, videos, code, music. At their core are so-called foundation models — above all Large Language Models (LLMs) such as those powering ChatGPT or Gemini. The appeal for businesses is obvious: routine tasks like answering emails, writing minutes, project planning, or documentation can be significantly accelerated with AI assistance.
But caution is warranted: efficiency gains and genuine relief for employees are not the same thing. The white paper makes clear that generative AI does not simply take over work processes — it often intensifies and compresses them. Tasks get faster but also more demanding. Those who lose control over AI-generated results risk new forms of overload rather than relief. Clear guidelines on transparency, qualification, and participation are therefore not nice-to-haves but prerequisites for successful AI deployment.
User or provider? A strategic fork in the road
A key question raised in the paper: do companies simply want to use generative AI — or also help shape it? The distinction between user and provider roles has far-reaching consequences. Those who permanently rely on solutions from large US tech corporations risk dependency: on costs, data sovereignty, and the pace of innovation.
Particularly noteworthy: Small Language Models (SLMs). They are more resource-efficient, can be run locally, and can be trained on confidential company data without sending it to external servers. For medium-sized businesses and public authorities that must operate in compliance with GDPR, SLMs may be the key to digital sovereignty. A European flagship project is Teuken 7B, an open-source model from Fraunhofer IAIS, trained in all 24 EU languages — freely usable, adaptable, and privacy-compliant.
Security: More than an IT problem
Generative AI is fundamentally changing the IT security landscape — in both directions. On one hand, it opens new defensive possibilities: security events can be analyzed more quickly, red-teaming simulations made more realistic, compliance checks partially automated. On the other hand, new attack vectors emerge: through manipulated prompts (so-called prompt injection), uncontrolled data leaks, or faulty model outputs with potentially serious consequences.
Particularly alarming: the so-called shadow deployment. Employees often use AI tools without the knowledge of the IT department, on personal devices, with company data. Once entered, sensitive information cannot be retrieved. Germany’s Federal Office for Information Security (BSI) explicitly warns: generative AI must be clearly regulated, controlled, and properly embedded within existing security structures.
Law and compliance: The AI Act is in force
The European AI Act has been in effect since August 2024 and is being gradually implemented as binding law from August 2025 onward. Companies are required to classify their AI applications by risk category — from minimal to unacceptable — and meet corresponding requirements for transparency, documentation, and human oversight. For so-called General-Purpose AI (GPAI), additional obligations apply: disclosure of training data, respect for copyright, technical documentation.
In addition, the GDPR and German Federal Data Protection Act continue to apply. Legal responsibility explicitly rests not only with the IT department but with company management. Only vetted and compliance-verified AI solutions may be deployed in an organization. Legal training for employees on the topic of “AI and the law” is therefore indispensable.
SWOT analysis instead of gut feeling
How can AI deployment be systematically assessed in specific fields of application? The white paper proposes SWOT analyses — and provides three exemplary assessments: for knowledge management, industrial production, and software development.
In knowledge management, the strengths are clear: faster research, linking of knowledge silos, and preserving the experiential knowledge of departing experts. The weaknesses are equally real: hallucinations that go unnoticed in knowledge databases, or an excessive focus on digitally captured knowledge at the expense of tacit know-how.
In industry, AI excels through standardizable processes and large structured datasets — ideal for fault analysis, CNC programming, or requirements engineering. Risks arise from black-box behavior, lack of traceability, and new cybersecurity attack surfaces. Software development benefits from faster code generation and relief from routine tasks, but struggles with quality risks, loss of know-how, and potential licensing conflicts.
Nine recommendations for practice
From all of this, the authors distill nine recommendations for action. In brief: companies should establish a culture of responsible AI use, enforce transparent usage rules, systematically evaluate pilot projects, and build security into AI systems from the ground up as a design principle. Data protection, AI governance, and legal compliance must be championed by senior management — not delegated and forgotten.
The white paper is not a call to panic, but it is a clear wake-up call: generative AI is powerful enough to advance or endanger companies — depending on how consciously it is used. The technology is advancing at a rapid pace. Those who set the right course today will have the edge tomorrow.

Dr. Jakob Jung is Editor-in-Chief of Security Storage and Channel Germany. He has been working in IT journalism for more than 20 years. His career includes Computer Reseller News, Heise Resale, Informationweek, Techtarget (storage and data center) and ChannelBiz. He also freelances for numerous IT publications, including Computerwoche, Channelpartner, IT-Business, Storage-Insider and ZDnet. His main topics are channel, storage, security, data center, ERP and CRM.
Contact via Mail: jakob.jung@security-storage-und-channel-germany.de