The Illusion of Progress

AI is everywhere. In presentations, strategy decks, performance reviews. It writes faster than we do, answers questions instantly, and seems to make organizations more efficient overnight. The promise is seductive: more output, less effort, better results. And yet, something feels off.

Many leaders sense it but struggle to articulate it. Teams deliver more, but conversations feel thinner. Decisions are faster, but confidence is lower. People rely on tools they don’t fully understand and quietly wonder whether they could still perform without them.

This is the hidden shift of the AI era: outputs are replacing capability.

Producing results faster does not mean individuals or organizations are becoming smarter. When thinking is outsourced and learning is bypassed, capability erodes quietly. Nothing fails at first. KPIs remain green because they measure outputs, not the logic used to obtain them. Then the context shifts, and systems optimized for familiar conditions begin to fracture when faced with the unknown.

This is the real divide AI is creating: not between companies that adopt it and those that don’t, but between those that use it to upgrade themselves and those that become faster versions of themselves, weaker underneath.

Why the Learning Journey Still Matters

Learning has never been about getting the right answer. It is about the journey required to reach it.

Human brains follow a simple but unforgiving cycle: we encode information, store it, recollect it in new situations, and refine it through metacognition, reflecting on our assumptions and mistakes so we can self-correct and adapt. This is how judgment is built. This is how expertise forms.

AI disrupts this cycle in subtle but powerful ways. When answers arrive instantly, encoding, storage, and correction become superficial. The practice of thinking fades (an amplified version of the Google effect), while recollection is replaced by yet another prompt. Reflection feels unnecessary. The task is completed; we feel satisfied but unfulfilled. Worse still, the reward loop is reinforced: using AI feels successful, encouraging repetition, even as cognitive effort and learning steadily decline.

At first, this feels like progress: good-looking results with less effort and less friction. But over time, something unsettling happens. People hesitate when the tool is unavailable. Confidence erodes. Judgment weakens. Capability has been outsourced without anyone explicitly deciding to do so.

Organizations follow the same path as individuals. Real organizational learning requires debate, shared understanding, and time to digest and assimilate new knowledge, to make sense of it. When AI collapses that journey into instant conclusions, decisions accelerate but learning does not accumulate. The organization moves faster yet understands less.

Neuroscience confirms the risk. Studies now show that repeated reliance on AI assistants is associated with weaker connectivity and lower engagement in brain networks linked to memory, attention, and executive control* (what researchers call cognitive debt). Like technical debt, it remains invisible until the system is stressed.

* Kosmyna, N. et al. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. arXiv:2506.08872.
* Jiang, T., Wu, J., & Leung, S. C. H. (2025). The cognitive impacts of large language model interactions on problem solving and decision making using EEG analysis. Frontiers in Computational Neuroscience, 19. doi:10.3389/fncom.2025.1556483.

The Cost of Skipping the Learning Journey

Learning is not a side effect of work. It is the work, the journey to the result that makes performance repeatable.

Human expertise is forged through struggle, shaped by feedback, refined through iteration, and grounded in context. Remove these elements, and learning doesn’t slow, it collapses. Outputs may still look acceptable, but the system underneath is weakening.

    • At the individual level, this shows up as quiet erosion: critical thinking dulls, judgment becomes hesitant, confidence fades. People start asking not “What do I think?” but “What does the tool say?”
    • At the organizational level, the damage is structural. Knowledge becomes fragile and non-transferable, embedded in systems rather than people. Dependence on vendors increases. Adaptability declines. The organization performs, until the context changes and the missing capability is suddenly exposed. When that moment arrives, there is no quick fix.

Assistance or Development: The Choice Most Organizations Don’t Realize They’re Making

AI creates a divide not through what it can do, but through how it is used.

    • AI to replace: In one path, AI acts as a solution provider only. It delivers answers, decisions, and content with impressive speed. Human cognitive effort is minimized. Learning moments disappear. Performance improves, briefly. Over time, dependency grows, and capability shrinks.
    • AI to develop: In the other path, AI is used as a trainer. It exposes reasoning. It challenges assumptions. It explains trade-offs. It slows people down just enough to make them better. Here, AI strengthens judgment instead of replacing it, confirming that results are correct rather than simply supplying them.

Most organizations believe they are on the second path. Most are not. The difference is subtle but decisive. Solution-provisioning AI optimizes outputs. Trainer-driven AI builds competence.

One creates convenience. The other creates resilience. Only the second approach scales sustainably.

When Efficiency Becomes a Trap

AI delivers what organizations ask for: speed, efficiency, measurable productivity gains. Processes accelerate. Dashboards improve. Success feels tangible. But efficiency is not robustness.

Robust organizations can maintain performance under stress, shifting paradigms, and unexpected “black swan” events. They can reason beyond predefined rules. They adapt when assumptions break. AI-driven efficiency, when it replaces thinking, does the opposite: systems perform well in stable conditions and fail sharply when variability increases.

This is the productivity paradox of AI. Decisions are faster, but thinner. Problems are solved quickly, but without depth. The organization becomes operationally efficient yet strategically brittle as it gradually loses its unique, tacit corporate knowledge.

A shifting competence baseline emerges. People and organizations know less than before but don’t realize it (an AI-enhanced Dunning-Kruger effect). Performance appears stable until it isn’t.

Why AI Often Prevents Innovation and Robustness

AI is often labelled an innovation engine, but in reality, it mostly recombines the past.

Most AI systems generate novelty by reshuffling existing patterns. The output may look new, but it is fundamentally derivative. When organizations rely heavily on AI-generated ideas, they risk confusing variation with innovation.

True innovation requires friction: deep understanding, original insight, and the courage to challenge assumptions and think outside the box, the box in this case being the AI training data set. These qualities emerge from learning and experimentation, precisely what disappears when AI supplies ready-made solutions.

This also undermines robustness. When situations arise that fall outside historical data, AI struggles. If human capability has been displaced instead of strengthened, organizations lack the internal capacity to respond creatively under pressure.

Efficiency increases. Possibility shrinks.

Knowledge Management: The Missing Safeguard

This is where Knowledge Management (KM) becomes decisive. KM is not about storing information. It is about preserving reasoning, context, and decision logic and making them explicit, explainable, and transferable. In AI-rich environments, KM protects the learning journey.

A strong KM framework forces AI to teach, not just answer. Instead of presenting conclusions, AI must expose how and why outcomes were produced, surface assumptions, and acknowledge uncertainty. The interaction shifts from “Here is the result” to “Here is how to think.”

Without KM, AI accelerates forgetting. With KM, AI accelerates learning.

Used this way, AI becomes a learning companion: questioning, explaining, challenging. Skills develop progressively. Reflection becomes part of work. Capability compounds over time.

The Mindset Shift Most Leaders Avoid

None of this happens without a mindset shift.

The dominant AI narrative rewards speed, automation, and good-sounding, mainstream, often overly generic outputs. Developing new capability is slower, harder to measure, and uncomfortable to prioritize. Choosing learning over convenience requires courage: the courage to resist mainstream metrics and short-term pressure.

It requires redefining performance. Not “What did we produce?” but “What did we learn, retain, and improve?”

This redefines leadership itself. Leaders become capability architects, not output maximisers. Judgment quality matters more than speed. Knowledge Management moves from support function to strategic core.

Without this shift, AI will always follow the path of least resistance: convenience over competence.

Illustration: AI in GxP Deviations – Answering vs. Building Capability

Consider a common scenario in the pharmaceutical industry: a deviation from a GxP process triggers a root cause analysis (RCA) and the definition of corrective and preventive actions (CAPAs). The quality and robustness of this process directly impact patient safety, regulatory compliance, and business continuity.

1. Using AI for solution-provisioning only

Used in a productivity-enhancement mode, AI is given simple prompts: “What is the root cause of this deviation?” and “Suggest CAPAs.”

Based on available artifacts (historical deviations, batch records, and similar cases), the system produces a plausible answer accompanied by ready-made CAPAs. The investigation is completed faster, documentation is compliant (assuming the AI-enabled solution has been thoroughly designed), and the deviation is closed.

Yet experts have not performed any analytical work. The 7 Whys, the fishbone analysis, and the reasoning behind causal relationships are skipped. The conclusion exists, but understanding does not. 

When similar deviations recur, the same pattern repeats: the case is closed, documentation is produced, but no human learning takes place. As long as the process remains stable, this appears to work.

The moment the process changes, however, the illusion collapses. The more disruptive the change and the further conditions drift from the historical baseline, the less reusable the AI solution is. It must be retrained and re-optimised, yet the very expertise required to do so is gone.

An AI trained to deliver RCA and CAPA on a well-established process encodes past patterns, not understanding. When the process evolves, those patterns no longer apply. Robustness, by contrast, requires adaptiveness: multiple ways to reach the same outcome, alternative reasoning paths, adaptive judgment, and the ability to respond when conditions are ambiguous or unprecedented.

In a fast-changing, VUCA environment, organizations that rely on narrowly optimized AI models become fragile, while those that preserve human capability retain the flexibility to adapt when no model fits.

2. AI used as a trainer for improvement

Used in a development-oriented mode, AI plays a very different role. Instead of delivering the root cause, it guides the investigator through the RCA journey. It prompts each step of the 7 Whys, challenges incomplete answers, proposes dimensions for a fishbone analysis (process, people, equipment, environment), and flags gaps in the reasoning.

For CAPAs, it tests whether proposed actions truly address the root cause or merely its symptoms. The investigation takes longer, but knowledge is created. The team learns how the process actually behaves, and future deviations become less likely and easier to resolve.

Both approaches close the deviation. Only one builds capability.

In highly regulated, high-stakes environments like drug research, development and manufacturing, this distinction is critical. AI that replaces thinking optimizes compliance in the short term but weakens robustness over time. AI that develops people strengthens judgment, embeds learning into quality systems, and turns every deviation into an opportunity to improve the process, not just document it.

This is the essence of the AI divide: answers close cases; learning prevents them.

The Future Belongs to the Learners

AI will not decide who wins and who falls behind. Mindset will decide; Knowledge Management will enable it.

Organizations that use AI to develop people will build judgment, resilience, and real innovation. Those that use it to bypass learning will appear efficient until reality intervenes.

AI does not invent the future. It rearranges the past. The future belongs to those who still know how to learn.

The real divide of the AI era is not technological. It is human.