Barriers and Remedies to Knowledge Management (KM) – Applications to the Life Science Industry

Knowledge Management (KM) is too often packaged as a purely technical endeavor: search engines, taxonomies, repositories of guidelines and, nowadays, increasingly AI-powered dashboards. These tools may elegantly surface information, but the real determinants of KM success are psychological. What matters is how people decide which knowledge to acknowledge, share, trust, and apply. The most advanced KM infrastructure cannot compensate for the human beliefs, biases, blind spots, and entrenched habits that dictate whether knowledge flows or grinds to a halt.

In competitive environments where information is obscured and it’s difficult to distinguish fact from fiction, knowledge becomes a crucial form of capital. The capacity to access, interpret, and apply it faster and more accurately than competitors brings a decisive advantage. Yet in an era oversaturated with digital information, meta-knowledge (i.e. knowing what we know and what we do not) becomes the differentiator. True knowledge is grounded in evidence, obtained through scientific thinking and experimental methodologies, whereas opinionated knowledge comes from personal beliefs, culture, and life experiences; it is often biased, incomplete, and misleading. People frequently fail to distinguish between the two, sometimes because they simply “don’t know what they don’t know,” sometimes because selective framing is more comfortable.

Barriers to Efficient Knowledge Management (KM)

We don’t want to know that we don’t know what we don’t know

The most insidious barrier is the realm of unknown unknowns (meta-cognitive blind spots that hide our gaps from us). The Dunning–Kruger effect demonstrates that individuals with the least competence often display the greatest confidence, largely because they lack the self-awareness to question their abilities and tend to embrace certainties that are not grounded in fact or accurate self-assessment. Doubt (and the ability to be comfortable with it) is central to the scientific method. It stems from recognizing that much remains unknown and that certainty is risky, as today’s knowledge may be overturned tomorrow. In KM, the absence of doubt fosters premature closure: the belief that “we know enough,” often propped up by incomplete or unverified documentation, superficial validation, and resistance to outside perspectives. When such overconfidence becomes ingrained across an organization, it undermines adaptability, limits the capacity to question assumptions, stifles innovation, and leads to sub-optimal decision-making.

Hiding, Faking and Lying

“There are no facts, only interpretations” (Nietzsche, Nachgelassene Fragmente), and the more complex the subject, the truer this becomes. We rarely grasp the full extent of the reality of things; every genuine discovery often spawns ten new questions. As the world’s entropy (complexity) grows (including the digital complexity of our businesses; see https://www.biomedima.org/navigating-complexities-in-life-science-businesses/), we face a choice: plan for complexity to minimize damage when it strikes, or wait and react in the moment. I personally prefer to prepare, building resilience in the hope it yields an advantage over those caught unready. Yet acknowledging “we don’t know what we don’t know” can provoke discomfort (panic, shame, or defensiveness) in competitive environments (e.g. most of the organizations we work in). Without cultural norms that reward vulnerability, curiosity, and honest inquiry (see “Psychological Safety” below) and promote critical thinking, these defensive behaviors harden, eroding adaptability and innovation. When faced with difficult questions, we too often hide, fabricate, or mislead, choosing the comfort of appearance over the discomfort of truth. In KM and beyond, resisting that impulse is the first step toward genuine learning in a world that only grows more complex.

Technology to the rescue

For the past two centuries, since the discovery of petroleum supercharged the development of Western civilization, the idea of progress has been almost entirely reduced to “technological advancement”. This reductionist view has fueled what I call “techno-messianism,” a belief system I have explored in my Tech Executive article and the BioMedima article, where technology is seen as the solution to all human challenges. When they become aware of their own knowledge gaps, people may focus on protecting their image by echoing consensus, recycling familiar ideas, or feigning expertise. In the era of generative AI, it has never been easier to present borrowed thinking as original, nor more dangerous to trust unexamined output. The inherent biases our brains rely on erode effective KM: automation bias drives blind acceptance of AI-generated results without verification, leading to errors of both omission and commission, while algorithm aversion does the opposite, discarding otherwise reliable tools after a single visible failure. Many other biases compound the problem, such as authority bias (the tendency to place undue trust in AI because it speaks confidently) and confirmation bias (accepting AI outputs more readily when they align with pre-existing beliefs). Trust in technology must be carefully calibrated: harnessing its speed and scale while maintaining human oversight, contextual judgment, and the discipline to scrutinize and validate critical outputs.

Learning Is Effortful

Learning is an uphill climb, demanding time, effort, and often money. A key cost is attention, already scarce in today’s attention economy, where social media and digital platforms compete to capture and hold it. If we start the day with an attention level of 10, it steadily declines with each stimulation, distraction, or multitasking demand, reaching a level of 1 after a full day of work. In large organizations where multitasking is constant, this leaves little mental space for genuine learning. As a result, many people complete mandatory trainings with minimal engagement: clicking through modules, checking the boxes, earning the certification, and forgetting the content almost immediately. Effective KM depends on “desirable difficulties” such as spaced practice, retrieval testing, and interleaving (techniques that feel hard but deepen retention; see research from https://bjorklab.psych.ucla.edu/). In a global Western culture that treats comfort and convenience as the ultimate goal, learning effort is framed as a constraint and thus an inefficiency, and the meta-skill of learning how to learn is eroding. Over-reliance on technology fills the gap with shallow recall, carrying its own biases. Social dynamics add further friction: the black sheep effect can discourage those with unique insights from speaking up, especially in organizations that overemphasize exploitation of known processes (“We have always been doing it like this, why change?”) over the risky exploration of new ways of working.

Hindering Efficiency and Innovation

The individual and collective behaviors described above cascade into innovation attrition and reduced engagement with uncharted territories. Possessing a new or unique piece of knowledge (one that can genuinely improve our understanding of our complex world) offers a competitive advantage. To be effective, however, such knowledge must be tested (in line with the scientific method) and properly documented (by capturing contextual information as metadata) so others can grasp its value. In large organizations, managerial blind spots often foster defensive routines, prompting collaborators to withhold knowledge. This drives effort avoidance, reinforced by miscalibrated trust in tools and a drift toward conformity. Much of people’s capacity (or attention, as mentioned above) is consumed by routine bureaucratic, procedural, and political tasks, causing promising ideas to be overlooked or abandoned before they can be explored or tested. Research links knowledge hiding directly to lower performance and slower adoption of new methods (knowledge inertia), while structural inertia and entrenched core rigidities further stifle innovation. Sustained novelty requires an implementation climate that connects all stages of the idea journey, from conception to deployment, through strong networks, open knowledge sharing, and a clear bias for action.

Examples from the Life Science Industry

In the pharmaceutical industry, breakdowns in Knowledge Management (KM) have a clear, negative impact, resulting in inefficiencies throughout the entire value chain. The sector already faces challenges with data management, often relying on technology-centric solutions that fail to handle the inherent digital complexity of the pharmaceutical business (see Navigating Digital Complexity in Life Science Businesses). As a result, information management (where data is modeled and contextualized to give it meaning) is only just beginning to take hold. KM, which should connect information to decision-making and enable advanced decision intelligence (increasingly powered by AI), has yet to fully deliver its potential benefits in this environment. Consequently, promising opportunities for improved efficiency and innovation are often missed before they can be realized.

The Research-Development Gap

Translational medicine offers a vivid illustration of a common cognitive blind spot: while knowledge flow from bench to bedside is well established, the reverse (where clinical observations refine preclinical models) is often underused, not from lack of data, but from human reluctance to revisit “settled” paradigms. In one direction, cells/animals → biology → patients, the development of imatinib for chronic myeloid leukemia shows how a precise mechanistic insight can deliver dramatic patient benefit when faithfully advanced. In the other direction, patients → biology → drug, the discovery of rare PCSK9 mutations in humans reshaped our understanding of lipid biology and directly inspired LDL-lowering therapies. Both pathways demonstrate that science progresses fastest when it embraces two-way translation. Ignoring this reciprocity can be costly: in the TGN1412 trial, reassuring primate data masked a human-specific immune cascade, triggering life-threatening cytokine storms in volunteers. A deliberate feedback loop (integrating cross-species data, mechanistic detail, and historical trial insights) might have revealed the danger earlier. The obstacle is not only technical but also psychological, rooted in cognitive inertia, institutional silos, and discomfort with re-examining completed work. KM practices can counter these biases, ensuring every insight, whether born in the lab or the clinic, translates into safer, smarter medical innovation.

The Regulatory-Manufacturing Gap

In many organizations, Regulatory Affairs and Manufacturing departments work with mismatched knowledge of the same asset—the medicinal product—yet human psychology often amplifies these gaps. People tend to cling to their own terminologies, mental models, and “the way we’ve always done it,” resisting alignment even when shared standards exist. As a result, definitions of terms—and the content and structure of related data sets—differ significantly across systems, workflows, and individual perspectives. While the underlying concepts are the same, as captured in standards like ISO IDMP 11615 (Medicinal Product Identification) and 11616 (Pharmaceutical Product Identification), discrepancies persist between unstructured data (documents) and structured data (databases and spreadsheets). This leads to divergent ways of describing regulatory-specific products and substances versus manufacturing-focused representations of ingredients, bulk materials, packaged products, and finished goods. Siloed proprietary systems (RIM, QMS, and others) and inconsistent terminology further entrench the divide. Such misalignments can stall marketing approvals or, in the worst cases, trigger costly product recalls. Overcoming this is not purely a technical exercise—it requires addressing the human reluctance to adapt and apply learning. Robust KM practices, with harmonized definitions, shared governance, and integrated systems, can bridge these gaps, reduce cognitive inertia, and enable consistent, enterprise-wide product knowledge.
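
As an illustration of what harmonized definitions could look like in practice, here is a minimal sketch of a shared product model that bridges a regulatory-facing view and a manufacturing-facing view of the same asset. The class and field names (`MedicinalProduct`, `Substance`, `terminology_gaps`, the identifiers) are hypothetical and only loosely inspired by the IDMP concepts cited above; this is not an implementation of ISO 11615/11616.

```python
# Hypothetical sketch of a shared product model bridging regulatory and
# manufacturing views of the same asset. Names and identifiers are
# illustrative, not ISO IDMP-conformant structures.
from dataclasses import dataclass, field


@dataclass
class Substance:
    preferred_name: str                                       # harmonized, enterprise-wide name
    regulatory_synonyms: list = field(default_factory=list)   # names used in submissions
    manufacturing_codes: list = field(default_factory=list)   # ERP/QMS item codes


@dataclass
class MedicinalProduct:
    product_id: str              # single enterprise identifier for the asset
    regulatory_name: str         # name used in submissions and RIM systems
    manufacturing_name: str      # name used on batch records and bills of materials
    active_substances: list = field(default_factory=list)

    def terminology_gaps(self):
        """Flag substances with no manufacturing code, i.e. concepts defined
        for regulatory use but not yet mapped to the shop floor."""
        return [s.preferred_name for s in self.active_substances
                if not s.manufacturing_codes]


# Usage: one shared record instead of two siloed definitions.
paracetamol = Substance("paracetamol",
                        regulatory_synonyms=["acetaminophen"],
                        manufacturing_codes=[])               # mapping not yet done
product = MedicinalProduct("PRD-001", "PainRelief 500 mg tablets",
                           "PR500 bulk tablet, blistered", [paracetamol])
print(product.terminology_gaps())                             # -> ['paracetamol']
```

The point of the sketch is the single `product_id` and the explicit gap check: misalignment becomes visible and fixable instead of living silently in two systems.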

Knowledge Transfer Gap

In many siloed organizations, knowledge is not fully captured and does not flow easily. For instance, in the Chemistry, Manufacturing, and Controls (CMC) domain, poor knowledge transfer can undermine compliance, delay regulatory approvals, and put patients at risk. Documentation gaps, fragmented systems, and staff turnover erode tacit knowledge, making complete and consistent submissions difficult. Yet a deeper obstacle is psychological: human reluctance to adapt, share, and apply lessons learned, often reinforced by comfort with familiar routines and siloed ownership of information. The GSK Cidra, Puerto Rico (SB Pharmco) case shows the cost: repeated GMP failures (microbial contamination, tablets splitting with the risk of absent API or loss of controlled release, incorrect API ratios, and frequent product mix-ups) were compounded by discrepant documentation, inconsistent batch records, and poor cross-product communication. Critical compliance warnings existed but were scattered across unconnected systems, with no mechanism to aggregate, analyze, and escalate them. This cognitive inertia, combined with fragmented knowledge systems, resulted in widespread recalls, production shutdowns, and a criminal and civil settlement in 2010. KM practices (including standardized authoring, centralized repositories linking data to the documents that report it, electronic batch records with integrated deviation/CAPA tracking, and formalized lessons-learned processes) could have identified patterns earlier and prevented escalation. In an AI era where machines draft and review content, overcoming human resistance to change is as vital as the technical tools themselves.
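
To make the "aggregate, analyze, and escalate" idea concrete, here is a minimal sketch of pooling deviation records from separate systems and flagging recurring issues that individually looked minor. The record format, the threshold, and the `escalate_recurring_deviations` function are hypothetical illustrations of the principle, not a description of any actual QMS.

```python
# Hypothetical sketch: pooling deviation records from separate systems
# and flagging recurring issue types that individually looked minor.
from collections import Counter

# Records as they might sit, disconnected, in different systems.
qms_deviations = [
    {"site": "Site A", "product": "P1", "issue": "product mix-up"},
    {"site": "Site A", "product": "P2", "issue": "tablet splitting"},
]
batch_record_flags = [
    {"site": "Site A", "product": "P1", "issue": "product mix-up"},
    {"site": "Site A", "product": "P3", "issue": "product mix-up"},
]


def escalate_recurring_deviations(*sources, threshold=2):
    """Aggregate deviation records across sources and return issue types
    seen at least `threshold` times, i.e. candidates for escalation."""
    counts = Counter(rec["issue"] for source in sources for rec in source)
    return {issue: n for issue, n in counts.items() if n >= threshold}


# The pattern only becomes visible once the silos are joined.
print(escalate_recurring_deviations(qms_deviations, batch_record_flags))
# -> {'product mix-up': 3}
```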

Improving and Leveraging KM

The path to effective KM in medium and large organizations (particularly in the life sciences) is often long and cumbersome. A critical first step is for both individuals and the organization to acknowledge that _they don’t know what they don’t know_ and to take deliberate action to close those gaps. This requires, first, moving beyond a technology-centric mindset and counteracting the persistent incentives that keep organizations locked in tool-driven thinking. Instead, they must adopt a data-centric mindset (https://datacentricmanifesto.org/). Second, they need to recognize the importance of context (as described with metadata) in making data meaningful (https://doi.org/10.11113/oiji2023.11n1.235); context metadata must be handled with the same rigor and care as the data itself. Only then, and in alignment with the DIKW pyramid, can they truly address KM. This means ensuring decision intelligence by linking information (data in context) to the problems that need solving and the decisions made to solve them. The following section outlines practical proposals to implement KM effectively in life science organizations, transforming isolated data into actionable knowledge that drives informed, high-quality decisions.
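
As a small illustration of handling context metadata with the same rigor as the data itself, the sketch below wraps a raw data point with its provenance and links it to the decision it supports, in the spirit of the DIKW progression. All class and field names (`ContextualRecord`, `decision_ref`, the example values) are hypothetical.

```python
# Hypothetical sketch: a data point becomes 'information' once it carries
# its context, and supports 'knowledge' once it is linked to a decision.
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass(frozen=True)
class ContextualRecord:
    value: float                           # the raw data point
    unit: str                              # without a unit the value is ambiguous
    method: str                            # how it was obtained (assay, model, survey...)
    source: str                            # system or study of origin (provenance)
    recorded_on: date
    decision_ref: Optional[str] = None     # the decision this record informs

    def is_decision_ready(self) -> bool:
        """Usable for decision intelligence only if context is complete
        and the record can be traced to a decision."""
        return all([self.unit, self.method, self.source, self.decision_ref])


record = ContextualRecord(value=7.2, unit="ng/mL", method="LC-MS assay",
                          source="Study ABC-123",
                          recorded_on=date(2024, 5, 2),
                          decision_ref="Dose selection, Study ABC-124")
print(record.is_decision_ready())          # -> True
```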

Gamification

Gamification involves applying game elements (such as points, levels, and challenges) to non-game contexts like KM to enhance engagement, participation, and learning. Research indicates that while gamification can increase motivation and knowledge sharing, its success relies on thoughtful design; poorly executed schemes may reduce learning to mere badge-collecting, undermining intrinsic motivation. Effective gamification in KM should reward effort and key learning behaviors, not just end results, and use principles like _desirable difficulty_ to encourage growth. Incorporating spaced, retrieval-based activities and immediate micro-rewards (such as feedback and recognition, as in video games) can reinforce positive actions through a pleasure (dopamine-driven) reinforcement loop. However, it’s crucial to avoid the over-justification effect by ensuring external rewards don’t overshadow the internal drive to learn. When balanced correctly, gamification can make learning more engaging, rewarding, and a sustainable part of KM culture.
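
To give one concrete flavor of how spaced, retrieval-based practice and micro-rewards might be combined, here is a toy scheduling sketch. The doubling rule, point values, and function names are hypothetical simplifications for illustration, not a validated learning algorithm.

```python
# Hypothetical sketch: a toy spaced-retrieval scheduler with micro-rewards.
# A successful recall earns points and pushes the next review further out
# (desirable difficulty); a failed recall resets the interval.
from datetime import date, timedelta


def next_interval(interval_days: int, recalled: bool) -> int:
    """Double the review interval after a successful retrieval, reset otherwise."""
    return interval_days * 2 if recalled else 1


def review(card: dict, recalled: bool, today: date) -> dict:
    """Update one learning item and award small, immediate points."""
    card = dict(card)
    card["interval_days"] = next_interval(card["interval_days"], recalled)
    card["due"] = today + timedelta(days=card["interval_days"])
    card["points"] += 10 if recalled else 2    # reward the attempt, not only success
    return card


card = {"topic": "IDMP substance definitions", "interval_days": 1,
        "due": date.today(), "points": 0}
for outcome in [True, True, False, True]:      # a short practice history
    card = review(card, outcome, date.today())
print(card["interval_days"], card["points"])   # -> 2 32
```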

Psychological Safety

Effective KM starts with revealing unknowns through tools like self-assessments, peer reviews, and “red-team” exercises, which help uncover blind spots in knowledge and processes. This process thrives in a culture of psychological safety, where employees feel comfortable admitting gaps in understanding without fear of judgment. Treating ignorance as an opportunity for growth encourages open information sharing. Equally vital is breaking defensive routines: behaviors that shield individuals from embarrassment but hinder organizational learning. Leaders set the tone by modeling vulnerability and admitting their own mistakes, which fosters a culture of candor. To further reduce knowledge hoarding, organizations should embed transparency into performance metrics, accountability structures, and recognition systems. When openness is rewarded and concealment discouraged, teams collaborate more effectively, challenge ideas, and refine collective knowledge, ultimately driving better decision-making and innovation across the organization.

Calibrate Trust in Technology

In modern KM frameworks, AI and automation can greatly enhance information access, but their value hinges on calibrated trust, avoiding both automation bias (blindly accepting outputs) and algorithm aversion (rejecting tools after mistakes). Achieving this balance requires making technology transparent through features like confidence scores, explainable outputs, and clear data provenance, enabling users to assess reliability. For critical decisions, outputs should be cross-checked with independent sources or expert review. Comprehensive training is vital, ensuring teams recognize and mitigate automation bias and algorithm aversion with hands-on practice in critical evaluation. By embedding transparency, verification, and informed skepticism into workflows, organizations can ensure that AI and automation act as trusted collaborators in decision-making, rather than unquestioned authorities or ignored tools, thereby safeguarding both accuracy and confidence in KM processes.
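
One way to operationalize calibrated trust is to gate AI outputs on reported confidence, decision criticality, and provenance, routing everything else to human review. The thresholds and the `route_ai_output` function below are a hypothetical illustration of the principle, not a recommended policy or a real tool's API.

```python
# Hypothetical sketch: routing an AI output based on its reported confidence
# and the criticality of the decision it supports.
def route_ai_output(confidence: float, critical: bool,
                    provenance_known: bool) -> str:
    """Return a handling decision for one AI-generated answer."""
    if not provenance_known:
        return "reject: source data cannot be traced"
    if critical:
        # Critical decisions always get independent human verification,
        # however confident the model sounds (guard against authority bias).
        return "human review + cross-check with independent source"
    if confidence >= 0.9:
        return "accept with spot-check"
    return "human review"


# Usage examples
print(route_ai_output(0.97, critical=True, provenance_known=True))
print(route_ai_output(0.95, critical=False, provenance_known=True))
print(route_ai_output(0.60, critical=False, provenance_known=True))
```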

Critical Thinking and Exploration

Critical thinking is an essential part of KM; not just for knowing facts, but for evaluating their validity, context, and implications. The APA Delphi consensus defines critical thinking as a combination of skills like analysis, inference, and evaluation, along with dispositions such as open-mindedness and truth-seeking, all of which can be explicitly taught. Research shows that intelligence alone does not ensure sound reasoning, so KM systems should support judgment with tools like probabilities, evidence standards, and transparent logic. To nurture a culture of inquiry, organizations can implement exploration sprints (protected periods for experimentation) and rotate devil’s advocate roles to normalize dissent and encourage safe challenges to prevailing views. Rewarding risk-taking and openly sharing failures, including negative results and lessons learned, increases transparency, prevents repeated mistakes, and accelerates collective progress. This approach broadens the knowledge base and signals that innovative thinking is valued, making exploration a celebrated driver of organizational growth.
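
As a tiny example of supporting judgment with explicit probabilities rather than gut feel, the sketch below applies Bayes' rule to update confidence in a hypothesis after a new piece of evidence. The numbers are made up purely for illustration.

```python
# Hypothetical sketch: making a belief update explicit with Bayes' rule,
# so the reasoning behind a judgment is transparent and challengeable.
def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Posterior probability that a hypothesis is true given new evidence."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator


# Example: we believe a proposed process change helps with probability 0.30;
# a pilot shows an improvement that is more likely if the change really works.
posterior = bayes_update(prior=0.30,
                         p_evidence_if_true=0.80,
                         p_evidence_if_false=0.25)
print(round(posterior, 2))   # -> 0.58: the evidence shifts, but does not settle, the question
```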

Conclusion

Barriers

The most persistent obstacles to effective KM are rooted in the human element and its relationship with technology, or technical progress in general. KM outcomes are, at their core, psychological outcomes: while infrastructure enables access, it is the human system (its incentives, norms, and tolerances) that determines whether knowledge empowers organizations. Cognitive blind spots, overconfidence, and defensive routines commonly observed in large organizations suppress curiosity and transparency. Without cultures that embrace uncertainty and critical thinking, even the best tools risk becoming liabilities, feeding innovation attrition, slowing adoption of new methods, and leaving valuable insights underused.

Remedies

Overcoming these barriers requires engineering environments for learning: spaces that surface unknowns, dismantle defensive routines, and reward cognitive effort. This includes embedding psychological safety, structured dissent, retrieval practice, and calibrated trust in technology so that healthy cognitive behaviors become second nature. Harmonized definitions, context-rich metadata, and transparent governance align people and systems, while “desirable difficulties” in learning ensure deeper retention and adaptability.

When these cultural and structural elements come together, KM evolves from a static archive to an adaptive ecosystem. By weaving together incentives for risk-taking, smart workflows, and friction-aware learning, organizations ensure knowledge is not merely stored but continually questioned, reimagined, and applied. In such environments, human insight and contextualized data flow seamlessly, enabling teams to learn and reinvent as rapidly as the world demands, securing sustained creativity, resilience, and a lasting competitive edge in the digital era.