
Behavioural Sciences in the Age of AI

Unpublished Draft Paper (Satish Pradhan, October 2025)


Abstract

In an era where artificial intelligence (AI) is transforming work, organisations must blend insights from

behavioural science with technological innovation to thrive. This paper explores “Behavioural Sciences in the

Age of AI” by integrating foundational theories of human behaviour with emerging themes in AI, human–

machine teaming, organisational transformation, and dynamic capability development. We provide a historical

context for the evolution of behavioural science and its intersection with technology, from early socio-technical

systems thinking to modern cognitive science and behavioural economics. Key theoretical contributions are

discussed – including Herbert Simon’s bounded rationality, Daniel Kahneman and Amos Tversky’s heuristics

and biases, Gerd Gigerenzer’s ecological rationality, Fred Emery and Eric Trist’s socio-technical systems, David

Teece’s dynamic capabilities, Edgar Schein’s organisational culture, Erik Brynjolfsson and Andrew McAfee’s

“race with the machine” paradigm, Gary Klein’s naturalistic decision-making, and Luciano Floridi’s digital

ethics – highlighting their relevance in designing human–AI collaboration. We build upon an internal strategic

framework – the composite capability paradigm and 2025+ capability stack – which posits that future-proof

organisations must orchestrate human intelligence, machine cognition, and agile interfaces within a purpose-

driven, values-grounded architecture. By situating this paradigm in the broader academic literature, we

demonstrate how purpose and trust, ethical AI, digital fluency, human agency, adaptive decision-making, and

robust governance become critical enablers of competitive advantage in the AI age. Real-world examples from

health, public services, business, and government illustrate how behavioural insights combined with AI are

enhancing decision quality, innovation, and organisational resilience. The paper argues for a rigorous yet

human-centric approach to AI integration – one that leverages behavioural science to ensure technology serves

human needs and organisational values. We conclude that the synthesis of behavioural science and AI offers a

strategic path to reclaiming human agency and purpose in a world of rapid technological change, enabling

organisations to adapt ethically and effectively in the age of AI.


Introduction

The rise of advanced AI has catalysed an inflection point in how organisations operate, decide, and evolve.

Today’s business environment has “transitioned from a predictable game of checkers to a complex, live-action

role-play of 4D chess” – an apt metaphor for the unprecedented complexity and dynamism that leaders face. In

this new game, even the “rulebook” changes continuously, rendering many traditional strategies and

organisational models obsolete. The convergence of rapid technological change with other disruptive forces

(such as globalisation, climate risks, and shifting workforce expectations) creates interconnected pressures that

demand integrated responses. As a result, organisations must fundamentally rethink their capabilities and

frameworks for decision-making. This paper contends that behavioural science, with its rich understanding of

human cognition, emotion, and social dynamics, offers essential principles for guiding this reinvention in the

age of AI.


The Imperative for Integration

Artificial intelligence, once a futuristic concept, is now embedded in core business processes across industries.

AI systems not only execute tasks or analyse data; increasingly, they function as “social-technological” actors

that form symbiotic relationships with humans. This blurring of the line between human and machine roles

raises fundamental questions about how we design work and make decisions: How do we ensure that AI

augments – rather than overrides – human judgment? In what ways must human cognitive biases, limitations, and

strengths be considered when deploying AI tools? How can organisations foster trust in AI systems while

preserving human agency and accountability? These questions sit at the intersection of behavioural science

(which examines how humans actually behave and decide) and technology management.

Historically, advances in technology have forced parallel evolutions in management and organisational

psychology. For instance, the introduction of electric motors in factories in the early 20th century did not yield

productivity gains until workflows and management practices were fundamentally redesigned decades later.

Today, we may be in a similar transitional period with AI: simply overlaying intelligent algorithms onto old

organisational structures is inadequate. Instead, as Erik Brynjolfsson observes, thriving in the “new machine

age” requires reshaping systems and roles to “race with the machine” rather than against it. This is a behavioural

and organisational challenge as much as a technical one. Leaders must guide their teams through “radical

unlearning” of outdated assumptions and foster a culture of continuous learning and adaptation. Edgar Schein

noted that effective transformation often demands addressing “learning anxiety” – people’s fear of new methods

– by cultivating sufficient “survival anxiety” – the realisation that failing to change is even riskier. In the context

of AI, this means creating a sense of urgency and purpose around AI adoption, while also building

psychological safety so that employees are willing to experiment with and trust new tools.


Behavioural Science and AI: A Convergence

Behavioural science spans psychology, cognitive science, behavioural economics, and sociology – disciplines

that have illuminated how humans perceive, decide, and act. AI, on the other hand, often operates on algorithms

aimed at optimal or rational outcomes. This creates a potential tension: AI might make recommendations that

are theoretically optimal, but humans might not accept or follow them due to cognitive biases, trust issues, or

misaligned values. Integrating behavioural science means acknowledging and designing for the reality of human

behaviour in all its richness and boundedness. For example, AI systems in hiring or criminal justice need to

account for issues of fairness and implicit bias – areas where social psychology provides insight into human

prejudice and decision bias. In consumer-facing AI (like recommendation engines or digital assistants),

understanding heuristics in user behaviour (from research by Daniel Kahneman, Amos Tversky, and others) can

improve design to be user-friendly and nudge positive actions. In high-stakes environments like healthcare or

aviation, the field of human factors and cognitive engineering (informed by behavioural science) has long

emphasised fitting the tool to the human, not vice versa.

Crucially, behavioural science also guides organisational behaviour and change management. As companies

implement AI, there are cultural and structural changes that determine success. Who “owns” decisions when an

algorithm is involved? How do teams collaborate with AI agents as teammates? What training and incentives

drive employees to effectively use AI tools rather than resist them? These questions invoke principles from

organisational psychology (motivation, learning, team dynamics) and from socio-technical systems theory. The

latter, pioneered by Emery and Trist in the mid-20th century, argued that you must jointly optimise the social

and technical systems in an organisation. That insight is strikingly applicable today: an AI solution will fail if it

is imposed without regard for the social system (people’s roles, skills, norms), and conversely, human

performance can be amplified by technology when designed as an integrated system.

This paper aims to bridge the past and present – anchoring cutting-edge discussions of AI and dynamic

capabilities in the timeless truths of behavioural science. We will review key theoretical foundations that inform

our understanding of human behaviour in complex, technology-mediated contexts. We will then propose a

synthesised framework (building on the composite capability paradigm and capability stack developed in an

internal strategy paper) for conceptualising how human and AI capabilities can be orchestrated. Finally, we

translate these ideas into practice: how can organisations practically build trust in AI, nurture human–machine

collaboration, uphold ethics and inclusion, and develop the dynamic capability to continuously adapt? By

examining illustrative examples from domains such as healthcare, public policy, business, and government, we

demonstrate that integrating behavioural science with AI is not a theoretical nicety but a strategic necessity. The

outcome of this integration is a new kind of enterprise – one that is technologically empowered and human-

centric, capable of “reclaiming human agency and purpose” even as algorithms become ubiquitous.


Literature Review: Foundations of Behavioural Science and Technology Interaction

Cognitive Limits and Decision Biases

Modern behavioural science began as a challenge to the notion of humans as perfectly rational actors. Herbert

A. Simon, a polymath who straddled economics, psychology, and early computer science, was pivotal in this

shift. Simon introduced the concept of bounded rationality, arguing that human decision-makers operate under

cognitive and information constraints and thus seek solutions that are “good enough” rather than optimal. He

famously coined the term “satisficing” to describe how people settle on a satisfactory option instead of

exhaustively finding the best. Simon’s insight – that our minds, like any information-processing system, have

limited capacity – has direct parallels in AI. In fact, Simon was an AI pioneer who, in the 1950s, built some of

the first software to mimic human problem-solving. The bounded rationality concept laid the groundwork for

behavioural economics and decision science, highlighting that if AI tools are to support human decisions, they

must account for our finite attention and memory. For example, too much information or too many choices can

overwhelm (a phenomenon later popularised as cognitive overload or the paradox of choice), so AI systems

need to be sensitive to how recommendations or data are presented to users – an idea reinforced by the

heuristics-and-biases research tradition.

Daniel Kahneman and Amos Tversky carried the torch forward by cataloguing the systematic heuristics (mental

shortcuts) and biases that affect human judgment. Their work demonstrated that humans deviate from classical

rationality in predictable ways – we rely on intuitive System 1 thinking (fast, automatic) which can be prone to

errors, as opposed to the more deliberate System 2 thinking. They identified biases like availability

(overestimating the likelihood of events that come readily to mind), confirmation bias (seeking information that

confirms prior beliefs), loss aversion (weighing losses more heavily than equivalent gains), and numerous

others. Kahneman’s influential book Thinking, Fast and Slow (2011) synthesised these ideas for a broad

audience, cementing his reputation as “the father of behavioural science”. The implication for AI and human-

machine teaming is profound: AI can either mitigate some human biases or amplify them, depending on design.

For instance, algorithmic decision aids can counteract certain biases by providing data-driven forecasts (helping

humans overcome intuition that might be flawed), but if not carefully implemented, they might also lull humans

into automation bias (over-reliance on the AI, assuming it is always correct) or confirmation bias (the AI might

learn from human decisions that are biased and reinforce them). An understanding of cognitive biases has thus

become vital in AI ethics and design – e.g. ensuring that an AI’s explanations don’t trigger biased reasoning or

that its user interface nudges appropriate attention.

Not all scholars agreed that human deviation from economic rationality was truly irrational. Gerd Gigerenzer, a

prominent psychologist, offered a counterpoint with the concept of ecological rationality. Gigerenzer argues that

heuristics are not just “biases” or flaws; rather, they are often adaptive responses to real-world environments. In

his view, the success of a decision strategy depends on the context – a heuristic that ignores certain information

can actually outperform complex models in low-information or high-uncertainty situations. He demonstrated,

for example, that simple rules like the “recognition heuristic” (preferring options one recognises over those one

doesn’t) can yield surprisingly accurate decisions in certain domains. Gigerenzer has been a strong critic of

Kahneman and Tversky’s emphasis on biases, cautioning that labelling human thinking as “irrational” in lab

experiments misses how humans have adapted to their environments. He suggests that rationality should be seen

as an adaptive tool, not strictly bound by formal logic or probability theory. This perspective is highly relevant

when considering AI-human interaction: rather than always trying to “debias” humans into perfect logicians,

sometimes the better approach is to design technology that complements our natural heuristics. For example,

decision dashboards might highlight key information in ways that align with how experts naturally scan for cues

(leveraging heuristics developed through experience), or AI might handle aspects of a task that humans are

known to do poorly at (like very large-scale calculations) while leaving intuitive pattern recognition to the

human. Gigerenzer’s work reminds us that context matters – a theme also echoed in machine learning through

the “no free lunch” theorem (no one model is best for all problems). In practice, it means organisations should

strive for human-AI systems where each does what it is comparatively best at – as one Gigerenzer quote puts it,

“intelligent decision making entails knowing what tool to use for what problem”.
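
To make the recognition heuristic concrete, here is a minimal Python sketch of the simple rule described above; the recognised set and the example city names are illustrative assumptions, not data from Gigerenzer’s studies.

```python
# Minimal sketch of the recognition heuristic: when exactly one of two options
# is recognised, choose it; otherwise recognition does not discriminate and the
# decision falls back to other cues. All names below are illustrative.

recognised = {"Munich", "Berlin", "Hamburg"}   # options the decision-maker has heard of

def recognition_heuristic(option_a, option_b):
    """Pick the recognised option when recognition discriminates between the two."""
    a_known, b_known = option_a in recognised, option_b in recognised
    if a_known != b_known:
        return option_a if a_known else option_b
    return None  # both or neither recognised: defer to further knowledge, or guess

print(recognition_heuristic("Munich", "Bielefeld"))  # -> "Munich"
print(recognition_heuristic("Munich", "Berlin"))     # -> None (recognition does not help)
```

A decision aid built on this logic is deliberately frugal: it ignores most of the available information, which, as Gigerenzer argues, can be a strength rather than a flaw in the right environment.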

Gary Klein, another figure in the decision sciences, provides additional nuance with his studies of naturalistic

decision-making (NDM). While Kahneman often highlighted errors in human judgment using tricky puzzles or

hypothetical bets, Klein studied experts (firefighters, pilots, doctors) making high-stakes decisions under time

pressure. He found that these experts rarely compare options or calculate probabilities in the moment; instead,

they draw on experience to recognise patterns and likely solutions – a process he described in the Recognition-

Primed Decision model. Klein and Kahneman once famously debated, but eventually co-authored a paper (“A

Failure to Disagree”) noting that their perspectives actually apply to different contexts: in high-validity

environments with opportunities to learn (e.g. firefighting, where feedback is clear and experience builds

genuine skill), intuition can be remarkably effective; in other cases, intuition can mislead. Klein’s emphasis on

tacit knowledge and skilled intuition has implications for AI: organisations should be cautious about completely

displacing human experts with algorithms, especially in domains where human expertise encodes nuances that

are hard to formalise. Instead, AI can be used to support expert intuition by handling sub-tasks or offering a

“second opinion.” For example, in medical diagnosis, an experienced radiologist might quickly intuit a

condition from an X-ray; an AI can provide a confirmatory analysis or flag something the radiologist might have

overlooked, with the combination often proving more accurate than either alone. Indeed, a 2023 study in

European Urology Open Science showed that a radiologist + AI hybrid approach achieved higher sensitivity and

specificity in detecting prostate cancer from MRI scans than either the radiologist or AI alone, demonstrating

how “a combination of AI and evaluation by a radiologist has the best performance”. This is a concrete example

of human intuition and AI analysis working in tandem – aligning with Klein’s insights that experienced human

judgment has unique strengths that, rather than being replaced, should be augmented by AI.


Socio-Technical Systems and Organisational Adaptation

As early as the 1950s, researchers like Fred Emery, Eric Trist, and others at the Tavistock Institute in London

began examining organisations as socio-technical systems (STS) – meaning any workplace has both a social

subsystem (people, culture, relationships) and a technical subsystem (tools, processes, technologies), and these

must be designed together. Trist and colleagues, working with British coal miners, noted that introducing new

machinery without altering work group norms and job designs led to suboptimal outcomes, whereas redesigning

work to give teams autonomy and better align with the new tech yielded significant productivity and satisfaction

gains. They coined “socio-technical” to emphasise this joint optimisation. Another famous work by Emery &

Trist (1965) introduced the idea of differing degrees of environmental turbulence that organisations face, from placid to

turbulent fields, and argued that in more turbulent (fast-changing, unpredictable) environments, organisations

need more adaptive, open strategies. This foreshadowed today’s VUCA (volatility, uncertainty, complexity,

ambiguity) world. The lesson is that successful adoption of any advanced technology (like AI) isn’t just about

the tech itself, but about how work and human roles are reconfigured around it. Emery and Trist would likely

view AI integration as a prime example of STS in action: the firms that excel will be those that thoughtfully

redesign job roles, team structures, and communication patterns in light of AI capabilities – rather than those

who treat AI implementation as a purely technical upgrade. Indeed, current discussions about human-centric AI

and AI ergonomics are essentially socio-technical perspectives, emphasising user experience, change

management, and organisational context in deploying AI.

Parallel to STS theory, the field of organisational development (OD) and culture was greatly influenced by

Edgar Schein. Schein’s model of organisational culture delineated culture as existing on three levels: artifacts

(visible structures and processes), espoused values (strategies, goals, philosophies), and basic underlying

assumptions (unconscious, taken-for-granted beliefs). According to Schein, transforming an organisation – say,

to become more data-driven or AI-friendly – isn’t simply a matter of issuing a new policy or training people on

a new tool. It often calls for surfacing and shifting underlying assumptions about “how we do things.” For

example, a company might have an implicit assumption that good decisions are made only by seasoned

managers, which could lead to resistance against algorithmic recommendations. Changing that might require

leaders to model openness to data-driven insights, thereby altering assumptions about authority and expertise.

Schein also introduced the concept of learning culture and noted that leaders must often increase “survival

anxiety” (the realisation that not adopting, say, digital tools could threaten the organisation’s success or the

individual’s job relevance) while reducing “learning anxiety” (the fear of being embarrassed or losing

competence when trying something new). In the AI era, this is highly salient: employees may fear that AI will

render their skills obsolete or that they won’t be able to learn the new tools (learning anxiety), even as the

organisation’s competitive survival may depend on embracing AI (survival anxiety). Effective leaders use clear

communication of purpose – why adopting AI is critical – and create supportive environments for upskilling to

resolve this tension. We see enlightened companies investing heavily in digital fluency programs, peer learning,

and even redesigning performance metrics to encourage use of new systems rather than punish initial drops in

efficiency as people climb the learning curve. These practices reflect Schein’s principles of culture change.

Another relevant Schein insight is about ethical and cultural alignment. He argued that organisations should

have cultures that reinforce desired behaviours, and that when you introduce a foreign element (be it a new CEO

or a new technology), if it clashes with entrenched culture, the culture usually wins unless actively managed.

Thus, if a company values high-touch customer service as part of its identity, introducing AI chatbots needs to

be done in a way that augments that value (e.g., bots handle simple queries quickly, freeing up human reps to

provide thoughtful service on complex issues) rather than contradicting it (replacing all human contact).

Ensuring AI deployment aligns with organisational purpose and values – an idea from our internal capability

stack framework – is essentially a cultural alignment problem. If done right, AI can even reinforce a culture of

innovation or analytical decision-making; done poorly, it can create dissonance and distrust.

Dynamic adaptation at the organisational level has been formalised by David Teece in his Dynamic Capabilities

framework. Teece defines dynamic capability as a firm’s ability to “integrate, build, and reconfigure internal

and external competences to address rapidly changing environments”. This theory, originating in strategic

management, is particularly apt for the AI age, where technologies and markets change fast. Teece describes

dynamic capabilities in terms of three sets of activities: sensing (identifying opportunities and threats in the

environment), seizing (mobilising resources to capture opportunities through new products, processes, etc.), and

transforming (continuously renewing the organisation, shedding outdated assets and aligning activities). In the

context of AI, an example of sensing would be recognising early on how AI could change customer behaviour

or operations (for instance, a bank sensing that AI-enabled fintech apps are shifting consumer expectations).

Seizing would involve investing in AI development or acquisitions, piloting new AI-driven services, and scaling

the ones that work. Transforming would mean changing structures – perhaps creating a data science division,

retraining staff, redesigning workflows – to fully embrace AI across the enterprise. Teece’s core message is that

adaptive capacity itself is a strategic asset. We can relate this to behavioural science by noting that an

organisation’s capacity to change is rooted in human factors: learning mechanisms, leadership mindset, and

organisational culture (again Schein’s domain). For example, dynamic capabilities require an organisational

culture that encourages experimentation and tolerates failures as learning – essentially a growth mindset

organisation. Behavioural science research on learning organisations (e.g., work by Peter Senge or Amy

Edmondson on psychological safety) complements Teece’s macro-level view by explaining what human

behaviours and norms enable sensing, seizing, and transforming. Edmondson’s research on psychological safety

– the shared belief that it’s safe to take interpersonal risks – is crucial if employees are to speak up about new

tech opportunities or flag problems in implementations. Without it, an organisation may fail to sense changes

(because employees are silent) or to learn from mistakes (because failures are hidden), thus undermining

dynamic capability. Therefore, we see that frameworks like Teece’s implicitly depend on behavioural and

cultural underpinnings.


Technology, Work, and Society: Human–AI Collaboration and Ethics

No discussion of behavioural science in the age of AI would be complete without addressing the broader socio-

economic and ethical context. Erik Brynjolfsson and Andrew McAfee, in works like The Second Machine Age

(2014), examined how digital technologies including AI are reshaping economies, productivity, and

employment. They observed a troubling trend: productivity had grown without commensurate job or wage

growth, hinting at technology contributing to inequality or job polarisation. However, they argue that the

solution is not to halt technology but to reinvent our organisations and skill sets – essentially to race with the

machines. Brynjolfsson’s famous TED talk recounted how the best chess player in the world today is neither a

grandmaster nor a supercomputer alone, but rather a team of human plus computer – in freestyle chess, a

middling human player with a good machine and a strong process to collaborate can beat even top computers.

He concludes, “racing with the machine beats racing against the machine.” This vivid example underscores a

powerful concept: complementarity. Humans and AI have different strengths – humans excel at context,

common sense, ethical judgment, and novel situations; AI excels at brute-force computation, pattern recognition

in large data, and consistency. The best outcomes arise when each side of this partnership does what it does best

and they iterate together. This theme appears in many domains now. For instance, in medicine, some diagnostic

AI systems initially aimed to replace radiologists, but a more effective approach has been to let AI highlight

suspected anomalies and have radiologists make the final call, significantly improving accuracy and speed. In

customer service, AI chatbots handle routine FAQs, while human agents tackle complex or emotionally

sensitive cases, yielding better customer satisfaction. These human–AI team models are fundamentally about

organising work in ways that fit human behavioural strengths and limitations (as identified by behavioural

science) with machine strengths. Implementing such models requires careful attention to workflow design, user

experience, and trust. If the AI is too assertive or not transparent, the human may distrust it or disengage (there’s

evidence that some professionals will ignore algorithmic advice if they don’t understand or agree with it – a

phenomenon known as algorithm aversion). Conversely, if humans over-trust AI, they may become complacent

and skill atrophy can occur. Thus, a balance of trust – sometimes called calibrated trust – must be achieved,

which is an active research area in human factors and HCI (Human–Computer Interaction). Lee and See (2004)

suggested that trust in automation should be calibrated to the automation’s true capabilities; to do this, systems

might need to provide feedback on their confidence level, explanations, or have mechanisms for humans to

oversee and intervene.

Trust and ethics are tightly intertwined. Luciano Floridi, a leading philosopher in digital ethics, has argued that

we must develop a “Good AI Society” where AI is aligned with human values and the principles of beneficence,

non-maleficence, autonomy, justice, and explicability. Floridi’s work with the AI4People initiative synthesised

numerous AI ethics guidelines into a unified framework. Two principles stand out for behavioural science

integration: autonomy (respecting human agency) and explicability (the ability to understand AI decisions).

From a behavioural perspective, respecting autonomy means AI should be a tool that empowers users, not an

opaque mandate that constrains them. Users are more likely to adopt and appropriately use AI if they feel in

control – for example, a decision support system that suggests options and allows a human to override with

justification tends to be better received than an automated system with no human input. Explicability is critical

for trust and for human learning; if an AI system can explain why it made a recommendation, a human can

decide whether that reasoning is sound and also learn from it (or catch errors). Floridi and colleagues even

propose “AI ethics by design”, meaning ethical considerations (like transparency, fairness, accountability)

should be built into the development process of AI, not slapped on later. For practitioners, this could involve

interdisciplinary teams (with ethicists or social scientists working alongside engineers), bias audits of

algorithms, and participatory design involving stakeholders who represent those affected by the AI’s decisions.

Another facet of ethics is inclusion and fairness. Behavioural sciences remind us how prevalent biases

(conscious and unconscious) are in human decisions; ironically, AI trained on historical human data can embed

and even amplify those biases if we’re not careful. There have been real cases: hiring algorithms that

discriminated against women (because they were trained on past hiring data skewed toward men), or criminal

risk scoring algorithms that were biased against minorities. Addressing this isn’t just a technical fix of the

algorithm; it requires understanding the social context (why the data is biased) and often a human judgment of

what fairness means in context (an ethical decision). Various definitions of fairness (e.g., demographic parity vs.

equalised odds) have to be weighed, which is as much a policy question as a mathematical one. Here, governance

comes into play – organisations need governance mechanisms to oversee AI decision-making, much like human

decision processes are subject to oversight and compliance. Floridi’s emphasis on governance aligns with

emerging regulations (like the EU AI Act) that push for transparency, accountability, and human oversight of

AI. Behavioural science contributes to this conversation by highlighting questions such as: How do individuals

react to algorithmic decisions? What organisational incentives might cause people to deploy AI in harmful ways

(for example, a manager might be tempted to use an AI system to surveil employees in ways that hurt trust)? And

how can we create cultures of responsible AI use? Organisational behaviour research on ethical climates, tone

from the top, and decision biases (like the tendency to conform to perceived pressure) are all relevant when

instituting AI governance. A practical example is the creation of AI ethics committees or review boards within

organisations, which often include people with diverse backgrounds (legal, technical, HR, etc.) to review

sensitive AI deployments (e.g., systems affecting hiring or customer rights). These committees work best when

they consider not just compliance with regulations but also the psychological impact on those subject to the AI

decisions and on employees using the AI.
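
To illustrate the earlier point that fairness definitions such as demographic parity and equalised odds can pull in different directions, the following is a minimal sketch with made-up predictions, outcomes, and group labels; all names and numbers are illustrative assumptions, not results from any real system.

```python
# Minimal sketch (not a production audit): comparing two common fairness
# definitions for a binary classifier's decisions across two groups, A and B.

def demographic_parity_gap(preds, groups):
    """Difference in positive-decision rates between groups A and B."""
    rate = lambda g: sum(p for p, grp in zip(preds, groups) if grp == g) / groups.count(g)
    return abs(rate("A") - rate("B"))

def equalised_odds_gap(preds, labels, groups):
    """Largest difference in true-positive and false-positive rates between groups."""
    def rates(g):
        tp = fp = pos = neg = 0
        for p, y, grp in zip(preds, labels, groups):
            if grp != g:
                continue
            if y == 1:
                pos += 1
                tp += p
            else:
                neg += 1
                fp += p
        return tp / pos, fp / neg
    tpr_a, fpr_a = rates("A")
    tpr_b, fpr_b = rates("B")
    return max(abs(tpr_a - tpr_b), abs(fpr_a - fpr_b))

# Illustrative predictions (1 = shortlist), true outcomes, and group labels.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_gap(preds, groups))       # gap in selection rates
print(equalised_odds_gap(preds, labels, groups))   # gap in error rates
```

The same set of decisions can look acceptable on one measure and troubling on the other, which is why the choice between definitions is a policy and ethics judgment rather than a purely technical one.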

Finally, a macro societal perspective: behavioural sciences and AI are jointly shaping what it means to work and

live. Issues of human agency loom large. There is a risk that if we delegate too much decision-making to

algorithms, humans could experience a loss of agency or a “de-skilling” effect. On the flip side, AI can also

enhance human agency by providing people with better information and more options (for example, citizens

using AI-powered tools to understand their energy usage can make more informed choices, or disabled

individuals using AI assistants gain independence). This dual potential – to diminish or amplify agency – again

depends on design and context. A theme across behavioural literature is the importance of purpose and

meaningfulness for motivation. As AI takes over more routine work, what remains for humans should ideally be

the more purpose-rich tasks (creative, interpersonal, strategic). This calls for organisational vision: leaders need

to articulate how AI will free employees to focus on more meaningful aspects of their jobs rather than simply

framing it as a cost-cutting or efficiency drive. The theme of purpose is central to sustaining trust and morale.

Studies have shown that employees are more likely to embrace change (including tech adoption) when they

believe it aligns with a worthy mission or values, rather than just boosting the bottom line. Thus, infusing AI

strategy with a sense of higher purpose (e.g., “we are using AI to better serve our customers or to make

employees’ work lives better or to solve societal challenges”) is not just a PR move but a psychologically

important factor.

In summary, the literature suggests that an effective interplay of behavioural science and AI requires

recognising humans’ cognitive biases and strengths, designing socio-technical systems that leverage

complementarity, fostering organisational cultures that learn and adapt, and instituting ethical guardrails that

maintain trust, fairness, and human agency. With these foundations laid, we now turn to a conceptual framework

that synthesises these insights: the Composite Capability Paradigm and its accompanying capability stack for the

AI-era organisation.


Theoretical Framework: The Composite Capability Paradigm and Capability Stack

To navigate the age of AI, we propose an integrative framework termed the Composite Capability Paradigm,

rooted in the idea that organisational capabilities now arise from an orchestrated combination of human and

machine elements. This framework, developed internally as the 2025+ Capability Stack, posits that there are

distinct layers to building a resilient, adaptive, and ethical AI-era enterprise. By examining these layers in light

of broader academic perspectives, we illuminate how they resonate with and expand upon existing theory.


Orchestrating Human, Machine, and Interface Intelligence

At the heart of the composite capability paradigm is the recognition that capabilities are no longer confined to

“tidy boxes” of human-versus-technical functions. Instead, capability is seen as a dynamic interplay – “a

combined—and occasionally chaotic—dance of human intelligence, technical expertise, machine cognition, and

agile interfaces”. In other words, whenever an organisation delivers value (be it a product innovation, a

customer service interaction, or a strategic decision), it is increasingly the outcome of this fusion of

contributions: what humans know and decide, what machines calculate and recommend, and how the two

connect through interfaces. The paradigm likens this to a “jam session” in music, where different instruments

improvise together in real-time. Just as a jazz ensemble’s brilliance comes from the interaction among players

rather than any one instrument in isolation, an organisation’s performance now hinges on synergy – how

effectively people and AI tools can complement each other’s riffs and how flexibly they can adapt to change in

unison.


Let’s break down the components of this dance:

Human Intelligence: This encompasses the uniquely human attributes that AI currently cannot replicate or that

we choose not to delegate. These include empathy, ethical judgment, creativity, strategic insight, and contextual

understanding. For instance, humans can understand subtleties of interpersonal dynamics, exercise moral

discretion, and apply common sense in novel situations. In the capability stack model, human intelligence is

essential for providing purpose and a “moral compass” to technological endeavours. It aligns with what

behavioural scientists would call System 2 thinking (deliberative, reflective thought) as well as emotional and

social intelligence. Gary Klein’s experienced firefighter exercising gut intuition, or a manager sensing the

morale of their team, are examples of human intelligence in action. In AI integration, human intelligence sets

the goals and defines what “success” means – reflecting our values and objectives. This is why the Foundational

Layer of the capability stack is Purpose, Values, and Ethical Leadership, ensuring that the enterprise’s direction

is guided by human insight and integrity. A key insight from behavioural science is that people are not cogs;

they seek meaning in work and will support change if it resonates with their values. Therefore, having a clear

purpose (for example, “improve patient health” in a hospital setting or “connect the world” in a tech firm) and

ethical guidelines at the base of your AI strategy engages the workforce and garners trust. It also provides the

lens through which any AI initiative is evaluated (Does this AI use align with our values? Does it help our

stakeholders in a way we can be proud of?).

Technical Expertise: Traditionally, this meant the specialised knowledge of how to operate machinery,

engineering know-how, domain-specific analytical skills (e.g., financial modelling). In the modern paradigm,

technical expertise is evolving under the influence of AI. Experts must now collaborate with AI and

continuously update their knowledge as AI tools change their fields. For example, a supply chain expert still

needs logistics knowledge, but they also need to understand how to interpret outputs from an AI demand

forecasting system, and perhaps even how to improve it. The capability stack envisions that technical expertise

“harmonises with predictive models”, meaning human experts and AI models work in tandem. This resonates

with socio-technical theory: rather than AI replacing experts, the nature of expertise shifts. A doctor with AI

diagnostics is still a doctor – but one augmented with new data patterns (e.g., AI image analysis) and thus able

to make more informed decisions. A data-savvy culture is part of technical expertise too: widespread digital

fluency (not just a few data scientists sequestered in IT) is needed so that throughout the organisation people

understand AI’s capabilities and limits. This democratisation of technical competence is facilitated by trends

like low-code or no-code AI tools, which allow non-programmers to leverage AI – effectively broadening who

can contribute technical know-how. In sum, technical expertise in the composite capability paradigm is about

humans mastering their domain plus mastering how AI applies to that domain.

Machine Cognition: This refers to the AI systems themselves – the algorithms, models, and computational

power that constitute the machine’s “intelligence.” From a capability standpoint, machine cognition brings

speed, precision, and scale to problem-solving. It includes everything from simple process automation bots to

sophisticated machine learning models and generative AI (like GPT-4). Machine cognition can detect patterns

invisible to humans (e.g., subtle correlations in big data), work tirelessly 24/7, and execute decisions or

calculations in milliseconds. However, machine cognition has its own limitations: lack of genuine

understanding, potential for errors or biases based on training data, and inability to account for values or context

unless explicitly programmed. This is why the paradigm stresses the interplay – machine cognition is powerful,

but it requires the other elements (human oversight, proper interface) to be truly effective and safe. In the

capability stack, machine cognition sits in the core layer as part of the fusion, not at the top or bottom,

symbolising that AI is integrated into the fabric of how work is done, guided by purpose from above and

controlled/governed by structures around it. The behavioural science angle on machine cognition is mainly

about human interpretation: how do humans perceive and react to AI outputs? Research on decision support

systems finds that factors like the AI’s explainability, transparency of confidence levels, and consistency affect

whether humans will accept its advice. Thus, a machine might be extremely “intelligent” in a narrow sense, but

if humans don’t trust or understand it, its capability doesn’t translate into organisational performance. In

designing composite capabilities, organisations are learning to invest not just in algorithms, but in features that

make those algorithms usable and reliable in human workflows (for example, an AI-generated insight might be

accompanied by a natural-language explanation or a visualisation for the human decision-maker).

Agile Interfaces: Perhaps the most novel element is the idea of agile interfaces as the “conductor” of the

human-machine symphony. Interfaces include the user experience design of software, the dashboards, the

collaboration tools, or even the organisational processes that mediate human-AI interaction. The paradigm notes

that “agile interfaces are the critical conduits for effective human-AI interaction”, enabling translation of AI’s

raw power into forms humans can act on. Examples range from a well-designed alert system in a cockpit that

draws a pilot’s attention at the right time, to a chatbot interface that a customer finds intuitive and helpful, to an

augmented reality tool that guides a factory worker in performing a task with AI assistance. These interfaces

need to be agile in the sense of flexible, user-centered, and evolving. We now recognise new skills like prompt

engineering (formulating questions or commands to get the best results from AI models) and data storytelling

(translating data analysis into compelling narratives) as part of this interface layer. If human intelligence sets

goals and machine cognition generates options, the interface is what makes sure the two can “talk” to each other

effectively. From a behavioural perspective, interface design draws on cognitive psychology (how to present

information in ways that align with human attention and memory limits), on social psychology (how to

engender trust – for instance, by giving the AI a relatable persona in a customer service chatbot), and on

behavioural economics nudges (how the choice architecture can influence safer or more productive behaviours).

A trivial example: a decision portal might default to a recommended option but allow override, thus nudging

users toward the statistically superior choice while preserving agency – this is an interface-level nudge that can

lead to better outcomes without coercion.
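
As a rough illustration of that interface-level nudge, here is a sketch of the default-with-override pattern; the function and field names are hypothetical, not taken from any particular system.

```python
# Sketch of a default-with-override choice flow: the AI-recommended option is
# pre-selected, but the user may choose differently provided a reason is given,
# preserving agency while nudging toward the recommendation. Names are hypothetical.

def present_choice(options, recommended, prompt_choice, prompt_reason):
    """prompt_choice and prompt_reason stand in for whatever UI layer is used."""
    choice = prompt_choice(options, default=recommended)
    if choice == recommended:
        return {"choice": choice, "overrode_default": False, "reason": None}
    # Overrides are allowed, and the justification is captured for later review.
    return {"choice": choice, "overrode_default": True, "reason": prompt_reason()}
```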

The Composite Capability Core (Layer 2 of the stack) is essentially the synergy of these human, machine, and

interface components. It is where, to quote the internal framework, “pattern fusion” occurs – “the seamless

integration of human sense, domain depth, machine precision, and systemic perspective”. Pattern fusion implies

that when humans and AI work together, they can solve problems neither could alone, by combining strengths:

human sense (intuition, ethics, meaning) + deep domain expertise + AI’s precision + a systemic or holistic view

of context. Notably, the inclusion of systemic perspective reflects the need to consider the whole environment –

a nod to systems thinking (as per Emery & Trist’s focus on interdependencies). In practice, pattern fusion might

manifest as follows: imagine an urban planning scenario where deciding traffic policy needs data on vehicles

(AI can optimise flows), understanding of human behaviour (people’s commuting habits, which a behavioural

expert can provide), political acceptability (requiring empathy and negotiation by leaders), and tools to simulate

scenarios (an interface for experiments). A fused approach could create a solution that optimises traffic without,

say, causing public backlash – something a purely AI optimisation might miss or a purely human intuition might

get wrong. The framework argues that such fusion leads to “wiser, kinder, better decisions” – interestingly

attributing not just smartness (wiser) but kindness (reflecting values) to the outcome, and also calls out

interpretability as a benefit (humans and AI together can make the solution more explainable).


Layers of the 2025+ Capability Stack

Surrounding this fusion core are two other layers in the stack model: the Foundational Layer and the Finishing

Layer. These roughly correspond to inputs that set the stage (foundation) and oversight/outcomes that ensure

sustainability (finishing).

Foundational Layer: Purpose, Values, and Ethical Leadership. This bottom layer is the base upon which

everything rests. It includes the organisation’s purpose (mission), its core values, and the tone set by leaders in

terms of ethics and vision. In essence, it is about why the organisation exists and what it stands for. Grounding

an AI-enabled enterprise in a strong foundation of purpose and values serves several roles. First, it guides

strategy: AI investments and projects should align with the mission (for example, a healthcare company whose

purpose is patient care should evaluate AI not just on cost savings but on whether it improves patient outcomes,

consistent with their purpose). Second, it provides a moral and ethical compass: decisions about AI usage (such

as how to use patient data, or whether to deploy facial recognition in a product) can be filtered through the lens

of values like integrity, transparency, and respect for individuals. This is effectively what Floridi et al. advocate

– embedding principles so that ethical considerations are front and center. Third, a clear purpose and ethical

stance help in trust-building with both employees and external stakeholders. Employees are more likely to trust

and engage with AI systems if they see that leadership is mindful of ethical implications and that the systems

uphold the company’s values (for instance, an AI decision tool that is demonstrably fair and used in a values-

consistent way will face less internal resistance). Externally, customers and partners today scrutinise how

companies use AI – a strong foundational layer means the company can articulate why its use of AI is

responsible and beneficial. Behavioural science here intersects with leadership studies: transformational

leadership research shows that leaders who inspire with purpose and act with integrity foster more innovation

and buy-in from their teams. Therefore, having Ethical AI governance as a leadership imperative is part of this

layer – boards and executives must champion and monitor the ethical deployment of AI, making it a core part of

corporate governance (indeed, the internal report suggests boards treat AI governance as a “fiduciary duty”). In

practice, this could mean regular board-level reviews of AI projects, training leaders about AI ethics, and

including ethical impact in project KPIs.

Composite Capability (Fusion) Core: Human–AI Fusion and Interfaces. We discussed this above – it’s the

middle layer where the action happens. It is dynamic and process-oriented, concerned with how work gets done

through human-AI teaming. In the stack model, this is depicted as the engine of innovation and decision-

making. It includes elements like the use of multimodal AI (combining text, image, voice data) and ensuring

Explainable AI (XAI) for transparency, as well as emerging methodologies like Human-in-the-Loop (HITL)

which keeps a human role in critical AI processes. All these features align with the idea of making the human-

machine collaboration effective and trustworthy.
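
As a rough sketch of how a Human-in-the-Loop rule might look in practice, consider the routing logic below; the model interface, thresholds, and field names are assumptions made for illustration, not part of the framework.

```python
# Sketch of HITL routing: low-confidence or high-impact cases are escalated to a
# human reviewer instead of being auto-actioned. The hypothetical model is assumed
# to expose a predict_with_confidence() method; thresholds are illustrative.

from dataclasses import dataclass

@dataclass
class Decision:
    case_id: str
    action: str          # "auto_approve", "auto_decline", or "human_review"
    rationale: str       # plain-language explanation shown to the reviewer

CONFIDENCE_THRESHOLD = 0.85   # below this, a human makes the call
HIGH_IMPACT_AMOUNT = 50_000   # above this, a human always makes the call

def route(case, model):
    label, confidence, top_factors = model.predict_with_confidence(case)
    if case["amount"] > HIGH_IMPACT_AMOUNT or confidence < CONFIDENCE_THRESHOLD:
        return Decision(case["id"], "human_review",
                        f"Model suggests '{label}' ({confidence:.0%} confident); "
                        f"key factors: {', '.join(top_factors)}")
    return Decision(case["id"], f"auto_{label}",
                    f"High-confidence '{label}' ({confidence:.0%})")
```

The point of the sketch is the division of labour: the machine handles the routine, high-confidence cases, while anything uncertain or consequential stays with a human, with an explanation attached.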

Finishing Layer: Wellbeing, Inclusion, and Governance. The top layer of the capability stack is termed the

“Finishing Layer (The Frosting)”, emphasising the need for a stable and positive environment in which the other

capabilities function. It includes employee wellbeing, inclusion and diversity, and robust governance structures

(particularly AI governance around data, privacy, and ethics). While called “finishing,” it is not an afterthought

– it’s what ensures the whole cake holds together and is palatable. Wellbeing is crucial because a highly capable

organisation could still fail if its people are burned out, disengaged, or fearful. Behavioural science highlights

that change (like digital transformation) can be stressful, and prolonged stress undermines performance,

creativity, and retention. Thus, efforts to maintain reasonable workloads, provide support for employees

adapting to new roles alongside AI, and focus on ergonomic job design (so that AI doesn’t, say, force people

into hyper-monitoring or repetitive check work that hurts satisfaction) are part of sustaining capabilities.

Inclusion in this context has multiple facets: ensuring a diverse workforce (so that the people working with and

designing AI have varied perspectives, which can reduce blind spots and biases), and ensuring that AI systems

themselves are inclusive (accessible to people with different abilities, not biased against any group of users). A

practical example is providing training opportunities to all levels of employees so that digital literacy is

widespread, preventing a digital divide within the company where only an elite handle AI and others are

marginalised. Inclusion also refers to bringing employees into the conversation about AI deployment

(participatory change management), which increases acceptance – people support what they help create, as

classic OD teaches. Robust Governance ties to ethical AI and regulatory compliance. It’s about structures and

policies that maintain oversight of AI. For instance, data privacy committees to vet use of personal data

(anticipating regulations like GDPR or the new AI laws mentioned in the internal report), or AI model

validation processes to ensure models are fair and robust before they are put into production. Essentially, the

finishing layer provides checks and balances and ensures sustainability. It resonates with concepts like corporate

social responsibility and stakeholder theory: the organisation monitors the impact of its capabilities on all

stakeholders (employees, customers, society) and corrects course when needed. In behavioural terms, having

strong governance and an inclusive, healthy environment feeds back into trust – employees who see that

leadership cares about these issues will be more engaged and proactive in using AI responsibly themselves.

Conversely, if this layer is weak, one might get initial performance gains from AI but then face issues like

ethical scandals (which can destroy trust and brand value) or employee pushback and turnover.

In sum, the Composite Capability Paradigm anchored by the 2025+ Capability Stack is a strategic schema that

marries behavioural and technical elements. It mirrors many principles found in academic literature: it has the

human-centric values focus (aligning with Schein’s cultural emphasis and Floridi’s ethics), it leverages human-

machine complementarities (echoing Brynjolfsson’s augmentation strategy and socio-technical systems theory),

it invests in learning and adaptation (reflecting Teece’s dynamic capabilities and Argyris’s organisational

learning concepts), and it institutionalises trust and wellbeing (drawing on behavioural insights about motivation

and ethical conduct). By framing these as layers, it provides leaders a mental model: Start with purpose and

values, build the human+AI engine on that foundation, and secure it with governance and care for people.

One can see how this addresses the challenges noted earlier in our literature review. For example, consider trust.

The foundation of ethical leadership sets a tone of responsible AI use; the fusion core includes explainability

and human oversight, which directly fosters trust; the finishing layer’s governance monitors and enforces

trustworthy practices. Or consider adaptive decision-making. The fusion core is all about agility – humans and

AI adjusting in real time (the “jam session”), and the dynamic capabilities thinking is baked into the need for

orchestration and continuous upskilling mentioned in the paradigm. The finishing layer’s focus on learning (e.g.,

psychological safety as part of wellbeing, inclusion of diverse voices) enables adaptation too. Human agency is

reinforced by the foundation (purpose gives meaningful direction; ethical leadership ensures humans remain in

charge of values) and by design choices in the core (HITL, interfaces that allow human override). Digital

fluency is specifically called out as something to be fostered (“universal AI fluency”), meaning training and

comfort with AI at all levels – that’s both a skill and a cultural aspect.

To illustrate how this framework plays out, here are some real-world vignettes:

• In Customer Service, customer empathy is augmented by AI doing sentiment analysis in real time, allowing human agents to tailor their responses – a perfect example of composite capability (machine gauges tone, human shows empathy, interface feeds the insight live).

• In Operations, Lean principles are turbocharged by AI that predicts machine failures from sensor data and video, improving efficiency.

• In Product Design, AI can suggest creative variations (say, generating design mockups) which designers then refine – AI amplifying human creativity.

• In Strategic Foresight, AI (like GPT-based scenario simulators) helps leaders envision various future scenarios (e.g., climate futures) so they can better plan, combining data-driven simulation with human judgment and values to choose a path.

All these examples follow the pattern of human + AI synergy aligned to purpose. The composite capability

paradigm thus serves as a bridge between theory and practice: it gives a language and structure to ensure that

when we implement AI, we do so in a way that is holistic – considering technology, people, and process

together – and principled – guided by purpose and ethics.

Next, we move from concept to concrete practice: what should leaders and organisations actually do to realise

these ideas? In the following section, we discuss how to integrate behavioural science insights with AI

initiatives on the ground, through targeted strategies around purpose, trust, skills, decision processes, and

governance.


From Theory to Practice: Integrating Behavioural Science and AI in Organisations

Implementing the vision of human-centric, behaviourally informed AI integration requires action on multiple

fronts. In this section, we outline practical approaches and examples across key themes – purpose and culture,

trust and human–AI teaming, digital fluency and skills, adaptive decision-making, and governance and ethics –

highlighting how organisations in various sectors are putting principle into practice.


Cultivating a Purpose-Driven, Human-Centric Culture in the AI Era

A clear sense of purpose and strong organisational culture are not “soft” niceties; they are strategic assets in

times of technological upheaval. As discussed, purpose forms the foundation that guides AI adoption.

Practically, this means organisations should start AI initiatives by asking: How does this technology help us

fulfill our mission and serve our stakeholders? By framing projects in these terms, leaders can more easily

secure buy-in. For example, a public sector agency implementing AI to speed up service delivery might

articulate the purpose as improving citizen experience and fairness in accessing public services, resonating with

the agency’s public service mission. This was effectively demonstrated by the UK Behavioural Insights Team

(BIT), which applied behavioural science to public policy: they would define the purpose of interventions (e.g.,

increasing tax compliance to fund public goods) and design nudges accordingly. Their success – like

simplifying tax reminder letters to encourage on-time payments – came from aligning interventions with a clear

public purpose and an understanding of human behaviour. Organisations can analogously use AI as a tool to

advance purposeful goals (such as targeting healthcare resources to the neediest populations, or customising

education to each learner’s needs), and communicate that clearly to employees.

Communication is a vital part of culture. Change management research emphasises over-communicating the

“why” in transformations. Leaders should consistently connect AI projects to core values. For instance, if

innovation is a value, an AI project might be touted as enabling employees to experiment and create new

solutions faster. If customer centricity is a value, management can stress how AI will help staff respond to

customer needs more promptly or personalise services – thus framing AI not as a threat, but as a means to better

live out the company’s values. Satya Nadella of Microsoft provides a real-world example: under his leadership,

Microsoft shifted to a “learn-it-all” (growth mindset) culture that encourages experimentation. When

incorporating AI (like Azure AI services or GitHub’s Copilot), Nadella consistently frames it as empowering

developers and organisations – aligning with Microsoft’s mission “to empower every person and every

organisation on the planet to achieve more.” This kind of narrative helps employees see AI as supportive of a

shared purpose, not a top-down imposition of technology for its own sake.

In practical terms, organisations can embed purpose and human-centric principles into AI project charters and

evaluation criteria. Some companies have introduced an “ethical impact assessment” or purpose-impact

assessment at the start of AI projects. This involves multidisciplinary teams (including HR, legal, user

representatives) reviewing proposals by asking questions: Does this AI use align with our values? Who could be

adversely affected and how do we mitigate that? Will this improve the employee or customer experience

meaningfully? By institutionalising such reflection, the project is shaped from the outset to be human-centric.

This practice aligns with CIPD’s call for HR to ensure interventions “are in sync with how people are ‘wired’

and don’t inadvertently encourage undesirable behaviour” – essentially a reminder to align any new tools with

positive behaviours and outcomes.

Another concrete practice is storytelling and exemplars: sharing stories internally where AI helped a person do

something better or live the company values. For example, an insurance company might circulate a story of how

an AI risk model helped a risk officer identify a struggling customer and proactively offer help – highlighting

empathy enabled by tech. These stories reinforce a culture where AI is seen as enabling employees to achieve

the organisation’s human-centered goals.


Building Trust and Effective Human–AI Teams

Trust is the cornerstone of any successful human–AI partnership. Without trust, employees may resist using AI

systems or use them improperly, and customers may reject AI-mediated services. Building trust requires both

technical measures (like reliability and transparency of AI) and social measures (like training and change

management to build confidence and understanding).

On the technical side, organisations should prioritise Explainable AI (XAI) in applications where users need to

understand or validate AI decisions. For instance, a fintech company deploying an AI credit scoring tool might

implement an interface that not only gives a score but also highlights, in plain language, the key factors contributing to that score (for example, a high debt ratio or a short credit history). This allows loan officers to trust the system and explain

decisions to customers, aligning with the principle of explicability. Many high-performing firms now treat

explainability as a requirement, not a luxury, for any AI that interacts with human decision-makers. This stems

from a behavioural understanding: people trust what they understand.
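To make factor-level explanation concrete, the sketch below (in Python) shows one minimal way a linear credit-scoring model’s per-feature contributions could be turned into plain-language “reason codes”. It is an illustrative sketch only: the feature names, weights, baseline, and messages are hypothetical assumptions, not any particular vendor’s scoring model or API.

# Minimal sketch: turning a linear credit score into plain-language reasons.
# All feature names, weights, baseline values, and messages are hypothetical.

WEIGHTS = {"debt_to_income": -3.0, "credit_history_years": 0.8, "missed_payments": -2.5}
BASELINE = 600  # hypothetical baseline score

MESSAGES = {
    "debt_to_income": "Debt-to-income ratio is high relative to approved applicants.",
    "credit_history_years": "Credit history is shorter than typical approved applicants.",
    "missed_payments": "Recent missed payments lowered the score.",
}

def score_with_reasons(applicant: dict, top_n: int = 2):
    # Per-feature contribution = weight * value (simple linear-model assumption)
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BASELINE + sum(contributions.values())
    # Reasons = the features that pulled the score down the most
    negatives = sorted(contributions.items(), key=lambda kv: kv[1])[:top_n]
    reasons = [MESSAGES[f] for f, c in negatives if c < 0]
    return round(score), reasons

if __name__ == "__main__":
    score, reasons = score_with_reasons(
        {"debt_to_income": 45, "credit_history_years": 2, "missed_payments": 1}
    )
    print(score, reasons)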

In addition to transparency, performance consistency of AI fosters trust. Users need to see that the AI is right

most of the time (or adds value) in order to rely on it. To that end, phased rollouts where AI recommendations

are first provided in parallel with human decisions (allowing humans to compare and give feedback) can

calibrate trust. A hospital, for example, might introduce an AI diagnostic tool in an advisory-only mode at first: doctors see its suggestions but still make decisions independently; over time, as they see that the AI often catches things they might miss or confirms their hunches, their trust grows. This staged approach has been recommended by

some naturalistic decision-making experts to avoid abrupt shifts that could trigger algorithm aversion.
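As a rough illustration of what such a parallel (or “shadow”) rollout might record, here is a minimal sketch that logs each AI suggestion alongside the independent human decision and reports an agreement rate over the logged cases; the field names, labels, and metric are hypothetical assumptions, not a clinical standard.

# Minimal sketch of a "shadow mode" rollout log: the AI's suggestion is recorded
# alongside the independent human decision so agreement can be reviewed over time.
# Field names and the agreement metric are hypothetical illustrations.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ShadowRecord:
    case_id: str
    ai_suggestion: str
    ai_confidence: float
    human_decision: str
    timestamp: str

log: list[ShadowRecord] = []

def record_case(case_id: str, ai_suggestion: str, ai_confidence: float, human_decision: str):
    log.append(ShadowRecord(case_id, ai_suggestion, ai_confidence, human_decision,
                            datetime.now(timezone.utc).isoformat()))

def agreement_rate() -> float:
    """Share of logged cases where the AI suggestion matched the human decision."""
    if not log:
        return 0.0
    matches = sum(1 for r in log if r.ai_suggestion == r.human_decision)
    return matches / len(log)

record_case("case-001", "refer_specialist", 0.87, "refer_specialist")
record_case("case-002", "no_action", 0.55, "refer_specialist")
print(f"Agreement so far: {agreement_rate():.0%}")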

Training is critical: effective digital literacy and AI fluency training teaches not only how to use a tool, but also the tool’s limitations and the importance of human judgement. For instance, pilots train on autopilot

systems extensively to know when to rely on them and when to disengage – by analogy, a financial analyst

might be trained on an AI forecasting tool to know scenarios where it’s likely to err (perhaps during market

disruptions) so they can be extra vigilant. This idea of appropriate reliance comes straight from behavioural

research on automation (Parasuraman and Riley, 1997), which showed that people often either under-trust (ignore

useful automation) or over-trust (get complacent). The goal is calibrated trust.

From a social perspective, involving end-users in the design and testing of AI solutions fosters trust. If a new AI

tool is coming to an employee’s workflow, having some of those employees participate in its pilot, give

feedback, and witness improvements based on their input can turn them into change champions who trust the

end product. This participatory approach also surfaces usability issues that, if left unaddressed, could erode trust

later. It mirrors the behavioural principle that people fear what they don’t understand; involvement demystifies

the AI.

Organisational roles may also need to evolve to optimise human–AI teaming. Some companies are creating

roles like “AI liaison” or “human-AI team facilitator” – individuals who understand both the tech and the work

domain and can mediate between data science teams and frontline staff. These facilitators might observe how

employees interact with AI tools, gather suggestions, and continuously improve the human-AI interface. This is

analogous to having a user experience (UX) expert, but specifically focusing on the collaboration between

human and AI. For example, in a call centre that introduced an AI assistant that listens to calls and suggests responses (a technology already in commercial use), a facilitator monitored calls to see whether the suggestions were helpful or whether they annoyed the agents, then tweaked the system or coached the agents accordingly (for instance, having the AI wait a few seconds longer before surfacing suggestions so as not to interrupt the agent’s own thought process). Such adjustments make

the partnership smoother and bolster trust in the AI as a helpful colleague rather than an intrusive overseer.

Team norms can also be established for human–AI interaction. If decisions are being made with AI input, teams

can adopt norms like: Always double-check critical decisions with another human or source if the AI gives low

confidence, or Use the AI’s recommendation as a starting point but consider at least one alternative before

finalising (to avoid lock-in). These are akin to pilot checklists or medical second-opinion norms, and they

acknowledge that while AI is a team member, human members are ultimately accountable. By formalising such

practices, organisations signal that AI is a tool, not a replacement for human responsibility. This can alleviate

anxiety (employees know they’re not expected to blindly follow AI) and encourage learning (comparing AI and

human conclusions can be instructive).
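One way to make such norms operational is to encode them directly in the decision workflow. The following is a minimal sketch, under hypothetical thresholds and labels, of routing an AI recommendation to a second human reviewer when the model’s reported confidence is low or the decision is flagged as critical; the cut-off and the notion of “critical” would be agreed by the team, not dictated by the tool.

# Minimal sketch of a team norm encoded in a workflow: low-confidence or
# critical AI recommendations are routed for a second human review.
# The threshold, labels, and "critical" flag are hypothetical.

CONFIDENCE_THRESHOLD = 0.75  # hypothetical cut-off agreed by the team

def route_recommendation(recommendation: str, confidence: float, is_critical: bool) -> str:
    """Return who should act next, per the team's agreed norm."""
    if is_critical or confidence < CONFIDENCE_THRESHOLD:
        return f"ESCALATE: second reviewer required for '{recommendation}' (confidence {confidence:.2f})"
    return f"PROCEED: '{recommendation}' may be used as a starting point (confidence {confidence:.2f})"

print(route_recommendation("approve_claim", 0.62, is_critical=False))
print(route_recommendation("deny_claim", 0.91, is_critical=True))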

A case in point for trust and teaming comes from the military domain, where “centaur” teams (a term borrowed

from chess human–AI teams) are being explored. Fighter pilots work with AI assistants that might fly “wingman” drones (UAVs) or manage defensive systems. The military has found that trust is built through rigorous testing

in exercises and the ability of pilots to easily take control from the AI when needed – reflecting the principle of

keeping humans in the loop for lethal decisions. In business, the stakes are usually lower, but the same concept

of giving humans an “eject button” – an override that is as easy as pressing a button – provides a safety net that, somewhat counterintuitively, makes users more willing to let the AI handle things up to that point. It’s analogous to having

brakes when using cruise control.

Finally, an often overlooked element: celebrating successes of human–AI collaboration. When an AI-assisted

effort leads to a win (say, an AI+human sales team exceeds their targets or an AI-driven quality control catches

a defect that human inspectors missed, avoiding a costly recall), leaders should acknowledge both the human

and the AI contribution. This sends a message that using the AI is praiseworthy teamwork, not something that

diminishes human credit. If employees fear that AI will steal the credit or make their role invisible, they’ll resist

it. Recognising augmented achievements in performance reviews or team meetings helps normalise AI as part of

the team.


Developing Digital Fluency and Adaptive Skills

One of the most tangible ways to integrate behavioural science with AI strategy is through learning and

development (L&D) initiatives. The half-life of skills is shrinking; dynamic capability at the organisational level

rests on continually upskilling and reskilling the workforce (sensing and seizing opportunities, in Teece’s

terms). Behavioural science-informed L&D focuses not just on knowledge transmission, but on motivation,

reinforcement, and practical application.

A key capability for 2025 and beyond is digital fluency – the ability for employees to comfortably understand,

interact with, and leverage AI and data in their roles. Companies leading in AI adoption often launch company-

wide digital academies or AI training programs. For example, AT&T and Amazon have large-scale reskilling

programs to train workers in data analysis and machine learning basics, offering internal certifications. The

behavioural insight here is to reduce learning anxiety: make learning resources abundant, accessible (online,

self-paced), and rewarding (through badges, recognition, or linking to career advancement). By building a

culture where continuous learning is expected and supported (and not punitive if one is initially unskilled),

employees are more likely to engage rather than fear the new technology. This also ties to Carol Dweck’s

growth mindset concept – praising effort and learning rather than static ability – which many organisations now

incorporate into their competency models.

Another tactic is experiential learning through pilot projects or innovation labs. Instead of classroom training

alone, employees learn by doing in sandbox environments. For instance, a bank might set up a “bot lab” where

any employee can come for a day to automate a simple task with a robotic process automation (RPA) tool, with

coaches on hand to assist. This hands-on experience demystifies AI (or automation) and builds confidence.

Behaviourally, adults learn best when solving real problems that matter to them (a principle from adult learning

theory). So if an employee can automate a tedious part of their job through an AI tool, they directly see the

benefit and are likely to be more enthusiastic about AI adoption.

Mentoring and peer learning also accelerate digital fluency. Some firms have implemented a “reverse

mentoring” system where younger employees or tech-savvy staff mentor senior managers on digital topics

(while in turn learning domain knowledge from those seniors). This not only transfers skills but breaks down

hierarchical barriers to learning – a major cultural shift in some traditional organisations. It leverages social

learning: people often emulate colleagues they respect, so having influential figures vocally learning and using

AI can create a bandwagon effect.

A concept gaining traction is the creation of fusion teams (also called citizen developer teams), which pair

subject-matter experts with data scientists or IT developers to co-create AI solutions. For example, in a

manufacturing firm, a veteran production manager teams up with a data scientist to develop a machine learning

model for predictive maintenance. The production manager learns some data science basics in the process

(digital upskilling) and the data scientist learns the operational context (domain upskilling). This cross-

pollination means the resulting solution is more likely to be adopted (since it fits the work context) and the

participants become champions and trainers for others. It is, in a sense, an application of Vygotsky’s zone of proximal development – each participant learns from someone a step ahead of them in another dimension, scaffolded by collaboration.

Adaptive decision-making skills are also crucial. Employees need training not just in using specific tools, but in

higher-level skills like interpreting data, running experiments, and making decisions under uncertainty –

essentially, decision science literacy. Some organisations train their staff in basic statistics and hypothesis

testing so they can better design A/B tests or understand AI output (which often comes with probabilities or

confidence intervals). This is informed by the behavioural notion that people are prone to misinterpreting

probabilistic data (e.g., confusion between correlation and causation, or biases like overconfidence). By

educating the workforce on these pitfalls (perhaps using engaging examples, like common fallacies), companies

improve the collective ability to make sound decisions with AI.

Continuous feedback loops are another practice: dynamic capabilities demand quick learning cycles. Companies

can implement frequent retrospectives or after-action reviews when AI is used in projects. For instance, after a

marketing campaign guided by an AI analytics tool, the team can review what the AI suggested, what they did,

and what the outcome was, extracting lessons (did we trust it too much, did we under-utilise it, did we encounter

surprising customer reactions?). These insights then feed into refining either the AI model or the human

strategies. Such reflective practices are advocated in agile methodologies and are rooted in Kolb’s experiential

learning cycle (concrete experience → reflective observation → abstract conceptualisation → active

experimentation). Over time, they build an organisational habit of learning from both success and failure, key to

adaptation.

It’s also worth noting the leadership skill shifts needed. Leadership development programs are incorporating

training on leading hybrid human–AI teams, asking the right questions about AI (since leaders might not be the

technical experts, they need the fluency to challenge and query AI outputs – e.g., “What data was this model

trained on? How confident should we be in this prediction?”). Leaders also teach by example: if managers regularly use data and AI insights in their decisions and explain how they balanced those inputs with experience, employees pick up on that decision-making approach.

A concrete example of adaptive skill-building can be drawn from healthcare: during the COVID-19 pandemic,

hospitals had to adapt quickly to new data (like predictive models of patient influx). Some hospitals created ad

hoc data teams and trained clinicians to read epidemiological models – a crash course in data literacy under

pressure. Those who managed to integrate the predictions with frontline insights navigated capacity issues

better. This underscores that when the environment changes rapidly (a “turbulent environment” in Emery and Trist’s terms), organisations benefit from having invested in general adaptability skills beforehand.


Enhancing Decision-Making and Innovation through Human–AI Collaboration

Organisations can leverage AI and behavioural insights together to drive better decisions and innovation on an

ongoing basis. One method is establishing a culture and processes of evidence-based decision-making. The idea,

championed by movements like evidence-based management and supported by CIPD research, is to encourage

decisions based on data, experiments, and scientific findings rather than just intuition or tradition. AI naturally

provides more data and analytical power, but behavioural science reminds us that simply having data doesn’t

ensure it’s used wisely – cognitive biases or political factors can still lead to suboptimal choices.

To address this, some organisations have set up “decision hubs” or analytics centers of excellence that both

churn out insights and coach decision-makers on how to interpret them. A bank, for instance, might require that

any proposal for a new product comes with an A/B test plan and data analysis – essentially building a decision

process that forces a more scientific approach. Product teams at tech companies routinely do this: before rolling

out a feature, they run experiments and the go/no-go is based on statistically significant results, not just the

HIPPO (highest paid person’s opinion). This discipline is part technical (knowing how to run tests) and part

behavioural (committing to act on what the data says, which can be hard if it contradicts one’s intuition).

Leaders play a role here by reinforcing that changing course in light of data is a strength, not a weakness. Jeff

Bezos called this “being stubborn on vision, flexible on details” – hold onto your core purpose but be willing to

change tactics when evidence suggests a better way.
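To illustrate what a “decided by the data, not the HIPPO” gate can look like in practice, the sketch below runs a simple two-proportion z-test on hypothetical A/B conversion counts and maps the result to a go/no-go call; the counts, the 5% threshold, and the decision rule are illustrative assumptions rather than a prescribed standard.

# Minimal sketch: two-proportion z-test for an A/B experiment go/no-go.
# Conversion counts and the significance threshold are hypothetical.
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the z statistic comparing conversion rates of variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

def p_value_two_sided(z: float) -> float:
    # Normal-approximation p-value via the error function
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

z = two_proportion_z(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
p = p_value_two_sided(z)
decision = "go" if p < 0.05 else "no-go (collect more evidence)"
print(f"z = {z:.2f}, p = {p:.3f} -> {decision}")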

Adaptive governance structures, like rapid steering committees or innovation task forces, can empower faster

decision loops. For example, during a crisis or fast-moving market change, a company might assemble a cross-

functional team that meets daily to review AI-generated forecasts and frontline reports, then make quick

decisions (similar to a military OODA loop: observe, orient, decide, act). This was observed in some

companies’ COVID responses – they effectively set up a nerve center mixing data (sometimes AI models

predicting scenarios) with human judgment to navigate uncertainty. The behavioural key is that these teams had

the mandate to act and adjust, avoiding the paralysis that can come from either fear of uncertainty or

bureaucratic slowness. They embraced adaptive decision-making, making small reversible decisions quickly

rather than waiting for perfect information.

In terms of innovation, AI can generate ideas (like design suggestions or optimisations) but human creativity

and insight are needed to choose and implement the best ideas. Companies are thus exploring human–AI co-

creation processes. One practical approach is ideation sessions with AI: for instance, marketers might use GPT-4

to produce 50 variations of ad copy, then use their creative judgment to refine the best ones. In engineering,

generative design algorithms propose thousands of component designs, and engineers use their expertise to pick

one that best balances performance and feasibility. This speeds up the trial-and-error phase of innovation

dramatically, allowing humans to consider far more possibilities than they could alone. But it also requires a

mindset shift: designers and experts must be open to letting AI contribute and not feel that it diminishes their

role. To facilitate this, some organisations frame AI as a creative partner or brainstorming assistant. They

encourage teams to treat AI suggestions not as final answers, but as provocations or starting points. This reduces

the psychological defensiveness (“a robot is doing my job”) and instead fosters curiosity (“let’s see what it

comes up with; maybe it will spark something”). Pixar, for example, has reportedly experimented with AI for generating plot ideas or character visuals – not to replace writers or artists, but to help break through creative blocks or explore alternatives – with artists said to have enjoyed riffing off AI outputs once they felt it was their choice what to use or discard.

Bias mitigation in decisions is another area where behavioural science and AI together can help. AI can be used

to debias human decisions – for instance, in hiring, structured algorithmic screening can counteract individual

manager biases (though one must also ensure the AI itself is fair). Meanwhile, behavioural tactics like blinding

certain information or using checklists can be applied to AI outputs; for example, if an AI produces a recommendation, a

checklist for managers might ask “What assumptions is this recommendation based on? Have we considered an

opposite scenario?” which forces a consideration of potential bias or error. The combination ensures neither

human nor AI biases dominate unchecked. The “premortem” technique by Gary Klein (imagine a future failure

and ask why it happened) can be used on AI-driven plans to uncover hidden issues. Some AI development teams

now do bias impact assessments as part of model development (a practice encouraged by companies such as IBM and Google),

essentially bringing a social science lens into the tech development.


Strengthening Governance, Ethics, and Trustworthy AI Practices

Governance provides the scaffolding that holds all the above initiatives accountable and aligned. It’s the

embodiment of the “robust governance” and “ethical AI” focus in the capability stack’s top layer. Several

concrete governance measures are emerging as best practices:

AI Ethics Boards or Committees: Many organisations – tech giants such as Google, Facebook, and Microsoft, but also banks, healthcare systems, universities, and governments – have convened advisory boards or internal

committees to review AI projects. The composition is typically cross-functional – legal, compliance, technical,

HR, and often external independent experts or stakeholder representatives. Their role is to examine proposed

high-impact AI uses for ethical risks, alignment with values, and compliance with regulations. For example, a

global bank’s AI ethics committee might review a new algorithmic lending platform to ensure it doesn’t

discriminate and that it has an appeal process for customers – effectively implementing principles of fairness

and accountability. These boards are a direct response to both ethical imperatives and looming regulations (like

the EU AI Act’s requirements for high-risk AI systems). They institutionalise the “slow thinking” System 2

oversight to balance the fast-moving deployment of AI. Behavioural science supports this by recognising that

individual developers or product owners may have conflicts of interest or cognitive blind spots – a formal

review by a diverse group brings more perspectives (avoiding groupthink and the bias of tunnel vision) and

creates a checkpoint for reflection (mitigating the rush that can lead to ethical lapses).

Policies and Principles: Organisations often publish AI principles (e.g., “Our AI will be fair, accountable,

transparent, and explainable” – similar to Floridi’s five principles) and then derive concrete policies from them.

A policy might dictate, for example, that sensitive decisions (hiring, firing, credit denial, medical diagnosis) will

not be made solely by AI – there must be human review (human-in-the-loop), which echoes requirements in the draft EU AI Act as well. Another might require that any customer-facing AI makes clear to the user that it is an AI

(so people aren’t duped into thinking a chatbot is a human, respecting autonomy). These policies are essentially

commitment devices at the organisational level – they set default behaviours that align with ethical intentions,

making it easier for employees to do the right thing and harder to do the wrong thing. They also serve to build

public trust, since companies can be held to their promises.

Transparency and Communication: Internally, transparency means informing employees about how AI is

affecting decisions about them (like performance evaluations or promotions, if algorithms play a role) and

decisions they make (providing insight into the tools they use). Externally, it means being honest with customers

about when AI is used and what data is collected. Some banks, for instance, let customers know that an

automated system did the initial credit assessment and give a route to request human reassessment – this kind of

candour can actually improve trust, as customers feel they are respected and have recourse. It also pressures the

AI to perform well since its suggestions might be scrutinised. Interestingly, behavioural research shows people

appreciate procedural fairness: even if they get a negative outcome, if they believe the process was fair and

transparent, they react less negatively. So transparency is not just an ethical duty, but also a strategy to maintain

trust even when AI systems must deliver unwelcome news.

Monitoring and Auditing: The governance framework should include continuous monitoring of AI

performance and impacts, not just one-time reviews. AI models can drift (their accuracy degrades if data

patterns change), and their use can evolve in unintended ways. Companies are starting to implement AI

monitoring dashboards, analogous to financial controls, tracking key metrics like bias indicators, error rates, and

usage statistics. For example, if an AI recruiting tool suddenly starts filtering out a higher percentage of female

candidates than before, that shift can be flagged and trigger an investigation. This is similar to the way credit scoring models are

monitored for bias in lending. Some jurisdictions are likely to mandate such audits (the proposed EU AI Act

would require logging and oversight for high-risk AI). Incorporating this proactively is wise. It again brings in

behavioural science at the organisational level: what gets measured gets managed. By measuring ethical and

human-impact metrics, not just performance, an organisation signals its priorities and catches issues early. There

is also a behavioural aspect in that knowing one is monitored can deter negligent behaviour – if teams know

their AI deployment will be audited for fairness, they’re more likely to design it carefully from the start.
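As an illustration of such a monitoring check, here is a minimal sketch that computes selection rates by group for a screening tool and raises an alert when the adverse-impact ratio falls below a floor or drifts from a baseline. The counts, group labels, baseline, and thresholds are hypothetical, and the “four-fifths” style ratio is used only as a common heuristic, not a legal test.

# Minimal sketch of a fairness-monitoring check for a screening tool.
# Data, group labels, baseline values, and thresholds are hypothetical.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total); returns selection rate per group."""
    return {g: selected / total for g, (selected, total) in outcomes.items()}

def adverse_impact_ratio(rates: dict[str, float]) -> float:
    """Lowest group selection rate divided by highest (four-fifths heuristic)."""
    return min(rates.values()) / max(rates.values())

def check(outcomes, baseline_ratio, ratio_floor=0.8, drift_tolerance=0.1):
    rates = selection_rates(outcomes)
    ratio = adverse_impact_ratio(rates)
    alerts = []
    if ratio < ratio_floor:
        alerts.append(f"Adverse-impact ratio {ratio:.2f} below floor {ratio_floor}")
    if abs(ratio - baseline_ratio) > drift_tolerance:
        alerts.append(f"Ratio drifted from baseline {baseline_ratio:.2f} to {ratio:.2f}")
    return rates, ratio, alerts

rates, ratio, alerts = check(
    outcomes={"group_a": (120, 400), "group_b": (70, 400)},  # hypothetical monthly counts
    baseline_ratio=0.92,
)
print(rates, ratio, alerts or "no alerts")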

Responsive Governance: Governance shouldn’t just be rigid control; it must also be adaptive. If an audit or a

whistleblower or an external event reveals a problem (say, an AI is implicated in a privacy breach or bias

incident), an agile governance process can pause that AI’s deployment and convene a response team to fix it.

This happened in some tech companies, for example, when a facial recognition product was found to have racial

bias, the company voluntarily halted sales to law enforcement and invested in improvements. The ability to

respond quickly to ethical issues – essentially an organisational form of course correction – will define

companies that can retain public trust. It is analogous to product recalls in manufacturing: how you handle a

flaw can make or break your reputation.

A specific domain example: Public services and government are increasingly using AI (for welfare eligibility,

policing, etc.), and they have set up governance like independent oversight panels and algorithm transparency

portals where the code, or at least a description of it, is published for public scrutiny. The Netherlands, after a scandal in which a biased risk-scoring algorithm (the SyRI system) wrongly flagged citizens for welfare fraud, established stricter oversight and even legal bans on such algorithms until proper safeguards are in place. The lesson taken was that not tempering

technical possibility with behavioural and ethical oversight can lead to serious harms, which then require

rebuilding trust from scratch. Now they emphasise citizen privacy, feedback from social scientists, and smaller

pilot programs to evaluate impacts before scaling.

Within organisations, employee involvement in governance is an interesting trend. For instance, some

companies have ethics champions or ambassadors in each department who ensure local context is considered in

AI use and act as liaisons to the central AI ethics committee. This decentralises ethical mindfulness – a bit like

having safety officers throughout a factory, not just at HQ. It leverages the behavioural principle of ownership:

people on the ground often see problems early, and if they feel responsible for ethics, they’re more likely to

speak up rather than assume “someone else up high will take care of it.” Creating safe channels for such voices

(whistleblower protections, open-door policies on AI concerns) is vital, reflecting Edmondson’s psychological

safety concept again, but in the ethics domain.

Finally, regulatory engagement is part of governance now. Organisations should keep abreast of and even help

shape emerging AI regulations and industry standards (like IEEE’s work on AI ethics standards). This proactive

approach means they’re not caught off guard by compliance requirements and can even gain a competitive edge

by being early adopters of high standards (much like companies that embraced environmental sustainability

early reaped reputational rewards). It also ensures that their internal governance aligns with external

expectations, making the whole ecosystem more coherent.

In conclusion, the practical integration of behavioural science and AI requires concerted effort in culture, trust-

building, skill development, decision processes, and governance. The themes we’ve discussed are deeply

interrelated: a purpose-driven culture facilitates trust; trust and skills enable adaptive decision-making; good

decisions and experiences reinforce trust and culture; and governance sustains it all by ensuring accountability

and alignment with values. Organisations that weave these elements together are effectively operationalising the

composite capability paradigm – they are designing themselves to be both high-tech and deeply human,

dynamic yet principled.


Conclusion

Behavioural Sciences in the Age of AI is not just an academic topic; it is a lived strategic journey for

organisations today. In this paper, we have traversed the historical and theoretical landscape that underpins this

journey – from Simon’s realisation that human rationality is bounded, to Brynjolfsson’s insight that humans and

machines, working as partners, can achieve more than either alone, to Floridi’s urging that AI be guided by

human-centric principles for a flourishing society. These insights form a tapestry of wisdom: they tell us that

effective use of AI requires understanding human cognition and behaviour at individual, group, and societal

levels.

We anchored our discussion in a practical framework – the composite capability paradigm – which captures

how human intelligence, machine cognition, and agile interfaces must seamlessly interact for organisations to

thrive. We situated this paradigm within broader literature, showing it resonates with socio-technical theory’s

call for joint optimisation, dynamic capabilities’ emphasis on agility and reconfiguration, and ethical

frameworks’ insistence on purpose and values. In doing so, we positioned these internal frameworks as part

of a continuum of scholarly and practical evolution, rather than isolated ideas. This enriched perspective reveals

that the challenges of the AI era – building trust, preserving human agency, ensuring ethical outcomes, and

maintaining adaptability – are new in form but not in essence. They echo age-old themes of organisational life:

trust, purpose, learning, and justice, now cast in new light by technology.

Through real-world examples across health, public services, business, and government, we illustrated both the

opportunities and the cautionary tales. We saw how a hybrid of radiologists and AI improves diagnostic

accuracy, and how a poorly overseen algorithm can cause public harm and outrage (as in the welfare case).

These examples reinforce a key takeaway: human–AI collaboration works best when it is designed and

governed with a deep appreciation of human behaviour – our strengths (creativity, empathy, judgment) and our

weaknesses (bias, fear of change, fatigue). In healthcare, education, finance, and beyond, those deployments of

AI that succeed tend to be those that augment human decision-making and are accepted by humans; those that

fail have often neglected the human factor, whether by ignoring user experience, eroding trust, or conflicting with

values.

Several cross-cutting themes emerged in our analysis: purpose, trust, digital fluency, human agency, adaptive

decision-making, and governance. It is worth synthesising how they interplay to inform a vision for

organisations moving forward. Purpose and values form the north star – they ensure AI is used in service of

meaningful goals and set ethical boundaries. Trust is the currency that allows humans to embrace AI and vice

versa; it is earned through transparency, reliability, and shared understanding. Digital fluency and skills are the

enablers, equipping people to work alongside AI confidently and competently. Human agency is the lens of

dignity – maintaining it means AI remains a tool for human intentions, not a black box authority; it means

employees at all levels feel they can influence and question AI, thereby avoiding a dystopia of uncritical

automation. Adaptive decision-making is the modus operandi for a complex world – using data and

experimentation (often powered by AI) but guided by human insight to navigate uncertainty in an iterative,

learning-focused way. And governance and ethics are the safety rails – without them, short-term wins with AI

can lead to long-term crashes, whether through regulatory penalties or loss of stakeholder trust.

Looking ahead, the Age of AI will continue to evolve with new advancements: more multimodal AI, more

autonomous systems, more integration into daily life. Behavioural science, too, will evolve as we learn more

about how people interact with increasingly intelligent machines. Concepts like algorithmic nudges (AI shaping

human behaviour subtly) or extended cognition (humans thinking with AI aids) will grow in importance. But the

core insight of this paper is likely to endure: that the human in the loop is not a weakness to be engineered away,

but the very source of direction, purpose, and ethical judgment that technology alone cannot provide. As the

internal strategy document eloquently put it, we are witnessing “a philosophical shift: reclaiming human agency

and purpose by ensuring capabilities reflect organisational values and aspirations in a world of rapid change.” In

practical terms, this means organisations must consciously design their AI deployments to amplify human

potential and align with human values, not suppress them.

For strategists and leaders, then, the task is clear. It is to become, in a sense, behavioural engineers of

organisations – crafting structures, cultures, and systems where humans and AI together can excel. It is to

champion ethical innovation, proving that we can harness powerful technologies while keeping humanity at the

center. And it is to invest in learning and adaptation as primary capabilities, so that as new research and new

technologies emerge, the organisation can incorporate them responsibly and effectively. The organisations that

succeed in the coming years will be those that manage this integration deftly – neither falling into the trap of techno-centrism (trusting technology blindly and neglecting the people) nor the trap of techno-scepticism (fearing technology and falling behind), but finding a harmonious path of augmentation, where technology elevates people and people steer technology.

In conclusion, behavioural science offers both a caution and a promise in the age of AI. The caution is that

ignoring human factors can lead even the most advanced AI solutions to fail or cause harm. The promise is that

by embracing a human-centered approach, we can unlock the full potential of AI to create organisations that are

not only more innovative and efficient, but also more resilient, ethical, and responsive to those they serve. By

learning from the past and grounding ourselves in foundational principles of human behaviour, we equip

ourselves to shape a future where AI amplifies human wisdom and creativity rather than undermining them. In

doing so, we ensure that the Age of AI remains, fundamentally, an age of human progress and empowerment,

aligned with the values and behaviours that define our humanity.



Sources:

Investopedia – Herbert A. Simon: Bounded Rationality and AI Theorist (investopedia.com)
The Decision Lab – Daniel Kahneman profile (thedecisionlab.com)
The Decision Lab – Gerd Gigerenzer profile (thedecisionlab.com)
The Decision Lab – Gerd Gigerenzer (quote) (thedecisionlab.com)
CIPD – Our Minds at Work: Developing the Behavioural Science of HR
Medium (Link Daniel) – Structured thinking and human–machine success (chess example)
TED Talk (Brynjolfsson) – Race with the machine (freestyle chess) (blog.ted.com)
Workplace Change Collab. – Radiologist + AI outperforms either alone (wpchange.org)
New Capabilities (Internal doc) – Composite capability paradigm excerpt
New Capabilities (Internal doc) – Reclaiming human agency and purpose
New Capabilities (Internal doc) – 4D chess analogy for modern environment
New Capabilities (Internal doc) – Human–machine “dance” metaphor
AI4People (Floridi et al.) – AI ethics principles (human dignity, autonomy, etc.) (pmc.ncbi.nlm.nih.gov)
Workplace Change Collab. – Radiologist–AI hybrid sensitivity/specificity (wpchange.org)
David J. Teece – Dynamic capabilities definition (davidjteece.com)
David J. Teece – Sensing, seizing, reconfiguring (agility) (davidjteece.com)
Wikipedia – Sociotechnical systems theory (Trist et al.) (en.wikipedia.org)
P. Millerd (Blog) – Schein on learning anxiety vs survival anxiety (pmillerd.com)
Workplace Change Collab. – Ethical AI, inclusion, governance as “finishing layer”
New Capabilities (Internal doc) – AI as social-technological actor symbiosis
New Capabilities (Internal doc) – Universal AI fluency and continuous upskilling