Tuesday, 21 October 2025

Bali Pratipada—Textual Archaeology, Traditional Significance, and the Subaltern Critique

 I. Introduction: Contextualizing Bali Pratipada in the Hindu Calendar

A. Definition and Nomenclature: The Festival of Return

Bali Pratipada is a prominent Hindu festival celebrated on the first day of the Shukla Paksha (bright fortnight) in the lunar month of Kartik.1 This annual placement typically ensures that the festival falls on the fourth day of the widespread Diwali celebrations, often coinciding with Govardhan Puja.1 The term Pratipada itself implies "below the opponent's foot," a direct reference to the central event of the associated mythology.1

The festival is known by a variety of regional names that highlight its cultural integration across the subcontinent. These names include Bali Padyami (in Karnataka and Andhra Pradesh), Bali Padva (in Maharashtra and Goa), Vira Pratipada, Dyuta Pratipada 1, and Barlaj (in Himachal Pradesh).2 Crucially, in Gujarat and Rajasthan, the day is recognized as the regional traditional New Year Day in the Vikram Samvat calendar, known as Bestu Varas or Varsha Pratipada.2

The core commemoration is the notional return of the virtuous daitya (demon) King Bali, also known as Mahabali, from the netherworld (Sutala) back to Earth for a single day.1 This annual visit was granted as a boon by Bhagwan Vishnu, who had appeared in the Vamana (dwarf) avatar to subdue Bali and restore cosmic order.1

B. Scope and Purpose of the Analysis

This report undertakes a comprehensive analysis of Bali Pratipada, moving beyond its function as a celebratory occasion. The objective is to conduct a tripartite examination that includes: tracing the textual-historical evolution of the core myth in ancient Hindu scriptures; detailing its contemporary pan-Indian rituals and devotional significance; and performing a critical sociological deconstruction to understand how the narrative has been radically reinterpreted through the lens of identity politics and the subaltern question regarding caste and power dynamics.6

II. The Textual Archaeology of the Vamana-Mahabali Myth

This section establishes the textual authority and evolution of the myth, tracking its journey from abstract Vedic concepts to the formalized Puranic narrative celebrated today.

A. Vedic and Brahmana Antecedents: The Trivikrama Strides

The fundamental concepts underpinning the Vamana avatar are traceable to the earliest layers of Hindu scripture. The RigVeda (e.g., 1.22, 1.154) already celebrates Vishnu for his "Three Strides" (Trivikrama), which encompass the entire cosmos, symbolizing his universal sovereignty long before a specific avatar narrative was established.8

The myth evolved considerably in the Brahmana period, whose texts focus on explaining ritual practice. The Shatapatha Brahmana formalizes the dwarf motif, setting the stage for the Vamana narrative.9 In this older account, the Asuras claim the world, and the Gods (Devas) call upon Vamana to reclaim it. However, the Shatapatha Brahmana describes Vamana gaining the earth not by footsteps, but by acquiring as much land as he could "lie upon as a sacrifice," linking the event directly to the efficacy of the sacrificial fire (yajna).9 Crucially, this Brahmana account does not yet personalize the Asura opponent as Mahabali.9

The chronological layering of these texts demonstrates a clear developmental progression. The shift from the RigVeda's abstract, cosmic metaphor of the three strides to the Shatapatha Brahmana's focus on gaining land for a ritual altar shows that the narrative developed in parallel with the elaboration of Vedic sacrifice. The later Puranic story is an amalgamation, merging the cosmic Trivikrama concept with this sacrificial context, thereby providing a narrative explanation for divine intervention intended to restore cosmic balance (dharma).10

B. The Puranic Synthesis: Vamana and the Virtuous Daitya

The definitive, detailed account of the Vamana-Mahabali interaction is found primarily in the Srimad Bhagavata Purana (Skandha 8), which establishes Vamana as the fifth avatar of Vishnu, born to Aditi and Kashyapa.13

King Mahabali, a daitya and the grandson of the great Vishnu devotee Prahlada, is consistently depicted as powerful, generous, and highly virtuous.1 His flaw, however, was ahankara (pride) and an overreach of power that led him to conquer Svarga (heaven), thereby disturbing the established cosmic hierarchy.14 To resolve this imbalance, Vishnu appeared as Vamana, the dwarf Brahmin, and asked Bali for three steps of land.14 Despite explicit warnings from his preceptor Shukra, Bali, bound by his own generosity, agreed.8 Vamana then transformed into the cosmic giant Trivikrama, covering Earth and Heaven in two steps, leaving Bali to offer his head for the third step.4

Bali's willing surrender of his head is viewed theologically as the ultimate act of devotion.17 Because Bali was a pious devotee (a lineage traced back to Prahlada) 15, his subjugation was not an act of annihilation but a redemptive act of grace (moksha). Vishnu was pleased by his humility, granted him immortality (Chiranjivi), and designated him the sovereign ruler of the beautiful subterranean world of Sutala, promising to be his eternal guardian.2 Furthermore, Vishnu blessed Bali with the specific boon of returning to Earth annually to accept worship from his devotees, an act that sanctifies the festival of Bali Pratipada.2 This narrative structure underscores the Puranic assertion that spiritual greatness lies in actions and devotion, transcending even the daitya lineage.17

The worship mandate is affirmed in texts like the Bhavisyottara Purana, which specifically instructs devotees to consecrate and worship an image of King Bali, often made of rice grains, inside their homes on the Kartika Pratipada lunar day, confirming the festival’s ancient scriptural recognition.19

The evolution of the narrative across key texts is summarized below:

Table 1: Textual Evolution of the Vamana-Mahabali Myth

Textual Source | Period/Type | Core Narrative Element | Mahabali's Role
RigVeda (e.g., 1.22, 1.154) | Vedic (c. 1500–1200 BCE) | Vishnu's Three Strides (Trivikrama) across the cosmos, symbolic of universal reach. | Not mentioned.
Shatapatha Brahmana | Brahmana/Late Vedic (c. 700 BCE) | Vamana (dwarf) gains the Earth from the Asuras through ritual sacrifice, linking the myth to yajna. | Mentioned only as a general Asura party; not personalized as Mahabali.
Srimad Bhagavata Purana (Skandha 8) | Puranic (Post-Classical Era) | Vamana requests three steps from Bali, who is humbled and exiled to Sutala, receiving the boon of annual return and becoming a Chiranjivi. | Central figure; virtuous Daitya devotee whose surrender earns salvation.
Bhavisyottara Purana | Puranic | Stipulates the ritual consecration and worship of King Bali's image on Kartika Pratipada. | Focus on his veneration as the subject of the festival.

III. Bali Pratipada: Observance, Rituals, and Pan-Indian Significance

Bali Pratipada is celebrated as a multi-layered festival across India, blending devotional worship of King Bali and Vishnu with strong socio-economic and agrarian themes, reflecting a significant degree of regional heterogeneity.

A. Central Theological and Economic Significance

The day is fundamentally a celebration of renewed prosperity and the restoration of a virtuous reign, symbolized by the return of Bali Chakravarty.1 This mythological theme is integrated directly into the Hindu calendar: Bali Pratipada is considered one of the half-day Muhūrtas (supremely auspicious timings) of the year.2 Consequently, the day is highly regarded as auspicious for inaugurating new endeavors, launching businesses, making investments, and arranging marriages or property purchases, as new initiatives begun on this day are believed to be prosperous and successful.1

Rituals emphasize purification and domestic sanctity. Devotees undertake Abhyangasnan, an early morning bath involving an oil massage, as a compulsory rite of renewal.1 A central domestic custom involves drawing an image of King Bali, often with his wife Vindhyavali, at the center of the house floor using colorful powders, powdered rice, or cow dung.1 Offerings (Naivaidya) are performed to satisfy the hunger and thirst of the returning Bali.1

B. Regional Manifestations and Heterogeneity

The underlying myth of Bali's prosperity provides a flexible framework, adapting to suit diverse local needs, economies, and social structures.

In Maharashtra and Gujarat, the festival, known as Bali Padva, is closely linked to marital fidelity and domestic harmony.3 Wives perform aarti for their husbands, apply tilak, and pray for their longevity, while husbands reciprocate with gifts, reinforcing their relational bond.3 This tradition subtly shifts the mythological theme of Bali’s generous gift-giving into a reciprocal domestic rite.1 Simultaneously, in Gujarat and parts of Rajasthan, the day operates as Bestu Varas, the traditional New Year Day in the Vikram Samvat, highlighting its role as a period for financial and social resetting.2

In South Indian states, such as Karnataka and Tamil Nadu, the observance is deeply agrarian, timed to coincide with the post-monsoon harvest.2 Farmers celebrate Bali Padyami by performing rituals centered on agricultural productivity and fertility. These include Gopuja (worship of the cow), Kedaragauri vratam, and Gouramma puja (worship of Goddess Parvati and her forms).2 The cowshed (goushala) is ceremoniously cleaned, and a triangular image of Bali made from cow dung is decorated with Kolam and worshipped, directly integrating the myth of the prosperous king with harvest rites.2

Even in the Himalayan regions like Himachal Pradesh, where it is known as Barlaj (a corruption of "Bali Raj"), the festival maintains a dual focus on Vishnu and Bali.2 Here, the observance extends to honoring tools: farmers abstain from using the plough, and artisans worship their implements in deference to Vishvakarma.2

The varying emphasis across regions (marital bonds, financial renewal, harvest worship, tool veneration) confirms that the core narrative of Bali’s virtuous reign and return provides a consistent theme of renewal and success. The festival's placement in the Kartik month, immediately following the agricultural season, naturally integrates the mythological prosperity theme into the cycles of both economic and agricultural life.2

IV. Philosophical and Ethical Dimensions of the Surrender

The Puranic narrative of Vamana and Bali is not simply a tale of mythological warfare; it is a profound ethical dialogue concerning the proper role of power, the necessity of humility, and the supreme path to spiritual liberation.

A. The Doctrine of Divine Humbling (Leela)

Vamana’s decision to appear as a diminutive, humble Brahmin symbolizes divine humility and the idea that righteousness and wisdom hold ultimate power, surpassing material strength or military might.10 The purpose of Vamana’s Lila (divine play) was defined as restoring cosmic order (dharma), not merely vanquishing a wicked foe. Vamana’s strategic intervention is viewed as a demonstration of Vishnu's commitment to maintaining universal balance.10

Bali’s downfall stemmed not from malice but from his ahankara (ego), which led to his overreach.3 The symbolic act of Vamana covering the universe and placing the final step on Bali’s head is interpreted as an act of divine grace designed to crush the ego, not the individual devotee.11 The narrative establishes a concept known as the paradox of devotional defeat: Bali, though materially defeated and exiled, achieves an unparalleled spiritual victory.2

The theological implication is that Vishnu used deception to test and ultimately elevate Bali.14 By forcing the King to give up all worldly possessions, power, and pride, Vamana facilitated Bali’s attainment of moksha (liberation) through complete self-surrender (sharanagati).11 This illustrates that true spiritual fulfillment lies in non-attachment and devotion, rendering material loss spiritually insignificant.18

B. Ethics of Contentment and Generosity

Vamana’s philosophical exchange with Bali emphasizes the doctrine that eternal contentment is the path to liberation, while unquenchable desire leads to perpetual suffering. Vamana points out that a dissatisfied individual who craves more than three steps of land will never be satisfied, even if granted all three worlds.21

The story also champions the virtues of generosity and loyalty. Bali’s willingness to surrender his head after losing Earth and Heaven is celebrated as the peak of integrity and adherence to his promise.14 This act teaches that true leadership is defined by selflessness and loyalty to divine principles, even when facing severe personal consequence.3

V. Critical Reinterpretation: Mahabali and the Subaltern Question

The explicit reference to critical social analysis necessitates an examination of the socio-political reinterpretation of the Vamana-Mahabali myth, particularly its localization in Kerala and its adoption by identity movements engaged with caste and subaltern discourse.

A. The Geopolitical and Ideological Shift

Traditional Puranic sources, specifically the Srimad Bhagavata Purana, locate the events involving Vamana and Bali near the Narmada River, in the region of Bhṛgukaccha (modern Bharuch, Gujarat).13 The localization of the myth in Kerala, where Mahabali (known as Maveli) is revered as the state’s beloved utopian ruler, represents a cultural appropriation and shift over centuries.23

An essential aspect of this critique involves the timing. Bali Pratipada occurs in the month of Kartik (Oct/Nov), while Kerala’s Onam occurs earlier in Chingam (Aug/Sept).25 The popular folk belief that Mahabali returns annually during Onam, though central to the local narrative ("Maaveli Naadu Vaaneedum Kaalam"), is noted by scholars as a cultural development that lacks support in ancient textual authority.25

The creation of the utopian narrative connecting Mahabali's just rule with Onam is historically recent, traceable to the 20th century. Major Malayalam literary figures preceding this era were conspicuously silent on Mahabali’s rule over Kerala.27 This transformation accelerated with the work of social reformers like Sahodaran Ayyappan, whose 1934 poem helped solidify Mahabali as the hero of Onam.23 This modern dating indicates that the myth was actively re-engineered to address socio-political imperatives, such as emerging anti-caste and self-respect movements.23

B. Framing the Conflict: Mahabali as Subaltern Hero

Within this subaltern framework, the Mahabali myth is radically inverted. Mahabali is framed as an indigenous, egalitarian, and casteless "Dravidian" or Dalit-Bahujan monarch.23 His reign is remembered as a golden age of "absolute equality, honesty, and prosperity".23 In some ideological circles, he is even identified as a "Buddhist egalitarian king" or "Comrade Mahabali".27

Consequently, Vamana is cast as the archetypal antagonist. He is portrayed as a cunning "'upper-caste' Brahmin" or an external "Aryan" force.23 His deception is viewed as a violent act of cultural colonization intended to destroy the indigenous, egalitarian social structure and impose the oppressive varna or caste system.28

Drawing on theoretical frameworks from Subaltern Studies 7, this reinterpretation uses the myth as a powerful metaphor for resistance against dominant elitism and cultural hegemony.6 The act of venerating Maveli thus becomes an existential tactic to disrupt established codes and assert the "unyielding spirit of a moral protagonist who remains resistant to full colonisation".6

C. Contradictions in Identity: The Politics of Mythological Race and Caste

The subaltern critique often focuses on Vamana's Brahmin appearance, framing the conflict along simplistic Aryan/Dravidian or Brahmin/Dalit lines.23 However, traditional Puranic genealogy states that Mahabali was a descendant of the sage Kashyapa 15, a lineage that places him historically within a Brahminical framework, despite his identity as an Asura.29 Furthermore, traditional paintings sometimes depict Mahabali with the choti (tuft) associated with Brahmins.29

This intellectual tension demonstrates that the myth serves as a flexible cultural tool. When the subaltern argument reframes Vamana as the Brahminical/Aryan colonizer, the goal is not strict textual accuracy (like Bali's lineage) but establishing a clear symbolic antagonist necessary for sociopolitical mobilization and the critique of institutional hierarchy. The power of the Vamana-Mahabali narrative is its capacity to simultaneously sustain devotional surrender (Puranic tradition) and politically charged resistance (Subaltern critique).

Table 3: Comparison of Traditional and Subaltern Interpretations of the Vamana-Bali Dyad

Interpretive Framework | King Mahabali/Maveli | Vamana/Vishnu Avatar | The Event (Three Steps)
Traditional/Puranic View | A pious, generous Daitya king afflicted by ahankara. A devotee saved by divine grace through ultimate surrender. | The merciful Preserver of Dharma; acts strategically (Lila) to restore cosmic balance and grant salvation (moksha). | Divine humbling of ego; a test of devotion; the defeat of pride leads to spiritual victory and eternal protection in Sutala.11
Subaltern/Critical View | An ideal, casteless (Dravidian/Dalit-Bahujan) ruler who governed a utopian society. A hero unjustly displaced and victimized by deceit. | A cunning Brahmin/Aryan figure representing external, invading power, imposing the oppressive varna system.23 | An act of Brahmanical deception, cultural colonization, and the violent destruction of an indigenous, egalitarian social order.30

VI. Conclusion: Synthesis and Enduring Legacy

A. The Bifurcation of the Bali Narrative

Bali Pratipada is a complex cultural phenomenon resulting from the confluence of ancient Vedic ritual, Puranic theology, and modern political interpretation. The festival effectively functions as two distinct narratives based on the geographical and ideological context. In its pan-Indian expression (Bali Pratipada in Kartik), it remains a devotional and economic festival centered on the triumph of humility, the grace of divine intervention, and the renewal of prosperity.1

However, the localized Maveli narrative, central to the subaltern critique, utilizes the same mythological framework to articulate themes of historical injustice, resistance to dominant cultural norms, and the aspiration to reclaim an egalitarian past.23

B. The Cyclical Nature of Myth and Meaning

The enduring power of the Vamana-Mahabali story lies in its inherent capacity to adapt. It provides a foundational myth that simultaneously validates the established cosmic order and offers a structural template for social critique and the expression of subaltern identity.12 Bali Pratipada demonstrates that mythological narratives are not static historical records but dynamic cultural assets capable of sustaining radically divergent, yet equally passionate, meanings depending on the theological, economic, or political lens through which they are observed.


________________________________________________________________________________________________________________

 

Annexure: Sources and Research Material

  1. Definition, date, general significance, nomenclature, and auspicious timings of Bali Pratipada/Padva (Kartik month, Diwali) and its recognition as the regional New Year.
  2. Regional names like Bali Padva, Bali Padyami, Vira Pratipada, Dyuta Pratipada, Barlaj, Bestu Varas/New Year. Auspicious Muhūrtas. Bali's boon of annual return. Rituals like drawing Bali's image, Gopuja, tool worship. Bali's moksha through surrender.
  3. Mahabali's ahankara (pride) leading to his downfall. Traditional rituals celebrating marital fidelity and reciprocity in Maharashtra and Gujarat.
  4. Placement of Bali Pratipada on the fourth day of Diwali, coinciding with Govardhan Puja, and its connection to the Vamana-Mahabali story.
  5. Regional New Year (Bestu Varas), auspicious timing, rituals like Abhyangasnan, agrarian rites in Karnataka/Tamil Nadu: Gopuja, Gouramma puja, and creating a triangular Bali image from cow dung.
  6. The subaltern critique framing Mahabali as a moral protagonist whose spirit is resistant to full colonization, used as an existential tactic against dominant elitism.
  7. The theoretical foundation of Subaltern Studies concerning critiques of elitism, power dynamics, identity, and modernization.
  8. References to Vishnu's "Three Strides" (Trivikrama) in Vedic texts (RigVeda) and its later incorporation into the Puranic Vamana narrative.
  9. The Shatapatha Brahmana account detailing Vamana (dwarf) gaining the earth from Asuras through ritual sacrifice, predating the personalization of the opponent as Mahabali.
  10. Vamana's strategic Lila (divine play) to restore cosmic order (dharma), using cunning and wisdom over brute force.
  11. The philosophical symbolism of Vamana crushing Bali's ego; the concept that surrender (sharanagati) leads to salvation (moksha) and spiritual victory.
  12. The mythological concept that narratives demonstrate the cyclical nature of cosmic balance, wealth renewal, and possess the inherent capacity to sustain divergent meanings.
  13. The Srimad Bhagavata Purana's detailed account, locating the Vamana-Bali events near the Narmada River at Bhṛgukaccha.
  14. Details of the Puranic account: Bali is exiled to Sutala, Vishnu promises protection/guardianship, Bali’s devotion is tested, and his willing surrender of his head for the third step.
  15. Mahabali's traditional Puranic genealogy as the grandson of Prahlada, a pious daitya king, and a descendant of the sage Kashyapa.
  16. Description of King Bali's virtues (benevolent, prosperous, just) and his flaw of overreach (conquering the three worlds).
  17. Theological assertion that greatness lies in devotion and actions, transcending birth lineage (such as daitya status).
  18. Spiritual significance of surrender: relinquishing material wealth and power for ultimate devotion and fulfillment.
  19. Instructions from the Bhavisyottara Purana for the ritual consecration and worship of King Bali's image made of rice grains on Kartika Pratipada.
  20. Celebration of renewed prosperity, the ceremonial cleaning of the cowshed, and Gopuja.
  21. Vamana's philosophical lesson that dissatisfaction leads to the cycle of birth and death, while humility and contentment lead to moksha.
  22. Analysis that Mahabali's downfall was rooted in his pride and overreach (ahankara).
  23. Modern, subaltern reinterpretation framing Mahabali as an indigenous, casteless "Dravidian" monarch ruling a utopia, contrasted with Vamana as the "upper-caste" or "Aryan" antagonist.
  24. The popular folk memory of Maveli/Mahabali’s rule as a utopian past of absolute equality, honesty, and prosperity.
  25. Scholarly note that Bali Pratipada is celebrated in Kartik (Oct/Nov), contradicting the popular folk belief that Mahabali returns annually during Onam (Chingam/Aug-Sept).
  26. Observation of the "conspicuous silence" of pre-20th-century Malayalam literary figures regarding Mahabali's rule over Kerala.
  27. Localization traced to the 20th century, specifically crediting the 1934 poem by social reformer Sahodaran Ayyappan for solidifying Mahabali as the hero of Onam.
  28. Interpretation by some leftist thinkers that Mahabali was an "egalitarian Buddhist king" or "Comrade Mahabali."
  29. Evidence from traditional paintings and Mahabali's lineage used to contradict the simplistic Brahmin/Dalit framing in the subaltern critique.
  30. The use of the Vamana-Mahabali myth as a structural template for social critique, viewing Vamana's act as deception and the imposition of hierarchy.
  31. The perspective that Vamana's act was the violent destruction of an indigenous, egalitarian social order.
  32. Principles of corporate and personal leadership focusing on selective memory, curatorial practice, forgetting ego, and remembering core purpose.

Friday, 10 October 2025

The ‘AND Generation’: India’s Zoomers and the New Grammar of Ambition

At a recent campus event, a student asked with disarming clarity: “Why must I choose between security and freedom?”

The room went silent—not because it was naïve, but because it felt like the real question of our times.

For India’s Gen Z—digital natives who have grown up in a landscape of both abundance and precarity—the old binaries no longer fit. They want careers that pay well and feel purposeful; jobs that offer structure and space; leaders who are professional and personal. The “either–or” logic that governed the professional lives of their parents has given way to an “and–also” consciousness.

This is the AND Generation—fluent in contradiction, but uninterested in being trapped by it.

Beyond the Contradiction Frame

Recent writing about India’s young workforce often circles around paradox: ambition versus anxiety, hustle versus hierarchy, global vision versus local constraint. Those tensions are real. But they also miss a deeper movement.

Zoomers are not caught between poles—they are learning to compose across them. What earlier generations experienced as conflict, they inhabit as continuum. This is the most significant psychological and social shift of our moment.

Where an older worker might agonize between a stable job and an entrepreneurial dream, the Zoomer imagines a portfolio career—joining a fintech by day, building a side hustle by night, volunteering over the weekend.

Where her parents measured loyalty in years, she measures it in cycles of meaning: twelve or eighteen months of learning, contribution, and then reinvention.

This fluidity doesn’t come from fickleness. It comes from living in a world where technology, networks, and uncertainty have fused permanence and impermanence into a single lived rhythm.


The Abundance Mindset

Generations shaped by scarcity learn to choose; generations shaped by exposure learn to curate. India’s Zoomers grew up in a time when the internet collapsed distance, when a smartphone placed both Harvard and Hardik Pandya in their pocket.

Abundance, paradoxically, produces anxiety—but also possibility. It dissolves the old virtue of singularity. The defining value is now optionality: keeping multiple doors ajar, multiple selves alive.

That’s why the Zoomer’s résumé looks like a collage—designer, coder, content creator, sustainability intern, crypto investor. It’s not confusion; it’s composition.

In this world, success is no longer a ladder—it’s a web. Progress happens sideways as often as upward.


The New Equations of Desire

“I want this and that” is not indecision; it’s the declaration of a generation raised on ecosystems, not hierarchies.

1. Freedom and Feedback

They want autonomy, but not abandonment. The worst sin a manager can commit is indifference. What Zoomers crave is responsive structure—space to experiment with scaffolding that catches them if they fall. The best leaders are those who act less like bosses and more like orchestrators of context.

2. Technology and Touch

This is the most connected yet loneliest cohort in history. They use AI for productivity, but seek authenticity in relationships. Offices that offer both—digital flexibility and human warmth—will win their trust.

3. Purpose and Paycheck

Zoomers are idealists with calculators. They will join NGOs but expect competitive compensation; they’ll work for a global brand but ask about carbon footprint. For them, money and meaning aren’t opposites—they’re dual metrics of value.

4. Individual and Collective

Social media may have amplified the self, but it has also bred a hunger for community. Zoomers flourish in teams that feel like tribes, where contribution and visibility coexist. Hierarchies that flatten voices will struggle; those that enable participation will thrive.


The System Shock

The “AND” generation is forcing institutions to confront their own outdated binaries.

Loyalty and learning: The corporate obsession with retention must give way to cultivating repeat allegiance—designing 18-month arcs where talent learns, leaves, and returns richer.

Performance and possibility: Instead of chasing promotions, they seek range. Growth is measured not in title but in texture—how many domains they can touch.

Work and life: For them, the two are intertwined. A Zoomer on a coding sprint at midnight may also be editing a music reel. Productivity now flows in pulses, not shifts.

Risk and safety: The absence of a social safety net still anchors Indian choices. But the smartest organizations are building internal buffers—micro-sabbaticals, side-project grants, flexible tenures—that make experimentation survivable.

In short, they’re teaching us that the future of work isn’t hybrid by location alone—it’s hybrid by logic.


Reframing Work: From Hierarchy to Harmony

The challenge is not how to manage them—it’s how to re-architect systems around them.

Workplaces built on 20th-century assumptions—linear careers, control systems, loyalty as tenure—will appear unintelligible to this generation. They understand reputation economies, not reporting lines. They operate on feedback loops, not annual appraisals. They seek psychological safety before stability, and narrative coherence before hierarchy.

For leaders, the invitation is to move from control to composition: creating organizations that can hold multiplicity without losing focus. Think of the modern workplace less as a pyramid and more as a soundscape—multiple instruments, improvising around a shared rhythm.


From Contradiction to Composition

India’s youth are not rejecting structure; they’re re-imagining it as fluid scaffolding. Their rebellion is not against authority but against arbitrariness. They don’t want to burn down the system; they want it to make more sense.

This shift from contradiction to composition carries profound implications for how we design everything—from education to policy to leadership.

It means replacing the rhetoric of “balance” with the practice of integration. It means accepting that consistency is no longer a virtue—coherence is.

Zoomers are not torn between global and local, personal and professional, idealism and pragmatism. They are weaving these strands into something new—an emergent tapestry of identity that is both restless and rooted, self-directed and socially aware.


The Leadership Imperative

The leaders who will matter in this era are not the ones who give answers but those who frame better questions:

How do we make stability feel dynamic?

How do we turn flexibility into discipline?

How do we let people be multiple, yet united?

Leadership today is less about vision and more about sense-making—helping people navigate paradox with grace.

The old playbook of authority is dying; the new one is orchestral. The task is not to eliminate tension but to convert it into music.


Tailpiece: The Grammar of ‘And’

India’s young workers are asking for something radical yet reasonable: the right to be whole.

They want to build start-ups and have savings, to explore and belong, to lead and learn, to win and wonder.

If we listen carefully, the question echoing through every classroom and cubicle is not “What do I choose?” but “What can I combine?”

In that question lies the future of work—and perhaps, of India itself.


______________________________________________________________________________


References and Acknowledgements

This essay draws on conversations and field insights from A New Corporate Mantra (2024) and from research dialogues across Indian campuses and start-up ecosystems. It also synthesizes perspectives from the Requisite Organization, human-systems design, and current generational studies on post-scarcity work culture (Twenge, 2023; Deloitte Gen Z Survey, 2024; World Economic Forum Future of Jobs Report, 2025).


Monday, 6 October 2025

Behavioural Sciences in the Age of AI

Unpublished Draft Paper (Satish Pradhan, October 2025)


Abstract

In an era where artificial intelligence (AI) is transforming work, organisations must blend insights from behavioural science with technological innovation to thrive. This paper explores “Behavioural Sciences in the Age of AI” by integrating foundational theories of human behaviour with emerging themes in AI, human–machine teaming, organisational transformation, and dynamic capability development. We provide a historical context for the evolution of behavioural science and its intersection with technology, from early socio-technical systems thinking to modern cognitive science and behavioural economics. Key theoretical contributions are discussed – including Herbert Simon’s bounded rationality, Daniel Kahneman and Amos Tversky’s heuristics and biases, Gerd Gigerenzer’s ecological rationality, Fred Emery and Eric Trist’s socio-technical systems, David Teece’s dynamic capabilities, Edgar Schein’s organisational culture, Erik Brynjolfsson and Andrew McAfee’s “race with the machine” paradigm, Gary Klein’s naturalistic decision-making, and Luciano Floridi’s digital ethics – highlighting their relevance in designing human–AI collaboration. We build upon an internal strategic framework – the composite capability paradigm and 2025+ capability stack – which posits that future-proof organisations must orchestrate human intelligence, machine cognition, and agile interfaces within a purpose-driven, values-grounded architecture. By situating this paradigm in the broader academic literature, we demonstrate how purpose and trust, ethical AI, digital fluency, human agency, adaptive decision-making, and robust governance become critical enablers of competitive advantage in the AI age. Real-world examples from health, public services, business, and government illustrate how behavioural insights combined with AI are enhancing decision quality, innovation, and organisational resilience. The paper argues for a rigorous yet human-centric approach to AI integration – one that leverages behavioural science to ensure technology serves human needs and organisational values. We conclude that the synthesis of behavioural science and AI offers a strategic path to reclaiming human agency and purpose in a world of rapid technological change, enabling organisations to adapt ethically and effectively in the age of AI.


Introduction

The rise of advanced AI has catalysed an inflection point in how organisations operate, decide, and evolve.

Today’s business environment has “transitioned from a predictable game of checkers to a complex, live-action

role-play of 4D chess” – an apt metaphor for the unprecedented complexity and dynamism that leaders face. In

this new game, even the “rulebook” changes continuously, rendering many traditional strategies and

organisational models obsolete. The convergence of rapid technological change with other disruptive forces

(such as globalisation, climate risks, and shifting workforce expectations) creates interconnected pressures that

demand integrated responses. As a result, organisations must fundamentally rethink their capabilities and

frameworks for decision-making. This paper contends that behavioural science, with its rich understanding of

human cognition, emotion, and social dynamics, offers essential principles for guiding this reinvention in the

age of AI.


The Imperative for Integration

Artificial intelligence, once a futuristic concept, is now embedded in core business processes across industries.

AI systems not only execute tasks or analyse data; increasingly, they function as “social-technological” actors

that form symbiotic relationships with humans. This blurring of the line between human and machine roles

raises fundamental questions about how we design work and make decisions: How do we ensure that AI

augment – rather than override – human judgment? In what ways must human cognitive biases, limitations, and

strengths be considered when deploying AI tools? How can organisations foster trust in AI systems while

preserving human agency and accountability? These questions sit at the intersection of behavioural science

(which examines how humans actually behave and decide) and technology management.

Historically, advances in technology have forced parallel evolutions in management and organisational

psychology. For instance, the introduction of electric motors in factories in the early 20th century did not yield

productivity gains until workflows and management practices were fundamentally redesigned decades later.

Today, we may be in a similar transitional period with AI: simply overlaying intelligent algorithms onto old

organisational structures is inadequate. Instead, as Erik Brynjolfsson observes, thriving in the “new machine

age” requires reshaping systems and roles to “race with the machine” rather than against it. This is a behavioural

and organisational challenge as much as a technical one. Leaders must guide their teams through “radical

unlearning” of outdated assumptions and foster a culture of continuous learning and adaptation. Edgar Schein

noted that effective transformation often demands addressing “learning anxiety” – people’s fear of new methods

– by cultivating sufficient “survival anxiety” – the realisation that failing to change is even riskier. In the context

of AI, this means creating a sense of urgency and purpose around AI adoption, while also building

psychological safety so that employees are willing to experiment with and trust new tools.


Behavioural Science and AI: A Convergence

Behavioural science spans psychology, cognitive science, behavioural economics, and sociology – disciplines

that have illuminated how humans perceive, decide, and act. AI, on the other hand, often operates on algorithms

aimed at optimal or rational outcomes. This creates a potential tension: AI might make recommendations that

are theoretically optimal, but humans might not accept or follow them due to cognitive biases, trust issues, or

misaligned values. Integrating behavioural science means acknowledging and designing for the reality of human

behaviour in all its richness and boundedness. For example, AI systems in hiring or criminal justice need to

account for issues of fairness and implicit bias – areas where social psychology provides insight into human

prejudice and decision bias. In consumer-facing AI (like recommendation engines or digital assistants),

understanding heuristics in user behaviour (from research by Daniel Kahneman, Amos Tversky, and others) can

improve design to be user-friendly and nudge positive actions. In high-stakes environments like healthcare or

aviation, the field of human factors and cognitive engineering (informed by behavioural science) has long

emphasised fitting the tool to the human, not vice versa.

Crucially, behavioural science also guides organisational behaviour and change management. As companies

implement AI, there are cultural and structural changes that determine success. Who “owns” decisions when an

algorithm is involved? How do teams collaborate with AI agents as teammates? What training and incentives

drive employees to effectively use AI tools rather than resist them? These questions invoke principles from

organisational psychology (motivation, learning, team dynamics) and from socio-technical systems theory. The

latter, pioneered by Emery and Trist in the mid-20th century, argued that you must jointly optimise the social

and technical systems in an organisation. That insight is strikingly applicable today: an AI solution will fail if it

is imposed without regard for the social system (people’s roles, skills, norms), and conversely, human

performance can be amplified by technology when designed as an integrated system.

This paper aims to bridge the past and present – anchoring cutting-edge discussions of AI and dynamic

capabilities in the timeless truths of behavioural science. We will review key theoretical foundations that inform

our understanding of human behaviour in complex, technology-mediated contexts. We will then propose a

synthesised framework (building on the composite capability paradigm and capability stack developed in an

internal strategy paper) for conceptualising how human and AI capabilities can be orchestrated. Finally, we

translate these ideas into practice: how can organisations practically build trust in AI, nurture human–machine

collaboration, uphold ethics and inclusion, and develop the dynamic capability to continuously adapt? By

examining illustrative examples from domains such as healthcare, public policy, business, and government, we

demonstrate that integrating behavioural science with AI is not a theoretical nicety but a strategic necessity. The

outcome of this integration is a new kind of enterprise – one that is technologically empowered and human-

centric, capable of “reclaiming human agency and purpose” even as algorithms become ubiquitous.


Literature Review: Foundations of Behavioural Science and Technology Interaction

Cognitive Limits and Decision Biases

Modern behavioural science began as a challenge to the notion of humans as perfectly rational actors. Herbert

A. Simon, a polymath who straddled economics, psychology, and early computer science, was pivotal in this

shift. Simon introduced the concept of bounded rationality, arguing that human decision-makers operate under

cognitive and information constraints and thus seek solutions that are “good enough” rather than optimal. He

famously coined the term “satisficing” to describe how people settle on a satisfactory option instead of

exhaustively finding the best. Simon’s insight – that our minds, like any information-processing system, have

limited capacity – has direct parallels in AI. In fact, Simon was an AI pioneer who, in the 1950s, built some of

the first software to mimic human problem-solving. The bounded rationality concept laid the groundwork for

behavioural economics and decision science, highlighting that if AI tools are to support human decisions, they

must account for our finite attention and memory. For example, too much information or too many choices can

overwhelm (a phenomenon later popularised as cognitive overload or the paradox of choice), so AI systems

need to be sensitive to how recommendations or data are presented to users – an idea reinforced by the

heuristics-and-biases research tradition.
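To make the contrast concrete, here is a minimal, hypothetical Python sketch of satisficing versus exhaustive optimisation; the random scores and the aspiration threshold are illustrative assumptions, not drawn from Simon's work or from this paper.

```python
import random

random.seed(42)

# A hypothetical decision problem: each option has a "utility" score between 0 and 1.
options = [random.random() for _ in range(1000)]

def optimise(options):
    """Classical rationality: inspect every option and return the best one."""
    return max(options)

def satisfice(options, aspiration=0.9):
    """Bounded rationality (Simon): stop at the first option that is 'good enough'."""
    for inspected, value in enumerate(options, start=1):
        if value >= aspiration:
            return value, inspected
    return max(options), len(options)  # nothing cleared the bar: fall back to the best seen

best = optimise(options)
chosen, looked_at = satisfice(options)
print(f"optimal choice: {best:.3f} after inspecting all {len(options)} options")
print(f"satisficed choice: {chosen:.3f} after inspecting only {looked_at} options")
```

The point of the sketch is simply that the satisficer pays a small cost in outcome quality in exchange for a large saving in search effort, which is the trade-off bounded rationality describes.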

Daniel Kahneman and Amos Tversky carried the torch forward by cataloguing the systematic heuristics (mental

shortcuts) and biases that affect human judgment. Their work demonstrated that humans deviate from classical

rationality in predictable ways – we rely on intuitive System 1 thinking (fast, automatic) which can be prone to

errors, as opposed to the more deliberate System 2 thinking. They identified biases like availability

(overestimating the likelihood of events that come readily to mind), confirmation bias (seeking information that

confirms prior beliefs), loss aversion (weighing losses more heavily than equivalent gains), and numerous

others. Kahneman’s influential book Thinking, Fast and Slow (2011) synthesised these ideas for a broad

audience, cementing his reputation as “the father of behavioural science”. The implication for AI and human-

machine teaming is profound: AI can either mitigate some human biases or amplify them, depending on design.

For instance, algorithmic decision aids can counteract certain biases by providing data-driven forecasts (helping

humans overcome intuition that might be flawed), but if not carefully implemented, they might also lull humans

into automation bias (over-reliance on the AI, assuming it is always correct) or confirmation bias (the AI might

learn from human decisions that are biased and reinforce them). An understanding of cognitive biases has thus

become vital in AI ethics and design – e.g. ensuring that an AI’s explanations don’t trigger biased reasoning or

that its user interface nudges appropriate attention.

Not all scholars agreed that human deviation from economic rationality was truly irrational. Gerd Gigerenzer, a

prominent psychologist, offered a counterpoint with the concept of ecological rationality. Gigerenzer argues that

heuristics are not just “biases” or flaws; rather, they are often adaptive responses to real-world environments. In

his view, the success of a decision strategy depends on the context – a heuristic that ignores certain information

can actually outperform complex models in low-information or high-uncertainty situations. He demonstrated,

for example, that simple rules like the “recognition heuristic” (preferring options one recognises over those one

doesn’t) can yield surprisingly accurate decisions in certain domains. Gigerenzer has been a strong critic of

Kahneman and Tversky’s emphasis on biases, cautioning that labeling human thinking as “irrational” in lab

experiments misses how humans have adapted to their environments. He suggests that rationality should be seen

as an adaptive tool, not strictly bound by formal logic or probability theory. This perspective is highly relevant

when considering AI-human interaction: rather than always trying to “debias” humans into perfect logicians,

sometimes the better approach is to design technology that complements our natural heuristics. For example,

decision dashboards might highlight key information in ways that align with how experts naturally scan for cues

(leveraging heuristics developed through experience), or AI might handle aspects of a task that humans are

known to do poorly at (like very large-scale calculations) while leaving intuitive pattern recognition to the

human. Gigerenzer’s work reminds us that context matters – a theme also echoed in machine learning through

the “no free lunch” theorem (no one model is best for all problems). In practice, it means organisations should

strive for human-AI systems where each does what it is comparatively best at – as one Gigerenzer quote puts it,

“intelligent decision making entails knowing what tool to use for what problem”.
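As a purely illustrative sketch (the cities and the "recognition memory" below are invented, not taken from Gigerenzer's studies), the recognition heuristic can be expressed in a few lines of Python:

```python
import random

# Invented "recognition memory": cities this decision-maker happens to have heard of.
recognised = {"Mumbai", "Delhi", "Berlin"}

def which_is_larger(city_a, city_b, fallback):
    """Recognition heuristic: if exactly one city is recognised, judge it the larger one.
    Only when both or neither are recognised do we fall back on other cues."""
    a_known, b_known = city_a in recognised, city_b in recognised
    if a_known and not b_known:
        return city_a
    if b_known and not a_known:
        return city_b
    return fallback(city_a, city_b)

# The fallback here is a coin flip; in practice it would be further cues or data.
print(which_is_larger("Delhi", "Bielefeld", lambda a, b: random.choice([a, b])))
```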

Gary Klein, another figure in the decision sciences, provides additional nuance with his studies of naturalistic

decision-making (NDM). While Kahneman often highlighted errors in human judgment using tricky puzzles or

hypothetical bets, Klein studied experts (firefighters, pilots, doctors) making high-stakes decisions under time

pressure. He found that these experts rarely compare options or calculate probabilities in the moment; instead,

they draw on experience to recognise patterns and likely solutions – a process he described in the Recognition-

Primed Decision model. Klein and Kahneman once famously debated, but eventually co-authored a paper (“A

Failure to Disagree”) noting that their perspectives actually apply to different contexts: in high-validity

environments with opportunities to learn (e.g. firefighting, where feedback is clear and experience builds

genuine skill), intuition can be remarkably effective; in other cases, intuition can mislead. Klein’s emphasis on

tacit knowledge and skilled intuition has implications for AI: organisations should be cautious about completely

displacing human experts with algorithms, especially in domains where human expertise encodes nuances that

are hard to formalise. Instead, AI can be used to support expert intuition by handling sub-tasks or offering a

“second opinion.” For example, in medical diagnosis, an experienced radiologist might quickly intuit a

condition from an X-ray; an AI can provide a confirmatory analysis or flag something the radiologist might have

overlooked, with the combination often proving more accurate than either alone. Indeed, a 2023 study in

European Urology Open Science showed that a radiologist + AI hybrid approach achieved higher sensitivity and

specificity in detecting prostate cancer from MRI scans than either the radiologist or AI alone, demonstrating

how “a combination of AI and evaluation by a radiologist has the best performance”. This is a concrete example

of human intuition and AI analysis working in tandem – aligning with Klein’s insights that experienced human

judgment has unique strengths that, rather than being replaced, should be augmented by AI.
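A minimal sketch of such a "second opinion" workflow might look like the following; the thresholds, suspicion scores, and case identifiers are invented for illustration and do not reproduce the cited study.

```python
def triage(case_ids, ai_scores, human_flags, ai_threshold=0.7):
    """Send a case for follow-up if EITHER the expert flags it OR the AI score is high.
    This mirrors the complementarity idea: each party can catch what the other misses."""
    follow_up = set()
    for case_id in case_ids:
        if human_flags.get(case_id, False) or ai_scores[case_id] >= ai_threshold:
            follow_up.add(case_id)
    return follow_up

# Hypothetical screening batch.
case_ids = ["c1", "c2", "c3", "c4"]
human_flags = {"c2": True}                                # expert intuition flags c2
ai_scores = {"c1": 0.2, "c2": 0.4, "c3": 0.9, "c4": 0.6}  # model suspicion scores

print(triage(case_ids, ai_scores, human_flags))  # -> {'c2', 'c3'}
```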


Socio-Technical Systems and Organisational Adaptation

As early as the 1950s, researchers like Fred Emery, Eric Trist, and others at the Tavistock Institute in London

began examining organisations as socio-technical systems (STS) – meaning any workplace has both a social

subsystem (people, culture, relationships) and a technical subsystem (tools, processes, technologies), and these

must be designed together. Trist and colleagues, working with British coal miners, noted that introducing new

machinery without altering work group norms and job designs led to suboptimal outcomes, whereas redesigning

work to give teams autonomy and better align with the new tech yielded significant productivity and satisfaction

gains. They coined “socio-technical” to emphasise this joint optimisation. Another famous work by Emery &

Trist (1965) introduced the idea of different environmental turbulences that organisations face, from placid to

turbulent fields, and argued that in more turbulent (fast-changing, unpredictable) environments, organisations

need more adaptive, open strategies. This foreshadowed today’s VUCA (volatility, uncertainty, complexity,

ambiguity) world. The lesson is that successful adoption of any advanced technology (like AI) isn’t just about

the tech itself, but about how work and human roles are reconfigured around it. Emery and Trist would likely

view AI integration as a prime example of STS in action: the firms that excel will be those that thoughtfully

redesign job roles, team structures, and communication patterns in light of AI capabilities – rather than those

who treat AI implementation as a purely technical upgrade. Indeed, current discussions about human-centric AI

and AI ergonomics are essentially socio-technical perspectives, emphasising user experience, change

management, and organisational context in deploying AI.

Parallel to STS theory, the field of organisational development (OD) and culture was greatly influenced by

Edgar Schein. Schein’s model of organisational culture delineated culture as existing on three levels: artifacts

(visible structures and processes), espoused values (strategies, goals, philosophies), and basic underlying

assumptions (unconscious, taken-for-granted beliefs). According to Schein, transforming an organisation – say,

to become more data-driven or AI-friendly – isn’t simply a matter of issuing a new policy or training people on

a new tool. It often calls for surfacing and shifting underlying assumptions about “how we do things.” For

example, a company might have an implicit assumption that good decisions are made only by seasoned

managers, which could lead to resistance against algorithmic recommendations. Changing that might require

leaders to model openness to data-driven insights, thereby altering assumptions about authority and expertise.

Schein also introduced the concept of learning culture and noted that leaders must often increase “survival

anxiety” (the realisation that not adopting, say, digital tools could threaten the organisation’s success or the

individual’s job relevance) while reducing “learning anxiety” (the fear of being embarrassed or losing

competence when trying something new). In the AI era, this is highly salient: employees may fear that AI will

render their skills obsolete or that they won’t be able to learn the new tools (learning anxiety), even as the

organisation’s competitive survival may depend on embracing AI (survival anxiety). Effective leaders use clear

communication of purpose – why adopting AI is critical – and create supportive environments for upskilling to

resolve this tension. We see enlightened companies investing heavily in digital fluency programs, peer learning,

and even redesigning performance metrics to encourage use of new systems rather than punish initial drops in

efficiency as people climb the learning curve. These practices reflect Schein’s principles of culture change.

Another relevant Schein insight is about ethical and cultural alignment. He argued that organisations should

have cultures that reinforce desired behaviours, and that when you introduce a foreign element (be it a new CEO

or a new technology), if it clashes with entrenched culture, the culture usually wins unless actively managed.

Thus, if a company values high-touch customer service as part of its identity, introducing AI chatbots needs to

be done in a way that augments that value (e.g., bots handle simple queries quickly, freeing up human reps to

provide thoughtful service on complex issues) rather than contradicting it (replacing all human contact).

Ensuring AI deployment aligns with organisational purpose and values – an idea from our internal capability

stack framework – is essentially a cultural alignment problem. If done right, AI can even reinforce a culture of

innovation or analytical decision-making; done poorly, it can create dissonance and distrust.

Dynamic adaptation at the organisational level has been formalised by David Teece in his Dynamic Capabilities

framework. Teece defines dynamic capability as a firm’s ability to “integrate, build, and reconfigure internal

and external competences to address rapidly changing environments”. This theory, originating in strategic

management, is particularly apt for the AI age, where technologies and markets change fast. Teece describes

dynamic capabilities in terms of three sets of activities: sensing (identifying opportunities and threats in the

environment), seizing (mobilising resources to capture opportunities through new products, processes, etc.), and

transforming (continuously renewing the organisation, shedding outdated assets and aligning activities). In the

context of AI, an example of sensing would be recognising early on how AI could change customer behaviour

or operations (for instance, a bank sensing that AI-enabled fintech apps are shifting consumer expectations).

Seizing would involve investing in AI development or acquisitions, piloting new AI-driven services, and scaling

the ones that work. Transforming would mean changing structures – perhaps creating a data science division,

retraining staff, redesigning workflows – to fully embrace AI across the enterprise. Teece’s core message is that

adaptive capacity itself is a strategic asset. We can relate this to behavioural science by noting that an

organisation’s capacity to change is rooted in human factors: learning mechanisms, leadership mindset, and

organisational culture (again Schein’s domain). For example, dynamic capabilities require an organisational

culture that encourages experimentation and tolerates failures as learning – essentially a growth mindset

organisation. Behavioural science research on learning organisations (e.g., work by Peter Senge or Amy

Edmondson on psychological safety) complements Teece’s macro-level view by explaining what human

behaviours and norms enable sensing, seizing, and transforming. Edmondson’s research on psychological safety

– the shared belief that it’s safe to take interpersonal risks – is crucial if employees are to speak up about new

tech opportunities or flag problems in implementations. Without it, an organisation may fail to sense changes

(because employees are silent) or to learn from mistakes (because failures are hidden), thus undermining

dynamic capability. Therefore, we see that frameworks like Teece’s implicitly depend on behavioural and

cultural underpinnings.


Technology, Work, and Society: Human–AI Collaboration and Ethics

No discussion of behavioural science in the age of AI would be complete without addressing the broader socio-

economic and ethical context. Erik Brynjolfsson and Andrew McAfee, in works like The Second Machine Age

(2014), examined how digital technologies including AI are reshaping economies, productivity, and

employment. They observed a troubling trend: productivity had grown without commensurate job or wage

growth, hinting at technology contributing to inequality or job polarisation. However, they argue that the

solution is not to halt technology but to reinvent our organisations and skill sets – essentially to race with the

machines. Brynjolfsson’s famous TED talk recounted how the best chess player in the world today is neither a grandmaster nor a supercomputer alone, but rather a team of human plus computer – in freestyle chess, a middling human player with a good machine and a strong collaborative process can beat even the top computers.

He concludes, “racing with the machine beats racing against the machine.” This vivid example underscores a

powerful concept: complementarity. Humans and AI have different strengths – humans excel at context,

common sense, ethical judgment, and novel situations; AI excels at brute-force computation, pattern recognition

in large data, and consistency. The best outcomes arise when each side of this partnership does what it does best

and they iterate together. This theme appears in many domains now. For instance, in medicine, some diagnostic

AI systems initially aimed to replace radiologists, but a more effective approach has been to let AI highlight

suspected anomalies and have radiologists make the final call, significantly improving accuracy and speed. In

customer service, AI chatbots handle routine FAQs, while human agents tackle complex or emotionally

sensitive cases, yielding better customer satisfaction. These human–AI team models are fundamentally about

organising work in ways that fit human behavioural strengths and limitations (as identified by behavioural

science) with machine strengths. Implementing such models requires careful attention to workflow design, user

experience, and trust. If the AI is too assertive or not transparent, the human may distrust it or disengage (there’s

evidence that some professionals will ignore algorithmic advice if they don’t understand or agree with it – a

phenomenon known as algorithm aversion). Conversely, if humans over-trust AI, they may become complacent

and skill atrophy can occur. Thus, a balance of trust – sometimes called calibrated trust – must be achieved,

which is an active research area in human factors and HCI (Human–Computer Interaction). Lee and See (2004)

suggested that trust in automation should be calibrated to the automation’s true capabilities; to do this, systems

might need to provide feedback on their confidence level, explanations, or have mechanisms for humans to

oversee and intervene.
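To make this concrete, a minimal Python sketch of a confidence-aware recommendation is shown below; the Recommendation class, the 0.9 threshold, and the wording are illustrative assumptions rather than a reference design.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """A hypothetical decision-support output: advice plus calibration cues."""
    action: str        # what the model suggests
    confidence: float  # the model's own probability estimate, 0.0-1.0
    rationale: str     # short explanation shown to the user

def present_to_user(rec: Recommendation, review_threshold: float = 0.9) -> str:
    """Route low-confidence advice to mandatory human review.

    Surfacing confidence and requiring oversight below a threshold is one
    simple way to keep trust calibrated to the system's actual capability.
    """
    if rec.confidence >= review_threshold:
        return f"SUGGESTED: {rec.action} ({rec.confidence:.0%} confident) - {rec.rationale}"
    return (f"REVIEW REQUIRED: {rec.action} ({rec.confidence:.0%} confident) - "
            f"{rec.rationale}. Please confirm or override.")

rec = Recommendation("Approve loan", 0.72, "income stable, but short credit history")
print(present_to_user(rec))  # below the threshold, so a human must make the call
```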

Trust and ethics are tightly intertwined. Luciano Floridi, a leading philosopher in digital ethics, has argued that

we must develop a “Good AI Society” where AI is aligned with human values and the principles of beneficence,

non-maleficence, autonomy, justice, and explicability. Floridi’s work with the AI4People initiative synthesised

numerous AI ethics guidelines into a unified framework. Two principles stand out for behavioural science

integration: autonomy (respecting human agency) and explicability (the ability to understand AI decisions).

From a behavioural perspective, respecting autonomy means AI should be a tool that empowers users, not an

opaque mandate that constrains them. Users are more likely to adopt and appropriately use AI if they feel in

control – for example, a decision support system that suggests options and allows a human to override with

justification tends to be better received than an automated system with no human input. Explicability is critical

for trust and for human learning; if an AI system can explain why it made a recommendation, a human can

decide whether that reasoning is sound and also learn from it (or catch errors). Floridi and colleagues even

propose “AI ethics by design”, meaning ethical considerations (like transparency, fairness, accountability)

should be built into the development process of AI, not slapped on later. For practitioners, this could involve

interdisciplinary teams (with ethicists or social scientists working alongside engineers), bias audits of

algorithms, and participatory design involving stakeholders who represent those affected by the AI’s decisions.

Another facet of ethics is inclusion and fairness. Behavioural sciences remind us how prevalent biases

(conscious and unconscious) are in human decisions; ironically, AI trained on historical human data can embed

and even amplify those biases if we’re not careful. There have been real cases: hiring algorithms that

discriminated against women (because they were trained on past hiring data skewed toward men), or criminal

risk scoring algorithms that were biased against minorities. Addressing this isn’t just a technical fix of the

algorithm; it requires understanding the social context (why the data is biased) and often a human judgement of

what fairness means in context (an ethical decision). Various definitions of fairness (e.g., demographic parity vs.

equalised odds) have to be weighed, which is as much a policy question as a math question. Here, governance

comes into play – organisations need governance mechanisms to oversee AI decision-making, much like human

decision processes are subject to oversight and compliance. Floridi’s emphasis on governance aligns with

emerging regulations (like the EU AI Act) that push for transparency, accountability, and human oversight of

AI. Behavioural science contributes to this conversation by highlighting questions such as: How do individuals react to algorithmic decisions? What organisational incentives might cause people to deploy AI in harmful ways (for example, a manager might be tempted to use an AI system to surveil employees in ways that erode trust)? And how can we create cultures of responsible AI use? Organisational behaviour research on ethical climates, tone

from the top, and decision biases (like the tendency to conform to perceived pressure) are all relevant when

instituting AI governance. A practical example is the creation of AI ethics committees or review boards within

organisations, which often include people with diverse backgrounds (legal, technical, HR, etc.) to review

sensitive AI deployments (e.g., systems affecting hiring or customer rights). These committees work best when

they consider not just compliance with regulations but also the psychological impact on those subject to the AI

decisions and on employees using the AI.
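As a rough illustration of how the fairness definitions mentioned above can pull in different directions, the sketch below computes per-group selection rates (the demographic-parity view) and true positive rates (one half of equalised odds) on invented data; real audits would use established tooling and far richer data.

```python
from collections import defaultdict

def selection_rates(records):
    """Share of positive decisions per group (the basis of demographic parity)."""
    pos, tot = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        tot[group] += 1
        pos[group] += y_pred
    return {g: pos[g] / tot[g] for g in tot}

def true_positive_rates(records):
    """True positive rate per group (one half of the equalised-odds criterion)."""
    tp, actual_pos = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        if y_true == 1:
            actual_pos[group] += 1
            tp[group] += y_pred
    return {g: tp[g] / actual_pos[g] for g in actual_pos}

# Toy data: (group, actual outcome, model decision)
data = [("A", 1, 1), ("A", 0, 1), ("A", 1, 1), ("A", 0, 0),
        ("B", 1, 0), ("B", 0, 0), ("B", 1, 1), ("B", 0, 0)]

print("Selection rates:", selection_rates(data))          # demographic-parity view
print("True positive rates:", true_positive_rates(data))  # equalised-odds view
```

On this toy data the two lenses already disagree about how unequal the model is, which is exactly why choosing between them is a policy judgement as much as a calculation.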

Finally, a macro societal perspective: behavioural sciences and AI are jointly shaping what it means to work and

live. Issues of human agency loom large. There is a risk that if we delegate too much decision-making to

algorithms, humans could experience a loss of agency or a “de-skilling” effect. On the flip side, AI can also

enhance human agency by providing people with better information and more options (for example, citizens

using AI-powered tools to understand their energy usage can make more informed choices, or disabled

individuals using AI assistants gain independence). This dual potential – to diminish or amplify agency – again

depends on design and context. A theme across behavioural literature is the importance of purpose and

meaningfulness for motivation. As AI takes over more routine work, what remains for humans should ideally be

the more purpose-rich tasks (creative, interpersonal, strategic). This calls for organisational vision: leaders need

to articulate how AI will free employees to focus on more meaningful aspects of their jobs rather than simply

framing it as a cost-cutting or efficiency drive. The theme of purpose is central to sustaining trust and morale.

Studies have shown that employees are more likely to embrace change (including tech adoption) when they

believe it aligns with a worthy mission or values, rather than just boosting the bottom line. Thus, infusing AI

strategy with a sense of higher purpose (e.g., “we are using AI to better serve our customers or to make

employees’ work lives better or to solve societal challenges”) is not just a PR move but a psychologically

important factor.

In summary, the literature suggests that an effective interplay of behavioural science and AI requires

recognising humans’ cognitive biases and strengths, designing socio-technical systems that leverage

complementarity, fostering organisational cultures that learn and adapt, and instituting ethical guardrails that

maintain trust, fairness, and human agency. With these foundations laid, we now turn to a conceptual framework

that synthesises these insights: the Composite Capability Paradigm and its accompanying capability stack for the

AI-era organisation.


Theoretical Framework: The Composite Capability Paradigm and Capability Stack

To navigate the age of AI, we propose an integrative framework termed the Composite Capability Paradigm,

rooted in the idea that organisational capabilities now arise from an orchestrated combination of human and

machine elements. This framework, developed internally as the 2025+ Capability Stack, posits that there are

distinct layers to building a resilient, adaptive, and ethical AI-era enterprise. By examining these layers in light

of broader academic perspectives, we illuminate how they resonate with and expand upon existing theory.


Orchestrating Human, Machine, and Interface Intelligence

At the heart of the composite capability paradigm is the recognition that capabilities are no longer confined to

“tidy boxes” of human-versus-technical functions. Instead, capability is seen as a dynamic interplay – “a

combined—and occasionally chaotic—dance of human intelligence, technical expertise, machine cognition, and

agile interfaces”. In other words, whenever an organisation delivers value (be it a product innovation, a

customer service interaction, or a strategic decision), it is increasingly the outcome of this fusion of

contributions: what humans know and decide, what machines calculate and recommend, and how the two

connect through interfaces. The paradigm likens this to a “jam session” in music, where different instruments

improvise together in real-time. Just as a jazz ensemble’s brilliance comes from the interaction among players

rather than any one instrument in isolation, an organisation’s performance now hinges on synergy – how

effectively people and AI tools can complement each other’s riffs and how flexibly they can adapt to change in

unison.


Let’s break down the components of this dance:

Human Intelligence: This encompasses the uniquely human attributes that AI currently cannot replicate or that

we choose not to delegate. These include empathy, ethical judgment, creativity, strategic insight, and contextual

understanding. For instance, humans can understand subtleties of interpersonal dynamics, exercise moral

discretion, and apply common sense in novel situations. In the capability stack model, human intelligence is

essential for providing purpose and a “moral compass” to technological endeavours. It aligns with what

behavioural scientists would call System 2 thinking (deliberative, reflective thought) as well as emotional and

social intelligence. Gary Klein’s experienced firefighter exercising gut intuition, or a manager sensing the

morale of their team, are examples of human intelligence in action. In AI integration, human intelligence sets

the goals and defines what “success” means – reflecting our values and objectives. This is why the Foundational

Layer of the capability stack is Purpose, Values, and Ethical Leadership, ensuring that the enterprise’s direction

is guided by human insight and integrity. A key insight from behavioural science is that people are not cogs;

they seek meaning in work and will support change if it resonates with their values. Therefore, having a clear

purpose (for example, “improve patient health” in a hospital setting or “connect the world” in a tech firm) and

ethical guidelines at the base of your AI strategy engages the workforce and garners trust. It also provides the

lens through which any AI initiative is evaluated (Does this AI use align with our values? Does it help our

stakeholders in a way we can be proud of?).

Technical Expertise: Traditionally, this meant the specialised knowledge of how to operate machinery,

engineering know-how, domain-specific analytical skills (e.g., financial modeling). In the modern paradigm,

technical expertise is evolving under the influence of AI. Experts must now collaborate with AI and

continuously update their knowledge as AI tools change their fields. For example, a supply chain expert still

needs logistics knowledge, but they also need to understand how to interpret outputs from an AI demand

forecasting system, and perhaps even how to improve it. The capability stack envisions that technical expertise

“harmonises with predictive models”, meaning human experts and AI models work in tandem. This resonates

with socio-technical theory: rather than AI replacing experts, the nature of expertise shifts. A doctor with AI

diagnostics is still a doctor – but one augmented with new data patterns (e.g., AI image analysis) and thus able

to make more informed decisions. A data-savvy culture is part of technical expertise too: widespread digital

fluency (not just a few data scientists sequestered in IT) is needed so that throughout the organisation people

understand AI’s capabilities and limits. This democratisation of technical competence is facilitated by trends

like low-code or no-code AI tools, which allow non-programmers to leverage AI – effectively broadening who

can contribute technical know-how. In sum, technical expertise in the composite capability paradigm is about

humans mastering their domain plus mastering how AI applies to that domain.

Machine Cognition: This refers to the AI systems themselves – the algorithms, models, and computational

power that constitute the machine’s “intelligence.” From a capability standpoint, machine cognition brings

speed, precision, and scale to problem-solving. It includes everything from simple process automation bots to

sophisticated machine learning models and generative AI (like GPT-4). Machine cognition can detect patterns

invisible to humans (e.g., subtle correlations in big data), work tirelessly 24/7, and execute decisions or

calculations in milliseconds. However, machine cognition has its own limitations: lack of genuine

understanding, potential for errors or biases based on training data, and inability to account for values or context

unless explicitly programmed. This is why the paradigm stresses the interplay – machine cognition is powerful,

but it requires the other elements (human oversight, proper interface) to be truly effective and safe. In the

capability stack, machine cognition sits in the core layer as part of the fusion, not at the top or bottom,

symbolising that AI is integrated into the fabric of how work is done, guided by purpose from above and

controlled/governed by structures around it. The behavioural science angle on machine cognition is mainly

about human interpretation: how do humans perceive and react to AI outputs? Research on decision support

systems finds that factors like the AI’s explainability, transparency of confidence levels, and consistency affect

whether humans will accept its advice. Thus, a machine might be extremely “intelligent” in a narrow sense, but

if humans don’t trust or understand it, its capability doesn’t translate into organisational performance. In

designing composite capabilities, organisations are learning to invest not just in algorithms, but in features that

make those algorithms usable and reliable in human workflows (for example, an AI-generated insight might be

accompanied by a natural-language explanation or a visualisation for the human decision-maker).

Agile Interfaces: Perhaps the most novel element is the idea of agile interfaces as the “conductor” of the

human-machine symphony. Interfaces include the user experience design of software, the dashboards, the

collaboration tools, or even the organisational processes that mediate human-AI interaction. The paradigm notes

that “agile interfaces are the critical conduits for effective human-AI interaction”, enabling translation of AI’s

raw power into forms humans can act on. Examples range from a well-designed alert system in a cockpit that

draws a pilot’s attention at the right time, to a chatbot interface that a customer finds intuitive and helpful, to an

augmented reality tool that guides a factory worker in performing a task with AI assistance. These interfaces

need to be agile in the sense of flexible, user-centered, and evolving. We now recognise new skills like prompt

engineering (formulating questions or commands to get the best results from AI models) and data storytelling

(translating data analysis into compelling narratives) as part of this interface layer. If human intelligence sets

goals and machine cognition generates options, the interface is what makes sure the two can “talk” to each other

effectively. From a behavioural perspective, interface design draws on cognitive psychology (how to present

information in ways that align with human attention and memory limits), on social psychology (how to

engender trust – for instance, by giving the AI a relatable persona in a customer service chatbot), and on

behavioural economics nudges (how the choice architecture can influence safer or more productive behaviours).

A trivial example: a decision portal might default to a recommended option but allow override, thus nudging

users toward the statistically superior choice while preserving agency – this is an interface-level nudge that can

lead to better outcomes without coercion.
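A minimal sketch of that default-with-override pattern, with invented option names, might look as follows.

```python
def choose_option(options, recommended, user_choice=None):
    """Default to the recommended option but always honour an explicit override.

    This mirrors the interface-level nudge described above: the statistically
    better option is pre-selected, yet the human keeps full agency.
    """
    if user_choice is not None:
        if user_choice not in options:
            raise ValueError(f"Unknown option: {user_choice}")
        return user_choice, "human override"
    return recommended, "accepted default"

options = ["route_a", "route_b", "route_c"]
print(choose_option(options, recommended="route_b"))                          # ('route_b', 'accepted default')
print(choose_option(options, recommended="route_b", user_choice="route_c"))   # ('route_c', 'human override')
```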

The Composite Capability Core (Layer 2 of the stack) is essentially the synergy of these human, machine, and

interface components. It is where, to quote the internal framework, “pattern fusion” occurs – “the seamless

integration of human sense, domain depth, machine precision, and systemic perspective”. Pattern fusion implies

that when humans and AI work together, they can solve problems neither could alone, by combining strengths:

human sense (intuition, ethics, meaning) + deep domain expertise + AI’s precision + a systemic or holistic view

of context. Notably, the inclusion of systemic perspective reflects the need to consider the whole environment –

a nod to systems thinking (as per Emery & Trist’s focus on interdependencies). In practice, pattern fusion might

manifest as follows: imagine an urban planning scenario where deciding traffic policy needs data on vehicles

(AI can optimise flows), understanding of human behaviour (people’s commuting habits, which a behavioural

expert can provide), political acceptability (requiring empathy and negotiation by leaders), and tools to simulate

scenarios (an interface for experiments). A fused approach could create a solution that optimises traffic without,

say, causing public backlash – something a purely AI optimisation might miss or a purely human intuition might

get wrong. The framework argues that such fusion leads to “wiser, kinder, better decisions” – interestingly

attributing not just smartness (wiser) but kindness (reflecting values) to the outcome, and also calls out

interpretability as a benefit (humans and AI together can make the solution more explainable).


Layers of the 2025+ Capability Stack

Surrounding this fusion core are two other layers in the stack model: the Foundational Layer and the Finishing

Layer. These roughly correspond to inputs that set the stage (foundation) and oversight/outcomes that ensure

sustainability (finishing).

Foundational Layer: Purpose, Values, and Ethical Leadership. This bottom layer is the base upon which

everything rests. It includes the organisation’s purpose (mission), its core values, and the tone set by leaders in

terms of ethics and vision. In essence, it is about why the organisation exists and what it stands for. Grounding

an AI-enabled enterprise in a strong foundation of purpose and values serves several roles. First, it guides

strategy: AI investments and projects should align with the mission (for example, a healthcare company whose

purpose is patient care should evaluate AI not just on cost savings but on whether it improves patient outcomes,

consistent with their purpose). Second, it provides a moral and ethical compass: decisions about AI usage (such

as how to use patient data, or whether to deploy facial recognition in a product) can be filtered through the lens

of values like integrity, transparency, and respect for individuals. This is effectively what Floridi et al. advocate

– embedding principles so that ethical considerations are front and center. Third, a clear purpose and ethical

stance help in trust-building with both employees and external stakeholders. Employees are more likely to trust

and engage with AI systems if they see that leadership is mindful of ethical implications and that the systems

uphold the company’s values (for instance, an AI decision tool that is demonstrably fair and used in a values-

consistent way will face less internal resistance). Externally, customers and partners today scrutinise how

companies use AI – a strong foundational layer means the company can articulate why its use of AI is

responsible and beneficial. Behavioural science here intersects with leadership studies: transformational

leadership research shows that leaders who inspire with purpose and act with integrity foster more innovation

and buy-in from their teams. Therefore, having Ethical AI governance as a leadership imperative is part of this

layer – boards and executives must champion and monitor the ethical deployment of AI, making it a core part of

corporate governance (indeed, the internal report suggests boards treat AI governance as a “fiduciary duty”). In

practice, this could mean regular board-level reviews of AI projects, training leaders about AI ethics, and

including ethical impact in project KPIs.

Composite Capability (Fusion) Core: Human–AI Fusion and Interfaces. We discussed this above – it’s the

middle layer where the action happens. It is dynamic and process-oriented, concerned with how work gets done

through human-AI teaming. In the stack model, this is depicted as the engine of innovation and decision-

making. It includes elements like the use of multimodal AI (combining text, image, voice data) and ensuring

Explainable AI (XAI) for transparency, as well as emerging methodologies like Human-in-the-Loop (HITL)

which keeps a human role in critical AI processes. All these features align with the idea of making the human-

machine collaboration effective and trustworthy.

Finishing Layer: Wellbeing, Inclusion, and Governance. The top layer of the capability stack is termed the

“Finishing Layer (The Frosting)”, emphasising the need for a stable and positive environment in which the other

capabilities function. It includes employee wellbeing, inclusion and diversity, and robust governance structures

(particularly AI governance around data, privacy, and ethics). While called “finishing,” it is not an afterthought

– it’s what ensures the whole cake holds together and is palatable. Wellbeing is crucial because a highly capable

organisation could still fail if its people are burned out, disengaged, or fearful. Behavioural science highlights

that change (like digital transformation) can be stressful, and prolonged stress undermines performance,

creativity, and retention. Thus, efforts to maintain reasonable workloads, provide support for employees

adapting to new roles alongside AI, and focus on ergonomic job design (so that AI doesn’t, say, force people

into hyper-monitoring or repetitive check work that hurts satisfaction) are part of sustaining capabilities.

Inclusion in this context has multiple facets: ensuring a diverse workforce (so that the people working with and

designing AI have varied perspectives, which can reduce blind spots and biases), and ensuring that AI systems

themselves are inclusive (accessible to people with different abilities, not biased against any group of users). A

practical example is providing training opportunities to all levels of employees so that digital literacy is

widespread, preventing a digital divide within the company where only an elite handle AI and others are

marginalised. Inclusion also refers to bringing employees into the conversation about AI deployment

(participatory change management), which increases acceptance – people support what they help create, as

classic OD teaches. Robust Governance ties to ethical AI and regulatory compliance. It’s about structures and

policies that maintain oversight of AI. For instance, data privacy committees to vet use of personal data

(anticipating regulations like GDPR or the new AI laws mentioned in the internal report), or AI model

validation processes to ensure models are fair and robust before they are put into production. Essentially, the

finishing layer provides checks and balances and ensures sustainability. It resonates with concepts like corporate

social responsibility and stakeholder theory: the organisation monitors the impact of its capabilities on all

stakeholders (employees, customers, society) and corrects course when needed. In behavioural terms, having

strong governance and an inclusive, healthy environment feeds back into trust – employees who see that

leadership cares about these issues will be more engaged and proactive in using AI responsibly themselves.

Conversely, if this layer is weak, one might get initial performance gains from AI but then face issues like

ethical scandals (which can destroy trust and brand value) or employee pushback and turnover.

In sum, the Composite Capability Paradigm anchored by the 2025+ Capability Stack is a strategic schema that

marries behavioural and technical elements. It mirrors many principles found in academic literature: it has the

human-centric values focus (aligning with Schein’s cultural emphasis and Floridi’s ethics), it leverages human-

machine complementarities (echoing Brynjolfsson’s augmentation strategy and socio-technical systems theory),

it invests in learning and adaptation (reflecting Teece’s dynamic capabilities and Argyris’s organisational

learning concepts), and it institutionalises trust and wellbeing (drawing on behavioural insights about motivation

and ethical conduct). By framing these as layers, it provides leaders a mental model: Start with purpose and

values, build the human+AI engine on that foundation, and secure it with governance and care for people.

One can see how this addresses the challenges noted earlier in our literature review. For example, consider trust.

The foundation of ethical leadership sets a tone of responsible AI use; the fusion core includes explainability

and human oversight, which directly fosters trust; the finishing layer’s governance monitors and enforces

trustworthy practices. Or consider adaptive decision-making. The fusion core is all about agility – humans and

AI adjusting in real time (the “jam session”), and the dynamic capabilities thinking is baked into the need for

orchestration and continuous upskilling mentioned in the paradigm. The finishing layer’s focus on learning (e.g.,

psychological safety as part of wellbeing, inclusion of diverse voices) enables adaptation too. Human agency is

reinforced by the foundation (purpose gives meaningful direction; ethical leadership ensures humans remain in

charge of values) and by design choices in the core (HITL, interfaces that allow human override). Digital

fluency is specifically called out as something to be fostered (“universal AI fluency”), meaning training and

comfort with AI at all levels – that’s both a skill and a cultural aspect.

To illustrate how this framework plays out, here are some real-world vignettes:

• In Customer Service, customer empathy is augmented by AI doing sentiment analysis in real time, allowing human agents to tailor their responses – a perfect example of composite capability (machine gauges tone, human shows empathy, interface feeds the insight live).

• In Operations, Lean principles are turbocharged by AI that predicts machine failures from sensor data and video, improving efficiency.

• In Product Design, AI can suggest creative variations (say, generating design mockups) which designers then refine – AI amplifying human creativity.

• In Strategic Foresight, AI (like GPT-based scenario simulators) helps leaders envision various future scenarios (e.g., climate futures) so they can better plan, combining data-driven simulation with human judgment and values to choose a path.

All these examples follow the pattern of human + AI synergy aligned to purpose. The composite capability

paradigm thus serves as a bridge between theory and practice: it gives a language and structure to ensure that

when we implement AI, we do so in a way that is holistic – considering technology, people, and process

together – and principled – guided by purpose and ethics.

Next, we move from concept to concrete practice: what should leaders and organisations actually do to realise

these ideas? In the following section, we discuss how to integrate behavioural science insights with AI

initiatives on the ground, through targeted strategies around purpose, trust, skills, decision processes, and

governance.


From Theory to Practice: Integrating Behavioural Science and AI in Organisations

Implementing the vision of human-centric, behaviourally informed AI integration requires action on multiple

fronts. In this section, we outline practical approaches and examples across key themes – purpose and culture,

trust and human–AI teaming, digital fluency and skills, adaptive decision-making, and governance and ethics –

highlighting how organisations in various sectors are putting principle into practice.


Cultivating Purpose-Driven, Human-Centered Culture in the AI Era

A clear sense of purpose and strong organisational culture are not “soft” niceties; they are strategic assets in

times of technological upheaval. As discussed, purpose forms the foundation that guides AI adoption.

Practically, this means organisations should start AI initiatives by asking: How does this technology help us

fulfill our mission and serve our stakeholders? By framing projects in these terms, leaders can more easily

secure buy-in. For example, a public sector agency implementing AI to speed up service delivery might

articulate the purpose as improving citizen experience and fairness in accessing public services, resonating with

the agency’s public service mission. This was effectively demonstrated by the UK Behavioural Insights Team

(BIT), which applied behavioural science to public policy: they would define the purpose of interventions (e.g.,

increasing tax compliance to fund public goods) and design nudges accordingly. Their success – like

simplifying tax reminder letters to encourage on-time payments – came from aligning interventions with a clear

public purpose and an understanding of human behaviour. Organisations can analogously use AI as a tool to

advance purposeful goals (such as targeting healthcare resources to the neediest populations, or customising

education to each learner’s needs), and communicate that clearly to employees.

Communication is a vital part of culture. Change management research emphasises over-communicating the

“why” in transformations. Leaders should consistently connect AI projects to core values. For instance, if

innovation is a value, an AI project might be touted as enabling employees to experiment and create new

solutions faster. If customer centricity is a value, management can stress how AI will help staff respond to

customer needs more promptly or personalise services – thus framing AI not as a threat, but as a means to better

live out the company’s values. Satya Nadella of Microsoft provides a real-world example: under his leadership,

Microsoft’s culture shifted to a “learn-it-all” (growth mindset) culture, encouraging experimentation. When

incorporating AI (like Azure AI services or GitHub’s Copilot), Nadella consistently frames it as empowering

developers and organisations – aligning with Microsoft’s mission “to empower every person and every

organisation on the planet to achieve more.” This kind of narrative helps employees see AI as supportive of a

shared purpose, not a top-down imposition of technology for its own sake.

In practical terms, organisations can embed purpose and human-centric principles into AI project charters and

evaluation criteria. Some companies have introduced an “ethical impact assessment” or purpose-impact

assessment at the start of AI projects. This involves multidisciplinary teams (including HR, legal, user

representatives) reviewing proposals by asking questions: Does this AI use align with our values? Who could be

adversely affected and how do we mitigate that? Will this improve the employee or customer experience

meaningfully? By institutionalising such reflection, the project is shaped from the outset to be human-centric.

This practice aligns with CIPD’s call for HR to ensure interventions “are in sync with how people are ‘wired’

and don’t inadvertently encourage undesirable behaviour” – essentially a reminder to align any new tools with

positive behaviours and outcomes.

Another concrete practice is storytelling and exemplars: sharing stories internally where AI helped a person do

something better or live the company values. For example, an insurance company might circulate a story of how

an AI risk model helped a risk officer identify a struggling customer and proactively offer help – highlighting

empathy enabled by tech. These stories reinforce a culture where AI is seen as enabling employees to achieve

the organisation’s human-centered goals.


Building Trust and Effective Human–AI Teams

Trust is the cornerstone of any successful human–AI partnership. Without trust, employees may resist using AI

systems or use them improperly, and customers may reject AI-mediated services. Building trust requires both

technical measures (like reliability and transparency of AI) and social measures (like training and change

management to build confidence and understanding).

On the technical side, organisations should prioritise Explainable AI (XAI) in applications where users need to

understand or validate AI decisions. For instance, a fintech company deploying an AI credit scoring tool might

implement an interface that not only gives a score but also highlights key factors contributing to that score (debt

ratio high, short credit history, etc.) in plain language. This allows loan officers to trust the system and explain

decisions to customers, aligning with the principle of explicability. Many high-performing firms now treat

explainability as a requirement, not a luxury, for any AI that interacts with human decision-makers. This stems

from a behavioural understanding: people trust what they understand.
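A simple sketch of that kind of plain-language, reason-code explanation is shown below; the feature names, point weights, and wording are invented for illustration and are not drawn from any real scoring model.

```python
def explain_score(score, contributions, top_n=2):
    """Turn per-feature contributions into a short, human-readable explanation.

    `contributions` maps feature names to their signed impact on the score;
    the most negative factors are surfaced as the main reasons.
    """
    reasons = sorted(contributions.items(), key=lambda kv: kv[1])[:top_n]
    reason_text = "; ".join(f"{name} ({impact:+.0f} points)" for name, impact in reasons)
    return f"Credit score {score}. Main factors holding the score down: {reason_text}."

contributions = {"debt-to-income ratio": -42, "length of credit history": -18,
                 "on-time payment record": +35}
print(explain_score(612, contributions))
```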

In addition to transparency, performance consistency of AI fosters trust. Users need to see that the AI is right

most of the time (or adds value) in order to rely on it. To that end, phased rollouts where AI recommendations

are first provided in parallel with human decisions (allowing humans to compare and give feedback) can

calibrate trust. A hospital, for example, might introduce an AI diagnostic tool by initially running it “silently” –

doctors see its suggestion but still make decisions independently; over time, as they see that the AI often catches

things they might miss or confirms their hunches, their trust grows. This staged approach was recommended by

some naturalistic decision-making experts to avoid abrupt shifts that could trigger algorithm aversion.
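One way to run such a "silent" phase is to log the AI's suggestion next to the clinician's independent decision and review agreement over time; the sketch below assumes a simple list of paired decisions and is not tied to any particular system.

```python
def shadow_mode_report(paired_decisions):
    """Summarise agreement between silent AI suggestions and human decisions.

    Each item is (ai_suggestion, human_decision); high agreement observed over
    time is the kind of evidence that lets trust in the tool grow gradually.
    """
    total = len(paired_decisions)
    agree = sum(1 for ai, human in paired_decisions if ai == human)
    disagreements = [(ai, human) for ai, human in paired_decisions if ai != human]
    return {"cases": total, "agreement_rate": agree / total, "disagreements": disagreements}

log = [("flag", "flag"), ("clear", "clear"), ("flag", "clear"), ("flag", "flag")]
print(shadow_mode_report(log))  # 75% agreement, one case to discuss at the next review
```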

Training is critical: digital literacy and AI fluency training doesn’t only teach how to use the tool, but also

covers the tool’s limitations and the importance of human judgement. For instance, pilots train on autopilot

systems extensively to know when to rely on them and when to disengage – by analogy, a financial analyst

might be trained on an AI forecasting tool to know scenarios where it’s likely to err (perhaps during market

disruptions) so they can be extra vigilant. This idea of appropriate reliance comes straight from behavioural

research on automation (Parasuraman et al., 1997) which showed that people often either under-trust (ignore

useful automation) or over-trust (get complacent). The goal is calibrated trust.

From a social perspective, involving end-users in the design and testing of AI solutions fosters trust. If a new AI

tool is coming to an employee’s workflow, having some of those employees participate in its pilot, give

feedback, and witness improvements based on their input can turn them into change champions who trust the

end product. This participatory approach also surfaces usability issues that, if left unaddressed, could erode trust

later. It mirrors the behavioural principle that people fear what they don’t understand; involvement demystifies

the AI.

Organisational roles may also need to evolve to optimise human–AI teaming. Some companies are creating

roles like “AI liaison” or “human-AI team facilitator” – individuals who understand both the tech and the work

domain and can mediate between data science teams and frontline staff. These facilitators might observe how

employees interact with AI tools, gather suggestions, and continuously improve the human-AI interface. This is

analogous to having a user experience (UX) expert, but specifically focusing on the collaboration between

human and AI. For example, in a call center that introduced an AI that listens to calls and suggests responses (a

real technology in use), a facilitator monitored calls to see if the suggestions were helpful or if they annoyed the

agents, then tweaked the system or trained the agents accordingly (maybe the AI needed to wait a few seconds

more before popping up suggestions, to not interrupt the agent’s own thought process). Such adjustments make

the partnership smoother and bolster trust in the AI as a helpful colleague rather than an intrusive overseer.

Team norms can also be established for human–AI interaction. If decisions are being made with AI input, teams

can adopt norms like: Always double-check critical decisions with another human or source if the AI gives low

confidence, or Use the AI’s recommendation as a starting point but consider at least one alternative before

finalising (to avoid lock-in). These are akin to pilot checklists or medical second-opinion norms, and they

acknowledge that while AI is a team member, human members are ultimately accountable. By formalising such

practices, organisations signal that AI is a tool, not a replacement for human responsibility. This can alleviate

anxiety (employees know they’re not expected to blindly follow AI) and encourage learning (comparing AI and

human conclusions can be instructive).

A case in point for trust and teaming comes from the military domain, where “centaur” teams (a term borrowed

from chess human–AI teams) are being explored. Fighter pilots work with AI assistants that might fly wingman UAVs or manage defensive systems. The military has found that trust is built through rigorous testing

in exercises and the ability of pilots to easily take control from the AI when needed – reflecting the principle of

keeping humans in the loop for lethal decisions. In business, the stakes are usually lower, but the same concept

of giving humans an “eject button” or override – and making that override as easy as pressing a button – provides a safety net that ironically makes users more open to letting the AI handle things up to that point. It’s analogous to having

brakes when using cruise control.

Finally, an often overlooked element: celebrating successes of human–AI collaboration. When an AI-assisted

effort leads to a win (say, an AI+human sales team exceeds their targets or an AI-driven quality control catches

a defect that human inspectors missed, avoiding a costly recall), leaders should acknowledge both the human

and the AI contribution. This sends a message that using the AI is praiseworthy teamwork, not something that

diminishes human credit. If employees fear that AI will steal the credit or make their role invisible, they’ll resist

it. Recognising augmented achievements in performance reviews or team meetings helps normalise AI as part of

the team.


Developing Digital Fluency and Adaptive Skills

One of the most tangible ways to integrate behavioural science with AI strategy is through learning and

development (L&D) initiatives. The half-life of skills is shrinking; dynamic capability at the organisational level

rests on continually upskilling and reskilling the workforce (sensing and seizing opportunities, in Teece’s

terms). Behavioural science-informed L&D focuses not just on knowledge transmission, but on motivation,

reinforcement, and practical application.

A key capability for 2025 and beyond is digital fluency – the ability for employees to comfortably understand,

interact with, and leverage AI and data in their roles. Companies leading in AI adoption often launch company-

wide digital academies or AI training programs. For example, AT&T and Amazon have large-scale reskilling

programs to train workers in data analysis and machine learning basics, offering internal certifications. The

behavioural insight here is to reduce learning anxiety: make learning resources abundant, accessible (online,

self-paced), and rewarding (through badges, recognition, or linking to career advancement). By building a

culture where continuous learning is expected and supported (and not punitive if one is initially unskilled),

employees are more likely to engage rather than fear the new technology. This also ties to Carol Dweck’s

growth mindset concept – praising effort and learning rather than static ability – which many organisations now

incorporate into their competency models.

Another tactic is experiential learning through pilot projects or innovation labs. Instead of classroom training

alone, employees learn by doing in sandbox environments. For instance, a bank might set up a “bot lab” where

any employee can come for a day to automate a simple task with a robotic process automation (RPA) tool, with

coaches on hand to assist. This hands-on experience demystifies AI (or automation) and builds confidence.

Behaviourally, adults learn best when solving real problems that matter to them (a principle from adult learning

theory). So if an employee can automate a tedious part of their job through an AI tool, they directly see the

benefit and are likely to be more enthusiastic about AI adoption.

Mentoring and peer learning also accelerate digital fluency. Some firms have implemented a “reverse

mentoring” system where younger employees or tech-savvy staff mentor senior managers on digital topics

(while in turn learning domain knowledge from those seniors). This not only transfers skills but breaks down

hierarchical barriers to learning – a major cultural shift in some traditional organisations. It leverages social

learning: people often emulate colleagues they respect, so having influential figures vocally learning and using

AI can create a bandwagon effect.

A concept gaining traction is the creation of fusion teams (also called citizen developer teams), which pair

subject-matter experts with data scientists or IT developers to co-create AI solutions. For example, in a

manufacturing firm, a veteran production manager teams up with a data scientist to develop a machine learning

model for predictive maintenance. The production manager learns some data science basics in the process

(digital upskilling) and the data scientist learns the operational context (domain upskilling). This cross-

pollination means the resulting solution is more likely to be adopted (since it fits the work context) and the

participants become champions and trainers for others. It’s an application of Vygotsky’s zone of proximal

development in a way – each learns from someone a bit ahead of them in another dimension, scaffolded by

collaboration.

Adaptive decision-making skills are also crucial. Employees need training not just in using specific tools, but in

higher-level skills like interpreting data, running experiments, and making decisions under uncertainty –

essentially, decision science literacy. Some organisations train their staff in basic statistics and hypothesis

testing so they can better design A/B tests or understand AI output (which often comes with probabilities or

confidence intervals). This is informed by the behavioural notion that people are prone to misinterpreting

probabilistic data (e.g., confusion between correlation and causation, or biases like overconfidence). By

educating the workforce on these pitfalls (perhaps using engaging examples, like common fallacies), companies

improve the collective ability to make sound decisions with AI.

Continuous feedback loops are another practice: dynamic capabilities demand quick learning cycles. Companies

can implement frequent retrospectives or after-action reviews when AI is used in projects. For instance, after a

marketing campaign guided by an AI analytics tool, the team can review what the AI suggested, what they did,

and what the outcome was, extracting lessons (did we trust it too much, did we under-utilise it, did we encounter

surprising customer reactions?). These insights then feed into refining either the AI model or the human

strategies. Such reflective practices are advocated in agile methodologies and are rooted in Kolb’s experiential

learning cycle (concrete experience → reflective observation → abstract conceptualisation → active

experimentation). Over time, they build an organisational habit of learning from both success and failure, key to

adaptation.

It’s also worth noting the leadership skill shifts needed. Leadership development programs are incorporating

training on leading hybrid human–AI teams, asking the right questions about AI (since leaders might not be the

technical experts, they need the fluency to challenge and query AI outputs – e.g., “What data was this model

trained on? How confident should we be in this prediction?”). Leading by example matters too: if managers regularly use data and AI insights in their decisions and explain how they balanced that with experience, employees pick up on that decision-making approach.

A concrete example of adaptive skill-building can be drawn from healthcare: during the COVID-19 pandemic,

hospitals had to adapt quickly to new data (like predictive models of patient influx). Some hospitals created ad

hoc data teams and trained clinicians to read epidemiological models – a crash course in data literacy under

pressure. Those who managed to integrate the predictions with frontline insights navigated capacity issues

better. This underscores that when the environment changes rapidly (turbulent environment in Emery & Trist’s

term), organisations benefit from having invested in general adaptability skills beforehand.


Enhancing Decision-Making and Innovation through Human–AI Collaboration

Organisations can leverage AI and behavioural insights together to drive better decisions and innovation on an

ongoing basis. One method is establishing a culture and processes of evidence-based decision-making. The idea,

championed by movements like evidence-based management and supported by CIPD research, is to encourage

decisions based on data, experiments, and scientific findings rather than just intuition or tradition. AI naturally

provides more data and analytical power, but behavioural science reminds us that simply having data doesn’t

ensure it’s used wisely – cognitive biases or political factors can still lead to suboptimal choices.

To address this, some organisations have set up “decision hubs” or analytics centers of excellence that both

churn out insights and coach decision-makers on how to interpret them. A bank, for instance, might require that

any proposal for a new product comes with an A/B test plan and data analysis – essentially building a decision

process that forces a more scientific approach. Product teams at tech companies routinely do this: before rolling

out a feature, they run experiments and the go/no-go is based on statistically significant results, not just the

HIPPO (highest paid person’s opinion). This discipline is part technical (knowing how to run tests) and part

behavioural (committing to act on what the data says, which can be hard if it contradicts one’s intuition).

Leaders play a role here by reinforcing that changing course in light of data is a strength, not a weakness. Jeff

Bezos called this “being stubborn on vision, flexible on details” – hold onto your core purpose but be willing to

change tactics when evidence suggests a better way.
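For teams formalising this go/no-go discipline, the statistical check can be as simple as a two-proportion z-test on conversion counts, sketched below with made-up numbers; a real rollout decision would of course weigh more than a single p-value.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates between two variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided
    return z, p_value

z, p = two_proportion_z_test(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
decision = "go" if p < 0.05 else "no-go"
print(f"z = {z:.2f}, p = {p:.3f} -> {decision}")
```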

Adaptive governance structures, like rapid steering committees or innovation task forces, can empower faster

decision loops. For example, during a crisis or fast-moving market change, a company might assemble a cross-

functional team that meets daily to review AI-generated forecasts and frontline reports, then make quick

decisions (similar to a military OODA loop: observe, orient, decide, act). This was observed in some

companies’ COVID responses – they effectively set up a nerve center mixing data (sometimes AI models

predicting scenarios) with human judgment to navigate uncertainty. The behavioural key is that these teams had

the mandate to act and adjust, avoiding the paralysis that can come from either fear of uncertainty or

bureaucratic slowness. They embraced adaptive decision-making, making small reversible decisions quickly

rather than waiting for perfect information.

In terms of innovation, AI can generate ideas (like design suggestions or optimisations) but human creativity

and insight are needed to choose and implement the best ideas. Companies are thus exploring human–AI co-

creation processes. One practical approach is ideation sessions with AI: for instance, marketers might use GPT-4

to produce 50 variations of an ad copy, then use their creative judgment to refine the best ones. In engineering,

generative design algorithms propose thousands of component designs, and engineers use their expertise to pick

one that best balances performance and feasibility. This speeds up the trial-and-error phase of innovation

dramatically, allowing humans to consider far more possibilities than they could alone. But it also requires a

mindset shift: designers and experts must be open to letting AI contribute and not feel that it diminishes their

role. To facilitate this, some organisations frame AI as a creative partner or brainstorming assistant. They

encourage teams to treat AI suggestions not as final answers, but as provocations or starting points. This reduces

the psychological defensiveness (“a robot is doing my job”) and instead fosters curiosity (“let’s see what it

comes up with, maybe it will spark something”). Pixar, for example, has experimented with AI for generating

plot ideas or character visuals – not to replace writers or artists, but to help break through creative blocks or

explore alternatives. They report that artists actually enjoyed riffing off AI outputs once they felt it was their

choice what to use or discard.
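In the same spirit as the ad-copy example above, a minimal "AI proposes, human disposes" workflow might look like the sketch below; generate_variation is a stand-in for whatever generative model the team actually uses, and the tone labels are invented.

```python
def generate_variation(brief, seed):
    """Stand-in for a call to whatever generative model the team actually uses."""
    tones = ["playful", "reassuring", "urgent", "matter-of-fact"]
    return f"[{tones[seed % len(tones)]}] {brief} - draft #{seed}"

def co_create(brief, n_drafts=8, human_shortlist=None):
    """AI proposes many drafts; only the human's shortlist moves forward."""
    drafts = [generate_variation(brief, i) for i in range(n_drafts)]
    if human_shortlist is None:
        return drafts                            # step 1: human reviews everything
    return [drafts[i] for i in human_shortlist]  # step 2: keep only what the human chose

all_drafts = co_create("Switch to paperless billing")                    # AI provocations
keepers = co_create("Switch to paperless billing", human_shortlist=[1, 6])
print(keepers)  # the human's creative judgment decides what survives
```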

Bias mitigation in decisions is another area where behavioural science and AI together can help. AI can be used

to debias human decisions – for instance, in hiring, structured algorithmic screening can counteract individual

manager biases (though one must also ensure the AI itself is fair). Meanwhile, behavioural tactics like blinding

certain info or using checklists can be applied to AI outputs; e.g., if an AI produces a recommendation, a

checklist for managers might ask “What assumptions is this recommendation based on? Have we considered an

opposite scenario?” which forces a consideration of potential bias or error. The combination ensures neither

human nor AI biases dominate unchecked. The “premortem” technique by Gary Klein (imagine a future failure

and ask why it happened) can be used on AI-driven plans to uncover hidden issues. Some AI development teams

now do bias impact assessments as part of model development (a practice encouraged by IBM, Google etc.),

essentially bringing a social science lens into the tech development.


Strengthening Governance, Ethics, and Trustworthy AI Practices

Governance provides the scaffolding that holds all the above initiatives accountable and aligned. It’s the

embodiment of the “robust governance” and “ethical AI” focus in the capability stack’s top layer. Several

concrete governance measures are emerging as best practices:

AI Ethics Boards or Committees: Many organisations (Google, Facebook, Microsoft, to name tech giants, but

also banks, healthcare systems, universities, and governments) have convened advisory boards or internal

committees to review AI projects. The composition is typically cross-functional – legal, compliance, technical,

HR, and often external independent experts or stakeholder representatives. Their role is to examine proposed

high-impact AI uses for ethical risks, alignment with values, and compliance with regulations. For example, a

global bank’s AI ethics committee might review a new algorithmic lending platform to ensure it doesn’t

discriminate and that it has an appeal process for customers – effectively implementing principles of fairness

and accountability. These boards are a direct response to both ethical imperatives and looming regulations (like

the EU AI Act’s requirements for high-risk AI systems). They institutionalise the “slow thinking” System 2

oversight to balance the fast-moving deployment of AI. Behavioural science supports this by recognising that

individual developers or product owners may have conflicts of interest or cognitive blind spots – a formal

review by a diverse group brings more perspectives (avoiding groupthink and the bias of tunnel vision) and

creates a checkpoint for reflection (mitigating the rush that can lead to ethical lapses).

Policies and Principles: Organisations often publish AI principles (e.g., “Our AI will be fair, accountable,

transparent, and explainable” – similar to Floridi’s five principles) and then derive concrete policies from them.

A policy might dictate, for example, that sensitive decisions (hiring, firing, credit denial, medical diagnosis) will

not be made solely by AI – there must be human review (human-in-the-loop), which echoes one of the EU draft

AI regulations as well. Another might require that any customer-facing AI makes clear to the user that it is an AI

(so people aren’t duped into thinking a chatbot is a human, respecting autonomy). These policies are essentially

commitment devices at the organisational level – they set default behaviours that align with ethical intentions,

making it easier for employees to do the right thing and harder to do the wrong thing. They also serve to build

public trust, since companies can be held to their promises.
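Such a policy can even be enforced as a gate in the decision pipeline; the sketch below uses an invented list of sensitive decision types and a placeholder review queue purely to show the idea.

```python
# Decision categories that, by policy, must never be finalised by AI alone
# (the list itself is illustrative - each organisation would define its own).
SENSITIVE_DECISIONS = {"hiring", "termination", "credit_denial", "medical_diagnosis"}

review_queue = []  # placeholder for a real case-management system

def finalise_decision(decision_type, ai_outcome, human_reviewer=None):
    """Apply a human-in-the-loop gate before an AI outcome takes effect."""
    if decision_type in SENSITIVE_DECISIONS and human_reviewer is None:
        review_queue.append((decision_type, ai_outcome))
        return "pending human review"
    suffix = f" (reviewed by {human_reviewer})" if human_reviewer else ""
    return f"finalised: {ai_outcome}{suffix}"

print(finalise_decision("credit_denial", "deny"))                               # routed to a human
print(finalise_decision("credit_denial", "deny", human_reviewer="loan officer"))
print(finalise_decision("marketing_segment", "segment_B"))                      # low-stakes, AI may proceed
```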

Transparency and Communication: Internally, transparency means informing employees about how AI is

affecting decisions about them (like performance evaluations or promotions, if algorithms play a role) and

decisions they make (providing insight into the tools they use). Externally, it means being honest with customers

about when AI is used and what data is collected. Some banks, for instance, let customers know that an

automated system did the initial credit assessment and give a route to request human reassessment – this kind of

candour can actually improve trust, as customers feel they are respected and have recourse. It also pressures the

AI to perform well since its suggestions might be scrutinised. Interestingly, behavioural research shows people

appreciate procedural fairness: even if they get a negative outcome, if they believe the process was fair and

transparent, they react less negatively. So transparency is not just an ethical duty, but also a strategy to maintain

trust even when AI systems must deliver unwelcome news.

Monitoring and Auditing: The governance framework should include continuous monitoring of AI

performance and impacts, not just one-time reviews. AI models can drift (their accuracy degrades if data

patterns change), and their use can evolve in unintended ways. Companies are starting to implement AI

monitoring dashboards, analogous to financial controls, tracking key metrics like bias indicators, error rates, and

usage statistics. For example, if an AI recruiting tool suddenly starts filtering out a higher percentage of female

candidates than before, that flag can trigger an investigation. This is similar to the way credit scoring models are

monitored for bias in lending. Some jurisdictions are likely to mandate such audits (the EU AI Act, for instance, requires logging and human oversight for high-risk AI). Incorporating this proactively is wise. It again brings in

behavioural science at the organisational level: what gets measured gets managed. By measuring ethical and

human-impact metrics, not just performance, an organisation signals its priorities and catches issues early. There

is also a behavioural aspect in that knowing one is monitored can deter negligent behaviour – if teams know

their AI deployment will be audited for fairness, they’re more likely to design it carefully from the start.
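
A monitoring dashboard of this kind ultimately rests on simple, repeatable checks. The sketch below illustrates one such check in Python: it compares selection rates across candidate groups and raises a flag when one group's rate falls below a chosen fraction of the highest rate (the familiar "four-fifths" heuristic). The data layout and threshold are assumptions for illustration, not a prescription.

    # A minimal sketch, assuming each screened candidate is logged as a
    # (group, passed) pair; the 0.8 threshold is the common "four-fifths"
    # heuristic and is an assumption, not a prescription.
    from collections import defaultdict

    def selection_rates(records):
        totals, passes = defaultdict(int), defaultdict(int)
        for group, passed in records:
            totals[group] += 1
            passes[group] += int(passed)
        return {g: passes[g] / totals[g] for g in totals}

    def disparate_impact_alerts(records, threshold=0.8):
        # Flag any group whose selection rate falls below `threshold` times
        # the highest group's rate, as a trigger for human investigation.
        rates = selection_rates(records)
        best = max(rates.values())
        return [g for g, r in rates.items() if best > 0 and r / best < threshold]

    this_week = [("A", True)] * 50 + [("A", False)] * 50 + \
                [("B", True)] * 30 + [("B", False)] * 70
    print(disparate_impact_alerts(this_week))   # ['B'] -> investigate further

In a real deployment a check like this would run on a schedule and feed the dashboard alongside error rates and usage statistics, so that drift in any metric triggers review rather than going unnoticed.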

Responsive Governance: Governance shouldn’t just be rigid control; it must also be adaptive. If an audit or a

whistleblower or an external event reveals a problem (say, an AI is implicated in a privacy breach or bias

incident), an agile governance process can pause that AI’s deployment and convene a response team to fix it.

This has happened at some tech companies: when a facial recognition product was found to exhibit racial bias, for example, the company voluntarily halted sales to law enforcement and invested in improvements. The ability to

respond quickly to ethical issues – essentially an organisational form of course correction – will define

companies that can retain public trust. It is analogous to product recalls in manufacturing: how you handle a

flaw can make or break your reputation.

A specific domain example: Public services and government are increasingly using AI (for welfare eligibility,

policing, etc.), and they have set up governance like independent oversight panels and algorithm transparency

portals where the code or at least a description is published for public scrutiny. The Netherlands, after a scandal

where a biased algorithm falsely flagged citizens for welfare fraud (the SyRI system), established stricter oversight and even legal bans on such algorithms until proper safeguards were in place. The lesson was that failing to temper technical possibility with behavioural and ethical oversight can lead to serious harms, which then require rebuilding trust from scratch. Dutch agencies now emphasise citizen privacy, feedback from social scientists, and smaller pilot programmes to evaluate impacts before scaling.

Within organisations, employee involvement in governance is an interesting trend. For instance, some

companies have ethics champions or ambassadors in each department who ensure local context is considered in

AI use and act as liaisons to the central AI ethics committee. This decentralises ethical mindfulness – a bit like

having safety officers throughout a factory, not just at HQ. It leverages the behavioural principle of ownership:

people on the ground often see problems early, and if they feel responsible for ethics, they’re more likely to

speak up rather than assume “someone else up high will take care of it.” Creating safe channels for such voices

(whistleblower protections, open-door policies on AI concerns) is vital, reflecting Edmondson’s psychological

safety concept again, but in the ethics domain.

Finally, regulatory engagement is part of governance now. Organisations should keep abreast of and even help

shape emerging AI regulations and industry standards (like IEEE’s work on AI ethics standards). This proactive

approach means they’re not caught off guard by compliance requirements and can even gain a competitive edge

by being early adopters of high standards (much like companies that embraced environmental sustainability

early reaped reputational rewards). It also ensures that their internal governance aligns with external

expectations, making the whole ecosystem more coherent.

In sum, the practical integration of behavioural science and AI requires concerted effort in culture, trust-

building, skill development, decision processes, and governance. The themes we’ve discussed are deeply

interrelated: a purpose-driven culture facilitates trust; trust and skills enable adaptive decision-making; good

decisions and experiences reinforce trust and culture; and governance sustains it all by ensuring accountability

and alignment with values. Organisations that weave these elements together are effectively operationalising the

composite capability paradigm – they are designing themselves to be both high-tech and deeply human,

dynamic yet principled.


Conclusion

Behavioural Sciences in the Age of AI is not just an academic topic; it is a lived strategic journey for

organisations today. In this paper, we have traversed the historical and theoretical landscape that underpins this

journey – from Simon’s realisation that human rationality is bounded, to Brynjolfsson’s insight that humans and

machines, working as partners, can achieve more than either alone, to Floridi’s urging that AI be guided by

human-centric principles for a flourishing society. These insights form a tapestry of wisdom: they tell us that

effective use of AI requires understanding human cognition and behaviour at individual, group, and societal

levels.

We anchored our discussion in a practical framework – the composite capability paradigm – which captures

how human intelligence, machine cognition, and agile interfaces must seamlessly interact for organisations to

thrive. We situated this paradigm within broader literature, showing it resonates with socio-technical theory’s

call for joint optimisation, dynamic capabilities’ emphasis on agility and reconfiguration, and ethical

frameworks’ insistence on purpose and values. In doing so, we positioned the internal strategy document’s frameworks as part

of a continuum of scholarly and practical evolution, rather than isolated ideas. This enriched perspective reveals

that the challenges of the AI era – building trust, preserving human agency, ensuring ethical outcomes, and

maintaining adaptability – are new in form but not in essence. They echo age-old themes of organisational life:

trust, purpose, learning, and justice, now cast in new light by technology.

Through real-world examples across health, public services, business, and government, we illustrated both the

opportunities and the cautionary tales. We saw how a hybrid of radiologists and AI improves diagnostic

accuracy, and how a poorly overseen algorithm can cause public harm and outrage (as in the welfare case).

These examples reinforce a key takeaway: human–AI collaboration works best when it is designed and

governed with a deep appreciation of human behaviour – our strengths (creativity, empathy, judgment) and our

weaknesses (bias, fear of change, fatigue). In healthcare, education, finance, and beyond, those deployments of

AI that succeed tend to be those that augment human decision-making and are accepted by humans; those that

fail have often neglected the human factor, whether by ignoring user experience, eroding trust, or conflicting with

values.

Several cross-cutting themes emerged in our analysis: purpose, trust, digital fluency, human agency, adaptive

decision-making, and governance. It is worth synthesising how they interplay to inform a vision for

organisations moving forward. Purpose and values form the north star – they ensure AI is used in service of

meaningful goals and set ethical boundaries. Trust is the currency that allows humans to embrace AI and vice

versa; it is earned through transparency, reliability, and shared understanding. Digital fluency and skills are the

enablers, equipping people to work alongside AI confidently and competently. Human agency is the lens of

dignity – maintaining it means AI remains a tool for human intentions, not a black box authority; it means

employees at all levels feel they can influence and question AI, thereby avoiding a dystopia of uncritical

automation. Adaptive decision-making is the modus operandi for a complex world – using data and

experimentation (often powered by AI) but guided by human insight to navigate uncertainty in an iterative,

learning-focused way. And governance and ethics are the safety rails – without them, short-term wins with AI

can lead to long-term crashes, whether through regulatory penalties or loss of stakeholder trust.

Looking ahead, the Age of AI will continue to evolve with new advancements: more multimodal AI, more

autonomous systems, more integration into daily life. Behavioural science, too, will evolve as we learn more

about how people interact with increasingly intelligent machines. Concepts like algorithmic nudges (AI shaping

human behaviour subtly) or extended cognition (humans thinking with AI aids) will grow in importance. But the

core insight of this paper is likely to endure: that the human in the loop is not a weakness to be engineered away,

but the very source of direction, purpose, and ethical judgment that technology alone cannot provide. As the

internal strategy document eloquently put it, we are witnessing “a philosophical shift: reclaiming human agency

and purpose by ensuring capabilities reflect organisational values and aspirations in a world of rapid change.” In

practical terms, this means organisations must consciously design their AI deployments to amplify human

potential and align with human values, not suppress them.

For strategists and leaders, then, the task is clear. It is to become, in a sense, behavioural engineers of

organisations – crafting structures, cultures, and systems where humans and AI together can excel. It is to

champion ethical innovation, proving that we can harness powerful technologies while keeping humanity at the

centre. And it is to invest in learning and adaptation as primary capabilities, so that as new research and new

technologies emerge, the organisation can incorporate them responsibly and effectively. The organisations that

succeed in the coming years will be those that manage this integration deftly – those that fall into neither the trap of techno-centrism (trusting technology blindly and neglecting the people) nor the trap of techno-scepticism (fearing technology and falling behind), but find a harmonious path of augmentation, where technology elevates people

and people steer technology.

In conclusion, behavioural science offers both a caution and a promise in the age of AI. The caution is that

ignoring human factors can lead even the most advanced AI solutions to fail or cause harm. The promise is that

by embracing a human-centered approach, we can unlock the full potential of AI to create organisations that are

not only more innovative and efficient, but also more resilient, ethical, and responsive to those they serve. By

learning from the past and grounding ourselves in foundational principles of human behaviour, we equip

ourselves to shape a future where AI amplifies human wisdom and creativity rather than undermining them. In

doing so, we ensure that the Age of AI remains, fundamentally, an age of human progress and empowerment,

aligned with the values and behaviours that define our humanity.



Sources:

Investopedia – Herbert A. Simon: Bounded Rationality and AI Theorist (investopedia.com)
The Decision Lab – Daniel Kahneman profile (thedecisionlab.com)
The Decision Lab – Gerd Gigerenzer profile (thedecisionlab.com)
The Decision Lab – Gerd Gigerenzer (quote) (thedecisionlab.com)
CIPD – Our Minds at Work: Developing the Behavioural Science of HR
Medium (Link Daniel) – Structured thinking and human-machine success (chess example)
TED Talk (Brynjolfsson) – Race with the machine (freestyle chess) (blog.ted.com)
Workplace Change Collab. – Radiologist + AI outperforms either alone (wpchange.org)
New Capabilities (Internal doc) – Composite capability paradigm excerpt
New Capabilities (Internal doc) – Reclaiming human agency and purpose
New Capabilities (Internal doc) – 4D chess analogy for modern environment
New Capabilities (Internal doc) – Human-machine “dance” metaphor
AI4People (Floridi et al.) – AI ethics principles (human dignity, autonomy, etc.) (pmc.ncbi.nlm.nih.gov)
Workplace Change Collab. – Radiologist-AI hybrid sensitivity/specificity (wpchange.org)
David J. Teece – Dynamic capabilities definition (davidjteece.com)
David J. Teece – Sensing, seizing, reconfiguring (agility) (davidjteece.com)
Wikipedia – Sociotechnical systems theory (Trist et al.) (en.wikipedia.org)
P. Millerd (blog) – Schein on learning anxiety vs survival anxiety (pmillerd.com)
Workplace Change Collab. – Ethical AI, inclusion, governance as “finishing layer”
New Capabilities (Internal doc) – AI as social-technological actor symbiosis
New Capabilities (Internal doc) – Universal AI fluency and continuous upskilling