I am incredibly excited to finally share this: my next book… my labour of love… is almost here. Born from a deep desire to bridge the gap between humans and technology at a time when machines are accelerating faster than our ability to make sense of them.
This work brings together years of reflection, lived experiences and hard questions about the duality of AI and what it means to stay human.
This has been a long time coming. Since AI went mainstream, I’ve been researching and working to make sense of it… not just as a digital futurist or keynote speaker but as an ordinary human navigating the same uncertainty as everyone else.
Beyond the headlines and hype cycles, I’ve watched excitement turn into anxiety, curiosity morph into dependency and opportunity collide with resistance and fear.
I saw organisations rush to adopt AI without preparing their people, and some felt pressured to “keep up” without understanding what they were keeping up with.
This book is the result of years spent working with leaders, organisations and everyday people who feel both fascinated by AI and quietly overwhelmed by it.
AI: Humanity’s Greatest Frenemy
AI: Humanity’s Greatest Frenemy is not about choosing sides between humans and machines. It’s about confronting the tension between them. It’s about leadership, culture, adaptability and the dangerous myth that technology alone will save us.
This book started as questions that kept surfacing in keynote talks, panel conversations, boardrooms, fireside chats and one-on-one conversations:
- What happens to identity, purpose and dignity when AI takes over jobs?
- What happens to humanity when intelligence is no longer our exclusive advantage?
- How do we protect creativity, originality and critical thinking?
- Who is accountable when AI gets it wrong?
- What does leadership look like in the age of AI?
- What does “being educated” even mean in an AI-first world?
- How do we preserve truth and reality when AI can generate convincing fakes at scale?
And so, this book is my attempt to sit with these questions… honestly, critically and unapologetically, without rushing to neat answers or comfortable conclusions. The stakes are too high for convenient answers.
In this book, I deliberately resist the urge to oversimplify a reality that is anything but simple.
Instead, I’m creating a space for reflection, tension and nuance because the hardest questions AI raises have nothing to do with the technology itself and everything to do with humanity.
In this book, I’m taking readers on a journey through:
- The awakening: why this moment in history is humanity’s red-pill or blue-pill moment.
- The confrontation: the fears, hopes, myths and existential risks that AI unleashes.
- The collapse of old models: how legacy systems, outdated education, and analogue mindsets cannot survive exponential reality.
- The rise of machine intelligence… and why even experts fear what humanity may lose.
- The two faces of AI (Terminator vs Iron Man): this book lives in the tension between them, because the truth is that both can exist at the same time and the outcome depends far more on human choices than on the technology itself.
- Technological singularity: not as a sci-fi end point or an inevitable breakthrough but as a warning signal. A moment that forces us to ask whether we are racing toward intelligence without wisdom and progress without control.
- The survival blueprint: 21st-century human skills, AI literacy and the mindset shifts required to stay relevant.
- Leadership in the age of AI: the new rules for governments, companies, cultures and communities.
- Africa’s pivotal role: exploring the question… will Africa be a casualty in the AI war?
AI doesn’t need another doomsday prophet. What it needs (what we need) is an honest conversation. One that moves beyond fear-mongering and blind optimism, and instead confronts the real, lived impact of AI.
An honest conversation about the opportunities and the trade-offs, the gains and the losses, the empowerment and the erosion happening at the same time.
Honesty demands that we look inward, that we examine our own complacency and our addiction to convenience.
Exploring the Duality and Complexities of AI
AI forces us to hold two uncomfortable truths at the same time: that it can elevate human potential and that it can also expose how fragile our systems, skills and assumptions really are.
This book lives in that tension… exploring the contradictory nature of AI as both a powerful tool for progress and a source of significant risks.
At its core is the duality of “co-intelligence” and “co-dependence.” AI can amplify human capability while quietly eroding human agency.
It can democratise access to knowledge while concentrating power in the hands of a few. It can enhance creativity, yet flatten originality into predictable patterns.
The same technology that helps a doctor diagnose disease faster can also automate entire professions.
The same algorithms that personalise learning can also shape behaviour, reinforce bias and blur the line between choice and influence.
Progress and consequence are no longer separate conversations. They are happening simultaneously and how we respond will shape not just the future of AI but the future of humanity.
This book doesn’t frame AI as something to be feared or blindly embraced. Instead, it examines the tension… the grey areas where opportunity and risk coexist.
Because the real challenge of AI isn’t whether it’s good or bad. It’s whether we are prepared to live responsibly with something so powerful, adaptive and indifferent to human values unless we deliberately embed them.
Naming a Fear Many People Can’t Articulate
At its core, this book is about naming a fear many people can’t articulate. Not the loud, cinematic fear of robots taking over the world but the quieter, more personal one.
- The fear of becoming irrelevant.
- The fear of falling behind without knowing when it happened.
- The fear of waking up one day and realising that the skills, experience, and identity you worked so hard to build no longer hold the same weight in an AI-shaped world.
It’s the discomfort people feel when technology moves faster than their ability to adapt and they’re told to “just learn the tools” as if that alone will protect them.
It’s the anxiety beneath the productivity promises, the unspoken shame of needing help from AI and the growing sense that intelligence itself is being redefined without our consent.
This book doesn’t dismiss that fear or dramatise it. It brings it into the open… so that we can confront it honestly, think critically about it and respond with intention rather than panic or denial.
Because fear unnamed becomes fear unchallenged. And in the age of AI, unchallenged fear is far more dangerous than the technology itself.
Who Is This Book Written For?
This book is a lifeline for:
- people feeling overwhelmed by AI
- professionals afraid of losing their jobs
- leaders struggling to navigate digital transformation
- parents concerned for their children’s future
- students preparing for an unpredictable world
- entrepreneurs and creators trying to stay ahead
- everyday people who simply want to understand what is happening
AI: Humanity’s Greatest Frenemy is the book for this moment. You will explore the promises and perils of intelligent machines, the future of work, the psychological traps of AI dependency, the existential risks leaders refuse to acknowledge and the extraordinary opportunities available to those willing to adapt.
You’ll also step into the global conversation… with chapters examining the rise of AI-powered deception, deepfake dangers, the failures of legacy systems, Africa’s leapfrog potential, AI-induced stupidity and the urgent need for human-centric thinking.
Whether you’re AI-curious, AI-anxious or simply trying not to get left behind, AI: Humanity’s Greatest Frenemy will equip you with the mindset, skills and awareness you need to stay relevant and empowered in the most disruptive decade of our lifetime.
AI Is Not the End Game. Humanity Is.
Technology has always been a tool, never the destination. Fire, electricity, the internet… each reshaped civilisation but none of them defined what it meant to be human.
AI is no different, even though it feels unprecedented in its speed, scale and reach. The danger lies in mistaking capability for purpose and intelligence for wisdom.
AI can calculate faster, recognise patterns at scale and automate decisions but it cannot decide what should matter.
AI has no lived experience, no moral intuition, no accountability for the consequences of its outputs. Those responsibilities remain human.
And yet, the more powerful AI becomes, the more tempting it is to hand over judgment, agency and responsibility in the name of efficiency.
AI cannot define purpose, meaning or values. That responsibility remains firmly human.
AI doesn’t change organisations… people do.
AI doesn’t transform culture… leadership does.
AI doesn’t decide our future… our choices do.
Humanity as the end game means resisting the temptation to equate progress with speed alone.
It means designing organisations that value critical thinking over compliance, curiosity over certainty and adaptability over outdated expertise.
It means teaching people not just how to use AI but when not to… when to pause, question and intervene.
Coming Soon (Early 2026)
Arriving in early 2026, AI: Humanity’s Greatest Frenemy explores the uncomfortable but necessary questions surrounding artificial intelligence, existential risks, singularity and the future of humanity.
This book exists to help us navigate the tension between progress and responsibility, capability and consequence, intelligence and wisdom.
Until then, this conversation continues…
If these ideas resonate, I invite you to follow the journey, join the dialogue and reflect alongside me as we explore the complexities and duality of AI: Humanity’s Greatest Frenemy.