In the race for innovation, Artificial Intelligence (AI) is the engine powering business growth across industries.
From predictive analytics to autonomous systems, businesses are embracing AI not just to streamline operations but to redefine what’s possible.
Yet, as we stand at the frontier of this technological revolution, one question echoes louder than the buzz of machine learning algorithms:
Are we solving problems for society, or creating more?
The European Union’s recent AI Act (a landmark piece of legislation) casts a spotlight on this tension. It doesn’t just regulate AI; it forces us to confront a deeper issue:
“How do we balance the pursuit of profit with the responsibility to protect people?”

Or, to borrow from Spider-Man:
“With great power comes great responsibility”
And AI is nothing if not great power.
AI Adoption Through the Business Lens
For many organisations, AI is the ultimate growth hack.
Businesses are adopting AI to:
Optimise efficiency: Automating tasks, reducing errors, and saving costs.
Enhance customer experience: Personalising interactions at scale, predicting consumer behaviour.
Drive innovation: Unlocking new products, services, and even business models.
Consider AI in healthcare. Diagnostic algorithms can detect diseases faster than humans, potentially saving lives.
In finance, AI-powered fraud detection systems identify suspicious transactions in real time, protecting both institutions and individuals.
But what happens when the same tools designed to optimise start to dehumanise?
The Dark Side of Experimentation
While some businesses adopt AI cautiously, others treat it like a playground (an experimental space with few guardrails). This unrestrained approach can lead to unintended consequences:
Bias and discrimination: AI learns from data, and data reflects human biases. Without oversight, algorithms can reinforce systemic inequalities.
Erosion of privacy: Surveillance technologies powered by AI push the boundaries of personal freedom and data rights.
Job displacement: Automation threatens to replace not just routine jobs but roles that were once considered uniquely human.
It’s here I’m reminded of a line from Jurassic Park, where Dr. Ian Malcolm warns:
“Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should.”

Swap out ‘scientists’ for ‘tech leaders’, ‘developers’, or even ‘business strategists’, and the cautionary tale fits all too well.
Just because we can build increasingly powerful AI systems doesn’t mean we should, at least not without asking critical questions about their impact.
And if history has taught us anything, it’s that unchecked power rarely ends well.
Just ask the Terminator.
“Skynet became self-aware at 2:14 a.m. Eastern Time, August 29th.”
While we’re not (yet) facing an AI uprising, the underlying message holds true: The consequences of technological decisions often outpace our ability to control them.

The real question we should be asking is this:
What Problem Are We Solving?
Too often, AI solutions are designed to solve business problems such as:
Reducing costs
Increasing profits
Scaling faster
But technology doesn’t exist in a vacuum.
Every algorithm deployed has ripple effects across society.
Are we using AI to solve for efficiency at the expense of empathy?
Are we optimising for convenience while compromising on fairness?
Are we creating growth for businesses while shrinking opportunities for people?
Or, to quote The Matrix:
“The problem is choice.”
The choice isn’t whether AI will shape our future; it already is. The real choice is how we let it.

A New Approach: Purpose-Driven AI
If businesses truly want to create infinite impact, the question isn’t “Can we build this?” but “Should we?”
Here’s how businesses could reframe their approach:
Align AI with Purpose: Growth should not be the only metric. Evaluate AI initiatives based on how they contribute to societal well-being.
Embed Ethics into Design: Ethical considerations shouldn’t be an afterthought. Build them into the development process from day one.
Foster Transparency: Make AI decisions explainable. If an algorithm affects people’s lives, they deserve to know how and why.
Empower Human Oversight: AI should augment human capabilities, not replace human judgment. Keep people in the loop, especially for critical decisions.
Think of it like assembling The Avengers. Each member (or AI system) brings unique strengths, but without a unifying purpose, or, say, a Nick Fury to keep things in check, you risk chaos instead of collaboration.
The Infinite Impact of AI
AI isn’t inherently “good” or “bad”; it’s a tool. The impact it has depends on how we wield it.
Businesses have the power to harness AI not just for profit but for progress.
To solve not just operational inefficiencies but societal challenges.
To create growth that is not only exponential but also ethical.
The future of AI is not about man versus machine. It’s about how we, as humans, choose to define its role in our lives.
Will we create a world where technology serves us all or one where we serve it?
The choice is ours.
https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence