There’s no question that 2025 was a banner year for Artificial Intelligence. It felt like leaps were being made weekly, especially in the second half of the year. It wasn’t always easy to keep up, but it was exciting, and mind-bending, to say the least.
Over the past year, more and more business leaders grew bigger and bigger dollar signs in their eyes, and even wilder ambitions for harnessing the magic of AI within their businesses. They dreamed of double-digit reductions in operational cost, human effort, and time-to-outcome, while crossing their fingers for simultaneous double-digit increases in growth and profit, expecting massive results with a snap of the fingers and a pinch of Agentic Artificial Intelligence.
This was in no way surprising. We’ve been bombarded from every direction: the media, the technorati, every consultant with a social media account, and of course the producers of frontier reasoning models themselves, all telling us that AI is the inescapable, inevitable future of everything, the accelerator, the do-it-all machine, the saviour, the end of work as we know it.
It’s been beaten into us that “it’s so easy that all you have to do is turn Agentic AI on, and your business will see everything change overnight” for so long that it’s now become the expectation. We’re told that AI is faster, smarter, more capable and more diligent than most humans… and better still, it never sleeps. It’s proclaimed that AI is a magical superpower that everyone can use instantly, without effort, without breaking a sweat. It’s hard not to draw similarities to Ozempic’s positioning: all the benefits without the effort. According to the narrative, everyone should already be on the AI bandwagon, racing to realize the material gains and deliver their products or services as quickly as possible, more efficiently, with fewer human hands and larger profit margins. What could go wrong?
It’s undeniable that AI is the most significant and rapidly evolving technological advancement in human history, and I have seen a few of them firsthand over the last 30 years. AI’s potential is indeed tremendous. Smart, secure and well-built AI implementations can be huge differentiators for businesses, having significant impacts on the bottom line, the capital required to produce outcomes, and the speed, agility, and adaptability at which a business can operate… perpetually building bigger moats, increasing customer satisfaction, retention and revenues.
It’s a no-brainer, right?
While it sounds wonderful, it’s akin to someone in 1995 saying “hey, the internet is here, all your customers will now only buy your goods online, you should close all your brick-and-mortar stores” while completely side-stepping the infrastructure requirements, the corporate reorganization, the new logistics, and the digital transformation required to truly realize the full extent of the theoretical benefits. Nothing involving such immense change comes that easily, and AI is no exception.
Meanwhile, for most established businesses, the reality of internal AI projects has not been magical at all; in many cases it hasn’t even met expectations. In fact, the results have been pretty grim.
Whether it’s the inability to fully complete the required task or deliver the desired output, hallucinated results, ignored policies and regulations, or simply disappointingly slim reductions in time, speed, and/or cost, it appears that AI isn’t coming anywhere near those magical high points in large business, corporate or enterprise applications. The truth has been trickling in throughout 2025: according to an MIT study, “95% of internal AI projects in large businesses fail”; according to RAND, “AI implementations have 2x the failure rate of traditional IT projects”; and “more than half of AI projects never make it to production,” says Gartner.
These numbers aren’t just bad, they’re doom-stats. This isn’t a “some work, some don’t” situation; this is an abysmal, systemic failure of the promise of AI in practical application in large businesses.
At the same time, we’re hearing and seeing the exact opposite end of the success spectrum with individuals, solopreneurs, and small teams. They’re having a heyday. At least, that’s the picture from the conversations I’ve had with respected colleagues and clients on the subject, as well as my own experience exploiting AI in my personal and professional life. On social media, of course, it’s even louder, with people screaming from atop the tallest hill they can find about their AI expertise or their unimaginable successes. It’s become the norm to see “I fired my entire staff and now run my business on these 4 prompts” or “never lift a finger to run your business again” posts spreading like a virus across your feeds. While these are obviously blatant exaggerations, if not simply short-sighted, there is truth hidden in these proclamations. I’ve seen the magic others have conjured, and I’ve conjured my share of magic myself. The magic is real.
There’s a massive, opposite-ends-of-the-spectrum difference being reported between smaller, newer companies and those that have been around for a long time, companies which should, from a data perspective, be sitting on goldmines. The conundrum is: why?
There’s no question that individuals, small teams and solopreneurs have more agility, less red tape, and most importantly, significantly less technical debt and legacy system lock-in than the enterprise, and that these players have been able to more easily adopt AI to make more with less, faster and cheaper. This is evidenced by the sheer number of solopreneurs and small-team startups that have come out of nowhere to gain millions, and in some cases billions, in investment right out of the gate.
However, even with the benefits of smaller size, increased agility, blue-sky thinking and truly blank canvasses, the results for smaller teams, solopreneurs and individuals have been… while more tangible than those of larger companies… not the mind-blowing, magical, once-in-a-generation leaps to be expected from such a technological tsunami. Sure, they’re doing better and seeing larger benefits than enterprise or corporate entities, but not in proportion to the reduced roadblocks they face.
I mean, AI is supposed to essentially be the skill-hive-mind, allowing us to tap into the talent, skill and experience of everyone who has ever posted, shared, or had their thinking digitized and put online (disregarding the copyright and IP arguments for the purposes of this discussion). AI supposedly gives us the ability to do, at least in a passable fashion, the job any other human has ever done, right? And we’re building automations? Workflows? Things that do what we tell them to in the background? Come on. That’s not even scratching the surface of what’s possible with AI. If we have access to everything everyone has previously thought, argued, or contemplated digitally, shouldn’t we all be in absolute awe of what’s coming out every week? Shouldn’t we be leveraging all of what’s come before as the basic starting point, and building from there? And yet, even with the most funded, most celebrated AI products and services, we’re just seeing what we would see from human effort, albeit insanely faster. What a waste. What a disappointment. We need to do better.
So, why is the reality of AI implementations so mediocre relative to what we’ve been told of its potential? What’s blocking the innovation, the seismic shifts, the “where the hell did that come from?”, the “how the hell did they do that?”, the “our world just completely changed” moments?
The easy answer is that there are more cogs and gears spinning in businesses: more people involved, more rules, regulations, policies and procedures. Add in tech debt, dirty data, and legacy platforms that don’t communicate, and getting AI to speed things up or work more efficiently in larger businesses is far more complicated. There’s truth to that, but it is not the real root of the problem. It’s merely an excuse.
1. AI is not as structured as traditional tech projects
We’ve trained generations of product and software teams to work procedurally. To work small, to reduce risk, to build incrementally to see results faster, to not take on too much at once, because as humans we can introduce too many errors and misinterpretations that, if not checked, can compound into an absolute mess and failure of a product or service.
We’ve been forced to freeze time and base product decisions on what we knew at that moment, and then take so long to strategize, design, and build solutions that the data those solutions were based on is no longer relevant or accurate. We’ve institutionalized the solution process, the means, methods and practices, into immutable laws to follow, based on human capability and the time it takes humans, and worse, teams of humans, to successfully create things.
AI is not linear, it is not human, and it is not bound by the knowledge, experience, and skill constraints that we are. It does not get distracted, it does not think the way we do, and yet we still place human constraints on always-on, always-working, always-thinking technology.
2. AI is seen as a get rich quick scheme
So many companies and individuals I’ve talked to are more focussed on how to make money off of AI than they are on using it, let alone exploiting it, to solve real business problems. “Let’s build a chatbot”. “Let’s automate this workflow”. “Let’s replace these workers with AI”. None of it is unique, none of it is thoughtful, none of it has meaning. It’s just lazy.
AI is a means, but without a prescribed end, without a truly unique value proposition, you’re just wrapping someone else’s revenue stream with a set of instructions and guardrails. That will not last, that has no moat, and that will simply be replicated and replaced by the large model providers. If AI is not used with purpose, or is merely seen as the purpose, it will fail. Every time.
3. AI is treated like a science project
In many organizations, large or small, AI is seen as something to dip your toe into, to sandbox, to play with, not something to bet on. Sure, it looks good to investors and the board to say “we’re exploring AI” without impacting how your business operates today. It’s safe, it’s separate, it’s clean. But if it’s not seen as core to the future of the business, it will remain child’s play, and be respected and nurtured as such.
For AI to succeed in business, it can’t be about prompt engineering training, or educating the workforce on the benefits of AI, or deploying co-pilots to all employees. That approach only creates wasted effort and individual solutions that compete and conflict with a multitude of other individually built solutions, and will result in exponential costs, operational challenges, reduced efficiency, and an eradication of trust in AI throughout the organization.
4. AI is being built on request-response tech stacks
I get it, it’s what companies have experience with, it’s what’s already deployed. It’s sort of like trying to shoehorn a V12 into a Ford Fiesta. It won’t fit, and even if it did, the car isn’t built to handle that kind of power: not the tires, not the brakes, not the frame. The car will spin out, leave the ground, and careen into the nearest lamppost before you even try to take your first turn.
AI truly requires a new stack, from hardware to software, backend, middleware, and frontend. Everything needs to change to make it work at its peak. Trust me: in strategizing and designing AI-Native products and services over the last few years, I’ve learned that the way things have worked doesn’t work anymore. Embracing AI means going all in, with a whole new approach and a new technical architecture. Otherwise you’re just shooting yourself in the foot. Otherwise, you might as well not even start.
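To make the contrast concrete, here is a minimal, purely illustrative sketch in Python. It contrasts a traditional request-response handler, where nothing happens until a caller arrives and blocks, with a long-running, event-driven agent loop that consumes events as they occur. Every name in it (handle_request, agent_loop, the sample events) is a hypothetical stand-in, not a reference to any particular product or stack.

```python
import asyncio

# Traditional stack: nothing happens until a request arrives,
# and the caller blocks while waiting for the answer.
def handle_request(request: str) -> str:
    return f"processed: {request}"

# Event-driven sketch: the agent runs continuously, consuming events as
# they occur, and can act without a caller waiting on the other end.
async def agent_loop(events: asyncio.Queue) -> None:
    while True:
        event = await events.get()
        if event is None:          # sentinel: shut the loop down
            break
        print(f"acted on: {event}")

async def main() -> None:
    events: asyncio.Queue = asyncio.Queue()
    agent = asyncio.create_task(agent_loop(events))
    # Events flow in as the business runs; nobody blocks on a response.
    for e in ("invoice received", "inventory low", "contract updated"):
        await events.put(e)
    await events.put(None)
    await agent

if __name__ == "__main__":
    asyncio.run(main())
```

The point isn’t the twenty lines of Python; it’s that the second shape has no natural place to bolt onto a stack built entirely around the first.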
5. AI is being force-fed dirty, unstructured data
Today’s AI is pretty amazing at understanding things from written or spoken context. You can feed AI meeting transcripts, notes, documents, and even sketches (gasp!) and have it tell you what it all means, what was intended, what was inferred. But feed it a huge dataset, in an unstructured fashion, and ask it to make meaning of it, and it will choke. The context windows aren’t sufficient, and context is absent in traditional datasets. If you feed a model swaths of disconnected, poorly labeled or poorly contextualized data from multiple sources, even if you vectorize it or build a knowledge graph from it, without proper context you are basically hand-engineering hallucinations.
The difference is that LLMs can understand language in the context of a conversation or long-form copy, because it flows, because it follows a thought process. Raw data, in short unstructured formats, forces models to invent relationships rather than being handed them, or even having enough signal to infer them. This is a key challenge for larger organizations that have tremendous amounts of data spread across a number of disconnected or hacked-together products and services, or connected via logic buried in APIs or middleware. It comes out as an indiscernible soup that cannot be made sense of.
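As a purely hypothetical illustration of the difference, consider the sketch below in Python: the same row of data, fed to an embedding step raw versus wrapped in explicit source, entity, and unit context. Every field name here is an assumption of mine, and the embed function is a stand-in for whatever embedding model you actually use.

```python
from dataclasses import dataclass

def embed(text: str) -> list[float]:
    # Stand-in for a real embedding call; the point is what goes in, not how.
    return [float(ord(c)) for c in text[:8]]

@dataclass
class Record:
    source: str    # which system the row came from
    entity: str    # what the row is actually about
    payload: dict  # the raw fields

# Naive: vectorize the raw row. The model must guess what "42, Q3, 17.5" means.
naive_vector = embed("42, Q3, 17.5")

# Contextualized: bind the relationships into the text itself, so nothing
# is left for the model to invent.
record = Record(
    source="ERP / invoicing",
    entity="Customer #42",
    payload={"quarter": "Q3", "revenue_musd": 17.5},
)
contextualized = (
    f"From {record.source}: {record.entity} generated "
    f"{record.payload['revenue_musd']} million USD in revenue during "
    f"{record.payload['quarter']}."
)
better_vector = embed(contextualized)
```

Same data, radically different odds of a sensible answer coming back out.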
6. AI is still new
In most companies, the folks who built databases and infrastructure, and/or wrote code, are being asked to learn, understand and implement AI solutions and deploy foundational technologies. Knowledge workers are being asked to use a blinking cursor in the prompt field of whichever AI platform the company subscribed to in order to do more, faster. Staff are building their own shadow processes and workflows, because they’re not supported by leadership, and their competitors are moving faster than they can.
Everyone is saying they have AI experience today, and yeah, many do for sure: in the use, the application, the setup, and the building of AI solutions. But very few are experienced enough to think about the where, the why, and the how of truly changing a business with AI.
7. AI is changing every few weeks
AI experts are everywhere. I challenge you to go on LinkedIn, scroll your feed and not see someone with AI in their title, their post content, or their byline in the first 5 posts. But how? How can anyone be an expert unless they’re building the models… and even then, it’s like building a chaos machine: you know what you made, but not what will come out of it. The AI space is changing so fast, it’s pretty crazy to assume anyone is an expert. We’re all fumbling in the dark here, and as soon as we all admit that, we’ll be better off.
Not being an expert is actually a good thing, when you think about it. When nothing is certain, nothing is static, nothing is permanent, it unlocks child-like exploration and experimentation, it opens the doors to true innovation, it creates real freedom.
8. AI is still facing quiet sabotage
Let’s be honest for a second: everyone, including Sam Altman himself, is scared about where AI is headed. The average worker, if they’re not living under a rock or blissfully ignorant, is probably at least a little worried that AI will take their livelihood. And honestly, we all should be scared.
As such, consciously or subconsciously, people are subverting or actively sabotaging AI implementations in self-preservation. It’s human nature, after all. Whether it’s as small as not telling the whole truth to the team trying to build their job into a workflow, or as subtle as withholding details from a prompt so that they can still feel superior, or as insidious as purposely breaking or working against an AI implementation, people are working against AI. This is an existential problem, and humans, as much as they may claim otherwise, do not like change.
On the surface, people are putting on brave faces, feigning adoption, and pretending to be advocates while deep down, their fear is festering, manifesting in internal and external blocking behaviours that are adversely affecting the success of AI initiatives.
9. The obsession with off-the-shelf chatbots & co-pilots
Many organizations start and stop with deploying generic, off-the-shelf co-pilot or single-function tools to their workforce, mistaking widespread AI adoption for true transformation. While it may shave minutes or hours here or there, this approach does little to improve the business. In reality, it creates a multitude of individual, siloed solutions that compete and conflict with one another, leading to wasted efforts, exponential licensing costs, and a fragmentation of effort, which ultimately reduces overall efficiency and erodes trust in AI throughout the organization.
In practice, “empowering staff with AI” this way more often leads to staff offloading the important thinking, rather than the monotonous, rules-based parts of the work that don’t really need to be, and shouldn’t have to be, thought about.
10. Lack of accountability and ROI definition
Traditional IT projects have clear metrics: uptime, ticket resolution, feature completeness. AI initiatives often start with vague goals like “increase efficiency” or “reduce human effort,” making success, whether intentionally or not, extremely hard to measure.
Without clear, traceable, and revolutionary KPIs that hold both the AI and business stakeholders accountable for truly transformative change, AI projects drift, derail, and stumble. They end up as perpetual betas with no finish line, delivering little success, because success was not defined.
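For illustration only, here’s what a KPI definition with teeth might look like as a sketch in Python; every name, number and field below is an assumption of mine, not a standard. The point is the shape: a measured baseline, a transformative target, a named owner, and a deadline.

```python
from dataclasses import dataclass

@dataclass
class AIKpi:
    name: str
    baseline: float   # measured before the initiative starts
    target: float     # a transformative goal, not a 10% tweak
    unit: str
    owner: str        # a named human accountable for the number
    deadline: str

kpis = [
    AIKpi("invoice-to-cash cycle", baseline=34.0, target=2.0,
          unit="days", owner="VP Finance", deadline="2026-Q3"),
    AIKpi("manual touchpoints per order", baseline=11.0, target=0.0,
          unit="touches", owner="COO", deadline="2026-Q4"),
]

for k in kpis:
    print(f"{k.name}: {k.baseline} {k.unit} -> {k.target} {k.unit} "
          f"(owner: {k.owner}, due {k.deadline})")
```

If an initiative can’t be written down in roughly this form, it isn’t an initiative yet; it’s a vibe.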
While the above are undeniably tangible obstacles and roadblocks to AI implementation success, I propose that even in their absence, the true potential of AI implementation projects would still not be realized. Sure, these situations definitely play a part, but they’re not at the core of why AI initiatives fail. There’s something much, much deeper at play.
The real reason behind the failure of AI implementations, however, whether in the enterprise, in medium or large businesses, or among solopreneurs, is far, far simpler than most think, once you’ve cleared one critical, painful and difficult mental hurdle.
The real barrier to realizing, let alone maximizing, ROI in internal AI initiatives is, in fact, ‘human’. Not the physical beings themselves, nor the work they perform, their creativity or ingenuity, nor their flexibility or adaptability. It’s not even the lack of skill in AI, nor the fear of losing their jobs to AI agents. At its core, the true crux of the failure in any AI implementation is when we try to map our human ways of working onto machines.
Humans are emotional, illogical, easily distracted, political, fickle and petty creatures, with equally complex and convoluted workplace social dynamics, often more focussed on personal success than on the greater good. It’s just who we are as a species.
AI, on the other hand, when we move beyond the mass-audience chatbot to autonomous agents or AI-powered workflows, is entirely free of the psychological and behavioural challenges and self-involvement presented by humans. Agents and workflows do what we ask them to, with no distraction; they can logically self-verify and self-correct, and can research, source, analyze, process and hold in memory significantly more real-time data, facts, and knowledge than a human can. Agents don’t compete with each other for promotions (though that in itself is not a bad idea), they don’t take jabs at each other by the water cooler, they don’t punch the clock, and they work at a speed and capacity that no human can.
So, why do we try to build AI solutions that replicate how humans work, and wonder why machines aren’t delivering the expected results?
Today, most businesses try to simply lift and shift what they already do, in the ways they’ve always done it, with processes and procedures designed to accommodate the shortcomings of humans. The way things work inside businesses has been engineered over decades to eliminate, prevent or accommodate the failures of human individuals and teams and their inherent communication and collaboration weaknesses, and to play into the egos and politics that inevitably weasel their way into teams more concerned with fiefdoms than with getting the job done.
For the most part, we’ve done pretty well operationally to account for human error and emotion in business processes, because we’ve had to. But, what can be accomplished if we didn’t? What’s possible, when we take the human (mostly) out of the loop? What can we reimagine, what can we do away with, and what can we now do that was simply impossible before? What can we do when we start to think in terms of possibilities, rather than adapting what we do onto AI?
1. AI isn’t human, stop treating it like one
We are trying to fit the relentless, logical, self-correcting power of a machine into a behavioural management framework built to account for human lunch breaks, emotional baggage, corporate politics, infighting, and the occasional sick day. The moment you design an AI solution with human-style bottlenecks, requiring a human review cycle for every agent action, imposing a traditional 4-step approval process, or limiting its scope to a single human's role, you have defeated the entire purpose. AI doesn't need to be managed; it needs to be architected. It needs to be unleashed to work at speeds and scales, in net-new, non-linear, exponential ways that are fundamentally non-human. Otherwise you might as well just hire another intern.
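As a purely illustrative sketch of what “architected, not managed” can mean in practice (every function name and check below is a hypothetical stand-in): replace the human review cycle on every agent action with an automated verification gate, and escalate to a human only when the agent can’t satisfy the checks.

```python
def run_agent(task: str, feedback: str | None = None) -> str:
    # Stand-in for a real model or agent call.
    return f"draft for {task!r}" + (" (revised)" if feedback else "")

def verify(output: str) -> tuple[bool, str]:
    # Stand-in for programmatic checks: schema validation, policy rules,
    # regression tests, cross-source consistency, and so on.
    ok = "(revised)" in output
    return ok, "" if ok else "failed policy check; revise"

def execute(task: str, max_attempts: int = 3) -> str:
    feedback = None
    for _ in range(max_attempts):
        output = run_agent(task, feedback)
        ok, feedback = verify(output)
        if ok:
            return output                   # no human inside this loop
    return f"ESCALATED to a human: {task}"  # humans handle exceptions only

print(execute("reconcile Q3 invoices"))
```

Humans still define the checks and handle the escalations; they just stop being a toll booth on every single step.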
To rethink everything, to reimagine how work should be done in a machine context rather than a human one, is really, really hard. It’s not conducive to the way our biological brains work, it’s not anything we’re familiar with, and it flies in the face of everything we have indoctrinated ourselves with as rule and law. To shift the thinking, to truly see the real possibilities, a massive mental break has to occur, tearing apart the fabric of what we think we know, of what we believe to be true about work. Only then can the human brain begin to think in terms of what a machine can do, and how the work will evolve. This requires the confidence, and a lot of craziness, to do away with norms, best practices and the things that “have always been done that way”. It requires fresh eyes, it demands feeling uneasy, scared, and a little sick to your stomach. It requires freedom from the past.
2. AI won’t deliver remarkable results without remarkable thinking
The garbage-in, garbage-out rule still applies, but now the garbage isn’t just dirty data, it’s lazy human ambition. If your “remarkable thinking” begins and ends with “how do we do this faster or cheaper,” you’re signing up for mediocre results at best, and a catastrophic brand failure at worst. The magic of AI doesn’t come from the algorithm; it comes from the audacity of the business problem you choose to solve. You have to envision a world that is not incrementally better, but fundamentally different, and then work backward from that impossibility.
In fact, impossibility should be the starting point. If it was impossible two years ago, but now seems like it might be achievable, that’s a good place to start. If it was just too hard or too expensive, stop. You’re barking up the wrong tree. When we think about AI implementations today, if the plan centers around “automation” rather than “reasoning”, “anticipating”, “proactivity”, “inference”, “process elimination” or “opportunity creation”, you have failed before you even started. AI isn’t about doing things faster or more economically than humans, it’s about doing things humans can’t. If you’re not thinking that way, you’re not thinking hard enough.
3. You can’t make something truly magical without breaking everything
You are holding onto legacy platforms, technical debt, and decades-old operational processes because they’re comfortable. Because they require zero thought. Because they are safe. They represent the known quantity. They are familiar. But the true, mind-bending potential of AI is incompatible with incrementalism. You must be willing to shatter the current way of doing business, the very processes that define your company's identity and in many cases what made you successful in the first place, to build something AI-native in its place. If you are not prepared to break, rebuild, and completely change your operating structure, then you are not prepared for AI. Period.
Unfortunately, businesses today do not have the luxury of “not being ready.” Your competitors think they’re ready, and while they may be experiencing the same challenges, the same failures… what if they aren’t? What if they have broken down their thinking and their processes, and are rebuilding them in an AI-first way as you read this? What if they do, and you don’t? When looking at AI initiatives in your business, if there is anything at all that is considered off-limits, protected, or not to be toyed with, then exponential gains from AI are unlikely. Nothing can be safe from rethinking. From being challenged. From being laid bare for all its inefficiencies and legacy thinking and practices to be seen by all. Because successful AI implementation isn’t about doing the same things faster, cheaper, or more efficiently; it’s about exposing the failures built into institutionalized practices, systems and thinking, then removing those failures, or rewriting processes so they can never happen in the first place.
4. You have to question everything, even yourself
The single greatest enemy of AI transformation is institutional ego. Everyone, from the CEO to the frontline manager, must be willing to honestly question the validity of their own knowledge, their job, and every rule they've ever enforced. If the rule was designed to mitigate human error or accommodate human limitations, an agent doesn't need that rule. In fact, that rule is more than likely detrimental to AI implementation success. The minute you say, "This is how we've always done it," you have found the exact spot where your AI project will fail. The revolution starts when you admit that yesterday's expertise is today's ball and chain.
The truly difficult part is not questioning the processes, but questioning yourself. Once you truly begin to think about the possibilities, the capabilities, and what can change, something deeply painful happens: you question your own worth and value. Your self-image and what you feel “makes you… you”, at least professionally, will come into question. I speak from experience here: as AI has evolved, and as our understanding of how to best exploit it has expanded, much of what I have historically felt made me unique, or skilled, or talented in my work has come under direct fire. This is the deep, dark secret. AI is going to replace so much of what we all do, and will leave many of us as gutted and carved out in our professional self-perception as it will leave employment markets in general. However, if through exploring and embracing AI you begin to feel like your professional value, your 10,000 hours, your experience or your talent are mostly being replaced by reasoning models, fear not: you are on the verge of truly understanding how to use it. This isn’t a bad thing, it’s a required thing, necessary to cross the chasm of AI-first thinking.
5. AI is more about organizational and business transformation than it is about tech
The most successful AI projects aren't run by the IT department; they're led by the executive committee. This isn't a server upgrade; it's a structural overhaul of your entire company. Focusing on which LLM to use or whether to employ vector databases is just a distraction. The real work is defining new roles, dismantling fiefdoms, and rewriting the business rules to allow AI to flourish. If your AI initiative is just another line item in the tech budget, it will be treated as a cost center, not as the new core of your business.
While it needs to be led from the top, like anything that is a complete departure from the status quo, AI initiatives often require fresh, impartial, outside perspectives. Unbound by the “way things have always been done”, unshackled by established patterns, norms and processes, external resources are uniquely positioned to openly, curiously, and sincerely question everything, like a child devouring information as it maps its understanding of the world, asking “why?” over and over until the core of the issue is exposed and there are no more “whys” to ask. AI projects that begin with the practical “how” will never match those that start with the strategic “what” and “why”.
6. Most businesses, and people, can’t look honestly and vulnerably enough at themselves
To build a truly AI-native process, you must first document the current process, not as the beautiful, efficient system it should be, but as the convoluted, political, and emotionally charged mess it actually is. This requires a level of organizational vulnerability that most businesses are simply incapable of. They’d rather maintain the polite fiction of their streamlined operation than expose the years of accumulated workarounds, personal agendas, and irrational decisions that the AI will instantly expose and render obsolete.
The journey to truly transformative change is an emotional rollercoaster. And truth be told, that rollercoaster, especially the 90° drops, corkscrew inversions and upside-down loops, is a necessary part of the process for the people involved. It has to be scary, it has to make you question yourself and your choices, it has to make you face your fears, analyze them and sit with them. Real change will not come from AI; it will come from the people who questioned everything, who broke the status quo, who laid aside their own firmly held beliefs, and who were courageous enough to willingly and openly break themselves.
7. You might just build yourself and your colleagues out of a job, and that’s the point
Damned if you do, damned if you don’t: if your AI initiative is a failure, the responsibility rests on you, and may have negative ramifications for your career. If you’re successful, on the other hand, you might just implement yourself out of a job. In fact, to be successful, the obsolescence of your own job should be your goal.
Obviously, no one wants to architect their own demise. However, it’s the only way to empower yourself to envision what’s truly possible with AI, to understand where human judgement and direction will still be required, and to see how you, as a person and an employee, will have to evolve to thrive in the new world. At the end of the day, AI implementations should aim to free humans from the mundane, the linear, the repeatable, and the purely procedural, so they can focus on higher-level thinking and reasoning. If that’s not your approach, you’re not being ambitious enough to realize the gains you’re likely expecting.
8. Your collaborators don’t trust AI. They fear it.
Every piece of advice, direction and feedback you receive on AI implementations, both internally and externally, is filtered through the advisor's existential fear. The consultant, the manager, the competitor, they’re all just as worried about their futures as you are. That fear, whether conscious or unconscious, often results in strategies designed to contain, constrain, hamper, hobble or safely sandbox AI, ensuring it never really gets powerful enough to threaten their own status quo, value, and career.
To truly be successful, you must ignore the noise, question the motivation behind everything you read, see and hear, and focus solely on the machine’s potential, not on the human’s comfort. You must be courageous enough to push forward, and suspicious enough to consider that all input and feedback you receive may be intentionally or unintentionally trying to throw you off course. You must also be open to the real possibility that your teammates, while outwardly singing the praises of AI, may inwardly be working against you and your AI initiative every step of the way.
9. The human fear of being wrong is a far greater worry than the risk of actual failure
Corporate culture is built on minimizing risk and avoiding public error. AI, however, is a technology of rapid, exploratory iteration; it will fail, hallucinate, and make mistakes, often spectacular ones, on the road to a revolutionary breakthrough. The political cost of a single, highly visible failure is often deemed too high, leading to over-engineered guardrails and endless beta cycles that kill the project's velocity and its transformative power. The safe bet is replication, but the only path to truly magical outcomes is accepting the high-stakes risk of being fundamentally, and spectacularly, wrong on the first few tries.
Transformation will not come from the first build. It will take many, many builds just to figure out what you should in fact build. Iteration is key: the willingness to throw away work to try a different approach, and the acceptance that no two iterations will ever be quite the same. But today, those iterations can take minutes to hours, not weeks and months. So get used to being wrong. Embrace it. Never before has “failing fast” been so fundamentally true. And if you fail, it might just take an hour to start over from scratch. We need to change the narrative: it’s not bad to fail; failing is now a positive part of the process.
10. Companies are designing for replication, not revolution, and revolution is the only path forward
Most internal AI projects are merely an exercise in digital paleontology: they simply find, uncover, analyze, tweak and then automate existing human workflows, ultimately resulting in a meager 10% gain in speed for a process that should probably have been eliminated in the first place. While risk aversion is natural, the benefits, potential and opportunities that organizations are leaving on the table with this safe approach are inconceivable.
They’re using a supercomputer to build a slightly better horse and cart (not referencing the false Henry Ford quote here). True success requires starting with a blank canvas and asking, "What does this business process look like if humans were never involved in the first place?", and more importantly “is it actually even necessary?” If your AI is merely replicating a human's job description, you’ve just set yourself on a path to disappointment.
How do you make it work then?
To build your moat today, you have to be ruthless. You have to be willing to make yourself obsolete, you have to be willing to rewrite the business. That’s what your competitors are trying to do, that’s what someone in their basement or garage is trying to do. There’s no incremental path to exponential change.
This is the unshackling we have yet to truly see with AI. This is what we’ve been sold, but have yet to receive. This is the mind-bending leap that AI should be realizing. And it’s possible today, if approached the right way.
The AI-Native Blueprint is not a checklist of technical features; it is a declaration of a new operating philosophy.
The thoughts, insights and ideas within this article have emerged, grown and evolved through my own personal journey into AI and out the other side, through my company tension’s reimagining of how we leverage AI ourselves, and through working on AI-Native products and services for our clients. It’s never been easy, but it has been transformative for both myself and the business.
I hope that by laying out our perspective here, others currently on their own AI journey may benefit from what we’ve had to learn the hard, uncomfortable and painful way.
It’s very much a strange new world that we live in as 2026 begins. Let’s approach the new year with open eyes, courage, excitement and wonder.