"AI is not a bolt-on. It’s not something you just buy, bring into the organization, and say 'check-box, I’ve done my AI.' It is a fundamental transformation that must be led from the top."
In a recent appearance on the Beetroot Podcast, smartR AI™ Founder and CEO Dr. Oliver King-Smith sat down with Sebastian Streiffert to discuss a sobering statistic: nearly 95% of AI initiatives fail to deliver results, and why.
The conversation spanned from the early days of AlexNet in 2012 to the modern "battle of the models" between US and Chinese architectures. Oliver shared his "right-to-left" philosophy on implementation and why the secret to AI success isn't just better code—it’s an engineering mindset.
Key Takeaways from the Interview:
1. The "IT Trap"
Oliver notes that many companies fail because leadership treats AI as an IT project. "IT often doesn't have the authority or the overview of the organization to understand how and where to apply AI," he explains. Successful AI transformation must be driven by senior management who understand the business goals first, then look for the AI solution—a process Oliver calls "Right-to-Left Thinking."
2. Engineering vs. Programming
Coming from an engineering background, Oliver argues that AI development is more like building a physical pipeline than writing traditional software. "In computer science, you write code, debug it, and it works. In AI, you have tools you must tweak and put together to get the result you want." This is why smartR AI™ hires for an engineering mindset to manage the inherent uncertainty of machine learning models.
3. The "Clean Data" Myth
A common mistake is spending years "cleaning data" before starting a project. Oliver suggests that data cleaning is only useful once you know exactly what problem you are trying to solve. He recommends starting with a validation test set to benchmark progress immediately, rather than waiting for "perfect" data that may never arrive.
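The "validation set first" idea can be sketched in a few lines. This is not Oliver's actual workflow, just an illustrative toy: the dataset, the labeling rule, and both predictors are invented here to show the pattern of freezing a validation set and benchmarking a baseline before any data cleaning happens.

```python
import random

random.seed(0)

# Raw, "messy" labelled examples: (feature, label) pairs.
# The rule "label is True when x > 50" is an invented stand-in for a real task.
data = [(x, x > 50) for x in random.sample(range(100), 40)]

# Step 1: carve out a fixed validation set FIRST, before any cleaning.
val_set = data[:10]
train_set = data[10:]

def accuracy(predict, examples):
    """Fraction of examples the predictor gets right."""
    return sum(predict(x) == y for x, y in examples) / len(examples)

# Step 2: benchmark a trivial baseline immediately, so there is a number
# to beat from day one.
baseline = lambda x: True  # always predicts the positive class
baseline_acc = accuracy(baseline, val_set)

# Step 3: every later model, trained on cleaned data or not, is judged
# against the same frozen validation set.
simple_model = lambda x: x > 50
model_acc = accuracy(simple_model, val_set)
```

The point of the pattern is that cleaning effort becomes measurable: if scrubbing a field does not move the validation metric, that cleaning work was not needed for this problem.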
4. Privacy, Security, and Private LLMs
For high-stakes industries like Aerospace and Defense, public models aren't always an option. Oliver discusses the necessity of Private AI ecosystems—running smaller, fine-tuned models on private hardware or air-gapped clouds to ensure ITAR/NIST compliance and data sovereignty.
The "Zebra Problem" of Modern AI
The interview concludes with a deep dive into how Oliver tests the world's most advanced models, including his use of "Zebra Problems" (logic puzzles like Einstein’s Riddle) to see which models can actually think rather than just predict the next word.
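Zebra problems are classic constraint-satisfaction puzzles, which is what makes them useful probes: the answer cannot be pattern-matched and must be deduced. As a hedged sketch of the puzzle type (the three-house puzzle and its clues below are invented for illustration, not one of Oliver's actual test prompts), a brute-force solver looks like this:

```python
from itertools import permutations

NATIONS = ("Brit", "Dane", "Swede")
DRINKS = ("tea", "coffee", "milk")

def solve():
    """Enumerate every assignment of people and drinks to houses 0..2
    and keep only those satisfying all four clues."""
    solutions = []
    for nations in permutations(NATIONS):    # nations[i] lives in house i
        for drinks in permutations(DRINKS):  # drinks[i] is drunk in house i
            if nations[1] != "Brit":         # clue 1: the Brit lives in the middle house
                continue
            if drinks[1] != "milk":          # clue 2: milk is drunk in the middle house
                continue
            if drinks[nations.index("Dane")] != "tea":  # clue 3: the Dane drinks tea
                continue
            if drinks[2] != "coffee":        # clue 4: coffee is drunk in the last house
                continue
            solutions.append((nations, drinks))
    return solutions

sols = solve()
# These clues pin down exactly one arrangement; asking a model
# "who drinks coffee?" forces it to chain the deductions rather
# than retrieve a memorized answer.
```

Full-size zebra problems (five houses, five attributes, as in Einstein's Riddle) scale the same idea up to the point where next-word prediction alone reliably fails.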
"We’ve got this technology now where we understand the 'how' but not always the 'why.' That’s a unique challenge for any organization bringing AI into their workflow."
Listen to the Full Interview
Want to hear the full discussion on the geopolitics of LLMs and how to avoid the "hallucination trap"?