4 minute read time.

Welcome to the second instalment in our blog series exploring the real-world implications of Artificial Intelligence (AI) in business, inspired by the IET AI Technical Network’s insightful podcast discussions. This week, hosts Kirsten McCormick, Phil Clayson, and Andrew Williams return to explore the motivations driving AI adoption, the ethical questions surrounding its use, and the role open source plays in shaping the future of AI deployment.

Watch the Podcast below!

Key Highlights

What’s motivating AI adoption? Is it genuine innovation… or badge-slapping?

Kirsten opens with a candid reflection on how some organisations rush to adopt AI not for strategic advantage, but for image. “You start to see companies slapping the badge of AI onto their products,” she says, recalling a noticeable trend around 2019 when AI branding surged—often without real AI underpinning the product.

Andrew agrees, noting that “a lot of companies seem to be wanting to just get this artificial intelligence badge.” As the AI label becomes increasingly broad, it can blur the lines between advanced machine learning and basic rule-based automation. This can mislead both customers and internal stakeholders.

Ethical AI or unchecked ambition?

Phil raises an essential question: what happens when AI is adopted without proper safety or ethical oversight? Kirsten, coming from a defence background, stresses that safety is non-negotiable in her industry—but that’s not always the case elsewhere. “If you're claiming it to be something that it's not, where is the guidance for that grey area?” she asks. With varying global ethical standards and a lack of alignment on definitions, many businesses risk navigating AI adoption without clear ethical footing.

Andrew adds a pointed example: the misuse of terminology around predictive maintenance. “A lot of companies say they carry out predictive maintenance... but when you drill into the detail, it's just anomaly detection,” he explains. Anomaly detection is valuable, but conflating it with forecasting can mislead buyers and pose operational risks.
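To make Andrew's distinction concrete, here is a minimal, hypothetical sketch in Python (all data, thresholds, and numbers are invented for illustration): anomaly detection flags that something looks unusual now, whereas predictive maintenance forecasts when a failure limit will be reached.

```python
import numpy as np

# Hypothetical sensor readings with a slow upward wear trend (values invented).
rng = np.random.default_rng(seed=42)
readings = rng.normal(loc=1.0, scale=0.05, size=100) + np.linspace(0.0, 0.5, 100)

# Anomaly detection answers: "is something unusual happening NOW?"
mean, std = readings.mean(), readings.std()
is_anomaly = np.abs(readings - mean) > 2 * std
print("Anomalous samples:", int(is_anomaly.sum()))

# Predictive maintenance answers: "WHEN will the failure threshold be crossed?"
FAILURE_THRESHOLD = 2.0  # hypothetical limit from the equipment's specification
t = np.arange(len(readings))
slope, intercept = np.polyfit(t, readings, deg=1)  # fit the wear trend
steps_left = (FAILURE_THRESHOLD - readings[-1]) / slope if slope > 0 else float("inf")
print(f"Estimated samples until failure: {steps_left:.0f}")
```

The anomaly check only compares each reading against the norm; the forecast extrapolates the wear trend towards a failure threshold. That extra forecasting step is what the "predictive" label implies, and it is exactly what is missing when the term is misapplied.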

The open-source dilemma: innovation vs. risk

Open-source AI is fuelling innovation—but with freedom comes responsibility. Kirsten highlights how organisations often adopt open-source tools without fully understanding the underlying risks. “Are you including a vulnerability into your products that you’ve not understood?” she asks, citing risks like data poisoning during model training.

Andrew supports this view, noting that using open-source models still requires customisation and due diligence. “You can’t just take them at face value,” he says. Pre-trained models, including GPT-based tools, need careful adaptation to serve business needs securely and effectively.
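As a small illustration of that due diligence, the sketch below shows one basic supply-chain safeguard: verifying a downloaded open-source model artefact against its publisher's checksum before loading it. The file name and hash here are hypothetical; in practice the checksum would come from the model publisher's release notes or registry.

```python
import hashlib
from pathlib import Path

# Hypothetical artefact and published checksum (both invented for illustration).
MODEL_PATH = Path("models/open_source_model.bin")
PUBLISHED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large model files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of(MODEL_PATH) != PUBLISHED_SHA256:
    raise RuntimeError("Model does not match its published checksum; do not load it.")
```

Checksum verification is only a first step, of course: it confirms you have the file the publisher released, not that the model itself is free of issues such as poisoned training data.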

Where should organisations really start with AI?

The discussion closes with a key question from Phil: where should execs really begin if they want to drive meaningful AI innovation? Kirsten emphasises the need for clarity and education. “Execs don’t have time to understand every tech area deeply,” she says. It’s up to technical leaders to bridge the gap and ensure AI isn’t misunderstood or misapplied.

Andrew adds that ethical and societal impact should be part of the conversation from day one—not an afterthought. The right starting point varies by organisation, but it must be grounded in real business value and a clear-eyed view of the technology.

Key Takeaways

  • Avoid badge-slapping: Don’t label products as “AI-powered” unless they genuinely are. Clear communication builds trust.

  • Prioritise ethics and safety: Understand the form of AI being used and apply appropriate governance.

  • Open-source is powerful—but risky: Use it wisely, with thorough vetting and adaptation.

  • Educate from the ground up: Execs must be briefed clearly to make informed strategic decisions.

  • Start where value aligns with purpose: Whether customer service or societal impact, pick use cases that matter—and build from there.

Join the Conversation

1. Have you encountered “AI badge-slapping” in your industry?
2. How does your organisation balance innovation with ethical responsibility?
3. What safeguards do you have in place when using open-source AI tools?
4. Where do you think most businesses should start with AI deployment?

We’d love to hear your insights. Share your thoughts in the comments below!

About the Speakers

Kirsten McCormick is the Chair of the IET AI Technical Network and the AI lead at General Dynamics Mission Systems in Hastings. She also serves as a senior systems engineer, bringing a wealth of experience in AI and defence technologies.

Phil Clayson is a Chief Technology Officer (CTO) in the tech industry, with extensive experience in the gaming sector. He has been an IET volunteer for many years and is passionate about leveraging AI to improve business operations.

Andrew Williams is the Innovation and Data Director at LoweConex and the Vice Chair of the IET AI Technical Network. He focuses on the impact of AI in a wide range of contexts and is dedicated to driving innovation through data and AI technologies.