Are We Relying Too Much on AI and Losing Our Own Thinking?

As we move deeper into the AI age, I’ve been reflecting on how much we rely on these systems in our daily work and decision-making.

AI is incredibly powerful: it helps us analyze faster, automate tasks, and even generate ideas. But here’s the question I keep coming back to:

Are we starting to rely on AI so much that we’re slowly sidelining our own thinking, instincts, and engineering judgment?

In engineering especially, critical thinking, intuition built from experience, and problem-solving ability have always been our core strengths. If AI begins to take over too much of that process, are we at risk of weakening those skills over time?

I am not against AI. Far from it. I see it as a tool, a powerful assistant. But I feel the key is balance: using AI to enhance our thinking, not replace it.

Maybe the real challenge isn’t adopting AI; it’s learning how to use it without losing what makes us engineers in the first place.

Curious to hear others’ thoughts:

  • Where do you draw the line between assistance and over-reliance?
  • Have you noticed changes in how you approach problem-solving since using AI tools?

Let’s discuss.

  • I don't think there's any issue with using the likes of ChatGPT, Grok, Claude, Gemini, etc., to help with learning, but the problem lies with what people perceive AI chat models are capable of.

    Large language models are capable of producing very eloquent descriptions of things; they can break down very complex documents into easy-to-understand summaries, convert plain writing into infographics, and help people who don't have great writing skills communicate effectively.

    However, what they're not reliably good at is giving the correct answer. They're far better than your average pub quiz team, but I certainly wouldn't trust them with answering anything that matters for safety and wellbeing.

