
AI, Surveillance and Privacy

I’ve been thinking a lot about how fast AI surveillance is evolving: facial recognition, emotion detection, predictive policing… it’s all moving so quickly.

Governments and big tech companies say it’s for our safety or to make life more convenient, but honestly, I’m starting to feel like we’re giving up way more than we realize.

If AI can track where we go, what we do, even how we feel—where’s the line?

Are we gradually trading our privacy for convenience without fully understanding the consequences? Or is this just the new normal in a digital world?

Would love to hear how others are thinking about this.

  • You’ve raised an important point, and I really appreciate the depth of your concern. Since I’ve been working in many of the areas you mentioned for several years, I can definitely relate, and you’re absolutely right to question the balance between safety and convenience on one side and privacy and autonomy on the other. Let me answer your question in three areas:

    1. On Privacy and Regulation
    In many countries, there are local regulations in place to manage the privacy aspects of facial recognition and other AI-driven video analytics. While concerns are valid, it’s worth noting that:

    Governments often manage databases containing “allowed,” “blocked,” or “wanted” individuals.

    These systems existed even before the rise of AI and were used by police, border control, airports, and ports to reduce crime and improve security.

    AI has simply accelerated these existing systems and made them more efficient (a short sketch below illustrates the matching step).

    But as you rightly pointed out, it’s not just about what AI can do; it’s about who is using it, how it’s being used, and what safeguards exist.
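    To make the point about AI accelerating existing systems concrete, here is a minimal sketch of the watchlist matching step once faces have been reduced to embedding vectors. Everything in it is an illustrative assumption (the 128-dimensional embeddings, the 0.6 similarity threshold, the subject IDs); it is not any real deployment’s code.

    ```python
    # Hypothetical sketch of AI-assisted watchlist matching.
    # Embedding size, threshold, and subject IDs are illustrative assumptions.
    import numpy as np

    # Pre-computed face embeddings for a small "wanted" watchlist,
    # normalized to unit length so a dot product is cosine similarity.
    WATCHLIST = {
        "subject_001": np.random.randn(128),
        "subject_002": np.random.randn(128),
    }
    WATCHLIST = {k: v / np.linalg.norm(v) for k, v in WATCHLIST.items()}

    def match_against_watchlist(probe, threshold=0.6):
        """Return the best watchlist match if its similarity clears the threshold."""
        probe = probe / np.linalg.norm(probe)
        best_id, best_score = None, -1.0
        for subject_id, ref in WATCHLIST.items():
            score = float(np.dot(probe, ref))  # cosine similarity of unit vectors
            if score > best_score:
                best_id, best_score = subject_id, score
        return (best_id, best_score) if best_score >= threshold else (None, best_score)

    # A real system would get the probe embedding from a face-recognition model;
    # here we simulate one with random noise.
    print(match_against_watchlist(np.random.randn(128)))
    ```

    The watchlist itself predates AI; what the model changes is how quickly and cheaply every camera frame can be compared against it.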

    2. On Data Control
    Another crucial dimension is data ownership and control:

    Who owns the data?

    Who gets to analyze it?

    Who profits from it?

    And who has the authority to make decisions based on it?

    These are ongoing ethical and legal challenges that deserve close scrutiny.

    3. Beyond Security: Broader Applications
    It’s also important to recognize that AI surveillance technologies have applications beyond just security let me give you examples:

    In retail and operations, video analytics can help determine which products customers are engaging with, not necessarily to monitor individuals but to optimize layout and service (a short sketch follows these examples).

    In the healthcare sector, especially during the pandemic, thermal cameras played a vital role in detecting patients who were showing symptoms and might need urgent medical attention.

    I worked on a project where behavior analysis played a key role in catching a fire in a car inside a large mall early, stopping the situation before it escalated into a major crisis.
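
    Since the retail example is the easiest to picture, here is a minimal sketch of that kind of aggregate analytics: anonymous track IDs moving through shelf zones, with only per-zone dwell totals kept. The detection format, zone coordinates, and sampling interval are illustrative assumptions, not a specific product’s pipeline.

    ```python
    # Hypothetical sketch of privacy-preserving retail dwell-time analytics.
    # Detections, zones, and the sampling interval are illustrative assumptions.
    from collections import defaultdict

    # Each detection: (timestamp_seconds, anonymous_track_id, x, y) in floor coordinates.
    detections = [
        (0.0, "t1", 2.1, 0.5), (1.0, "t1", 2.3, 0.6),
        (0.0, "t2", 7.8, 3.2), (1.0, "t2", 7.9, 3.1), (2.0, "t2", 8.0, 3.0),
    ]

    # Shelf zones as axis-aligned boxes: (x_min, y_min, x_max, y_max).
    zones = {"snacks": (0, 0, 4, 2), "electronics": (6, 2, 10, 5)}

    FRAME_INTERVAL = 1.0  # seconds between sampled frames

    # Accumulate presence time per zone; identities never enter the result.
    dwell = defaultdict(float)
    for _, _, x, y in detections:
        for name, (x0, y0, x1, y1) in zones.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                dwell[name] += FRAME_INTERVAL

    print(dict(dwell))  # {'snacks': 2.0, 'electronics': 3.0}
    ```

    The privacy-relevant design choice is what leaves the loop: only zone totals are retained, not tracks or faces, which is the difference between measuring engagement and monitoring individuals.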


    I hope this answers your question.

  • Thanks for laying out such a thoughtful perspective. It’s clear you’ve got a lot of experience to draw from, and I’m glad to dig into this with you. You’ve hit on some critical tensions here: safety versus privacy, efficiency versus autonomy, and the broader implications of who’s holding the reins on these systems.
    1. Privacy Over Efficiency: While AI enhances security systems, the loss of personal privacy often outweighs the benefits, as individuals have little control over how their data is collected or used, even with regulations in place.
    2. Regulation Gaps: Local regulations may exist, but they’re inconsistent globally and frequently fail to keep pace with AI advancements, leaving significant loopholes for misuse by governments or corporations.
    3. Data Ownership Ambiguity: The question of who owns and controls data remains unresolved, with individuals rarely having a say, while powerful entities exploit this lack of clarity for profit or surveillance without consent.
    4. Mission Creep Risk: Broader applications like retail analytics or healthcare monitoring sound beneficial, but they normalize surveillance creep, where systems built for one purpose (e.g., safety) get repurposed for invasive tracking or profiling.
    5. Accountability Weakness: Even with safeguards, the “who” and “how” of AI use often lack transparency, and those in control face little accountability, which undermines trust in these systems regardless of their intended benefits.