Digital ID Cards in the UK

It is being reported in the news this morning that the government is planning to introduce mandatory Digital ID cards, initially for a "right to work" purpose.

Forgoing the many legal and civil arguments for and against this, I wondered whether people in the IET had opinions on the technical aspects.

Personally, I am against what I have heard so far. There are no details on implementation yet; with schemes like this, those usually do not arrive until implementation, long after parliamentary debate is over, so public debate cannot wait for full detail.

My main worry is that they have framed it as based around the smartphone, saying it will be "like a bank card" (Lisa Nandy on Today). This seems (from admittedly vague and unsure descriptions by not-very-tech-savvy MPs) to lock us into the duopoly of smartphone OSes, Apple's iOS and Google/Alphabet's Android. Neither is open source, and both are utterly controlled by businesses in the US. Obviously there are social concerns around forcing mandatory ID onto smartphones: it makes smartphones mandatory, for one thing, despite the other worries about their effects (the same government is looking to ban them in schools!).

But technically, how long will they be supported? How secure will they be? I suspect they will be very secure, but support will become expensive and tail-chasing after a while (5-10 years). Making the system web-based would be less secure and more open to abuse like DDoS attacks, but it would unlink the system from the operating systems. I doubt there will be any other variants, such as a Linux-based way of providing your ID.
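No details of the scheme's architecture have been published, so purely as a sketch of what an OS-neutral design could look like: the cryptographic core of a signed digital credential need not depend on iOS or Android at all. Everything below is hypothetical (the key names and credential fields are my invention, not anything announced); it uses the Ed25519 support in Python's third-party `cryptography` package:

```python
# Sketch only: the real scheme's design is unpublished. This shows the
# platform-neutral core of a signed digital credential - nothing here
# depends on a phone OS, only on standard public-key cryptography.
# Requires: pip install cryptography
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Hypothetical issuer (e.g. a government identity service) key pair.
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()

# A minimal, made-up "right to work" credential as the issuer might sign it.
credential = json.dumps({
    "subject": "example person",
    "right_to_work": True,
    "expires": "2030-01-01",
}, sort_keys=True).encode()

signature = issuer_key.sign(credential)

# Any verifier, on any OS (including Linux), can check the credential
# offline using only the issuer's public key.
try:
    issuer_public_key.verify(signature, credential)
    print("credential signature valid")
except InvalidSignature:
    print("credential rejected")
```

The hard parts in practice - revocation, binding the credential to the holder, selective disclosure - are exactly where wallet apps, and therefore the two smartphone OSes, tend to get baked in; but nothing in the verification step itself requires them.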

I am OK with many functions on my smartphone because they are optional: things I chose to do for convenience's sake, like banking and email. I do not think this is the same. The mandatory nature should come with other support, be that physical cards for those who want to move away from these devices or businesses, or some other way to ensure that we are not destroying technical freedoms and future innovation by tying our entire society to two smartphone makers who already have immense influence and control, and over whom the state has no sway.

What are other people's thoughts? Are there other technical issues that concern you (forgery, data breaches, verification)?

  • To me one of the healthy things about this thread is that there are engineers here (and I'd be one of them) who are prepared to admit that engineering doesn't always work. My day job is in systems assurance for safety-critical systems, and I can almost guarantee that in any new project which involves software, at some point an engineer will say the dreaded words "the software won't let that happen". Well, years and years of experience have taught us that software will let "that" (whatever unwanted behaviour it is) happen. Even if we throw huge amounts of time and effort and money at it - as we do with safety-critical software, for example - there's still a finite probability that it will fail. And that's fine, provided we plan for that (a rough numerical sketch after this reply illustrates the scale).

    Horizon was, of course, a very high-profile and appalling case of this.

    Of course the problem is that no company tendering for a major government contract believes it can admit that its software has a finite probability of failure or error. And there does seem to be a lack of informed buying: the ability to ask searching questions about how that risk is being managed, and to push back against any supplier who refuses to accept that the risk exists.

    So the reason for this post is to pose the thought: can we do more to support the engineering community in being prepared to say "we've done the best we can, but we must make sure there are mitigations should the system fail"? I've certainly found myself - fortunately the last time was many years ago now - having to push back hard against company commercial teams who wanted our engineering concerns brushed under the carpet, and certainly not aired to the end client. It's a tough position to be in.

    And can we do more to spread the word that excellent engineering is not about assuming our engineering will work, but about assuming it won't - because only then will we both try to make it better and properly consider the consequences should it fail?

    As a footnote, it's interesting being an Independent Safety Assessor: you meet two types of client. There are those who say "we think we've done everything we can, but it would be reassuring if you could look at the project as well, in case we've missed anything". Those are the projects where you rarely find anything missed. Then there are those who say "here's the evidence, it's all tested and working, we need your report next week, don't be late". Those are the projects which tend to have really, really scary things missed in them...

    Thanks,

    Andy
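
To put a rough number on the "finite probability that it will fail" point in the reply above: even a tiny per-check error rate becomes a steady stream of wrong answers at national scale, which is why mitigations matter. The figures below are entirely hypothetical, chosen only to illustrate the arithmetic:

```python
# Back-of-the-envelope sketch; both numbers are assumptions, not real data.
per_check_error = 1e-6       # assumed chance that any single ID check goes wrong
checks_per_day = 5_000_000   # assumed national daily volume of checks

# Expected number of wrong results per day (linearity of expectation),
# and the probability of at least one wrong result in a day.
expected_errors = per_check_error * checks_per_day
p_at_least_one = 1 - (1 - per_check_error) ** checks_per_day

print(f"expected wrong results per day: {expected_errors:.1f}")  # 5.0
print(f"P(at least one per day):        {p_at_least_one:.4f}")   # ~0.9933
```

At those assumed rates, the question is not whether the system will ever be wrong, but what the redress path looks like for the several people a day it is wrong about - which is the Horizon lesson.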
