Planes grounded as mass worldwide IT outage hits airlines, media and banks

Parents
  • If I were being slightly facetious, I would question whether they have any Linux or Apple OS customers.

    On the presumption that almost everything runs Windows (which wouldn't be entirely accurate - the vast majority of serious back-end stuff runs anything but windoze, usually some Unix derivative) ... or because other OSs don't seem to attract quite as much attention from the black hats?

    And the doubtful wisdom of all the updates being installed remotely as soon as they become available, rather than only on some machines, then waiting a day to make sure it works, to do the rest.

    Absolutely - a few years ago, when one of my responsibilities was to apply updates to a number of servers, we'd do all the internal/testing ones first, then wait a day or two before applying the same to production servers (and of course things could be sped up if there was a particularly nasty vulnerability). Seemed like an obvious precaution at the time, but an option that seems to have got lost in the current world of fully automatic updates. A simple configuration to delay applying updates for a set number of hours (set longer on more critical boxes) might be a useful option.

      - Andy.
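The staged-rollout idea above can be sketched as a small policy check. This is a minimal illustration, not any vendor's actual mechanism: the tier names, soak periods, and the `urgent` fast path are all invented for the example.

```rust
use std::time::{Duration, SystemTime};

// Hypothetical rollout tiers: testing boxes take updates immediately,
// internal servers wait a day, production waits longer still.
#[derive(Clone, Copy)]
enum Tier {
    Testing,    // apply immediately
    Internal,   // wait 24 hours
    Production, // wait 72 hours
}

fn soak_period(tier: Tier) -> Duration {
    let hours: u64 = match tier {
        Tier::Testing => 0,
        Tier::Internal => 24,
        Tier::Production => 72,
    };
    Duration::from_secs(hours * 3600)
}

/// Returns true if an update published at `released` may be applied on a
/// box of the given tier at time `now`. The `urgent` flag models the
/// "particularly nasty vulnerability" exception mentioned above.
fn may_apply(tier: Tier, released: SystemTime, now: SystemTime, urgent: bool) -> bool {
    if urgent {
        return true;
    }
    match now.duration_since(released) {
        Ok(age) => age >= soak_period(tier),
        Err(_) => false, // release timestamp is in the future: don't apply
    }
}

fn main() {
    let released = SystemTime::now();
    let one_day_later = released + Duration::from_secs(24 * 3600);

    // Testing picks the update up at once; production keeps waiting.
    assert!(may_apply(Tier::Testing, released, released, false));
    assert!(may_apply(Tier::Internal, released, one_day_later, false));
    assert!(!may_apply(Tier::Production, released, one_day_later, false));
    // Urgent fixes bypass the soak period everywhere.
    assert!(may_apply(Tier::Production, released, released, true));
}
```

The point of the sketch is that the delay is a per-machine policy, so a fleet naturally staggers itself: a bad update hits the testing tier first and can be pulled before production ever sees it.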

Children
  • Much of the chat in the other forums I read is that Linux doesn't need such software because it's secure by design. This, of course, isn't true; Linux can be compromised in similar ways. But the fractured nature of Linux means it's probably a more complex target for hackers, and the fact that it's less common among end users (who are an organisation's greatest vulnerability) means there is less value in attacking it.

    However, last time I checked, approximately half of the world's web servers run on a form of Linux or similar. A lot of the routers offered by ISPs run on something based around the Linux kernel as well. So in terms of damaging infrastructure, Linux is absolutely a target.

    The information I'm now seeing is that it was a null pointer dereference, and the discussions seem to have moved on to whether it would have been avoided if they had been using Rust (or another language with inherent memory safety) instead of C/C++.
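    For what the Rust argument amounts to: in C/C++, dereferencing a null pointer is undefined behaviour and typically crashes the process, whereas Rust has no null references at all — a possibly-absent value must be an `Option`, and the compiler refuses to let you touch the contents without handling the `None` case. A minimal sketch (the `Config`/`load_config` names are invented for illustration, not taken from any real driver):

    ```rust
    // A config structure standing in for whatever the faulty update parsed.
    struct Config {
        channel_count: u32,
    }

    // A loader that may fail. The possible absence is visible in the return
    // type instead of being hidden behind a pointer that might be null.
    fn load_config(valid: bool) -> Option<Config> {
        if valid {
            Some(Config { channel_count: 8 })
        } else {
            None
        }
    }

    fn channel_count(cfg: Option<Config>) -> u32 {
        // There is no way to "just dereference" here: the match forces us
        // to decide what happens when the config is missing.
        match cfg {
            Some(c) => c.channel_count,
            None => 0, // degrade gracefully instead of faulting
        }
    }

    fn main() {
        assert_eq!(channel_count(load_config(true)), 8);
        assert_eq!(channel_count(load_config(false)), 0); // bad input, no crash
    }
    ```

    That only rules out this particular class of bug, of course; a logic error that ships to every machine at once would still need the staged-rollout discipline discussed above.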