How does the choice of software engineering methodology impact the adaptability and responsiveness of a development team in the face of changing project requirements?

The choice of software engineering methodology directly impacts how effectively a development team can adapt to changing project requirements. Agile methodologies, such as Scrum or Kanban, enable quick adjustments and continuous collaboration, enhancing adaptability and responsiveness. Conversely, traditional methodologies like Waterfall may hinder the team's ability to respond promptly to evolving project needs, potentially causing delays or inefficiencies.

  • There is a significant issue here: you have assumed that waterfall and agile are mutually exclusive. This isn't correct. Agile relies on a set of independent tasks, but something has to have determined the rough scope so that you can do the agile assessment. That is essentially a little bit of waterfall - just enough so you can get started.

    Even in an agile sprint, you are going to be refining those requirements so you can implement them, implementing them, and then testing what you have done. That is a mini-waterfall in itself.

    So there is actually nothing that stops you from taking concepts from both, but it has to be done for the right reasons, being careful to avoid some kind of wagile approach that combines the worst of both.

    The other thing hinted at by the original question is that waterfall is a one-hit approach. It isn't. I can't think of any major program I've ever worked on that was a one-hit; all of them had phases accounting for iterative and incremental growth of the functionality. Iterative and incremental development are also key aspects of agile.

    However, change can be an issue irrespective of which approach you take. The real challenge is knowing the impact of that change, and that is all about traceability. A minor change may only result in the requirement, code and test being changed, while a more significant change, such as changing the microcontroller in a product during development, can have such a significant impact across all areas that you might have to start many things from scratch (been there, got the t-shirt). A minimal sketch of this kind of impact analysis follows at the end of this discussion.

    On those "agile" projects I have worked on, the common issue was that pure agile assumes everything is independent. For complex projects this is rarely the case, and as any Systems Engineer will tell you, there are often unexpected emergent properties of a system. Hardware is ultimately finite (especially in embedded systems).

    So, my answer to the original question would be: the choice of methodology may seem to impact responsiveness to change, but ultimately how fast you can change is the wrong thing to measure. What you should be measuring is how well you are meeting the time, cost and quality targets of the project. None of the methodologies by themselves gives you that answer.

    Mark

  • "So there is actually nothing that stops you from taking concepts from both, but it has to be done for the right reasons, being careful to avoid some kind of wagile approach that combines the worst of both."

    The latest railway standard for safety-related software (EN50716, which replaces EN50128) has a very sensible and pragmatic informative section on the use of iterative lifecycle models, in a world which has - at least according to the standards - traditionally been very waterfall. That is good, and a relief: for years (30+ in my case) we as an industry have been pretending that we develop safety-critical software (and, in fact, systems and hardware) in a waterfall fashion, whereas in practice real R&D rarely works that way. Similarly, we have occasions where we want to bring in software developed for other applications, which simply won't have been developed waterfall-style. (I'm leaving aside COTS and, even worse, SOUP, which are a whole other problem - I'm just thinking about software where we do know how it was developed.)

    So it seems that the standards are, belatedly, catching up with your comment, which I totally agree with. Don't get too excited - it doesn't go into huge detail - but it's a good start.

    "...but ultimately how fast you can change is the wrong thing to measure. What you should be measuring is how well you are meeting the time, cost and quality targets of the project."

    Even in the high-reliability / safety-critical world I'd put it slightly differently: these are all KPIs, i.e. responsiveness to change should be measured alongside meeting time, cost and quality targets. I am coming at this from a rail industry point of view, where traditionally our responsiveness to change, including the adoption of new technologies, has been so slow and risk-averse that it could be argued that opportunities for improving quality, reliability and safety have actually been missed. But equally I don't like the approach of just launching test rockets until they stop blowing up... as you suggest, it is possible to combine innovative development techniques with sound engineering practice. A small sketch of such a KPI set also follows at the end of this discussion.

    Cheers,

    Andy
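
To make Mark's traceability point concrete, here is a minimal sketch of change impact analysis over a directed graph of project artifacts. It is only an illustration: the graph model, the link() and impact() helpers, and all artifact names (REQ-001, HW-MCU, and so on) are hypothetical, not any particular tool's approach.

    # Minimal sketch of requirements traceability for change impact analysis.
    # All artifact names and links below are hypothetical.
    from collections import defaultdict

    trace = defaultdict(set)  # artifact -> artifacts derived from it

    def link(upstream: str, downstream: str) -> None:
        """Record that `downstream` depends on `upstream`."""
        trace[upstream].add(downstream)

    def impact(changed: str) -> set:
        """Return every artifact transitively affected by changing `changed`."""
        affected, stack = set(), [changed]
        while stack:
            for dep in trace[stack.pop()]:
                if dep not in affected:
                    affected.add(dep)
                    stack.append(dep)
        return affected

    # A requirement realised through design, code and test...
    link("REQ-001", "DES-ADC")
    link("DES-ADC", "src/adc.c")
    link("src/adc.c", "TEST-ADC-01")
    # ...and a hardware choice that constrains several design elements.
    link("HW-MCU", "DES-ADC")
    link("HW-MCU", "DES-UART")
    link("DES-UART", "src/uart.c")
    link("src/uart.c", "TEST-UART-01")

    print(impact("REQ-001"))  # minor change: 3 downstream artifacts
    print(impact("HW-MCU"))   # microcontroller swap: 6 downstream artifacts

Running this shows the asymmetry Mark describes: the requirement tweak touches one requirement-code-test chain, while the microcontroller swap ripples through every design, code and test artifact downstream of it.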
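
Picking up Andy's point that responsiveness to change belongs alongside time, cost and quality, here is a small sketch of tracking a KPI set against targets. The metric names, targets and figures are all hypothetical, chosen only to illustrate the idea; responsiveness appears simply as a measured change lead time.

    # Minimal sketch of project KPIs tracked against targets; responsiveness
    # to change is one KPI alongside time, cost and quality.
    # All metric names and figures below are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class KPI:
        name: str
        target: float
        actual: float
        higher_is_better: bool = False

        def on_track(self) -> bool:
            if self.higher_is_better:
                return self.actual >= self.target
            return self.actual <= self.target

    kpis = [
        KPI("schedule slip (weeks)", target=2.0, actual=1.0),
        KPI("cost overrun (%)", target=5.0, actual=7.5),
        KPI("open defects at release", target=10, actual=4),
        KPI("median change lead time (days)", target=14.0, actual=9.0),
    ]

    for k in kpis:
        print(("OK  " if k.on_track() else "MISS"), k.name,
              f"actual={k.actual} target={k.target}")

The design point is simply that responsiveness becomes a measured quantity with a target of its own, rather than a property attributed to the methodology, which is the thrust of both replies.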
