Requirement Structuring

A coffee break question?

I have a legacy requirements set; the requirements have been structured in alignment with disciplines (software, hardware, mechanical, etc.). The project work breakdown structure has a similar pattern. It should be noted that there are multiple parts that come together to form the whole system, and those parts are not necessarily the product of a single team.

The issues I see here are: 1) it risks siloed working; 2) it's not aligned to the integration of the system components; 3) I'm pretty sure there are missing requirements relating to interfaces and integration.

My preference would be to structure along a more logical/functional methodology. However, I need to get buy-in for the change.

I won't be able to actually change the legacy system, but the intent is to reuse and leverage the requirements for future projects, and I want to set up that structure. I also doubt I'll be able to change the Work Breakdown Structure.

I would be interested in views on the matter, and on the pros and cons of the various approaches.

Assume this is a reasonably complex product with a multidisciplinary team working on it.

Many Thanks in Advance

Mark

  • Hi Mark,

    I'd totally agree with Simon: assuming sw, hw and mech are all implemented by individual teams, each needs its own subset of requirements. And a single system-level requirement may result in a tree (or perhaps better, a root system) of different sw, hw and mech requirements to implement it.

    A good way to think about this is in terms of the verification and validation of the requirements: at the low level you will want software verification of all the software requirements, hardware verification of the hardware requirements, and so on. Then you will want higher-level integration verification and validation.

    However, at the highest level, yes, I agree it can be useful to group requirements by functionality, both to aid V&V again and to make it clearer whether any have been missed. But the breakdown to lower levels is best done by the teams to which they will be devolved for implementation.
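
    Purely as an illustration of that shape (made-up requirement IDs and wording, not any particular tool's data model), here is a minimal Python sketch of one system-level requirement fanning out into discipline-level children, with a trivial check that every child has been allocated to a discipline team:

        # Illustrative only: a functional system-level requirement decomposed
        # into discipline-level children, each owned by the team implementing it.
        from dataclasses import dataclass, field

        @dataclass
        class Requirement:
            rid: str
            text: str
            discipline: str = "system"      # e.g. "system", "sw", "hw", "mech"
            children: list = field(default_factory=list)

        detect = Requirement("SYS-001", "The system shall detect trains.")
        detect.children += [
            Requirement("SW-010", "Report wheel detections to the evaluator.", "sw"),
            Requirement("HW-020", "Sense wheel passage at the rail sensor.", "hw"),
            Requirement("MECH-030", "Mount the sensor within tolerance of the railhead.", "mech"),
        ]

        # Completeness check: any child still tagged "system" has no discipline
        # team responsible for implementing (or verifying) it.
        unallocated = [r.rid for r in detect.children if r.discipline == "system"]
        print(unallocated or "all children allocated to a discipline")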

    Outsourcing parts of the design work is a way of getting really good at this: if you don't give the outsourced team a clear requirement set covering their implementation scope, you will not get back what you expect! So I always recommend imagining that this is what will happen: it tends to encourage the production of really good, well-structured requirements.

    And don't forget the basic requirement of whatever it is supposed to do: we once spent a year or two developing about 200 high-level requirements for a train detection system, but realised we'd forgotten to say that it should actually detect trains!

    Thanks,

    Andy

  • Some interesting thoughts in reply to my thread, but I want to pick up on reuse.

    I've fairly commonly come across a scenario where someone in the bid team has stated that in order to keep the project costs down, they will reuse aspects of a previous design. This is another one of those situations.

    As a concept, it's generally sound. Many products are essentially iterative evolutions. Sometimes you are reusing an exact component. This also fits in with structuring for buy/make decisions.

    However, where it falls down is when the definition of the candidate design hasn't been completely verified. Do you understand why that design decision was previously made, and does it still hold true for this product? What is the maturity of the system you are reusing from?

    Commonly, the reuse is taken before the previous design is fully verified, and in some extreme cases it's actually co-development, as you are relying on another project to develop that design so you can use it in your project. Have you considered the risk if that source project doesn't deliver?

    The best example I recall, from many years ago, was where it was discovered that there was an error in the requirements set for a project that had gone into production. Those requirements had been flowed down to multiple other projects, but they had been taken before the original project went into production.

    Fortunately that flow-down had been reasonably documented, so it was possible to identify the impact of the issue and fix the requirement in all the places it had been used, as well as in the original project. The defect had managed to escape through the peer review and V&V processes - which is pretty rare. The child projects, of course, didn't test it because it was carryover.
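
    As a toy sketch of why that documented flow-down mattered (project and requirement IDs invented for illustration), finding everything affected by the defective requirement is just a walk over the recorded reuse links:

        # Illustrative only: each requirement ID records where it was reused,
        # so the impact of a defect is a simple traversal of those links.
        reused_by = {
            "PRJ-A/REQ-42": ["PRJ-B/REQ-07", "PRJ-C/REQ-19"],  # defective original
            "PRJ-B/REQ-07": ["PRJ-D/REQ-03"],                  # reused again downstream
        }

        def impacted(req_id, links):
            seen, stack = set(), [req_id]
            while stack:
                for child in links.get(stack.pop(), []):
                    if child not in seen:
                        seen.add(child)
                        stack.append(child)
            return seen

        print(impacted("PRJ-A/REQ-42", reused_by))
        # Without the documented flow-down, this becomes an archaeology exercise.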

    Of course, there is the case where it is stated as "reuse" but in reality something is being changed...

    My "reuse" is in reality leverage with a certain aspect of co-development due to the maturity of the source.

    One interesting thing I found in my assessment of the existing requirements set's suitability for reuse was evidence of satisfies links that go horizontally between the lowest-level requirements modules in different sub-systems... This is because those sets of components are intrinsically linked, but are the responsibility of different teams.
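
    Those horizontal links are easy to surface mechanically if the link data and module ownership are recorded; something along these lines (structure and names invented purely for illustration):

        # Illustrative only: flag "satisfies" links whose two ends sit in
        # lowest-level modules owned by different sub-systems/teams.
        module_of = {"SW-101": "SubSysA/Software", "HW-202": "SubSysB/Hardware"}
        links = [("SW-101", "HW-202")]   # (source requirement, target requirement)

        def subsystem(module_path):
            return module_path.split("/")[0]

        horizontal = [
            (src, dst) for src, dst in links
            if subsystem(module_of[src]) != subsystem(module_of[dst])
        ]
        print(horizontal)   # each hit is a candidate for an explicit interface requirement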

  • Absolutely, this is one of those issues that keeps me in gainful employment! In the rail industry we have incredibly long product lives, e.g. I'm just starting an ISA review of a major update to an electronic product that was originally developed in 1963! And of course in these cases there is usually no structured requirement specification, and equally no validation to that specification (of course). So one of the key areas I will be assessing is how the project have demonstrated that they have reverse engineered a sufficiently complete requirement set - not an easy task.

    Although in a way that's an easier case, because they absolutely have to do it. A tougher case, which again I work with a lot, is justifying relatively minor modifications to "proven in use" equipment, where the scale of the mod can't justify that complete reverse-engineering approach, but equally the project must justify that they have sufficiently understood the underlying product to argue that the change will only have a positive impact. And there is a huge risk here of "unknown unknowns": projects not realising that a feature they have disturbed (which they may not even have known was present) has a potential impact.

    Sometimes in discussions like this I'm tempted to say "I could write a book about how to do that". This one, even though it's formed a large part of my career over the last 20-30 years, I honestly couldn't - every single case is so different from the others. I will frequently be heard to mutter about how useless the standards are at advising us on what "good" looks like here, but to be fair to them it's phenomenally difficult to do so. That said, IEC61508 does have probably the best guidance I've come across - if you can find it in there.

    The best approach I've found to this is to make sure you have a development team who are worried that they've missed something. However structured the analysis approach is, a team that is too bullish can always make it sound like they've thought of everything - and this is, in my experience, where it goes horribly wrong.

    And remember, reuse is good. However thorough a theoretical reliability (and, where applicable, safety) analysis is (or appears to be), actual field data is always better - provided it is a credible sample over a credible period of time. Of course that still raises the question of whether the field data has actually been captured thoroughly (e.g. were all faults actually reported back to the manufacturer), plus of course the delta between the proven-in-use system and the new design / application / environment.

    I give multi-day training courses on all this to our clients, but even then I'm only barely scratching the surface... but again, if the development team are a team of worriers they'll probably be fine!

  • £1 for the chalk mark, £1000 for knowing where to put it...

  • Yes, we're lucky in the rail industry (at least on the infrastructure side) that generally clients appreciate that...

    One more thought, prompted by yours: it's really useful - but hard work - to try to explain to projects that if they get this right now, their future modifications will be so much easier, because they'll have the correct format evidence of the baseline system in the first place. Again, I've been lucky that two companies I worked for happened to have their mainstream products in production for very long periods of time: about 20 years in one case, and 62 years and counting in the other! It really teaches you the importance of retaining all information, as you'll need it again one day. For example, for those of us who use hazard logs or similar, a hazard log is not just to get you through an audit, it's for life...

    And in the second of these companies, where we had exceptionally low staff turnover, we realised that even with the same staff you don't fully remember the rationale for the product design decisions 5 years later, or indeed 6 months later - or sometimes even a week later! It MUST be formally recorded in a structured way where you can find it. 9 years after I left that company I still occasionally get messages from them asking if I can remember why we did things, and that's from somewhere where we were really pretty good at capturing this stuff.

  • the correct format evidence

    Should that be 'formal'? Guessing so

  • It MUST be formally recorded in a structured way where you can find it

    Certainly that's the ideal, but it's hard to capture all the implicit knowledge. One often only finds out in retrospect what was important, in a niche way.

    The worst part for younger engineers is when the answer is 'we didn't have that technology back then, hence...'.

  • Should that be 'formal'? Guessing so

    Maybe both - formal and in a (sensible) format!

    One often only finds out in retrospect what was important, in a niche way.

    Oh absolutely, we can only do our best. But at least if people realise they have a problem, that's a step towards solving it. I find it's easier with safety-critical systems, as it's easier to make people aware of what could go wrong if requirements get forgotten or misinterpreted. But even in that world people make lots of assumptions along the lines of "we know we're going to design it like this" - no you don't, not unless it's written down as a clear and unambiguous requirement. Or "we've already designed it like this" - again, that will only stay designed like that if you have a requirement to control it.

    (I spent this morning chairing a HAZID with the usual issue of the supplier responding to potential hazards by explaining how they couldn't happen in their system because of xyz feature... and explaining to that supplier that we had to capture that as a safety requirement, to ensure it was recorded that they would do what they were planning to do, or indeed had done, anyway. Fortunately we've been working with that company for a year or two now, so they've learned that that's what needs to happen - I've known some equipment suppliers take this as a personal insult.)

  • they will reuse aspects of a previous design.

    One of the classic "Grandfather's Axe" re-use cases.

    New mounting plates. Different bulges and cut-outs. PCBs relaid. Alternate connectors. Identical in almost every way [M. Poppins 1964]

  • Identical in almost every way

    I know a very serious product where the EMC argument was that the new product was "similar" to the old product. The similarity was that they both had the same name, which had in fact been kept purely to try to claim "similarity". By the time the CE marking authorities found this out, many years later, the business manager who'd made the "similar" statement had left the business (and, sadly, had in fact left the human race) and I was called to explain this. Not a position I ever want to be in again...even though we'd carefully documented at the time that the EMC team did not endorse the "similar" statement, and that we did not consider that the EMC case applied to the new system, it was still a very very embarrassing meeting. (Fortunately for the company the product had been supplied and commissioned just before the EMC directive came into force, so the company was able to demonstrate that it hadn't actually broken the law, and in fact we'd ignored some of the "requirements" from that business manager and had surreptitiously taken reasonable EMC precautions in the design, but it was definitely not good practice.)

    In the rail industry we have the Common Safety Method for Risk Assessment regulation, which allows the use of "similar reference systems" to argue that reapplication of a known system does not require a new risk assessment (I'm heavily simplifying there). It's a really sensible and pragmatic idea, and should be really useful, but in practice it's proved incredibly complicated to demonstrate that systems genuinely are "similar" - both that they are technically sufficiently identical, and that they are used in sufficiently similar operational and environmental conditions. As an independent assessor I am forever asking "are you sure you've identified all the differences?" Again, it all comes down to how well the requirements of the baseline system, and compliance to those requirements, were stored in the first place.

  • You can get some interesting scenarios.

    I've come across a product (many years ago) where, from the outside, it is the same product: the (product) requirements are the same, the interfaces are the same. There were no lower-level requirements.

    However, what they had done is replace the microprocessor with a different one to address obsolescence - and not a pin-compatible family upgrade. At the design level this has several implications: there may be new internal power supplies, and the software and firmware may need to change.

    I've also known manufacturing departments to replace (passive) components that on paper look the same, but whose performance was very different (in fact, one instance resulted in a "thermal" event).
