Requirement Structuring

A coffee break question?

I have a legacy requirements set; the requirements have been structured in alignment with disciplines (software, hardware, mechanical, etc.). The project work breakdown structure has a similar pattern. It should be noted that there are multiple parts that come together to form the whole system, and those parts are not necessarily the product of a single team.

The issues I see here are: 1) it carries a risk of siloed working; 2) it's not aligned to the integration of the system components; and 3) I'm pretty sure there are missing requirements relating to interfaces and integration.

My preference would be to structure along a more logical/functional methodology. However, I need to get buy-in for the change.

I won't be able to actually change the legacy system, but the intent is to enable reuse and leverage of requirements on future projects, and I want to set up that structure. I also doubt I'll be able to change the Work Breakdown Structure.

I would be interested in views on the pros and cons of the various approaches.

Assume this is a reasonably complex product with a multidisciplinary team working on it.

Many Thanks in Advance

Mark

Parents
  • Those requirements were set a while ago. Should the system be upgraded, they will probably be out of date. So I suggest you leave them alone, but store them where they could be used as a legacy input to a future conceptual model should the system be upgraded.

  • In terms of project lifecycles, the "donor" project is right at the start of its lifecycle - so its requirements are actively being changed and updated as the design develops. Not so much out of date as currently immature.

    However, it's the structuring of those requirements that I'm questioning here.

  • If the requirements are in a database, that is the purpose of keywords.

  • I now tend to separate Requirements from Specifications - it took me decades to understand the distinction. 

    The ideal Requirement is a narrative, without numeric limits.

    Meanwhile the consequent Specification is all about setting the testable numeric limits for blind testing of the product or its sub-system in a test environment.
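
    As a minimal sketch of the distinction - the requirement wording, IDs and numbers below are invented purely for illustration:

        # Hypothetical illustration in Python: one narrative requirement,
        # with the testable numeric limits pushed down into consequent
        # specifications.
        requirement = {
            "id": "REQ-042",
            "text": "The operator shall be able to stop the conveyor "
                    "quickly and safely from any operating position.",
        }
        specifications = [
            {"id": "SPEC-042-1", "parent": "REQ-042",
             "limit": "Belt at rest within 2.0 s of e-stop actuation",
             "verify": "Blind test in the rig environment"},
            {"id": "SPEC-042-2", "parent": "REQ-042",
             "limit": "An e-stop within 0.5 m of every operator station",
             "verify": "Inspection of the installation drawings"},
        ]

    The narrative survives across projects; the numeric limits live (and change) with the particular design.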

    There are very few cases where the narrative requirement is actually achieved: e.g. President Kennedy's "landing a man on the Moon and returning him safely to the Earth before this decade is out"; the (old now) Quantel Paintbox drawing software's "the capability of an HB pencil"; and Steve Jobs' requirement for the mouse pointer to operate on "a pair of jeans" (maybe apocryphal).

    Also noted is that the functional technical performance often fits within three quarters of a page - the other 60 pages are environmental survival aspects (power provision, EMC, vibration, temperature ranges, etc.). They are all zero-function requirements - input this, get out zero change. Hence the Gold Plated Brick requirements that can turn up every now and again.

    Finally, given some "requirement", look at how big the pile of lower-level part specifications becomes, all from that simple short narrative - where did all that come from? Have a look at Knowledge levels (similar to TRLs - Technology Readiness Levels), and the Knowledge tree.

    1. Bohn, R.E.: 'Measuring and Managing Technological Knowledge', in 'The Economic Impact of Knowledge' (Elsevier, 1998), pp. 295–314. DOI: 10.1016/B978-0-7506-7009-8.50022-7. Also published as an article in Sloan Management Review, 1994.

  • This is of course the whole point of it being in a database.

    However, if those keywords are not constrained, then what you find is that people populate the database with a varied selection of keywords, which may or may not align to the design and architecture - the extreme being where different teams call the same thing by different names or acronyms.

    You end up going through the database to correct all the names, because the alternative is that your searches and filters on the database don't return the right data.

    Personally, I always suggest that the keywords should be constrained to items shown as architecture elements or ontological elements, whichever is most relevant.
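
    As a sketch of what enforcing that constraint might look like - the element names and the check are invented for illustration, not any particular requirements tool's API:

        # Hypothetical sketch in Python: keywords may only be drawn from a
        # controlled list of architecture elements; anything else is flagged
        # at entry time rather than cleaned up later.
        ARCHITECTURE_ELEMENTS = {"Propulsion", "Navigation", "PowerSupply", "Comms"}

        def invalid_keywords(keywords):
            """Return the keywords that are not recognised architecture elements."""
            return [kw for kw in keywords if kw not in ARCHITECTURE_ELEMENTS]

        bad = invalid_keywords(["Navigation", "NavSubsys"])
        if bad:
            print(f"Rejected keywords (not architecture elements): {bad}")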

  • I always suggest that the keywords should be constrained

    Though there can be a foresight bias: believing that the keyword set is complete, or that folk actually understand what the terms mean and what they include or exclude. One man's ontology is another lady's hierarchy. Being able to update the keyword list quickly also matters.

    Manoeuvring gets an honourable mention for the number of potential ways folks can re-spell it, along with how many reference datums you can have [surely the plural of datum is data?].
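
    One cheap mitigation - sketched here with invented variants - is an alias table that folds the spellings people actually type back onto a single canonical keyword:

        # Hypothetical alias table: map variant spellings onto one canonical
        # keyword before storing or searching, so filters stay reliable.
        ALIASES = {
            "manoeuvring": "Manoeuvring",
            "maneuvering": "Manoeuvring",
            "manouevring": "Manoeuvring",
            "manoevring": "Manoeuvring",
        }

        def canonical(keyword):
            """Fold a typed keyword onto its canonical form, if one is known."""
            return ALIASES.get(keyword.strip().lower(), keyword)

        print(canonical("Maneuvering"))   # -> "Manoeuvring"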

    The Ergodic, Stochastic and Ensemble distinctions always get me.

  • There's also the unintended consequence that keywords (or any similar grouping strategy) can mean that interactions between those groups get missed - when doing a modification on sub-system "A" the engineering team reasonably search for requirements on sub-system "A", but don't check the source and rationale of those requirements, which means they miss the fact that their changes inadvertently impact the interaction with sub-system "B".

    In my mind it's not so much a problem with the database; it's awareness of the fact that by the time your requirement set is so big that you need a database to manage it then, by definition, the set is too big for any one person to keep a level of awareness of all the requirements and their interactions. So, back to the start of this thread: in those circumstances the team need to be very aware of this and be careful about letting teams work solely on sub-sets - which, practically, those teams will mostly need to do.
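
    A sketch of the kind of query that helps here - the data shape is invented; the point is that the search follows the trace links rather than stopping at the keyword:

        # Hypothetical sketch: filtering on the sub-system keyword alone
        # misses R3; following the trace links also pulls in the interface
        # requirement owned by sub-system "B".
        REQS = {
            "R1": {"keywords": {"A"}, "links": {"R3"}},
            "R2": {"keywords": {"A"}, "links": set()},
            "R3": {"keywords": {"B"}, "links": {"R1"}},  # interface with "A"
        }

        def impact_set(subsystem):
            hits = {rid for rid, req in REQS.items() if subsystem in req["keywords"]}
            linked = {lid for rid in hits for lid in REQS[rid]["links"]}
            return hits | linked

        print(sorted(impact_set("A")))   # -> ['R1', 'R2', 'R3']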

    I'll admit that this is why I like working on small projects! And why the big fails I've seen have been on big integration projects - everyone was doing their best, but there were just too many interactions to consider in the processes that were used. As has been commented here, it's a whole area of systems engineering by itself, and my feeling is that it's an area that needs more intelligence put into it on each project than is often appreciated.

  • but there were just too many interactions to consider in the processes that were used

    Very much the "Crapper's brainfull" analogy of Prof Hitchins (INCOSE & IEE, as was), whereby Mrs. Thomas Crapper (of flush toilet fame) was said to be the last person able to understand all the elements of a complete system (all rather apocryphal, but very relatable). Reference somewhere in an IEE conference proceedings up in my loft...

  • A direct reference to Hitchins' "Crapper's Brainfull"

  • I hadn't come across that before, that's wonderful!

    Since I moved into consultancy, which has the interesting attribute of having the opportunity to see many different organisations' processes at work, I've been thinking more and more about the ideal development team size. The development team I ran for many years got us all into a (very) high state of stress as we felt woefully understaffed, however in hindsight we were incredibly effective. But only because the project scale was such that, despite being highly innovative and highly safety critical, it was still pretty much a Crapper's brainfull - none of us understood all of it, but all of us understood it at, say, block diagram level and hence the interactions between blocks. What I've seen is that there is some tipping point, which probably is the Crapper's brainfull point, where you need a completely different approach - many teams rather than one integrated team - but then this also needs a totally different management approach. And one conclusion I've definitely come to is that applying small project approaches to big projects, or big project approaches to small projects, doesn't work at all.

    I think historically this was less of a problem, civils projects for example have always tended to be big logistical projects. Many electronics (i.e. only hardware) development projects by contrast lent themselves to small teams who could work from one (reasonably) well defined input to another (reasonably) well defined output, and could cope with the uncertainties in the middle. But software has really muddied the waters, with huge interactive projects where even each individual component exceeds Crapper's brainfull. 

    A very personal view now: I get slightly frustrated that at the moment it feels as if every conference and discussion is swamped with talk about AI, when we still haven't bottomed out, in practice, the underlying issues of managing the reliability and safety engineering - including the critical fundamental of requirements management - for existing large complex systems, whether AI or otherwise. I may well be wrong, but my feeling is that overambitious claims are currently being made for AI because these management issues are underestimated - despite the fact that we routinely see non-AI large IT systems (in particular) failing.

    I've seen first-hand the (impressive) amount of work that went into managing the requirements that keep the Crossrail / Elizabeth line trains running safely autonomously through central London, where all they need to do is not hit the train in front. And the sadly inevitable reliability issues that arise from maintaining that safety ("if in doubt, stop the trains"). (Simplified drastically there - yes, the reliability could be improved, but at costs which would be disproportionate.)

    Having seen that, when some people start talking very optimistically about automated cars managing a vastly more complex and more poorly designed set of requirements, I have to wonder whether they've failed to appreciate that they met Crapper's brainfull very early on. And I think that's key: appreciating that there is a limit to how much of a complex system each of us can understand.

    My personal approach when I'm aware that I'm reaching Crapper's brainfull is usually to gently extract myself from the project! But only because I'm very well aware that my strengths have always lain in small innovation projects, there are many others who are much better than me at big projects.


Children
  • Ah, that's a wonderful article - I'll take a note of that one and see if I can use it somewhere.

    AI. My issue is that it's treated as if it's a completely new thing, when AI has been around a very long time - whether that's the auto-routing of PCBs, or paths on maps, or even Clippy.

    We have been using Wolfram for maths problems for several years. Although it isn't sold on its AI capabilities, the fact that it can respond with a "did you mean to ask this?" type response shows it has some.

    What has changed is that you can have generic statistics-based solvers that source their input data from a wide range of sources. The challenge here is to train that response so it's useful. There are definitely some use cases where AI can bring a benefit, but many where it doesn't.

    It seems every company wants to think about how they can introduce AI into their products and processes without really assessing what value it will bring and how they will verify that it did bring value.

    Fortunately the nature of the systems I'm working on means I will dodge AI for the foreseeable future.

    Although, my best example of Crapper's brainfull was the person who told me that they were writing a book (on alternative diets) with ChatGPT, and that ChatGPT ran on quantum computers...

    ChatGPT, please can you restructure my requirements into a suitable systems architecture... 

    (I would almost be tempted to try to see what it does, but not with real project requirements).