Errors in software (or spreadsheet) design tools

I am currently conducting research on error prevention strategies in engineering design software and would greatly appreciate your insights. The goal is to gather responses, aggregate and anonymise them, and share the findings in a research paper. The paper aims to highlight current shortcomings in our field and suggest methods for improvement.

My primary focus is understanding how various companies approach the auditing of both internal and external calculation tools.

  1. Audit Methods: Do you employ peer reviews, code audits, or self-assessments? Could you share which method is your go-to and why it is preferred?
  2. Audit Frequency: How often do you conduct these audits? I’ve observed that some companies only audit when a code change occurs, while others do so every time a major error is spotted. What is your company’s practice and why?
  3. External Tools: If you use external tools, how do you ensure their reliability? Do you conduct your own audits on the software, or do you rely on the supplier’s auditing system? If you do review their audit, what does that process look like and why?
  4. Access to Source Code: If you conduct an audit, how do you access the source code? If you don’t have access to the source code, what alternative methods do you use and why is that method acceptable?
  5. Raising Concerns: Lastly, how do you raise concerns about potential errors? More importantly, how do you validate that changes have been implemented effectively?

If you would prefer to send me a private message, please hover over my name and select 'Send Private Message' in the pop-up window, or visit my full profile and then 'Connect'. Alternatively, I have set up a Google Form that does not require your name or email address, so you will remain entirely anonymous: https://forms.gle/B8MkNGMqyEt2VhgK9

Thank you in advance for your time and insights. Your input will be invaluable to my research.

Parents
  • There's an old adage in the software world that quality can't be inspected in and it can't be tested in - it must be designed-in and it must be built-in. The ideal is that you use a developer who has a thorough understanding of the code and a thorough understanding of what it's meant to achieve. Everything else is just useful add-ons, but they won't of themselves make up for rubbish generated at the earlier stages.

    In my part of the world no change goes out of the door unless it's been reviewed by someone else. There are several layers of testing, but errors still get out now and again. 3rd party stuff (anything from operating systems to databases to 3rd party data sources, in the usual situation where the supplier claims backward compatibility) is tested insofar as our system passes regression tests when built on top of the new versions. Major version changes of 3rd party stuff result in a much more comprehensive analysis, upgrade and re-test. Our customers will routinely do their own acceptance testing before putting new versions into production.

       - Andy.
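
For readers less familiar with the regression-testing approach described above, one common form of it is a golden-file test: re-run a fixed set of input cases and compare the tool's output against previously accepted results whenever the code, or a 3rd-party layer underneath it, is upgraded. The sketch below is Python/pytest and purely illustrative; the beam_tool module, the reference file and the tolerance are hypothetical stand-ins, not anything described in this thread.

    # Illustrative sketch only: a golden-file regression test in Python/pytest.
    # The "beam_tool" module and reference file are hypothetical stand-ins.
    import json
    from pathlib import Path

    import pytest

    import beam_tool  # hypothetical in-house calculation module under test

    # Previously accepted results for a set of representative input cases,
    # stored alongside the tests.
    REFERENCE = json.loads(Path("tests/reference_results.json").read_text())

    @pytest.mark.parametrize("case_id, case", REFERENCE.items())
    def test_results_match_reference(case_id, case):
        # Re-run each stored input set and compare against the accepted output,
        # so a change in our own code or in an upgraded 3rd-party layer underneath
        # it (OS, database, data source) is flagged before anything goes out of
        # the door.
        result = beam_tool.run(**case["inputs"])
        assert result == pytest.approx(case["expected"], rel=1e-9), case_id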

Children
  • Hi Andy,

    Thank you for your response. You’ve touched on some key points that resonate with my research. The practices you’ve outlined align with what I consider to be effective in software development and management.

    I find this quote interesting:

    Our customers will routinely do their own acceptance testing before putting new versions into production.

    In my experience with User Acceptance Testing (UAT) in engineering design tools and spreadsheets, I’ve noticed a diverse range of approaches. With your experience, have you developed any 'best practice' protocols or procedures with your customers? How do you support them in ensuring these tests are conducted thoroughly, accurately, and within a reasonable time frame?

    I appreciate your time and insights. They are invaluable to my research.

    Graeme

  • I'm not sure about "best practice" protocols - the state of the craft still seems to be far from perfect. We're getting just past the edge of my day-to-day work now, but I understand that we usually provide a list/description of all the intended changes (at least those that should affect that customer) to allow them to "focus" their testing (and manage any parallel changes - e.g. to user training or internal procedures). I don't think we'd try to influence the detail of their testing too much - at best they'd then be taking the same approach as we had during our internal testing, which isn't likely to find any problems we haven't found and fixed already. A variety of tests, and letting them focus on the particular ways they actually use the system (which probably isn't entirely as we'd imagined), probably gives the most bang for the buck.

       - Andy.

  • Hi Andy,

    Your feedback is really interesting and has opened up lots of areas which I hadn't considered. Thank you very much for taking the time to respond to me; I really appreciate your thoughts.

    Graeme