Question: <<Mightn't there be a rogue staff member tweaking the results?
He might first get into the algorithms and test cases and doctor them, believing he is "adding value".
He would then get into the duty statements and position numbers of staff, tweaking them to cover up.>>

Would the following 3 controls be enough:

1/"Generally there is a transparent logging algorithm to catch anyone manipulating the figures in the software."

2/"People can mess with the software.
But important/secure use software (such as banking software) is often checked by independant programmers and companies."


Best Answer - Chosen by Asker:

What would the motive be for a "rogue staff member" to endanger the lives of people? Banking software is a bit of a different proposition: the motive in that case is that if you can inject a trojan into the software you can potentially make millions of dollars. The same kind of incentive isn't there in your case.

Code reviews can help, to a point. But the kind of code reviews you're talking about would impair productivity by a factor of 10 or 20, and there's STILL no guarantee that you're going to get a working system when you're done.

Logging really doesn't do much. If you're going to fudge the results, you're going to fudge the logs too. There's no way I can think of to effectively protect log files, any more than there is to protect the data. The only real use of log files that I can think of is that they allow mistakes to be corrected retroactively (e.g. incorrectly executed bank deposits over a period of months).
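
To make that "retroactive correction" use concrete, here's a minimal sketch (the account names, amounts, and the 10x posting error are all invented for illustration) of replaying a deposit log to rebuild balances after a batch was posted incorrectly:

    # Hypothetical sketch: rebuild balances from a deposit log, applying a
    # correction to each recorded amount (e.g. undoing a systematic posting error).
    def replay_deposits(log_entries, correction):
        balances = {}
        for entry in log_entries:
            amount = correction(entry["amount"])
            balances[entry["account"]] = balances.get(entry["account"], 0) + amount
        return balances

    # Invented log of deposits that were all posted at 10x their real value.
    log = [
        {"account": "A-100", "amount": 5000},
        {"account": "A-200", "amount": 1200},
        {"account": "A-100", "amount": 300},
    ]

    corrected = replay_deposits(log, correction=lambda amount: amount // 10)
    print(corrected)  # {'A-100': 530, 'A-200': 120}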

In my experience, log files tend to reduce code quality. Programmers rely on megabytes of log files to diagnose, after the fact, the situations that caused a problem, after the entire system has crashed, instead of consistently and ruthlessly dealing with errors defensively, in the code. Log-file programmers tend to be programmers who are lazy when it comes to anticipating and responding to unusual conditions in the code itself.
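
To illustrate the contrast (a hedged sketch, not a real system: the sensor, the values, and the depth limit are all invented), compare a handler that logs the anomaly and carries on with a guessed value against one that treats it as an error at the point it occurs:

    import logging

    logger = logging.getLogger(__name__)
    MAX_DEPTH_METRES = 500  # invented operating limit

    def depth_logged(raw_reading):
        # Log-and-continue style: note the anomaly, substitute a guess, sail on.
        if raw_reading is None:
            logger.warning("depth sensor returned None; substituting 0")
            raw_reading = 0
        return raw_reading

    def depth_defensive(raw_reading):
        # Defensive style: refuse to proceed on implausible input, so the
        # caller is forced to deal with the failure immediately.
        if raw_reading is None:
            raise ValueError("depth sensor returned None; refusing to guess")
        if not 0 <= raw_reading <= MAX_DEPTH_METRES:
            raise ValueError(f"implausible depth reading: {raw_reading!r}")
        return raw_reading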

Messing with the software: nobody has any incentive to break mission-critical systems on a submarine, putting people's lives in danger. Banking is a different case: an injected trojan can put millions of dollars into a rogue programmer's pocket. Different problem domain, entirely. Code reviews in this case would be looking for code that looks odd. That's a totally different proposition from reviewing to ensure correct operation. Code reviews work to a point, but can't effectively deal with problems that arise out of interactions between pieces of code that are more than a few dozen lines apart. And it's the interactions that take down large systems.
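
Here is a deliberately tiny, invented sketch of the kind of interaction a line-by-line review tends to miss: each function looks innocuous on its own, but one quietly changes shared state the other relies on:

    # All names and numbers are hypothetical.
    settings = {"depth_limit": 300}   # metres, shared across the system

    def apply_operator_profile(profile):
        # Reviewed on its own: "load the operator's preferred settings". Fine.
        settings.update(profile)

    def depth_is_safe(depth_metres):
        # Reviewed on its own: "enforce the configured limit". Also fine.
        return depth_metres <= settings["depth_limit"]

    # The failure only appears when the two interact: a profile written for a
    # different boat quietly raises the limit the safety check uses.
    apply_operator_profile({"depth_limit": 900})
    print(depth_is_safe(450))  # True, even though 450 m exceeds the real limit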

There's a good case to be made for having programmers write their own unit tests as they go. Unit tests written by someone else are unlikely to exercise the really difficult pieces of code, and are unlikely to provide complete functional coverage. See XP and Scrum for examples of methodologies that rely heavily on unit tests written in parallel with the code. For any given test written in advance, there's a 50-50 chance that problems encountered while running the test are in the test code rather than the production code. I've seen really impressive results with XP-style unit testing. I'd go with something like that if my life depended on it. Bugs tend to occur during refactoring: a change here breaks something over there. Fully automated tests allow refactoring to be done fearlessly. Traditionally, large system development collapses because, without fully automated unit tests, refactoring is too dangerous, and the system collapses under the weight of cruft accumulated over time.
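
As a concrete (invented) example of that style, here is a small function written together with its test, using Python's built-in unittest; a suite of tests like this, accumulated as the code is written, is what makes later refactoring safe:

    import unittest

    def trim_mean(samples, trim=1):
        """Mean of samples with the `trim` lowest and highest values dropped,
        e.g. to damp sensor spikes. (Invented example, not from the question.)"""
        if len(samples) <= 2 * trim:
            raise ValueError("not enough samples to trim")
        kept = sorted(samples)[trim:len(samples) - trim]
        return sum(kept) / len(kept)

    class TrimMeanTest(unittest.TestCase):
        # Written by the same programmer, at the same time as the code above,
        # deliberately exercising the awkward cases.
        def test_drops_spikes(self):
            self.assertEqual(trim_mean([1, 50, 51, 52, 999]), 51)

        def test_rejects_short_input(self):
            with self.assertRaises(ValueError):
                trim_mean([1, 2], trim=1)

    if __name__ == "__main__":
        unittest.main()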

Sure, you can make it difficult to modify the test cases. But why? If you make it difficult for people to do the right thing and fix the test case, they will do the wrong thing and change the software to pass the test, even though the behaviour isn't correct, because it's just so much easier.

So, no. Probably all three of those methodologies are inappropriate. These methods might be appropriate for money-critical systems, but I have to tell you that if my life depends on a piece of software, the fact that the software is happily logging away while it crashes and kills me is not a reassurance. If it were up to me, I would forbid informational logging altogether, because I really do believe that it encourages poor software development practice.

The truth of the matter is that building large computer systems is a hugely complex process. There are no simple solutions. It takes a lifetime of experience to learn how to manage systems like these well.