When security problems occur in computer software and hardware, they are very expensive to fix. An operating-system-level fix costs the vendor itself hundreds of thousands of dollars, and for customers, the cumulative cost of deploying the patch is likely even higher. To show what goes into fixing a security problem and issuing a patch, here is a list of the steps that must be coordinated and the associated costs:
- Finding the vulnerable code
- Fixing the vulnerable code
- Testing the feasibility of the fix
- Testing the setup of the fix
- Creating and testing international versions
- Posting the fix
- Writing any support documentation related to the fix
- Handling negative public perception
- Bandwidth and download expenses
- Lost productivity
- Customer implementation efforts
- Potential loss or postponement of market opportunities
Therefore, it is often said:
For general security – “An ounce of prevention is worth a pound of cure.”
For internet security – “An ounce of prevention is worth a ton of cure.”
Traditional engineering designs and tests for:
- Holding up to storms, heat, wear and tear
- Normal presumptions
- “Acts of God”
- Carelessness of man
- “Murphy’s Law”
- Other fairly predictable factors
- Test for errors and mischance
Safety engineering takes things a step further:
- All of the above plus …
- Random, accidental and transient faults
Security engineering goes even further:
- All of the above plus …
- Ordinary people who put convenience first
- Malicious intent and behavior by intelligent, clever, devious, unpredictable and cheating opponents
Here is a list of some security development principles (drawn from a much longer, more technical list). Some are a bit technical, so I will elaborate:
- Complexity is the worst enemy of security – Security must be architected in from the very beginning of the design process, and as a complex system grows larger, it becomes ever harder to make sure that all the security issues have been addressed.
- Minimize your attack surface – When you are designing a product, minimize the number of inputs and input combinations a user can supply. The permutations, and the effort to cover all possible permutations, grow geometrically, and every possibility eventually needs to be secured.
- Use defense in depth – You can never have enough security. If you can protect something in multiple different ways (not just the same mechanism applied multiple times), it creates a more resilient application.
- Use least privilege – When in doubt, give the end user just enough privilege to get the job done. It is easier to assign “root” privileges to most functions, but it is usually the least important function running with root privilege that gets exploited in an attack.
- Employ secure defaults – Since all systems will fail, or will be made to fail (e.g., by a power outage), it is important that the software can always fail into a secure mode (e.g., by shutting down immediately).
- Backwards compatibility – Sometimes the biggest headache and cause of grief. Where possible, avoiding backwards compatibility means avoiding backwards security holes.
- Assume external systems are insecure – When you receive information from a third-party application or network, always assume the data is insecure or compromised. Just because you think the data comes from a secure system doesn’t mean that it does.
- Never depend on security through obscurity – Trying to be secure by hiding something or through obfuscation will never work in the long term. Security should be implemented in a logically sound fashion.
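A minimal Python sketch of minimizing the attack surface (the `Service` class and command names are hypothetical): a request handler exposes an explicit allowlist of commands rather than dispatching arbitrary attribute names, so there is only one small, enumerable input set to secure.

```python
class Service:
    def status(self):
        return "ok"

    def _reset_all(self):   # internal; must not be reachable from outside
        return "reset"

# Explicit allowlist: only these commands are part of the attack surface.
ALLOWED = {"status"}

def handle(service, command: str):
    """Dispatch only allowlisted commands; everything else is rejected."""
    if command not in ALLOWED:
        raise ValueError("unknown command")
    return getattr(service, command)()
```

Dispatching with `getattr(service, command)` directly, by contrast, would silently expose every method of the class, including internal ones.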
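Defense in depth can be sketched as independent layers that must all fail before an attack succeeds. Here is a hedged Python example (the base directory is hypothetical) guarding a file read against path traversal with two unrelated checks:

```python
import os

BASE_DIR = "/srv/app/data"  # hypothetical document root

def safe_read(name: str) -> bytes:
    """Read a named file from BASE_DIR with two independent defenses."""
    # Layer 1: reject suspicious names outright (input validation).
    if "/" in name or name.startswith("."):
        raise PermissionError("bad name")
    # Layer 2: even if layer 1 were bypassed, confirm the resolved
    # path still lives under BASE_DIR (containment).
    path = os.path.realpath(os.path.join(BASE_DIR, name))
    if not path.startswith(BASE_DIR + os.sep):
        raise PermissionError("escapes base dir")
    with open(path, "rb") as f:
        return f.read()
```

A bug in either check alone still leaves the other standing, which is the point: the layers are different mechanisms, not the same check repeated.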
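One way to sketch least privilege in Python (the store contents and key names are made up): hand callers a wrapper that exposes only the one capability they need, instead of the full object with its write and delete operations.

```python
class ReadOnlyStore:
    """Expose only the read capability of an underlying dict-like store.

    Code holding this handle cannot write or delete entries,
    even by accident or through a later refactoring mistake.
    """
    def __init__(self, backing: dict):
        self._backing = backing

    def get(self, key, default=None):
        return self._backing.get(key, default)

# Hypothetical usage: a component that only needs to read configuration.
secrets = {"api_key": "s3cr3t"}
handle = ReadOnlyStore(secrets)
```

The caller gets just enough privilege to do its job; if that component is ever compromised, the attacker holds a read-only handle, not the whole store.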
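A small Python sketch of failing into a secure mode (the policy structure is an assumption): any error while evaluating an authorization decision results in denial, never in accidental access.

```python
def is_authorized(user: str, resource: str, policy: dict) -> bool:
    """Fail closed: missing users, missing resources, or malformed
    policy data all deny access rather than grant it."""
    try:
        return bool(policy[user][resource])
    except Exception:
        return False   # default deny on any lookup or type error
```

The alternative, failing open on an unexpected error, is exactly the insecure failure mode the secure-defaults principle warns against.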
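Treating external data as untrusted might look like the following Python sketch (the field names and ranges are hypothetical): parse the third-party payload, then check types and ranges before using any of it.

```python
import json

def parse_order(payload: str) -> dict:
    """Validate an order payload received from an external system.

    Nothing from the payload is used until its type and range
    have been checked; malformed input raises instead of leaking through.
    """
    data = json.loads(payload)          # raises on malformed JSON
    order_id = data.get("order_id")
    qty = data.get("qty")
    if not isinstance(order_id, str) or not order_id.isalnum():
        raise ValueError("bad order_id")
    if not isinstance(qty, int) or not (1 <= qty <= 1000):
        raise ValueError("bad qty")
    return {"order_id": order_id, "qty": qty}
```

Note that only the vetted fields are returned; any extra fields an attacker smuggles into the payload are dropped rather than passed along.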
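As a contrast to obscurity, Python's standard `hmac` module rests its security entirely on a secret key while the algorithm itself is public and well analyzed. A minimal sketch (the key here is a placeholder; a real key would need secure generation and distribution):

```python
import hashlib
import hmac

SECRET_KEY = b"hypothetical-shared-key"  # placeholder; keep real keys out of source

def sign(message: bytes) -> str:
    # The algorithm (HMAC-SHA256) is completely public; only the key is secret.
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    # compare_digest runs in constant time to avoid timing side channels.
    return hmac.compare_digest(sign(message), tag)
```

Hiding a homemade algorithm would add nothing once it leaked; here, everything except the key can be published without weakening the scheme.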
Of course, this only touches on the difficult and complex topic of security engineering. I hope to add more subsections, but even just scratching the surface here, one should get a pretty good idea that writing secure software is not easy.