Securing Legacy Applications
Legacy applications. If there’s one thing developers agree on, it’s that they don’t want to work on them. Failing that, it’s that legacy applications are assumed, almost by default, to be insecure. Neither belief is without some justification.
It’s been extremely common in software development circles to disregard anything that appears to be inconsequential to “getting the release out.” All too often, these have included such best practices as testing and security.
As a result, it’s often fair to assume that older software likely isn’t as secure as it could be. Given that, if you’re in charge of maintaining legacy software, or you’re on the team tasked with managing it, how do you make it secure?
Today, I’m going to walk you through a series of approaches and techniques you can use to ensure that your legacy applications are as secure as they can be, or at least becoming steadily more secure.
So, you’re on board with the need to make your software more secure. The question is, how do you do that? There’s an often-quoted adage which springs to mind at this point:
Before you can go somewhere you first need to know where you are.
If you don’t know where you are, how can you know what you need to do, add, remove, fix, or deprecate? To find out, you’ll need to create a threat model for each of your applications.
A complete treatment of what threat models are, how they work, and how to create them is outside the scope of this article. But here are the basics. According to OWASP, application threat modeling is:
A structured approach that enables you to identify, quantify, and address the security risks associated with an application… It allows the reviewer to see where the entry points to the application are and the associated threats with each entry point.
They suggest that three core steps form the basis of the process.
- Decompose the application: gain an understanding of the application and how it interacts with external entities.
- Determine and rank threats: identify threats from both the attacker’s and the defender’s perspective.
- Determine countermeasures and mitigation techniques: decide how each ranked threat will be addressed.
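The three steps above can be captured in something as simple as a small data structure, so the model lives alongside the code rather than in a forgotten document. A minimal sketch, with field names and a 1–10 risk score that are my own conventions, not an OWASP schema:

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    description: str
    risk: int                      # e.g. a DREAD-style score; higher = worse
    countermeasure: str = "TBD"

@dataclass
class EntryPoint:
    name: str
    threats: list = field(default_factory=list)

# Step 1: decompose the application into its entry points.
login = EntryPoint("Login form")
upload = EntryPoint("File upload endpoint")

# Step 2: record the threats against each entry point.
login.threats.append(Threat("SQL injection via username field", risk=8,
                            countermeasure="Parameterised queries"))
upload.threats.append(Threat("Malicious file upload", risk=6,
                             countermeasure="Content-type and size checks"))

# Step 3: rank all threats, worst first, to prioritise countermeasures.
all_threats = login.threats + upload.threats
ranked = sorted(all_threats, key=lambda t: t.risk, reverse=True)
```

Even this toy version makes the ranking explicit, which is what drives the order in which you apply countermeasures.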
Once the threat model has been created, the next step is to integrate security scanning and analysis into the software development life cycle (SDLC) of your applications. An excellent way to start is with a penetration (pen) test.
Penetration tests are designed to look for weaknesses in the application that would allow attackers to gain access to it and, potentially, to the data which it contains.
The tests should determine whether the application is secure and, if not, where its weaknesses lie. There is a range of tools available for pen testing. Here are a few of them:
- Metasploit: “a tool for developing and executing exploit code against a remote target machine.”
- Wireshark: an open source packet and protocol analyzer, designed for network troubleshooting and analysis.
- w3af: a web application attack and audit framework and security scanner; “a complete environment for auditing and attacking web applications.”
- Nessus: a proprietary vulnerability scanner which can detect misconfigurations, default passwords, denial-of-service (DoS) vulnerabilities, and remote access flaws.
By integrating one or more of these into your software development process, you should be able to stay abreast of security-related issues before they can be released into the wild.
But even the best efforts likely won’t catch everything. And, as the old saying goes, to err is human (or: everyone makes mistakes). So, encourage people, both inside and outside your organization, to find and report bugs.
One excellent way to do this is by starting a bug bounty and disclosure program. These programs offer rewards, sometimes financial, sometimes not, to people who find and report bugs in software. Numerous companies run them, including Apple, Google, 99designs, Beanstalk, and Braintree. To find out more, check out the list on Bugcrowd.
Once you’ve gotten a baseline of quality, so that you know what you’re working with, it’s time to start improving the code. To do that, I suggest that you develop “in the small”.
What I mean by that is: when you make changes, don’t make a large number of them at any one time. Instead, make a series of small changes, one at a time. That way you can make them quickly, and you can document exactly what you’ve changed.
This has the advantage of allowing you to perform fine-grained analysis of what worked and what didn’t. You can correlate specific changes with the growth or degradation in the quality of the security of the application.
To do this properly, keep simplicity in mind: seek out and remove all forms of over-engineering, and avoid adding more as you go along.
This may not be the most natural approach for your team, because software development is often seen as something that happens quickly, even immediately. It’s portrayed as a field where things happen at the speed of thought, where cool people sweep in, make sweeping changes, and disappear again.
Well, that’s nice for Hollywood movies. But it’s rarely — if ever — the case in reality. So keep simplicity in mind. Take it one piece at a time. And measure all the things.
If there’s one saying that’s stuck with me about software security, it is: don’t trust user data. And it’s a very important saying. It’s something that the OWASP Top Ten continues to say, as do a myriad of software security experts.
And with good reason. Not validating data can lead to all kinds of attacks, such as SQL injection, interpreter injection, and buffer overflows.
If you’re on board with this, how do you do it? Here’s the short answer: OWASP’s Data Validation wiki recommends four approaches, in descending order of suitability, of which I’ll cover the top three:
- Accept known good: check that the data is one of a tightly constrained set of known good values. Any data that doesn’t match should be rejected.
- Reject known bad: reject any data that matches a list of known-malicious values or patterns. This is weaker than accepting known good, because such a list can never be complete.
- Sanitize: rather than accept or reject input outright, change the user input into an acceptable format.
Personally, I prefer a combination of taking known good input and then sanitizing it afterward.
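Here’s what that combination can look like in practice. The allowlist and field names below are illustrative; in a real application they would come from your domain:

```python
import re

# Illustrative allowlist of acceptable values (accept known good).
ALLOWED_COUNTRIES = {"AU", "DE", "NZ", "US"}

def validate_country(code):
    """Accept known good: anything outside the allowlist is rejected."""
    code = code.strip().upper()
    if code not in ALLOWED_COUNTRIES:
        raise ValueError(f"unsupported country code: {code!r}")
    return code

def sanitize_username(raw):
    """Sanitize: keep only letters, digits, and underscores,
    capped at 32 characters."""
    return re.sub(r"[^A-Za-z0-9_]", "", raw)[:32]
```

The key property in both functions is that the acceptable output is defined positively; everything else is stripped or refused, rather than trying to enumerate every bad input.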
In addition to these steps, review:
- The criteria users need to pass to authenticate with your applications.
- What users can access once they’re authenticated.
Do users have too broad a level of access throughout the system? Are they able to see things in the system that they shouldn’t? If so, revise the rules to make them more restrictive.
I don’t mean on a case by case basis. I mean that you should analyze all of the existing rules and then consider that access in light of the current needs of your users.
Ensure that catch-all rules are either made only with good reason or not made at all. Then, adjust the rules to make sure that users can access what they need, but nothing more. Also, ensure that these rules are clear, unambiguous, and well documented.
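One way to keep access rules clear and unambiguous is to express them as an explicit, deny-by-default permission map. A minimal sketch, with hypothetical role and permission names of my own invention:

```python
# Deny by default: anything not explicitly granted here is refused.
ROLE_PERMISSIONS = {
    "viewer": {"report:read"},
    "editor": {"report:read", "report:write"},
    "admin":  {"report:read", "report:write", "user:manage"},
}

def is_allowed(role, permission):
    """Unknown roles and unlisted permissions fall through to 'no'."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

Because every grant is written down in one place, reviewing the rules in light of your users’ current needs becomes a matter of reading one table rather than hunting through the codebase.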
After analyzing, simplifying, and validating, it’s time to check what’s already happening. Specifically, it’s time to check what information your applications are giving away. What I mean by that is:
- What information is being leaked out in logs or by applications which form part of the software stack on which your application rests?
- Do your web servers give out information about themselves, such as build numbers or details about the operating system on which they rest?
- Does the information which is being written to your logs contain sensitive data, such as usernames, passwords, directory or database structures?
While it’s often very helpful to know, in intimate detail, what was happening at the time an error occurred, that information may not stay solely in your hands.
If the information’s being written to an external service, or the log servers are stored in a data center outside of your control, can you be sure that no one else can access them?
For that reason, review the information that is written to them, and be judicious in your assessment of what can and cannot be written there.
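A simple safeguard is to redact sensitive fields before a record is ever written out. The set of keys below is illustrative; extend it to match whatever your own applications handle:

```python
# Keys treated as sensitive; an assumption for this sketch.
SENSITIVE_KEYS = {"password", "token", "secret", "authorization"}

def redact(record):
    """Return a copy of a log record with sensitive values masked,
    so they never reach a log server you may not fully control."""
    return {key: ("[REDACTED]" if key.lower() in SENSITIVE_KEYS else value)
            for key, value in record.items()}
```

Running every record through a filter like this, at the point where logging happens, is far more reliable than trusting each developer to remember what not to log.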
Next, what external libraries are you using? Before we dive in too deeply, I’m not encouraging you to never use external libraries. Quite the contrary.
I do encourage you to use them. Doing so reduces the amount of code you need to write, as well as the maintenance burden. These two advantages cannot be overstated.
However, what’s your process for assessing the quality and security of libraries before you use them? How are you assessing that these libraries don’t have bugs which in time may allow your software to be breached?
Your developers could read through the code (and that’s not a bad thing to do for a whole host of reasons). But, that takes time. So, to aid in this, ensure that you’re using a library assessment tool, such as Roave Security Advisories for libraries written in PHP. This package:
Ensures that your application doesn’t have installed dependencies with known security vulnerabilities
I’m not as familiar with other languages. But here are some security advisory scanners, databases, or services which I have found.
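Whatever tool you pick, the underlying idea is the same: compare your installed dependencies against an advisory database. A toy sketch of that comparison (the advisory entries here are invented; real scanners such as Roave Security Advisories pull them from a maintained database):

```python
# Invented advisory entries, for illustration only.
KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"),
    ("oldframework", "0.9.1"),
}

def vulnerable_dependencies(installed):
    """Return the (name, version) pairs that match a known advisory."""
    return sorted(pair for pair in installed if pair in KNOWN_VULNERABLE)
```

The value of a real tool lies entirely in the advisory feed being kept up to date, which is why this check belongs in your build pipeline, not in a script run once and forgotten.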
Do you know what applications you have? Do you know what they were created to solve? Do you know who uses them? Do you know when they’re used?
It’s quite common, as businesses grow, to see a range of applications be created to solve a veritable cornucopia of needs. Applications can be created as one-off solutions, or they can be dedicated solutions.
Regardless, often they’re not tracked and documented. Given that, it’s hard, if not impossible, to know how access is being made available to your systems, by whom, and when. When that’s the case, how can you be sure that your applications and your systems are secure?
This isn’t application specific, more of an IT policy. But it’s important nonetheless. Make sure that your organization conducts an audit and records information on every application, who can use it, when they can use it, and why. If an application’s no longer required, ensure it’s decommissioned.
Build a Development Security Mindset
Finally, build a security mindset in your development team. If you fail to do this, it’s almost inevitable that, before long, all your work will have been in vain. If developers don’t appreciate security, they won’t consider it when they’re coding. And, invariably, they’ll make the same mistakes all over again.
So find someone who can champion security on your development team and then:
- Empower them to encourage your team (or just plain nag them) to code securely.
- Empower them to teach your team all about security.
- Show your team that security is as important as good design and testing.
- Make it a part of the culture to find and kill bugs.
- Engender a positive reward culture for not creating security loopholes in the first place.
- Breed that mindset in all of your developers, both young and old.
No one’s ever too young or too old to learn — especially about security.
Sure, legacy applications can have a range of poor decisions (or good decisions based on a less than ideal set of circumstances) baked in. But those decisions aren’t set in stone. They can be changed.
Unlike physical structures, such as bridges, roads, and buildings, software is adaptable, malleable, and flexible. And as so much of modern life depends on it, we have to ensure that our applications are as secure as possible.
I encourage you to take all of the suggestions in this article into consideration and begin refactoring your legacy applications so they’re as secure as you can make them.
It will take time. But it’s worth it. To help you out, here’s one final list, a selection of books on refactoring legacy applications. I hope they will make the process easier.
About the author
Matthew Setter is an independent software developer and technical writer. He specializes in creating test-driven applications and writing about modern software practices, including continuous development, testing, and security.