As I write this, some of the country’s brightest developers and practitioners of IT security are at the RSA Conference in San Francisco to discuss the newest techniques and technologies that – hopefully – will keep our networks and computer systems secure.
For this article, though, I’d like to focus on where many organizations are falling short today in defending against current threats and especially the more dangerous advanced persistent threats. I consulted with Michael Sutton, vice president of security research for Zscaler and head of Zscaler ThreatLabz, to put together this list of common shortcomings that just might provide the opening that attackers look for.
* Failing to stay current with modern technologies and techniques.
Many companies continue to do what they have always done to protect their IT systems, but that isn’t enough for today’s security landscape. There are two staple technologies that have become the norm and have pretty much 100% penetration: host-based anti-virus and appliance-based URL filtering. They are certainly important and they will prevent some attacks, but neither is going to catch more advanced threats.
The IT security landscape has changed so much in the past few years that companies that continue to rely only on anti-virus and URL filtering are at high risk because modern-day threats can skirt those defenses too easily. It’s important to layer on additional current technologies that have the ability to detect today’s types of threats.
* Not utilizing a comprehensive approach to defending the mobile and “off network” world.
Workforce mobility has changed dramatically in recent years. The typical employee today works off-premises at least occasionally and uses mobile devices to check email and access applications. Obviously the mobile ecosystem can’t be defended the same way as traditional network-connected devices. For example, it can be difficult to run anti-virus software on a smartphone because it drains the battery too quickly.
Even when someone is using a laptop that is off the enterprise network for a time, it’s possible for the device to pick up malware that is introduced to the enterprise network when it is reconnected. A perimeter defense tool would never catch this infection and the malware is free to spread to other devices.
In general, global visibility of devices and data is becoming much harder due to mobility. Many companies haven’t yet solved the challenge of how to track the traffic and the patterns from users on their smartphones and tablets, or working on their laptops at a Starbucks. That’s a mighty big blind spot for most organizations.
* Using disparate security technologies and not correlating the details from them.
Visibility in detecting security events is another huge weakness for most enterprises. Companies buy best-of-breed solutions from different vendors and then don’t have the means to correlate and analyze information across those solutions. The SIEM industry was created to pull all these log files back into one location for analysis, but few companies have actually achieved this objective.
It’s more common for organizations to look at incident reports individually by location or by technology. This approach fails to connect the dots because, when we are dealing with targeted attacks and APTs in particular, the attacker is acting like a sniper. So for example, there might be a little bit of traffic hitting a company’s New York office, a little bit hitting the London office, and a little bit hitting Hong Kong. It’s only by looking at the full global picture that you see all three people targeted were executives, and they were all targeted with a similar social engineering attack. Without that global visibility and the ability to correlate events, the attack may go unnoticed until it’s too late.
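The kind of cross-site correlation described above can be sketched in a few lines. This is a simplified illustration, not a real SIEM query: the event records, field names, and threshold are all hypothetical, standing in for log data pulled from per-office systems.

```python
from collections import defaultdict

# Hypothetical per-office security events, as a SIEM might collect them.
events = [
    {"office": "New York",  "target_role": "executive", "vector": "spearphish-invoice"},
    {"office": "London",    "target_role": "executive", "vector": "spearphish-invoice"},
    {"office": "Hong Kong", "target_role": "executive", "vector": "spearphish-invoice"},
    {"office": "New York",  "target_role": "engineer",  "vector": "drive-by"},
]

def correlate(events, min_offices=3):
    """Group events by (target role, attack vector) and flag any
    pattern spanning several offices -- the kind of campaign that
    is invisible when each site reviews its incidents alone."""
    groups = defaultdict(set)
    for e in events:
        groups[(e["target_role"], e["vector"])].add(e["office"])
    return {k: sorted(v) for k, v in groups.items() if len(v) >= min_offices}

print(correlate(events))
# Only the globally correlated view reveals that executives in all
# three offices received the same social engineering attack.
```

Each office on its own sees a single unremarkable event; only the grouped view surfaces the sniper-style campaign.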
* Putting most of the enterprise’s resources into prevention and ignoring detection and remediation.
A comprehensive approach to IT security includes prevention, detection and remediation. Most companies spend 90% of their security budget on prevention in the belief that they should focus on stopping or preventing attacks in the first place. From his vantage point at Zscaler ThreatLabz, Sutton sees that most companies are already infected to some degree. “Of course we want to protect and defend against attacks before they affect us if at all possible, but we absolutely can’t ignore the detection side or the remediation side,” says Sutton. “We know we’re going to get some infections and we need to limit that damage as quickly as possible and isolate the problem and do the appropriate remediation steps. Enterprises need to adopt that focus.”
* Not analyzing outbound traffic.
On the detection side of security, companies need to be inspecting outbound traffic—but rarely do. They should be looking for outbound traffic that suggests an infection, such as communication going to a command and control server. And that can’t just be checking it against a blacklist of known malicious sites, because those sites change all the time. It’s important to inspect every part of that request to say, ‘I don’t know where this request is going because I’ve never seen that destination, but this has all of the characteristics of a botnet. I am going to flag it as such and block that traffic.’
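One characteristic such an inspection might check is whether an unknown destination looks algorithmically generated, as botnet command-and-control domains often are. The sketch below is a deliberately minimal stand-in for a real inspection engine: the known-good list, entropy threshold, and domain names are all hypothetical, and a production system would weigh many more signals (beaconing intervals, request headers, payload patterns).

```python
import math
from collections import Counter

# Hypothetical allow-list of destinations seen before and trusted.
KNOWN_GOOD = {"example.com", "zscaler.com"}

def entropy(s):
    """Shannon entropy of a string. Randomly generated (DGA-style)
    domain labels tend to score higher than human-chosen names."""
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def looks_like_c2(domain, threshold=3.5):
    """Flag an outbound request whose destination is both unknown
    and has DGA-like randomness in its first label."""
    if domain in KNOWN_GOOD:
        return False
    label = domain.split(".")[0]
    return entropy(label) >= threshold

print(looks_like_c2("x7f3q9zk2mwp8r4t.net"))  # unknown, high-entropy -> True
print(looks_like_c2("example.com"))           # known destination   -> False
```

The point is the shape of the check, not the specific heuristic: rather than asking “is this destination on a blacklist?”, the inspection asks “does this request behave like botnet traffic?” even for a destination never seen before.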
* Failing to do forensics after a breach.
Breaches do happen, even to the best of companies. When a breach occurs, it’s important to figure out how and why it happened in order to prevent it from happening again, as well as to figure out how extensive the breach may be. If the company doesn’t have the expertise in-house to do the forensic analysis, it’s worthwhile to look for outside help to get the answers that can help prevent another breach.