Archive for November, 2008

The New PCI 6.6

Posted in Application Security, PCI on November 20, 2008 by Matthew Flick

All Your Public-Facing Web Apps Are Relevant To Us

I’m going to start off this post with the moral of the story: Good intentions often have bad, unintended consequences. The following is the ‘Testing Procedures’ text of requirement 6.6 from the new PCI DSS v1.2 (source: https://www.pcisecuritystandards.org/security_standards/pci_dss_download.html):

For public-facing web applications, ensure that either one of the following methods are in place as follows:

• Verify that public-facing web applications are reviewed (using either manual or automated vulnerability security assessment tools or methods), as follows:

– At least annually
– After any changes
– By an organization that specializes in application security
– That all vulnerabilities are corrected
– That the application is re-evaluated after the corrections

• Verify that a web-application firewall is in place in front of public-facing web applications to detect and prevent web-based attacks.

    Note that for ease of reading I’m going to shorten “PCI DSS v1.2 Requirement 6.6” to just “PCI 6.6” since nearly everyone already refers to it as such.

Without further clarification, this requirement now compels an organization that stores, processes, or transmits cardholder data (“CHD”) to apply PCI 6.6 to all of its “web applications” that are publicly accessible, regardless of whether those applications participate in the storage, processing, or transmission of CHD. We must also assume that the requirement applies regardless of other security controls, such as network segmentation. Disregarding all other changes to the PCI DSS, this one requirement could significantly increase the level of effort and cost of PCI assessments.

Before I get into advice for dealing with the new requirement, let’s examine the two options in relation to the change. The first option is to have an application security organization perform security testing on the applications, correct identified vulnerabilities, re-test, and repeat if necessary. Furthermore, such testing must occur “at least annually”, which should not be a huge problem…and “after any changes”, which could very easily push an organization to simply stop making changes to its web applications. Unintended consequences.

The second option seems rather simple, straightforward, and innocuous unless you have actual experience with web application firewalls (“WAFs”). At a high level, WAFs have two basic modes of defense: 1. blacklisting attack strings, typically pushed down by the vendor like anti-virus signature updates, and 2. custom rule checking, which requires training the WAF to understand more of the application it is protecting. Both modes can often lead to false positives, and the second and more effective mode requires a lot of human interaction throughout the system’s lifecycle. Add to the mix the large price tag for commercial WAFs and the new requirement to protect all public-facing web applications, and what you likely end up with is an organization implementing an easily defeated blacklist of the standard attack strings (XSS, SQL injection, etc.). Unintended consequences.
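To see why a bare blacklist is so easily defeated, consider this minimal sketch of one. The patterns and payloads are hypothetical illustrations, not any vendor’s actual signatures:

```python
import re

# Hypothetical blacklist of classic attack signatures, the kind a
# WAF might ship with out of the box.
BLACKLIST = [
    re.compile(r"<script"),       # reflected XSS
    re.compile(r"'\s*or\s*1=1"),  # SQL injection tautology
]

def waf_allows(request_param: str) -> bool:
    """Return True if the naive blacklist would let the value through."""
    return not any(p.search(request_param) for p in BLACKLIST)

# The textbook payloads are caught...
assert not waf_allows("<script>alert(1)</script>")
assert not waf_allows("' or 1=1 --")

# ...but trivial variations slip right past a case-sensitive,
# encoding-unaware filter.
assert waf_allows("<ScRiPt>alert(1)</ScRiPt>")
assert waf_allows("%3Cscript%3Ealert(1)%3C/script%3E")
```

A real WAF in blacklist mode does more normalization than this, but the arms race is the same: every new evasion technique requires a new signature, while the underlying vulnerability sits untouched.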

I do not want to be accused of PCI hate speech. In fact, I would like to sincerely applaud the PCI Security Standards Council’s efforts in helping to drive application security education and strongly encouraging organizations to address the problems. My concern, though, is that the more stringent the requirements become, the more likely organizations will continue (or begin) to do only the bare minimum in order to “check the box” instead of fixing the root cause of the problem. This is similar to the problem of progressively raising taxes: the higher the tax rate climbs, the more people will invest in identifying loopholes, legal or otherwise. Or, in Star Wars terms: “The more you tighten your grip, Tarkin, the more star systems will slip through your fingers.” –Princess Leia

    With this information in mind, we now come to the issue of how to approach PCI 6.6. I’m going to suggest the most pragmatic approach that should work for most or all organizations, or at least those organizations that must endure PCI DSS assessments. In step-by-step format, naturally:

1. Classify applications – Separate all public-facing web applications into a grid by user type (e.g. internal users only, external users only, mixed) and data sensitivity (e.g. CHD, public, internal/corporate, internal/sensitive).
2. Shorten the list – If possible, move all web applications with internal users only to the internal network and provide external access via VPN. This should be an obvious move and something that should have been done well before PCI 6.6, but you might be surprised how often I see this situation.
3. Create an assessment plan – First determine the budget for performing automated runtime vulnerability scans (the “cheap” option) against all remaining public-facing web applications. Then the remaining budget, if any, can be reserved to add more effective assessment tasks to the most relevant applications, obviously starting with those that store, process, or transmit CHD.
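To make the grid concrete, here is a minimal sketch of steps 1 through 3 in Python. The application names and category labels are hypothetical examples, not a prescribed taxonomy:

```python
# Step 1: hypothetical inventory as (name, user_type, data_sensitivity).
apps = [
    ("payment-portal",   "external", "CHD"),
    ("marketing-site",   "external", "public"),
    ("hr-portal",        "internal", "internal/sensitive"),
    ("partner-extranet", "mixed",    "internal/corporate"),
]

# Step 2: internal-only apps move behind the VPN and drop off the list.
in_scope = [a for a in apps if a[1] != "internal"]

# Step 3: spend the assessment budget on CHD applications first.
priority = {"CHD": 0, "internal/sensitive": 1, "internal/corporate": 2, "public": 3}
assessment_order = sorted(in_scope, key=lambda a: priority[a[2]])

for name, users, data in assessment_order:
    print(f"{name}: {users} users, {data} data")
```

The output puts the CHD application first in line for the deeper (and more expensive) assessment work, with everything else receiving at least the automated scan.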

    As a general rule, I suggest using the results of a vulnerability scan/assessment against one application to identify and remediate potential weaknesses in other applications that either use the same codebase or were developed in similar fashion. This approach should lessen the number of findings in subsequent tests.

    If all the advice in this post seemed familiar or common sense, then congratulations…you’ve been paying attention! The security industry has been evangelizing a risk-based approach for a long time now. Or at least some of us in the industry have been.

    Nmap’s New Math? 9 = 8 but does 3,674 = 65,536?

Posted in Penetration Testing, Uncategorized, Vulnerability Assessment on November 13, 2008 by Tim

Fyodor’s inclusion of the results from the Top Ports Project into the latest version (4.76) of Nmap is a welcome addition for information security professionals who need to perform port scans of large networks in short periods of time. *cough* Consulting Firms *cough*

However, the claim that using the “--top-ports” switch to scan only the top 3,674 TCP ports is 100% effective opens the door for yet another false sense of security. I wholeheartedly believe that it was NOT Fyodor’s intention for organizations to rely solely on port scans using this configuration to determine which ports are open. However, it does not require a leap of faith to believe that some less “offensive minded” security professionals will now use this configuration to get a “complete picture” of their networks.

    Why is this a problem? If you are reading this blog, you probably already know where I am going with this. It doesn’t require another leap of faith to believe that an attacker or offensive minded individual would examine the “Top Ports” list and code their malware or configure their tools to operate on ports that are not included in the list. The result? Those who subscribe to this complete picture mentality will not discover the open ports.
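The arithmetic behind that concern is worth spelling out: even a “top ports” list as large as 3,674 entries leaves the overwhelming majority of the TCP port space unprobed.

```python
TOTAL_TCP_PORTS = 65536  # ports 0-65535
TOP_PORTS = 3674         # ports covered by --top-ports 3674

unscanned = TOTAL_TCP_PORTS - TOP_PORTS
print(unscanned)                               # 61862 ports never probed
print(f"{TOP_PORTS / TOTAL_TCP_PORTS:.1%}")    # roughly 5.6% of the port space
```

In other words, an attacker who binds a backdoor to any of the other ~61,862 ports is invisible to anyone treating a --top-ports scan as a complete picture.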

So how do we effectively leverage the hard work of the Top Ports Project? I’m not entirely sure yet. Perhaps we use the “--top-ports” switch to perform differential scans and continue to use “-p-” to perform baseline scans? Or maybe we use the “--top-ports” switch to perform discovery scans and “-p-” to perform enumeration?

    I do know that the information that has been provided as a result of the Top Ports Project is valuable. How do you think we can effectively use this information?

    Remediating Common PCI SSL Vulnerabilities with a Simple Windows Registry File

Posted in PCI, Uncategorized on November 12, 2008 by Tim

    Recently I was working with a client who was struggling to remediate two vulnerabilities identified by their quarterly perimeter PCI scans. Specifically, they needed to remediate the following vulnerabilities:

    • SSLv2 Enabled
    • Weak SSL Encryption Ciphers Enabled

With these vulnerabilities being so common amongst those bound to the PCI DSS, I would have hoped that better remediation information existed beyond Microsoft’s overcomplicated Knowledge Base article.

    In response to this lack of quality remediation information, I created the following Windows Registry file that aims to simplify the remediation of both vulnerabilities. This file has been tested on IIS 6.0 (Windows 2003) and disables the following weak ciphers, hashing functions, and protocols associated with SSL:

    • Weak Ciphers – DES 56, NULL, RC2 40/128, and RC4 40/56/128
    • Weak Hash Functions – MD5
• Weak Protocols – PCT 1.0 and SSL 2.0

    You can download the registry file from our website, here.
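For readers who prefer to build the file themselves, the entries follow the SCHANNEL registry layout documented in Microsoft’s Knowledge Base. This is an abbreviated sketch showing the pattern for a few of the items above; the actual file repeats the same “Enabled”=0 entry for each weak cipher, hash function, and protocol listed:

```ini
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\PCT 1.0\Server]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 40/128]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Hashes\MD5]
"Enabled"=dword:00000000
```

A reboot is required for SCHANNEL changes to take effect.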

    The standard “Backup your registry first” and “Test on non-production systems first” rules apply. Happy remediating! (and more importantly…SECURING!!!)

    Application Security Industry: 2008 Report Card

Posted in Application Security on November 6, 2008 by Matthew Flick

    I have had many discussions this year regarding the future of the application security industry and even more about its current state. It’s interesting how people of such varying backgrounds will have similarly varying views; this short article is designed to capture those views and hopefully drive some productive discussion as a result.

    Where are we now? Should be a simple question, right? Let me summarize in three main categories:

Maturity – Purists aside, “Application Security” today means the security of web applications and related things, such as web services. Application Security started to grow up as older information security areas–like network security and vulnerability management–matured quite well. I am quite sure it seemed at the time a great idea to port the mature security ideas and technologies over to the application security world. However, as usual, we have learned the hard way that the proverbial square peg does not fit into a round hole. Result? Application Security is in the early stages of being adopted into organizations’ information security programs. **Golf Clap**

Technology – I remember from childhood riding long distances (>1000 miles) with my family of five in a compact car to visit the in-laws. Of the many shortcuts we tried using our trusted 10-year-old map, I remember one that ended with a missing bridge over the Mississippi River. The recent development of app sec tools eerily reminds me of this trip: choosing the wrong path and driving insistently down that path until you end up with very wet socks. Luckily for my family, the car brakes worked…unfortunately I don’t see anyone even trying to find the app sec technology brakes.
So who is driving us down the wrong path, and why is it the wrong path? The typical response would be the vendors, but I disagree. Runtime scanners, source code scanners, application layer firewalls–they all perform as designed (in most circumstances). The problem lies in how these tools are sold and used as a method to secure vulnerable applications. A slightly lesser but nearly insurmountable problem is that of these security tools keeping pace with the fast-growing arena of application technology and its subsequent vulnerabilities. Both of these problems illustrate why organizations need to focus more on the root cause of the vulnerabilities rather than on the detection and prevention of attack vectors. The unfortunate fact is that application security technologies cannot, and may never be able to, keep pace with vulnerability and attack research. This is the wrong path. This is why we need to hit the brakes and find a better route.

Approach – Is that light? Are we in a tunnel? Yes and no. The application security world has witnessed several of its citizens make wonderful presentations on why we need to…**drumroll**…incorporate security into the SDLC! Or at least it was witnessed by the five of us not watching the more exciting and entertaining presentations on the latest and greatest XSS and CSRF attacks. While the idea of incorporating security into the world of development is theoretically valid, when put into practice the wheels tend to fall off (or in worse cases, explode). After assessing so many different environments and working with clients to build practical and effective application security programs, I’ve all but killed the nerdy theorist inside of me. The old “one size fits all” adage really starts to become annoying!
    I am not arguing the approach nor will I espouse a new theory of my own–remember I said it is valid. Instead I will just note that we are, as an industry, still at the theoretical stage. At least I can cross off “job security” from my list of worries.

    Where are we going? If the first half of this post was not depressing enough for you, go back and read it again more thoroughly. Then read on. Here is a sampling of quotes I collected this past year on the question of where the application security industry will be in 3-5 years:

    “More of the same. New technology maybe, same attacks against both the new and old technology.” –security consultant

    “I don’t think much will change. With Flash and AJAX growing there will be new opportunities there, but not much else will be different.” –application security manager

    “SaaS is the future of app sec. The tools are quickly getting smart enough to attack application business logic and will revolutionize the industry.” –vendor marketing guru (shocker)

    “More companies will realize application security is something they have to deal with.” –security guy

    Job security…oh right, I already crossed that off. Is this the best we can do? Inch our way further down the wrong path? Marginally better tools, more of the same attacks against current technology, and bigger budgets for the same insufficient solutions? I cannot be the only one thinking that this is a problem. Sadly, I have to somewhat agree with the comments above. In dealing with people from all sides of the aisle, it does appear that the application security community is settled comfortably with the status quo. With the massive increase in application security spending over the last few years, can anyone blame them? That being said, maybe I am the only one plotting and scheming of a better way…and maybe that’s not such a bad thing.

    Development Double Agent

Posted in Application Security on November 4, 2008 by Matthew Flick

Of the many ideas floating around the application security industry lately, there is one often overlooked but very effective approach: spying. Too often, security personnel look at developers as improperly educated code jocks, akin to Hollywood’s portrayal of “hackers” in the 1990s. Similarly, developers see the security analyst as an idealistic zealot with no concept of how things are in the “real world.” So the goal is to bridge the gap between the security and development groups. That bridge is a trusted developer who has a technical understanding of application security issues.

    Who should be the bridge? There is no perfect answer to this question, so I’ll give some analysis of your options.

1. Current Developer – By promoting–or at least elevating–a trusted senior developer to the position of “Security-Development Bridge”, an organization can expect a near-seamless transition from vulnerability detection to resolution (assuming the recommendations below are implemented properly). As a developer, the individual will understand the unique demands and expectations that the business pushes on its developers. With the appropriate training, the individual can also help to translate vulnerability findings into development remediation plans. With insider information, this double agent could also help the Security group better understand how the development group implements their recommendations. The related personal growth and resume enhancement also make this a great opportunity for the developer in question. This should be considered your best option.
2. Current Security Analyst – This would not be a bad option if an organization is building a new application security program or just getting its feet wet. The window of opportunity for the Security group to successfully plant someone inside the Development group is small, but it exists. Any pre-existing resentment or prior disagreements may end the initiative before it starts, regardless of whether the individual analyst was directly involved. Of course prior development experience is highly recommended, as is detailed training on the organization’s SDLC and related policies and procedures. Being a current employee of the organization will allow the individual to immediately focus on learning how to best fulfill the role rather than laboring through the on-boarding process.
    3. Experienced Outsider – If option 1 and 2 are not possible or preferred, an organization may have to look outside its doors to fill the role. All of the recommendations and rules of experience and group interaction described above apply here as well. Depending on the amount of ongoing development, the role is likely to be a full-time position, although providing hands-on assistance to developers may be possible and help to build trust.
4. Outsourcing – Despite being in a position to benefit from a long-term staffing opportunity, I generally do not recommend outsourcing the double agent position. It may be more difficult to build trust amongst the developer ranks for a true outsider. That said, a consultant could be in a particularly effective position to build a tunnel if a wall already separates the Security and Development groups. Organizations and individuals alike are often more willing to trust an experienced and knowledgeable third party than their enemies across the cube wall.

    How should we build this bridge? Similar to nearly all other security initiatives, there should be support from management. I doubt the upper echelon of most organizations will want or need to involve themselves in such minor details, but at least the heads of the Security and Development groups need to fully support the plan and its details. Some other suggestions:

    • The Security and Development groups must very clearly define the role and responsibilities of their new agent.
    • As indicated by the title of this article, the double agent should be organizationally placed in the Development group. This can help to build trust amongst the developers and improve the application security program where it typically struggles most: remediation.
    • Encourage or initiate teamwork. A friendly group lunch, happy hour, or other event could very easily improve the ROI of the double agent initiative. Yes–when in doubt, add food!