Archive for the Penetration Testing Category

GuestStealer Wrapup

Posted in Cloud Computing, GuestStealer, Penetration Testing, ShmooCon, Virtualization Security, VMware, Vulnerability Assessment on March 1, 2010 by Tony Flick

In addition to the previously mentioned Nmap script, GuestStealer has now made its way into a Nessus plugin and a Metasploit module. Nessus Plugin 44646 was released by Tenable a few weeks ago and the Metasploit module was pushed up to the trunk last week.

GuestStealer has been mentioned in several articles and blog posts recently, including DarkReading – Tech Insight: Securing The Virtualized Server Environment and The Hacker News Network. While most have been accurate, several early blog posts stated that GuestStealer used a cross-site scripting attack to steal the guests. To clarify and avoid any confusion: GuestStealer exploits the directory traversal vulnerability described in CVE-2009-3733. For further information, check out the presentation slides or presentation video.
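To make the distinction concrete, a directory traversal attack like CVE-2009-3733 works by encoding "../" sequences into a URL so the web interface serves files from outside its document root. Here is a minimal sketch that builds that kind of URL; the port, path depth, and target file are illustrative assumptions, not the exact strings GuestStealer or the Nessus plugin send.

```python
# Hedged sketch: construct the kind of URL-encoded directory-traversal
# request used to check for CVE-2009-3733 against a VMware web interface.
# The depth, port, and target path below are illustrative assumptions.

def traversal_url(host: str, target: str, depth: int = 6, port: int = 8222) -> str:
    """Return a URL that tries to escape the web root and read `target`."""
    hop = "%2E%2E/" * depth          # URL-encoded "../" repeated `depth` times
    return f"http://{host}:{port}/sdk/{hop}{target.lstrip('/')}"

# Example: the guest inventory file GuestStealer parses to enumerate guests.
url = traversal_url("192.0.2.10", "/etc/vmware/hostd/vmInventory.xml")
print(url)
```

Nothing here is cross-site scripting: no script is injected into anyone's browser; the server itself is tricked into reading a file it should not serve.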


LAN Party anyone? Let’s volunteer to hack Government websites…

Posted in Application Security, Government, Penetration Testing on June 21, 2009 by Matthew Flick

Would I volunteer my time? Sure, why not. Is it really a good or realistic idea to have our Military and Government solicit an army of volunteers to test their web sites? Probably not. Jeremiah Grossman, CTO and founder of WhiteHat Security, this past week voiced his opinion on a topic that isn’t entirely new, but hasn’t been brought up by an industry pundit for a while. He estimates that “fewer than ten percent of United States .GOV and .MIL websites are professionally tested for custom Web application vulnerabilities” and suggests getting vulnerability researchers to volunteer to test those web sites. I honestly didn’t see his blog entry until late Thursday and until now had sort of waited to see all the comments on his thoughts. Feedback seems to go back and forth, with the more detailed responses falling on the “not so good idea” side of the fence. Jack Mannino had a good response, which Jeremiah is planning to respond to.

Here are my thoughts: Have you ever worked directly for/with the government? That’s not a rhetorical question or me being a smart a$$ — seriously. It’s a given that the US Federal government doesn’t spend enough money or resources on cybersecurity [see previous blogs]. I cannot imagine how difficult it would be to attempt to coordinate these “volunteer tests” with the Federal Agency, the government security teams that protect the site, the network and system folks responsible for the site, and then all the contractors involved in monitoring and supporting the site… and that’s just a fraction of the folks on the government side. We’ve done contract work with the Federal Government, and doing a simple scan of a small environment from a single source address took a tremendous amount of coordination and planning. It would be insanity to try to coordinate dozens of volunteers. And that is just one of several pre-planning obstacles I can see.

What about from a slightly tangential ethics view: is it volunteering or freebies? The Federal Government, from what I’ve experienced, has pushed procurement and purchasing folks through different types of ethics training to prevent inappropriate kickbacks to individuals and organizations. Could a big corporate entity like “Big Yellow,” for example, be allowed to volunteer a team of young staffers to “pen-test”? Wouldn’t it then be ironic if that same company down the line got a chance to bid on projects or even technologies to be implemented? You have to expect someone out there will try to take advantage of it.

The one point Jack made that I’m not going to rehash too much, but do agree with, is incident response. Could you imagine being the contractor or GS-10 sitting in the SOC during “volunteer pen-test day”? If the government doesn’t have the tools to assess their own web sites, I wonder whether they would have the technologies or resources in place to review the generated logs and figure out what counts as “normal” versus “bad” traffic.

I’m not entirely sure if Jeremiah really thinks it’s a good idea or is throwing it out there for media fodder (I see SC Magazine already picked it up). It does bring up some interesting early debate but the more I think about it, it just doesn’t seem reasonable.

However, one thought I did come up with — and please pardon me if there is already an organization like this — takes a page from the National Guard. It would still require some money, background checks, a tremendous amount of coordination, and volunteering. How about a “Cybersecurity National Guard” unit, where people volunteer X hours a month, with testing government and military web sites as one of the core responsibilities? I’m not deeply familiar with the military and intelligence community programs and teams that do this, but this “volunteer” group could be staffed with vulnerability researchers who have extra time or want to do something more valuable for their (ISC)² CPEs ;)

Holes in Your Security Christmas Stockings

Posted in PCI, Penetration Testing, Vulnerability Assessment on December 31, 2008 by Tim

Over the Holiday season, I tended to my family’s computers for their annual check-up. As usual, I initially checked which Microsoft security updates were not installed. While their computers are configured to download and install Microsoft security updates automatically, several updates usually require manual interaction to install. After the Microsoft security updates were installed, I began the daunting task of installing the non-Microsoft application security updates and upgrades that have accumulated over the course of the year.

Similarly, most organizations have set up Windows Server Update Services (WSUS) or Systems Management Server (SMS) to apply Microsoft security updates. However, most organizations still have not implemented an enterprise-wide solution for applying security patches to non-Microsoft applications. Applications such as Adobe’s Acrobat and Flash or Sun’s Java Runtime Environment are often installed as part of a base laptop image or installed by employees at a later time. While their providers regularly release security updates, these applications remain at the patch level they had when they were installed. As a result, organizations remain extremely vulnerable through these non-Microsoft applications. For example, on December 5, 2008, US-CERT released an advisory (US-CERT Advisory TA08-340A) concerning security vulnerabilities that could allow an attacker to obtain complete control of systems running vulnerable versions of Sun’s Java Runtime Environment.

I am not recommending organizations abandon non-Microsoft products and would encourage organizations to evaluate the alternatives. The current problem is that non-Microsoft applications are often over-looked and the emphasis in patch management is on Microsoft products.
Several enterprise solutions exist for applying patches to non-Microsoft applications. Like Microsoft’s WSUS and SMS, these products are not perfect and have their own flaws. To implement an effective solution, the following best practices should be followed:

• Identify the applications that have valid business requirements

• Restrict users from installing other applications

• Implement an enterprise-wide solution that controls applying security patches to non-Microsoft applications
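Even before an enterprise-wide tool is in place, the underlying check is simple: compare what is installed against the minimum patched version you track for each approved application. A minimal sketch, with made-up inventory and baseline data standing in for whatever your asset management system actually reports:

```python
# Hedged sketch: flag third-party applications that are behind the
# minimum patched version an organization tracks. The application
# names and version numbers below are made-up examples.

def parse_ver(v: str) -> tuple:
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(p) for p in v.split("."))

# Minimum acceptable (patched) versions per approved application.
baseline = {"Adobe Reader": "9.0", "Java Runtime": "6.11", "Flash Player": "10.0.12"}

# What an inventory scan of one machine might report.
installed = {"Adobe Reader": "8.1", "Java Runtime": "6.11", "Flash Player": "9.0.115"}

# Anything older than the baseline needs a patch run.
outdated = {app: ver for app, ver in installed.items()
            if parse_ver(ver) < parse_ver(baseline[app])}
print(outdated)
```

The point of the first two bullets above is to make `baseline` small and authoritative: a short list of approved applications is far easier to keep patched than whatever employees happen to install.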

As Microsoft attempts to create more secure products, hackers are crafting malware to specifically exploit non-Microsoft products. For example, a Trojan masquerading as a plugin for Mozilla’s Firefox web browser was recently identified (Firefox Trojan). Non-Microsoft application security patching has been overlooked for many years and should become a major initiative for organizations.

Nmap’s New Math? 9 = 8 but does 3,674 = 65,536?

Posted in Penetration Testing, Uncategorized, Vulnerability Assessment on November 13, 2008 by Tim

Fyodor’s inclusion of the results from the Top Ports Project into the latest version (4.76) of Nmap is a welcome addition for information security professionals who need to perform port scans of large networks in short periods of time. *cough* consulting firms *cough*

However, the claim that using the “--top-ports” switch to scan only the top 3,674 TCP ports is 100% effective opens the door to yet another false sense of security. I wholeheartedly believe that it was NOT Fyodor’s intention for organizations to rely solely on port scans using this configuration to determine which ports are open. However, it does not require a leap of faith to believe that some less “offensive minded” security professionals will now use this configuration to get a “complete picture” of their networks.

Why is this a problem? If you are reading this blog, you probably already know where I am going with this. It doesn’t require another leap of faith to believe that an attacker or offensive minded individual would examine the “Top Ports” list and code their malware or configure their tools to operate on ports that are not included in the list. The result? Those who subscribe to this complete picture mentality will not discover the open ports.

So how do we effectively leverage the hard work of the Top Ports Project? I’m not entirely sure yet. Perhaps we use the “--top-ports” switch to perform differential scans and continue to use “-p-” to perform baseline scans? Or maybe we use the “--top-ports” switch to perform discovery scans and “-p-” to perform enumeration?
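One way to make the differential idea concrete: after a fast “--top-ports” run, compute exactly which ports that run skipped and feed them to a slower follow-up scan via “-p”. A minimal sketch, using a toy top-ports set as a stand-in for the real list that ships in nmap-services:

```python
# Hedged sketch of a "differential scan" helper: find the TCP ports a
# --top-ports run would skip, and collapse them into the comma/range
# syntax nmap's -p option accepts. top_ports here is a toy stand-in.

top_ports = {21, 22, 23, 25, 80, 110, 139, 443, 445, 3389}
remaining = sorted(set(range(1, 65536)) - top_ports)

def to_port_spec(ports):
    """Collapse a sorted port list into 'a-b,c,d-e' form."""
    spans, start, prev = [], None, None
    for p in ports:
        if start is None:
            start = prev = p
        elif p == prev + 1:
            prev = p
        else:
            spans.append(f"{start}-{prev}" if prev > start else str(start))
            start = prev = p
    if start is not None:
        spans.append(f"{start}-{prev}" if prev > start else str(start))
    return ",".join(spans)

# The resulting spec could drive something like: nmap -p <spec> <targets>
print(to_port_spec(remaining)[:60], "...")
```

With the real 3,674-port list this leaves 61,862 ports for the follow-up pass, which also answers the attacker-evasion concern above: the baseline “-p-” scan still sees whatever the top-ports scan would miss.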

I do know that the information that has been provided as a result of the Top Ports Project is valuable. How do you think we can effectively use this information?