Monthly Archives: January 2009

IT-GRC Benchmark Survey – Aberdeen Group – FREE Report (a $399 value)

Aberdeen Group, a well-known research organization, is conducting a benchmark survey on IT-GRC. The responses will form the foundation of its IT-GRC report in March.
Participants who complete the survey (15-30 minutes) will receive a complimentary copy of the final Aberdeen research report (a USD 399 value).

Please click on http://www.aberdeen.com/survey/it-grc-control to answer the survey.

Palo Alto Networks Hosting Webinar with Forrester on PCI Audit Process

Jan 29, 2009 (Close-Up Media via COMTEX) —

Palo Alto Networks will host a webinar with Forrester Research security and risk management analyst John Kindervag on Tuesday, February 10, at 10 a.m. PST / 1 p.m. EST.

PCI audits are often daunting, both in scope of effort and associated costs, and this informative webinar will review how to simplify the process through network segmentation and user-based security policies.

The October 2008 update of the PCI DSS documentation states that companies can reduce the cost and complexity of PCI compliance by using network segmentation to isolate the cardholder data in a secure segment. Without adequate network segmentation – sometimes called a “flat network” – the entire network is in scope of the PCI DSS assessment. This webinar will offer insight into the issues, challenges and strategies required to meet PCI compliance.
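
As a rough illustration of the segmentation point (not part of the webinar or the PCI DSS itself), the Python sketch below runs from a host on the general corporate network and simply tries to open TCP connections to hosts that are supposed to sit in the isolated cardholder segment; any connection that succeeds suggests the segment is not actually isolated. Hostnames and ports are placeholders.

```python
# Minimal segmentation spot-check, run from a host OUTSIDE the cardholder data
# environment (CDE): try to reach hosts that should be isolated inside it.
# Hostnames and ports below are placeholders for illustration only.
import socket

CDE_TARGETS = [
    ("pos-db.internal.example", 1433),      # hypothetical cardholder database
    ("payment-app.internal.example", 443),  # hypothetical payment application
]

def reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in CDE_TARGETS:
    status = "REACHABLE (possible segmentation gap)" if reachable(host, port) else "blocked"
    print(f"{host}:{port} -> {status}")
```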

http://paloaltonetworks.com/events/webinars/index.html

via Palo Alto Networks Hosting Webinar with Forrester on PCI Audit Process.

Archer Sets Its Sights On IT GRC Rival, Acquires Brabeion

Top contenders in the IT governance, risk, and compliance market merged on Tuesday as Archer Technologies announced it is acquiring Brabeion Software. Forrester projected consolidation as a key GRC market trend for 2009, and we explored the issue further for IT GRC vendors in our report, “Consolidation Looms for the IT GRC Market.”

This was a strong move for Archer, as other, larger vendors are closely eyeing the IT GRC space for acquisition potential. Along with the acquisition of Paisley by Thomson Reuters last month in the Enterprise GRC space, this is just the beginning of what’s to come over the next 12-18 months. The GRC market as a whole is extremely broad and ripe for growth, but it is also crowded with niche vendors. Market leaders and enormous outsiders will be eager to scoop up as much of the pie as possible, which means more deals are on the way.

via The Forrester Blog For Security & Risk Professionals.

The Death Of PCI DSS? Don’t Be Silly – Security Blog – InformationWeek

Posted by George Hulme, Jan 27, 2009 10:13 PM

Yes, in the past year two big retailers that were apparently compliant with the Payment Card Industry Data Security Standard were breached. Does that mean PCI DSS has grown increasingly irrelevant? That’s absurd.

First, I’m going to state the obvious to anyone who has studied IT security and compliance: being compliant with any mandate won’t make one secure. And it never will. I can’t make the case any better than Mike Fratto did in his earlier post, so I won’t even try. But the point is: focus on building a secure and sustainable infrastructure, no matter how big or small.

I’ve spent the better part of this decade interviewing IT security experts, vendors, chief information security officers, and other security managers in just about every industry. Generally — and I stress *generally* (as there are always exceptions) — retailers, manufacturers, and health care providers have tended to have the least mature security programs in place. Not true in every case, but I found it to be true often enough to see a trend.

Prior to the adoption of PCI DSS several years ago, online retailers and brick-and-mortar retailers barely paid attention to IT security. Trust me, in 2003, when it came to a Top 10 list of the most pressing IT objectives for most merchants, IT security ranked around 580th. They probably spent more time evaluating stationery than how to secure their databases, Web applications, and VPNs. So when you see retail IT managers arguing over whether they should install a Web Application Firewall, or conduct application security vulnerability assessments, or even do both, you can thank PCI DSS to a large degree.

PCI DSS has done more to raise security awareness among retailers than anything else I can think of. Even the torrent of breaches earlier this decade. And, while I can’t quantitatively prove it, PCI DSS has most certainly raised the security of the retail industry, in general.

The unfortunate breaches of Hannaford Bros. Co. and, more recently, Heartland Payment Systems, both of which were PCI compliant, don’t make PCI DSS irrelevant, obsolete, or worthless. Whoever thought PCI DSS would eliminate data breaches at retailers should probably make sure they don’t work in IT, and definitely make sure they don’t work in IT security. Its goal was to raise security among retailers and merchants, and to a large degree it’s been a success.

This standard isn’t perfect, not by a long shot. The standard won’t eliminate security breaches — and no one said it would. But the standard has increased the security of many retailers, and probably stopped quite a few breaches along the way.

Do we talk about the “failure” of law enforcement when someone commits a crime? Do we talk of the irrelevance of physical security when banks are robbed? Do we talk about how “worthless” the military is after a lost battle or two? Do we talk about how eating healthily and exercising are such a waste when we fall ill?

No, intelligent people do not do those things.

The battle against cybercrime is like any other long-term fight, and once in a while companies that strive to do everything right are going to find themselves breached. It’s the nature of this beast, not the fault of PCI DSS.

via The Death Of PCI DSS? Don’t Be Silly – Security Blog – InformationWeek.

USAJobs hacking raises further security questions

For the second time in 17 months, the federal jobs portal has been hacked. And the attack appears to be similar to the one that happened in Aug. 2007 when hackers stole data from about 8 percent of the more than 2 million job seekers on USAJobs.gov.

The Office of Personnel Management issued a statement Jan. 26 detailing the problem. In a press release, OPM says “[T]he Monster database was illegally accessed and certain contact and account data were taken, including user IDs and passwords, e-mail addresses, names, phone numbers and some basic demographic data. The information accessed does not include resumes. The accessed information does not include sensitive data such as social security numbers or personal financial data.”

The recurring success of hackers in getting into the Monster database is causing some concern among security experts.

Glenn Schlarman, who spent more than 25 years working on cybersecurity issues for the Office of Management and Budget and the FBI, says there have been questions around vendor cybersecurity for some time.

“Most of us believed for a long time that industry was better than government,” says Schlarman, who now works with Good Harbor Consulting. “I’ve long since given up on that theory. I don’t believe they are any better than the government.”

Schlarman, who retired in 2006, says in the past OMB was uneasy about the security of USAJobs.gov.

“The question always in my mind is: is OPM’s inspector general really performing the due diligence that the law requires them to perform?” he says. “I have to question that because this is the second breach that we know of, but because we don’t know if anyone is conducting the independent evaluations that the law requires, we don’t know that this is only the second time.”

The Federal Information Security Management Act (FISMA) requires agencies to make sure vendor systems that hold federal data or are connected to federal systems meet all the cybersecurity requirements. The law places the onus on the IGs to audit these vendor systems.

The OPM IG says in its semi-annual report to Congress for 2008 that it audited the security of USAJobs.gov on Sept. 5. But it is unclear whether this included Monster’s security.

An OPM spokesman was looking into whether the IG’s audit included Monster’s systems.

Monster spokeswoman Nikki Richardson says, “No company in our business can completely prevent unauthorized access to data. Monster’s Web site is designed as a search resource for our customers, and we need to balance that service with the need to provide the best practical security features we can.”

Richardson says the company does not comment on its security measures.

“We take this, and any incursion, extremely seriously,” she says. “Immediately, upon learning about this, Monster initiated an investigation and took corrective steps. The company continually monitors for any illicit use of information from our database, and so far, we have not detected the misuse of this information.”

The biggest risk federal job seekers face is phishing attacks, and OPM suggested that USAJobs.gov users change their passwords the next time they use the portal.

via Federal News Radio 1500 AM: USAJobs hacking raises further security questions.

Code Review or WAFs? PCI 6.6

by Ulf Mattsson – CTO of Protegrity – Wednesday, 28 January 2009.

Short answer: both. Requirement 6.6 of the PCI DSS cites the use of either a web application firewall (WAF) or code review. It’s far more effective to combine both.

Ultimately, you want to build the vulnerability scanning and testing phase into your development process. Realistically, however, enterprises should be more concerned about the applications they’ve already deployed than about revamping their QA process. Enterprises should attack the problem first by identifying all their sites and the applications running on them. An audit by a third-party expert and a thorough scan by a vulnerability scanning tool can give the enterprise a starting point for remediation. And a WAF will keep the criminals out and the applications running while you’re working to correct problems. The best web application firewalls feature threat-level-driven security policy escalation, capable of dynamically adjusting protection levels as conditions warrant.

Unfortunately there are many classes of vulnerabilities which automated tools cannot easily spot, so we must also utilize other methods for identifying the “false negatives” – vulnerabilities that exist in the code, but were missed. The two primary approaches used for web application testing are “Fault Injection” and “Source Code Analysis.” The former focuses on interactive testing of a website, trying to force error conditions.
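
As a toy example of the fault-injection approach (not Protegrity’s methodology, just a sketch against a made-up target URL), the script below sends a handful of hostile values for a single query parameter and flags responses that leak error signatures:

```python
# Toy fault-injection pass: send hostile values for one query parameter and
# flag responses that leak error details. Target URL and payloads are made up.
import urllib.error
import urllib.parse
import urllib.request

TARGET = "http://test-shop.example/search"
PAYLOADS = ["'", "' OR '1'='1", "<script>alert(1)</script>", "A" * 5000]
ERROR_SIGNS = ["sql syntax", "stack trace", "odbc", "exception"]

for payload in PAYLOADS:
    url = TARGET + "?" + urllib.parse.urlencode({"q": payload})
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            body = resp.read(20000).decode("utf-8", "replace").lower()
            hits = [sign for sign in ERROR_SIGNS if sign in body]
            if hits:
                print(f"possible fault: {payload!r} -> {hits}")
    except urllib.error.HTTPError as err:
        # 4xx/5xx responses surface here; a 500 often means a forced error condition
        print(f"HTTP {err.code} for payload {payload!r}")
    except urllib.error.URLError as err:
        print(f"request failed for {payload!r}: {err.reason}")
```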

Source Code Analysis looks at how data flows through an application and where the application might be manipulated. (These are necessarily simplistic explanations of complex procedures that are beyond the scope of this short article.) I prefer using a mix of both approaches, known as “Grey Box Assessment,” to gain the most complete picture possible of an application’s security profile. Grey Box combines aspects of White Box assessment, conducted with full access to an application’s functional specifications and source code, and Black Box assessment, where a tester begins work with absolutely no knowledge of the application.

Since there is potentially an infinite number of tests that can be run against an application, the best-practice approach is to risk-prioritize the work. Designate critical application areas as highest risk, continue downward in descending order of perceived risk, and thoroughly test what most needs to be tested. Then audit the network regularly to spot any problems, develop a process for patching and correcting code, and consider scheduling yearly security audits conducted by outside experts.

Code review and penetration testing, teamed with sound policies, procedures and smart technology, will help put malicious hackers out of the data-stealing business. Layered security is far from a new idea, but it will remain valid for years to come.

via Code Review or WAFs? PCI 6.6.

Retailer Wireless Devices Largely Unprotected

A new survey shows 44 percent of the wireless devices used by retailers are vulnerable to attacks by data thieves. And that’s the good news. A year ago, the same Motorola survey showed 85 percent of retailers were sitting targets for drive-by data attacks. New PCI standards phasing out WEP (Wired Equivalent Privacy)–the weakest form of encryption this side of no encryption at all–may hold the key to improved retailer wireless security.

The good news: A new survey shows retailers are beefing up their wireless security. The bad news: The same survey shows 44 percent of the wireless devices used by retailers are sitting targets for data thieves, suffering from weak encryption, data leakage, misconfigured access points and outdated access point firmware.

While that 44 percent number may seem shockingly high, Richard Rushing of Motorola’s Enterprise Mobility unit points to last year’s results that found 85 percent of retailers’ wireless devices were begging to be compromised.

“Retailers nationwide are improving wireless security, as quantified by the significant drop in vulnerable wireless devices that were discovered during this year’s monitoring efforts,” said Rushing, Motorola’s senior director of information security for mobile devices. “However, a significant majority of retailers are still susceptible to a network intrusion—a sign that wireless security remains an afterthought for many.”

The Motorola survey conducted by Rushing included a review of wireless data security at more than 4,000 stores in some of the world’s busiest shopping cities, including Atlanta; Boston; Chicago; Los Angeles; New York; San Francisco; London; Paris; Seoul, South Korea; and Sydney, Australia. While 68 percent of the sites were using some form of encryption for their laptops, mobile computers and bar-code scanners, 25 percent of those were still using outdated WEP (Wired Equivalent Privacy) deployments, the weakest protocol for wireless data encryption.

Altogether, Motorola discovered almost 8,000 APs, with 22 percent of them misconfigured. Another 10 percent of the APs’ SSIDs (Service Set Identifiers) were poorly named, making it relatively easy for potential data thieves to zero in on a store’s identity. More than 32 percent of retailers had unencrypted data leakage, while 34 percent had encrypted data leakage.

“As wireless exploded over the last few years, retailers had a bunch of devices that connected to the [store’s] network,” Rushing said. “Then, you didn’t have people who knew both wireless and security. The security model is just coming into play the last two to three years.”

Rushing said one of the more overlooked security issues with large retailers is the cookie-cutter approach to wireless technology. By using the same technology, configuration, security and/or naming conventions at all retail locations, vulnerabilities repeat themselves across the entire store chain, rendering them susceptible to attacks.

“The bad guys had a huge head start,” Rushing said. “We’ve caught up with them, but we’re not necessarily ahead of them.”

Helping the retailers play catch-up are companies like Motorola and Aruba Networks. Both have recently introduced wireless enterprise security product lines that store, process, transmit and protect wireless data, including credit card information.

Also pushing the retailers to greater wireless security is the Payment Card Industry council, which issues requirements for security management, policies and procedures. With PCI members including VISA, American Express, Discover Financial Services and MasterCard Worldwide, the council leverages the standards to force retailers to improve their wireless security.

If a breach happens, retailers not deploying PCI security standards run the risk of losing the ability to process customers’ credit and debit cards, or of incurring fines or restrictions on the use of customers’ cards. Both Motorola’s and Aruba’s enterprise wireless security systems are PCI-compliant.

Included in the PCI’s newest standards is a prohibition against new WEP deployments in the Cardholder Data Environment after March 31, 2009, and a requirement that WEP be eliminated from the CDE after June 30, 2010.

“Retailers are moving away from WEP more and more,” Rushing said. “Things are now moving in a different direction. It’s all becoming more mature. You have to deploy layered security.”

Still, 44 percent of retailers’ wireless devices are susceptible to unwelcome intrusions.

“If you’ve looked at wireless as long as I have, the shock goes away,” Rushing said. “It’s certainly better than it was, but, in my opinion, it’s a wonder there haven’t been more data thefts.”

via Retailer Wireless Devices Largely Unprotected.

e-Book: “PCI for Dummies”

Download this Free Book: Get the Facts on PCI Compliance and Learn How to Comply with the PCI Data Security Standard

Complying with the PCI Data Security Standard may seem like a daunting task for merchants. This book is a quick guide to understanding how to protect cardholder data and comply with the requirements of PCI – from surveying the standard’s requirements to detailing steps for verifying compliance.

PCI Compliance for Dummies arms you with the facts, in plain English, and shows you how to achieve PCI Compliance. In this book you will discover:

* What the Payment Card Industry Data Security Standard (PCI DSS) is all about

* The 12 Requirements of the PCI Standard

* How to comply with PCI

* 10 Best-Practices for PCI Compliance

* How QualysGuard PCI simplifies PCI compliance

via e-Book: “PCI for Dummies”.

Don’t Chase Checkboxes – Analytics – InformationWeek

Posted by Mike Fratto, Jan 22, 2009 04:19 PM

Drew Conry-Murray takes apart PCI in his recent blog PCI Is Meaningless, But We Still Need It. I agree with most of his points, but they mostly apply to companies that view compliance as a set of checkboxes that have to be filled in annually. Filling checkboxes is doomed to failure. Focus on the spirit of the requirements and your company’s security posture will be the better for it.

Organizations that try to regulate behavior, whether it’s the U.S. Department of Health and Human Services with HIPAA or the PCI Council with its requirements, are trying to articulate, in measurable ways, the features and functions that should be in place to protect personal information. That sounds easy in concept, but in practice, developing measurable technical requirements for a broad audience is an extremely difficult task. Requirements need to be specific enough to be addressable by the target audience while being broad enough that you don’t have to make modifications on a constant basis.

But if that’s all you’re looking at in a regulated industry — am I satisfying this or that line item — and not the big picture, you are missing the point.

Consider regulations and requirements as a codification of best practices. Picking on PCI 1.2 for the moment, if you read requirement 1 — “Install and maintain a firewall configuration to protect cardholder data,” there’s a whole lot of room for interpretation in that section and I can imagine a number of ways that I could configure a firewall to comply with the requirement, yet be “insecure.”

However, the responsible action is to look at what requirement No. 1 is driving at, which is to ensure that you have a properly configured firewall in place that allows only the necessary access in and out of sensitive areas, and that there is a formal process in place to initiate, review, justify, and test changes to the firewall. Seems like a best practice to me. If you adhere to the spirit of PCI requirement No. 1, then you can’t help but comply with the line items. I’d hope that any well-managed IT shop can do that with their eyes closed.
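
To make that concrete, here is a minimal Python sketch of the kind of rule review this implies. The rule format and addresses are a simplified stand-in, not any vendor’s syntax; it simply flags “allow” rules into the cardholder segment that are wide open on source or port, the sort of entries a formal change process should be forced to justify.

```python
# Toy review of a firewall rule set: flag "allow" rules into the cardholder
# segment that are wide open on source or port. The rule format and addresses
# are a simplified stand-in, not any vendor's syntax.
CARDHOLDER_NET = "10.10.20.0/24"

rules = [
    {"src": "10.10.5.0/24", "dst": CARDHOLDER_NET, "port": 1433,  "action": "allow"},
    {"src": "any",          "dst": CARDHOLDER_NET, "port": "any", "action": "allow"},
    {"src": "any",          "dst": "any",          "port": "any", "action": "deny"},
]

for number, rule in enumerate(rules, 1):
    too_broad = (
        rule["action"] == "allow"
        and rule["dst"] in (CARDHOLDER_NET, "any")
        and "any" in (rule["src"], rule["port"])
    )
    if too_broad:
        print(f"rule {number} allows broad access to the cardholder segment: {rule}")
```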

I know there are some really vague requirements, like 6.6, where one option for public-facing Web applications is to use a Web application firewall configured to detect and prevent Web-based attacks. What kind of Web application attacks? Cross-site scripting? Viruses transferred through HTTP downloads? SQL injection? Unicode attacks? All of them? None? How would you measure the effectiveness of the Web application firewall? What are the accepted practices and standards? Apparently, a clarification to section 6.6 is coming soon, but in the meantime, what do you do?

I think you can’t go wrong if, like any other best practice, you make every attempt to properly configure, document, and test your Web application firewall for your environment. Make the modification and testing of any Web application firewall rules part of your change control process.
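
One way to do that, sketched below with a placeholder URL and the assumption that the WAF answers blocked requests with HTTP 403, is a small regression test that replays known-bad requests after every rule change:

```python
# Toy WAF regression check for a change-control pipeline: requests that carry
# known attack payloads should come back blocked (assumed here to be HTTP 403).
# The application URL and parameters are placeholders.
import urllib.error
import urllib.parse
import urllib.request

APP = "http://app.example/profile"
ATTACKS = {
    "sql injection": {"id": "1 OR 1=1"},
    "cross-site scripting": {"name": "<script>alert(1)</script>"},
}

for label, params in ATTACKS.items():
    url = APP + "?" + urllib.parse.urlencode(params)
    try:
        urllib.request.urlopen(url, timeout=5)
        print(f"FAIL: {label} request was not blocked")
    except urllib.error.HTTPError as err:
        verdict = "blocked as expected" if err.code == 403 else f"unexpected HTTP {err.code}"
        print(f"{label}: {verdict}")
    except urllib.error.URLError as err:
        print(f"{label}: request failed ({err.reason})")
```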

You have to pass an annual audit, but you have a responsibility to protect your customer data from loss. Focus on protecting customer data and the rest will follow.

via Don’t Chase Checkboxes – Analytics – InformationWeek.

How to choose and use source code analysis tools

The high cost of finding and patching application flaws is well known. Wouldn’t it be cheaper to write secure code in the first place?

One of the fastest growing areas in the software security industry is source code analysis tools, also known as static analysis tools. These tools review source code (or in Veracode’s case, binary code) line by line to detect security vulnerabilities and provide advice on how to remediate problems they find – ideally before the code goes into production. (See How to Evaluate and Use Web Application Scanners for tools and services that search for vulnerabilities as an outside attacker might.)
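
To make the idea concrete, here is the kind of finding such a tool reports, sketched in Python rather than in any particular product’s output: user-controlled data concatenated into a SQL statement, alongside the parameterized version the tool would point you toward.

```python
# The kind of flaw a source code analyzer flags: untrusted input concatenated
# into a SQL statement, next to the parameterized fix it typically recommends.
import sqlite3

def find_user_unsafe(conn, username):
    # Flagged: user-controlled data flows straight into the query string.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_safe(conn, username):
    # Remediation: bind the value as a query parameter instead.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```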

The entire software security market was worth about $300 million in 2007, according to Gary McGraw, CTO at Cigital, Inc., a software security and quality consulting firm in Dulles, VA. McGraw estimates that the tools portion of that market doubled from 2006 to 2007 to about $180 million. About half of that is attributable to static analysis tools, which amounted to about $91.9 million, he says.

And no wonder; according to Gartner, close to 90% of software attacks are aimed at the application layer. If security were integrated earlier in the software development lifecycle, flaws would be uncovered earlier, reducing costs and increasing efficiency compared with removing defects later through patches or never finding them at all, says Diane Kelley, founder of Security Curve, a security consultancy in Amherst, N.H. “Although there is no replacement for security-aware design and a methodical approach to creating more secure applications, code-scanning tools are a very useful addition to the process,” she says.

Despite the high degree of awareness, many companies are behind the curve in their use of static analysis tools, Kelley says, possibly due to the big process changes that these tools entail.

Key decisions in source code analysis

1) Should you start with static tools, dynamic tools, or both?

In addition to static analysis, which reviews code before it goes live, there are also dynamic analysis tools, which conduct automated scans of production Web applications to unearth vulnerabilities. In other words, dynamic tools test from the outside in, while static tools test from the inside out, says Neil McDonald, an analyst at Gartner.

Many organizations start with dynamic testing, just to get a quick assessment of where their applications stand, McDonald says. In some cases, the groups that start this initiative are in security or audit compliance departments and don’t have access to source code. The natural second step is to follow up with static analyzers, enabling developers to fix the problems found by dynamic analysis tools. Some companies continue using both, because each type yields different findings.

An important differentiator between the two types is that static analyzers give you the exact line of code causing the problem, while dynamic analyzers just identify the Web page or URL causing the issue. That’s why some vendors offer integration between the two types of tools.

The chief scientist at a large software vendor says dynamic assessment tools “tend to be brute force.” “You have to hit every parameter to find the vulnerabilities, whereas static tools investigate the whole landscape of the application,” he says. He recently chose a code scanner from Ounce Labs, after outsourcing the work to Cigital since 2006. He became interested in application security when customers began requiring PCI DSS certification. He plans to add dynamic testing in the future, but the static analysis tool is the cornerstone of his application security program.

2) Do you have the source code?

Most static analyzers scan source code, but what happens if you want to analyze third-party software or code written so long ago that you only have the executable? In that case, Veracode, Inc. offers binary code scanning through a software as a service platform. “A vendor may not be willing to give you source code, but they will give you executables or binary,” Kelley says.

At the Federal Aviation Administration, Michael Brown, director of the Office of Information Systems Security, says he chose to use Veracode’s services this year because of the amount of vendor-written code the FAA anticipated using as a result of its modernization of the national airspace system. Brown says he wanted to ensure the code was not only functionally correct and of high quality but also secure. He wanted a service rather than a tool to reduce the need for training. So far, the results have been eye-opening, he says. “A lot of the code didn’t really take security into account,” he says. “There were cases of memory leaks, cross-site scripting and buffer overflows that could have been a cause for concern.”

3) What do you currently use for software quality?

Some tool vendors, such as Coverity, Inc., Klocwork, Inc., Parasoft Corp. and Compuware Corp., originated in the quality-testing arena and have added security capabilities, in contrast to vendors like Ounce and Fortify Software, Inc., whose tools were designed solely for security. It’s worthwhile to check into the quality tools you already use to see if you can leverage the existing relationship and tool familiarity. You should also consider whether it’s important to your organization to have the two functions merged into one tool in the long term, McDonald says. (See Penetration Testing: Dead in 2009? for more on the interplay of quality assurance and vulnerability detection.)

Source code analysis tools: Evaluation criteria

— Support for the programming languages you use. Some tools cover mobile platforms, while others concentrate on enterprise languages like Java, .Net, C, C++ and even Cobol.

— Good bug-finding performance, using a proof-of-concept assessment. Hint: Use an older build of code you had issues with and see how well the product catches bugs you had to find manually. Look for both thoroughness and accuracy. Fewer false positives mean less manual work.

— Internal knowledge bases that provide descriptions of vulnerabilities and remediation information. Test for easy access and cross-referencing to discovered findings.

— Tight integration with your development platforms. Long-term, you’ll likely want developers to incorporate security analysis into their daily routines.

— A robust finding-suppression mechanism to prevent false positives from recurring once you’ve verified them as a non-issue.

— Ability to easily define additional rules so the tool can enforce internal coding policies (a minimal sketch of a custom rule, with a suppression marker, follows this list).

— A centralized reporting component if you have a large team of developers and managers who want access to findings, trending and overview reporting.
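
As a minimal illustration of the custom-rule and suppression criteria above (a made-up policy, marker, and command-line interface, not any vendor’s format), a custom check can be as simple as a pattern the tool applies to every line, with an inline marker that silences findings a reviewer has already accepted:

```python
# Toy custom rule plus suppression marker. The policy (no MD5), the marker
# text, and the command-line interface are all invented for illustration.
import re
import sys

RULE = re.compile(r"\bhashlib\.md5\b")   # internal policy: no MD5
SUPPRESS = "# scanner:ignore"            # inline marker for reviewed findings

def scan(path):
    findings = 0
    with open(path, encoding="utf-8") as src:
        for lineno, line in enumerate(src, 1):
            if RULE.search(line) and SUPPRESS not in line:
                findings += 1
                print(f"{path}:{lineno}: policy violation: MD5 is not allowed")
    return findings

if __name__ == "__main__":
    total = sum(scan(path) for path in sys.argv[1:])
    sys.exit(1 if total else 0)
```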

Do’s and Don’ts of source code analysis

DON’T underestimate adoption time required. Most static analysis projects are initiated by security or compliance, not developers, who may not immediately embrace these tools. Before developers get involved, McDonald suggests doing the legwork on new processes; planning integration with other workflows like bug-tracking systems and development environments; and tuning the tool to your unique coding needs. “Don’t deploy to every developer at once,” he adds. “Ideally, you’ll get someone who wants to take on a competency role for security testing.”

The chief scientist at the large software vendor has developed an application security awareness program that includes training on common vulnerabilities, through podcasts and videocasts. Once he builds up awareness, he’ll educate developers on secure coding standards. To complete the circle, he’ll introduce Ounce’s static code analysis tool to enforce the standards and catch vulnerabilities “so it’s a feedback loop,” he says. (See Rob Cheyne Pushes for Developer Security Awareness for a look at a similar agenda.)

DO consider using more than one tool. Collin Park, senior engineer at NetApp, says the company uses two code analysis tools: Developers run Lint on their desktops, and the company uses Coverity each night to scan all completed code. “They catch different things,” he explains. NetApp began using these tools when its customer base shifted to enterprise customers who had more stringent requirements. While Coverity is better at spotting vulnerabilities such as memory leaks, Lint catches careless coding errors that developers make and seems to run faster on developer desktops, Park says.

According to Kelley, organizations typically implement static analyzers at two stages of the development process: within the development environment, so developers can check their own code as they’re writing, and within the code repository, so it can be analyzed at check-in time. The chief scientist uses this method. “In the first scan, if the engineer takes every finding and suppresses them, a milestone scan will catch those and generate a report,” he says.
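
The check-in stage can be as simple as a hook that scans only the files being committed. The sketch below is hypothetical: “code-scanner” stands in for whatever analyzer is actually licensed, and the hook assumes a Git repository.

```python
# Sketch of the check-in stage: a pre-commit hook that runs a hypothetical
# "code-scanner" command over the Python files staged for commit and blocks
# the commit if the scanner reports findings (non-zero exit code).
import subprocess
import sys

def staged_files():
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [name for name in out.splitlines() if name.endswith(".py")]

files = staged_files()
if files:
    result = subprocess.run(["code-scanner", *files])
    if result.returncode != 0:
        print("Static analysis found issues; commit blocked.", file=sys.stderr)
        sys.exit(1)
sys.exit(0)
```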

DO analyze pricing. Vendors have different pricing strategies, McDonald says. For instance, while all continuously add information to their libraries about the latest vulnerabilities, some charge extra for this, while others include it in the maintenance fee, he says. In addition, some vendors charge per seat, which can get expensive for large shops and may even seem wasteful for companies that don’t intend to run the scanner every day, while others charge per enterprise license. Additionally, some vendors charge for additional languages, while others charge one price for any language they support, McDonald says.

DO plan to amend your processes. Tools are no replacement for strong processes that ensure application security from the beginning, starting with defining requirements, which should focus on security as much as functionality, according to Kelley. For instance, a tool won’t tell you whether a piece of data should be encrypted to meet PCI compliance. “If a company just goes out and buys one of these tools and continues to do everything else the same, they won’t get to the next level,” she says.

The chief scientist says it’s also important to determine what will happen when vulnerabilities are found, especially because the tools can generate thousands of findings. “Does the workflow allow them to effectively analyze, triage, prioritize or dispose of the findings?” he says. He is working with Ounce to integrate the system better with his current bug-tracking system, which is Quality Center. “It would be great to right-click on the finding to automatically inject it into the bug-tracking system,” he says.

At NetApp, Park has reworked existing processes to ensure developers fix flagged vulnerabilities. As part of a code submit, developers do a test build, which must succeed or the code can’t be checked in. Then, when they check in code, an automated process starts an incremental build. If that build fails, a bug report is filed, complete with the names of developers who checked in code before the last build. “Developers are trained to treat a build failure as something they have to look at now,” Park says.
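
A rough sketch of that kind of gate is below; the build command, the “last good build” tag, and the file_bug() call are all placeholders for whatever build system and bug tracker a shop actually runs (NetApp’s specifics aren’t described in the article).

```python
# Rough sketch of the gate described above: run the incremental build and, on
# failure, file a bug naming everyone who checked in since the last good build.
# The build command, tag name, and file_bug() are placeholders.
import subprocess

def developers_since(rev):
    out = subprocess.run(
        ["git", "log", f"{rev}..HEAD", "--format=%an"],
        capture_output=True, text=True, check=True,
    ).stdout
    return sorted(set(out.splitlines()))

def file_bug(summary, names):
    # Stand-in for a real bug-tracker integration (e.g., an HTTP API call).
    print(f"BUG: {summary} -> notify {', '.join(names)}")

LAST_GOOD_BUILD = "last-good-build"          # tag moved after each clean build
build = subprocess.run(["make", "incremental"])
if build.returncode != 0:
    file_bug("Nightly incremental build failed", developers_since(LAST_GOOD_BUILD))
```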

NetApp also created a Web-based chart that’s automatically updated each night, to track which managers have teams that were issued Lint or Coverity warnings and whether they were cleared.

DO retain the human element. While the tools will provide long lists of vulnerabilities, it takes a skilled professional to interpret and prioritize the results. “Companies don’t have time to fix every problem, and they may not need to,” Kelley says. “You have to have someone who understands what is and is not acceptable, especially in terms of a time vs. perfection tradeoff.”

The chief scientist calls this “truly an art form” that requires a competent security engineer. “When the tool gives you 10,000 findings, you don’t want someone trying to fix all those,” he says. “In fact, 10,000 may turn out to just be 500 or 100 vulnerabilities in actual fact.”

Park points out an instance where the Coverity tool once found what it called “a likely infinite loop.” On first glance, the developer could see there was no loop, but after a few more minutes of review, he detected something else wrong with the code. “The fact that you get the tool to stop complaining is not an indication you’ve fixed anything,” Park says.
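
For readers who haven’t seen this class of warning, the Python fragment below shows the sort of loop that typically triggers it: one branch forgets to advance the loop variable, so certain inputs make the loop spin forever. It isn’t Park’s code, but it shows why the warning is worth chasing rather than just silencing.

```python
# Illustration of a "likely infinite loop" finding (not the NetApp code): the
# `continue` branch skips the increment, so any negative item makes the loop
# spin forever.
def drain(queue):
    total = 0
    i = 0
    while i < len(queue):
        item = queue[i]
        if item < 0:
            continue      # bug: i is never advanced on this path
        total += item
        i += 1
    return total
```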

DON’T anticipate a short scan. NetApp runs scans each night, and because it needs to cover thousands of files and millions of lines of code, it takes roughly 10 hours to complete a code review. The rule of thumb, according to Coverity, is that for each hour of build time, you should allow two hours for the analysis to complete. Coverity also enables companies to do incremental runs so that you’re not scanning the entire code base, just what you’ve changed in the nightly build.

DO consider reporting flexibility. At the FAA, Brown gets two reports: an executive summary that provides a high-level view of vulnerabilities detected and even provides a security “score,” and a more detailed report that pinpoints which line of code looks troublesome and the vulnerability that was detected. In the future, Brown would like to build into vendor contracts the requirement that they meet a certain security score for all code they develop for the FAA.

DON’T forget the business case. When Brown first wanted to start reviewing code, he met with some pushback from managers who wanted a defined business need. “You’ve got program managers with schedules to meet, and they can view this as just another bump in the road that’s going to keep them from making their milestones,” he says.

Brown created the business case by looking to independent sources like Gartner and Burton Group for facts and figures about code vulnerability, and he also ran some reports on how much time the FAA was dedicating to patch management.

The chief scientist justified the cost of the Ounce tool by taking the total cost of the product and comparing that to the effort involved in a manual review. “With millions of lines of code, imagine how many engineers it would take to do that, and by the way, we want to do it every week,” he says. “The engineers would fall down dead of boredom.”

Mary Brandel is a freelance writer based outside of Boston.

via http://www.networkworld.com/news/2009/012009-how-to-choose-and-use.html.