Absent such standards, Feigenbaum noted that Google received SAS 70 certification and shares the audit results on its security controls with customers. Google is also now seeking certification to comply with the Federal Information Security Management Act (FISMA).
The Payment Card Industry Security Standards Council (PCI SSC), under pressure from merchants to improve the training of its certified Qualified Security Assessors (QSAs), has detailed plans to beef up its PCI QSA certification review process, adding much-needed staff and funding to improve oversight of the individuals who conduct PCI Data Security Standard (DSS) compliance assessments.
“If you treat PCI certification as a fixed-asset purchase, it will serve you over a long period and ensure the trust of your customers, which has a very definite ROI over a period of time,” Tech News World reported.
The new worry from CMS, according to Government Health IT, is that healthcare providers sharing EHR files will be required to meet FISMA standards, which include an annual security test and FISMA certification.
The high cost of finding and patching application flaws is well known. Wouldn’t it be cheaper to write secure code in the first place?
One of the fastest growing areas in the software security industry is source code analysis tools, also known as static analysis tools. These tools review source code (or in Veracode’s case, binary code) line by line to detect security vulnerabilities and provide advice on how to remediate problems they find – ideally before the code goes into production. (See How to Evaluate and Use Web Application Scanners for tools and services that search for vulnerabilities as an outside attacker might.)
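To make the line-by-line idea concrete, here is a deliberately simplified sketch of a source scanner in Python. The rules and advice strings are invented for illustration; commercial analyzers rely on data-flow and taint analysis, not bare pattern matching, but the output shape (exact line plus remediation advice) is the same.

```python
import re

# Illustrative rule set: pattern -> remediation advice. These three
# rules are invented examples, not any vendor's actual rule base.
RULES = {
    r"\bgets\s*\(": "gets() cannot bound input; use fgets()",
    r"\bstrcpy\s*\(": "unbounded copy; consider strncpy()/strlcpy()",
    r"\bsprintf\s*\(": "possible buffer overflow; consider snprintf()",
}

def scan(source):
    """Return (line_number, advice) for every rule match, giving the
    developer the exact offending line and how to remediate it."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, advice in RULES.items():
            if re.search(pattern, line):
                findings.append((lineno, advice))
    return findings

c_snippet = "int f(char *s) {\n  char buf[8];\n  strcpy(buf, s);\n  return 0;\n}\n"
print(scan(c_snippet))  # flags the strcpy on line 3
```

Run against the tiny C snippet above, the scanner pinpoints the unbounded `strcpy` before the code ever reaches production.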
The entire software security market was worth about $300 million in 2007, according to Gary McGraw, CTO at Cigital, Inc., a software security and quality consulting firm in Dulles, VA. McGraw estimates that the tools portion of that market doubled from 2006 to 2007 to about $180 million. About half of that is attributable to static analysis tools, which amounted to about $91.9 million, he says.
And no wonder: according to Gartner, close to 90% of software attacks are aimed at the application layer. If security were integrated earlier in the software development lifecycle, flaws would be uncovered earlier, reducing costs compared with removing defects later through patches or never finding them at all, says Diana Kelley, founder of Security Curve, a security consultancy in Amherst, N.H. “Although there is no replacement for security-aware design and a methodical approach to creating more secure applications, code-scanning tools are a very useful addition to the process,” she says.
Despite the high degree of awareness, many companies are behind the curve in their use of static analysis tools, Kelley says, possibly due to the big process changes that these tools entail.
Key decisions in source code analysis
1) Should you start with static tools or dynamic tools or use both?
In addition to static analysis, which reviews code before it goes live, there are also dynamic analysis tools, which conduct automated scans of production Web applications to unearth vulnerabilities. In other words, dynamic tools test from the outside in, while static tools test from the inside out, says Neil McDonald, an analyst at Gartner.
Many organizations start with dynamic testing, just to get a quick assessment of where their applications stand, McDonald says. In some cases, the groups that start this initiative are in security or audit compliance departments and don’t have access to source code. The natural second step is to follow up with static analyzers, enabling developers to fix the problems found by dynamic analysis tools. Some companies continue using both, because each type yields different findings.
An important differentiator between the two types is that static analyzers give you the exact line of code causing the problem, while dynamic analyzers just identify the Web page or URL causing the issue. That’s why some vendors offer integration between the two types of tools.
Dynamic assessment tools “tend to be brute force,” says the chief scientist at a large software vendor. “You have to hit every parameter to find the vulnerabilities, whereas static tools investigate the whole landscape of the application.” He recently chose a code scanner from Ounce Labs, after outsourcing the work to Cigital since 2006. He became interested in application security when customers began requiring PCI DSS certification. He plans to add dynamic testing in the future, but the static analysis tool is the cornerstone of his application security program.
2) Do you have the source code?
Most static analyzers scan source code, but what happens if you want to analyze third-party software or code written so long ago that you only have the executable? In that case, Veracode, Inc. offers binary code scanning through a software-as-a-service platform. “A vendor may not be willing to give you source code, but they will give you executables or binary,” Kelley says.
At the Federal Aviation Administration, Michael Brown, director of the Office of Information Systems Security, says he chose to use Veracode’s services this year because of the amount of vendor-written code the FAA anticipated using as a result of its modernization of the national airspace system. Brown says he wanted to ensure the code was not only functionally correct and of high quality but also secure. He wanted a service rather than a tool to reduce the need for training. So far, the results have been eye-opening, he says. “A lot of the code didn’t really take security into account,” he says. “There were cases of memory leaks, cross-site scripting and buffer overflows that could have been a cause for concern.”
3) What do you currently use for software quality?
Some tool vendors, such as Coverity, Inc., Klocwork, Inc., Parasoft Corp. and Compuware Corp., originated in the quality-testing arena and have added security capabilities, in contrast to vendors like Ounce and Fortify Software, Inc., whose tools were designed solely for security. It’s worthwhile to check into the quality tools you already use to see if you can leverage the existing relationship and tool familiarity. You should also consider whether it’s important to your organization to have the two functions merged into one tool in the long term, McDonald says. (See Penetration Testing: Dead in 2009? for more on the interplay of quality assurance and vulnerability detection.)
Source code analysis tools: Evaluation criteria
— Support for the programming languages you use. Some tools support mobile platforms, while others concentrate on enterprise languages like Java, .Net, C, C++ and even Cobol.
— Good bug-finding performance, using a proof of concept assessment. Hint: Use an older build of code you had issues with and see how well the product catches bugs you had to find manually. Look for both thoroughness and accuracy. Fewer false positives means less manual work.
— Internal knowledge bases that provide descriptions of vulnerabilities and remediation information. Test for easy access and cross-referencing to discovered findings.
— Tight integration with your development platforms. Long-term, you’ll likely want developers to incorporate security analysis into their daily routines.
— A robust finding-suppression mechanism to prevent false positives from reoccurring once you’ve verified them as a non-issue.
— Ability to easily define additional rules so the tool can enforce internal coding policies.
— A centralized reporting component if you have a large team of developers and managers who want access to findings, trending and overview reporting.
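Two of the criteria above, custom rules and finding suppression, can be illustrated with a toy analyzer. The rule format and suppression-baseline scheme below are assumptions for the sketch, not any vendor's actual interface:

```python
import re

def analyze(source, rules, baseline):
    """Apply regex-based rules line by line, skipping findings already
    verified as non-issues. `rules` maps a rule id to a pattern (the
    'additional rules' criterion); `baseline` is a set of
    (line_number, rule_id) pairs to suppress (the 'finding-suppression'
    criterion). Both formats are invented for this sketch."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule_id, pattern in rules.items():
            if re.search(pattern, line) and (lineno, rule_id) not in baseline:
                findings.append((lineno, rule_id))
    return findings

# An internal coding policy expressed as extra rules.
rules = {"NO-EXEC": r"\bexec\s*\(", "NO-TMP": r"/tmp/"}
src = 'cache = "/tmp/cache"\nexec(cmd)\n'
print(analyze(src, rules, baseline=set()))            # two findings
print(analyze(src, rules, baseline={(1, "NO-TMP")}))  # one suppressed
```

Once a finding is verified as a non-issue and recorded in the baseline, it stays quiet on every subsequent scan instead of reappearing as a false positive.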
Do’s and Don’ts of source code analysis
DON’T underestimate adoption time required. Most static analysis projects are initiated by security or compliance, not developers, who may not immediately embrace these tools. Before developers get involved, McDonald suggests doing the legwork on new processes; planning integration with other workflows like bug-tracking systems and development environments; and tuning the tool to your unique coding needs. “Don’t deploy to every developer at once,” he adds. “Ideally, you’ll get someone who wants to take on a competency role for security testing.”
The chief scientist at the large software vendor has developed an application security awareness program that includes training on common vulnerabilities through podcasts and videocasts. Once he builds up awareness, he’ll educate developers on secure coding standards. To complete the circle, he’ll introduce Ounce’s static code analysis tool to enforce the standards and catch vulnerabilities “so it’s a feedback loop,” he says. (See Rob Cheyne Pushes for Developer Security Awareness for a look at a similar agenda.)
DO consider using more than one tool. Collin Park, senior engineer at NetApp, says the company uses two code analysis tools: Developers run Lint on their desktops, and the company uses Coverity each night to scan all completed code. “They catch different things,” he explains. NetApp began using these tools when its customer base shifted to enterprise customers with more stringent requirements. While Coverity is better at spotting vulnerabilities such as memory leaks, Lint catches careless coding errors that developers make and seems to run faster on developer desktops, Park says.
According to Kelley, organizations typically implement static analyzers at two stages of the development process: within the development environment, so developers can check their own code as they’re writing, and within the code repository, so it can be analyzed at check-in time. The chief scientist uses this method. “In the first scan, if the engineer takes every finding and suppresses them, a milestone scan will catch those and generate a report,” he says.
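The milestone scan the chief scientist describes might be sketched as follows; the finding and suppression shapes are invented for illustration, and a real deployment would pull both from the scanner's own database:

```python
def milestone_report(all_findings, suppressed):
    """Split scanner output into live findings and previously suppressed
    ones, so a milestone scan can surface anything an engineer waved
    through at check-in time.

    Each finding is a (line_number, rule_id) tuple; `suppressed` is the
    set of findings marked as non-issues during the first-stage scan."""
    live = [f for f in all_findings if f not in suppressed]
    return {"live": live, "suppressed_for_review": sorted(suppressed)}

findings = [(3, "XSS"), (7, "SQLI")]
print(milestone_report(findings, suppressed={(7, "SQLI")}))
```

The point of the second stage is visibility: suppressions made at the developer's desk are not silently lost but collected into a report a reviewer can audit.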
DO analyze pricing. Vendors have different pricing strategies, McDonald says. For instance, while all continuously add information to their libraries about the latest vulnerabilities, some charge extra for this, while others include it in the maintenance fee, he says. In addition, some vendors charge per seat, which can get expensive for large shops and may even seem wasteful for companies that don’t intend to run the scanner every day, while others charge per enterprise license. Additionally, some vendors charge for additional languages, while others charge one price for any language they support, McDonald says.
DO plan to amend your processes. Tools are no replacement for strong processes that ensure application security from the beginning, starting with defining requirements, which should focus on security as much as functionality, according to Kelley. For instance, a tool won’t tell you whether a piece of data should be encrypted to meet PCI compliance. “If a company just goes out and buys one of these tools and continues to do everything else the same, they won’t get to the next level,” she says.
The chief scientist says it’s also important to determine what will happen when vulnerabilities are found, especially because the tools can generate thousands of findings. “Does the workflow allow them to effectively analyze, triage, prioritize or dispose of the findings?” he says. He is working with Ounce to integrate the system better with his current bug-tracking system, which is Quality Center. “It would be great to right-click on the finding to automatically inject it into the bug-tracking system,” he says.
At NetApp, Park has reworked existing processes to ensure developers fix flagged vulnerabilities. As part of a code submit, developers run a test build, which must succeed before the code can be checked in. Then, when they check in code, an automated process starts an incremental build. If that build fails, a bug report is filed, complete with the names of developers who checked in code since the last build. “Developers are trained to treat a build failure as something they have to look at now,” Park says.
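A build gate in that style could look roughly like the outline below; the check-in record format and bug fields are hypothetical stand-ins for a real build system and bug tracker:

```python
def on_incremental_build(build_ok, checkins_since_last_build):
    """If the automated incremental build fails, file a bug listing every
    developer who checked in since the last build, mirroring the gate
    described above. Returns the bug record, or None when the build
    succeeds. Record and bug formats are invented for this sketch."""
    if build_ok:
        return None
    authors = sorted({c["author"] for c in checkins_since_last_build})
    return {"title": "Incremental build failure", "notify": authors}

checkins = [{"author": "bob"}, {"author": "alice"}, {"author": "bob"}]
print(on_incremental_build(False, checkins))
```

Deduplicating authors keeps the bug's notify list short while still reaching everyone whose change might have broken the build.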
NetApp also created a Web-based chart that’s automatically updated each night, to track which managers have teams that were issued Lint or Coverity warnings and whether they were cleared.
DO retain the human element. While the tools will provide long lists of vulnerabilities, it takes a skilled professional to interpret and prioritize the results. “Companies don’t have time to fix every problem, and they may not need to,” Kelley says. “You have to have someone who understands what is and is not acceptable, especially in terms of a time vs. perfection tradeoff.”
The chief scientist calls this “truly an art form” that requires a competent security engineer. “When the tool gives you 10,000 findings, you don’t want someone trying to fix all those,” he says. “In fact, 10,000 may turn out to just be 500 or 100 vulnerabilities in actual fact.”
Park points out an instance where the Coverity tool once found what it called “a likely infinite loop.” On first glance, the developer could see there was no loop, but after a few more minutes of review, he detected something else wrong with the code. “The fact that you get the tool to stop complaining is not an indication you’ve fixed anything,” Park says.
DON’T anticipate a short scan. NetApp runs scans each night, and because they cover thousands of files and millions of lines of code, a code review takes roughly 10 hours to complete. The rule of thumb, according to Coverity, is to allow two hours of analysis for each hour of build time. Coverity also enables companies to do incremental runs, scanning not the entire code base but just what changed in the nightly build.
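The incremental-run idea, rescanning only what changed, can be sketched with a content-hash manifest; the manifest scheme is an invented stand-in for a tool's own change-tracking state:

```python
import hashlib

def files_to_rescan(files, manifest):
    """Return the files whose contents changed since the last scan, plus
    an updated manifest. `files` maps path to source text; `manifest`
    maps path to the previous scan's SHA-256 digest. Unchanged files are
    skipped, which is what keeps incremental runs short."""
    changed, new_manifest = [], {}
    for path, text in sorted(files.items()):
        digest = hashlib.sha256(text.encode()).hexdigest()
        new_manifest[path] = digest
        if manifest.get(path) != digest:
            changed.append(path)
    return changed, new_manifest

manifest = {"a.c": hashlib.sha256(b"int x;").hexdigest()}
print(files_to_rescan({"a.c": "int x;", "b.c": "int y;"}, manifest))
```

Only the new file is queued for analysis; the untouched one is skipped because its digest matches the previous run.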
DO consider reporting flexibility. At the FAA, Brown gets two reports: an executive summary that provides a high-level view of vulnerabilities detected and even provides a security “score,” and a more detailed report that pinpoints which line of code looks troublesome and the vulnerability that was detected. In the future, Brown would like to build into vendor contracts the requirement that they meet a certain security score for all code they develop for the FAA.
DON’T forget the business case. When Brown first wanted to start reviewing code, he met with some pushback from managers who wanted a defined business need. “You’ve got program managers with schedules to meet, and they can view this as just another bump in the road that’s going to keep them from making their milestones,” he says.
Brown created the business case by looking to independent sources like Gartner and Burton Group for facts and figures about code vulnerability, and he also ran some reports on how much time the FAA was dedicating to patch management.
The chief scientist justified the cost of the Ounce tool by taking the total cost of the product and comparing that to the effort involved in a manual review. “With millions of lines of code, imagine how many engineers it would take to do that, and by the way, we want to do it every week,” he says. “The engineers would fall down dead of boredom.”
Mary Brandel is a freelance writer based outside of Boston.
Mumbai, December 5, 2008 (AllPayNews.com): PayMate, a pioneer in introducing innovations in the m-commerce sector, was upgraded from the PCI DSS 1.1 certification awarded last year to the PCI DSS 1.2 certification by leading PCI DSS QSA ControlCase. PayMate achieved the latest certification following a rigorous audit process, reflecting its undivided focus on offering m-commerce customers a safe ecosystem for mobile transactions.
Data security is typically a concern for customers making payments over the mobile platform. The PayMate service, however, enables consumers to shop with their mobile phones in a highly secure manner, whether across the counter, online or remotely. Each PayMate transaction over automated IVR is secured by stringent two-factor authentication: the mobile number and the m-PIN. Upgrading to the PCI DSS 1.2 certification demonstrates PayMate’s continued commitment to the protection and security of customer account data throughout the transaction process as well as in storage.
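As a generic illustration of the two-factor check described, a verifier might pair the caller's mobile number with a stored m-PIN digest. The data model and hashing scheme below are assumptions for the sketch, not PayMate's actual implementation:

```python
import hashlib
import hmac

def verify_transaction(registered, mobile, mpin):
    """Toy two-factor check: the transaction passes only if the caller's
    mobile number is registered (factor one) and the m-PIN digest matches
    the stored one (factor two). Digests are compared in constant time."""
    stored_digest = registered.get(mobile)
    if stored_digest is None:
        return False
    candidate = hashlib.sha256(mpin.encode()).hexdigest()
    return hmac.compare_digest(stored_digest, candidate)

accounts = {"+919820000000": hashlib.sha256(b"4321").hexdigest()}
print(verify_transaction(accounts, "+919820000000", "4321"))  # True
print(verify_transaction(accounts, "+919820000000", "0000"))  # False
```

Storing only a digest means a compromised account table does not directly expose customers' m-PINs, in keeping with PCI DSS expectations about protecting stored authentication data.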
Ajay Adiseshann, MD and Founder, PayMate, said “Being in the payments industry, it is imperative that we maintain the highest data and security standards at all times. More importantly, it is crucial that we update ourselves periodically to remain compliant with the changing policies and guidelines of the payments industry. With the PCI-DSS 1.2 certification, we can continue aggressively with our national and global expansion plans by providing our customers and partners with a highly secure and robust payments platform.”
Commenting on the achievement, Erik Winkler, Senior VP of ControlCase, said, “Our involvement with PayMate as QSA (Qualified Security Assessor) for PCI DSS over the last two years has been extremely satisfying, as PayMate has demonstrated commitment at all levels, not only maintaining PCI DSS compliance but also taking action to remain continuously compliant with new changes and updates to the standard.”
“Becoming compliant with PCI DSS 1.2 is a testament to PayMate’s advanced methodology for corporate governance,” said Suresh Dadlani, COO of ControlCase.
The PayMate service enables consumers to shop with their mobile phones in a convenient and safe manner, whether across the counter, online, remotely or over automated IVR.
PayMate India is a Mumbai-based wireless transactions platform provider, the first-of-its-kind mobile payment service in India. PayMate is an innovative, easy, secure and convenient mode of making payments through the mobile phone.
PayMate has created a viable ecosystem that enables wireless transactions connecting banks, switches, merchants and customers using a simple, secure and seamless technology. It is an IVR-based solution that transforms your phone into a wallet. It works on any handset without the need to upgrade the SIM or GPRS connectivity.
PayMate is accepted at over 13,000 merchants in India, including online portals, voice portals, travel services, utilities, retail outlets and restaurants. PayMate has not just created one of the world’s largest m-payment ecosystems but has also won several globally coveted awards for its innovative initiatives. PayMate has been acknowledged as one of the top 100 most innovative companies by Red Herring Asia for two consecutive years. Moreover, PayMate’s list of security certifications includes the most advanced and stringent of compliances, such as PCI DSS 1.2, certifying its systems and infrastructure among the best in the world.
PayMate has tied up with a number of business entities like Standard Chartered Bank, ABN AMRO Bank, Bank of Ceylon, Citibank, Euronet, Corporation Bank and US based leading service provider – Infonox. It is steadily broadening its portfolio with several other MNC and PSU banks and retail merchants. PayMate has already tied up with over 12 banks to offer its services in India, USA, Sri Lanka, Nepal and Dubai.
PayMate has reversed the outsourcing trend by offering its unique patented wireless application suite to empower one of the largest electronic transaction processing companies in the US.
For more information log on to www.paymate.co.in
For further information please contact:
Aakash Shah @ Perfect Relations, Tel: 9819182755
IGT Awarded The First PCI DSS 1.2 Certification
Submitted by newsdesk on Mon, 12/22/2008 – 19:42
IGT, a pioneer and global leader in travel technologies and services, received the coveted PCI DSS 1.2 certification from leading PCI DSS QSAC ControlCase. IGT is the first travel BPO organization to become PCI DSS 1.2 compliant, having successfully met the newest version of the Payment Card Industry Data Security Standard (PCI DSS) compliance requirements. ControlCase conducted a meticulous audit of the security measures IGT uses to protect e-commerce customers and their data in travel transactions.
ControlCase awarded IGT the PCI DSS 1.2 compliance rating after IGT met the 259 requirements (grouped into 12 broad categories) that make up the control objectives. Data security continues to be a concern for customers making payments over the internet. IGT supports millions of travel transactions annually and enables consumers to make travel purchases in a highly secure manner both online and remotely. The PCI DSS 1.2 certification demonstrates IGT’s continued commitment to the protection and security of its B2C and B2B customers’ account data throughout the transaction process.
Vipul Doshi, CEO, IGT, stated, “Our clients rely heavily on credit cards. With more than two-thirds of travel transactions occurring over the internet, it’s imperative that we maintain the highest standard of information security. Receiving the PCI DSS 1.2 certification further demonstrates our commitment to protecting our clients and their customers.”
Internet security and personal information continue to be top priorities and concerns for individuals transacting over the web. Credit card companies impose hefty fines on companies not meeting PCI compliance requirements. Some reports indicate nearly one trillion dollars per year is spent on travel, and more than two-thirds of those sales occur with credit cards. Couple that with the travel industry racking up more sales on the internet than any other industry, and you have a recipe for serious credit card fraud, the very reason PCI DSS was implemented.
Mohit Magon, Vice President, Business Excellence, stated, “Achievement of PCI DSS 1.2 compliance reinforces our continuous commitment to the highest level of security standards. As an organization, our people are committed to achieving excellence in whatever we do. Our proactive approach to complying with the PCI DSS 1.2 standard is a testimony to our responsiveness to the ever-changing business environment and customer needs.”
IGT is the first travel BPO company to achieve the recently updated version of the PCI DSS. Suresh Dadlani, COO, ControlCase, stated, “We are pleased to have worked closely with IGT on PCI DSS 1.2 certification. Compliance with the requirements of the standard is technically intensive and leaves no scope for compromise. The achievement of PCI DSS 1.2 certification in a short period of time was possible only because of the commitment at all levels and the technical competencies demonstrated by the team.”
IGT remains committed to meeting the highest security standards applicable in the information technology industry. With more than one-third of the world’s travel transactions relying on IGT, it’s good to know your data is protected with IGT.
InterGlobe Technologies (IGT) provides services and solutions to corporations worldwide in the areas of Business Process Outsourcing (BPO) and Information Technology (IT). IGT’s gamut of offerings spreads across the entire technology spectrum. With some 2,000 global employees operating in facilities located in India, North America and Europe, InterGlobe was ranked by The Great Place To Work Institute as the best travel company of India. In 2008, Deloitte and Touche recognized IGT as one of the fastest growing companies in India, and The Black Book of Outsourcing ranked IGT as one of the top 5 travel BPO companies in the world. www.igt.in
About PCI DSS
The Payment Card Industry Data Security Standard (PCI DSS) is a worldwide benchmark mandated by credit card companies for the protection of cardholders’ identity and transaction information. It is designed to prevent credit card fraud, hacking and various other security vulnerabilities and threats. The standard was developed by major card brands including American Express, Discover Financial Services, JCB International, MasterCard Worldwide and Visa International.
Sophisticated cyber criminals have followed businesses into the online world; they can now steal everything from intellectual property to credit cards en masse. And that’s just the start. Add Social Security numbers, addresses, and other personally identifying information to the list and you can essentially reconstruct and hijack entire identities. What’s worse is that cybercriminals benefit from anonymity: They can compromise entire databases of sensitive information and leave only a masked IP address behind as a trail, and that trail often ends in a foreign country where both jurisdiction and law enforcement are limited.
Regulators Focus On Large Enterprises
As cyber criminals successfully raided corporate databases and siphoned away credit card, tax, banking, healthcare and other consumer information, regulators took notice. In an effort to protect consumers, governments and industry consortiums imposed regulations and mandates like the Sarbanes-Oxley Act (SOX), the Health Insurance Portability and Accountability Act (HIPAA), the Gramm-Leach-Bliley Act (GLBA), and the Payment Card Industry (PCI) standard. The initial round of enforcement and deadlines, however, was mostly targeted at large enterprises. Thus it is not surprising that over the last few years, large enterprises have made significant investments in cyber security and have at least raised the barrier to such breaches.
When Cybercrime Moves Downstream
Undeterred, cybercriminals are finding it easier to move downstream and target small to medium businesses, which are increasingly online but do not have the necessary safeguards. The Privacy Rights Clearinghouse website lists a long chronology of breaches. Take a look and you’ll find that while familiar names like ChoicePoint, the U.S. Department of Veterans Affairs, TJX, and Circuit City have endured highly publicized breaches, the majority of breaches actually occur at small to medium merchants.
Regardless of whether you are a small retailer, a credit union with a single location, or a doctor’s office or clinic, you face the same problems as a global enterprise when a breach occurs: potential fines, bad press, class-action lawsuits and customer attrition. In fact, the costs of security breaches can be more devastating for a small enterprise that has fewer financial and other resources.
The squeeze doesn’t end there. Regulations increasingly apply to small and medium-sized businesses, not just larger ones. The PCI Data Security Standard (PCI DSS) must now be met by any business that stores, processes, or transmits credit card information—regardless of annual transaction volume. Similarly, publicly traded companies with a market capitalization under $75 million must now comply with SOX. HIPAA, of course, applies to the smallest doctor’s office and the largest hospitals and insurance firms.
Combating Cybercrime with the Hidden Trail
Just thinking about how to provide adequate security can seem overwhelming to a small business. But your business already has the information you need to detect breaches in a timely manner and to cost effectively address regulatory requirements. Every second of the day, your servers, laptops, applications, network infrastructure, and security devices leave a trail of activity behind in the form of logs. Everything from a login or logout to a badge swipe or file access is tracked in this hidden trail. Bring this information together and you have a powerful and cost-effective means to detect threats and protect your business.
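As a minimal illustration of mining that trail, the sketch below counts failed logins per source IP from a consolidated event feed. The event format and the threshold are assumptions for this example; real log-management products normalize many log formats into a common schema before correlating them:

```python
from collections import Counter

def flag_failed_logins(events, threshold=5):
    """Count failed logins per source IP across a consolidated log trail
    and flag any IP at or above the threshold, a simple brute-force
    detection built from the 'hidden trail' of activity logs."""
    failures = Counter(e["ip"] for e in events if e["result"] == "fail")
    return sorted(ip for ip, count in failures.items() if count >= threshold)

trail = [{"ip": "203.0.113.9", "result": "fail"}] * 6 + \
        [{"ip": "198.51.100.2", "result": "fail"},
         {"ip": "198.51.100.2", "result": "ok"}]
print(flag_failed_logins(trail))  # ['203.0.113.9']
```

Even this crude correlation separates a repeated attack pattern from a single mistyped password, which is the kind of timely detection the logs already make possible.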
Tips On How to Maximize Your Security Budget:
- Improve efficiency—consider approaches to security that require less hardware and effectively support consolidation and green initiatives.
- Maintain clear visibility into the network—knowing where your internal and external threats and policy violations exist will eliminate or reduce the extraneous costs of a data breach, fraud, or cybercrime.
- Avoid “one size fits all” solutions—look for multiple performance options and scalability to adapt to evolving security and compliance regulations.
- Understand the impact of automation—reserve limited and valuable IT resources for more strategic tasks.
- Integrate security as part of the business—leverage security solutions in more strategic ways by offering a clear path to ROI and productivity gains.
For organizations of any size, there’s no doubt that battling cybercrime and meeting regulatory compliance will be a top business issue in 2009. Given the state of today’s economy, however, it will be important to weigh the cost of the technology and IT resources deployed against the costs associated with a data breach or cybercrime attack.
Ansh Patnaik is the director of product marketing at ArcSight. He is an ISSA and ISACA member and maintains the CISSP certification. Ansh has worked in the security space for over 10 years with companies such as BindView/Symantec and Omniva Policy Systems.
On November 17 the Department of Labor revealed the final revisions to the Family Medical Leave Act (FMLA), a major overhaul of the law, and the first since its enactment in 1993. To assist employers with navigating the revised regulation and implementing policies as they prepare for it to become law on January 16, 2009, Business & Legal Reports, Inc. (BLR(R)) has launched the BLR FMLA Information Center at www.blr.com/information-fmla.
“The impact of the final FMLA rule cannot be overstated. Employers will need to reconsider all of their leave policies and practices and make significant changes to avoid costly FMLA violations,” said Susan Schoenfeld, J.D., BLR legal editor and FMLA expert. “This will require employers to devote time and resources to understanding the new rules and changing their FMLA leave administration programs. The time employers have to actually make those changes is dangerously short — with only 60 days until the rule becomes effective on January 16.”
FMLA revisions affect the following areas of policy:
— Serious Health Condition
— Intermittent Leave
— Employee and Employer Notice
— Light Duty
— Perfect Attendance Awards
— Medical Certification
— Fitness-for-Duty Certification
— Military Caregiver Leave
— Leave for Qualifying Exigencies for Families of National Guard and Reserve Members
— Revised Forms
What the Revisions Mean for Employers
The Department of Labor says that many of the revisions were designed to clarify the requirements that the FMLA imposes on both employees and employers and to improve communication between the parties.
Schoenfeld states that “since it was first enacted in 1993, employers have really struggled to understand and implement the FMLA in the ‘real world.’ Unlike many other federal employment laws, employers never really reached a comfort level in understanding the FMLA. The new FMLA rule comes with new forms that may assist with recordkeeping; however, the FMLA rule also adds two new types of leave which place extra burden on employers.
“The bottom line,” added Schoenfeld, “is that employers will be significantly impacted by the FMLA changes, especially at first while they struggle to learn and understand the new FMLA requirements.”
NIST Requests Comments on Next Generation C/A Process for Information Systems
National Institute of Standards & Technology
Release date: August 19, 2008
The National Institute of Standards and Technology (NIST) has released for public review and comment a major revision to its security certification and accreditation (C&A) guidelines for federal information systems. A substantial rewrite of the original document, the new Guide for Security Authorization of Federal Information Systems: A Security Lifecycle Approach, represents a significant step toward developing a common approach to information security across the Federal government, including civilian, defense, and intelligence agencies, according to NIST security experts.
When finalized, the revised guide will replace NIST Special Publication 800-37, which was issued in 2004 under the title Guide for the Security Certification and Accreditation of Federal Information Systems. Like the original, the revised guide maps out a basic framework for managing the risks that arise from the operation and use of federal information systems, the measures taken to address or reduce risk, and a formal managerial process for accepting known risks and granting, or withdrawing, authorization to operate information systems. The guide emphasizes the need to treat information security as a dynamic process, with established procedures to monitor, reassess and update security measures to maintain the authorized security state of an information system. The revised security authorization process is designed to be tightly integrated into enterprise architectures and ongoing system development life cycle processes, promotes the concept of near real-time risk management, capitalizes on investments in technology including automated support tools, and takes advantage of over three decades of lessons learned in previous approaches to certification and accreditation.
Since 2003, NIST has developed and published information security standards and guidelines under the Federal Information Security Management Act (FISMA). While the NIST methodology for analyzing, documenting and authorizing the security of information systems is widely followed by federal agencies operating non-national security systems, other frameworks have coexisted with it for national security systems, including the Department of Defense Information Assurance Certification and Accreditation Process (DIACAP) and the National Information Assurance Certification and Accreditation Process (NIACAP). This first revision to SP 800-37 is the result of an interagency effort that is part of a C&A Transformation Initiative working toward a convergence of information security standards, guidelines and best practices across the government’s civilian, defense and intelligence agencies. NIST is participating in this effort along with the Office of the Director of National Intelligence (DNI), the Department of Defense (DOD) and the Committee on National Security Systems (CNSS). Future updates to NIST FISMA publications will continue this convergence towards common standards and procedures.
Copies of the initial public draft of SP 800-37 Revision 1 are available from the NIST Computer Security Resource Center at http://csrc.nist.gov. NIST is requesting comments on the draft by Sept. 30, 2008.
Media Contact: Michael Baum, firstname.lastname@example.org, (301) 975-2763