Tag Archives: software

Visa leads effort at PCI conference to minimise payment information vulnerability

Visa opens PCI Dubai Conference

Dubai, UAE, 14 April 2009: Visa International, the leading payment solutions provider, participated in PCI Dubai, the leading payment industry conference, and addressed stakeholders from across the GCC payment industry on issues surrounding data security and payment card fraud. Participants also shared best practices and emerging technologies, and discussed ongoing industry challenges during the daylong event.

Kamran Siddiqi, MENA General Manager, Visa, delivered the keynote speech at the conference, which promotes the widespread adoption of the Payment Card Industry Data Security Standard (PCI DSS). In his speech, Mr Siddiqi focused on data protection and fraud prevention through common data security policies and practices.

“Visa has been at the forefront of the issue of data security and payment card fraud, initiating its Account Information Security (AIS) programme in 2000. The programme, which originally concentrated on the importance of data storage, was the precursor and foundation for the PCI DSS,” said Siddiqi.

In addition to the conference sessions, educational seminars and interactive workshops provided exchanges between participants, allowing for a greater understanding of the issues around payment card data vulnerability and the benefits of the PCI DSS.

“Gartner, the information technology research and advisory firm, estimates that 20-30% of Global 1000 companies suffer exposure resulting from privacy mismanagement. The costs to recover from these security mistakes often range from $5 million to $20 million for each organisation. Whether data loss is occurring accidentally by businesses or illegally through theft by individuals, the loss of data and the resulting fraud are largely preventable,” concluded Siddiqi.

via Visa leads effort at PCI conference to minimise payment information vulnerability.

Identity Theft – PCI Chiefs Defend Standards, Plans – eWeek Security Watch

It’s a gross oversimplification of an utterly staggering technical and social challenge, and he knows it as well as anyone, but it’s hard to argue with PCI Security Standards Council General Manager Bob Russo’s assertion that when it comes to improving electronic data security and related matters of individual privacy, “something is much better than nothing.”

Since the massive, potentially record-breaking security breach at Heartland Payment Systems in late January, the Payment Card Industry Security Standards Council and its Data Security Standard (DSS) have been put under a microscope and criticized for foisting on companies an impractical IT security mandate, one that detractors say does not actually meet its goal of making it harder for companies that handle credit and debit card data to be fleeced as Heartland was.

Some highly respected security researchers and practitioners have come out since the Heartland robbery and questioned the viability of the entire DSS effort, which they perceive as out of touch with real-world IT environments and insufficient to help organizations avoid exploitation. A handful have gone so far as to say it actually makes the job even harder.

And after all, here’s a Tier 1 company that’s likely had to push to abide by the technological and process-oriented stipulations required under the PCI Standard as much and as long as any other, and it just got positively hammered.

However, visiting Boston on a media tour organized to share some new elements of the PCI Council’s larger plans the week of Feb. 23, Russo and new PCI Security Standards Council Chairman Lib de Veyra — an executive at and appointee of JCB International Credit Card — made a lot of credible points. Mostly, because they firmly recognized the reality that no standard is perfect and that DSS as it exists is only a first step in a long evolutionary process.

Not to be misinterpreted, the PCI Council is satisfied with what it’s put in place thus far, given the challenge at hand, Russo and de Veyra said.

The parts of DSS that need to be tweaked to address the vast diversity of infrastructure and applications employed by all the retailers, merchants and processors, as well as all the techniques utilized by attackers, will be addressed by taking feedback directly from the very companies that must comply with the standard, the PCI Council representatives said. And truthfully that has been at the very least a consistent message of the organization all along.

A number of powerful banking, retail, technology and government players are also involved in the PCI Advisory Board.

And the Heartland incident, as well as those reported at other companies that have been at some time certified as PCI compliant, including TJX Companies and Hannaford Brothers, in no way proves that the standard is clearly lacking in some specific area, they said.

The PCI leaders said these companies have not yet shared with the Council the specific details of exactly how they were victimized by fraudsters, and that the fact that they were at one time judged to be in conformity with DSS in no way guarantees that they still were at the time they were attacked.

“Just because a company gets a clean bill of health today doesn’t mean they can’t be infected tomorrow,” de Veyra said. “Organizations are making configuration changes and broadening adoption of technologies like wireless all the time; the guidelines in DSS are something that you have to continue to monitor and maintain all the time.”

And many of the Council’s initiatives, including plans to launch two new standards aimed at improving embedded security features, or “host security modules,” built into card data transaction processing hardware, and regulations for UPTs (unattended payment terminals) such as gas pumps and ticketing kiosks, will help push the entire industrywide process forward, they said.

The PCI Security Standards Council will also continue to push DSS overseas, in Europe and APAC specifically, where the guideline has faced some resistance from card handlers. But the effort launched by the world’s largest card companies — American Express, Discover, JCB, MasterCard and Visa — remains undaunted in its pursuit, PCI’s chief spokespeople said.

“Addressing the criticism comes down to communication; once we have enough information from companies like Heartland to truly examine what happened, we can understand how it relates to DSS,” de Veyra said. “And working with all the companies on our Advisory Board, meeting with them and incorporating their feedback over time, will be the most important aspect of maturing the standards.”

Another new element of DSS will be a technological tool, a sort of stripped-down PCI diagnostic application provided by the Council to offer organizations still getting started with the standard a more “prioritized approach to DSS.”

The Prioritized Approach tool will help companies track their ability to meet basic milestones of achieving compliance with DSS, the representatives said. The first three steps — preventing the improper storage of electronic data, securing the network perimeter and securing applications — have clearly proven hard for many organizations to accomplish, and some might argue for most or even all.
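The milestone bookkeeping such a tool performs can be sketched in a few lines. This is a hypothetical illustration: the milestone names echo the article, while the function and its interface are invented, not taken from the Council's actual tool.

```python
# Hypothetical sketch of milestone tracking in the spirit of the
# Prioritized Approach tool. Milestone names come from the article;
# everything else is invented for illustration.

MILESTONES = [
    "prevent improper storage of electronic data",
    "secure the network perimeter",
    "secure applications",
]

def progress(completed):
    """Count how many of the prioritized milestones have been met."""
    done = [m for m in MILESTONES if m in completed]
    return len(done), len(MILESTONES)

done, total = progress({"secure the network perimeter"})
print(f"{done}/{total} milestones met")  # 1/3 milestones met
```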

But most importantly, the idea is to promote gradual coalescence of a world where every company affected by the PCI mandate has at least greatly augmented and formalized its approach to, if not its execution of, securing electronic data, the leaders said.

“No standard is ever going to completely stop what we’re seeing right now with cyber-crime, but the reaction we’ve seen to PCI after some of these incidents like Heartland has been absolutely unfair, because we don’t even know if they were compliant,” Russo said.

In terms of whether incidents like the breaches at Heartland, TJX and Hannaford Brothers have damaged public perceptions of DSS, the industry veteran said, as in any case, there is no shortage of opinions.

“You can sit there and look at it from one side and say, you have this standard but these incidents have still happened, and that proves something isn’t working,” Russo said. “But what you don’t know at the same time is, If we didn’t have DSS as it stands in place, how many more of these incidents might we have had?”

I’m sure that there are valid criticisms of various aspects of PCI — some very smart people have spent time voicing their questions already.

But, I’m curious to know whether they’d agree at the end of the day that something is better than nothing.

Matt Hines has been following the IT industry for over a decade as a reporter and blogger, and has been specifically focused on the security space since 2003, including a previous stint writing for eWEEK and contributing to the Security Watch blog. Hines is currently employed as marketing communications manager at Core Security Technologies, a Boston-based maker of security testing software. The views expressed herein do not necessarily represent the views of Core Security, and neither the company, nor its products and services will be actively discussed in the blog. Please send news, research or tips to SecurityWatchBlog@gmail.com.

via Identity Theft – PCI Chiefs Defend Standards, Plans – eWeek Security Watch.

Palo Alto Networks Hosting Webinar with Forrester on PCI Audit Process

Jan 29, 2009 (Close-Up Media via COMTEX) —

Palo Alto Networks will host a webinar with Forrester Research Security and Risk Management Analyst, John Kindervag on Tuesday, February 10 at 10 a.m. PST, 1 p.m. EST.

PCI audits are often daunting, both in scope of effort and associated costs, and this informative webinar will review how to simplify the process through network segmentation and user-based security policies.

The October 2008 update of the PCI DSS documentation states that companies can reduce the cost and complexity of PCI compliance by using network segmentation to isolate the cardholder data in a secure segment. Without adequate network segmentation — a network lacking it is sometimes called a “flat network” — the entire network is in scope of the PCI DSS assessment. This webinar will offer insight into the issues, challenges and strategies required to meet PCI compliance.
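The scoping effect of segmentation can be illustrated with a toy model. The segment records and the `in_scope()` logic below are hypothetical; a real assessment's scope is determined by the PCI DSS and a qualified assessor, not by code like this.

```python
# Toy model of how segmentation shrinks PCI assessment scope. All
# names and the scoping rule are illustrative assumptions.

def in_scope(segments, flat_network=False):
    """Return names of segments a PCI assessment would cover."""
    if flat_network:
        # No segmentation: the entire network is in scope.
        return [s["name"] for s in segments]
    # With segmentation: only segments holding cardholder data,
    # or connected to ones that do, remain in scope.
    return [s["name"] for s in segments if s["cde"] or s["touches_cde"]]

segments = [
    {"name": "cardholder-vlan", "cde": True,  "touches_cde": True},
    {"name": "pos-terminals",   "cde": False, "touches_cde": True},
    {"name": "office-lan",      "cde": False, "touches_cde": False},
]

print(in_scope(segments))                     # two segments in scope
print(in_scope(segments, flat_network=True))  # all three in scope
```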

http://paloaltonetworks.com/events/webinars/index.html

via Palo Alto Networks Hosting Webinar with Forrester on PCI Audit Process.

Archer Sets Its Sights On IT GRC Rival, Acquires Brabeion


Top contenders in the IT governance, risk, and compliance market merged on Tuesday as Archer Technologies announced it is acquiring Brabeion Software. Forrester projected consolidation as a key GRC market trend for 2009, and we explored the issue further for IT GRC vendors in our report, “Consolidation Looms for the IT GRC Market.”

This was a strong move for Archer, as other, larger vendors are closely eying the IT GRC space for acquisition potential. Along with the acquisition of Paisley by Thomson Reuters last month in the Enterprise GRC space, this is just the beginning of what’s to come over the next 12-18 months. The GRC market as a whole is extremely broad and ripe for growth, but it is also crowded with niche vendors. Market leaders and enormous outsiders will be eager to scoop up as much of the pie as possible, which means more deals are on the way.

via The Forrester Blog For Security & Risk Professionals.

The Death Of PCI DSS? Don’t Be Silly – Security Blog – InformationWeek

The Death Of PCI DSS? Don’t Be Silly

Posted by George Hulme, Jan 27, 2009 10:13 PM

Yes, in the past year two big retailers that were apparently compliant with the Payment Card Industry Data Security Standard were breached. Does that mean PCI DSS has grown increasingly irrelevant? That’s absurd.

First, I’m going to state the obvious to anyone who has studied IT security and compliance: being compliant with any mandate won’t make one secure. And it never will. I can’t make the case as to why any better than Mike Fratto did in his earlier post, so I won’t even try. But the point is: focus on building a secure and sustainable infrastructure, no matter how big or small.

I’ve spent the better part of this decade interviewing IT security experts, vendors, chief information security officers, and other security managers in just about every industry. Generally — and I stress *generally* (as there are always exceptions) — retailers, manufacturers, and health care providers have tended to have the least mature security programs in place. Not true in every case, but I found it to be true often enough to see a trend.

Prior to the adoption of PCI DSS several years ago, online retailers and brick-and-mortar retailers barely paid attention to IT security. Trust me, in 2003, when it came to a Top 10 list of the most pressing IT objectives for most merchants, IT security ranked around 580th. They probably spent more time evaluating stationery than how to secure their databases, Web applications, and VPNs. So when you see retail IT managers arguing over whether they should install a Web Application Firewall, or conduct application security vulnerability assessments, or even do both, you can thank PCI DSS to a large degree.

PCI DSS has done more to raise security awareness among retailers than anything else I can think of. Even the torrent of breaches earlier this decade. And, while I can’t quantitatively prove it, PCI DSS has most certainly raised the security of the retail industry, in general.

The unfortunate breaches of Hannaford Bros. Co. and, more recently, Heartland Payment Systems, both of which were PCI compliant, don’t make PCI DSS irrelevant, obsolete, or worthless. Whoever thought PCI DSS would eliminate data breaches at retailers should probably make sure they don’t work in IT, and definitely make sure they don’t work in IT security. Its goal was to raise security among retailers and merchants, and to a large degree it’s been a success.

This standard isn’t perfect, not by a long shot. The standard won’t eliminate security breaches — and no one said it would. But the standard has increased the security of many retailers, and probably stopped quite a few breaches along the way.

Do we talk about the “failure” of law enforcement when someone commits a crime? Do we talk of the irrelevance of physical security when banks are robbed? Do we talk about how “worthless” the military is after a lost battle or two? Do we talk about how eating healthily and exercising is such a waste when we fall ill?

No, intelligent people do not do those things.

The battle against cybercrime is like any other long-term fight, and once in a while companies that strive to do everything right are going to find themselves breached. It’s the nature of this beast, not the fault of PCI DSS.

via The Death Of PCI DSS? Don’t Be Silly – Security Blog – InformationWeek.

e-Book: “PCI for Dummies”

Download this Free Book: Get the Facts on PCI Compliance and Learn How to Comply with the PCI Data Security Standard

Complying with the PCI Data Security Standard may seem like a daunting task for merchants. This book is a quick guide to understanding how to protect cardholder data and comply with the requirements of PCI – from surveying the standard’s requirements to detailing steps for verifying compliance.

PCI Compliance for Dummies arms you with the facts, in plain English, and shows you how to achieve PCI Compliance. In this book you will discover:

* What the Payment Card Industry Data Security Standard (PCI DSS) is all about

* The 12 Requirements of the PCI Standard

* How to comply with PCI

* 10 Best-Practices for PCI Compliance

* How QualysGuard PCI simplifies PCI compliance

via e-Book: “PCI for Dummies”.

How to choose and use source code analysis tools

The high cost of finding and patching application flaws is well known. Wouldn’t it be cheaper to write secure code in the first place?

One of the fastest growing areas in the software security industry is source code analysis tools, also known as static analysis tools. These tools review source code (or in Veracode’s case, binary code) line by line to detect security vulnerabilities and provide advice on how to remediate problems they find – ideally before the code goes into production. (See How to Evaluate and Use Web Application Scanners for tools and services that search for vulnerabilities as an outside attacker might.)

The entire software security market was worth about $300 million in 2007, according to Gary McGraw, CTO at Cigital, Inc., a software security and quality consulting firm in Dulles, VA. McGraw estimates that the tools portion of that market doubled from 2006 to 2007 to about $180 million. About half of that is attributable to static analysis tools, which amounted to about $91.9 million, he says.

And no wonder; according to Gartner, close to 90% of software attacks are aimed at the application layer. If security were integrated earlier in the software development lifecycle, flaws would be uncovered earlier, reducing costs and increasing efficiency compared with removing defects later through patches or never finding them at all, says Diane Kelley, founder of Security Curve, a security consultancy in Amherst, N.H. “Although there is no replacement for security-aware design and a methodical approach to creating more secure applications, code-scanning tools are a very useful addition to the process,” she says.

Despite the high degree of awareness, many companies are behind the curve in their use of static analysis tools, Kelley says, possibly due to the big process changes that these tools entail.

Key decisions in source code analysis

1) Should you start with static tools or dynamic tools or use both?

In addition to static analysis, which reviews code before it goes live, there are also dynamic analysis tools, which conduct automated scans of production Web applications to unearth vulnerabilities. In other words, dynamic tools test from the outside in, while static tools test from the inside out, says Neil McDonald, an analyst at Gartner.

Many organizations start with dynamic testing, just to get a quick assessment of where their applications stand, McDonald says. In some cases, the groups that start this initiative are in security or audit compliance departments and don’t have access to source code. The natural second step is to follow up with static analyzers, enabling developers to fix the problems found by dynamic analysis tools. Some companies continue using both, because each type yields different findings.

An important differentiator between the two types is that static analyzers give you the exact line of code causing the problem, while dynamic analyzers just identify the Web page or URL causing the issue. That’s why some vendors offer integration between the two types of tools.
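That difference in granularity can be shown with a toy scanner. The single regex rule and all names below are hypothetical; real static analyzers apply far richer rule sets over parsed code rather than raw text.

```python
# Toy static scanner illustrating the file-and-line granularity
# described above. The rule and all names are invented assumptions.

import re

def static_scan(filename, source):
    """Return (file, line_number, snippet) tuples for each finding."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        # Naive heuristic: flag SQL built by string concatenation.
        if re.search(r'execute\(.*\+', line):
            findings.append((filename, lineno, line.strip()))
    return findings

code = '''db.connect()
cursor.execute("SELECT * FROM users WHERE id=" + user_id)
db.close()'''

for finding in static_scan("app.py", code):
    print(finding)  # exact file and line, not just a page or URL
```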

According to the chief scientist at a large software vendor, dynamic assessment tools “tend to be brute force,” he says. “You have to hit every parameter to find the vulnerabilities, whereas static tools investigate the whole landscape of the application.” He recently chose a code scanner from Ounce Labs, after outsourcing the work to Cigital since 2006. He became interested in application security when customers began requiring PCI DSS certification. He plans to add in dynamic testing in the future, but the static analysis tool is the cornerstone of his application security program.

2) Do you have the source code?

Most static analyzers scan source code, but what happens if you want to analyze third-party software or code written so long ago that you only have the executable? In that case, Veracode, Inc. offers binary code scanning through a software as a service platform. “A vendor may not be willing to give you source code, but they will give you executables or binary,” Kelley says.

At the Federal Aviation Administration, Michael Brown, director of the Office of Information Systems Security, says he chose to use Veracode’s services this year because of the amount of vendor-written code the FAA anticipated using as a result of its modernization of the national airspace system. Brown says he wanted to ensure the code was not only functionally correct and of high quality but also secure. He wanted a service rather than a tool to reduce the need for training. So far, the results have been eye-opening, he says. “A lot of the code didn’t really take security into account,” he says. “There were cases of memory leaks, cross-site scripting and buffer overflows that could have been a cause for concern.”

3) What do you currently use for software quality?

Some tool vendors, such as Coverity, Inc., Klocwork, Inc., Parasoft Corp. and Compuware Corp., originated in the quality-testing arena and have added security capabilities, whereas vendors like Ounce and Fortify Software, Inc. were designed solely for security. It’s worthwhile to check into the quality tools you already use to see if you can leverage the existing relationship and tool familiarity. You should also consider whether it’s important to your organization to have the two functions merged into one tool in the long term, McDonald says. (See Penetration Testing: Dead in 2009? for more on the interplay of quality assurance and vulnerability detection.)

Source code analysis tools: Evaluation criteria

— Support for the programming languages you use. Some companies support mobile devices, while others concentrate on enterprise languages like Java, .Net, C, C++ and even Cobol.

— Good bug-finding performance, using a proof of concept assessment. Hint: Use an older build of code you had issues with and see how well the product catches bugs you had to find manually. Look for both thoroughness and accuracy. Fewer false positives mean less manual work.

— Internal knowledge bases that provide descriptions of vulnerabilities and remediation information. Test for easy access and cross-referencing to discovered findings.

— Tight integration with your development platforms. Long-term, you’ll likely want developers to incorporate security analysis into their daily routines.

— A robust finding-suppression mechanism to prevent false positives from reoccurring once you’ve verified them as a non-issue.

— Ability to easily define additional rules so the tool can enforce internal coding policies.

— A centralized reporting component if you have a large team of developers and managers who want access to findings, trending and overview reporting.
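The suppression criterion above can be sketched as a baseline of fingerprinted, verified non-issues that later scans check against. The `(file, rule, code)` fingerprint scheme here is an assumption; each real tool uses its own.

```python
# Sketch of a finding-suppression baseline: verified false positives
# are fingerprinted so they stay suppressed in later scans. The
# fingerprint scheme and all data are illustrative assumptions.

import hashlib

def fingerprint(finding):
    key = f"{finding['file']}:{finding['rule']}:{finding['code']}"
    return hashlib.sha256(key.encode()).hexdigest()

def suppress(findings, baseline):
    """Drop findings already verified as non-issues."""
    return [f for f in findings if fingerprint(f) not in baseline]

findings = [
    {"file": "auth.py", "rule": "hardcoded-secret", "code": 'TOKEN = "x"'},
    {"file": "db.py",   "rule": "sql-injection",    "code": "execute(q + uid)"},
]
baseline = {fingerprint(findings[0])}  # first finding verified as safe

print([f["file"] for f in suppress(findings, baseline)])  # ['db.py']
```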

Do’s and Don’ts of source code analysis

DON’T underestimate adoption time required. Most static analysis projects are initiated by security or compliance, not developers, who may not immediately embrace these tools. Before developers get involved, McDonald suggests doing the legwork on new processes; planning integration with other workflows like bug-tracking systems and development environments; and tuning the tool to your unique coding needs. “Don’t deploy to every developer at once,” he adds. “Ideally, you’ll get someone who wants to take on a competency role for security testing.”

The chief scientist at the large software vendor has developed an application security awareness program that includes training on common vulnerabilities, through podcasts and videocasts. Once he builds up awareness, he’ll educate developers on secure coding standards. To complete the circle, he’ll introduce Ounce’s static code analysis tool to enforce the standards and catch vulnerabilities “so it’s a feedback loop,” he says. (See Rob Cheyne Pushes for Developer Security Awareness for a look at a similar agenda.)

DO consider using more than one tool. Collin Park, senior engineer at NetApp, says the company uses two code analysis tools: Developers run Lint on their desktops, and the company uses Coverity each night to scan all completed code. “They catch different things,” he explains. NetApp began using these tools when its customer base shifted to enterprise customers who had more stringent requirements. While Coverity is better at spotting vulnerabilities such as memory leaks, Lint catches careless coding errors that developers make and seems to run faster on developer desktops, Park says.

According to Kelley, organizations typically implement static analyzers at two stages of the development process: within the development environment, so developers can check their own code as they’re writing, and within the code repository, so it can be analyzed at check-in time. The chief scientist uses this method. “In the first scan, if the engineer takes every finding and suppresses them, a milestone scan will catch those and generate a report,” he says.

DO analyze pricing. Vendors have different pricing strategies, McDonald says. For instance, while all continuously add information to their libraries about the latest vulnerabilities, some charge extra for this, while others include it in the maintenance fee, he says. In addition, some vendors charge per seat, which can get expensive for large shops and may even seem wasteful for companies that don’t intend to run the scanner every day, while others charge per enterprise license. Additionally, some vendors charge for additional languages, while others charge one price for any language they support, McDonald says.

DO plan to amend your processes. Tools are no replacement for strong processes that ensure application security from the beginning, starting with defining requirements, which should focus on security as much as functionality, according to Kelley. For instance, a tool won’t tell you whether a piece of data should be encrypted to meet PCI compliance. “If a company just goes out and buys one of these tools and continues to do everything else the same, they won’t get to the next level,” she says.

The chief scientist says it’s also important to determine what will happen when vulnerabilities are found, especially because the tools can generate thousands of findings. “Does the workflow allow them to effectively analyze, triage, prioritize or dispose of the findings?” he says. He is working with Ounce to integrate the system better with his current bug-tracking system, which is Quality Center. “It would be great to right-click on the finding to automatically inject it into the bug-tracking system,” he says.

At NetApp, Park has reworked existing processes to ensure developers fix flagged vulnerabilities. As part of a code submit, developers do a test build, which must succeed or the code can’t be checked in. Then, when they check in code, an automated process starts an incremental build. If that build fails, a bug report is filed, complete with the names of developers who checked in code before the last build. “Developers are trained to treat a build failure as something they have to look at now,” Park says.
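A minimal sketch of such a gate, assuming a stand-in commit log and a bug-filing callback rather than any real build system's API:

```python
# Sketch of a build gate in the spirit of the process described above:
# a failed incremental build files a bug naming everyone who checked
# in since the last good build. The log format and file_bug callback
# are invented stand-ins, not a real build system's interface.

def authors_since(log, last_good_id):
    """Everyone who checked in code after the last good build."""
    return [c["author"] for c in log if c["id"] > last_good_id]

def on_build_result(ok, log, last_good_id, file_bug):
    if not ok:
        file_bug(authors_since(log, last_good_id))
    return ok

filed = []
log = [{"id": 101, "author": "alice"}, {"id": 102, "author": "bob"}]
on_build_result(False, log, last_good_id=100, file_bug=filed.append)

print(filed)  # [['alice', 'bob']]
```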

NetApp also created a Web-based chart that’s automatically updated each night, to track which managers have teams that were issued Lint or Coverity warnings and whether they were cleared.

DO retain the human element. While the tools will provide long lists of vulnerabilities, it takes a skilled professional to interpret and prioritize the results. “Companies don’t have time to fix every problem, and they may not need to,” Kelley says. “You have to have someone who understands what is and is not acceptable, especially in terms of a time vs. perfection tradeoff.”

The chief scientist calls this “truly an art form” that requires a competent security engineer. “When the tool gives you 10,000 findings, you don’t want someone trying to fix all those,” he says. “In fact, 10,000 may turn out to just be 500 or 100 vulnerabilities in actual fact.”

Park points out an instance where the Coverity tool once found what it called “a likely infinite loop.” On first glance, the developer could see there was no loop, but after a few more minutes of review, he detected something else wrong with the code. “The fact that you get the tool to stop complaining is not an indication you’ve fixed anything,” Park says.

DON’T anticipate a short scan. NetApp runs scans each night, and because it needs to cover thousands of files and millions of lines of code, it takes roughly 10 hours to complete a code review. The rule of thumb, according to Coverity, is that for each hour of build time, you should allow two hours for the analysis to complete. Coverity also enables companies to do incremental runs, so that they’re not scanning the entire code base but just what changed in the nightly build.
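That rule of thumb turns into a quick estimate; the `changed_fraction` parameter for incremental runs below is illustrative, not a documented Coverity setting.

```python
# Quick arithmetic on the stated rule of thumb: two hours of analysis
# per hour of build time. changed_fraction models an incremental run
# and is an invented knob for illustration only.

def analysis_hours(build_hours, changed_fraction=1.0):
    return 2 * build_hours * changed_fraction

print(analysis_hours(5))       # full scan of a 5-hour build: 10.0
print(analysis_hours(5, 0.1))  # incremental run over 10% of the code
```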

DO consider reporting flexibility. At the FAA, Brown gets two reports: an executive summary that provides a high-level view of vulnerabilities detected and even provides a security “score,” and a more detailed report that pinpoints which line of code looks troublesome and the vulnerability that was detected. In the future, Brown would like to build into vendor contracts the requirement that they meet a certain security score for all code they develop for the FAA.

DON’T forget the business case. When Brown first wanted to start reviewing code, he met with some pushback from managers who wanted a defined business need. “You’ve got program managers with schedules to meet, and they can view this as just another bump in the road that’s going to keep them from making their milestones,” he says.

Brown created the business case by looking to independent sources like Gartner and Burton Group for facts and figures about code vulnerability, and he also ran some reports on how much time the FAA was dedicating to patch management.

The chief scientist justified the cost of the Ounce tool by taking the total cost of the product and comparing that to the effort involved in a manual review. “With millions of lines of code, imagine how many engineers it would take to do that, and by the way, we want to do it every week,” he says. “The engineers would fall down dead of boredom.”

Mary Brandel is a freelance writer based outside of Boston.

via http://www.networkworld.com/news/2009/012009-how-to-choose-and-use.html.

Heartland Payment Systems Reports Breach

Credit card processing company Heartland Payment Systems disclosed today it suffered a malware attack last year. The discovery was made after officials from Visa and MasterCard reported suspicious activity involving processed card transactions.

Payments processor Heartland Payment Systems disclosed today it was hit with a malware attack last year that may have resulted in a large cache of financial data being compromised.

The company said it launched an investigation after officials at Visa and MasterCard reported suspicious activity surrounding processed card transactions. In response, Heartland enlisted forensic auditors to conduct an investigation. Last week, the investigation uncovered malicious software that compromised data that crossed Heartland’s network, Heartland officials said.

In a statement released today, Heartland declared the breach had been contained. The company further added that no merchant data or cardholder social security numbers, unencrypted personal identification numbers (PINs), addresses or telephone numbers were involved in the breach. None of Heartland’s check-management systems were involved either, officials added.

“We found evidence of an intrusion last week and immediately notified federal law enforcement officials as well as the card brands,” said Robert H.B. Baldwin, Jr., Heartland’s president and chief financial officer, in the statement. “We understand that this incident may be the result of a widespread global cyber fraud operation, and we are cooperating closely with the United States Secret Service and Department of Justice.”

In the wake of the incident, Heartland has announced plans to implement a program designed to flag network anomalies in real-time and help law enforcement catch cyber-criminals. The company has also created a Web site – www.2008breach.com – to provide information about the situation. Cardholders are not responsible for unauthorized fraudulent charges made by third parties.

“Heartland apologizes for any inconvenience this situation has caused,” continued Baldwin. “Heartland is deeply committed to maintaining the security of cardholder data, and we will continue doing everything reasonably possible to achieve this objective.”

Based in Princeton, NJ, Heartland provides credit, debit, prepaid card processing, payroll, check management and payments solutions to more than 250,000 business locations nationwide.

via eWeek.

HIPAA: Getting in tune (SC Magazine US)

The heat has been turned up for those charged with bringing their institutions into HIPAA compliance, reports Greg Masters.

In the good old days, the IT staffs at many hospitals did what they could to protect their patients’ privacy, but there was little in the way of requirements or enforcement. With increasing instances of data breaches and the introduction of HIPAA and other state and federal mandates, however, closer attention is being paid to these privacy concerns.

The challenges are daunting. Access to health care records can come from any number of places – from hospital staff tapping into the database, to malicious outsiders hacking their way in to mine valuable personal information.

Larry Whiteside Jr., chief information security officer of the Visiting Nurse Service of New York (VNSNY), which provides home health care and community-based health services in the five boroughs of New York City and Nassau County on Long Island, argues that HIPAA has little bite.

“The infractions that companies have been fined for have been due to publicly identified breaches that affect public confidence in the company and health care systems,” he says. “HIPAA has to step into those situations with a big hand. For those of us that have not had a breach, but take security seriously, we focus on things that deal with reducing risk. That’s the underlying factor for everything, RISK.”

All HIPAA does, he adds, is provide some general security guidelines that are specific to the health care vertical. “I don’t know one entity that has had a government agency come through and hit them with fines and sanctions due to not meeting HIPAA regulations. The reality is that everyone wants to be secure, everyone wants to reduce risk, everyone wants to be compliant (eventually).”

It’s just that the roads companies take to get there are all different, says Whiteside. Some entities hit potholes along the way (the breaches we have seen); those that avoid the potholes still hit speed bumps (budget, executive buy-in, etc.), he says.

“Regardless of the obstacle that is hit or the direction we started in, we are all on a road with a big secure sign at the end,” he says.

Despite the caveats, all the regulations, including HIPAA, have benefits and those are the guidance that they provide, he says, adding that the Visiting Nurse Service of New York takes HIPAA seriously.

But, the organization, the largest not-for-profit home health care agency in the nation, is not completely focused on just being HIPAA compliant.

“My more overall goal is to be secure,” says Whiteside. “Thus, I will inherently be compliant through being secure.”

There is no one-stop shop to security, however.

“Having regulations allows specific industries a way to focus in on the things that should matter to them and allow them to prioritize what should be done first based on their industry and regulations,” says Whiteside. “Am I saying that in health care one should meet all their HIPAA requirements before addressing anything else? No, I am saying that when you look at becoming a secure organization and you begin trying to determine what to address first, things like HIPAA help you make that decision.”

Across the border

Our neighbors to the north in Canada have similar compliance laws in place, many based on U.S. legislation, says Bobby Singh, director of information security at Smart Business Systems for Health Agency eHealth Ontario, an agency of the Ontario Ministry of Health and Long-Term Care.

Singh, who has the advantage of having worked in the states before moving to Canada several years ago, says Canadians are much more cognizant than their U.S. neighbors of privacy issues and how personal information is handled.

“More proactive steps are taken in Canada,” he says. While HIPAA allows some flexibility and provides suggested guidelines more than prescriptive steps, Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA), a federal law, mandates how entities collect, use and disseminate sensitive personal information.

In addition, there are some provincial acts in the health care space that replace the federal regulations – for example, the Personal Health Information Protection Act (PHIPA), in Ontario where Singh works.

But even though matters are somewhat simplified in Canada by having only one state-run HMO, hospitals and health care facilities in both countries face similar issues in protecting patient data, says Singh.

While the system may be a bit simpler in Canada, there is also a political component, since the state runs the health care system.

“There’s somewhat of a big brother approach,” says Singh. “The government defines what doctors and patients should be doing as far as managing personal data.”

The prescription, he adds, just may be a higher level of priorities. “Hospitals need to talk to each other so data can be electronically transferred.”

He’s optimistic that as the process evolves, those in charge of the health care system will gain a better understanding of how to use IT for efficiencies.

more at HIPAA: Getting in tune – Print Article – SC Magazine US.

Motorola’s New Wireless Firewall

The Enterprise Mobility arm of Motorola today announced a new wireless firewall designed to protect retail clients from the kinds of WLAN attacks to which firewalls optimized for wired infrastructure may be blind.

Calling it “the industry’s first wireless firewall,” Motorola says its solution meets the requirements of the latest Data Security Standards enforced by the Payment Card Industry (PCI) by providing clean separation between wireless and wired networks. Used in conjunction with Motorola’s AirDefense wireless intrusion prevention systems (WIPS), the firewall protects sensitive information, such as credit card data, by employing “unparalleled” traffic inspections at every network layer.

Motorola says its “enhanced stateful Wireless Firewall” is easy to deploy and integrates with leading enterprise authentication systems.

“There are a multitude of challenges for retailers to protect against,” said Kevin Goulet, senior director of product marketing, Enterprise WLAN Division of Motorola’s Enterprise Mobility Business unit. “Threats into networks are becoming commonplace. As the network expands, it is making the edge of that network more vulnerable. We saw the need for additional protection, not only protection between the outside Internet and the inside network, but also from the killers at the edge.”

The TJX scandal and other high-profile retail data breaches attributed to lax WLAN security have made retailers more aware of risks, and the PCI requirements help ensure greater levels of security are enforced.

“Our retailers are aware that their networks are vulnerable and need to be protected,” said Goulet. “The headlines over the past year or two, and other areas, have made them more aware than other industries. There are also retailer requirements for passing credit cards over a WLAN; they must be PCI-compliant to maintain their ability. We see the retail industry as being more aware and proactive in protecting than other industries. They see everything from DOS attacks to rogue devices.”

The fine for businesses recklessly transmitting customer credit card and personal information can be up to $300 per compromised record.

“Providing security at the edge, at the wired/wireless demarcation point, is not enough in today’s wireless enterprise deployments,” said Goulet. “We are offering location-based access control and policy enforcement.”

In other words, retailers can leverage the locationing engine built into the Motorola firewall to enforce user identity-, role- and location-based security policies, which helps to keep access to sensitive consumer data under control. Retailers can have one policy for an employee who is accessing the network externally by connecting through mesh APs, for example, and a different security policy for employees inside headquarters using the internal network.

“This allows way more granularity, depending on role, location, time of day, etcetera,” said Goulet.
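To make the idea concrete, the kind of role-, location- and time-based policy evaluation Goulet describes can be sketched in a few lines. This is an illustrative mock-up only: the policy table, roles, locations, and the `is_allowed` function are invented for this example and do not reflect Motorola's actual firewall configuration or API.

```python
from datetime import time

# Hypothetical policy table: each entry grants a role, at a location,
# access to a set of resources during an allowed time window.
POLICIES = [
    {"role": "clerk", "location": "store_floor",
     "hours": (time(8, 0), time(22, 0)), "resources": {"pos"}},
    {"role": "admin", "location": "headquarters",
     "hours": (time(0, 0), time(23, 59)), "resources": {"pos", "cardholder_db"}},
]

def is_allowed(role, location, now, resource):
    """Return True if any policy grants `role` at `location`
    access to `resource` at time `now`; otherwise deny by default."""
    for p in POLICIES:
        start, end = p["hours"]
        if (p["role"] == role and p["location"] == location
                and start <= now <= end and resource in p["resources"]):
            return True
    return False  # default deny

# A clerk on the store floor reaches the POS during business hours,
# but never the cardholder database.
print(is_allowed("clerk", "store_floor", time(12, 0), "pos"))
print(is_allowed("clerk", "store_floor", time(12, 0), "cardholder_db"))
```

The default-deny structure is the key design point: access is granted only when an explicit policy matches on every dimension (role, location, time, resource), which is what makes the per-dimension granularity Goulet mentions enforceable.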

Motorola’s wireless LAN access points, switches, and mobile computing devices also support the IEEE 802.11i security standard, as mandated by the new PCI Data Security Standard (DSS) version 1.2.

“This is a new approach to firewalling that we think is right for the wireless network. In the old way, a wired firewall is acting as the demarcation point between the wired and wireless network, but this ignores the other vulnerability, which is the wireless side,” said Goulet. “This is different than AirDefense wireless intrusion detection; this is a firewall on the AP to protect the network from the attacks on the wireless side.”

Beta customers are currently testing the firewall in the field. Existing customers who have a service agreement will not have to pay to upgrade their software to include the firewall, and new customers will not see a bump in the cost of Motorola APs.

“We want to offer more features at the same price point,” said Goulet. “It really ties in to when Motorola and AirDefense came together. We have a vision that will make ‘wireless’ and ‘security’ synonymous. We began by bringing WIPS into the wireless LAN infrastructure and we followed that up with the secure AP that had a full-time traffic cop on one radio in the AP, providing access to clients and devices. This is the third leg of that–providing firewalling and security in the network. It helps us execute that vision.”

* For more on Motorola, read “Review: Motorola RF Management Suite (Part 1),” “Review: Motorola LANPlanner (RF Management Suite, Part 2),” “Aruba/Motorola Patent Dispute Slogs On.”

* For more on Wi-Fi in the retail space, read “Retailers Need to Shore Up Defenses,” “WLAN Security Service Aims to Boost PCI Compliance,” and “RF Barrier Helps Deter Eavesdroppers.”

* To learn more about WLANs, read “Understanding WLANs: Architecture 101.”

* For more on AirDefense, read “Motorola’s AirDefense Acquisition Complete,” “Motorola’s New Indoor/Outdoor Management Solution,” and “Motorola Buys AirDefense.”

via Motorola’s New Wireless Firewall.