2 Jun 2016

Should Companies Be Required to Share Information About Cyberattacks?
Disclosure of data breaches might help strengthen cybersecurity for everyone. But keeping attacks and responses secret may lead to quicker fixes and less reputational harm.
 When companies disclose cyberattacks, the effects can be mixed.
Damage from cyberattacks comes in layers. Direct harm, in the form of theft and other losses. Damage to the reputation of the companies affected when news gets out. And the slow erosion of confidence in overall online security—a malaise that grows worse with each new breach.

How do we limit the damage and, more important, restore confidence in online security? That is a question that bedevils policy makers as much as it does network analysts and computer scientists.
Requiring companies to report when they’ve been attacked and to share details about how it was done might help strengthen cyberdefenses for everyone. But it can also complicate the process of trying to keep systems secure, and injure the companies’ reputations in the meantime. Conversely, allowing breached companies to work on solutions in secret may fix problems quickly and prevent reputational harm. But keeping attacks secret may also increase the danger for others.
Making the case for required disclosure is Denise Zheng, deputy director and senior fellow in the Strategic Technologies Program at the Center for Strategic and International Studies. Andrea Castillo, program manager in the Technology Policy Program at George Mason University’s Mercatus Center, argues against such a mandate.
YES: Companies Now Are Flying Blind When Closing Security Gaps
By Denise Zheng
Last year, losses or thefts of more than half a billion identities were reported, including the largest data breach in history, involving more than 191 million U.S. voter-registration records.
At the same time, more companies are choosing to keep the scope of the breaches they suffer secret.
Both trends are worrisome, and are likely the tip of the iceberg.
Underreporting of cyberattacks contributes to an incomplete understanding of the magnitude of the threat. It means we are relying on anecdotal information to determine effective defenses against cyberthreats.
Breaches that are disclosed generally involve loss of personally identifiable information or medical records, because reporting of this kind of attack is mandated by state and federal data-breach notification laws.
But the laws we have can best be described as a patchwork of requirements inadequate to protect consumers and burdensome for companies. Indeed, requirements vary across the 47 states that have breach-notification laws, and federal rules are inconsistent across sectors and lack specificity. The Securities and Exchange Commission clarified in 2011 that “material” cyberrisks and intrusions must be disclosed to investors. But the SEC didn’t offer formal guidance on what is “material.” Such vagueness means that most public companies file generic statements about cyberrisk, and many still don’t disclose intrusions at all.
The Cybersecurity Act signed into law in December, as part of a broader spending bill, creates a framework for voluntary sharing of cyberthreat information. This is a significant step, but it, too, doesn’t go far enough. Nothing in the law compels companies to disclose incidents or technical details about breaches. There is liability protection against suits resulting from efforts by companies to monitor their own networks and share threat information. But there is no liability protection for companies sued as a result of a breach.
Of course, even if there were liability protection in the case of a breach, companies could still suffer reputational harm by reporting the breach. But the benefits to society by requiring reporting would outweigh the costs to the individual companies. Requiring not only cyber incidents to be reported but the tactics and techniques used by hackers would create greater transparency, allowing businesses, policy makers and consumers to make more informed decisions about how to manage cyberrisk. It would enable decision makers in companies and government to assess risk as well as progress.
Not all incidents should be disclosed, and not all the details should be made public. We wouldn't want to give malicious hackers a road map to conduct additional attacks. But major attacks—those that have significant consequences for the economy, public health and safety, or national security—should be disclosed to relevant government agencies and to downstream stakeholders that may be affected by the incident. Disclosure creates incentives for improvement. The alternative is to fly blind when it comes to cybersecurity, which is the approach we use now without much success.
After the congressional and presidential elections in the fall, Congress and the new administration should make a serious effort to overhaul state and federal data-breach-notification laws, harmonize requirements and toughen the standard for data-breach notification.
Congress should also enact new laws to encourage more disclosure of incidents as well as relevant technical details to enable IT vendors and companies to close gaps and vulnerabilities that attackers could use to conduct similar intrusions on others.
Ms. Zheng is deputy director and senior fellow in the Strategic Technologies Program at the Center for Strategic and International Studies. She can be reached at reports@wsj.com.
NO: It Would Limit Companies’ Ability to Thwart Attacks
By Andrea Castillo
In the ever-evolving world of cybersecurity, a blanket mandate that organizations must report incidents and share cyberattack information could ultimately create more problems than it solves.
Context is the key. In some cases, not reporting an attack can be a good strategy. Security experts may be able to better understand or even disarm a cyberattack by taking their time to quietly observe the infiltrators’ activities. This kind of approach cannot work if organizations are forced to pre-emptively publicize an attack, or if the policing effort that follows involves too many parties. In addition, a mandate requiring companies to share cyberattack information would effectively criminalize this kind of quiet security strategy and limit organizations’ ability to counteract cyberattacks effectively on their own.
It is important to view the government as a trusted resource for collaboration when appropriate—for instance, when an attack involves a nation-state or terrorist group. But the amount of work that would result from a system of routine mandatory reporting of data breaches may have the effect of diverting valuable resources from security toward compliance. The likely information overload that would result could inundate analysts with reams of unhelpful data while useful information about real threats gets lost in the shuffle.

Some supporters of compulsory reporting think that it will bring about greater transparency in the world of cybersecurity, leading to more cooperation and enhanced security for all. But the federal government itself is often unable to properly act on cyberthreat information that is shared even among its own offices. And the government’s idea of information sharing is usually asymmetric. Indeed, federal agencies are often unwilling to share information with private entities. In addition, some intelligence agencies, such as the National Security Agency, when they find “zero-day” vulnerabilities—system weaknesses that network security analysts aren’t aware of and so have spent no time working on—stockpile them for their own use rather than report them to the appropriate party for patching.
Advocates of required reporting were pleased when Congress in December passed the Cybersecurity Act, a bill that offers liability protections to corporations that monitor their information systems for security and share information about threats. Lawmakers passed the bill in the face of strong opposition from computer-security experts and civil-liberties advocates who argued that it threatened to undermine Americans’ privacy while doing little to actually address our biggest cybersecurity challenges. Policy makers who backed the Cybersecurity Act assured the public it was only a voluntary program.
But the law has done nothing to make us safer from cyberattacks. And less than half a year later, the specter of mandatory reporting is still with us as its supporters appear eager to continue the debate.
Required reporting would be a clear increase of government power and, in some ways, inconsistent with the spirit of the Cybersecurity Act. While that law is intended to limit the liabilities of those organizations that cooperate, mandatory information sharing would seek to force cooperation by extending liabilities to all.
There is much that can be done to improve U.S. cybersecurity without requiring companies to report cyberattacks. The government should first focus on correcting policy missteps from the past. It should promote the use of strong encryption and reform counterproductive laws like the Computer Fraud and Abuse Act that chill security research. Requiring organizations to share information with hack-prone federal agencies under threat of penalty will only add to the current contradictory mess of policies.