In a landmark policy shift, the DoJ recently announced it will no longer prosecute “good faith” hackers under the CFAA. Danny Bradbury considers whether this will be enough to protect security researchers
When US lawmakers passed the Computer Fraud and Abuse Act (CFAA) in 1986, CompuServe was still big and CD-ROMs were just getting started. The term ‘cyberspace’ was brand new, and the first web page was still years away. More than three decades on, security advocates are still criticizing the law for being ill-defined and over-scoped, with egregious penalties.
Now, the Department of Justice (DoJ) wants to change that. It has just issued a policy document with gentler prosecution guidelines aimed at creating a safer law for security researchers. But does it go far enough?
Shall We Play a Game?
It was 1983, and the movie WarGames was in theaters. Matthew Broderick played a young hacker who broke into NORAD’s computer system and almost triggered a nuclear war. President Ronald Reagan saw the film and loved it. However, others on the Hill were appalled. Congressmen showed clips from the movie on the House floor.
Lawmakers acted quickly, first passing the Comprehensive Crime Control Act in 1984, which defined a series of basic computer crimes. In 1986, Congress passed an amendment, the CFAA, that extended the original act’s crimes and penalties. These now included accessing a computer without authorization and altering, damaging or destroying information on systems of federal interest.
The law is vague and ill-defined, warns Andrew Crocker, senior staff attorney at the Electronic Frontier Foundation. “These ideas of unauthorized access or authorization are not defined in the statute,” he says. “It’s created a lot of headaches for courts and for people charged with interpreting that statute.”
Since then, lawmakers have updated the CFAA multiple times. They prohibited unauthorized access that obtains information of any kind, expanded the types of computers covered by the law and broadened the range of acts – now including even threats to compromise data – that can be prosecuted.
Changing How the DoJ Sees the Law
Unpicking the entire CFAA and rewriting it would be a mammoth legal and political task. Instead, the DoJ has published a policy document advising prosecutors on how to interpret the law.
Issued on May 19, 2022, the policy asserts that good-faith security research should not be charged. The Department didn’t respond to our requests for comment, but at the announcement, Deputy Attorney General Lisa Monaco said: “The department has never been interested in prosecuting good-faith computer security research as a crime.”
Under the new policy, good faith means accessing a computer for testing, investigating or correcting security flaws without harming individuals or the public. The information from the test should be used primarily to promote the device’s security.
The policy change stems in part from a SCOTUS ruling, Van Buren v. United States, which disallows CFAA prosecutions simply for violating an organization’s terms of service. Under the policy, a researcher could embellish an online dating profile or create fake accounts on hiring or social media websites without fear of prosecution, even if the site’s terms and conditions forbid it.
“It appears there is a concerted effort to focus CFAA prosecutions on bright-line violations”
The Tragic Effects of a Franken-Law
It’s good to see the DoJ involved in these discussions, say lawyers. “It appears there is a concerted effort to focus CFAA prosecutions on bright-line violations and prioritize cases based on the level and tangibility of the harm caused,” says Dan Gelb, an attorney at Gelb & Gelb and a member of the National Association of Criminal Defense Lawyers (NACDL).
Cathy R. Gellis, an attorney who advises clients on technology policy, is less sanguine. The DoJ’s move is a step in the right direction, she says, but dangers remain: the policy doesn’t change the law itself.
“It’s Frankenstein’s monster,” says Gellis. “It just adds language upon language without actually making sure that all the language on the books is really a coherent policy response to the policy challenge.”
She cites the US federal government’s case against Aaron Swartz as one of the law’s biggest misuses. The young programmer and activist was arrested in 2011 after using an MIT guest account to download articles from the copyright-protected JSTOR service. Prosecutors charged him under laws including the CFAA, exposing him to a combined maximum sentence of 35 years in prison. Swartz took his own life in early 2013.
“There is nothing about that prosecution that appears to be obviated by this new policy,” Gellis says, adding that the case escalated when prosecutors perceived harm to intellectual property. The new policy fails to separate security law from copyright law, enabling prosecutors to still use the CFAA as a copyright enforcement tool.
Indeed, the policy’s definition of ‘good faith’ comes straight from the 1998 Digital Millennium Copyright Act (DMCA), which added a new chapter to the United States Code. That chapter’s Section 1201 made it illegal to circumvent digital controls used to enforce copyright protections.
Relying on a common legal definition helps avoid confusion, but “it’s a problem in general that the Copyright Office is speaking to security research at all because that’s not a job and it’s not a competency,” says Gellis. She argues for decoupling security and copyright law. “Issues of intellectual property tend to sabotage better cybersecurity,” she warns.
The Danger of Civil Use
The CFAA’s potential for civil use also weakens the effect of the DoJ’s policy, warn experts. Companies can pursue security researchers under the same law without following the DoJ’s policy. Worse still, a court ruling in a civil case could set a precedent that public prosecutors would then have to follow.
Moreover, the policy includes language that ultimately defers to company rules. The Department won’t prosecute researchers for creating accounts in violation of a company’s terms, the policy says. “However, when authorizers later expressly revoke authorization – for example, through unambiguous written cease and desist communications that defendants receive and understand – the Department will consider defendants from that point onward not to be authorized.”
In any case, reinterpreting the CFAA itself isn’t a silver bullet, warns Crocker. Cases against security researchers are still possible under other laws, including the DMCA.
“Nearly all states have their own version of the CFAA”
Digital rights advocates warned after the DMCA’s introduction that it would have a chilling effect on security research by making it illegal to tinker with protection systems. The FBI arrested ElcomSoft’s Dmitry Sklyarov in 2001 after he spoke at the DEFCON hacking conference about a software tool that cracked Adobe’s ebook format. Other cases have seen researchers drop book publications and even leave their field of research altogether after being threatened by vendors and industry groups. Activists have been forced to lobby the Copyright Office every three years for product exemptions from the Section 1201 language.
The Specter of State Law
Another worry is the possibility of an over-zealous and under-educated state official relying on local law. “Nearly all states have their own version of the CFAA,” Crocker says. “They are frequently modeled on the CFAA, but they also frequently go further or are worded just differently enough that it isn’t certain exactly how broad in scope they are.”
State-level officials have threatened to prosecute people conducting good-faith research in the past. Missouri Governor Mike Parson threatened to prosecute a St. Louis Post-Dispatch journalist who reported security holes in a web application at the state’s Department of Elementary and Secondary Education. All the reporter did to find the flaw, which exposed over 100,000 teachers’ Social Security numbers, was to view the site’s source code in his browser.
Ignoring the backlash, Parson doubled down on his claims. Nevertheless, the Cole County prosecutor’s office refused to pursue the case, calling it an inappropriate use of taxpayer money.
Things didn’t go as well for Justin Wynn, a senior security consultant at Coalfire Systems. The security company tasked Wynn and a colleague with conducting a penetration test on courthouse systems in the state of Iowa.
The duo engaged in hacking and phishing activity along with physical security tests, all within the scope of the contract. Unlocked doors gave them full access to physical systems, including a judge’s open laptop, complete with a password written nearby. They also found an application vulnerability that allowed them to deanonymize jurors, says Wynn.
All was going well until the pair was caught breaking into a courthouse in Dallas County, Iowa. The county sheriff had them arrested and pushed for prosecution, even though the State of Iowa had hired them to conduct the project.
The two were charged with trespass and burglary and faced up to seven years in jail. Coalfire paid for their personal lawyers, and its CEO came out in their defense. The county reduced the charges to trespass and eventually dropped those too, but offered no public apology.
Now, Wynn’s mugshot and fingerprints are on public record. An avid hunter, he was told he couldn’t buy a rifle because of his presence in the National Instant Criminal Background Check System (NICS) database. Even now, the two are pursuing a countersuit against the sheriff and the county.
“It definitely had lasting effects emotionally,” Wynn says. “For the workplace employment, there are always lasting records out there as a part of this.” Admitting to an arrest makes it harder to get client approval for new projects, for example.
For Wynn, the CFAA policy change isn’t enough. Even with the policy in place, he worries about being left vulnerable to misuse of the law by someone with a grudge or a lack of knowledge in the area, due in part to the many gray areas in a pen testing engagement. For example, he is often required to sign on as an employee when conducting pen tests, agreeing not to hack systems even though his contract requires him to do exactly that. “I think there are still gaps in the framework here that could be abused. So I’d like to see it go a little bit further,” he says.
Wynn would like to see a law that better acknowledges a hacker’s intent. Gellis calls for a “carefully nuanced law that fully understands the problem,” with better capabilities to avoid collateral damage. But as she says, it’s easier to criticize a law like this than it is to rewrite it.
In the meantime, we can expect security researchers to be suspicious of the CFAA, warns Crocker. “I spend a good deal of my time in my work talking to people who are worried about the CFAA and are reconsidering specific research they might be doing,” he concludes. The policy is little more than a sticking plaster for a bigger wound.