Is the ACLU’s Lawsuit Against Bad Anti-Hacking Law Ingenious or Terrifying?

The vague language of the federal Computer Fraud and Abuse Act (CFAA) has made it prone to abuse by federal prosecutors.

The law’s ostensible purpose is to fight cybercrimes and hackers. But it is far more expansive, making it a federal crime to violate a website’s “terms of service” as a user or to access a computer or network in an “unauthorized” fashion. Yes, the law is used to fight hackers trying to get into people’s bank accounts to steal their money. But it has also been used to put journalist Matthew Keys in prison for giving a password to a member of Anonymous, who then vandalized the Los Angeles Times website by changing a single headline. The law was also used against activist Aaron Swartz, who was arrested and charged for downloading huge numbers of academic studies at the Massachusetts Institute of Technology with the intent of making them freely available to everybody. The prosecutor used the law as a hammer to try to push Swartz into a plea deal. Instead, he committed suicide. It’s a terrible law that you’ve probably broken without even realizing it.

And now the American Civil Liberties Union (ACLU) is suing to challenge the constitutionality of the law. This is very good news. How they’re tackling it is both interesting and a little bit troubling. Their argument is that the law chills online research and journalistic investigation of online commercial behavior. More specifically, it deters research into whether the online algorithms that put information and advertising in front of people’s eyeballs are influenced by discriminatory attitudes or intent. Are those sponsored ads you’re getting racist or sexist?

The CFAA keeps academics and journalists from independently “auditing” algorithmic behavior, because doing so means creating fake online profiles to see how advertising reacts, and the terms of service of many websites prohibit the use of fake accounts or identities. The same sorts of techniques used to sniff out discriminatory behavior in the “real world,” in areas like job interviews and bank loans (fake applications), are therefore legally off-limits online. People create fake online profiles and identities anyway, of course, but most people are not researchers or journalists who plan to publicly release the results of their investigations and would have to worry about legal retaliation.
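To make the technique concrete, here is a minimal sketch of what such a paired audit might look like in Python. Everything here is hypothetical: fetch_ads() stands in for whatever scripted browser session would actually retrieve the ads shown to a given profile, and the bias it simulates is invented purely for illustration.

import random
from collections import Counter

def fetch_ads(profile):
    """Hypothetical placeholder for the step that actually retrieves the
    ads shown to a profile (e.g., a scripted browser session). Here it
    just simulates a gender-biased ad server for illustration."""
    rate = 0.6 if profile["gender"] == "male" else 0.3
    return ["high-paying-job-ad"] if random.random() < rate else ["generic-ad"]

def paired_audit(trials=1000):
    """Classic paired-audit design: run matched profiles that differ only
    in one protected attribute and tally who sees the ad of interest."""
    counts = Counter()
    for _ in range(trials):
        for gender in ("male", "female"):
            profile = {"gender": gender, "age": 35, "zip": "19104"}
            if "high-paying-job-ad" in fetch_ads(profile):
                counts[gender] += 1
    return counts

if __name__ == "__main__":
    # With the simulated rates above, expect roughly 600 vs. 300.
    print(paired_audit())

The point is the design, not the code: identical profiles, one attribute varied, outcomes compared. It is exactly the step of creating those matched fake profiles that many terms of service, and therefore arguably the CFAA, forbid.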

But ending at least part of this broad law may exchange one type of legal threat for another. A look over the ACLU’s arguments for striking down that part of the CFAA should set off alarms about what the future could bring:

As more and more of our transactions move online, and with much of our internet behavior lacking anonymity, it becomes easier for companies to target ads and services to individuals based on their perceived race, gender, or sexual orientation. Companies employ sophisticated computer algorithms to analyze the massive amounts of data they have about internet users. This use of “big data” enables websites to steer individuals toward different homes or credit offers or jobs—and they may do so based on users’ membership in a group protected by civil rights laws. In one example, a Carnegie Mellon study found that Google ads were being displayed differently based on the perceived gender of the user: Men were more likely to see ads for high-paying jobs than women. In another, preliminary research by the Federal Trade Commission showed the potential for ads for loans and credit cards to be targeted based on proxies for race, such as income and geography.

This steering may be intentional or it may happen unintentionally, for example when machine-learning algorithms evolve in response to flawed data sets reflecting existing disparities in the distribution of homes or jobs. Even the White House has acknowledged that “discrimination may ‘be the inadvertent outcome of the way big data technologies are structured and used.’”

Companies should be checking their own algorithms to ensure they are not discriminating. But that alone is not enough. Private actors may not want to admit to practices that violate civil rights laws, trigger the negative press that can flow from findings of discrimination, or modify what they perceive to be profitable business tools. That’s why robust outside journalism, testing, and research is necessary. For decades, courts and Congress have encouraged audit testing in the offline world—for example, where pairs of individuals of different races attempt to secure housing and jobs and compare outcomes. This kind of audit testing is the best way to determine whether members of protected classes are experiencing discrimination in transactions covered by civil rights laws, and as a result it’s been distinguished from laws prohibiting theft or fraud.
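The “flawed data sets” point is worth making concrete. Here is a self-contained sketch, with invented numbers, of how a model that never sees race can still reproduce a racial gap in loan approvals, simply because zip code correlates with race in the biased historical data it learns from.

import random
from collections import defaultdict

random.seed(0)

def sample_history(n=20000):
    """Invented historical loan data: race is never recorded, but zip
    code is a strong (90%) proxy for it, and past approvals were biased."""
    rows = []
    for _ in range(n):
        race = random.choice(["A", "B"])
        home_zip = "19104" if race == "A" else "19010"
        if random.random() < 0.1:  # the proxy is imperfect
            home_zip = "19010" if home_zip == "19104" else "19104"
        approved = random.random() < (0.7 if race == "A" else 0.4)
        rows.append((home_zip, approved))
    return rows

def train(rows):
    """A trivial race-blind 'model': predict the historical approval
    rate for each zip code."""
    stats = defaultdict(lambda: [0, 0])  # zip -> [approvals, total]
    for home_zip, approved in rows:
        stats[home_zip][0] += approved
        stats[home_zip][1] += 1
    return {z: round(a / t, 2) for z, (a, t) in stats.items()}

# The learned rates diverge sharply by zip code, reproducing most of
# the racial gap without the model ever having seen race.
print(train(sample_history()))  # e.g. {'19104': 0.67, '19010': 0.43}

Nobody in that sketch programmed discrimination in; the model simply inherited it from the data. And because the gap lives inside a private company’s systems, outside auditing of the kind the ACLU describes is the only way a researcher or journalist would ever see it.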

The text of the complaint makes it abundantly clear that one likely outcome—even a desirable outcome—could be civil rights lawsuits under other federal laws like the Fair Housing Act and Title VII of the Civil Rights Act of 1964. This is not just about people trying to avoid being punished under one federal law. This is also potentially about getting evidence in order to use federal discrimination laws to punish private companies over the complex results of computer algorithms.

This is not to say that the ACLU itself plans to go around filing lawsuits willy-nilly. But the ACLU and the people it is representing in this case (one of whom is First Look Media, publisher of The Intercept) would not be the only ones able to mobilize as a result of a friendly court ruling. Consider the lawyers (and their clients) who use the Americans with Disabilities Act to go from business to business looking for reasons to sue over frivolous concerns and eke out settlements. When the ACLU says, “Companies should be checking their own algorithms to ensure they are not discriminating,” there’s now a threat there, even if it’s not coming from the ACLU, isn’t there? Is that something even small businesses will have to pay attention to now? Is this going to be a new type of compliance cost? Could a business get into trouble for, as an example, buying targeted advertising that only reaches people in zip codes with a high proportion of one race over another? Even if there are very good reasons for doing so, will a business still have to worry about defending against a lawsuit over it? Consider how many businesses settle complaints because the cost of fighting becomes too much of a burden.

It’s vexing, because the ACLU’s arguments for striking down that part of the law are compelling. There’s no reason it should be a federal crime to do online the same kind of auditing for discriminatory practices that’s done through mailed applications or in-person interviews. Honestly, it’s hard to justify making the violation of a site’s “terms of service” a federal crime for any reason.

It’s disappointing, though, that ending one sort of abusive federal prosecution may also pry open new avenues for using the courts to harass people. The ACLU invokes redlining—the historical system of discrimination in which banks refused to provide mortgage loans in neighborhoods with high numbers of minorities. Does that really compare to differences in which online ads people are shown?
