Senators Introduce Bill To Limit Facial Recognition Technology—but Does It Go Far Enough?

A pair of senators are teaming up across the aisle to put limits on how federal law enforcement agencies use facial recognition tools and when they must seek a warrant.

This week Sens. Mike Lee (R–Utah) and Chris Coons (D–Md.) introduced the Facial Recognition Technology Warrant Act. If it becomes law, it would require federal officials to get a warrant if they’re going to use facial recognition technology to attempt to track a specific person’s public movements for more than 72 hours.

The bill does not prohibit the use of facial recognition technology to identify people or even to monitor events in real time. Indeed, it says the authorities can use facial recognition to identify people, even without a warrant, as long as “no subsequent attempt is made to track that individual’s movement in real time or through the use of historical records after the individual has been identified.”

In other words, the Facial Recognition Technology Warrant Act doesn’t actually require warrants for the application of the technology, except for long-term surveillance of specific individuals.

Nevertheless, Coons and Lee put out a statement—joined by Fred Humphries, corporate vice president of U.S. government affairs at Microsoft—to praise what the bill would accomplish if passed. “Facial recognition technology can be a powerful tool for law enforcement officials,” said Lee. “But it’s [sic] very power also makes it ripe for abuse. That is why American citizens deserve protection from facial recognition abuse. This bill accomplishes that by requiring federal law enforcement agencies to obtain a warrant before conducting ongoing surveillance of a target.”

Americans for Prosperity also declared its support for the bill, which it sees as more balanced than a full ban on government use of facial recognition tools. The group prefers regulations that keep authorities from abusing the tech. “We’re standing behind this bill,” said Senior Policy Analyst Billy Easley, “because we believe in the appropriate application of facial recognition technology and ensuring it is used for good rather than the mistreatment of Americans.”

Other privacy activists are much less impressed. The bill doesn’t stop the feds from accessing or using the hundreds of millions of face pictures they’ve already collected from driver’s licenses and passports. In fact, it specifically gives such use a thumbs-up. At CNET, representatives of the American Civil Liberties Union and Fight for the Future express their misgivings:

“It has gaping loopholes that authorize the use of facial recognition for all kinds of abusive purposes without proper judicial oversight,” said Evan Greer, Fight for the Future’s deputy director. “It’s good to see that Congress wants to address this issue, but this bill falls utterly short.”

Read the bill for yourself here.

 

from Latest – Reason.com https://ift.tt/33UoOiA
via IFTTT

Justin Amash to Trump: Let Bolton, Giuliani, and Mulvaney Testify

House Democrats have upped the ante in the impeachment inquiry. President Donald Trump’s efforts to make security assistance to Ukraine contingent on politically useful probes, they suggest, are a violation of criminal bribery law.

“The bribe is to grant or withhold military assistance in return for a public statement of a fake investigation into the elections,” said House Speaker Rep. Nancy Pelosi (D–Calif.) at a press conference on Thursday. “That’s bribery.”

Weighing in on Neil Cavuto’s Fox News program, Judge Andrew Napolitano explained that “it wouldn’t matter if it was Joe Biden or Joe Blow” who was at the center of the investigations sought by Trump. It’s also inconsequential, he said, “whether the favor comes or not.” (Republicans have argued that, since the aid was released before the country conducted any investigations, there was no quo in the quid pro quo. Others counter that this only happened because Congress started getting suspicious.)

“I think that the argument that asking for a favor in return for doing a legal obligation—releasing the [security] funds—is pretty clearly a violation of criminal bribery laws,” Napolitano told Cavuto. “Republicans may not want to acknowledge that, which is why they’d rather undermine the witnesses than address the merits.”

Napolitano is referring to the first day of impeachment inquiry testimony, when the two witnesses—William B. Taylor, the chargé d’affaires in Ukraine, and George Kent, the deputy assistant secretary of state for European and Eurasian affairs—railed against Trump’s attempts to push Ukraine into investigating a political rival. In response, the minority party characterized both men as likely to have misrepresented or misunderstood interactions they had with those in Trump’s circle.

According to Rep. Justin Amash (I–Mich.), the libertarian-minded congressman who left the Republican Party in July, there is an easy way to gain clarity.

“This is simple. Keep it simple,” he tweeted on Wednesday. “The White House released security assistance to Ukraine only after Congress started asking questions. Why? Considering that Bolton, Giuliani, Mulvaney, and others may have pertinent first-hand testimony, why won’t President Trump let them testify?”

from Latest – Reason.com https://ift.tt/2qgxJMP
via IFTTT

California School Shooting Leads to Renewed Demands for Assault Weapons Ban

On Thursday, two students were killed and another three injured in a shooting at Saugus High School in Santa Clarita, California. The gunman has yet to be identified, but police say he was another student at the school and that he has been hospitalized after shooting himself in the head.

The shooting is shocking and tragic. The response from politicians is depressingly predictable: They claim that such shootings are common, they declare that new firearms laws are needed to prevent them from happening, and they don’t spend much time considering whether the laws they’re proposing would actually have prevented the crime. Former President Bill Clinton and current presidential candidate Kamala Harris both went on CNN to call for a ban on “assault weapons.” But the shooter used a .45 semi-automatic pistol, a weapon unlikely to be covered by even the broadest assault weapons ban.

A Los Angeles Times editorial noted that the killer is 16 and then declared that “it’s not a leap to say that no 16-year-old should have ready access to a firearm outside the immediate supervision of an adult.” The fact that the minimum age to purchase a handgun in California is 21 didn’t warrant a mention.

Others said that “common sense” gun control legislation shouldn’t be a contentious political issue and ripped into Republicans, the National Rifle Association (NRA), and Senate Majority Leader Mitch McConnell (R–Ky.) for making it one.

Others noted that some schools have reacted to mass shootings by performing traumatizing shooter drills—and cited that bad policy to justify yet more bad policies. “Every politician paid to defend the status quo by the gun lobby needs to answer whether they are comfortable with live shooter drills becoming routine, students running terrified from their classrooms, and entire communities being locked down,” former Rep. Gabrielle Giffords told The New York Times.

It needn’t be repeated that school shootings are a terrible and tragic thing. It does need to be repeated that these are also rare, and that the long-term trend has been for them to happen less often, not more often.

Rare or not, we obviously should do what we can to stop such shootings from happening. But you can’t do that without being aware of the nature of the problem, the trade-offs of the solutions, and—above all—whether the policies you’re proposing would even have stopped the crime being discussed.


FREE MARKETS

Uber is fighting for its right to keep deploying dockless for-hire electric scooters in Los Angeles, now that city officials have issued a temporary suspension of the company’s permit.

The Wall Street Journal reports that Uber has requested a hearing with L.A.’s transit regulators to discuss the suspension of its permit. The move will allow the company to keep operating while it waits for that hearing.

The dispute comes down to the data. The Los Angeles Department of Transportation (LADOT) requires all dockless scooter companies to share real-time trip data with the city, something Uber has said is a massive violation of its users’ privacy.

“By demanding real-time information about people’s movements, LADOT is an outlier among hundreds of cities around the world—and independent privacy experts and advocates raised the alarm more than a year ago that LADOT’s misguided technique poses serious risks to consumer privacy,” an Uber spokesperson told the Journal.

The company has threatened legal action against LADOT if it continues with its demands for real-time data—demands that are likely illegal under state law.


FREE MINDS

Brooke Nelson, a graduate of Northern State University in South Dakota, told the local paper that she opposed efforts to make a young adult novel by Sarah Dessen the required Common Read book for incoming freshmen.

“She’s fine for teen girls,” Nelson told Aberdeen News. “But definitely not up to the level of Common Read. So I became involved simply so I could stop them from ever choosing Sarah Dessen.”

This has since blown up into a major controversy, with Dessen and other popular authors ripping into Nelson on Twitter. By suggesting that college students shouldn’t be assigned young adult fiction, Nelson was allegedly disparaging already-marginalized teen girls.

Washington Post writer Elizabeth Bruenig has a good thread on what this says about the state of woke culture.


QUICK HITS

  • A Texas state lawmaker has resigned after police found a sealed envelope, bearing official letterhead, that was full of cocaine.
  • A police officer in Arizona was filmed tackling an armless, legless 15-year-old.
  • The Economist has an interesting article on how people’s music preferences correspond to how they vote.
  • Squad member Rep. Ayanna Pressley (D–Mass.) has introduced an ambitious criminal justice reform bill.

from Latest – Reason.com https://ift.tt/2NNzNVv
via IFTTT

Andrew Yang Proposes Making Social Media Algorithms Subject to Federal Approval

Entrepreneur Andrew Yang has run a tech-centered campaign for the Democratic presidential nomination, positioning his Universal Basic Income proposal as a solution to rapid technological change and increasing automation. On Thursday, he released a broad plan to constrain the power tech companies supposedly wield over the American economy and society at large.

“Digital giants such as Facebook, Amazon, Google, and Apple have scale and power that renders them more quasi-sovereign states than conventional companies,” the plan reads. “They’re making decisions on rights that government usually makes, like speech and safety.”

Yang has now joined the growing cacophony of Democrats and Republicans who wish to amend Section 230 of the Communications Decency Act; the landmark legislation protects social media companies from facing certain liabilities for third-party content posted by users online. As Reason‘s Elizabeth Nolan Brown writes, it’s essentially “the Internet’s First Amendment.”

The algorithms developed by tech companies are the root of the problem, Yang says, as they “push negative, polarizing, and false content to maximize engagement.”

That’s true, to an extent. Just like with any company or industry, social media firms are incentivized to keep consumers hooked as long as possible. But it’s also true that social media does more to boost already popular content than it does to amplify content nobody likes or wants to engage with. And in an age of polarization, it appears that negative content can be quite popular.

To counter the proliferation of content he does not like, Yang would require tech companies to work alongside the federal government in order to “create algorithms that minimize the spread of mis/disinformation,” as well as “information that’s specifically designed to polarize or incite individuals.” Leaving aside the constitutional question, who in government gets to make these decisions? And what would prevent future administrations from using Yang’s censorious architecture to label and suppress speech they find polarizing merely because they disagree with it politically?

Yang’s push to amend 230 is similarly misguided, as he seems to think that removing liability protections would somehow eliminate only bad online content. We should “amend the Communications Decency Act to reflect the reality of the 21st century,” he writes, which tech giants are using “to act as publishers without any of the responsibility.”

Yet social media sites are already working to police content they deem harmful—something that should be clear in the many Republican complaints of overzealous and biased content removal efforts. Section 230 expressly allows those tech companies to scrub “objectionable” posts “in good faith,” allowing them to self-regulate.

It goes without saying that social media companies haven’t done a perfect job with screening content, but their failure says more about the task than their effort. User-uploaded content is essentially an infinite stream. The algorithms that tech companies use to weed out content that violates their terms of service regularly fail. Human screeners also fail. Even if Facebook or Twitter or YouTube could create an algorithm that only deleted the content those companies intended for it to delete, they would still come under fire for what content they find acceptable and what content they don’t. Dismantling Section 230 would probably discourage efforts to fine-tune the content vetting process and instead lead to broad, inflexible content restrictions.

Or, it could lead to platforms refusing to make any decisions about what they allow users to post.

“Social media services moderate content to reduce the presence of hate speech, scams, and spam,” Carl Szabo, Vice President and General Counsel at the trade organization NetChoice, said in a statement. “Yang’s proposal to amend Section 230 would likely increase the amount of hate speech and terrorist content online.”

It’s possible that Yang misunderstands the very core of the law. “We must address once and for all the publisher vs. platform grey area that tech companies have lived in for years,” he writes. But that dichotomy is a fiction.

“Yang incorrectly claims a ‘publisher vs. platform grey area.’ Section 230 of the Communications Decency Act does not categorize online services,” Szabo says. “Section 230 enables services that host user-created content to remove content without assuming liability.”

Where the distinction came from is somewhat of a mystery, as that language is absent from the law. Section 230 protects sites from certain civil and criminal liabilities if those companies are not explicitly editing the content; content removal does not qualify as such. A newspaper, for instance, can be held accountable for libelous statements that a reporter and editor publish, but its comment section is exempt from such liabilities. That’s because it isn’t editing the content—but it can safely remove content it deems objectionable.

Likewise, Facebook does not become a “publisher” when it designates a piece of content to the trash chute, any more than a coffee house would suddenly become a “publisher” if it decided to remove an offensive flier from its bulletin board.

Yang’s mistaken interpretation of Section 230 is likely a result of the “dis/misinformation” around the law promoted by his fellow presidential candidates and in congressional hearings. There’s something deeply ironic about that.

from Latest – Reason.com https://ift.tt/2CKJde7
via IFTTT

Congratulations to the Lumen Database!

I’ve praised the Lumen Database often before, because it has been indispensable in my research on Internet takedown and deindexing requests. A lot of the frauds and forgeries that I’ve found, I’ve found through Lumen; likewise, many of the legitimate anti-libel injunctions that I mention in my forthcoming Anti-Libel Injunctions article at Penn came via Lumen. So too have many people who have studied DMCA takedown attempts relied heavily on Lumen.

I’m therefore delighted to pass along this report about a huge new grant that Lumen just got:

Illuminating the Flows and Restrictions of Content Online

Arcadia to support expansion of Harvard's Berkman Klein Center Lumen Database

Lumen, a unique resource collecting and studying millions of removal requests for online content, is pleased to announce that it has received a $1.5 million grant from Arcadia, a charitable fund of Lisbet Rausing and Peter Baldwin, to expand and improve its database and research efforts.

From well-publicized takedowns from foreign governments, political campaigns and celebrities to more obscure requests from private entities and individuals, modern online platforms and search engines must regularly address third parties' efforts to remove content and links. Lumen provides a way for the public and its representatives – including academic researchers, journalists and other stakeholders – to understand trends in demands for content removal and their outcomes in ways that balance public disclosure and privacy rights and serve the greater public interest.

“Lumen has seen tremendous growth and interest in the database over the past few years, receiving over two million new notices in the last year alone,” says Adam Holland, the Project Manager for Lumen at the Berkman Klein Center for Internet & Society at Harvard. “And in the same time period, more and more exceptional research relying on Lumen’s data, and with substantial real-world impact, has been published, with more to come soon.”

Conceived and developed in 2002 by Wendy Seltzer, one of the inaugural Berkman Center Fellows, Lumen's efforts initially focused on removal requests submitted under the United States' Digital Millennium Copyright Act. As the Internet and its usage have grown and evolved, so has Lumen, and its database now includes complaints of all varieties, including trademark, defamation, and private information, as well as domestic and international court orders. Over the course of the next three years, Lumen will increase the number of institutions and platforms that submit removal requests; refine the project's online presence and underlying infrastructure to make it easier for researchers to use; conduct and facilitate further research on its data; and host a series of multi-stakeholder convenings to help better understand the details of the removal request ecosystem and to develop a set of best practices regarding those requests and transparency around them.

“Arcadia’s generous grant represents a quantum leap for Lumen and opens up a wide array of new possibilities for the project,” says Lumen’s principal investigator Christopher Bavitz, WilmerHale Clinical Professor of Law at Harvard Law School and a faculty co-director of the Berkman Klein Center. “I’m incredibly excited for the next three years and beyond.”

The Lumen project team works with Internet publishers, platforms, and service providers to shed light on takedown requests they receive that would otherwise go unseen. Currently, Google and Twitter are Lumen’s two largest submitters of notices by volume. As part of the planned expansion supported by Arcadia’s grant, Lumen will extend the reach of its network of partners to provide new transparency to even more takedown notices from more sources. The increase in the volume of data Lumen anticipates receiving in the next few years further underscores the importance of ensuring that the project’s database is equipped to easily accept all incoming takedown notices and that working with the database is intuitive and manageable for researchers, notice submitters and other interested parties.

Lumen’s database has supported critical research over the years by both legal and academic scholars, as well as journalists…. Arcadia’s support will make it possible for Lumen to expand its support of such research, as well as to expand Lumen’s core team in order to conduct more of its own research and writing.

About the Berkman Klein Center

The Berkman Klein Center for Internet & Society at Harvard University is dedicated to exploring, understanding, and shaping the development of the digitally-networked environment. A diverse, interdisciplinary community of scholars, practitioners, technologists, policy experts, and advocates, we seek to tackle the most important challenges of the digital age while keeping a focus on tangible real-world impact in the public interest. Our faculty, fellows, staff and affiliates conduct research, build tools and platforms, educate others, form bridges and facilitate dialogue across and among diverse communities.

About Arcadia Fund

Arcadia is a charitable fund of Lisbet Rausing and Peter Baldwin. It supports charities and scholarly institutions that preserve cultural heritage and the environment. Arcadia also supports projects that promote open access and all of its awards are granted on the condition that any materials produced are made available for free online. Since 2002, Arcadia has awarded more than $663 million to projects around the world. More information at https://www.arcadiafund.org.uk/.

from Latest – Reason.com https://ift.tt/2XlNUom
via IFTTT