When Warnock and Ossoff take office, the U.S. Senate will be split evenly between Democrats and Republicans. Vice President-elect Kamala Harris will be able to cast the tiebreaking vote in any 50-50 split, giving Democrats a narrow path to passing legislation. Ahead of the general election last fall, VentureBeat spoke with tech policy experts who closely follow Congress about how Democratic control of the U.S. Senate might change the way AI is regulated and its impact on people’s lives.
For this newsletter, we asked those four experts to share their thoughts following the insurrection at the U.S. Capitol, the first breach of the building in over two centuries. Major tech policy issues facing a Democratic-controlled U.S. Senate include facial recognition regulation, net neutrality, automated discrimination and algorithmic bias, Section 230, broadband infrastructure funding, biometric privacy, and data privacy protections.
Jevan Hutson is a lawyer, privacy advocate, and human-computer interaction researcher who has proposed AI regulation in Washington state. He told VentureBeat this week that he’s concerned the white supremacist coup attempt could lead state and federal lawmakers to double down on surveillance, which he fears will further weaponize the technology and hurt those disproportionately impacted by police violence.
“The harm is still going to continue to fall on marginalized communities. Even if it’s in the service of ‘OK, we’re going to catch the white supremacists who stormed the Capitol,’ you can’t disentangle expanding police power in one instance from the ways in which that power will play out in others,” he said.
Use of facial recognition entered the conversation shortly after the insurrection came to an end, when the Washington Times falsely reported that facial recognition from the company XRVision was used to identify antifa protestors in the crowd at the U.S. Capitol, a claim Rep. Matt Gaetz (R-FL) repeated during proceedings to verify the Electoral College results. The Washington Times story was later corrected to remove that claim. But the FBI is using facial recognition technology to identify people who were involved in the U.S. Capitol breach, NBC News correspondent Garrett Haake reported Thursday.
“This will only function to expand police surveillance power at a time when it desperately needs to be defanged,” Hutson said. “We don’t want to give them additional tools to be engaged in that oppression further.”
Hutson isn’t alone in that opinion. Detroit-based tech justice advocate Tawana Petty said she opposes use of facial recognition for the U.S. Capitol breach investigation. Fight for the Future founder Evan Greer made a similar plea in a Fast Company op-ed Friday titled “You can’t fight fascism by expanding the police state.”
I keep hearing the media lift up facial recognition as a way to find the folks who stormed the Capitol. No, I don’t support the use of facial recognition in this instance either. Black folks will always end up on the losing end. No exceptions to the rule.#BanFacialRecognition
— Tawana Petty (@Combsthepoet) January 7, 2021
Congress members from both sides of the aisle have previously talked about placing limits on law enforcement use of facial recognition tech, but few regulations or standards currently limit predictive policing or police use of facial recognition. In their AI policy book Turning Point, the Brookings Institution’s Darrell M. West and John R. Allen suggest implementing a legal check on facial recognition technology akin to the process required to obtain a search warrant. Allen, a former leader of U.S. forces in Afghanistan, warned in June that Trump’s decision to tear-gas Black Lives Matter (BLM) protestors might have signaled the beginning of the end of American democracy.
Malkia Devich-Cyril is a longtime advocate for equitable digital rights and the founder of Media Justice. Last year, she coauthored the surveillance section of the Vision for Black Lives policy platform created by 50 Black organizations.
“I can say that, more than ever, we urgently need reforms that will hold platform companies like Facebook and Twitter accountable to [their] Black, Latinx, and other users targeted for disproportionate harm. It’s taken entirely too long for these companies to de-platform white supremacist users,” Devich-Cyril said. “On the contrary, they’ve given them aid and comfort, and we’ve seen the result.”
In a similar policy suggestion this week, Center for Humane Technology cofounder Roger McNamee argued that Wednesday’s events highlight the need for social media companies to ditch a business model that encourages the spread of hateful content and misinformation.
Facebook knows that its recommendation algorithms are responsible for a majority of people joining extremist groups on its platform. Google-owned YouTube’s algorithm also has a reputation for radicalizing users and spreading conspiracy theories.
Betsy Cooper leads the Aspen Tech Policy Hub incubator for software and policy to address societal issues. She told VentureBeat she also expects increased pressure on social media companies following the attack on the Capitol.
“Even skeptics now see the traumatic effect that misinformation and online groupthink can have on our democracy, especially that online rhetoric can lead to real-life violence. Social media companies will struggle to defend their decisions to show users radical content, even though such content may be extremely profitable,” she said.
In response to the insurrection, Twitter permanently banned Donald Trump from its platform, and Facebook and Instagram suspended the president’s accounts at least through Inauguration Day. Members of the Alphabet Workers Union (yes, that happened this week too) called on Google to suspend the president’s YouTube account.
On Thursday, Sen. Mark Warner (D-VA), who will chair the Senate Select Committee on Intelligence, said in a statement that he was pleased to see Facebook, Twitter, and YouTube act to address “sustained misuse of their platforms to sow discord and violence,” but he called those actions “both too late and not nearly enough.”
Cooper also said that although Democrats will soon control both houses of Congress, their narrow lead means moderate Democrats will have a lot of influence shaping tech policy.
Ernesto Falcon is a former Hill staffer and senior legislative counsel for the Electronic Frontier Foundation. He believes that with Georgia elections settled, we will see a change of committee chairs and more antitrust action in the Senate than before.
“I do see a lot of alignment between the work that David Cicilline (D-RI) has done on the House side and many of the Senate Democrats who are now taking the gavel in the Judiciary Committee,” he said.
He expects Congress to show continued interest in oversight and investigations into misinformation. But when it comes to Section 230, which social media platforms currently rely on for liability protection, he believes a lack of consensus will make immediate reform unlikely.
“I don’t think the Congress is unified enough in a logical, thoughtful way to approach what to do with 230 in terms of any sort of legislative changes,” he said, adding that congressional interest in regulation could decline if social media platforms take significant steps to self-regulate.
Finally, Falcon expects Congress to move soon on significant broadband access funding to help end the digital divide. The HEROES Act passed the House last year with support for billions of dollars in financing and grant programs for broadband infrastructure, but it was kept from a vote by Senate Majority Leader Mitch McConnell (R-KY). Falcon expects activity in Congress in the coming months to focus largely on problems wrought or highlighted by the pandemic, including broadband access.
It’s true that social media platforms with an economic incentive to use algorithms to spread hate and conspiracy theories bear some of the blame for recent events, as does the president, with his well-established history of using racist dog whistles for political gain. But these are only accelerants of toxic conditions that were already here. On display in Washington was naked white supremacy, a social hierarchy older than the United States that must be dismantled for the good of us all.
In this newsletter, we focus on next steps for tech policy because AI and tech are woven into a range of structural issues important to people’s lives, from facial recognition misidentification leading to false arrests of Black men to the proliferation of remote proctoring technology that can place people from marginalized backgrounds at a disadvantage.
AI and tech policy issues, and the way legislators approach them, will have profound implications for the United States and could prove critical to the future of democracy in the U.S. and abroad. As the late Congress member John Lewis, whose commemorative display was vandalized Wednesday, said in an op-ed about redeeming the soul of America: “Democracy is not a state. It is an act.”
While our tech policy experts expressed grave concern about some issues, they are not without hope. Progressive change is often an uphill battle for marginalized communities, Devich-Cyril said, but a Democratic Senate provides “fuel to sustain the journey.”
For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and AI editor Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark The Machine.
Thanks for reading,
Senior AI Staff Writer