
Artificial intelligence and chat bots are the opening that American society needs to close the information policy gaps in its law on privacy, security, behavioral advertising and even financial services.

Why is this the case? Because it is rare for the introduction of a single product or service to garner so much attention. The last major industry storm was the iPhone. Some information policy questions did arise around the iPhone, mostly concerning the San Bernardino murders and government access to the shooter's device, but the resolution of that standoff with law enforcement and Apple's apparently careful handling of user data left larger issues of both government and consumer surveillance unresolved.

AI and chat bots bring us back to those crucial questions. First, they are about information itself, the bull’s-eye of the gaps in our internet ecosystem. Second, the interaction with the user is what the commotion is all about. As major tech companies figure out how to monetize AI and chat bots, you can bet the answer will have something to do with user interaction and the kind of behavioral information that Big Tech has already so profitably exploited. And as banking and financial services wobble, don’t we need more trustworthy information to inform government decisions?

Third, U.S. law provides few footholds from which to address these concerns. Privacy law has proven unable either to rebalance civil liberties and national security after the USA PATRIOT and USA FREEDOM Acts reshaped government surveillance, or to identify clearly the harms to consumers of “free” services such as Google Search or Facebook/Meta. Congress has not updated key technical provisions of the 1986 Electronic Communications Privacy Act, the federal “wiretapping” law. Promulgated seven years before the internet even opened to the public, the act’s treatment of the technological differences between telephone and internet services remains the proverbial hole through which law enforcement continues to drive expansive monitoring.

Under the direction of FTC chair Lina Khan and Jonathan Kanter, head of the Justice Department’s antitrust division, the Biden administration is advancing new antitrust legal theories against internet companies, but even the strongest of those cases may take years to resolve. Some market corrections might be expected from, for example, the government’s recent action against Google over its dominance in digital advertising. Even if the government succeeds, that win would favor investors and new players in the market, not necessarily consumers directly.

Ironically, large tech companies have pitted consumer privacy against antitrust action, alleging that privacy will be the price consumers pay if those companies are forced to divest and hand their user information to new, untested players. It is a nice pivot away from giving society more transparency about the technology behind the extractive processes that ultimately line equity investors’ pockets, or from opening up the algorithms that gird their AI systems.

American society needs information policy. We need laws that address the obvious pitfalls of an unregulated system that ranges from Uber—which, if you didn’t know, calls itself an information service rather than a transportation company—to the digitized trading among investment and equity brokers that can tank a bank and leave taxpayers with the bill. And what about those less tangible harms that are hard to name but surely experienced? How does it make you feel that Google knows more about you than you know about yourself? That without very intentional, time-consuming precautions, almost every time you use the internet you are being surveilled? That you are not Facebook’s customer but its commodity, and, by golly, you did it to yourself by creating a profile and posting?

In crafting an information policy, let’s begin with a comprehensive perspective on real people, not one that reduces us to data points such as name and Social Security number. We can apply basic fair information practices such as informed consent and clear, understandable notice of the information that Big Tech companies have about us and what those companies do with it. Robust security practices to end the endless stream of data breaches should be standardized. Importantly, we need to focus less on specific categories of data and, as legal scholar Daniel Solove advises us, more on use, harms and risk—on a broad scale. Then let’s close the biggest government loophole of all: the absence of laws prohibiting government from buying information about individuals from private companies, an obvious end run around the Fourth Amendment.

I teach a course called Culture, Law and Politics of Information Policy. On the first preliminary exam, I asked students for feedback. One student said the course was interesting but that he could not wait until we got to the information policy itself. I guess I have not taught it well enough: the point of the course is to demonstrate that the United States does not have one. And yet we desperately need one.

AI and chat bots give us that opportunity. If the collective attention now fixed on these sophisticated machine-learning technologies, and on the mountains of information they process, were directed at the underlying policy concerns, it could go a long way toward protecting the humanity and autonomy of both consumers and citizens. A carefully considered information policy would bolster democratic values and processes, which could in turn be deployed to counteract the scourge of mis- and disinformation that currently undercuts trust in our government. Ultimately, an information policy that balances innovation with consumer confidence works toward a more efficient economy, one consistent with our information age but also in keeping with fundamental American values of fairness.
