Andrew Burt

First, California passed major privacy legislation in June. Then in late September, the Trump administration published official principles for a single national privacy standard. Not to be left out, House Democrats previewed their own Internet “Bill of Rights” earlier this month.
Sweeping privacy regulations, in short, are likely coming to the United States. That should be welcome news, given the sad, arguably nonexistent state of our modern right to privacy. But there are serious dangers in any new move to regulate data. Such regulations could backfire — for example, by entrenching already dominant technology companies or by failing to help consumers actually control the data we generate (presumably the major goal of any new legislation).
That’s where Brent Ozar comes in.
Ozar runs a small technology consulting company in California that provides training and troubleshooting for a database management system called Microsoft SQL Server. With a team of four people, Ozar’s company is by any measure modest in scope, but it has a small international client base. Or at least it did, until May, when European regulators began enforcing a privacy law called the General Data Protection Regulation (GDPR), which can carry fines of up to 4% of global revenue.
A few months before GDPR enforcement began, Ozar announced that the law had forced his company to, in his words, “stop selling stuff to Europe.” As a consumer, Ozar wrote, he loved the regulation; but as a business owner, he simply couldn’t afford the costs of compliance or the risks of getting it wrong.
And Ozar wasn’t alone. Even large organizations like the Los Angeles Times and the Chicago Tribune, along with over 1,000 other news outlets, simply blocked any user with a European IP address from accessing their sites rather than confront the costs of the GDPR.
So why should this story play a central role in the push to enact new privacy regulations here in the United States?
Because Ozar’s story illustrates how privacy regulations come with huge costs. Privacy laws are, from one perspective, a transaction cost imposed on all our interactions with digital technologies. Sometimes those costs are minimal. But sometimes they are prohibitive.
Privacy regulations, in short, can be dangerous.
So how can we minimize these dangers?
First, as regulators become more serious about enacting new privacy laws in the United States, they will be tempted to write generic, broad-based regulations rather than enshrine specific prescriptions in law. In the fast-moving world of technology, general rules are always easier to write than explicit recommendations, but regulators should resist this temptation wherever possible.
Overly broad regulations that treat all organizations equally can end up encouraging “data monopolies” — where only a few companies can make use of all our data. Some organizations will have the resources to comply with complex, highly ambiguous laws; others (like Ozar’s) will not.
This means that the regulatory burden on data should be tiered, so that the costs of compliance do not fall equally on unequal organizations. California’s Consumer Privacy Act confronts this problem directly by exempting certain categories of business, including many smaller organizations. The costs of compliance for any new regulation must not hand additional advantages to the already dominant tech companies of the world.
Second, and relatedly, a handful of organizations increasingly control much of our data, which poses a huge danger both to our privacy and to technological innovation. Any new privacy regulation must actively incentivize smaller organizations to share or pool data so that they can compete with larger data-driven organizations.
One possible solution is to encourage the use of what are called privacy enhancing technologies, or PETs, such as differential privacy, homomorphic encryption, and federated learning, among others. PETs, long championed by privacy advocates, help balance the tradeoff between the utility of data on the one hand and its privacy and security on the other.
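To make that tradeoff concrete, here is a minimal sketch of one such technique, differential privacy, written in Python. The idea is to add calibrated random noise to an aggregate statistic so that the overall pattern survives while any single person’s contribution is obscured. The dataset, the epsilon value, and the function name below are illustrative assumptions for this sketch, not drawn from any particular law, product, or library API.

```python
# A minimal, illustrative sketch of differential privacy (one kind of PET).
# The data, epsilon, and function name are hypothetical examples.
import numpy as np

def private_count(values, threshold, epsilon=0.5, sensitivity=1.0):
    """Return a noisy count of values above a threshold.

    Laplace noise scaled to sensitivity/epsilon lets the result reveal
    the aggregate pattern while limiting what it discloses about any
    single individual's record.
    """
    true_count = sum(1 for v in values if v > threshold)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Example: report how many customers spent over $100 without exposing
# whether any particular customer is in that group.
spending = [12.5, 240.0, 87.0, 310.0, 56.0, 101.0]
print(private_count(spending, threshold=100.0))
```

Lower values of epsilon add more noise, giving stronger privacy at the cost of accuracy; that dial is exactly the utility-versus-privacy balance PETs are meant to manage.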
Last, user consent — the idea of users actively consenting to the collection of their data at a given point in time — can no longer play a central role in protecting our privacy. This has long been a dominant aspect of major privacy frameworks (think of all the “I Accept” buttons you’ve clicked to enter a website). But in the age of big data and machine learning, we simply cannot know the value of the information we give up at the point of collection.
The entire value of machine learning lies in its ability to detect patterns at scale. At any given time, the cost to our privacy of giving up small amounts of data is minimal; over time, however, that cost can become enormous. The famous case of Target knowing a teenager was pregnant before her family did, based simply on her shopping habits, is one among many such examples.
As a result, we cannot assume that we are ever fully informed about the privacy we’re giving up at any single point in time. Consumers must be able to exercise rights over their data long after it’s been collected, and those rights should include restricting how it’s being used.
Unless our laws can adapt to new digital technologies correctly, calibrating the balance between the burden of compliance and the value of the privacy rights they seek to uphold, we run some very real risks. We can all too easily implement new laws that fail to preserve our privacy while simultaneously hindering the use of new technology.