Can the California Consumer Privacy Act Curb Big Tech?

Scholar argues that a landmark data privacy law obscures the problem it seeks to solve.

By the time you finish reading this sentence, Americans will have generated enough data to fill over four million file cabinets.

Concerned with the volume and availability of personal data, activists in California in recent years have promoted a ballot initiative that eventually became the California Consumer Privacy Act (CCPA), a landmark digital privacy law that took effect earlier this year.

In a recent paper published in the Columbia Law Review, David Alpert compares a key component of the CCPA to the Freedom of Information Act (FOIA), warning that some of FOIA’s weaknesses are likely to plague the California law. To repair the “fundamental mismatch” between the CCPA and the problem it seeks to solve, Alpert suggests imposing a tailored tax regime on the tech industry.

The CCPA grants consumers new privacy rights, including a right to access the data that companies collect on them, a right to have their data deleted, and a right to opt out of the sale of their data. In his analysis, Alpert focuses on the right to access, which allows individual Californians to request their data from a business and requires companies to respond to these requests.

The CCPA’s request-and-respond requirement is becoming a hallmark of data privacy regulations. The European Union’s General Data Protection Regulation, for example, includes a request-and-respond provision. In the United States, many proposed state and federal bills also feature the request-and-respond model.

This model originated over 50 years ago with FOIA, the “crown jewel” of a wave of legislation born of the cultural and political shifts of the 1960s. The architects of FOIA imagined that its primary beneficiaries would be journalists, who would use it to gain access to government information, then disseminate that information to create a public that is willing and able to hold its government accountable.

Despite its idealistic conception, scholars contend that FOIA is “deficient in significant respects.” Many government agencies have departments tasked with responding to FOIA requests, but these offices are often underfunded or understaffed, leaving them unable to respond to requests promptly. In addition, the act contains expansive exemptions, so that even when requests are fulfilled, the government’s production is frequently incomplete.

For many agencies, the bulk of FOIA requests come from individuals seeking to secure benefits, such as Social Security payments, or other documents of personal interest. These requests account for the delays and incomplete responses that have made FOIA particularly ill-suited to journalism. In fact, according to a recent analysis of requests sent to 85 agencies over a one-year period, fewer than 8 percent came from media organizations.

As originally conceived, FOIA was a tool to create greater transparency in government so that the public could effectively participate in the democratic process. Similarly, the activists who first proposed the CCPA reportedly viewed increased transparency as a first step toward reining in the personal data economy.

Alpert argues that this outcome is unlikely. The CCPA explicitly contemplates individual requests, and as FOIA’s history shows, a pool of requesters pursuing their own interests is more likely to produce delayed responses than regulatory intervention. Moreover, many people are unequipped to sort through data dumps and extract the type of information that might encourage collective action, Alpert claims.

The CCPA’s data-access provision tries to address digital privacy problems on an individual basis, but it does little to discourage people from trading their data to join a social media platform or download a free game. It also obscures the harm posed by data aggregation.

When examined collectively, the data people shed may expose more about their lives than they know or intend. For example, cell phones track their owners’ locations in real time, and people’s social media accounts reveal who their friends are. Used together, this information offers clues about their friends’ whereabouts and social habits.

Aside from privacy concerns, mass data collection could exacerbate systemic inequality. When companies aggregate data from an entire community or demographic group, they can create algorithms that determine a person’s creditworthiness based on their social networks or that identify low-income consumers to target with predatory marketing campaigns.

As an alternative to the request-and-respond model, Alpert considers an affirmative disclosure regime that would require entities to publish the type of data they hold about a particular consumer, why and when they collected the data, and with whom they shared it. He argues, however, that an affirmative disclosure regime would be subject to the same pitfalls that plague the request-and-respond approach: Requesters would not know how to parse their data or what to do with all of their information. In addition, affirmative disclosure would risk treating transparency as an “end in itself.”

Instead, Alpert suggests that imposing a tax “at the point of data collection” may be the most “fruitful solution.” Such a tax would internalize the negative consequences of data mining that society otherwise bears. He reasons that if companies had to pay to continue mining, bundling, and reselling consumer data, they would more carefully consider how, when, and why they collect it. Alternatively, if the tax were passed on to consumers, companies would need to convince them that data collection is in their best interest or risk losing them to competitors that do not collect data.

Alpert concludes that over time, this tax regime would decrease data mining and encourage more thoughtful collection practices, as the CCPA’s backers imagined.