Generative AI is not magic. It is the product of millions of invisible creators—poets, coders, photographers, musicians—whose work has been harvested, scraped, and absorbed into AI models without consent or compensation. These models feed on the commons, yet give nothing back. While AI firms reap billions, the very cultural ecosystems they exploit—local journalism, independent art, experimental film—are collapsing under the weight of neglect and extraction.

This is not innovation. It is expropriation.

As Mariana Mazzucato and Fausto Gernone argue, the solution is not to privatize knowledge further with stricter IP enforcement, which would only entrench digital feudalism. Instead, we must recognise cultural content as a public good—a commons we all draw from and must collectively sustain. Just as we tax for roads and hospitals, we must now tax for the infrastructure of human creativity.

A levy on AI firms' gross revenues—not profits—can finance a new cultural compact: independent funds that directly support creators, democratize access to knowledge, and preserve the richness of our shared intellectual life. This is not charity. It is economic justice. The art multiplier is real: each human creation now yields both cultural value and algorithmic fuel.

Without structural reform, the future of culture will be cheapened, homogenised, and owned by a few tech monopolies. But with bold public policy, we can turn AI into a vehicle for public flourishing, not just private gain.

It’s time to reprogram the system: AI must fund the commons, not consume them.

Read the full essay here: https://www.project-syndicate.org/onpoint/how-ai-profits-can-help-fund-cultural-production-by-mariana-mazzucato-and-fausto-gernone-2025-07

Note: I often use social media posts as a way to “step into” a particular position in a discussion—to test how it looks and unfolds—much like Otto Laske’s view of dialogue as a tool for personal perspective-taking. As some commenters rightly noted, this time my position can appear overly simplistic, as it does not fully articulate my own call for deeper ideological reflection. For a more nuanced exploration, I invite you to read the thoughtful exchange with Neil Turkewitz (and his parallel post and essay), which helps to deepen and expand the conversation. Thanks, Neil, for making me revise the proposals further!

#AIEthics #CreativeCommons #PlatformJustice #CulturalPolicy #PublicValue #MarianaMazzucato #GenerativeAI #FutureOfWork #DigitalJustice #InnovationForGood

Follow-up conversation:

I want to share here Neil Turkewitz's well-articulated and well-substantiated challenge to my position (or rather Mariana's). Neil writes:

I agree with the motivation here, but believe that the proposed “solution” would be a mistake. Some time ago I wrote an essay analyzing a proposal for introducing extended collective licensing as a response to the unauthorized use of creative works to train AI. Much of this is directly relevant to the proposals contained in the instant paper.

“Opt-outs and levies are, in my view, exactly the wrong response to the challenge of AI. Given the centrality of AI in our current and foreseeable environments, use of creative works to train AI is entirely consumptive. Indeed, it remains unclear whether use of AI will wholly erase the value of the underlying works. As such, our focus must be on ensuring the effective exercise of exclusive rights to authorize or prohibit use of works. That is the path.”

Read further: https://lnkd.in/eSG_bvwZ

Neil's post: https://www.linkedin.com/posts/activity-7356402999184793601-8LVh?utm_source=share&utm_medium=member_desktop&rcm=ACoAAABm1WMBiwxFaUc1X66gje88odJOEyNAskc

A: Neil, thanks for this careful and principled intervention. Your defence of creators’ rights against frictionless AI extraction is both morally coherent and jurisprudentially rigorous. If I understand your argument correctly, it rests on a liberal-deontological theory of justice: the inviolability of creative autonomy, grounded in consent, procedural fairness, and legal sovereignty. In this view, to treat authorial consent as a negotiable input into AI systems subordinates personhood to computation, violating core liberal tenets of human dignity and ownership of work. Accordingly, exclusive rights must be enforced ex ante, not mitigated retroactively through levies or opt-outs. Policy mechanisms that displace transactional licensing with structural redistribution—such as collective levies—are not just unworkable but impermissible, as they obscure violation through compensation and impose unacceptable burdens on individual creators.

You rightly reject the DMCA’s notice-and-takedown regime as structurally asymmetric, privileging intermediaries while failing to ensure meaningful redress. But one might also ask: is not Berne—designed for discrete, attributable infringements—misaligned with the opacity and scale of modern AI systems?

Here I diverge from the framing of AI training as a purely consumptive act akin to depletion. This analogy falters in two respects. First, not all AI models substitute for the works they are trained on. Second, the models do not retain works in extenso, but encode statistical patterns across massive datasets. Treating this as "exhaustion" obscures the multi-modal reuse and reinterpretation inherent to the cultural commons. Moreover, the framework presumes a Romantic-author model—the author as sovereign, proprietary origin. But culture is accumulative, communal, and co-produced. To insist that all works require individualised consent risks re-enclosing vast domains of the commons, stifling the very creativity we seek to protect.
Also critically, the exclusive-rights regime collapses under conditions of scale and opacity: models ingest trillions of tokens; attribution is legally untraceable; creators lack bargaining power; and transaction costs are prohibitive. While you rightly critique opt-out schemes for post hoc legitimation, the exclusive rights model offers no scalable institutional remedy for ambient cultural labour powering AI. Enforcement becomes a luxury of platform-backed litigants, not a viable pathway for systemic justice.

This points to the deeper fault line: you and Mariana represent divergent political ideologies. Your model is rooted in liberal-legal idealism: AI is a legal disruptor that must be governed through enforceable rights and market mediation. Mazzucato adopts a realist-institutionalist frame: AI is a systemic force that violates a broader ethical order—reciprocity, common goods, democratic culture—and requires mission-oriented public governance. Where you would govern uses (through consent), she would govern systems (through redistributive architecture). The former is libertarian-sovereign, the latter communitarian-structural.

I would argue for a dialectical mixed regime: preserve ex ante licensing where feasible (e.g., high-value IP), introduce sectoral levies where attribution collapses, and build collective institutions—unions, registries, bargaining bodies—to enable creators to act as rights-bearing collectives. This implies Rawlsian layering: uphold individual rights where enforceable, and deliver distributive justice where markets fail. Perhaps the real challenge is not consent or compensation—but designing a governance model where both become meaningful again, in a world where AI demands we think beyond liberal imagination.

R: Many thanks once again. You may find this examination of the international legal system useful and/or interesting.

“My modest proposal? How about we allow creators to determine how, or whether, their works are used in establishing the conditions of a digital world in which they are the suppliers of the raw materials. In what universe do we believe it’s fair to exclude them from the fruits of their labor? To my mind, it’s clear — there is no such universe, or at least I hope there isn’t. Safeguarding consent is the right thing to do, both morally and legally.”

https://medium.com/digital-diplomacy/searching-for-global-copyright-laws-in-all-the-wrong-places-an-examination-of-the-legality-of-cec358492285

