Platform Accountability: Why the Current Debate Falls Short
Where the Discussion Goes Wrong
The most common framing of platform accountability focuses on individual content decisions — what should or should not be moderated. This is the wrong level of analysis. The important questions concern systemic design choices that shape what content gets produced and promoted in the first place.
Platform defenders often note, correctly, that any specific moderation decision is contestable. But this argument deflects attention from system-level accountability, where the evidence is clearer.
What I Would Do
Mandated transparency about algorithmic decision-making would be a meaningful step: not public exposure of proprietary systems, but audited disclosure to regulators and researchers. This would enable oversight that is currently impossible.
Platform liability for systemic harms, rather than for individual pieces of content, would change incentive structures substantially. Platforms would design safer systems rather than optimize for engagement while using legal doctrines as shields against responsibility.