Notes

Wild Westworld: Section 230 of the CDA and Social Networks’ Use of Machine-Learning Algorithms

October 31, 2017

Abstract

On August 10, 2016, a complaint filed in the Eastern District of New York accused Facebook of aiding the execution of terrorist attacks. The complaint described user-generated posts and groups that promoted and directed the perpetration of terrorist attacks. Under § 230 of the Communications Decency Act, interactive service providers (ISPs), such as Facebook, cannot be held liable for user-generated content where the ISP did not create or develop the content at issue. This complaint stands out, however, because it seeks to hold Facebook liable not only for the content of third parties but also for the effect its personalized machine-learning algorithms—or "services"—have had on the ability of terrorists to execute attacks. In alleging that Facebook's actual services, as well as its publication of content, allow terrorists to execute attacks more effectively, the complaint seeks to negate the applicability of § 230 immunity.

This Note argues that Facebook's services—specifically the personalization of content through machine-learning algorithms—constitute the "development" of content and as such do not qualify for § 230 immunity. This Note analyzes the evolution of § 230 jurisprudence to help inform the development of a revised framework. Guided by congressional and public policy goals, this framework creates brighter lines for technological immunity. It tailors immunity to account for the user data mined by ISPs and the pervasive effect that the use of that data has on users—two issues that courts have yet to confront. This Note concludes that under the revised framework, machine-learning algorithms' organization of content—made effective through the collection of individualized data—makes ISPs codevelopers of content and thus bars them from immunity.
