The regulation of social media platforms has become one of the most pressing and contested global policy issues. From misinformation and hate speech to terrorist propaganda and other harmful material, debates over how to make online platforms safe have taken center stage. Calls for regulation have grown over the years, often surging after violent attacks are found to be linked to content shared online. The tech sector has stepped up its efforts as well, developing new content moderation guidelines and investing in technologies to rapidly detect and remove harmful material from social media sites.

Despite these efforts, a significant number of extremist actors still operate successfully on online platforms, disseminating propaganda, recruiting supporters, and inspiring violence. How can these groups, which face an increasingly disruptive information environment, continue to use the internet effectively to advance their causes? What makes extremist actors resilient to content moderation? And will new regulatory efforts succeed in preventing online harms?

A new book project by Professor Tamar Mitts sheds light on this puzzle by examining how extremist actors strategically adapt to online regulation across platforms. Drawing on rich empirical evidence from various sources, including data on extremist groups’ online networks, archives of banned terrorist propaganda, and data on social media platforms’ enforcement actions, the research identifies several mechanisms that allow extremist actors to remain a threat despite efforts to moderate content, and thus explains why online extremism continues to be a problem for digitally connected societies.