platform companies themselves are clearly ‘interests’ in this special usage, and so are other businesses that have relationships with them (news media, entertainment, advertising, etc.), as well as organizations that represent conflicting or competing interests: trade unions, advocacy groups, consumer organizations, and non-government organizations (NGOs) generally. Institutions are those organizational arenas that have responsibility for governing and regulating digital platforms for particular outcomes. Through them the interplay of ideas and interests unfolds and collective decision-making occurs – at local, national, regional and supranational levels.
What we see from this angle is a mismatch between the rise of digital platform companies as dominant players in the global economy and de facto gatekeepers of digital interactions on the one hand, and the ideas and institutions that underpin platform regulation on the other. Many of the ideas that inform this space and the institutions established for its governance remain tied to the decentralized world of the open internet on which they were premised. In this world, nation states should not govern the digital realm because no one needs to govern it. That was the ideal of spontaneous ordering promised by the libertarian internet. As a result, we come across increasing numbers of instances where nation states that attempt to regulate competition, content, data, and other aspects of the digital environment find their legitimacy in doing so repeatedly challenged, both by the interested companies themselves, which tend to operate globally rather than nationally, and by civil society organizations. At the heart of debates along this line is the question whether those who interact with digital platforms are best understood as national citizens or as global netizens.
I owe a key conceptual debt to the work of Michel Foucault. This book is not a Foucauldian analysis of the internet and digital platforms. It does, however, pick up on two key insights from Foucault. The first concerns the nature of power. I argue here that platform power exists, but does not operate primarily through the ability of digital companies to make people do things that they would not otherwise do. In that respect, my view here differs from the critique offered in the 2020 documentary The Social Dilemma (Orlowski, 2020), which draws a direct link between behavioural targeting through algorithmic manipulations of user data and the turn to online filter bubbles and political extremism. This could be described as evidence of akrasia – the kind of weakness that makes one act against one’s better judgement.
While the algorithmic manipulation of users through access to online data about them is possible – this is the basis of the Cambridge Analytica scandal – the argument of this book is that a comprehensive treatment of digital platform power needs to focus on the capacity of major platforms to shape the economic, political, and communications environments in which they operate. They can shape digital markets, political processes, and the online public sphere. This capacity may or may not be exercised, but it demonstrably exerts a strong influence on other players in the environment, from media companies to political activists and from politicians and political parties to regulators and governments. When, in February 2021, Facebook withdrew the access of Australian news media sites to its global news feed, as part of a bargaining strategy designed to influence the federal government’s proposed News Media and Digital Platforms Mandatory Bargaining Code, it made explicit forms of power that had long been tacit in the media environment. Similarly, the whole debate as to whether platforms such as Facebook, Twitter, and YouTube would act upon false claims and misinformation emanating from Donald Trump and his supporters drew attention to how much power of this sort these companies hold within their own organizations: power framed not by constitutions, laws, or legislators but by their own terms of service, as interpreted by themselves. This form of power is quite different from that of big corporations in general, as it is sui generis and constitutes a genuine challenge to other kinds of political authority. It is a challenge that, as many commentators have noted, is unprecedented among media industries but has a historical analogue (and to that extent a precedent) in the rise of the giant industrial trusts of the early twentieth century. Interestingly, the populist challenge to the power of big tech has played out particularly strongly in the United States, where it is one of the very few policy issues that can cross the Republican–Democrat partisan divide.
The responses to this concentrated economic, political, and communications power have been many and varied. A recurrent issue in these debates concerns the global nature of digital platforms and whether nation states have the inclination or the capacity to constitute forms of countervailing regulatory power. It is also the case that different national trajectories have shaped the evolution of the internet in different parts of the world, ranging from the Californian ideology of the early Silicon Valley culture (Barbrook and Cameron, 1996) to the authoritarian statism and techno-nationalism that have shaped the Chinese internet. At the same time, leaders such as Emmanuel Macron, rejecting this binary opposition, have called for a ‘third way’ of regulating the internet (Macron, 2018). The regulatory activism of the European Union gives concrete form to such initiatives, in policies such as the General Data Protection Regulation (GDPR) and the proposed Digital Services Act. But this move has caused concerns about the rise of a global ‘splinternet’ (Lemley, 2021), as different national and regional models of internet governance develop institutional path dependence and the relatively weak and fragmented institutions of global internet governance show little capacity to broker a new framework for shared global governance in an era when nation states are gaining ascendancy.
There are strong reasons to believe that the capacity of nation states to regulate global digital platforms has been systematically underestimated; the idea that state regulation is inherently impossible is less an empirical reality than an ideology that serves dominant interests. One of the important consequences of the platformization of the internet is that it has revealed the extent to which content on digital platforms is already moderated, curated, managed, and governed in various ways. This discovery has shifted the focus from whether online content can be regulated to who should regulate it and what forms of accountability and transparency should be set in place for content moderation decisions. Moreover, the demand to use antitrust laws to ‘break up big tech’ (Warren, 2019) can be seen as being as much about promoting competitive markets as about regulating digital capitalism. Indeed, some of the most vocal supporters of antitrust measures are also strong champions of free market capitalism and argue that information monopolies are stifling economic growth and innovation (Stigler Center for the Study of the Economy and the State, 2019). This has prompted critics on the left to argue that antitrust laws do not go far enough in breaking up the architecture of surveillance capitalism and data colonialism (Couldry and Mejias, 2019; Deibert, 2020).
The book concludes with a discussion of the practicalities of platform regulation and of some wider political issues that arise from the turn to a ‘legitimacy’ discourse – that is, one where the stress is on who makes decisions, on what basis, and whether the private and public actors involved can be trusted by the citizenry (Bowers and Zittrain, 2020). There are differences between policies and regulations that aim to enhance competition in digital markets and policies and regulations that aim to address online harms and online content. A series of substantive regulatory questions arises: whether the focus is on illegal or potentially harmful content (and who decides what is ‘potentially harmful’); how well these regulations sit within a revised communications and media policy programme; whether platforms come to resemble publishers in legal terms; to what extent regulations apply primarily to what the European Union now calls ‘very large online platforms’ (VLOPs); and how proportionality of regulatory burden is to be ensured.
I argue in the Conclusion that platform regulation can be seen not as the state imposing its will upon digital netizens, but rather as a series of steps to democratize decision-making about digital platforms and digital futures. There are of course inherent risks: governments can overreach in their attempts to control information for their own ends; alternatively, regulation may end up taking a largely symbolic form – appearing to address problems when in reality it has no ‘teeth’. Like many issues in the policy domain today, the politics of platform regulation does not align neatly with a left–right political split. Conservatives grapple with the division between a pro-market, pro-globalization wing and a more populist and nationalist wing, the latter more likely to attempt to regulate digital platforms, whereas the left is divided into a globally minded cosmopolitan wing, which looks upon the state as a threat to freedom