Protocols, Not Platforms: A Technological Approach to Free Speech

Changing the Internet's Economy and Digital Infrastructure to Promote Free Speech#

Author: Mike Masnick

Translator: Hoodrh

After about a decade of widespread sentiment supporting the internet and social media as a means to achieve more speech and improve the marketplace of ideas, this view has undergone a significant shift in recent years—now it seems almost no one is happy. Some believe these platforms have become cesspools of trolling, paranoia, and hate. Meanwhile, others argue that these platforms have become overly aggressive in regulating language and are systematically suppressing or censoring certain viewpoints. This doesn't even touch on privacy issues and what these platforms are doing (or not doing) with all the data they collect.

This situation has created a kind of crisis both inside and outside these companies. Despite historically promoting themselves as defenders of free speech, these companies have struggled to navigate their new roles as arbiters of online truth and goodness. Meanwhile, politicians from both major parties have been attacking these companies, albeit for completely different reasons. Some have complained about how these platforms might allow foreign interference in our elections. Others have lamented how they are used to spread misinformation and propaganda. Some accuse these platforms of being too powerful. Others have pointed out inappropriate account and content removals, while some believe that attempts at moderation are discriminatory against certain political viewpoints.

It is evident that these challenges do not have simple solutions, and most of the proposed fixes fail to grapple with the realities of the problem or with the technical and social challenges that may make them unworkable.

Some advocate for stricter regulation of online content, with companies like Facebook, YouTube, and Twitter discussing hiring thousands to form their moderation teams. On the other hand, companies are increasingly investing in more sophisticated technological aids, such as artificial intelligence, to try to identify controversial content early in the process. Others argue that we should change Section 230 of the CDA, which allows platforms to freely decide how they regulate (or do not regulate). Some suggest that moderation should not be allowed at all—at least for platforms of a certain scale—so that they are seen as part of the public square.

As this article will attempt to emphasize, most of these solutions are not only impractical; many would exacerbate the original problems or create equally harmful side effects.

This article proposes a completely different approach—a seemingly counterintuitive method that could actually provide a viable plan for achieving more free speech while minimizing the impacts of trolling, hate speech, and large-scale misinformation efforts. As a bonus, it could also help users of these platforms regain control over their privacy. Most importantly, it could even provide these platforms with entirely new revenue streams.

This approach: build protocols, not platforms.

To be clear, this approach would bring us back to how the internet used to work. The early internet was built on many different protocols: instructions and standards that anyone could use to build a compatible interface. Email used SMTP (Simple Mail Transfer Protocol). Chat was done via IRC (Internet Relay Chat). Usenet served as a distributed discussion system using NNTP (Network News Transfer Protocol). The World Wide Web itself was its own protocol: Hypertext Transfer Protocol, or HTTP.
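To make the idea of "anyone can build a compatible interface" concrete, here is a minimal sketch of sending a message over SMTP using Python's standard library. The server address and credentials are placeholders, not real endpoints; the point is only that the protocol, rather than any one company, defines how clients interoperate.

```python
# Minimal SMTP sketch: any program that speaks the protocol can send mail.
# Host, port, and credentials below are placeholders, not real endpoints.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.org"
msg["Subject"] = "Protocols, not platforms"
msg.set_content("Any client that speaks SMTP can deliver this message.")

with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls()                                   # upgrade to an encrypted channel
    server.login("alice@example.com", "app-password")   # placeholder credentials
    server.send_message(msg)
```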

However, over the past few decades, the internet has not built new protocols but has developed around proprietary, controlled platforms. These can operate in ways similar to early protocols, but they are controlled by a single entity. There are many reasons this has happened. Clearly, a single entity controlling a platform can profit from it. Additionally, having a single entity often means new features, upgrades, bug fixes, etc., can be rolled out faster, increasing user bases.

In fact, some of today's platforms are leveraging existing open protocols but have built walls around them, locking users in rather than just providing an interface. This actually highlights that there is no binary choice between platforms and protocols, but rather a spectrum. However, the argument presented here is that we need to shift more towards a world of open protocols rather than platforms.

Shifting to a world dominated by protocols rather than proprietary platforms would address many of the issues facing the internet today. Instead of relying on a few large platforms to regulate online speech, there could be widespread competition where anyone could design their own interfaces, filters, and other services, allowing the most effective platforms to succeed without resorting to outright censorship of certain voices. It would allow end-users to determine their own tolerance for different types of speech while making it easier for most people to avoid the most problematic speech without silencing anyone completely or allowing the platforms themselves to decide who gets to speak.

In short, it would push power and decision-making to the edges of the network rather than concentrating it in a small group of very powerful companies.

At the same time, it could bring about new, more innovative features, as well as better control for end-users over their own data. Ultimately, it could help introduce a range of new business models that focus not just on monetizing user data.

Historically, the internet has increasingly shifted towards centralized platforms rather than decentralized protocols, partly due to the incentive structures under the old internet. Protocols are hard to monetize. Therefore, it is difficult to keep them updated and deliver new features in compelling ways. Companies often come in and "take over," creating a more centralized platform, adding their own features (and integrating their own business models). They are able to invest more resources into these platforms (and business models), creating a virtuous cycle for the platform (and a certain amount of lock-in for users).

However, this also brings its own difficulties. With the emergence of control comes demands for accountability, including stricter regulation of the content hosted on these platforms. It has also raised concerns about filter bubbles and bias. Furthermore, it has created the dominance of certain internet companies, which (quite reasonably) makes many people uncomfortable.

Returning to a focus on protocols above platforms could address many of these issues. Other recent developments suggest that doing so could overcome many of the shortcomings of early protocol-based systems, potentially creating the best of both worlds: useful internet services driven by competition that are not solely controlled by large companies and are financially sustainable, providing end-users with better control over their data and privacy while offering far fewer opportunities for errors and misinformation to cause serious harm.

Early Problems with Protocols and What Platforms Do Well#

While the early internet was dominated by a series of protocols rather than platforms, the limitations of these early protocols illustrate why platforms came to dominate. There are many different platforms, each with its own set of successes and failures (or shortcomings), but to help illustrate the issues discussed here, we will limit the comparison to Usenet and Reddit.

Conceptually, Usenet and Reddit are quite similar. Both involve a set of forums typically organized around specific topics. On Usenet, these are called newsgroups. On Reddit, they are subreddits. Each newsgroup or subreddit tends to have moderators who have the authority to set different rules. Users can post new threads in each group, leading to replies from others in the group, creating a semblance of discussion.

However, Usenet is an open protocol (technically the Network News Transfer Protocol, or NNTP) that anyone can utilize with various applications. Reddit is a centralized platform completely controlled by a single company.

To access Usenet, you initially need a special newsreader client application (of which there are several), and then you need access to Usenet servers. Many internet service providers initially offered their own services (when I first went online in 1993, I accessed Usenet through my university's news server, along with a Usenet reader provided by the university). As the web became more popular, more organizations tried to provide a web front end for Usenet. In its early days, this space was dominated by Deja News Research Service, which provided the first web interface for Usenet. Later, it added many additional features, including (most helpfully) a comprehensive search engine.

While Deja News experimented with various business models, the service was ultimately shut down, and Google acquired its Usenet archive in 2001, using it as a key part of Google Groups (which still provides a web interface to Usenet newsgroups alongside Google's own email-style mailing lists).

Using Usenet was fairly complex and obscure (especially before web interfaces became widespread). One early joke about Usenet was that every September the service would be flooded with confused "newbies": college freshmen who had just received new accounts and knew little about the popular practices and proper etiquette involved in using the service. Thus, September often became a time when many old-timers found themselves frustratedly "correcting" these newcomers until they conformed to the system's norms.

In that same spirit, the period beginning in September 1993 is remembered by old-school Usenet users as the "Eternal September": the moment when the proprietary platform America Online (AOL) opened its doors to Usenet, unleashing a flood of unruly new users.

Because there were many different Usenet servers, content was not centrally hosted but spread across various servers. This had its pros and cons, including that different servers could handle different content in different ways. Not every Usenet server had to host every group. But it also meant there was no central authority to handle disruptive or malicious activity. However, certain servers could choose to block certain newsgroups, and end-users could use tools like kill files to filter out various unwanted content based on their own chosen criteria.
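As a rough illustration of that last point, a kill file is simply a list of patterns applied on the reader's own machine. The sketch below, with made-up patterns and article data, shows the idea of filtering at the edge rather than at a central server.

```python
# Sketch of a Usenet-style "kill file": filtering happens client-side, against
# criteria the reader chooses. Patterns and articles here are made up.
import re

KILL_PATTERNS = [
    r"^From:.*spammer@example\.com",   # drop everything from a known nuisance
    r"^Subject:.*MAKE MONEY FAST",     # drop a notorious chain-letter subject
]

def keep_article(headers: str) -> bool:
    """Keep an article only if none of the user's kill patterns match its headers."""
    return not any(re.search(p, headers, re.IGNORECASE | re.MULTILINE)
                   for p in KILL_PATTERNS)

articles = [
    "From: friend@example.org\nSubject: Meeting notes",
    "From: spammer@example.com\nSubject: MAKE MONEY FAST",
]
visible = [a for a in articles if keep_article(a)]   # only the first survives
```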

Another major drawback of the original Usenet was that it was not particularly adaptable or flexible, especially regarding larger-scale changes. Because it was a set of decentralized protocols, there was a complex consensus process that required broad agreement among many parties to implement any changes to the protocols. Even small changes often required a lot of work, and even then, they were not always universally accepted. Creating a new newsgroup was a fairly complex process. For certain hierarchies, there was an approval process, but other "alternative" categories were easier to set up (though there was no guarantee that all Usenet servers would carry that board). In contrast, setting up a new subreddit is easy. Reddit has a product and engineering team that can make any changes it wants—but the user base has little say in how those changes happen.

The biggest problem with the old system may have been the lack of a clear business model. As Deja News's demise illustrates, running a Usenet server was never particularly profitable. Over time, the number of "premium" Usenet servers that required payment for access grew, but these servers tended to appear later and were not as large compared to internet platforms like Reddit, and were often seen as focusing on trading in infringing content.

Current Problems with Big Platforms#

Over the past twenty years, the rise of internet platforms (Facebook, Twitter, YouTube, Reddit, etc.) has more or less replaced the protocol-based systems that were previously used. With these platforms, there is a single (often for-profit) company running the service for end-users. These services are often initially funded by venture capital and then supported by advertising (often highly targeted).

These platforms are built on the web and tend to be accessed through traditional internet web browsers or increasingly through mobile device applications. The benefits of building services as platforms are fairly obvious: the owners have ultimate control over the platform, allowing them to better monetize it through some form of advertising (or other ancillary services). However, this does incentivize these platforms to extract more and more data from users to better target them.

This has led to reasonable concerns and pushback from users and regulators, who worry that the platforms are not acting fairly or are not adequately "protecting" the end-user data they have been collecting.

The second major issue facing today's largest platforms is that as they grow larger and more central to everyday life, their operators become more concerned about what content users can publish and about the platforms' own responsibility for it. Operators may moderate or block that content, and they face increasing pressure from users and politicians to moderate more actively. In some cases, legal mandates to remove certain content have begun to chip away at the broad immunity for moderation choices that many platforms have long relied on (such as Section 230 of the Communications Decency Act in the U.S. or the EU's E-Commerce Directive).

As a result, platforms feel reasonably compelled not only to be more proactive but also to testify before various legislative bodies, hire thousands of employees as potential content moderators, and invest heavily in moderation technology. However, even with these regulatory demands and human and technological investments, it remains unclear whether any platform can truly do a "good" job of moderating content at scale.

Part of the problem is that any platform's moderation decisions will leave someone unhappy. Clearly, those whose content is moderated often are dissatisfied with it, but so are others who wish to view or share that content. At the same time, in many cases, the decision not to moderate content can also leave people feeling uneasy. Currently, these platforms face a lot of criticism for their moderation choices, including accusations (most of which are unsubstantiated) that political bias is driving these content moderation choices. As platforms face increasing pressure to take on more responsibility, every choice they make regarding content moderation puts them in a bind. Remove controversial content—and anger those who created or supported it; avoid removing controversial content—and anger those who think it is problematic.

This puts platforms in a no-win situation. They can continue to pour more and more money into the problem and keep talking to the public and politicians, but it is unclear how this ends with enough people "satisfied." On any given day, it is not hard to find people unhappy that Facebook, Twitter, or YouTube failed to remove certain content; when the platforms finally do remove it, those critics are immediately replaced by others who are unhappy that the content was taken down.

This setup leaves all parties involved frustrated, and it is unlikely to get better anytime soon.

Protocols to the Rescue#

In this article, I suggest we return to a world of protocols dominating the internet rather than platforms. There is reason to believe that migrating to a protocol system could address many of the issues associated with platforms today while minimizing the inherent problems of protocols from decades ago.

While there is no silver bullet, protocol-based systems can better protect user privacy and free speech while minimizing the impact of online abuse and enabling new, compelling business models that are better aligned with user interests.

The key to making this work is that the various types of platforms we see today would instead be specific protocols, each with many competing implementations of the interface to that protocol. Competition would come from these implementations. The lower cost of moving from one implementation to another would reduce lock-in, and anyone could create their own interface and still reach all the content and users on the underlying protocol, significantly lowering the barriers to entry for competition. If you can already reach everyone using the "social network protocol" and just provide a different or better interface, you do not need to build an entirely new Facebook.
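A small sketch of that separation, with invented class names and data shapes: two competing "interfaces" read the same underlying protocol, so a user can switch between them without leaving the network.

```python
# One protocol, many interfaces: both readers consume the same message stream,
# so switching interfaces does not mean losing the network. Names are invented.
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Post:
    author: str
    text: str

class ProtocolClient(Protocol):
    def fetch_posts(self) -> list[Post]: ...

class InMemoryClient:
    """Stand-in for a real protocol client, for illustration only."""
    def fetch_posts(self) -> list[Post]:
        return [Post("alice", "hello world"), Post("troll", "bait")]

class MinimalReader:
    """One interface: a stripped-down, chronological reader."""
    def __init__(self, client: ProtocolClient):
        self.client = client
    def render(self) -> list[str]:
        return [f"{p.author}: {p.text}" for p in self.client.fetch_posts()]

class CuratedReader:
    """A competing interface: same protocol, different presentation rules."""
    def __init__(self, client: ProtocolClient, blocked: set[str]):
        self.client, self.blocked = client, blocked
    def render(self) -> list[str]:
        return [f"{p.author}: {p.text}"
                for p in self.client.fetch_posts() if p.author not in self.blocked]

print(MinimalReader(InMemoryClient()).render())
print(CuratedReader(InMemoryClient(), blocked={"troll"}).render())
```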

To some extent, we have already seen such an example in the email space. Built on open standards like SMTP, POP3, and IMAP, email has many different implementations. Email systems popular in the 1980s and 1990s relied on client-server setups, where service providers (whether commercial ISPs, universities, or employers) would only briefly host emails on their servers until they were downloaded to users' own computers via some client software like Microsoft Outlook, Eudora, or Thunderbird. Alternatively, users could access that email through text interfaces (like Pine or Elm).

In the late 1990s, web-based email emerged, first with Rocketmail (which was eventually acquired by Yahoo and became Yahoo Mail) and Hotmail (acquired by Microsoft and later became Outlook.com). Google launched its own product, Gmail, in 2004, which sparked a new wave of innovation as Gmail offered more storage for emails and a significantly faster user interface.

However, due to these open standards, there is great flexibility. Users can use non-Gmail email addresses within the Gmail interface. Or they can use their Gmail account with entirely different clients, such as Microsoft Outlook or Apple Mail. Most importantly, new interfaces can be created on top of Gmail itself, such as using Chrome extensions.
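Because IMAP is likewise an open standard, the same few lines of client code work against Gmail, Outlook.com, or a self-hosted server. A minimal sketch, with placeholder host and credentials:

```python
# Minimal IMAP sketch: the same code works against any standards-compliant server.
# Host and credentials are placeholders (real accounts typically need app passwords or OAuth).
import imaplib

HOST = "imap.example.com"     # could be imap.gmail.com, outlook.office365.com, ...
USER = "alice@example.com"
PASSWORD = "app-password"

with imaplib.IMAP4_SSL(HOST) as mailbox:
    mailbox.login(USER, PASSWORD)
    mailbox.select("INBOX")                        # open the inbox
    status, data = mailbox.search(None, "UNSEEN")  # IDs of unread messages
    unread_ids = data[0].split()
    print(f"{len(unread_ids)} unread messages")
```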

This setup has many benefits for end-users. Even if one platform (like Gmail) becomes more popular, the switching costs are much lower. If users dislike how Gmail handles certain features or are concerned about Google's privacy practices, switching to another platform is much easier, and users do not lose access to all their old contacts or become unable to email others (even those contacts who are still Gmail users).

Note that this flexibility is a strong incentive for Google to treat its Gmail users well; Google is unlikely to take actions that could lead to a rapid exodus. This is different from fully proprietary platforms like Facebook or Twitter, where leaving those platforms means you can no longer communicate with people there in the same way and cannot easily access their content and communications anymore. Using a system like Gmail allows for easy export of contacts and even old emails, and simply starting over with a different service without losing the ability to stay in touch with anyone.

Additionally, it opens up the competitive landscape more. While Gmail is a particularly popular email service, others have been able to build significant email services (like Outlook.com or Yahoo Mail) or create successful startup email services targeting different markets and niches (like Zohomail or Protonmail). It also opens up other services that can be built on top of the existing email ecosystem without having to worry about relying on a single platform that might shut them out. For example, Twitter and Facebook tend to change product directions and cut off third-party applications, but in the email space, there is a thriving service market with companies like Boomerang, SaneBox, and MixMax, each offering additional services that can run on various email platforms.

The end result is more competition between and within email services to make the services better, along with a strong incentive for the major providers to act in the best interests of users, as significantly reduced lock-in allows those users to choose to leave.

Protecting Free Speech While Limiting the Impact of Abuse#

One of the most controversial parts of the discussion around content moderation may be how to handle "abusive" behavior. Almost everyone recognizes that such behavior exists online and can be destructive, but there is no consensus on what it actually includes. The behaviors that raise concerns can be categorized into many different types, from harassment to hate speech, from threats to trolling to obscenity, from doxxing to spam, and so on. But none of these categories has a comprehensive definition, and most are in the eye of the beholder. For example, an attempt by one person to express a strong opinion may be viewed as harassment by the recipient. Neither party may be "wrong" in itself, but leaving it to each platform to adjudicate such matters is an impossible task, especially when dealing with billions of pieces of content daily.

Currently, platforms are the ultimate centralized authority for handling these issues. Many have handled this by writing increasingly complex internal "laws" (whose "rulings" are often opaque to end-users) and then handing enforcement to a large number of employees (often outsourced, and relatively low paid) who have very little time to judge each of the thousands of pieces of content they review.

In such a system, Type I ("false positive") and Type II ("false negative") errors are not only common; they are inevitable. Much of what people think should be removed is retained, while much of what people think should be retained is removed. Different content moderators may view the same content in completely different lights, and it is nearly impossible for moderators to take context into account (partly because most of the context may be unavailable or unclear to them, and partly because the time required to fully investigate each case makes doing so cost-prohibitive). Similarly, no technical solution can adequately consider context or intent: computers cannot recognize things like sarcasm or exaggeration, even at levels that are obvious to any human reader.

However, a protocol-based system would shift most decision-making from the center to the edges of the network. Anyone could create their own set of rules rather than relying on a single centralized platform and all the internal biases and incentives that come with it—including what content they do not want to see and what they want to see promoted. Since most people do not want to manually control all their preferences and levels, this could easily fall to any number of third parties—whether they are competing platforms, nonprofit organizations, or local communities. Those third parties could create any interface based on whatever rules they wanted.

For example, those interested in civil liberties issues might subscribe to moderation filters, or even additional services, published by the ACLU or EFF. Deeply political individuals might choose a filter from their preferred party (though this obviously raises some concerns about deepening "filter bubbles," there is reason to believe the impact would be limited, as we will see).

Entirely new third parties could emerge, focused solely on providing better experiences. This could involve not just content moderation filters but the entire user experience. Imagine a competing interface for Twitter that would be pre-set (and continuously updated) to moderate content from troll accounts and better promote more thoughtful, thought-provoking stories rather than traditional clickbait trending topics. Or the interface could provide better layouts for conversations. Or for news reading.

The key is to ensure that the "rules" are not only shareable but completely transparent and under the control of any end-user. Thus, I might choose to apply publicly available moderation settings for Twitter published by the EFF, within an interface provided by a new nonprofit organization, and adjust my own settings if, say, I wanted more content about the EU. Or if I primarily want to use the web to read news, I might use an interface provided by The New York Times. Or if I want to chat with friends, I could use an interface designed for better communication among small groups of friends.
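One way to picture "shareable, transparent rules" is as plain data that a third party publishes and a user can inspect and edit before applying it in their own client. The rule format and field names below are assumptions for illustration, not any existing standard.

```python
# Sketch of a shareable moderation filter: the rules are just inspectable data
# that any third party can publish and any user can tweak. Format is invented.
import copy
import json

published_filter = json.loads("""
{
  "name": "example-civil-discourse-filter",
  "hide_authors": ["known_troll_account"],
  "downrank_keywords": ["clickbait", "outrage"],
  "boost_keywords": ["longread", "analysis"]
}
""")

# The user keeps a personal copy and adjusts it to taste.
my_filter = copy.deepcopy(published_filter)
my_filter["boost_keywords"].append("eu")

def score(post: dict, rules: dict) -> float:
    """Higher score = shown more prominently; hidden authors are dropped entirely."""
    if post["author"] in rules["hide_authors"]:
        return float("-inf")
    text = post["text"].lower()
    return (sum(kw in text for kw in rules["boost_keywords"])
            - sum(kw in text for kw in rules["downrank_keywords"]))

posts = [
    {"author": "known_troll_account", "text": "pure outrage"},
    {"author": "reporter", "text": "A longread analysis of EU policy"},
]
visible = [p for p in posts if score(p, my_filter) != float("-inf")]
ranked = sorted(visible, key=lambda p: score(p, my_filter), reverse=True)
```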

In such a world, we could have a million content moderation systems handling the same content corpus—each taking completely different approaches—and then see which ones are most effective. Centralized platforms would no longer be the sole arbiters of what is allowed and what is not. Instead, many different individuals and organizations would be able to tune the system to their own comfort levels and share with others—and allow competition to happen at the implementation layer rather than at the underlying social network layer.

This would not completely prevent anyone from speaking on the platform, but if more popular interfaces and content moderation filters voluntarily choose not to include them, the power and influence of their speech would be more limited. This then presents a more democratic approach where the filter market can compete. If people feel that one such interface or filter provider is doing poorly, they can switch to another interface or adjust their settings themselves.

Thus, we get less central control, fewer grounds to claim "censorship," more competition, a broader range of approaches, and more control for end-users, while potentially minimizing the reach and impact of content that many consider abusive. In effect, the existence of varied filtering options would scale anyone's influence roughly in inverse proportion to how problematic most people consider that person's speech to be.

For example, there has been significant controversy over how platforms handled the accounts of Alex Jones, who operates InfoWars and frequently promotes conspiracy theories. Users exerted immense pressure on platforms to remove him, and when they finally did, they faced a corresponding backlash from his supporters, who claimed the decision to deplatform him was politically biased.

In a protocol-based system, those who have always believed Jones is not an honest actor might block him earlier, while other interface providers, filter providers, and individuals might make intervention decisions based on any particularly shocking behavior. While his most ardent supporters might never block him, his overall influence would be limited. Thus, those who do not want to be disturbed by his nonsense would not have to deal with it; those who wish to see it could still access it.

A market of different filters and interfaces (and the ability to customize your own) would allow much finer granularity. Conspiracy theorists and trolls would have a much harder time getting through "mainstream" filters, without being completely silenced for those who still want to hear them. Unlike today's centralized systems, where all voices are more or less equal (or completely banned), in a protocol-centered world extremist views would be far less likely to find mainstream reach.

Protecting User Data and Privacy#

One incidental benefit of this approach is that protocol-based systems would almost certainly enhance our privacy. In such a system, social media-style systems would not need to collect and host all your data. Instead, just as filtering decisions can move to the edges, data storage can too. While this could develop in various ways, one fairly simple approach would be for end-users to build their own "data storage" through applications they control. Since we are unlikely to return to a world where most people store data locally (especially as we increasingly operate across multiple devices, including computers, smartphones, and tablets), it still makes sense to host this data in the cloud, but the data could be entirely controlled by the end-user.

In such a world, you might use a dedicated data storage company that hosts your data as encrypted blobs inaccessible to the storage provider, while you selectively enable access for any purpose at any given moment. This data store could also serve as your online identity. Then, if you want to use a Twitter-like protocol, you could simply open access to your data store so the Twitter-like service can read the necessary content. You would be able to set what it is (and is not) allowed to access, and you could see when and how your data is accessed and what has been done with it. That means if someone abuses their access, you can cut it off at any time. In some cases, the system could even be designed so that a service processing your data never retains its own copy of it.
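A minimal sketch of that arrangement, assuming the third-party `cryptography` package for the encryption; the grant/revoke interface itself is hypothetical and stands in for whatever a real protocol would define.

```python
# Sketch of a user-controlled data store: the host only ever sees ciphertext,
# and the user grants, audits, and revokes access per service. The API is hypothetical.
from cryptography.fernet import Fernet

class PersonalDataStore:
    def __init__(self):
        self._key = Fernet.generate_key()        # only the user holds this key
        self._fernet = Fernet(self._key)
        self._blob = b""
        self._grants: set[str] = set()
        self.access_log: list[str] = []

    def save(self, plaintext: bytes) -> None:
        self._blob = self._fernet.encrypt(plaintext)    # host stores ciphertext only

    def grant(self, service: str) -> None:
        self._grants.add(service)

    def revoke(self, service: str) -> None:
        self._grants.discard(service)                   # cut off an abusive service

    def read(self, service: str) -> bytes:
        if service not in self._grants:
            raise PermissionError(f"{service} has no access grant")
        self.access_log.append(f"{service} read the profile blob")
        return self._fernet.decrypt(self._blob)

store = PersonalDataStore()
store.save(b'{"handle": "alice", "follows": ["bob"]}')
store.grant("twitter-like-service")
profile = store.read("twitter-like-service")   # logged and auditable
store.revoke("twitter-like-service")           # revocable at any time
```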

In this way, end-users could still use their data across various social media tools, but rather than locking that data away in opaque silos they cannot inspect or control, control would shift fully to the end-user. Intermediaries would be incentivized to act in users' best interests to avoid being cut off. End-users would have a better understanding of how their data is actually used, and it would be easier for them to sign up for other services or securely transfer data from one entity to another (or to multiple others), enabling powerful new functionality.

While some may worry that various intermediaries would still focus on hoovering up all your data in such a system, there are several key reasons to think otherwise. First, given the ability to keep using the same protocol while switching to a different interface or filter provider, any provider that becomes too "greedy" with your data risks driving users away. Second, separating data storage from interface providers gives end-users greater transparency. The idea is that you store data in encrypted form with a data storage or cloud service so that the host cannot read it. Interface providers must request access, and tools and services can be built that let you determine which data each platform is allowed to access, for how long, and for what purposes.

Interface or filter operators could still abuse their permissions to collect and retain your data, but there are potential technical safeguards as well, including designing the protocol so that services pull only the relevant data from your data store in real time. A service that instead keeps its own copy of your data could trigger alerts that your data is being retained against your wishes.

Finally, as explained when discussing business models below, interface providers have much stronger incentives to respect end-users' privacy wishes, as their revenue may be more directly driven by usage rather than through monetizing data. Disrupting your user base could lead to their exodus, harming the economic interests of the interface provider itself.

Enabling Greater Innovation#

By its nature, a protocol system could bring more innovation to the field, partly because it allows anyone to create interfaces to access that content. This level of competition would almost certainly lead to various innovative attempts to improve all aspects of service. Competing services could offer better filters, better interfaces, better or different functionalities, and so on.

Currently, we only have competition across platforms, which exists to some extent but is quite limited. The market seems able to accommodate only a few giants, so while Facebook, Twitter, YouTube, Instagram, and a few other companies may vie for users' attention here and there, they have relatively little incentive to improve their own services.

However, if anyone can offer new interfaces, new features, or better moderation, then competition within a given protocol (formerly a platform) could quickly become fierce. Many ideas would be tried and abandoned, but this real-world laboratory would show which services innovate and deliver more value, faster. Currently, many platforms provide APIs that allow third parties to develop new interfaces, but those APIs are controlled by the central platform, which can change them at will. Twitter, notoriously, has repeatedly changed its level of support for APIs and third-party developers. Under a protocol system, the APIs would be open by design, with the expectation that anyone can build on them, and with no central company able to cut developers off.

Most importantly, it could create entirely new avenues for innovation, including ancillary services: parties focused on providing better content moderation tools, or the competing data stores discussed earlier, which simply host encrypted data without being able to read it or do anything with it. Such services could compete on speed and uptime rather than on additional features.

For example, in a world of open protocols and private data storage, thriving businesses could develop in the form of "agents" that connect your data storage with various services, automatically performing certain tasks and providing added value. A simple version could be an agent focused on scanning various protocols and services for news related to a specific topic or company, then sending you alerts when any content is found.
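A sketch of such an agent, under heavy assumptions: `fetch_posts` stands in for a real protocol client, and the feed addresses, polling interval, and alert channel are all invented.

```python
# Sketch of a topic-watching "agent": it polls open feeds and raises an alert
# when a watched topic appears. The protocol client here is a hypothetical stub.
import time

WATCHED_FEEDS = ["protocol://news-groups/tech", "protocol://social/mentions"]
TOPIC = "example corp"

def fetch_posts(feed: str) -> list[str]:
    """Hypothetical call; a real agent would speak the open protocol here."""
    return []

def alert(message: str) -> None:
    print(f"ALERT: {message}")          # could instead send email or a push notification

def run_agent(poll_seconds: int = 300, max_cycles: int = 3) -> None:
    for _ in range(max_cycles):         # bounded loop, just for the sketch
        for feed in WATCHED_FEEDS:
            for post in fetch_posts(feed):
                if TOPIC in post.lower():
                    alert(f"'{TOPIC}' mentioned in {feed}: {post[:80]}")
        time.sleep(poll_seconds)

# run_agent(poll_seconds=60)
```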

Creating New Business Models#

One of the main reasons early internet protocols faded in comparison to centralized platforms is the business model problem. Owning a platform (if it is popular) has always looked like a money-printing machine for companies. Building and maintaining protocols, by contrast, has long been a struggle. Most of the work is typically done by volunteers, and over time even well-known protocol projects have withered from neglect. For example, OpenSSL, a critical piece of security software that much of the internet relies on, was found in 2014 to have a severe vulnerability known as Heartbleed. Around that time it became clear that OpenSSL had almost no support: a loose group of volunteers and a single full-time employee. ("The open-source encryption software library protects hundreds of thousands of web servers and many products sold by companies worth billions of dollars, but its operating budget is very limited. OpenSSL Software Foundation chair Steve Marquess wrote in a blog post last week that OpenSSL typically receives about $2,000 in donations each year and has only one employee working full-time on the open-source code.")

There are many such stories. As mentioned, Deja News was unable to build much of a business around Usenet and was sold to Google. Email, as a protocol, has never made money directly; it is typically bundled for free with your ISP account. Some early companies tried to build web platforms around email, but the two most notable examples were quickly acquired by larger companies (Yahoo bought Rocketmail, Microsoft bought Hotmail) and folded into larger products. Eventually Google launched Gmail, which did a fair amount to bring email into its platform, but it has rarely been seen as a major revenue driver. Nevertheless, the success Google and Microsoft have had with Gmail and Outlook shows that large companies can build very successful services on top of open protocols. And if Google were to really mess up Gmail or do something problematic with the service, it would not be hard for people to switch to a different email system and retain access to everyone they communicate with.

We have discussed the competition between various interface and filter implementations to provide better services, but there could also be competition in business models. There may be experiments with different types of business models involving data storage services—which could charge for premium access and storage (and security)—similar to what services like Dropbox and Amazon Web Services do today. Various different business models could also form around implementations and filters. Subscription products or alternative payment methods could also be offered for premium services or features.

While there are reasonable concerns about the data-surveillance advertising setup on today's social media platforms, there is reason to believe that less data-intensive advertising models could thrive in the world described here. Since end-users would hold the keys to their data and set their own privacy levels, aggressively collecting all of that data would be impractical or of little use. Instead, several different kinds of advertising models could develop.

First, there could be an advertising model based on more limited data, focusing more on matching intent or pure brand advertising. To understand this possibility, consider Google's original advertising model, which did not rely heavily on knowing all information about you but rather on understanding your internet search context at a specific moment. Alternatively, we could return to a more traditional brand advertising world, where popular advertisers seek to advertise within micro-communities that have clear interest in cars, for example.
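A toy sketch of intent-based matching: the ad is chosen from the content being viewed at that moment, with no stored profile of the user. The inventory and keywords are invented.

```python
# Contextual ad matching sketch: the decision uses only the page being viewed,
# never a user profile. Inventory and keywords are made up for illustration.
AD_INVENTORY = {
    "cars":    "Test-drive the new example-brand hatchback",
    "hiking":  "Lightweight trail gear, 20% off",
    "cooking": "Cast-iron pans for weeknight dinners",
}

def match_ad(page_text: str) -> str | None:
    """Pick an ad from the page's context alone; fall back to brand advertising."""
    text = page_text.lower()
    for keyword, creative in AD_INVENTORY.items():
        if keyword in text:
            return creative
    return None

print(match_ad("A forum thread comparing fuel economy across cars"))
```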

Or, considering the level of control end-users have over their data, a reverse auction-type business model could be developed, where end-users themselves might be able to offer their data in exchange for access or deals from certain advertisers. The key is that the end-user—rather than the platform—will be in control.
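The reverse-auction idea might look something like the sketch below: the user decides which data categories are on offer, advertisers bid for access, and only the best eligible bid is accepted. The categories, bidders, and prices are invented.

```python
# User-side reverse auction sketch: the user controls which data is offered,
# and advertisers compete for it. All names and numbers are illustrative.
offered_categories = {"commute_pattern", "coffee_preferences"}

bids = [
    {"advertiser": "transit-app", "category": "commute_pattern",    "offer_usd": 2.50},
    {"advertiser": "coffee-shop", "category": "coffee_preferences", "offer_usd": 1.75},
    {"advertiser": "data-broker", "category": "full_history",       "offer_usd": 9.00},
]

# Only bids for categories the user chose to offer are considered at all.
eligible = [b for b in bids if b["category"] in offered_categories]
accepted = max(eligible, key=lambda b: b["offer_usd"], default=None)

if accepted:
    print(f"Granting {accepted['advertiser']} access to {accepted['category']} "
          f"for ${accepted['offer_usd']:.2f}")
```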

Perhaps most interestingly, there are some potential new opportunities that could make protocols actually more sustainable. In recent years, with the rise of cryptocurrencies and tokens, it has become theoretically possible to build protocols that use cryptocurrencies or tokens of certain value, where the value of these projects increases with usage. One simple way to look at it is that token-based cryptocurrencies are akin to equity in a company—but rather than being tied to the financial success of the company, the value of the crypto tokens is tied to the value of the protocol.

Without delving deeply into how these work, these forms of currency have their own value, and they are associated with the protocols they support. As more people use the protocol, the value of the currency or token itself increases. In many cases, running the protocol itself may require the use of the currency or token—thus, as the protocol is used more widely, the demand for the currency/token will increase while the supply remains constant or expands according to previously designed growth plans.

This would incentivize more people to support and use the protocol in order to increase the value of the associated currency. There are already attempts to build protocols in which the organization responsible for the protocol retains a certain percentage of the tokens while distributing the rest. In theory, if such a system became popular, the appreciation of those tokens could fund the ongoing maintenance and development of the protocol, effectively solving the funding problem that has historically plagued open protocols.

Similarly, various implementers of interfaces or filters or agents might find ways to benefit from the increase in token value. Different models could emerge, but specific shares of tokens could be allocated to various implementations, and as they help increase usage of the network, their own token value would also increase. In fact, token distribution could be tied to the number of users within a specific interface to create consistent incentives (though there are mechanisms to avoid gaming the system with fake users). Alternatively, as mentioned above, the use of tokens could be a necessary component of the actual architecture of running the system, much like Bitcoin currency is a key part of its open blockchain ledger functionality.
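One of the simpler distribution models described above, usage-proportional issuance, can be sketched in a few lines. The issuance amount and user counts are invented, and a real system would also need sybil resistance to prevent fake-user gaming, which is out of scope here.

```python
# Sketch of usage-proportional token distribution: a fixed pool of newly issued
# tokens per period is split among interfaces by the active users they bring in.
PERIOD_ISSUANCE = 10_000          # tokens minted this period, per a pre-set schedule

active_users = {
    "minimal-reader": 40_000,
    "curated-reader": 25_000,
    "news-interface": 10_000,
}

total = sum(active_users.values())
payouts = {name: PERIOD_ISSUANCE * users / total
           for name, users in active_users.items()}

for name, tokens in payouts.items():
    print(f"{name}: {tokens:,.0f} tokens")   # more usage -> larger share
```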

In many ways, this setup better aligns the interests of service users with those of protocol developers and interface designers. In platform-based systems, incentives are either to charge users directly (creating some conflict between the platform and users) or to collect more data to advertise to them. Theoretically, "good" advertising could be seen as valuable to end-users, but in most cases, when platforms collect vast amounts of data to target ads to them, end-users feel that the interests of the platform and users are often misaligned.

However, under a tokenized system, the key driver is to gain more usage to increase the value of the tokens. Clearly, this could bring other incentive challenges—there are already concerns that platforms will take up too much time, and any service will face challenges when it becomes too large—but likewise, protocols will encourage competition to provide better user interfaces, better functionalities, and better moderation, thus minimizing this challenge. In fact, one interface might compete by offering a more limited experience and enhancing its ability to limit information overload.

Nevertheless, the ability to align the incentives of the network itself with economic interests creates quite a unique opportunity that many are now exploring.

What Might Not Work#

That is not to say that protocol-based systems would solve every problem. Most of the suggestions above are speculative; indeed, history shows that platforms have outpaced protocols, and protocols face real limits on how quickly they can evolve.

Complexity Kills#

Any protocol-based system could be too complex and cumbersome to attract a sufficiently large user base. Users do not want to fiddle with a multitude of settings or different applications to make things work. They just want to figure out what the service is and be able to use it effortlessly. Platforms have historically been very good at focusing on user experience, especially in onboarding new users.

If a new protocol-based regime is to be tried, one would hope it can and will learn from, and build on, the successes of today's platforms. Likewise, competition among services built on the same protocol would create stronger incentives to deliver better user experiences, as would the value of any associated cryptocurrency, which would effectively be tied to how good those experiences are. Indeed, providing the simplest and most user-friendly interface to a protocol could become a key axis of competition.

Ultimately, one of the reasons platforms have historically triumphed is that having everything controlled by a single entity can also bring some obvious performance improvements. In a protocol world with independent data storage/interfaces, you would be more reliant on multiple companies connecting seamlessly. Internet giants like Google, Facebook, and Amazon have truly perfected their systems to work together seamlessly, while bringing multiple third parties into the mix could introduce greater risks. However, there have already been significant technological improvements in this area (in fact, large platform companies have open-sourced some of their own technologies to achieve this). Most importantly, broadband speeds have improved and should continue to do so, potentially minimizing this possible technical barrier.

Existing Platforms Are Too Big to Change#

Another potential stumbling block is that existing platforms—Facebook, YouTube, Twitter, Reddit, etc.—are so large and entrenched that it may be nearly impossible to replace them with a protocol-based approach. This criticism assumes that the only way to achieve this is to build an entirely new system reliant on protocols. This may work, but the platforms themselves may also consider using protocols.

The reaction of many to the idea that platforms could execute this themselves is to ask why they would do so, as this would inevitably mean relinquishing their current monopolistic control over information in the system and allowing that data to return to end-user control and be used for competing services using the same protocols. However, there are several reasons to believe that certain platforms might actually be willing to accept this trade-off.

First, as the pressure on these platforms increases, they increasingly need to acknowledge that what they are currently doing is not working and is unlikely to work. The current operating model only leads to increasing pressure to "solve" problems that seem impossible to resolve. At some point, migrating to a protocol system may be a way for existing platforms to relieve themselves of the burden of being the gatekeepers of everything everyone is doing on the platform.

Second, continuing to do what they are doing will become increasingly expensive. Facebook recently committed to hiring another ten thousand moderators; YouTube has also committed to hiring "thousands" of moderators. Hiring all these people will also increase the costs for these companies. Switching to a protocol-based system would shift the moderation elements to the edges of the network or competing third parties, saving large platforms money.

Third, existing platforms may explore using protocols as an effective way to compete with other large internet platforms in areas where they have been weak competitors. For example, Google has repeatedly attempted, and abandoned, efforts to build a Facebook-like social network. If it still believes there should be an alternative social network to Facebook, it may recognize the appeal of offering one based on open protocols. Having learned that it is unlikely to succeed with its own proprietary solution, offering an open protocol system becomes an attractive alternative, even if only to undermine Facebook's position.

Finally, if the token/cryptocurrency approach proves to be a viable method for supporting successful protocols, then building these services as protocols rather than centralized controlled platforms could even be more valuable.

This Will Exacerbate Filter Bubble Issues#

Some argue that this approach will actually worsen some of the issues around online abusive content. The crux of the argument is that allowing abusers—whether mere trolls or horrific neo-Nazis—to express their thoughts will be a problem. Further, they would argue that by allowing competing services, you ultimately end up with cesspool areas of the internet where the worst will continue to gather unimpeded.

While I sympathize with this concern, the outcome does not seem inevitable. One response is that these people already infest the existing social networks, and so far we have not managed to get rid of them. The larger point, though, is that this approach could isolate them to some extent, since their content would be far less likely to surface in the most widely used implementations and services on the protocol. That is, while they may be vile and shameless in their own dark corners, their ability to infect the rest of the internet and, importantly, to seek out and recruit others would be severely limited.

To some extent, we have already seen this. When forced to gather in their own corners of the internet after being expelled from sites like Facebook and Twitter, alternative services catering solely to these users have not particularly succeeded in expanding or growing over time. There will always be some people with crazy ideas—but giving them their own little space to be crazy may better protect the broader internet than continually kicking them off of every other platform.

Handling More Objectively Problematic Content#

A key assumption here is that most of the "offensive" content causing headaches falls within a broad "gray" area rather than being "black and white." However, there is some content—often illegal content—that is much clearer and does not fall within the middle of the spectrum. There are legitimate concerns about how this setup would allow communities to form around things like child pornography, revenge porn, stalking, doxxing, or other criminal activities.

Of course, the reality is that such communities have already formed—often on the dark web—and the way to deal with them today is primarily through law enforcement (sometimes through investigative reporting). In such a setup, it seems likely that the same would be true. There is little reason to believe that in a protocol-centered world, this problem would be fundamentally different from the issues that currently exist.

Moreover, through an open protocol system, there would actually be greater transparency, with some (such as civil society groups monitoring hate groups or law enforcement agencies) even able to establish and deploy agents to monitor these spaces and be able to trigger alerts for particularly shocking comments that require more direct scrutiny. Those being stalked may not need to directly track their stalkers but could use digital agents to scan the broader protocol to determine if there is any content indicating a problem, then directly alert the police or other relevant contacts.

Examples in Practice / What It Might Look Like#

As mentioned above, this could play out in various ways. Existing services may find that the burden of being a centralized platform becomes too expensive, leading them to seek alternative models—the tokenized/cryptocurrency approach could even make that model financially viable.

Alternatively, new protocols could be created to achieve this. There are already many different levels of attempts. Services like IPFS (InterPlanetary File System) and its related product Filecoin have laid the groundwork and infrastructure for distributed services based on their protocols and currencies. Tim Berners-Lee, the inventor of the World Wide Web, has been working on a system called Solid, which is now part of his new company Inrupt, which aims to facilitate a more distributed internet. Other projects like Indieweb have been bringing people together to build many parts that could contribute to a future world of protocols rather than platforms.

In any case, if a protocol is proposed and begins to gain attention, we would hope to see some key things: multiple implementations/services on the same protocol, providing users with choices about which service to use rather than limiting them to just one. We may also start to see the rise of new business lines involving secure data storage/data hosting, as users will no longer provide their data for free to platforms and gain more control. Other new services and opportunities may also emerge as competition to build better service sets for users intensifies.

Conclusion#

Over the past half-century of network computing, we have swung between client-side and server-side computing. We have moved from mainframes and dumb terminals to powerful desktop computers to web applications and the cloud. Perhaps we will also begin to see a similar pendulum swing in this area. We have shifted from a world dominated by protocols to a world where centralized platforms control everything. Taking us back to a world where protocols dominate platforms could greatly benefit online free speech and innovation.

Such an initiative has the potential to return us to the early promise of the internet: creating a place where like-minded individuals can communicate globally on a variety of topics, where anyone can discover useful information on a wide range of different subjects without being polluted by abuse and misinformation. At the same time, it could foster greater competition and innovation on the internet while giving end-users more control over their data and preventing large companies from having too much data on any particular user.

Shifting to protocols rather than platforms is a way to promote free speech in the twenty-first century. Rather than relying on a "marketplace of ideas" within a single platform, which can be hijacked by malicious actors, protocols could lead to a marketplace of ideals, where competition occurs to provide better services that minimize the impact of malicious users without cutting off their ability to speak entirely.

This would represent a fundamental shift that should be taken seriously.

You can also find me in these places

Mirror: Hoodrh

Twitter: Hoodrh
