Categories of Decentralization

The past few years have seen an explosion in the number of protocols and platforms that are calling themselves "decentralized". There is a lot of money being invested into decentralized systems and as a result there has been an enormous pressure to relax the definition of decentralization such that every popular protocol can successfully claim to be decentralized.

These efforts have been successful; the word "decentralization" now means a wide variety of conflicting things to different communities, and any attempt to land on a single concrete definition is sure to fail. Instead, we are going to tackle the problem by defining five new categories for reasoning about decentralized systems.

These five categories intentionally blast through nuance. We ignore lots of little properties that distinguish different protocols and instead focus on the high level trust assumptions that are being made. It is not dissimilar from Big-O notation in computer science: two algorithms with identical O-notation can have dramatically different real world performance, yet Big-O notation remains useful for obtaining a baseline understanding of an algorithm and its general performance characteristics.

Similarly, these five categories are intended to provide a robust baseline for reasoning about the trust assumptions made by a platform or protocol, without trying to capture every important detail or provide definite certainty that one protocol is more decentralized than another.

The five categories are: Independently Verifiable, Independent Threshold Trust, External Threshold Trust, Externally Hosted, and Externally Administrated.

Briefly, 'Independently Verifiable' systems are systems where the user can verify a fact with confidence that is fully independent of any external entity. These systems require no trust at all.

'Independent Threshold Trust' systems are systems where the user is depending on external trusted entities, but the user selects these entities themselves, can update the trusted group at any time, and only depends on K of N entities being honest.

'External Threshold Trust' systems are systems where the user is depending on a group of external trusted entities, where at least K of N entities are trusted to be honest. These systems are similar to independent threshold trust systems, except that the user does not have full control over which entities are part of the trusted group, and/or lacks the ability to update the trusted group.

'Externally Hosted' systems are systems where the user is at least partially trusting a single entity with their user experience. If that single entity becomes malicious, the user loses some level of functionality or benefit that was previously enjoyed.

'Externally Administrated' systems are systems where all users are at least partially trusting the same single entity. In an externally hosted system, different users trust different entities, so one provider becoming malicious can impact some users but not all of them. In an externally administrated system, there is at least one entity that has the power to negatively impact all users if it becomes malicious.

Independently Verifiable

Independently verifiable systems have no trust within them at all. Some such systems are very basic. For example, imagine we are sitting together and I declare that a rock is warm to the touch. You can easily verify this by touching the rock yourself. Once you have touched the rock, there is no doubt in your mind about the temperature of the rock. Our system of determining the relative temperature of rocks is therefore independently verifiable.

Within the world of consensus, Proof of Work systems are the only known systems that enable independently verifiable consensus, even in theory. Proof of Work provides an independently verifiable solution to the Byzantine Generals' Problem.

The Byzantine Generals' Problem is a formal academic problem where a number of honest entities are attempting to come to consensus in the presence of a number of malicious entities. The entities engage in some decision making protocol, with the only goal being that all honest entities arrive at the same decision. It does not need to be the "correct" decision; a decision is definitionally correct as long as it is the same decision reached by every other honest entity.

Proof of Work enables all honest entities to arrive at the same decision by creating a concrete metric for measuring the "weight" of a particular decision (the quantity of work that was put behind a decision). Because this weight can be independently verified, all honest entities can arrive at the same conclusion about which decision to make.

In Proof of Work blockchains, the "decision" is the transaction history of the network. Proof of Work protocols assign an additional criterion to decisions: the work on a particular transaction history is only counted if every transaction in that history follows all of the rules of the blockchain protocol. Transactions, for example, are not allowed to arbitrarily move money from one account to another; they must include a signature from the original owner of the money. If the rules are not followed, all of the work is ignored and a different transaction history will be selected.

Determining that a particular transaction history is valid and then measuring the amount of weight (or work) behind that transaction history is an act that requires no trust, enabling Proof of Work blockchains to be independently verifiable from the perspective of solving the Byzantine Generals' Problem and ensuring that all honest entities agree on the same transaction history.
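
To make this concrete, here is a minimal sketch of that selection process in Python. The block and transaction formats are invented for illustration (real protocols hash structured headers, chain blocks together, and adjust difficulty over time); the point is only that every step (checking work, checking rules, summing weight) can be performed locally with no trusted input.

```python
import hashlib

def block_meets_target(header: bytes, target: int) -> bool:
    # A block's proof of work is valid if its hash falls below the target.
    return int.from_bytes(hashlib.sha256(header).digest(), "big") < target

def chain_work(chain) -> int:
    # The expected work behind a block is roughly 2^256 / target, so the
    # weight of a history is just the sum of that quantity over every block.
    return sum(2**256 // block["target"] for block in chain)

def chain_is_valid(chain, validate_tx) -> bool:
    # A history only counts if every block meets its proof-of-work target
    # and every transaction follows the protocol rules.
    for block in chain:
        if not block_meets_target(block["header"], block["target"]):
            return False
        if not all(validate_tx(tx) for tx in block["transactions"]):
            return False
    return True

def select_canonical_chain(candidate_chains, validate_tx):
    # Discard rule-breaking histories, then pick the valid one with the most work.
    valid = [c for c in candidate_chains if chain_is_valid(c, validate_tx)]
    return max(valid, key=chain_work, default=None)
```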

In the world of cryptocurrency, it is not nearly sufficient to ensure that all honest entities arrive at the same transaction history; we also want assurance that the transaction history will not be changed in the future. This is a concept known as "finality".

Proof of Work systems can provide independently verifiable guarantees that a certain transaction history is difficult to change, as an alternate history would need to build up an equivalent amount of work to replace the current history. Though this is not nearly as comfortable as a full guarantee that the transaction history will never be reversed, it is a stronger guarantee than can be achieved by any other known independently verifiable system.
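
To get a feel for what breaking that guarantee costs, here is a rough back-of-the-envelope sketch, assuming a constant difficulty target; the target value in the example is hypothetical, and the result is a lower bound because the honest chain keeps extending while the attacker works.

```python
def expected_hashes_per_block(target: int, hash_space: int = 2**256) -> float:
    # With a uniformly random hash, a block below `target` takes roughly
    # hash_space / target attempts on average.
    return hash_space / target

def minimum_reorg_cost(confirmations: int, target: int) -> float:
    # To replace a history buried under `confirmations` blocks, an attacker
    # must redo at least this much expected work, and must additionally keep
    # pace with the honest chain, which continues to extend.
    return confirmations * expected_hashes_per_block(target)

# Example with a hypothetical target: a transaction buried under 6 blocks.
print(f"{minimum_reorg_cost(6, target=2**224):.3e} expected hashes")
```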


In practice, most Proof of Work systems are not independently verifiable. Even Bitcoin, which is by far the closest system to being independently verifiable, has elements of trust within its ecosystem.

The most common pitfall that prevents Proof of Work cryptocurrencies from being independently verifiable is node software that has automatic updates. If a piece of software receives automatic updates, that means the developers of the software are able to push malicious updates to the user and undermine the integrity of the protocol. If the node software has automatic updates, the trust category is either externally hosted or externally administrated.

Another common pitfall is to have multiple implementations of the consensus rules. A blockchain that has multiple implementations is not independently verifiable because it is practically impossible to guarantee that two different implementations have perfectly identical interpretations of whether a transaction history is valid.

Unlike most software, blockchain software is extremely sensitive to even tiny differences between two implementations. If there is any difference at all, a malicious actor can exploit that difference to convince two honest entities to arrive at different conclusions about which transaction history is the canonical transaction history.

In practice, any two implementations of software are guaranteed to be different in small but critically important ways. There is even a historical example within Bitcoin where two builds of the same code produced by different compilers (one targeting 64-bit, the other 32-bit) resulted in a bug where one implementation thought a transaction was valid and the other did not.
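
The exact details of that incident aside, a toy example makes it easy to see how a difference this small becomes a consensus failure. The validation rules below are invented for illustration and are not Bitcoin's actual rules; the only difference between the two "implementations" is the integer width used to total a transaction's outputs.

```python
def validate_32bit(tx) -> bool:
    # Implementation A: the output total silently wraps at 32 bits.
    total = sum(tx["outputs"]) & 0xFFFFFFFF
    return total <= tx["input_value"]

def validate_64bit(tx) -> bool:
    # Implementation B: the output total is computed with full width.
    total = sum(tx["outputs"])
    return total <= tx["input_value"]

# A crafted transaction whose outputs overflow 32-bit arithmetic.
tx = {"input_value": 10, "outputs": [2**32, 5]}

print(validate_32bit(tx))  # True  - the wrapped total looks like 5
print(validate_64bit(tx))  # False - the true total vastly exceeds the inputs

# Two honest nodes running these implementations now disagree about whether a
# history containing this transaction is valid, and will follow different chains.
```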

Bitcoin developers have demonstrated time and time again that bugs like these are almost trivial to expose using modern tools like fuzzing. At some point in the distant future (at least 15 years from now), we may be able to use formal verification to create multiple implementations of a protocol that do not differ in exploitable ways, but today any ecosystem that supports multiple implementations can correctly be assumed to have consensus-critical bugs.

Bitcoin is the closest example to independent verifiability that the cryptocurrency world has to offer. Bitcoin has no automatic updates and one primary software implementation, and a lot of robust scrutiny has been applied to identify and fix places where Bitcoin strays from independent verifiability.

Despite being close, Bitcoin is known to not be independently verifiable. One problem is that while there is a single main implementation of Bitcoin, that implementation has released numerous versions from many different compilations (each compilation targeting a different platform), and it is difficult to be certain that all versions across all platforms will always arrive at identical conclusions when determining whether a transaction history is valid. Confidence is growing with time, but there is still room for improvement.

The other major issue holding Bitcoin back from independent verifiability is the compilation process for the software itself. Nearly all software today (including software outside of Bitcoin) is compiled using a toolchain that depends on trusted binaries. A good introduction to this problem is the paper 'Reflections on Trusting Trust' by Ken Thompson. The conclusion is that we cannot know whether some part of our toolchain has inserted a back door or otherwise compromised the soundness of the Bitcoin software.

This problem is actively being tackled, and projects such as GUIX have put together a roadmap for eliminating all trusted binaries from the toolchain of Bitcoin. This is a difficult endeavor that has nonetheless made a significant amount of progress.

With enough additional effort, Bitcoin can achieve true independent verifiability and help pave the way for other Proof of Work protocols to achieve independent verifiability as well.

Independent Threshold Trust

An independent threshold trust system is one where the user is depending on a threshold of entities within a trusted set to do the right thing. In order for a threshold trust system to be considered 'independent', the user needs to have full control over who the members of the trusted set are, and the user needs to have the ability to update the trusted set at any time.

Decentralized data storage is perhaps the strongest example of an independent threshold trust system in production today. Typically, a user splits their data into N pieces, giving each piece to a different storage provider. The user has full control over which storage providers receive the data, and can recover the data at any time from just K of the pieces. If the user wishes to update the trusted set, they just download the data and re-upload it to a different set of storage providers.
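
As a sketch of the K-of-N property, the snippet below uses Shamir secret sharing over a prime field. This is a stand-in for the erasure coding (such as Reed-Solomon) that production storage networks typically use, but it illustrates the same threshold behavior: any K shares recover the data, fewer than K do not, and the user chooses which providers hold each share.

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime large enough for a small secret

def split(secret: int, k: int, n: int):
    # Build a random degree-(k-1) polynomial with the secret as its constant
    # term, then hand out one point per provider. Any k points reconstruct it.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

data = int.from_bytes(b"hello world", "big")
shares = split(data, k=3, n=5)       # a 3-of-5 threshold across 5 providers
assert recover(shares[:3]) == data   # any 3 shares are enough
assert recover(shares[2:]) == data
```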

Certain multiparty computations also fall into the independent threshold trust category. In most cases, only one entity participating in the computation needs to be honest to protect the privacy and integrity of the computation. If the user is themselves participating in the computation, they can be that one honest entity, and the multiparty computation can actually be considered independently verifiable. This same tactic also works for decentralized storage: if you host a full copy of the data yourself, the system is independently verifiable rather than being a threshold trust system.

Independent threshold trust systems often depend on a frame of reference. A user that uploads a file to a decentralized storage network and then shares that file with a friend can say the file is stored using an independent threshold trust system. But the friend cannot say the same, as they do not have control over the trusted group that is storing the file. To the friend, the file is secured by an external threshold trust system.

These frames of reference are most important when talking about trusted setup rituals. Trusted setup rituals are often 1-of-N systems, where many members of a large community participate in the trusted setup. For everyone who has participated in the trusted setup, that trusted setup is independently verifiable. But for everyone who did not participate in the ritual, the trusted setup is an external threshold trust system.
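
A minimal sketch of that 1-of-N property is shown below. The group parameters are toy values chosen for readability and look nothing like a real ceremony (which uses structured reference strings over pairing-friendly curves); the point is only that the final value depends on every contribution, so if any single participant generated their exponent honestly and destroyed it, no one can recover the combined secret.

```python
import secrets

# Toy group parameters for illustration only; real ceremonies use structured
# reference strings over pairing-friendly curves, not a bare prime field.
P = 2**127 - 1   # hypothetical modulus
G = 5            # hypothetical base point

def contribute(accumulator: int) -> int:
    # Each participant raises the running value to a fresh secret exponent,
    # then destroys that exponent (the "toxic waste").
    secret = secrets.randbelow(P - 2) + 1
    updated = pow(accumulator, secret, P)
    del secret  # if even one participant truly discards their exponent,
                # nobody can reconstruct the combined secret
    return updated

value = G
for _ in range(20):      # a 1-of-20 ceremony: 20 sequential contributions
    value = contribute(value)

print("final setup value:", value)
```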

Independent threshold trust systems are a fundamental downgrade from independently verifiable systems, because the user is exposed to a group that can take malicious action against them. Even so, the user retains a high degree of control because they can choose the set of entities that are part of the trusted group. This gives the user ample opportunity to make mistakes, but at least the mistakes are within the user's own control.

External Threshold Trust

External Threshold Trust systems are threshold trust systems (K of N) that do not meet the full criteria to be considered independent. Because there are many different ways that a system can fail to meet the criteria for independence, external threshold trust systems span a wide range of practical implementations.

Classic examples of external threshold trust include things like board members voting on the future of a company, and congressmen voting on whether a bill should become law. Many longstanding human institutions have been leveraging external threshold trust.

Most traditional fault tolerant consensus systems (Paxos, Raft, PBFT, etc.) were actually designed to be independent threshold trust systems. The idea behind traditional consensus systems like Paxos is that one company (such as Microsoft or Google) would own all of the servers, and Paxos would protect that company from scenarios like individual servers getting hacked. It was only really with the advent of cryptocurrency that traditional consensus systems started being used in external threshold trust contexts.

Within the world of cryptocurrency, the most trust minimizing consensus systems are "federated consensus" systems. In a federated consensus system, some trustworthy arbiter explicitly assembles trustworthy entities (typically institutions like exchanges, universities, and non-profits) to participate in a consensus process.

The users don't have any control over the members of the trusted group, but they typically have high visibility into the members of the trusted group, and a high confidence that the members of the trusted group will not change unexpectedly. The transparency and process around federated consensus helps to minimize the total amount of trust required.

A live example of this is Blockstream's Liquid network. The Liquid network has a seldom updated set of participants that work together to build consensus and submit transactions to the network. As long as at least 11 out of 15 participants remain honest, the network is secure. The members of the trusted group are carefully vetted and are disclosed to the members of the network.
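
A sketch of the acceptance rule in such a system might look like the following. The member names and keys are hypothetical, and HMACs stand in for the public-key signatures a real federation would use; the essential property is simply that a block only counts once at least 11 of the 15 known members have signed it.

```python
import hmac, hashlib

# Hypothetical federation of 15 members. Real federations use public-key
# signatures; HMAC keys stand in here just to keep the sketch self-contained.
FEDERATION_KEYS = {f"member-{i}": f"key-{i}".encode() for i in range(15)}
THRESHOLD = 11

def sign(member: str, block: bytes) -> bytes:
    return hmac.new(FEDERATION_KEYS[member], block, hashlib.sha256).digest()

def block_is_accepted(block: bytes, signatures: dict) -> bool:
    # Count signatures that verify against known federation members, ignore
    # anything from unknown signers, and require the 11-of-15 threshold.
    valid = sum(
        1 for member, sig in signatures.items()
        if member in FEDERATION_KEYS
        and hmac.compare_digest(sig, sign(member, block))
    )
    return valid >= THRESHOLD

block = b"example block"
sigs = {m: sign(m, block) for m in list(FEDERATION_KEYS)[:11]}
print(block_is_accepted(block, sigs))                            # True: 11 valid signatures
print(block_is_accepted(block, dict(list(sigs.items())[:10])))   # False: only 10
```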

I am hesitant to make broad claims or generalizations about Proof of Stake protocols, because Proof of Stake protocols come in many flavors and any statement about Proof of Stake is sure to have notable exceptions. That said, most Proof of Stake systems can be categorized as external threshold trust systems.

Unlike federated consensus systems such as Liquid, Proof of Stake protocols tend to have a more ad hoc process for accepting entities into the trusted group. Usually the only requirement is wealth. In addition to being ad hoc, the trusted group in Proof of Stake protocols tends to be large, tends to be composed of mostly unknown entities, and tends to change frequently and unexpectedly as people buy and sell tokens.

One advantage that Proof of Stake protocols enjoy over federated consensus protocols is the ability to keep members of the trusted group financially aligned. At the highest level, all members of the trusted group have a financial stake in the network, and are therefore motivated to keep the network healthy. Additionally, most Proof of Stake networks feature a technique called 'slashing', where members of the trusted group can have their coins taken away from them if they are caught misbehaving.
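
As a sketch, one common slashing condition is equivocation: signing two conflicting blocks at the same height. The snippet below is a simplified illustration with invented validator names and an arbitrary penalty; real protocols differ in what counts as slashable and how much stake is destroyed.

```python
stakes = {"validator-a": 1000, "validator-b": 1000}   # staked balances
seen_votes = {}          # (validator, height) -> block hash already voted for
SLASH_FRACTION = 0.5     # hypothetical penalty

def process_vote(validator: str, height: int, block_hash: str):
    # A validator that signs two different blocks at the same height has
    # provably equivocated, and part of its stake is destroyed.
    key = (validator, height)
    if key in seen_votes and seen_votes[key] != block_hash:
        stakes[validator] -= int(stakes[validator] * SLASH_FRACTION)
        print(f"slashed {validator}, remaining stake: {stakes[validator]}")
    else:
        seen_votes[key] = block_hash

process_vote("validator-a", 100, "0xaaa")
process_vote("validator-a", 100, "0xbbb")   # conflicting vote -> slashed
```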

These financial motivators come with many caveats. Slashing for example can only punish certain types of malicious behavior, and additionally can only be effective at all if a sufficient threshold of the trusted group is still honest (this threshold is usually 1/3). The financial alignment is even more dubious, because Proof of Stake networks are fundamentally self-referential. Destroying value in a Proof of Stake network is not actually destroying anything physical, instead it is changing how virtual value is assigned. When you add in financial complications like liquidity and leverage and cryptographic complications like Sybil attacks, it becomes unclear exactly how much value is lost during a slashing event. It may be for example that the entity was unable to exit their wealth anyway due to low liquidity, or the entity may have found a clever way to recover most of their losses through hedging or even outsourcing their damage to someone else.

The scariest aspect of external threshold trust systems is the idea that the trusted group is a fake group. Because the user did not select the trusted group themselves, they don't actually know that the trusted group is a true group, it may just be one entity that is pretending to be a group. This uncertainty is especially strong in Proof of Stake systems that have large conglomerates of Venture Capitalists, large coin custodians for users (such as Coinbase), or trusted validators that perform the staking duties of many ecosystem participants. This uncertainty is also present in federated systems like Liquid, where the trusted group is all using the same hardware that was manufactured by a single entity.

In practice, these unknowns mean that external threshold trust systems are significantly less trustworthy for users than independent threshold trust systems. There are supposedly advantages over centralized systems, but it can be difficult to be certain that the advantages are real. The user has ample opportunity to unwittingly participate in an external threshold trust system which was designed from the start to be a scam, and has really just been a single malicious entity the whole time.

Externally Hosted

An externally hosted system is one where the user is dependent on a trusted entity for some part of their user experience. The simplest example of this is a trusted wallet custodian such as Coinbase. If Coinbase were to become malicious, many (but not all) users would lose all of their money.

Another example of an externally hosted system is any Ethereum application that depends on a trusted API like Infura. If the API provider becomes malicious, the user can potentially lose all of their money. This risk exists even if the API provider does not have access to the user's keys, because they can provide malicious responses that cause the user to unwittingly do things like send all of their money to a malicious contract rather than the correct contract.
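
The following toy simulation illustrates that failure mode. The provider functions and addresses are invented; the point is that the wallet's single source of truth for where to send funds is the API provider, so a lie in a single response is enough to redirect money without ever touching the user's keys.

```python
CORRECT_CONTRACT = "0xCorrectContractAddress"    # hypothetical
ATTACKER_CONTRACT = "0xAttackerContractAddress"  # hypothetical

def honest_provider(query: str) -> str:
    return CORRECT_CONTRACT

def malicious_provider(query: str) -> str:
    # The provider never touches the user's keys; it only lies in a response.
    return ATTACKER_CONTRACT

def build_payment(provider, amount: int) -> dict:
    # The wallet trusts its single API provider to tell it where to send funds.
    destination = provider("address of the deposit contract?")
    return {"to": destination, "amount": amount}

print(build_payment(honest_provider, 100))     # funds go to the real contract
print(build_payment(malicious_provider, 100))  # the user signs funds away
```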

Any cryptocurrency software with an automatic update feature is at best an externally hosted system, as the software developer can push updates at any time that compromise the user. A lot of protocols that would otherwise be threshold trust protocols or even independently verifiable protocols reduce themselves to the externally hosted category simply because of their automatic updates.

Distributed systems such as Email, Mastodon, and Matrix are all externally hosted systems. These systems have users that participate using a wide variety of service providers, but each user is fundamentally dependent on a particular service provider that can do things like block messages and delete accounts. A provider such as Gmail becoming malicious has a significant impact on all Gmail users, even if it does not impact all email users as a whole.

Externally hosted systems are a significant downgrade in trust from threshold trust systems. One big factor in this is that externally hosted systems typically enjoy a significant amount of power over the user. For example, most externally hosted cryptocurrency platforms (like Coinbase) have the power to outright steal all of a user's money. This power typically does not exist in threshold trust systems, even when the entire trusted group is malicious.

Externally Administrated

Externally Administrated systems are systems where the entire userbase is subject to control by a single entity. Most examples of this in the wild are centralized platforms like Facebook or YouTube. On these systems, all users are subject to the rules and conditions set forth by a single external actor.

Cryptocurrency systems that have a single blockchain implementation and mandatory software updates would also be considered externally administrated, as the mandatory updates give the central group of developers a substantial amount of power over users.

Most stablecoins within the crypto ecosystem are externally administrated, as all of the stablecoin withdrawals are processed by a single entity, and that entity has the power to prevent people from withdrawing.

Most NFTs and NFT platforms are also externally administrated, because the actual content of the NFTs is hosted on infrastructure that is fully controlled by the developers. I will quickly note that NFTs are not fundamentally externally administrated, and in fact several independently verifiable NFT projects exist already. It just happens that most NFT projects today are externally administrated.

Externally administrated systems are the least trustworthy systems in our classification. Not only are users subject to the whims of a single entity like in externally hosted systems, but users also have little to no recourse or ability to re-establish themselves in the event that the administrator becomes malicious. At least in externally hosted systems, the user can re-establish themselves with a new provider.

Further Learning

Reflections on Trusting Trust

Ken Thompson provides a demonstration of how to build source code that looks correct and will pass all review, yet produces binaries that have trojan horses in them. His conclusion is that we must trust the people writing our code.

Our conclusion today is a bit different. We know today that it is possible to build a toolchain out of pure source code. You start with a 512 byte assembly file which creates a compiler for a slightly higher level language. You then use that language to define an even more powerful compiler, and so on, until you have a fully completed C compiler where any back doors must be visible in the code that was compiled. There is a project currently attempting to build such a toolchain called GUIX.

Thompson also mentions that these backdoors can extend all the way to the hardware level. Though we can't solve that problem in software, there are many steps that can be taken to improve the overall trustworthiness of hardware as well. For example, one could run the same computation on multiple pieces of hardware from different manufacturers and abort the computation if the output does not exactly match across all components.
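
A sketch of that cross-checking idea, with invented device names and local execution standing in for real hardware dispatch, might look like this:

```python
def run_on_device(device: str, computation, *args):
    # Stand-in for dispatching the same computation to hardware from a
    # different manufacturer; here every "device" just runs it locally.
    return computation(*args)

def cross_checked(computation, *args, devices=("vendor-a", "vendor-b", "vendor-c")):
    # Run everywhere, and refuse to proceed unless every result agrees.
    results = [run_on_device(d, computation, *args) for d in devices]
    if len(set(results)) != 1:
        raise RuntimeError("hardware disagreement, aborting computation")
    return results[0]

print(cross_checked(lambda x, y: x * y, 6, 7))   # 42, agreed by all devices
```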

If you have additional suggestions for on-topic material that could be linked here as further learning, send me a ping. You can find me on Twitter as @DavidVorick.