Canonical block order, or: How I learned to stop worrying and love the DAG

Jonathan Toomim
8 min read · Aug 13, 2018

The main reason I like the canonical block order (CBO) concept is that it teaches us something new about how Bitcoin works, and about ourselves too. It takes something from Bitcoin that we all thought was a necessary security feature of the system and eliminates it entirely. This causes anxiety, fear, uncertainty, and doubt, since none of us like it when things we are attached to are taken from us. However, as is often the case, the thing we are attached to here is something we don’t actually need. By realizing that and learning to let go, we can ascend to a higher plane of spiritual enlightenment and algorithmic efficiency.

It’s not the journey that matters; it’s the destination

When you embark on a journey, the steps you took and the places you visited don’t really matter in themselves. What matters is how the journey changed your state: how you ended up a different person at the end. The same is true in Bitcoin.

For each block, we only care that the state of the ledger was valid before the block and is valid after the block. In order for the after-state to be valid, there needs to be a valid path between the two. That is, there needs to be at least one valid transaction ordering in which all inputs are valid spends of valid unspent outputs. We don’t actually need to know what that ordering is; just knowing that it exists is enough.

With the current system, we know that a valid ordering exists because we make it the miner’s burden to provide one such ordering. This strategy works okay, though it comes at a cost. Most blocks have astronomical numbers of valid orderings. Take as an example a block with 1000 transactions, of which 90% are independent. The number of valid orderings for such a block is far larger than the number of atoms in the universe. But there are far more invalid orderings than valid ones: if each atom in the universe represented a randomly chosen ordering of the block, then statistically not a single atom would represent a valid one. Generating a valid ordering isn’t actually as hard as that makes it sound, but it still takes some time.
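
To get a feel for those numbers, here is a rough back-of-the-envelope calculation in Python. Purely for concreteness (the structure of the dependent transactions is my assumption, not something specified above), it treats the 100 non-independent transactions as a single chain:

```python
import math

def log10_factorial(n):
    # log10(n!) via the log-gamma function, to avoid computing huge integers
    return math.lgamma(n + 1) / math.log(10)

ATOMS = 80  # roughly 10^80 atoms in the observable universe

total = log10_factorial(1000)                         # every possible ordering
valid = log10_factorial(1000) - log10_factorial(100)  # the chain of 100 must keep its relative order

print(f"log10(all orderings)   ~ {total:.0f}")          # ~2568
print(f"log10(valid orderings) ~ {valid:.0f}")          # ~2410, vastly more than 10^80
print(f"log10(valid fraction)  ~ {valid - total:.0f}")  # ~ -158
print(f"log10(expected valid among 10^80 samples) ~ {ATOMS + valid - total:.0f}")  # ~ -78
```

The exact figures depend on the dependency structure, but the shape of the result doesn’t: valid orderings vastly outnumber atoms, and yet they are a vanishingly small fraction of all orderings.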

When validating the block, many of the validation steps need to be performed serially in order to ensure that this particular path is indeed valid. There are some tricks you can do to get around that serial processing requirement, but it is a source of complexity. Trying to deal with that has led us to a surprising realization:

It’s easier to prove that a valid ordering exists than it is to validate any specific ordering.

In order to show this, we’ll first talk a bit about the basic relationships that transactions can have which determine transaction validity. Those basic relationships are the sibling relationship and the parent-child relationship.

Faith in the family tree

Bros first. The sibling relationship happens when two transactions share a parent — that is, when two transactions both have an input spending the same output from a third transaction. Whenever that occurs, you have a double-spend attempt. This is a very serious crime in the Bitcoin world, and the punishment is death by fratricide. To determine which sibling is the guilty party, we use trial by combat, with the assumption that the elder brother will be bigger and stronger than the younger one. If a miner includes both siblings in a block, that means they did not enforce the law of trial by combat, which makes the block invalid. The penalty for that is revolution and overthrow of our mining overlords. Fortunately for everyone, checking to see if there are siblings in a block is very easy using hashtables with atomic swaps. It’s an embarrassingly parallel problem, and that’s exactly the kind of problem we like to have.
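
Here is a minimal, single-threaded sketch of that sibling check. The transaction and outpoint representation is invented for illustration; a real implementation would shard the set or use the lock-free hashtable with atomic swaps mentioned above to get the parallelism:

```python
from typing import Iterable, List, Tuple

Outpoint = Tuple[str, int]  # (txid of the parent transaction, output index)

def has_sibling_conflict(block_inputs: Iterable[List[Outpoint]]) -> bool:
    """Return True if any two inputs in the block spend the same parent output."""
    spent = set()
    for tx_inputs in block_inputs:  # one entry per transaction in the block
        for outpoint in tx_inputs:
            if outpoint in spent:
                return True         # two transactions share a parent output: double spend
            spent.add(outpoint)
    return False
```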

And now for the troubled relationship between the parent and the child. Bitcoin exists because Satoshi Nakamoto hated the idea of time travel. He didn’t want a Back-to-the-Future scenario in which someone could travel back in time and rewrite history, and he especially hated time-traveling incest. Going into the past to impregnate your grandmother, thereby becoming your own grandfather, is gross and not the natural order of things. We don’t want that, and Satoshi didn’t want that. A transaction should only spend outputs that preceded it chronologically, and the transaction dependency graph ought not to have any loops in it. It should be a directed acyclic graph, or a DAG. In his analysis, tl121 noted that even if a loop happened, it wouldn’t compromise Bitcoin’s security because no new coins could be created. Really, tl121 just needed to take a step back and breathe. We don’t have to worry about these incest cycles at all, because they’re not possible. Time travel isn’t a thing. Satoshi made sure of that.

Each transaction input refers to a previous transaction’s output using the SHA256 hash of that transaction. SHA256 is a one-way function: you can’t craft a transaction to match a hash that an input already refers to; you can only craft the input to match the hash of a transaction that already exists. If it were possible to invert SHA256 that way, an attacker could change any historical block or transaction they wanted to by making another one with the same hash. Because each input can only be created after the parent transaction’s hash is known, time travel is impossible. And since time travel is impossible, the transaction graph must be a DAG no matter what. We don’t even have to verify this for each block. It is just a provable mathematical fact.
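
For reference, here is what computing that reference value looks like (in Bitcoin the txid is actually SHA256 applied twice; a sketch using Python’s hashlib):

```python
import hashlib

def txid(serialized_tx: bytes) -> str:
    # A transaction id is the double SHA256 of the serialized transaction,
    # conventionally displayed in reversed byte order. An input can only
    # contain this value once the parent transaction already exists.
    digest = hashlib.sha256(hashlib.sha256(serialized_tx).digest()).digest()
    return digest[::-1].hex()
```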

Every DAG has at least one topological sorting. That is to say, for every transaction DAG there is at least one ordering that visits every transaction exactly once, with each transaction appearing after every transaction whose outputs it spends. If we wanted to, we could compute one of those orderings, but there’s no need. As long as we know it’s a DAG, and that it contains no siblings and no orphans (spends of outputs that don’t exist), our validation work is done.
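
If we ever did want one of those orderings, Kahn’s algorithm hands us one in linear time. A sketch, assuming a made-up representation in which each txid maps to the set of its in-block parents:

```python
from collections import defaultdict, deque

def one_valid_ordering(parents_of):
    """parents_of: dict mapping each txid to the set of parent txids in the same block.
    Returns one topological ordering (parents always before children)."""
    children_of = defaultdict(list)
    indegree = {txid: 0 for txid in parents_of}
    for txid, parents in parents_of.items():
        for parent in parents:
            if parent in parents_of:  # only in-block dependencies constrain the order
                children_of[parent].append(txid)
                indegree[txid] += 1
    ready = deque(txid for txid, deg in indegree.items() if deg == 0)
    order = []
    while ready:
        txid = ready.popleft()
        order.append(txid)
        for child in children_of[txid]:
            indegree[child] -= 1
            if indegree[child] == 0:
                ready.append(child)
    return order  # always complete here, since the graph is guaranteed to be a DAG
```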

The value of letting go

If we stop worrying about the path taken, it gives us some real, tangible benefits.

First, it’s a more elegant and beautiful system. Being able to delete a bunch of code because you did some math on paper is one of the most beautiful things an engineer can do.

Second, we no longer have to worry about miners finding their own path — that is, block template creation could be faster, allowing higher throughput and fewer orphan blocks. Currently, about 70% of block template creation time seems to come from the CPFP code that deals with transaction package order dependencies. That part might get faster if we stopped doing it.

Third, it means we no longer have to worry about intermediate states ever again. One of the main reasons that Ethereum is so slow is that the protocol requires transaction receipts, which partially specify the state of the ledger after every single transaction. That makes embarrassingly parallel validation impossible, which is embarrassing. We don’t want that. With CBO, we only need to worry about the start and the finish.

Fourth, it makes giving other people directions a lot easier. Instead of telling people each turn they need to take in order, we can just give them the difference between their current coordinates and the destination coordinates. This is what Graphene does, although the coordinate system is a high-dimensional hypercube in which each edge is based on hashes of every transaction in the block. Confused? Doesn’t matter, it works. And it works 7.14x better if we don’t need to know the transaction order.

Fifth, it reduces the complexity of the system, and makes it harder for malicious directions-givers to intentionally mix up directions in order to get people lost. This can be relevant in weak block scenarios, where a selfish miner could make weak blocks that contain the same transactions but in different orders, and send different weak blocks to different nodes. The selfish miner could then manipulate the final block they create in order to change how efficiently the block propagates to different parts of the network. Requiring all transactions to be sorted by hash makes this sort of thing impossible. It reduces the degrees of freedom of the system, and gets rid of information that we do not need.
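
The sort that makes this possible is about as exciting as it sounds; a one-line sketch:

```python
def canonical_order(txids):
    # With a canonical (lexicographic-by-txid) order, a given set of transactions
    # has exactly one admissible arrangement, so there is no ordering left for a
    # miner or relayer to play games with.
    return sorted(txids)
```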

Among these five effects, I believe #4 and #5 to be the most important. Block propagation speed in typical and adversarial conditions is currently the main limiting factor on safe block sizes, and being able to improve performance in both conditions with the same change is rare.

Seeking enlightenment is a lie we tell ourselves to feed our egos

Oh, this is embarrassing. Remember how I said in #3 above that Ethereum’s state specification makes embarrassingly parallel validation impossible? I made it sound like current Bitcoin also suffers from this problem, and that embarrassingly parallel validation is currently impossible with Bitcoin. It turns out that isn’t true. The same algorithm for embarrassingly parallel validation after the CBO fork can still be used right now, with a minor modification.

The basic algorithm is to first go through each transaction (in any order) and build a hashtable of all the outputs, and then to go through each transaction again and make sure that each of its inputs spends an unspent output created either in the same block or earlier in the blockchain.
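
In sketch form, under an invented representation (each transaction carries its txid, its input outpoints, and an output count, and utxo_set stands for the outputs that were unspent before this block):

```python
def validate_block_unordered(block_txs, utxo_set):
    """Order-agnostic block check. Both passes are trivially parallelizable."""
    # Pass 1: index every output created inside this block.
    created = {(tx['txid'], n) for tx in block_txs for n in range(tx['n_outputs'])}
    # Pass 2: every input must spend an output that exists, and spend it exactly once.
    spent = set()
    for tx in block_txs:
        for outpoint in tx['inputs']:
            if outpoint not in utxo_set and outpoint not in created:
                return False  # spends an output that doesn't exist
            if outpoint in spent:
                return False  # double spend within the block
            spent.add(outpoint)
    return True
```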

That’s really easy to do, but it only works if we don’t care whether the output came later in the block than the input which spends it. Wouldn’t it be difficult to also check the order in a parallel algorithm?

No, it isn’t. You go through each transaction (in any order) and build a hashtable of all the outputs and their block positions, and afterwards check that each input spends an unspent output which came earlier in the block or in a previous block.
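
The ordered variant is the same two passes with block positions added (same invented representation as above); the only new work is one integer comparison per input:

```python
def validate_block_ordered(block_txs, utxo_set):
    """Same check, but also enforces the current topological ('parents first') rule."""
    # Pass 1: remember which block position created each output.
    position = {(tx['txid'], n): i
                for i, tx in enumerate(block_txs)
                for n in range(tx['n_outputs'])}
    # Pass 2: an in-block parent must appear strictly before its child.
    spent = set()
    for i, tx in enumerate(block_txs):
        for outpoint in tx['inputs']:
            if outpoint in position:
                if position[outpoint] >= i:
                    return False  # the parent output appears at or after the spender
            elif outpoint not in utxo_set:
                return False      # output doesn't exist at all
            if outpoint in spent:
                return False      # double spend within the block
            spent.add(outpoint)
    return True
```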

This means that one of the big benefits that CBO was designed to achieve can even be done by Bitcoin Core, in the land of no forks.

There is no spoon, but is there a fork?

So we really don’t need the CBO hard fork in order to get parallel validation. That’s nice to know, as it frees us somewhat from needing to fork before we can implement parallel validation. And maybe Bitcoin Cash can delay the fork or do without it entirely. All of the things we have to gain by letting go are performance optimizations or protections against adversarial conditions, both of which can also be addressed by writing more code. While we would prefer to use a fork to remove cruft from Bitcoin’s design and forge a leaner, purer, and more elegant Bitcoin Cash, if our attempts to do that make the community feel scared of the murky and mystical future that lies ahead, we can just go back to writing workaround code and adding complexity to the system.

But if the community thinks it’s ready, and is willing to take the leap of faith with us to cross over to the other side, we are waiting for you with open arms. And there are brownies.
