Ethereum’s Technical Challenges
Before getting into the details of what I have learned, let's review the technical challenges faced by smart contract developers on Ethereum.
1. Performance
A performance analysis in Feb 2016 showed that it took the Parity Ethereum client over an hour to process 6 months' worth of transactions (1 million blocks). All 1M blocks were prior to the recent denial-of-service attack on the Ethereum network.
To put this in perspective, Steem's 6 million blocks, which carry an average transaction rate significantly higher than Ethereum's, can be processed in just a few minutes. This represents more than a 20x difference in processing speed.
The speed at which a single CPU thread can process the virtual machine directly impacts the potential transaction throughput of the network. The current Ethereum network is able to sustain little more than 20 transactions per second. Steem, on the other hand, can sustain 1000 transactions per second.
A recent attack on Ethereum was able to completely saturate the network and deny service to others. Steem, on the other hand, easily survived the flood attacks thrown at it without disrupting service and all without any transaction fees!
There are several reasons why the EVM (Ethereum Virtual Machine) is slow.
- Storage access is backed by LevelDB with 32-byte key/value pairs
- 256-bit operations are much slower than native word-size arithmetic (see the sketch after this list)
- Calculating GAS consumption is part of consensus
- The internal memory layout of a script is also part of consensus (see point 1)
- There are few opportunities to optimize EVM scripts
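To make the 256-bit point concrete, here is a minimal sketch (the uint256 type and uint256_add function are illustrative, not code from any client): a single EVM-style 256-bit addition decomposes into four 64-bit limb additions with carry propagation, while the equivalent native addition is one CPU instruction.

#include <array>
#include <cstdint>

// Illustrative only: a 256-bit value modeled as four 64-bit limbs.
using uint256 = std::array<uint64_t, 4>;

// One EVM-style 256-bit ADD costs four 64-bit additions plus carry
// handling; a native 64-bit add is a single instruction.
uint256 uint256_add(const uint256& a, const uint256& b) {
    uint256 r{};
    uint64_t carry = 0;
    for (int i = 0; i < 4; ++i) {
        uint64_t sum = a[i] + b[i];
        uint64_t c1  = sum < a[i] ? 1 : 0;   // carry out of a[i] + b[i]
        r[i] = sum + carry;
        uint64_t c2  = r[i] < sum ? 1 : 0;   // carry out of adding the old carry
        carry = c1 | c2;
    }
    return r;   // final carry wraps, as 256-bit arithmetic does in the EVM
}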
Regarding claims of Unlimited Scalability

Vitalik Buterin claims that Ethereum will offer “unlimited” scalability within 2 years. This claim is based upon the idea that not all nodes need to process all transactions. This is a bold claim that I believe will fail for some very practical reasons which I will address below.
Their approach is to “shard” the blockchain, which can be viewed as a way of making the blockchain “multi-threaded”. Each node will run “one thread”, and each “thread” will be capable of 20 transactions per second. In theory they can add an unlimited number of nodes and the transaction volume can scale to a limitless amount.
Let's assume there exist two completely independent smart contracts. These two contracts could each run in their own shard at 20 transactions per second. But what happens if these two contracts need to communicate with each other? The solution is to pass messages from one contract to the other. Anyone who has implemented multi-threaded programs with message passing knows that it isn't worth the effort unless the message-passing overhead is low relative to the computation being performed.
The overhead of message passing among nodes over the internet is very high. Adding cryptographic validation also introduces significant overhead. The “cost” to “read” a single value from another shard will be significant.
Lastly, developers of multi-threaded programs are familiar with the concept of each thread “owning” the data it manages. Everything that wants to touch that data goes through its owner. What happens when a single shard owns a piece of data that receives more than 20 requests per second? At some point that single thread becomes the bottleneck.
Implementing Steem with “sharding” would end up bottlenecking on the “global state” that every vote impacts. The same thing would happen for any market processing limit orders. Sharding simply doesn’t scale linearly and certainly not in an unlimited manner.
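A rough sketch of that ownership bottleneck, under the assumption that a shard behaves like a single worker thread owning its state (the ShardOwner class below is purely illustrative): every contract that touches the owned data queues up behind the same worker, so adding more shards does nothing for throughput on that data.

#include <functional>
#include <mutex>
#include <queue>

// Hypothetical single "shard" that owns a piece of global state.
// All reads and writes funnel through its queue, so system-wide
// throughput on that state is bounded by this one worker no matter
// how many other shards run in parallel.
class ShardOwner {
public:
    void submit(std::function<void(int&)> request) {
        std::lock_guard<std::mutex> lock(_mutex);
        _requests.push(std::move(request));
    }

    // Executed by the shard's single worker thread.
    void run_once() {
        std::lock_guard<std::mutex> lock(_mutex);
        while (!_requests.empty()) {
            _requests.front()(_state);   // every contract touching _state waits here
            _requests.pop();
        }
    }

private:
    int _state = 0;                      // e.g. a global vote tally or an order book
    std::mutex _mutex;
    std::queue<std::function<void(int&)>> _requests;
};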
2. Pricing
Implementing something like Steem on Ethereum poses the major challenge that users would have to pay $0.01 per vote and more per post. As the number of users grows, the network would become saturated, pushing the price of GAS higher.
Now imagine that Steem wasn't the only application running on Ethereum; imagine that Golos and Augur both became popular with a million users each. The price of GAS would go up until it stunted the growth of all three applications.
The only way to bring prices down is to increase transaction throughput by improving efficiency.
Improving efficiency isn’t a uniform process. Installing a faster disk will not improve the efficiency of computation. Making computations faster will not help with disk access. All attempts at improving efficiency will necessarily impact the relative GAS cost of each operation.
Ethereum was recently forced to execute a hard fork to change gas costs. The last time Ethereum had a hard fork, it resulted in the creation of Ethereum Classic!
It is safe to say that all attempts to optimize the EVM will change the relative cost of its operations. The GAS price can therefore only be reduced in proportion to the speedup of the instruction that sees the least optimization.
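A hypothetical worked example of that constraint (the speedup numbers are made up): if one round of optimization makes storage access 4x faster but 256-bit arithmetic only 1.5x faster, the GAS price can safely fall only to about 1/1.5 of its old value; drop it further and arithmetic-heavy contracts become underpriced relative to the real cost of validating them.

#include <algorithm>
#include <iostream>

int main() {
    // Hypothetical speedups from a single round of client optimization.
    double storage_speedup    = 4.0;   // disk / LevelDB access got 4x faster
    double arithmetic_speedup = 1.5;   // 256-bit math only got 1.5x faster

    // The gas price can only fall in proportion to the least-optimized
    // operation; otherwise that operation becomes underpriced.
    double safe_price_factor = 1.0 / std::min(storage_speedup, arithmetic_speedup);
    std::cout << "gas price can drop to " << safe_price_factor * 100
              << "% of today's price\n";
}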
While optimizing some instructions may increase the profit margin of the block validators, Smart Contract developers are still stuck paying higher prices.
Because GAS is part of consensus, all nodes need to continue processing old blocks using the old GAS calculations up until a hard fork occurs. This means that future optimizations are constrained by the need to maintain the original accounting.
3. Optimizing Code
One of the biggest sources of optimization is not improving the hardware of your computer, but improving the software. In particular, compilers can work wonders at improving the performance of code running on the same machine. Compilers have the ability to optimize because they have access to more information about the programmer's intent. Once the code is converted to assembly, many opportunities for optimization are lost.
Imagine someone wanted to optimize an entire contract by providing a native implementation. The native implementation would produce all the same outputs given the same inputs, except it wouldn't know how to calculate the GAS costs because it wasn't run on the EVM.
4. Programmer Intent
Ethereum smart contracts are published as compiled bytecode which the interpreter processes. In order for people to read and comprehend a smart contract they need to see its code, but the blockchain doesn't store source code; it stores assembly. People are forced to validate that the “compiled code” matches the expected output of the source code.
There are several problems with this approach. It requires either that all compiler developers generate the same code and make the same optimizations, or that all contracts be validated against one chosen compiler.
In either case, the compiled code is one step removed from the expressed intent of the contract writers. Bugs in the compiler now become violations of programmer intent, and these bugs cannot be fixed by fixing the consensus interpretation because consensus does not know the source code.
A Different Approach
The creators of the C++ language have a philosophy of defining the expected behavior of a block of code without defining how that behavior should be implemented. This means that different compilers generate different code with different memory layouts on different platforms.
It also means that developers can focus on what they want to express and they can get the results they expect without unneeded restrictions on the compiler developers or the underlying hardware. This maximizes the ability to optimize performance while still conforming to a spec.
Imagine a smart contract platform where developers publish the code they want to run, and the blockchain consensus is bound to a proper interpretation of that code, but not to how the code should be executed.
In this model, a script could be replaced with a precompiled binary using a different algorithm, and everything would be fine so long as the inputs and outputs of the black box remain the same. This is not possible with Ethereum because the black box would need to calculate exactly how much GAS was consumed.
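Here is a minimal sketch of what “bound to interpretation, not execution” might look like; the ContractExecutor interface and class names are hypothetical, not an existing API. Consensus only cares about the mapping from inputs to outputs, so a node is free to swap the interpreter out for a hand-optimized native implementation.

#include <map>
#include <string>
#include <vector>

// Illustrative only: consensus binds the mapping from (state, arguments)
// to new state, not how that mapping is computed.
using ContractState = std::map<std::string, std::string>;

class ContractExecutor {
public:
    virtual ~ContractExecutor() = default;
    virtual void transfer(ContractState& state,
                          const std::vector<std::string>& args) = 0;
};

// Reference implementation: run the published script in an interpreter.
class ScriptedExecutor : public ContractExecutor {
public:
    void transfer(ContractState& state,
                  const std::vector<std::string>& args) override {
        // ... feed the script and arguments to the interpreter ...
    }
};

// Optimized drop-in: hand-written native code. Valid so long as it
// produces identical state for every possible input.
class NativeExecutor : public ContractExecutor {
public:
    void transfer(ContractState& state,
                  const std::vector<std::string>& args) override {
        // ... same observable behavior, no interpreter involved ...
    }
};

The native version never needs to reproduce GAS accounting or the interpreter's memory layout; it only needs to produce identical state transitions.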
A Better Approach to GAS
GAS is a crude approach to calculating a deterministic execution time. In an ideal world we would simply use wall-clock time, but different computers with different specifications and loads will all get different results. While it may not be possible to reach a deterministic consensus on exactly how much time something takes, it should be possible to reach consensus on whether or not to include the transaction.
Imagine a group of people in a room attempting to reach consensus on whether or not to include a transaction. Each of them measures the wall-clock time it takes to process the transaction and, using preemptive scheduling, breaks execution if it takes too long.
After taking their measurements they all vote, and if the majority say it was “ok”, then everyone includes the transaction. The network does not know “how long it took”; it only knows that the transaction took an approved amount of time. An individual computer will then execute the transactions regardless of how long they take once it knows consensus has been reached.
From a consensus perspective, this means all scripts pay the same fee regardless of the actual computation performed. Scripts are paying for “fixed length time slices” rather than paying for “computations”. In terms that operating system developers may be familiar with, scripts must execute within the allotted quantum or they will be preempted and their work lost.
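Here is a sketch of that quantum-based policy, with the “script” modeled as a plain callable and a cooperative timeout standing in for true preemptive scheduling: each node measures its own wall-clock time and votes against anything that overruns its slice, with no per-instruction metering at all.

#include <chrono>
#include <functional>
#include <future>
#include <iostream>
#include <thread>

// Sketch only: a real implementation would preempt the script; std::future
// merely lets this node stop waiting and vote "not ok".
bool execute_within_quantum(std::function<void()> script,
                            std::chrono::milliseconds quantum) {
    auto done = std::async(std::launch::async, std::move(script));
    return done.wait_for(quantum) == std::future_status::ready;
}

int main() {
    using namespace std::chrono_literals;
    bool ok = execute_within_quantum(
        [] { std::this_thread::sleep_for(5ms); },  // stand-in for a script
        50ms);                                     // this node's quantum
    std::cout << (ok ? "include transaction\n" : "reject: exceeded quantum\n");
}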
The above approach is very abstract and wouldn't be practical as a direct voting implementation, but there is a way to implement it that scales without much more overhead than Steem currently uses. For starters, all block producers are on a tight schedule to produce their blocks. If they miss their time slot, then the next witness goes. This means that block producers must be able to apply their block and get it propagated across the network to the majority of nodes (including the next witness) before the next block time.
This means that the mere presence of a transaction in a block is a sign that the network was able to process the block and all of its transactions in a timely manner. Each node in the network also gets a “vote” on how long a block and its transactions took to process. In effect, a node does not need to relay a block if it thinks the transactions exceeded their allocated time.
A node that objects to a block based upon its perceived execution time will still accept new blocks building on top of the perceived “bad” block. At some point either the node will come across a longer fork and switch or the “bad” block will be buried under enough confirmations (votes) that it becomes irreversible. Once it is irreversible the node will begin relaying that block and everything after it.
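A sketch of that relay policy, with hypothetical field names and an assumed irreversibility threshold: a node that believes a block blew its time budget withholds its implicit vote by not relaying it, but it never forks itself off, and once the block is buried deep enough it relays it after all.

#include <cstdint>

// Hypothetical per-block bookkeeping kept by a validating node.
struct BlockInfo {
    uint64_t elapsed_us     = 0;   // this node's measured wall-clock time
    uint64_t time_budget_us = 0;   // what this node considers acceptable
    uint32_t confirmations  = 0;   // blocks built on top of it so far
};

constexpr uint32_t IRREVERSIBILITY_THRESHOLD = 15;   // assumed value for illustration

// The node still accepts and builds on the block either way; refusing to
// relay is simply how it "votes" that the block took too long.
bool should_relay(const BlockInfo& block) {
    if (block.elapsed_us <= block.time_budget_us)
        return true;                                         // looked fine locally
    return block.confirmations >= IRREVERSIBILITY_THRESHOLD; // buried: relay anyway
}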
A block producer who desires to get paid will want to make sure that his blocks propagate, and will therefore be “conservative” in his estimates of the wall-clock time other nodes will have. The network will need to adjust block rewards to be proportional to the number of transactions included.
Due to the natural “rate limiting” enforced by bandwidth / quantum per vesting stake it would require a large stake for any individual miner to fill their own block just to collect the bonus.
Preventing Denial of Service

One of the challenges with scripts is that it costs an attacker nothing to generate an infinite loop. Validating nodes end up consuming resources even if the final conclusion is to reject the script. In this case the validator doesn't get paid for the resources it consumed.
There are two ways that validators can deal with this kind of abuse:
- Locally blacklist / whitelist scripts, accounts, and/or peers
- Require a proof-of-work on each script
Using proof of work, it is possible for a validator to know that the producer of the script expended a minimum amount of effort. The more work done, the greater the “wall clock” time the validator will allow the script to execute, up to a maximum limit. Someone wishing to propagate a computationally expensive transaction will need to generate a more difficult proof of work than someone generating a less expensive one.
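As a sketch of that relationship (the constants are made up for illustration): the more work attached to a transaction, the larger the wall-clock budget a validator grants its script, up to a hard ceiling.

#include <algorithm>
#include <cstdint>

// Hypothetical tuning constants, not taken from any existing chain.
constexpr uint64_t BASE_MICROSECONDS = 100;      // budget for a minimal-work transaction
constexpr uint64_t MAX_MICROSECONDS  = 50'000;   // hard ceiling per script

// More leading zero bits in the transaction's proof of work buys the
// script a larger wall-clock budget, but never more than the ceiling.
uint64_t allowed_execution_time_us(uint32_t pow_leading_zero_bits) {
    uint64_t budget = BASE_MICROSECONDS << std::min(pow_leading_zero_bits, 32u);
    return std::min(budget, MAX_MICROSECONDS);
}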
This proof-of-work combined with TaPoS (Transactions as Proof of Stake) means we now have Transactions as Proof of Work which collectively secure the entire network. This approach has the side effect of preventing witnesses from “stuffing their own block” just to get paid, because each transaction they generate will require a proof of work. The blockchain can therefore reward witnesses for transactions based upon the difficulty of the proof of work as an objective proxy for the difficulty of executing the script.
Proof of Concept
I recently developed some code using the Wren scripting language integrated with experimental blockchain operations. Here is how you would implement a basic “cryptocurrency” in a single smart contract:
test_script = R"(
    class SimpleCoin {
      static transfer( from, to, amount ) {
        var from_balance = 0
        var to_balance = 0

        // the caller must be the account named in the `from` field
        if( from != Db.current_account_authority().toString )
          Fiber.abort( "invalid authority" )

        var a = Num.fromString( amount )
        if( a < 0 ) Fiber.abort( "cannot transfer negative balance" )

        // balances are stored as script-specific key/value pairs
        if( Db.has( from ) )
          from_balance = Num.fromString( Db.fetch( from ) )
        if( Db.has( to ) )
          to_balance = Num.fromString( Db.fetch( to ) )

        // the script's own account is exempt from the balance check
        if( from_balance <= 0 && Db.script_account().toString != from )
          Fiber.abort( "insufficient balance" )

        from_balance = from_balance - a
        to_balance   = to_balance + a

        Db.store( from, from_balance.toString )
        Db.store( to, to_balance.toString )
      }
    }
)";

// assign the script to account 0 ...
trx.operations.emplace_back( set_script{ 0, test_script } );

// ... then call SimpleCoin.transfer( "1", "0", "33" ) with account 1's active authority
trx.operations.emplace_back(
    call_script{
        account_authority_level{ 1, 1 }, 0,
        "SimpleCoin",
        "transfer(_,_,_)", { "1", "0", "33" }
    }
);
I introduced two blockchain-level operations: set_script and call_script. The first operation assigns the script to an account (account 0), and the second operation invokes a method defined by the script.
The scripting environment has access to the blockchain state via the Db API. From this API it can load and store script-specific data as well as query information about the current authority level of an operation. The call_script operation will assert that account “1” has approved the call at authority level “1” (aka active authority). It will then invoke the script on account “0” and call SimpleCoin.transfer( “1”, “0”, “33” ).
The transfer method is able to verify that the current_account_authority matches the from field of the transfer call.
Benchmark of Proof of Concept
I ran a simulation processing thousands of transactions, each containing a single call to ‘SimpleCoin.transfer’, and measured the time it took to execute. All told, my machine was able to process over 1100 transactions per second through the interpreter. This level of performance is prior to any optimization and/or caching of the script. In other words, the measured 1100 transactions per second included compiling the script 1100 times. A smarter implementation would cache the compiled code for significant improvements.
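The benchmark loop had roughly the shape sketched below; apply_transfer_transaction is a stand-in for the real call path, which pushed one call_script transaction through the Wren interpreter (recompiling the script) on every iteration.

#include <chrono>
#include <iostream>

// Placeholder for the real call path: in the actual test this applied one
// call_script{ "SimpleCoin", "transfer(_,_,_)" } transaction, recompiling
// the Wren script every time.
void apply_transfer_transaction() {
    volatile int sink = 0;
    for (int i = 0; i < 1000; ++i) sink += i;   // stand-in workload
}

int main() {
    constexpr int kCount = 10000;
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < kCount; ++i)
        apply_transfer_transaction();
    double seconds = std::chrono::duration<double>(
        std::chrono::steady_clock::now() - start).count();
    std::cout << (kCount / seconds) << " transactions per second\n";
}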
To put this in perspective, assuming Ethereum “Unlimited” scaled perfectly, it would take 55 nodes (at 20 transactions per second each) to process the same transactions that my single node just processed. By the time Wren is optimized with proper caching and Ethereum is discounted for its necessary synchronization overhead, a smart contract platform based upon Steem and Wren technology could be hundreds of times more efficient than Ethereum.
Why Turing Complete Smart Contracts
Decentralized governance depends in part upon decentralized enforcement of contracts. A blockchain cannot know in advance every contract that might be beneficial, and the politically centralizing requirement of hard forks to support new smart contracts limits the ability to organically discover what works.
With a proper smart contract engine, developers can experiment with “low performance scripts” and then replace their “slow scripts” with high performance implementations that conform to the same black-box interface. Eliminating the need to deterministically calculate GAS and memory layout is absolutely critical to realizing the desired performance enhancements.
Conclusion
The technology behind Steem combined with the described smart contract design will be hundreds of times more scalable than Ethereum while entirely eliminating transaction fees for executing smart contracts. This can open up thousands of new applications that are not economically viable on Ethereum.