Blockchain originated as a concept in computer science well before cryptocurrency was conceived, and it was used primarily in cryptography and in handling and verifying data structures. A primitive form was the hash tree, also known as the Merkle tree, patented by Ralph Merkle in 1979 (Figure 1). In a peer-to-peer network of computers, validating data was important to make sure nothing was altered during transfer. It also helped to ensure that false data was not sent.

Figure 1: The Merkle Tree
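
To make the idea concrete, here is a minimal Python sketch of how a Merkle root can be computed from a list of data blocks. The function names, the choice of SHA-256, and the odd-node convention are illustrative assumptions, not details from Merkle's original design.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    """Hash a byte string with SHA-256 (an illustrative choice of hash function)."""
    return hashlib.sha256(data).digest()

def merkle_root(blocks: list[bytes]) -> bytes:
    """Compute the Merkle root of a list of data blocks.

    Leaves are the hashes of the blocks; each parent is the hash of its
    two children concatenated. If a level has an odd number of nodes,
    the last node is paired with itself (one common convention).
    """
    if not blocks:
        raise ValueError("need at least one block")
    level = [sha256(b) for b in blocks]            # leaf hashes
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])                # duplicate the last node
        level = [sha256(level[i] + level[i + 1])   # hash each pair of siblings
                 for i in range(0, len(level), 2)]
    return level[0]

# Two peers can compare just this one root hash to verify that they
# hold identical copies of all the underlying blocks.
root = merkle_root([b"tx1", b"tx2", b"tx3", b"tx4"])
print(root.hex())
```

Because any change to any block changes the root, peers only need to exchange and compare this single hash to detect altered or false data.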

Over the past several years, blockchain has evolved rapidly from the original Bitcoin protocol to the second-generation Ethereum platform, and we are now building the third generation of blockchains. In this evolution, we can see the technology moving from its original form, as essentially just a database, toward becoming a fully fledged, globally distributed cloud computing platform. In this video, we will trace the past, present, and future of blockchain technology.

The first blockchain was conceptualized in 2008 by an anonymous person or group known as Satoshi Nakamoto. The concepts and technical aspects are described in an accessible white paper titled “Bitcoin: A Peer-to-Peer Electronic Cash System” and will be revisited in later modules. These ideas were first implemented in 2009 as a core component supporting Bitcoin, where the blockchain served as the public ledger for all transactions. The invention of the blockchain for Bitcoin made it the first digital currency to solve the double-spending problem without a trusted authority or central server. It was only later that we came to separate the blockchain concept from its specific implementation as a currency in Bitcoin. We came to see that the underlying technology had a more general application beyond digital currencies in its capacity to function as a distributed ledger for tracking and recording the exchange of any form of value. The Bitcoin design has been the inspiration for other applications and has played an important role as a relatively large-scale proof of concept.

Within just a few years, the second generation of blockchains emerged, designed as a network on which developers could build applications, essentially the beginning of its evolution into a distributed virtual computer. This was made technically possible by the development of the Ethereum platform. Ethereum is an open-source, public, blockchain-based distributed computing platform featuring smart contract functionality. It provides a decentralized, Turing-complete virtual machine that can execute computer programs using a global network of nodes. Ethereum was initially described in a white paper by Vitalik Buterin in late 2013 as a platform for building distributed applications. The system went live almost two years later and has successfully attracted a large and dedicated community of developers, supporters, and enterprises.
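
As a rough illustration of the idea (not Ethereum's actual virtual machine or contract language), the Python sketch below models a smart contract as deterministic code plus shared state that every node replays in the same order, so all nodes independently arrive at the same result. The contract, names, and balances are hypothetical.

```python
# Toy model of replicated contract execution (illustrative only; real
# Ethereum contracts run as EVM bytecode, typically written in Solidity).

class TokenContract:
    """A minimal token-like contract: shared state plus deterministic rules."""

    def __init__(self):
        self.balances = {"alice": 100, "bob": 0}

    def transfer(self, sender: str, recipient: str, amount: int) -> None:
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount

# Every node holds its own copy of the contract and replays the same
# ordered list of transactions, so all copies end up in the same state.
transactions = [("alice", "bob", 30), ("alice", "bob", 20)]
nodes = [TokenContract() for _ in range(3)]
for node in nodes:
    for sender, recipient, amount in transactions:
        node.transfer(sender, recipient, amount)

assert all(node.balances == {"alice": 50, "bob": 50} for node in nodes)
```

This full replication is what gives the network its trustworthiness, but, as we will see below, it is also the source of its performance limits.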

The important contribution of Ethereum as the second generation of blockchains is that it extended the capacity of the technology from primarily being a database supporting Bitcoin to becoming more of a general platform for running decentralized applications and smart contracts, both of which we’ll discuss in upcoming videos and modules. As of 2018, Ethereum is the largest and most popular platform for building distributed applications. Many different applications have been built on it, from social networks to identity systems, prediction markets, and many financial applications.

Ethereum has been a major step forward, and with its advent, it has become ever more apparent where we’re heading with the technology: the development of a globally distributed computer, a massive globally distributed cloud computing platform, on which we can run any application at the scale and speed of today’s major websites, with the assurance that it has the security, resilience, and trustworthiness of today’s blockchains. However, the existing blockchain infrastructure is like an extremely inefficient computer that cannot do much beyond proofs of concept. Getting to the next level remains a huge challenge that involves original and difficult problems in computer science, game theory, and mathematics. Scalability remains at the heart of the current stage in the journey that we’re on, and this is what the third generation of blockchain technologies is trying to solve.

The mining required to support the Bitcoin network currently consumes more energy than many small nations, roughly equal to that of Denmark, and costs over 1.5 billion dollars a year. This is fueled by cheap but dirty coal energy in China, where almost 60% of the mining is currently being done. This level of energy consumption will not scale to mass adoption. Ethereum and Bitcoin use a combination of technical tricks and incentives to ensure that they accurately record who owns what without a centralized authority. The problem is that it’s difficult to preserve this balance while also growing the number of users. Currently, blockchains require global consensus on the order and outcome of all transfers. In Ethereum, all smart contracts are stored publicly on every node of the blockchain, which has its trade-offs. The downside is that performance suffers: every node executes all the smart contracts in real time, which keeps speeds low. This is clearly a cumbersome task, especially since the total number of transactions grows with each new block, added approximately every 10 to 12 seconds.

The volume of transactions is likewise an existing constraint. With cryptocurrency, speed is measured in TPS (transactions per second); the Bitcoin network’s theoretical maximum capacity is about seven transactions per second, while the Ethereum blockchain, as of 2018, can handle about 15 transactions per second. By comparison, a credit card network is capable of handling more than 20 thousand transactions per second. Similarly, Facebook may have about 900 thousand users on the site at any given time, meaning that it’s handling about 170 thousand requests per second.

Another issue is that of cost. Every transaction costs a small amount, which goes to pay the miners for maintaining the ledger. What we have is sufficient for a limited number of large transactions, such as sending money, but a small transaction such as purchasing a coffee could not be handled economically by most blockchains. They can’t, in their existing form, deal with a very large volume of microtransactions. These types of transactions will be required to enable high-volume machine-to-machine exchanges. It would prove too expensive to operate these kinds of economies that involve many small exchanges, yet this is exactly what many people will want to use the blockchain for in the future.

In response to these constraints, the third generation of blockchain networks is currently under development. Many different organizations are working on building this next-generation blockchain infrastructure, including projects such as Dfinity, NEO, EOS, IOTA, and Ethereum itself. They are each using different approaches to try and overcome existing constraints. Going into the details of how these different networks work is a bit advanced for this course, so we will give a brief overview of just two of them.

The Lightning Network is one such project that seeks to extend the capacities of existing blockchains. The main idea is that small, insignificant transactions do not have to be stored on the main blockchain. This is called an “off-chain” approach because small transactions happen off the main blockchain. It works by creating small communities wherein transactions can occur without being registered on the main blockchain. A payment channel is opened between a group of people, with the funds being frozen on the main blockchain. Those members can then transact with each other, using their private keys to validate the transactions. This is a bit like having a tab or an IOU with a merchant, where you mark down what you’ve exchanged so that you don’t have to update the main record in the bank each time you make a purchase. The record stays local between the members involved until they settle up, at which point the funds are sent and the main bank record is updated. This approach requires only two transactions on the main blockchain: one to open the payment channel and one to close it. All other transactions happen within the sub-network without being registered on the main blockchain. This both reduces the workload on the main blockchain and makes it possible to run many very small transactions within the sub-network. As of the start of 2018, there is a proof of concept running live on the Bitcoin testnet, but the system will not be fully operational until later in the year, as is the case with most of these projects.
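
The sketch below is a highly simplified Python model of that idea, not the actual Lightning protocol (which relies on multi-signature transactions and cryptographic commitments). The class, participant names, and amounts are hypothetical; the point is that only the opening and closing of the channel touch the main chain.

```python
# Toy payment channel (illustrative only, not the Lightning Network protocol).

class PaymentChannel:
    def __init__(self, deposits: dict[str, int]):
        self.balances = dict(deposits)   # funds "frozen" when the channel opens
        self.on_chain_txs = 1            # opening transaction on the main chain
        self.off_chain_txs = 0

    def pay(self, sender: str, recipient: str, amount: int) -> None:
        """An off-chain payment: just an update to the local balance sheet."""
        if self.balances[sender] < amount:
            raise ValueError("insufficient channel balance")
        self.balances[sender] -= amount
        self.balances[recipient] += amount
        self.off_chain_txs += 1

    def close(self) -> dict[str, int]:
        """Settle final balances back to the main chain (the second on-chain tx)."""
        self.on_chain_txs += 1
        return self.balances

channel = PaymentChannel({"customer": 50, "coffee_shop": 0})
for _ in range(10):                      # ten coffees, no main-chain updates
    channel.pay("customer", "coffee_shop", 3)
final = channel.close()
print(final, channel.on_chain_txs, channel.off_chain_txs)  # only 2 on-chain txs
```

However many payments pass through the channel, the main blockchain only ever records two transactions, which is what makes this approach attractive for microtransactions.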

IOTA is another example. Existing blockchains are sequential chains, where blocks are added in a regular, linear, chronological order. The data structure of the IOTA system, by contrast, can achieve high transactional throughput through parallel operations: it is more like a network than a linear chain, where processing and validation can occur alongside each other. The other big difference is that there are no specialized miners in this network. Every node that uses the network functions as a miner. In the IOTA network, every node making a transaction also actively participates in forming the consensus, so in effect, everyone does the mining. This means there is no centralization of mining within the network of the kind that creates bottlenecks and demands lots of energy. Likewise, with this network, there are no transaction fees for validation. Additionally, because validation in IOTA is performed by the users themselves, the more people use the network, the faster it becomes, which is the opposite of existing systems. This makes IOTA very scalable.
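
As a rough sketch of the structural difference (not IOTA's actual Tangle implementation), the Python snippet below contrasts a linear chain, where each entry references a single predecessor, with a DAG-style ledger in which each new transaction approves two earlier ones, so many branches can be extended in parallel. The tip-selection rule here is random and purely illustrative.

```python
import random

# Illustrative only: a linear chain vs. a DAG-style ledger in which each
# new transaction approves two earlier transactions.

def add_to_chain(chain: list[dict], data: str) -> None:
    # Each entry points to exactly one predecessor: a strict sequence.
    chain.append({"data": data, "parents": [len(chain) - 1] if chain else []})

def add_to_dag(dag: list[dict], data: str) -> None:
    # Each new transaction picks two earlier transactions to approve,
    # so validation work is spread across all participants.
    parents = random.sample(range(len(dag)), k=min(2, len(dag)))
    dag.append({"data": data, "parents": parents})

chain: list[dict] = []
dag: list[dict] = [{"data": "genesis", "parents": []}]
for i in range(5):
    add_to_chain(chain, f"tx{i}")
    add_to_dag(dag, f"tx{i}")

print(chain)  # one predecessor per entry: additions must happen one at a time
print(dag)    # entries can approve different branches, allowing parallelism
```

Because every new transaction does a little validation work itself, there is no separate mining step, and throughput can grow as participation grows.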

There are many other possible approaches to overcoming existing constraints, but suffice it to say that the blockchain should be understood as an emerging technology whose existing implementations are like a large-scale proof of concept running on a very inefficient system. However, through lots of experimentation and iteration, it will hopefully, in the coming years, evolve into this globally distributed computer. As Melanie Swan writes in her book, “First there were the mainframe and PC (personal computer) paradigms, and then the internet revolutionized everything. Mobile and social networking were the most recent paradigm. The current emerging paradigm for this decade could be the connected worlds of computing relying on blockchain cryptography.”

To understand this better, in the next module we will talk about the blockchain in the context of the broader technological changes currently underway as we build the next generation of the Internet, what we call the decentralized web or Web 3.0. How we understand the blockchain and where we are with it today is extremely transitory. In this respect, what we are talking about in this course when we talk about the blockchain is really this emerging IT infrastructure of globally distributed cloud computing. What we call the blockchain today is just a very limited and often very inefficient version of this, and the next generation of blockchains will take us a step further on that journey. We still have many very difficult problems to solve before we get there. The end stage may look something like the blockchain of today, or it may look very different.
