Build Back Better: Blockchains, Federated Data and Artificial Intelligence
by John deVadoss
The defining challenge of our time will be this question: will data (that is, access to insights, analytics, machine learning algorithms and, most importantly, AI models) continue to serve the interests of the few, or will it truly benefit the many?
Building back better demands that we prioritize this categorical imperative, in the spirit of Immanuel Kant.
The Resurgence of AI: Deep Learning at Scale
I use the term resurgence because we have had previous eras when AI was seen as being imminent. Alan Turing devised the eponymous Turing Test in 1950 and brought mainstream attention to the possibility of machines that could ‘think’. The Dartmouth Workshop of 1956 was a milestone event that marked the birth of AI, when John McCarthy proposed the term ‘artificial intelligence’ to encompass the then-burgeoning research into cybernetics, neural networks, and symbolic reasoning.
In the 80s, there was significant progress in the areas of expert systems, case-based reasoning, and a revival of connectionist neural networks with the invention of back-propagation. Machine Learning gained momentum in the 90s with a shift towards exploiting probability and statistics. Much of what we term AI today results from the application of Machine Learning to extraordinarily large amounts of data. To be precise, it is the application of so-called Deep Learning, whose roots go back to the 1940s and the McCulloch-Pitts computational model of the neuron, based on the then-current understanding of the brain’s neural networks; the impact of Deep Learning, however, really took off in the 2000s, culminating in the so-called Deep Learning revolution of ~2012.
In practical terms, Deep Learning is a collection of techniques that teaches computers to do what comes naturally to humans, i.e., to learn from examples. Deep Learning models are trained, in both supervised and unsupervised settings, on very large data sets (labeled data in the supervised case), using architectures that contain multiple layers of software attempting to model the behavior of neurons.
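To make the idea concrete, here is a minimal sketch, assuming a toy two-layer network written in plain NumPy and a four-example labeled data set (XOR); the layer sizes, learning rate and iteration count are illustrative assumptions, not a prescription.

```python
# A toy two-layer network trained on a tiny labeled data set (XOR),
# illustrating "multiple layers learning from examples". A sketch only,
# not a production deep learning pipeline.
import numpy as np

rng = np.random.default_rng(0)

# Labeled examples: inputs X and targets y (the XOR function).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights: input -> hidden, hidden -> output.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Back-propagation: gradients of the squared error, layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # approaches [0, 1, 1, 0]
```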
The Challenges of Centralizing Data: Asymmetry and Information-based Inequality
In order to train a Deep (Machine) Learning model today, there are two primary practices that, while extraordinarily effective, contribute increasingly towards the control, power and economic dominance of centralized platforms.
First, conventional learning approaches require the training data to be centrally aggregated. These centralized systems collect extraordinarily large amounts of user data (‘Big Data’) in their repositories; subsequently, either in a one-time or continuous manner, they deploy algorithms on these data repositories to mine them and to build the resulting AI models. It will be apparent to the reader that this approach is privacy-intrusive, and more: it will also be obvious that this ability to aggregate data at scale presupposes both economic and scientific resources at scale.
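The pattern looks roughly like the following sketch; the repository class, user identifiers and record fields are purely illustrative assumptions, but the essential point is that every user’s raw data lands in one place the platform controls.

```python
# A sketch of the centralized pattern described above: every user's raw
# records are pulled into one repository before any model is trained.
# Names and record fields are hypothetical.
from dataclasses import dataclass, field

@dataclass
class CentralRepository:
    records: list = field(default_factory=list)

    def ingest(self, user_id: str, raw_records: list) -> None:
        # The platform sees and stores every user's raw data.
        self.records.extend({"user": user_id, **r} for r in raw_records)

    def train_model(self) -> str:
        # Placeholder for the mining / training step run over the
        # aggregated repository (one-time or continuous).
        return f"model trained on {len(self.records)} raw records"

repo = CentralRepository()
repo.ingest("alice", [{"query": "flights to NYC"}, {"query": "hotels"}])
repo.ingest("bob", [{"purchase": "running shoes"}])
print(repo.train_model())  # the platform, not the user, controls the data
```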
Second, and equally problematic, the centralized approach is dependent on, and often locked into, a single ‘platform’: the system’s choice of algorithm(s), the implementation mechanism (language, libraries, frameworks, tools), the preferred hardware (in-house or external, dependence on a chip manufacturer, etc.), the data center architecture, the personnel (risk of being compromised, etc.), and the choice of tools to surface the results of the training algorithms. In computer science parlance, such a single-platform implementation offers no Byzantine fault tolerance: a fault or compromise in any one of these elements can corrupt the whole.
Either of these practices on its own is a challenge to the longevity of today’s AI applications; together, however, they fundamentally reshape the role and power of markets and contribute to information-based inequality.
Where to: Blockchains, Federated Data and Machine Learning
Blockchain platforms have led to incredible advances in the design and development of decentralized applications and systems and have been applied to domains ranging from cryptocurrencies to enterprise supply chains. More importantly, there are two capabilities that blockchains enable due to their inherently federated implementation.
First, blockchains provide the ability for users (and entities) to be in control of their data and to decide when, where, to whom, and for how long to provide access to it; in other words, blockchains are the antithesis of systems that intrinsically and automatically exploit private data. Further, with the advent of zero-knowledge proofs, homomorphic encryption and related techniques, blockchain platforms can reveal nothing about a transaction beyond the fact that it is valid.
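As a rough illustration of user-controlled disclosure, consider the toy hash-commitment sketch below; it is not an actual zero-knowledge proof, and the record contents and function names are assumptions made for the example. Only the commitment is published, and the user decides later when and to whom the underlying data is revealed.

```python
# A toy hash commitment (not a zero-knowledge proof): the chain stores
# only a commitment, and the user controls any later disclosure.
import hashlib
import secrets

def commit(data: bytes) -> tuple[str, bytes]:
    """Return (commitment, nonce); only the commitment goes on-chain."""
    nonce = secrets.token_bytes(16)
    return hashlib.sha256(nonce + data).hexdigest(), nonce

def verify(commitment: str, nonce: bytes, data: bytes) -> bool:
    """An authorized party checks a later reveal against the commitment."""
    return hashlib.sha256(nonce + data).hexdigest() == commitment

record = b"blood type: O+"
on_chain, nonce = commit(record)          # published; reveals nothing useful
# ... later, the user chooses to reveal the record to one party only ...
print(verify(on_chain, nonce, record))    # True
print(verify(on_chain, nonce, b"other"))  # False
```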
Second, blockchains are designed to be self-sustaining and to evolve in a federated or asymptotically decentralized manner. To achieve agreement on both data and transactions, blockchains use a variety of fault-tolerant consensus algorithms. While there is an assortment of consensus algorithms, all of them share similar characteristics with respect to achieving agreement across a federated set of nodes. Blockchains thus enable the development of a new generation of federated AI systems and applications that are not reliant on single-platform (locked-in) implementations, with all their concomitant asymmetries.
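A minimal sketch of what such a federated AI system might look like, assuming a simple linear model trained with federated averaging (the node count, data shards and hyperparameters below are illustrative): each node trains locally on data it never surrenders, and only model parameters are shared and combined.

```python
# A minimal federated-averaging sketch: raw data stays on each node;
# only model weights are exchanged and averaged.
import numpy as np

rng = np.random.default_rng(1)

def local_update(weights, X, y, lr=0.1, epochs=20):
    """Gradient steps on one node's private data; data never leaves the node."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(weight_list):
    """Aggregate only the model parameters contributed by each node."""
    return np.mean(weight_list, axis=0)

# Three nodes, each holding its own private (X, y) shard.
true_w = np.array([2.0, -1.0])
shards = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    shards.append((X, y))

global_w = np.zeros(2)
for round_ in range(10):
    local_ws = [local_update(global_w, X, y) for X, y in shards]
    global_w = federated_average(local_ws)

print(np.round(global_w, 2))  # approaches [ 2., -1.]
```

In a blockchain-based setting, the aggregation step and the provenance of each node’s contribution could be coordinated and audited on-chain rather than by a single central server, which is the design choice that removes the single-platform dependency discussed above.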
Extraordinarily large amounts of data are generated by consumers and their devices over their lifetimes; this data, in turn, has become the cornerstone of the deep learning models that deliver highly personalized services. It is imperative that tomorrow’s data platforms not contribute to ever-greater asymmetry.