- How does prediction work, and how does it shape our lives? This article summarizes and reviews The Age of Prediction: Algorithms, AI, and the Shifting Shadows of Risk by Igor Tulchinsky and Christopher Mason, which offers a clear, insightful framework for understanding and applying prediction.
- If you want to know how the age of prediction could affect your decisions, strategies and innovations, the book is well worth reading.
Table of Contents
- Recommendation
- Take-Aways
- Summary
- In “The Age of Prediction,” scientists are reducing uncertainty across industries.
- Leverage diverse perspectives when model building to glean better insights from big data.
- Exponential data growth won’t eliminate risk, as complex systems are in a constant state of flux.
- New predictive tools create both dystopian possibilities and life-saving solutions.
- Future employers could draw on new forms of data, such as genetic markers, when predicting performance.
- Humanity could destroy itself with predictive AI-enabled weapons.
- More accurate predictive polling can lead to voter manipulation, threatening democracy.
- As humankind’s power to make accurate predictions increases, complex new risks emerge.
- About the Authors
- Genres
- Review
Recommendation
Humanity is on the cusp of a new age, write Igor Tulchinsky and Christopher Mason: “the Age of Prediction.” AI algorithms and the explosion of big data are giving scientists across myriad industries the means to harness the power of increasingly accurate predictions, reducing uncertainty and many of the associated risks. However, new risks are emerging, such as the existential threat of autonomous AI-enabled weapons systems and threats to democracy. Drawing on their backgrounds in quantitative prediction in finance and genomics, Tulchinsky and Mason share both disquieting and hopeful insights on how machine learning and predictive algorithms could radically transform humanity’s future.
Take-Aways
- In “The Age of Prediction,” scientists are reducing uncertainty across industries.
- Leverage diverse perspectives when model building to glean better insights from big data.
- Exponential data growth won’t eliminate risk, as complex systems are in a constant state of flux.
- New predictive tools create both dystopian possibilities and life-saving solutions.
- Future employers could draw on new forms of data, such as genetic markers, when predicting performance.
- Humanity could destroy itself with predictive AI-enabled weapons.
- More accurate predictive polling can lead to voter manipulation, threatening democracy.
- As humankind’s power to make accurate predictions increases, complex new risks emerge.
Summary
In “The Age of Prediction,” scientists are reducing uncertainty across industries.
Society is moving into a new era, “the Age of Prediction,” in which billions of algorithms enable people to identify future events before they happen, reducing uncertainty and risk. Whether mapping the human genome or analyzing financial market data, an exponential increase in data and the emergence of technologies such as AI are giving scientists new tools to identify patterns and make increasingly accurate predictions. While the COVID-19 pandemic was by no means a “triumph of prediction,” humanity responded by broadening predictive possibilities and perspectives in many ways, harnessing the potential of big data. For example, partnering with Pfizer, BioNTech used machine-learning-trained algorithms to generate predictions that enabled it to develop a COVID-19 vaccine rapidly, rolling out a US Food and Drug Administration-authorized shot in just nine months.
“As the dawn of the Age of Prediction rises, there are clearly some dangers, but we believe that over time the greater scientific ability to predict will prove broadly beneficial.”
You can visualize the predictive modeling approaches taken to anticipate COVID-19’s effects as a “layer cake”: The top layer consists of epidemic models (primarily SIR models, which divide a population into susceptible, infectious, and recovered or removed groups); these rest atop macroeconomic models, which in turn overlay microeconomic models dealing with the impacts on industries, individual consumers, families and companies. Each layer is in a state of constant flux, making prediction a continuous, evolving process.
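To make the top layer of that cake concrete, here is a minimal SIR sketch, assuming illustrative transmission and recovery rates (the beta and gamma values are placeholders, not figures from the book):

```python
# Minimal SIR epidemic model: a population split into susceptible (s),
# infectious (i) and recovered/removed (r) fractions, evolved over time.
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    s, i, r = y
    ds = -beta * s * i               # susceptible people becoming infected
    di = beta * s * i - gamma * i    # new infections minus recoveries
    dr = gamma * i                   # recoveries and removals
    return ds, di, dr

beta, gamma = 0.5, 0.2               # assumed rates, giving R0 = beta/gamma = 2.5
y0 = (0.999, 0.001, 0.0)             # initial fractions of the population
t = np.linspace(0, 160, 161)         # days

s, i, r = odeint(sir, y0, t, args=(beta, gamma)).T
print(f"Peak infected fraction: {i.max():.1%} around day {int(t[i.argmax()])}")
```

Real pandemic models layer far more structure on top of this (age groups, mobility, interventions), which is exactly why each layer of the cake must be re-estimated as conditions change.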
Leverage diverse perspectives when model building to glean better insights from big data.
Data is growing exponentially. In 2012, the world produced 2.5 quintillion bytes of data each day, according to IBM, and daily data production has since grown by roughly 40% to 80% per year. By 2025, researcher Arne Holst estimates, humanity will have “created, captured, copied and consumed” 181 zettabytes of data worldwide, a dramatic increase from 97 zettabytes in 2022. This exponential growth gives rise to increased competition and inevitable turbulence, requiring companies to embrace flexibility and empiricism. No predictive model is free of imperfections, and those trying to glean insights from this expanding flood of data would do well to remember statistician George Box’s famous words: “All models are wrong, but some are useful.”
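As a quick back-of-envelope check (my arithmetic, not the book’s), the zettabyte figures above imply a compound annual growth rate of roughly 23%:

```python
# Implied compound annual growth from 97 ZB (2022) to 181 ZB (2025),
# plus a naive extrapolation at the same rate (illustration, not a forecast).
start_zb, end_zb, years = 97, 181, 3
cagr = (end_zb / start_zb) ** (1 / years) - 1
print(f"Implied compound annual growth: {cagr:.1%}")   # about 23% per year

for year in range(2026, 2031):
    print(year, round(end_zb * (1 + cagr) ** (year - 2025)), "ZB")
```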
“Prediction speaks of what is to come; risk, less visibly, calculates the probability that a model is wrong and, like a grim accountant, totals up the costs of that error.”
Your predictive models will be more useful if you leverage a diverse range of quality data sets, including proxy data (secondary data from interconnected individual sources). For example, data from the company Kinsa’s “smart thermometers” helped researchers predict COVID-19 outbreaks more accurately, because spikes in recorded fevers signaled where the virus was spreading. Igor Tulchinsky launched the company WorldQuant Predictive in 2018 to apply financial prediction tools across industries using an approach called “idea arbitrage”: predictions become more effective when model builders think creatively and approach the data and the problem from multiple perspectives. A three-step formula achieves this: ideate, arbitrate (choose the best idea) and predict.
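Here is a minimal sketch of that ideate-arbitrate-predict loop, interpreted as generating several candidate models and keeping the one with the lowest validation error (an illustration on synthetic data, not WorldQuant’s actual process):

```python
# Idea arbitrage, loosely interpreted: pit diverse modeling "ideas" against
# each other on held-out data and predict with the winner.
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Step 1 (ideate): distinct perspectives on the same data.
ideas = {
    "linear": Ridge(alpha=1.0),
    "forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "boosting": GradientBoostingRegressor(random_state=0),
}

# Step 2 (arbitrate): keep the idea with the lowest validation error.
scores = {name: mean_absolute_error(y_val, m.fit(X_train, y_train).predict(X_val))
          for name, m in ideas.items()}
best = min(scores, key=scores.get)

# Step 3 (predict): deploy the winning idea.
print(f"Best idea: {best} (validation MAE {scores[best]:.2f})")
```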
Exponential data growth won’t eliminate risk, as complex systems are in a constant state of flux.
The nature of risk changes as predictive accuracy increases, because “moral hazard” (an individual’s propensity to engage in reckless behavior and become less risk-averse) can increase when actors perceive diminished risks. For example, someone whose genomic analysis indicates no apparent predisposition to liver cancer or coronary disease may justify eating and drinking excessively, believing the associated risks are lower. Likewise, financial players might take excessive risks if they believe they’ll receive a bailout should they fail.
“The garden of biology and data is always growing and changing, and it must always be tended.”
It’s important to remember that while humanity has access to ever more data, the risks it faces aren’t static; they’re in constant flux. Just as the scientists sequencing the human genome discovered that doing so wasn’t as simple as finding the combination to a safe (organic life is constantly evolving and mutating), so too must data scientists re-analyze older data when newer data emerges, rethinking risks and probabilities.
New predictive tools create both dystopian possibilities and life-saving solutions.
Scientists have an increasing number of tools to sift through vast amounts of data and glean insights. For example, using cell-free DNA (cfDNA) analysis, scientists can predict potential health problems before a baby is born, predict the success of a heart transplant or discern from whole-body scans whether certain tissues are dying. Your body constantly sends signals that scientists are increasingly learning to interpret: Your mitochondrial DNA can serve as a warning system for stress (or even suicide risk), as levels spike when people experience distress. Law enforcement, too, now has vast pools of data to draw on when catching criminals, including DNA records (an increasing number of people are taking genetic tests, which creates digital records) and security footage from millions of Internet of Things devices.
“One of the most significant outgrowths of predictive technologies is the demand for the kind of data that many people view as intimate or private.”
As data-enabled predictions become more ubiquitous and erode privacy, new risks emerge, including genetic discrimination and the weaponizing of identity tracing. In the future, for example, insurers could easily change life insurance premiums if an individual had genetic markers indicating a vulnerability to certain diseases. Human life is increasingly quantified, and scientists are collecting new forms of data, from maps of the gut metagenome (revealing new markers related to your health and identity) to purchase histories so detailed that brands can predict pregnancies and target women with pregnancy-related advertisements. As a result, people are becoming more resistant to sharing data. Even so, most people will likely continue to give up their privacy if they believe doing so keeps them safe.
Future employers could draw on new forms of data, such as genetic markers, when predicting performance.
Recruiters everywhere are trying to improve their ability to predict performance, but doing so is no easy feat. According to University of Pennsylvania management professor Peter Cappelli (in Harvard Business Review): “Businesses have never done as much hiring as they do today. They’ve never spent as much money doing it. And they’ve never done a worse job of it.” Companies rely on external recruiters who increasingly leverage “smart” algorithms and tools to find candidates, yet often fail to accurately predict which candidates will perform well. In academic admissions, “objective” quantitative metrics (for instance, GPA) aren’t always the best way to predict success, and “subjective” factors, such as letters of reference, can actually be better indicators of a person’s potential. College admissions offices may thus want to invest in predictive NLP algorithms that can analyze free text.
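A minimal sketch of what such free-text analysis could look like, using a generic TF-IDF and logistic regression pipeline on made-up reference letters (the data, labels and approach are illustrative assumptions, not a system described in the book):

```python
# Score free-text reference letters against a hypothetical "thrived" label.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

letters = [
    "Exceptional curiosity and persistence; led an independent research project.",
    "Attendance was inconsistent and assignments were frequently late.",
    "Collaborates generously and rebounds quickly from setbacks.",
    "Did the minimum required and rarely engaged in discussion.",
]
thrived = [1, 0, 1, 0]   # hypothetical outcomes for past admits

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(letters, thrived)

new_letter = ["Shows real initiative and keeps improving after every setback."]
print(f"Predicted probability of thriving: {model.predict_proba(new_letter)[0, 1]:.2f}")
```

Any such screening inherits the biases of the letters and labels it is trained on, which is precisely the caution the authors raise about predicting performance.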
“We’re not very good at assessing future performance. We have never been all that good at it, and some claim that we are getting worse at it.”
In the future, predicting careers based on genetic data may become the norm. For example, when hiring for a high-stress role, an employer with access to genetic data could search for a candidate with high expression of the gene PDE4B, which is elevated in people with stronger problem-solving capacities and lower anxiety levels. Getting a new job may even require “reprogramming,” as future employers could expect workers to modify their genomes, switching genes “off and on” much like light switches. This isn’t entirely in the realm of science fiction: “PReemptive Expression of Protective Alleles and Response Elements” (PREPARE), a project from the US Department of Defense’s research and development body, the Defense Advanced Research Projects Agency (DARPA), is researching epigenetic editing to improve the performance of astronauts and soldiers. It’s vital that those predicting performance don’t ignore the possibility of biases and errors when attempting to do so.
Humanity could destroy itself with predictive AI-enabled weapons.
AI-enabled predictions are radically reshaping modern warfare. Today, humanity has the capacity to build technologies such as autonomous missiles and autonomous defense robots, and militaries worldwide are exploring how best to analyze nonstop streams of data and make the rapid predictions needed for decision-making during combat. While a rational approach to warfare would entail reducing casualties as much as possible, war is driven by irrational human emotions, such as terror and anger. Guided by increasingly accurate predictions, precision-guided bombs and missiles let militaries wreak devastation using controls much like those in a video game.
“The ability of weapons to calculate, choose and learn with blinding speed raises the potential for disaster to much higher levels.”
Autonomous drones can react much more quickly than human operators, as a computer system can launch coordinated strategic attacks in real time. The development of fully autonomous weapons systems raises a barrage of ethical concerns, as machine decision-making would replace human decision-making in selecting human targets to kill. Currently, the US Department of Defense appears reluctant to give robots full autonomy during warfare. Stuart Russell, a computer science professor at the University of California, Berkeley, writes that “if any major military power pushes ahead with AI weapons development, a global arms race is virtually inevitable.” He warns that autonomous weapons could end up on the black market, much like Russian assault rifles, and pose a threat to humanity in the hands of dictators and terrorists.
More accurate predictive polling can lead to voter manipulation, threatening democracy.
Political polling is one of the most controversial areas in which data-driven predictions are leveraged. Political polls often rest on questionable, low-quality data and therefore yield inaccurate results: Pollsters ask people questions over the phone, for example, and respondents may lie or lack firm opinions. Polling data is unreliable because it comes from people, who may shift their opinions or vote emotionally, making uncertainty difficult to reduce.
“People produce polling data, not natural forces or machines; they are individuals negotiating complex, shifting environments and social mores, not traders seeking profits.”
Polls become more accurate when researchers glean insights from proxy data and use machine-learning programs to improve predictions. The company Cambridge Analytica made algorithmic predictions, creating profiles of social media users that featured their suspected political affiliations, lifestyle choices and personalities. Steve Bannon used this information to refine messaging for Donald Trump’s campaign, helping him win the 2016 US presidential election. As AI-enabled predictive polling tools evolve, it becomes increasingly possible to manipulate elections, potentially undermining democracy.
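To make the proxy-data idea concrete, here is a minimal sketch assuming entirely synthetic behavioral proxies and labels (this is not Cambridge Analytica’s method, just an illustration of classifying likely affiliation from indirect signals):

```python
# Predict likely affiliation from behavioral proxy features (synthetic data).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 1000
# Hypothetical proxies: age, rural/urban flag, weekly hours of cable news.
X = np.column_stack([
    rng.integers(18, 90, n),
    rng.integers(0, 2, n),
    rng.poisson(4, n),
])
# Synthetic labels loosely tied to the proxies, purely for demonstration.
y = ((0.02 * X[:, 0] + 0.8 * X[:, 1] + 0.1 * X[:, 2]
      + rng.normal(0, 1, n)) > 2.5).astype(int)

clf = GradientBoostingClassifier().fit(X[:800], y[:800])
print(f"Held-out accuracy on synthetic proxies: {clf.score(X[800:], y[800:]):.2f}")
```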
As humankind’s power to make accurate predictions increases, complex new risks emerge.
Humanity’s move into the Age of Prediction will coincide with economic disruption and job loss on a mass scale: Autonomous vehicles, for example, can use AI to anticipate driving routes with more accuracy and predictability than human drivers, but they could also displace the 4.5 million workers who drive trucks, taxis, Ubers and Lyfts. One must also reflect on emerging privacy threats and the dangers of mass surveillance as machines track and monitor ever more intimate aspects of human life. Other risks include growing anti-science sentiment, epistemic confusion and authoritarianism. Once super-intelligent machines can make accurate predictions in real time, it could become harder for humans to untangle the logic behind those predictions, as the machines are effectively in a constant state of “recalculating and rethinking,” making them “as opaque as people.”
“How will we know that our predictive capacity is truly predictive or simply the elimination of all other paths to the future?”
Humans must reflect carefully on what life might be like if machines obtain a monopoly on everything that can be predicted. Would humans be left focusing only on less predictable matters, such as politics and war? Would humans themselves become, paradoxically, less predictable? Will humanity program unpredictability into machines, giving them more “soft deterministic states” and avoiding the rigidity of “hard determinism”? One can envision a future in which the world seems more predictable, but only because humanity’s predictive machines have reduced the number of possible human paths. Ultimately, predictive AI algorithms have immense potential to reshape the world, with benefits that may well outweigh the risks: AI could help humans achieve everything from colonizing Mars to extending human life expectancy.
About the Authors
Igor Tulchinsky is the founder, CEO and chairman of a global quantitative asset management firm, WorldQuant, as well as an investor, venture capitalist, philanthropist, entrepreneur and author. Christopher E. Mason is a genomics, physiology and biophysics professor at Weill Cornell Medicine, and founding director of the WorldQuant Initiative for Quantitative Prediction.
Genres
Technology, Science, Business, Economics, Finance, Innovation, Strategy, Management, Artificial Intelligence, Data Science
Review
The book explores how artificial intelligence and big data are transforming the fields of prediction and risk, and how these changes affect various aspects of our lives, from finance and medicine to crime and warfare. The authors, who are experts in quantitative investing and genomics, respectively, provide an overview of the history and current state of prediction and risk, as well as the challenges and opportunities that lie ahead. They also discuss the ethical and social implications of using predictive technology, such as the impact on privacy, free will, and human dignity.
The book is divided into 11 chapters, each focusing on a different domain where prediction and risk play a crucial role. The chapters are:
- Prediction and Risk: An introduction to the main themes and concepts of the book, such as the definition and measurement of risk, the types and sources of uncertainty, and the evolution and limitations of prediction.
- The Complexity of Prediction: A survey of the mathematical and computational tools and techniques that enable prediction, such as probability theory, statistics, machine learning, and artificial neural networks.
- The Quantasaurus: A case study of how quantitative investing, or the use of algorithms and data to make financial decisions, has revolutionized the world of finance and created new forms and levels of risk.
- The Trouble with Risk: A critique of the conventional approaches and models of risk management, such as the normal distribution, the efficient market hypothesis, and the value at risk, and how they fail to capture the complexity and unpredictability of real-world phenomena.
- New Tools of Prediction: A showcase of some of the emerging and innovative methods and applications of prediction, such as natural language processing, computer vision, blockchain, and quantum computing.
- Mortality and Its Possibilities: An examination of how prediction and risk affect the field of medicine and health, especially in the areas of genomics, precision medicine, and longevity.
- Crime and Privacy: An analysis of how prediction and risk influence the domains of crime prevention and law enforcement, as well as the trade-offs and tensions between security and privacy.
- The Smart Killing Machine: A discussion of how prediction and risk shape the realm of warfare and defense, particularly in the context of autonomous weapons, drones, and cyberattacks.
- Predicting Performance: A review of how prediction and risk impact hiring, admissions and career success, such as the use of algorithms, quantitative metrics and even genetic data to forecast who will perform well.
- The Plague of Polling: A reflection on how prediction and risk affect the field of politics and democracy, especially in the light of the recent failures and controversies of polling and forecasting.
- Free Will, AI Jobs, and the Ultimate Paradox: A speculation on the future of prediction and risk, and the implications for human agency, identity, and dignity, as well as the potential and perils of artificial superintelligence and the singularity.
The book is a fascinating and insightful exploration of the role and impact of prediction and risk in modern society. The authors draw on their own expertise and experience, as well as a wide range of sources and examples, to illustrate the power and paradox of predictive technology. Well-written and engaging, it has a clear, logical structure and a balanced, nuanced perspective; it is accessible and informative without being overly technical or jargon-heavy, and it provides a solid introduction to the topics it covers.
The book is not without its flaws, however. One criticism is that it is too broad and ambitious in scope and does not go deep enough into some of the issues it raises; for instance, it could have explored more thoroughly the ethical and social dilemmas posed by predictive technology, such as bias, fairness, accountability and transparency. Another is that its tone is at times too optimistic and uncritical, underplaying some of the limitations and risks of predictive technology, such as errors, failures and misuse. The book could also have offered more concrete, practical recommendations on how to improve and regulate the development of predictive technology and how to foster a more responsible and ethical culture of prediction and risk.
Overall, the book is a valuable and stimulating contribution to the discourse on prediction and risk, and a must-read for anyone who wants to understand the current and future state of artificial intelligence and big data and how they affect our lives. It is also a timely reminder of our responsibility to stay informed about the potential and pitfalls of prediction, and of the opportunity to shape where these powerful and pervasive forces take us.