Explore the opportunities and perils of foundation models (general-purpose AI systems trained on broad data and adaptable to a wide range of tasks) through Rishi Bommasani, Percy Liang and colleagues’ balanced and revealing look at these nascent technologies. Form your perspective on responsible stewardship of foundation models by considering their insights on ethics, oversight and impact.
The researchers explore groundbreaking yet potentially hazardous AI that displays wide-ranging competencies no one explicitly trained it to have. Analyzing GPT-3 and related models, they outline potential benefits in domains such as education, but also dangers of bias, manipulation, safety lapses and impacts that are difficult to moderate.
Readers gain an understanding of potential harms, transparency concerns and the uncertainty surrounding self-supervised learning models. Mitigations discussed include constrained deployment scenarios, value-alignment techniques and recommendations for accountability.
This balanced look at emergent “foundation models” arms readers with insights to consider socio-technical trade-offs independently and responsibly. It highlights the promising yet precarious nature of progress that requires vigilance to maximize human well-being.
Table of Contents
- Genres
- Recommendation
- Take-Aways
- Summary
- “Emergence” and “homogenization” characterize foundation models of artificial intelligence (AI).
- Foundation models operate in the real world and will generate social impacts.
- Universities and industry are developing these models, which should draw attention from many disciplines.
- Foundation models offer numerous possible capabilities.
- Foundation models may transform health care, law and education.
- Foundation models pose ethical and political challenges.
- About the Authors
Genres
Artificial intelligence, programming, technology, science, public policy, ethics, risk management, research, future studies, philosophy
Recommendation
Foundation models of artificial intelligence train on diverse data. Using deep neural networks and self-supervised learning, they can handle a variety of subject areas and tasks. According to a large research team at Stanford University, experts can deploy foundation models on a mind-boggling scale with billions of parameters. These models, which can be trained to do many tasks, also seem to develop the capacity to perform tasks beyond their training. These AI models are new, and some users may not understand their weaknesses or their potential. Since foundation models will soon be in wide use, researchers must prioritize studying the social and ethical issues they spawn.
Take-Aways
- “Emergence” and “homogenization” characterize foundation models of artificial intelligence (AI).
- Foundation models operate in the real world and will generate social impacts.
- Universities and industry are developing these models, which should draw attention from many disciplines.
- Foundation models offer numerous possible capabilities.
- Foundation models may transform health care, law and education.
- Foundation models pose ethical and political challenges.
Summary
“Emergence” and “homogenization” characterize foundation models of artificial intelligence (AI).
Foundation models of AI are trained on diverse data and driven by deep neural networks. They rely on self-supervised learning and can perform a variety of concrete tasks. Recent foundation models are vast in scale. For example, the third Generative Pre-trained Transformer (GPT-3) deploys a staggering 175 billion parameters and can perform a wide variety of tasks no one specifically taught it to perform.
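To make “self-supervised learning” concrete, here is a minimal Python sketch of the next-token prediction objective that models such as GPT-3 train on; the toy sentence and variable names are illustrative only, not drawn from the paper.

```python
# Minimal sketch of self-supervised next-token prediction: the training
# labels come from the raw text itself, so no human annotation is needed.
tokens = "the model predicts the next word".split()

# Each training pair is (context so far, next token), carved out of raw text.
training_pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

for context, target in training_pairs:
    print(f"context={context} -> predict {target!r}")
```

At GPT-3’s scale, roughly this same objective runs over hundreds of billions of tokens, which helps explain how broad capabilities can appear without task-specific teaching.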
“Existing foundation models have the potential to accentuate harms, and their characteristics are in general poorly understood.”
Emergence occurs when such a machine learning system develops a capability no one explicitly built into it, drawing on patterns implicit in the data developers feed it. When a variety of applications or tasks all derive from a single model or methodology, the approach is said to be “homogenized.” Heavily homogenized foundation models pass any flaws they have along to every individual application that builds on them.
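A toy sketch can illustrate why homogenization spreads flaws; the model, flaw and application names below are hypothetical placeholders.

```python
# Toy sketch of homogenization: many applications derive from one shared
# foundation model, so a defect in the base reaches every application.
shared_model = {"flaw": "systematically misreads dialect X"}  # hypothetical

def build_application(name, base):
    # Each downstream app inherits whatever the shared base gets wrong.
    return {"app": name, "inherited_flaw": base["flaw"]}

for app_name in ("search ranking", "resume screening", "tutoring"):
    app = build_application(app_name, shared_model)
    print(f"{app_name} inherits: {app['inherited_flaw']}")
```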
Foundation models operate in the real world and will generate social impacts.
Foundation models are not merely theoretical entities – they exist in the real world, and more and more AI systems use them. For example, Google, which has billions of users, deploys foundation models.
“What is the nature of this social impact?”
Foundation models sit within an entire ecosystem that rolls out in multiple stages.
First, a variety of sources generate data, such as the photographs people take of one another. Next, researchers assemble the data into data sets. Foundation models train on those data sets, and developers then adapt the models’ general knowledge to a specific function, such as comparing different texts. At that point, the AI system is ready for deployment in the world to perform that task.
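These stages can be summarized in a schematic Python sketch; every function below is an illustrative placeholder for an entire pipeline phase, not a real API.

```python
# Schematic sketch of the rollout stages described above: raw data ->
# curated data set -> training -> adaptation -> deployment.

def curate(raw_sources):
    """Assemble scattered raw data (photos, web text) into one data set."""
    return [item for source in raw_sources for item in source]

def train(data_set):
    """Self-supervised training over the broad data set yields one model."""
    return {"knowledge": f"patterns from {len(data_set)} records"}

def adapt(model, task):
    """Specialize the general model's knowledge for a concrete task."""
    return {"base": model, "task": task}

def deploy(adapted, request):
    """Serve the adapted model to end users."""
    return f"[{adapted['task']}] handled {request!r}"

data_set = curate([["photo A", "photo B"], ["news text", "forum text"]])
model = train(data_set)
comparer = adapt(model, "text comparison")  # the example task in the summary
print(deploy(comparer, ("draft 1", "draft 2")))
```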
Foundation models have some inherent flaws. For instance, they can inherit and propagate historic social and economic inequalities present in their source material. They also can generate seemingly credible content that bad actors could use to manipulate and deceive entire populations. Foundation models’ complexity and scale may shift knowledge and power into the hands of the few entities that possess the resources to develop and use them.
Universities and industry are developing these models, which should draw attention from many disciplines.
Foundation models remain in the research stage. Experts know relatively little about how they function or their social and ethical implications. No agreed-upon standards exist to define whether a given foundation model is ready for deployment or what any person or organization should do if a model causes harm.
“Given that the future of foundation models is thus filled with uncertainty, a big question is: Who will determine this future?”
The technology behind foundation models derives from extensive research in a variety of fields, including Natural Language Processing (NLP) and computer vision. Universities conduct research on foundation models, but only big tech companies – such as Google, Facebook and Microsoft – create these models for deployment. Big tech companies’ vast financial and data resources give them a significant advantage in developing foundation models.
Private market incentives for developing foundation models with particular abilities can lead to products with considerable social value. However, profit motives can tempt developers to neglect socially important work the market does not reward, such as technologies that improve the circumstances of impoverished or marginalized communities. Barriers to entry will continue to rise, a situation governments could address by investing in public computing infrastructure.
Foundation models offer numerous possible capabilities.
Foundation models can develop a variety of “capabilities” that developers can adapt for specific applications.
Natural Language Processing is crucial to developing foundation models. However, today’s NLP capabilities don’t reflect the full complexity of human language. Researchers are working to adapt such models to the diversity of “linguistic variation” – all of humanity’s languages, dialects, vernaculars and modes of speech.
Researchers also hope to use foundation models for computer vision. Computer vision models trained on unlabeled “raw data” have advanced in their ability to classify images and detect objects. However, the potential use of computer vision foundation models in surveillance technology raises social and political issues.
“Our tentative conclusion is that skepticism about the capacity of future foundation models to understand natural language may be premature, especially where the models are trained on multi-modal data.”
While language and vision have been central to creating foundation models, robotics has faced obstacles. One long-standing aim is to create “generalist” robots adept at multiple tasks. But scientists first must create an entirely new type of foundation model – one different from language and vision models – that can cope with robots’ physical embodiment and its constraints.
Foundation models may transform health care, law and education.
Big technology companies are conducting foundation model research in the context of computer science and AI. Consequently, foundation models and their applications tend to exist today within the tech world, although the use of these models may eventually extend to fields beyond the technology industry.
In particular, expansions of these models could potentially enable AI to bring about changes in areas that are crucial to contemporary society, such as health care, law and education.
“For foundation models to significantly contribute to these application domains, models will require specific capabilities as well as technical innovation to account for the unique considerations in each domain.”
Health care and biomedical research demand specialized, often costly information. Foundation models may prove especially effective in those fields because they can train on vast data spanning numerous modalities, from photographic images to microscopy. In addition, such models could improve efficiency and control costs by reducing how often an organization must involve an expensive expert.
In the legal sphere, foundation models could use information from legal documents, though developers still must improve their models’ capacity to analyze relevant legal data. Foundation models and AI could boost efficiency and save costs.
These models’ capacity to apply data from multiple domains – such as text, mathematics and video – suggests that they also could play a role in education.
Foundation models pose ethical and political challenges.
Using a single foundation model across several domains exacerbates the impact of its flaws. The difficulty researchers face in interpreting foundation models conceals the models’ flaws and hinders efforts to figure out their effects. Homogenization or creation of an “algorithmic monoculture” could lead to arbitrary, harmful decisions and spread inaccurate information across numerous domains. This potential for harm places a special burden on those who are developing foundation models.
“Homogenization has the potential to amplify bias; to standardize bias, [thus] compounding injustices rather than distributing them; and to amplify arbitrary exclusion.”
Standardized training data also are likely to exacerbate flaws in AI models, given the vast amount of information these models demand: models that train on similar data will mirror the base data’s biases and flaws. Researchers may find alternatives to homogenization and could improve these models’ capacity to incorporate different viewpoints. For instance, they might introduce a “dialogue system” in which different “personas” represent different social groups, helping produce a broader, more diverse spectrum of voices.
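Here is a hedged sketch of that persona idea, with a stand-in respond() function in place of a real dialogue model; all names and strings are hypothetical.

```python
# Sketch of a "dialogue system with personas": one shared model, conditioned
# on prompts representing different social groups, to surface more voices.
personas = {
    "rural clinic nurse": "Answer as a nurse at a small rural clinic.",
    "city transit rider": "Answer as a daily rider of crowded city buses.",
}

def respond(persona_instruction, question):
    # Placeholder for inference with a real model conditioned on the persona.
    return f"({persona_instruction}) ...response to: {question}"

question = "What should a new public-health app prioritize?"
for name, instruction in personas.items():
    print(f"{name}: {respond(instruction, question)}")
```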
As with any high-powered technology, human beings create and use foundation models. People also can decide to build and deploy such technologies with a critical eye – or not to build and deploy them at all. Creating and launching technologies such as foundation models proves in the end to be an inherently political act. People should be aware of possible pitfalls and strive to create foundation models that are “interpretable, accessible, sustainable and fair.”
About the Authors
Rishi Bommasani is a PhD student in computer science at Stanford University, where Percy Liang is an associate professor of computer science. More than 100 bylined researchers contributed to their paper, “On the Opportunities and Risks of Foundation Models.”