Dive into the world of trustworthy AI as Beena Ammanath demystifies the intricacies of safety and security in this groundbreaking exploration. With a keen focus on safeguarding the future, Ammanath’s insights offer a beacon of hope amid rapid technological advancement. Prepare for a journey that redefines trust and reliability in the digital age.
Join us as we uncover the essentials of trustworthy AI and discover how you can navigate the landscape of safety and security with confidence.
Table of Contents
- Genres
- Review
- Recommendation
- Take-Aways
- Summary
- Enthusiasm for artificial intelligence (AI) may overwhelm ethical considerations about its use.
- People need to apply their values to drive AI ethics.
- Developers must remove biases from the data they use to train AI algorithms.
- AI that works in the lab must work in the real world.
- To build trust, make AI transparent and explainable.
- To build confidence, prioritize AI’s safety and security.
- Beyond safety and security, focus on data privacy.
- Make AI accountable and responsible.
- About the Author
Genres
Technology, Artificial Intelligence, Ethics, Data Science, Innovation, Cybersecurity, Risk Management, Futurism, Trust Building, Governance
In “Trustworthy AI,” Beena Ammanath provides a comprehensive examination of the critical importance of safety and security within the realm of artificial intelligence. Through meticulous analysis and real-world examples, Ammanath elucidates the complexities of ensuring trust and reliability in AI systems. From ethical considerations to practical implementations, this book serves as a roadmap for navigating the evolving landscape of AI with integrity and accountability.
Review
Beena Ammanath’s “Trustworthy AI” is a timely and indispensable resource for anyone seeking to understand the pivotal role of safety and security in AI development. With a blend of technical expertise and ethical insights, Ammanath offers actionable guidance for building AI systems that prioritize user safety and data security. This book is a must-read for policymakers, technologists, and anyone concerned with shaping a future where AI serves humanity responsibly.
Recommendation
Beena Ammanath, executive director of the Global Deloitte AI Institute, weaves research and real-world examples into her discussion of AI ethics. She navigates the complexities of AI and its risks, emphasizing that transparency is essential for building trust. Ammanath avoids jargon, so general business readers will find her writing accessible. She discusses each element of AI governance, including bias, safety, privacy and accountability. Every chapter provides strategies, systems and oversight methods for incorporating AI ethics into your business decision-making.
Take-Aways
- Enthusiasm for artificial intelligence (AI) may overwhelm ethical considerations about its use.
- People need to apply their values to drive AI ethics.
- Developers must remove biases from the data they use to train AI algorithms.
- AI that works in the lab must work in the real world.
- To build trust, make AI transparent and explainable.
- To build confidence, prioritize AI’s safety and security.
- Beyond safety and security, focus on data privacy.
- Make AI accountable and responsible.
Summary
Enthusiasm for artificial intelligence (AI) may overwhelm ethical considerations about its use.
Artificial intelligence will change everything about work and life. You can’t overstate its implications, including the good it might do and its potential for damage. AI has evolved over several decades, yet no playbook exists for rolling it out to the public. Today’s AI landscape resembles the streets when cars first appeared: There were no rules of the road. Business leaders and government policy-makers must catch up, just as they did with the advent of the automobile.
“There is no segment of society nor slice of the market that will go untouched by AI, and this transformation has the potential to deliver the most positive impact of any technology we have yet devised.”
AI demands the thinking and perspectives of people from all professions, sectors and industries. Every organization must determine how to use AI ethically based on its corporate values before adapting and deploying the technology. If workers and other stakeholders don’t trust the AI their leaders implement, it will fail.
Few people realize the degree to which AI already drives many of the tools they use and the decisions they make. Yet AI has only begun to show its capacity for disruption. Despite its nearly unlimited potential, AI can do only what humans direct it to do. People and organizations have a brief window of time in which to make ethical choices that will result in fair, transparent, explainable, reliable, responsible and accountable AI: in short, trustworthy AI.
People need to apply their values to drive AI ethics.
Much of the media attention around advanced AI tends to attach human-like attributes to the technology. That is incorrect; AI doesn’t think or reason. It exists to execute algorithms that humans create. Mass, affordable data storage and powerful computer processing have freed AI from the restrictions that stalled its potential from the 1940s through the end of the 20th century. In 2011, deep neural networks – AI that could essentially learn on its own – dazzled the world.
“AI does not ‘think.’ AI tools are in fact highly complex mathematical calculations.”
Today, advanced machine learning, natural language processing (NLP), pattern recognition, image and speech recognition, and algorithms that can seemingly predict the future only scratch the surface of AI’s potential and what it can deliver. Leaders, developers and people everywhere must make sure humanity can trust the AI it develops.
Developers must remove biases from the data they use to train AI algorithms.
Across the board, align your AI safety and security policies with your corporate values.
People expect equitable treatment, though they realize fairness isn’t a fixed quality. It depends on context. This complicates the development and assessment of algorithmic or machine fairness because AI can’t think the way humans think. That is, it can’t weigh the circumstances and nuances that might make the same decision fair for one person and unjust for another.
Developers must ensure that the data AI trains on and learns from contains as little bias as possible. Your representative datasets must avoid selection and confirmation bias, which might restrict the thoroughness or diversity of your data or allow information that aligns with a developer’s views to dominate. Datasets must not contain any explicit racial, gender or other biases. Yet, developers face difficulties in identifying and removing implicit or unconscious bias. This includes stereotypes that datasets may contain relative to race, gender, disability or other factors.
“Expecting a data scientist team to both build models and ensure they adhere to subjective ethical concepts can be a recipe for poor outcomes and costly consequences for the business.”
For example, when a police force uses historical crime data to build an AI model that predicts where crimes may occur in the future, the model might contain data that reflects bias against certain neighborhoods. The algorithm will predict more crime in those neighborhoods, which means that more police will patrol there and more arrests will occur, potentially perpetuating prejudiced views. An algorithm can easily contain many such biases, thus rendering the AI inherently unfair.
Only people can ensure AI’s ethical and fair use. Algorithms will always contain bias – if only so that they can make more specific recommendations. AI should make recommendations, but people – not only data scientists but, ideally, experts from diverse domains – should review datasets for bias. People should make all consequential decisions arising from AI recommendations and the information behind them.
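To make this concrete, here is a minimal, illustrative sketch – not from the book – of the kind of dataset audit such a review team might run before training begins. It assumes a pandas DataFrame with hypothetical "neighborhood" and "label" columns, and the representation threshold is an arbitrary placeholder.

```python
# Illustrative sketch only: check how well each group is represented in a
# training dataset and how its outcome rate compares, so human reviewers can
# decide whether the data reflects bias. Column names and the threshold are
# hypothetical placeholders.
import pandas as pd

def audit_group_balance(df: pd.DataFrame, group_col: str, label_col: str,
                        min_share: float = 0.05) -> pd.DataFrame:
    """Report each group's share of the data and its positive-label rate."""
    summary = df.groupby(group_col).agg(
        share=(label_col, lambda s: len(s) / len(df)),
        positive_rate=(label_col, "mean"),
    )
    summary["underrepresented"] = summary["share"] < min_share
    return summary.sort_values("share")

if __name__ == "__main__":
    # Toy example: group C is underrepresented and group B has a much higher
    # positive-label rate, both of which reviewers should question.
    data = pd.DataFrame({
        "neighborhood": ["A", "A", "A", "B", "B", "B", "B", "C"],
        "label":        [0,   1,   0,   1,   1,   1,   0,   0],
    })
    print(audit_group_balance(data, "neighborhood", "label", min_share=0.2))
```

A report like this only surfaces candidates for scrutiny; as the author stresses, people from diverse domains still have to judge whether the patterns reflect bias and make the consequential decisions.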
AI that works in the lab must work in the real world.
Data scientists train AI models with real-world data, expecting them to operate well in actual environments. But this does not always happen. Sometimes a model will test as reliable in the lab but then deteriorate as real-world conditions evolve and the model’s algorithm can’t keep pace. “Brittle” AI engenders mistrust because it sometimes works, but not always. “Robust” AI performs consistently despite shifting circumstances or conditions, and that degree of reliability builds faith and trust. A robust AI operating an autonomous vehicle, for example, recognizes pedestrians in all weather conditions, in dark or daylight, and whether they walk, crawl or skip across the road.
“For AI to be trusted, it must be robust and reliable in real-world situations throughout the AI life cycle.”
Humans adapt easily and, for the most part, subconsciously. You learned at a young age that you will suffer a burn if you touch a pan on a hot stove. But AI can’t come to such a conclusion without explicit instructions, and creating instructions for every potential set of circumstances takes enormous time and thought.
AI doesn’t generalize – at least, not so far. It can’t take something it learns in one domain – such as how to lift an item off a shelf and onto a conveyor belt – and use it in another, like placing dishes in a dishwasher.
AI’s reliability depends on the quality of the data it trains on; that training data, AI’s inputs, must reflect real-world conditions. It has to be complete and labeled well. Training AI models demands enormous datasets, so developers must sometimes add synthetic data – artificially generated data that resemble real data – to fill in the gaps. The need for reliability varies. For example, people demand perfection in a medical setting, but they allow results that are “close enough” when they want a recommendation for a good TV show.
Organizations should perform regular data audits, test for reliability, and monitor their algorithms’ performance systematically and routinely to ensure their continued robustness.
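As one illustration of what such routine monitoring could look like in practice – an assumption of this summary rather than a method the book prescribes – the following sketch compares a feature’s training distribution against recent production data using SciPy’s two-sample Kolmogorov-Smirnov test. The function name and alert threshold are hypothetical.

```python
# Illustrative sketch only: flag possible data drift by comparing a feature's
# training distribution with values seen in production. The threshold is an
# arbitrary placeholder; real monitoring would cover many features and metrics.
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(train_values: np.ndarray, live_values: np.ndarray,
                p_threshold: float = 0.01) -> bool:
    """Return True if the live distribution differs significantly from training."""
    result = ks_2samp(train_values, live_values)
    return result.pvalue < p_threshold

# Example: simulated training data versus production data whose mean has shifted.
rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)
live = rng.normal(loc=0.6, scale=1.0, size=1_000)
print("drift detected:", drift_alert(train, live))
```

A check like this targets the “brittle” failure mode described above, in which a model that tested well in the lab quietly degrades as real-world conditions evolve.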
To build trust, make AI transparent and explainable.
Complex algorithms can produce a “black box”: Data flows in and recommendations come out, but no one – not even the data scientists who created the algorithms – can explain the reasons behind an AI’s recommendation.
Those who deploy AI models should know enough about how they work to describe them in terms laypeople can understand. They should be transparent about AI, including letting people know whenever AI factors into processes and decisions that affect them. Transparent managers would not expect users to read multiple pages of legal fine print in order to understand and consent to usage terms and conditions. However, you don’t need 100% transparency: Some data must remain private, and companies need to be able to protect their intellectual property (IP). Leaders must balance these competing demands.
“Transparency engenders trust across a range of factors. It is the beating heart of ethical AI and an essential component in capturing the full potential these systems can generate.”
Data scientists optimize their algorithms according to priorities. For example, a GPS might optimize traffic routes for speed, distance and volume of traffic. Users should know whether the tool weighs speed more heavily than other factors, so they can decide whether to accept a recommended route or choose another one that minimizes distance.
Explainable algorithms permit testing to make sure they continue to do what they’re meant to do; they let analysts spot biases and invite stakeholder input and ideas. Depending on the use case, firms that license AI tools from vendors should ask how transparent and explainable the AI is – especially when it will aid in making consequential decisions. Whether you license technology or build it in-house, keep people in the loop throughout.
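To show how such a review might surface what a model actually weighs, here is a hedged, illustrative sketch – not drawn from the book – that uses scikit-learn’s permutation importance on a toy routing model with made-up features.

```python
# Illustrative sketch only: estimate how much each input feature drives a toy
# model's predictions, so reviewers and stakeholders can see and question the
# model's priorities. The features and data are fabricated for the example.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

feature_names = ["speed", "distance", "traffic_volume"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] - 0.5 * X[:, 1] > 0).astype(int)  # toy target driven mostly by "speed"

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Print each feature's mean importance, largest first.
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")
```

Output like this gives users something concrete to react to – for example, confirming that the tool favors speed over distance, as in the GPS example above – without exposing the vendor’s underlying intellectual property.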
To build confidence, prioritize AI’s safety and security.
Realize that AI is vulnerable to attack. The specter of cybercrime extends beyond data leaks of private information to the manipulation of algorithms and the insertion of bias or other unwanted elements. It even extends to the potential for life-threatening situations on factory floors and highways where AI runs machines or vehicles. Attacks and security breaches can expose companies to product or service malfunctions, loss of company secrets and intellectual property, fines and lawsuits, and overall loss of trust from workers, customers and other stakeholders.
Everyone in an organization is responsible for its safety and security, so train all your employees in safety and security measures and protocols. Conduct risk assessments, especially of tools you license from third parties, and build in security mechanisms, such as intrusion detection.
For safety, consider potential physical, psychological, economic, environmental and legal harm. When you adopt safety protocols, test them frequently to make sure they’re effective and work under all conditions.
“As increasingly sophisticated AI is unleashed on high-consequence tasks, safety cannot be an afterthought.”
Assess the risk that, for instance, robots embedded with AI could accidentally injure someone. Be aware that data breaches and biases in algorithms could cause economic or legal harm, and that advanced AI posing as a human might affect a user’s mental state. Just constructing AI models can cause substantial environmental harm due to their microprocessors’ vast power demands: Training a natural language processing model “can produce as much CO2 as 300 trans-American round trip flights or five times the total lifetime emissions of an American car.”
Beyond safety and security, focus on data privacy.
Wherever you have personally identifiable data – especially when it is linked to sensitive information – prioritize privacy considerations. Enormous amounts of personal information flow into government and commercial systems from nearly everything people do today. Individuals can’t keep track of their data or every system or entity to which it flows. Data collectors and analyzers must ask for people’s consent before gathering sensitive data and must then protect that data.
“Looking at patterns, AI may be able to infer a data point about a person even though that person did not share that information.”
Data that people share among entities permits an ever-deeper understanding of consumers. Even when you remove your name from use, triangulation of data might identify you. This creates the potential for situations such as insurers receiving information about your driving habits. Add facial recognition, and AI tools might identify you and potentially reveal your name and address even though you are just going about your private business. Authorities might need to intervene to prevent such intrusions. Some national and state governments have begun to implement data protection and privacy laws, such as those already in effect in Europe and California.
Make AI accountable and responsible.
Where AI makes recommendations and people make the decisions, accountability for those choices clearly lies with the humans. If AI makes decisions without a human in the loop – in an autonomous vehicle, for example – assigning accountability for negative consequences could prove murkier. Yet being able to assign responsibility is crucial to establishing workforce accountability.
You can’t blame the AI, since it lacks self-awareness, but you can improve it. If developers can explain the AI models they create, they can accept accountability and responsibility. The speed at which AI changes and new systems launch often outpaces existing categories of accountability.
“It is a question not just of what can be done with AI but how it should be done – or whether it should be done at all.”
Organizations must assign responsibility and accountability or their stakeholders will not trust their tools and processes. In some cases, where the potential unintended consequences of a new AI innovation prove too broad or consequential, being responsible means not pursuing that technology – no matter how lucrative the opportunity.
When systems fail, the law will almost certainly hold the firms that design and provide those systems legally accountable with financial, if not criminal, consequences for negligence. A firm that licenses a technology could share responsibility with the vendor if something goes wrong. Firms should gather as many diverse perspectives as possible about the potential consequences of all the AI tools they use or create.
About the Author
Beena Ammanath, the executive director of the Global Deloitte AI Institute, founded the nonprofit Humans for AI.