
How Can Companies Mitigate Vulnerabilities in AI and Machine Learning?

What Are the Security and Ethical Risks of Artificial Intelligence?

Explore the security, privacy, and ethical challenges of artificial intelligence with actionable insights from Beyond the Algorithm by Omar Santos and Petar Radanliev. Ready to protect your organization from AI vulnerabilities and navigate the complex ethics of machine learning? Read the full article to discover expert strategies from Beyond the Algorithm and secure your systems today!

Recommendation

Artificial Intelligence (AI) leverages Machine Learning (ML) to generate algorithms and statistical models used in a variety of fields, including healthcare, finance, transportation, communications, and the energy sector. But, as AI security experts Omar Santos and Petar Radanliev argue, the rise of AI also brings moral challenges. It can open companies up to a number of legal and security risks, including cyberattacks. By offering a non-technical overview of AI’s uses and raising awareness of AI-related vulnerabilities, Santos and Radanliev empower readers to tackle these challenges head-on.

Take-Aways

  • People have speculated about the possibility of Artificial Intelligence (AI) for centuries.
  • AI leverages Machine Learning (ML) to analyze data, make predictions, and automate tasks.
  • AI comes in many forms, and companies are using these innovations in a variety of business contexts.
  • Generative AI and large language models (LLMs) analyze the patterns in existing data and generate new content based on those patterns.
  • Hackers can exploit AI and ML’s security vulnerabilities.
  • Malicious attacks on AI and ML systems occur in phases.
  • Identify system vulnerabilities and ensure AI system infrastructure security.
  • In an AI-dominated world, privacy and ethics are core societal issues.
  • Working with AI demands an understanding of legal issues and regulatory compliance.

Summary

People have speculated about the possibility of Artificial Intelligence (AI) for centuries.

People have imagined creating intelligent machines since ancient times. Greek myths featured mechanical creatures capable of independent thought. The great 17th-century philosopher René Descartes theorized that the human mind was machine-like and posited that mathematics might be able to explain its functions. The first real advances toward artificial intelligence came in the 1930s and 1940s, when British mathematician and cryptographer Alan Turing introduced his theoretical Turing machine. But it wasn’t until the legendary 1956 Dartmouth Conference, which included AI founders like John McCarthy, Marvin Minsky, and Herbert A. Simon, that AI became a real research field. Then the 1960s and 1970s saw significant advances in “expert systems” capable of human-like analysis.

AI leverages Machine Learning (ML) to analyze data, make predictions, and automate tasks.

People refer to the AI systems and tools that exist today as “narrow” AI. This form of AI performs specific tasks — like image recognition or natural language processing — in a human-like way. Some researchers are working to develop “general” AI — an as-yet hypothetical form of AI that would be able to learn, comprehend, and apply knowledge in ways that go beyond training data and across a variety of fields — a far closer approximation of human cognition than narrow AI.

“Artificial intelligence is a broad concept that resembles the creation of intelligent computers that can mimic human cognitive processes.”

AI relies on some form of machine learning (ML). Machine learning creates algorithms and statistical models based on the “training data” people provide to the AI. The ML system can detect patterns and correlations in the training data, which allows it to formulate generalizations and make predictions. For an ML model to work properly, specialists must give it massive amounts of data from which it can extract relevant information. Some ML algorithms draw on labeled data — known as “supervised learning” — and others discover patterns in raw, unstructured data — “unsupervised learning.” The most advanced and sophisticated ML models are “deep learning” models that use layered, artificial neural networks to mimic human brain processes. Deep learning models do better with speech and image recognition and natural language processing than other ML models. However, deep learning models require large amounts of labeled data and significant computing power.
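The supervised/unsupervised distinction above can be made concrete with a small sketch. This example is not from the book: the task, data, and numbers are all invented. The supervised model fits a hidden rule from labeled pairs; the unsupervised one discovers two clusters in raw data using a few iterations of Lloyd’s k-means.

```python
import numpy as np

rng = np.random.default_rng(0)

# Supervised learning: labeled pairs (x, y). Hypothetical task where the
# hidden rule is y = 2x + 1; the model recovers it from noisy examples.
x = rng.uniform(-5, 5, size=50)
y = 2 * x + 1 + rng.normal(0, 0.1, size=50)   # labels with slight noise
slope, intercept = np.polyfit(x, y, deg=1)    # least-squares fit
prediction = slope * 10 + intercept           # generalize to unseen x = 10

# Unsupervised learning: raw, unlabeled data. The algorithm discovers the
# two hidden clusters on its own.
data = np.concatenate([rng.normal(0, 0.5, 30), rng.normal(10, 0.5, 30)])
centers = np.array([data.min(), data.max()])  # crude initial guesses
for _ in range(5):                            # Lloyd's k-means iterations
    nearest = np.abs(data[:, None] - centers).argmin(axis=1)
    centers = np.array([data[nearest == k].mean() for k in range(2)])

print(round(prediction, 1))        # close to 21.0
print(np.sort(centers).round(1))   # cluster centers near 0 and 10
```

The supervised fit needs the labels `y` to learn the rule; the clustering step never sees a label at all, which is exactly the difference the book describes.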

AI comes in many forms, and companies are using these innovations in a variety of business contexts.

Deep learning algorithms and related tools come in a variety of forms. “Convolutional neural networks” (CNNs) use layers to extract relevant data from visual inputs. “Recurrent neural networks” (RNNs) process data over time and in sequence, which makes them especially effective for natural language processing. “Generative adversarial networks” (GANs) are unusual in part because they involve at least two neural networks that compete against one another to solve problems.

“The most significant AI and ML innovations (their primary ideas, uses, and potential future developments)…show how these domains have the enormous potential to change a wide range of businesses and human-computer interactions.”

“Natural language generation” (NLG) can turn otherwise raw, unstructured information into coherent sentences. Businesspeople can use NLG in a variety of commercial contexts, including generating news stories, writing reports, and preparing customer service communications. AI and ML-powered speech recognition technologies can improve human communication with computers and other technologies, including “virtual agents,” which many companies now employ as a form of customer service. AI and ML technologies can help people make good, data-driven decisions and allow them to use biometrics — distinctive traits, like voices, facial features, and fingerprints — to identify individuals for security purposes.

Generative AI and large language models (LLMs) analyze the patterns in existing data and generate new content based on those patterns.

“Generative AI” models like GPT-4, the model behind ChatGPT, can produce written content, music, and images that seem remarkably like what humans create. There are numerous types of generative AI, each with different ways of identifying patterns in data and generating new content, and with varying strengths and weaknesses. For instance, “generative adversarial networks (GANs),” composed of a pair of neural networks, are especially good at synthesizing images, but training them to get good results is challenging. “Autoregressive models,” which predict elements in sequences, excel at natural language processing but may run slowly and need extensive training data sets to work well.
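To make “autoregressive” concrete, here is a toy sketch invented for this summary, using only Python’s standard library: a bigram model that predicts each next word from the one before it — the same predict-the-next-element idea that LLMs apply at vastly larger scale, with far richer context.

```python
from collections import Counter, defaultdict

# Hypothetical training corpus; a real model trains on billions of tokens.
corpus = "the cat sat on the mat the cat ate the rat".split()

# Count how often each word follows each other word (bigram statistics).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length):
    """Autoregressively emit the most likely next word, one step at a time."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])  # greedy decoding
    return " ".join(out)

print(generate("the", 3))   # -> "the cat sat on"
```

Each generated word is fed back in as context for the next prediction — the “autoregressive” loop. Production LLMs replace the bigram counts with a neural network conditioned on thousands of prior tokens.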

“The evolution of large language models started through the development of different AI architectures and training methodologies over the past several years.”

Most “large language models (LLMs)” — a form of generative AI — are based on “transformer models”: neural networks that weigh the significance of every word in a sequence using multiple attention “heads,” each of which scores the relationships among the words; the model then combines the heads’ outputs into a single unified representation. This setup allows the model to capture more complex and nuanced relationships between words and other elements. Perhaps the most successful and popular example of an LLM thus far is OpenAI’s GPT-4. Trained with literally billions of parameters, GPT-4 has a subtle relationship to language and can generate relevant responses even under complicated circumstances. Still, models like GPT-4 raise ethical concerns. These tools may simply reproduce the biases reflected in the data they’re trained on. Bad actors can also use LLMs to create and disseminate misinformation and propaganda.
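A minimal sketch of that attention idea, with random matrices standing in for learned weights — all shapes, names, and numbers here are illustrative, not from the book:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_attention(x, n_heads, rng):
    """Toy multi-head self-attention: each head weighs every word against
    every other word; the heads' outputs are concatenated and mixed."""
    seq_len, d_model = x.shape
    d_head = d_model // n_heads
    heads = []
    for _ in range(n_heads):
        # Random projections stand in for learned Q/K/V weight matrices.
        wq, wk, wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
        q, k, v = x @ wq, x @ wk, x @ wv
        scores = softmax(q @ k.T / np.sqrt(d_head))  # word-vs-word weights
        heads.append(scores @ v)                     # weighted sum of values
    w_out = rng.normal(size=(d_model, d_model))
    return np.concatenate(heads, axis=-1) @ w_out    # unify the heads

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))          # 4 "words", 8-dimensional embeddings
out = multi_head_attention(x, n_heads=2, rng=rng)
print(out.shape)                     # (4, 8): one enriched vector per word
```

Each head produces its own weighting of the sequence, and the final projection merges them — the “unified” representation the text describes. Real transformers learn all of these matrices from data and stack many such layers.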

Hackers can exploit AI and ML’s security vulnerabilities.

A typical person working at a technology firm in the United States might well use AI numerous times over the course of a given day. A smart alarm might wake her up, for instance. Then, her digital assistant could report the weather forecast as her smart coffee machine starts brewing her morning cup. AI could also tell her the day’s work schedule and flag any problems or challenges that might arise. If she works in healthcare, she might use AI to monitor medical research or request some necessary legal advice. At every stage in this person’s day, her AI could come under attack — with potentially serious consequences.

“Adversarial attacks involve subtle manipulation of input data to an AI model to cause it to make mistakes. These manipulations are designed to be almost imperceptible to humans but significant enough to deceive the AI model.”

For example, bad actors might subtly corrupt an autonomous vehicle’s capacity to read street signs, leading to an accident. An adversarial attack could mislead an automated trading system, causing financial losses, or deceive national security surveillance systems, with dangerous consequences. These sorts of attacks are by no means the only security risks that AI and ML systems face. For instance, “data poisoning attacks” alter an AI system’s training data, thereby changing the system’s behavior. Bad actors can use data poisoning attacks to infect social media networks and spread misinformation.
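A stripped-down illustration of the adversarial-input idea, with hypothetical numbers and a linear classifier standing in for a real model: each feature is nudged by a small epsilon in the direction that most lowers the model’s score, and the predicted label flips even though the input barely changed.

```python
import numpy as np

# A hypothetical, already-trained linear classifier: sign(w . x + b).
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def classify(x):
    return 1 if np.dot(w, x) + b > 0 else -1

x = np.array([2.0, 0.5, 1.0])      # a legitimate input, classified +1

# FGSM-style perturbation: for a linear model, the gradient of the score
# with respect to the input is just w, so stepping each feature by
# -epsilon * sign(w) pushes the score down as fast as possible.
epsilon = 0.5
x_adv = x - epsilon * np.sign(w)

print(classify(x), classify(x_adv))   # label flips: 1 -> -1
```

Against deep models the same recipe uses the network’s gradient instead of `w`, and the working perturbations can be small enough to be invisible in an image — which is what makes these attacks on vehicles, trading systems, and surveillance so concerning.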

Malicious attacks on AI and ML systems occur in phases.

It’s crucial to understand the phases of an AI attack, to know the attackers’ various possible methods in advance, and to understand how they can gain access to your system. An attack on an AI and ML system almost invariably begins with “reconnaissance”: Attackers attempt to learn everything possible about the target organization’s AI and ML. They might do this through public records research about the organization and how it uses AI and ML, for instance. In the next phase, the attackers acquire the resources necessary for the attack, such as high-end, expensive computational and software resources.

“Reconnaissance includes techniques where adversaries proactively or subtly collect and accumulate information that can assist with their targeting strategies.”

Once they are fully prepared, the attackers gain initial access, which they can do in numerous ways: They can insinuate themselves into aspects of the ML process, sully or poison data sources, or manipulate public, open-source models. Cyberattackers will do anything to evade detection — in particular, they will find ways to fool your system’s breach-detection mechanisms. In the final stages of the attack, adversaries seek out valuable information on the system and attempt to “exfiltrate” the ML system itself.

Identify system vulnerabilities and ensure AI system infrastructure security.

Being aware of an AI system’s security vulnerabilities helps you mitigate them. Consider, for example, the infrastructure that supports the AI system.

“The underlying infrastructure consists of network, physical and operating system software components that work together to enable a platform where AI systems reside and function.”

AI systems rely on properly functioning network systems. When bad actors breach a network system, they can alter the functioning of the AI. Network vulnerabilities include open communication avenues, inadequate encryption, and even people with access to the system who are willing to misuse and compromise the system. Physical vulnerabilities are fairly straightforward: Servers get stolen or damaged, either on purpose or by accident, and attackers can extract information from servers or alter their functioning.

Software vulnerabilities include buffer overflows — data spilling into adjacent memory in ways attackers can exploit — insecure application programming interfaces (APIs), and systems left unpatched as security issues evolve. Other software-related vulnerabilities include poor control of system access, excessive reliance on third-party services that are themselves insecure, and insecure data storage.

In an AI-dominated world, privacy and ethics are core societal issues.

At this point, AI affects most people’s lives in some way. It has transformed business, industry, and the way people work. AI-driven businesses are on the rise, and companies are eager to hire and retain workers who are competent in using AI and ML technologies. AI has changed healthcare, affecting how doctors diagnose diseases and giving patients access to highly individualized treatment plans.

“The increasing influence of AI also brings up difficult ethical questions. The importance of issues like algorithmic bias, data privacy, and transparency is raised. It is essential to ensure fairness and prevent discrimination in AI systems.”

AI has the potential for enormous benefits to humanity at every level. For that reason, as AI continues to develop, it’s important to maximize its benefits and minimize its ethical risks. For instance, developers can train an AI system on a data set that has inherent biases and thereby produce discriminatory recommendations. To deal with such biases, people must train AI on diverse datasets and monitor algorithm performance.
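How bias flows from training data into recommendations can be seen even in a deliberately naive sketch — the scenario, data, and “model” below are invented for illustration: a system that predicts the most common historical outcome per group faithfully reproduces whatever discrimination shaped that history.

```python
from collections import Counter

# Hypothetical historical loan decisions used as training data. Group B's
# approvals were rare, so the "pattern" available to learn is the bias itself.
training = (
    [("A", "approve")] * 80 + [("A", "deny")] * 20
    + [("B", "approve")] * 10 + [("B", "deny")] * 40
)

# A naive model: predict each group's most common past outcome.
outcomes = {}
for group, label in training:
    outcomes.setdefault(group, Counter())[label] += 1
model = {g: c.most_common(1)[0][0] for g, c in outcomes.items()}

print(model)   # {'A': 'approve', 'B': 'deny'} -- the historical bias, reproduced
```

Real ML models are far more sophisticated, but the failure mode is the same: nothing in the training objective distinguishes a genuine pattern from an inherited prejudice, which is why diverse data sets and ongoing performance monitoring matter.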

Developers often train AI using large amounts of personal data and must ensure they keep that data private and secure. Companies should ask users for their consent when engaging in personal data collection, and the technologies’ processing practices should be transparent. In addition, users should retain control over their data, and companies should prioritize data security and safe storage. As AI technologies advance and evolve, people must not only keep these ethical issues in mind but be on the lookout for emerging ones, too.

Working with AI demands an understanding of legal issues and regulatory compliance.

With the rapid development of AI in recent years, and especially of ChatGPT, people have grown concerned about the lack of a legal regulatory framework for AI. In 2017, the United Nations published recommendations for ethical AI development and use, emphasizing “accountability, transparency, and justice.” The United Nations placed special emphasis on the retention of human control, fairness, and privacy protection.

“Although legal and regulatory frameworks for AI are still developing, they are becoming more significant as the use of AI grows. These frameworks ensure the safe, moral, and responsible development and application of AI systems.”

In 2018, the European Union disseminated a text on the ethical development of AI that, likewise, stressed the need for human control and transparency. In 2023, the European Union advanced the Artificial Intelligence Act (AIA), which provides a legal framework for AI use within the EU, with enforcement expected to begin in 2024. Meanwhile, the United Kingdom has proposed AI policies that give priority to data security, transparency, equity, and proper governance. As of 2023, the United States had yet to put forward a substantial regulatory framework for AI, though it is expected to adopt one within the next few years.

About the Authors

Omar Santos is an expert in ethical hacking, vulnerability research, incident response, and AI security. He is the author of numerous books and video courses. Petar Radanliev completed his PhD at the University of Wales and became a Post-Doctoral Research Associate in the Department of Computer Science at the University of Oxford.