Mastering the Rare Art of Machine Learning Deployment. In “The AI Playbook,” renowned author and AI expert Eric Siegel unveils a groundbreaking guide to harnessing the immense power of artificial intelligence for business success. This game-changing book is a must-read for any organization seeking to stay ahead in the AI revolution.
Discover the transformative strategies that will propel your business to new heights in the age of AI. Keep reading to unlock the secrets of “The AI Playbook” and gain a competitive edge in your industry.
Table of Contents
- Genres
- Review
- Recommendation
- Take-Aways
- Summary
- Business and data professionals must develop a shared understanding of Machine Learning (ML) opportunities.
- Step 1: Establish your value-driven deployment goal by leveraging “backward planning.”
- Step 2: Business and tech leaders collaborate to specify a prediction goal.
- Step 3: Use the right evaluation metrics to determine how well your model makes predictions.
- Step 4: Prepare the correct data, or “fuel,” to achieve your desired outcomes.
- Step 5: Train your ML model, ensuring it detects patterns in a sensible way.
- Step 6: Deploy your ML model, getting full-stack buy-in throughout your organization.
- Commit to maintaining your model and a strong ethical compass.
- About the Author
Genres
Business, Technology, Artificial Intelligence, Machine Learning, Data Science, Management, Strategy, Innovation, Predictive Analytics, Leadership
“The AI Playbook” offers a comprehensive roadmap for leveraging AI to drive business growth and innovation. Siegel breaks down complex AI concepts into easily digestible insights, providing practical strategies for implementing AI across various business functions.
The book covers key topics such as predictive analytics, machine learning, deep learning, and AI ethics. Siegel emphasizes the importance of data quality, human expertise, and continuous learning in building successful AI initiatives.
He also addresses common challenges and pitfalls, offering guidance on navigating the AI landscape effectively. Throughout the book, Siegel presents real-world case studies and examples to illustrate the transformative power of AI in action. “The AI Playbook” is an indispensable resource for business leaders, managers, and professionals seeking to harness the potential of AI for competitive advantage.
Review
“The AI Playbook” is a tour de force in the field of AI for business. Eric Siegel’s expertise shines through as he demystifies AI and provides a clear, actionable framework for success. The book strikes an ideal balance between technical depth and accessibility, making it valuable for both AI novices and experienced practitioners.
Siegel’s engaging writing style and use of real-world examples keep readers engaged and motivated to apply the insights to their own organizations. The book’s comprehensive coverage of AI topics, from basic concepts to advanced strategies, sets it apart as a definitive guide. Siegel’s emphasis on the human element in AI is particularly noteworthy, reminding readers that AI is a tool to augment, not replace, human intelligence.
The only minor drawback is that some of the case studies may become dated over time as AI technology evolves rapidly. Nevertheless, the core principles and strategies outlined in “The AI Playbook” are timeless and invaluable. This book is a must-read for anyone serious about harnessing the power of AI for business success.
Recommendation
Many business leaders lack a solid understanding of machine learning (ML). Meanwhile, company data scientists often feel disconnected from the business side. In this practical guide, bestselling author and former Columbia University graduate computer science professor Eric Siegel urges business and tech leaders to come out of their silos. He argues that collaboration is vital for organizations hoping to harness the full potential of machine learning models and explains how to apply ML in ways that will transform your organization and optimize your operations.
Take-Aways
- Business and data professionals must develop a shared understanding of Machine Learning (ML) opportunities.
- Step 1: Establish your value-driven deployment goal by leveraging “backward planning.”
- Step 2: Business and tech leaders collaborate to specify a prediction goal.
- Step 3: Use the right evaluation metrics to determine how well your model makes predictions.
- Step 4: Prepare the correct data, or “fuel,” to achieve your desired outcomes.
- Step 5: Train your ML model, ensuring it detects patterns in a sensible way.
- Step 6: Deploy your ML model, getting full-stack buy-in throughout your organization.
- Commit to maintaining your model and a strong ethical compass.
Summary
Seizing machine learning (ML) opportunities requires deep collaboration between the business and technical sides of your organization. If you’re a business professional, you’ll need to develop a holistic understanding of the ML process: You should grasp what the models you’re using predict; how these predictions will affect your operations; the metrics you’re using to determine how accurately your ML project is making predictions; and the kinds of data you’ll need to collect. If you’re a data professional, you must broaden your perspective on ML to understand its power to transform the entire business.
“The real ‘data science unicorn’ isn’t the person who knows every analytical technique and technology; rather, it’s the one who has expanded their skill set to also participate in a company-wide, business-oriented effort that gets their models deployed.”
Bridge any gaps between the business and data ends of your organization with bizML — a six-step business approach that Eric Siegel designed to help team members successfully launch and deploy transformative ML projects from a place of shared understanding. If you’re wondering how bizML differs from Machine Learning Operations (MLOps), MLOps centers on the technical side of ML projects; bizML focuses on organizational execution. If you’re a data professional familiar with the Cross Industry Standard Process for Data Mining (CRISP-DM), bizML complements CRISP-DM by focusing specifically on ML projects, enabling you to explore topics related to predictive models more deeply.
Step 1: Establish your value-driven deployment goal by leveraging “backward planning.”
When artificial intelligence (AI) strategies fail, University of Toronto management professor Mihnea Moldoveanu explains, it’s because people treat AI as a goal instead of as a tool for achieving a desired outcome. When you ask whether your company or tech team has “an AI strategy,” it’s like asking whether there’s an “Excel strategy.”
Successful ML and AI projects require “backward planning”: establishing an end to work toward. Clarify your business objective, specifying what your model will predict and the ways your predictions will affect your operations. For example, if you hope to “increase ad response rates,” you would specify your application (ad targeting), what’s predicted (whether users will respond to your ads), and your model’s deployment (displaying targeted ads to the users most likely to respond to them).
“All planning is backward planning. You start with a goal and work out how you’re going to get there.”
ML’s applications extend beyond predicting business outcomes: You may want to consider using predictive ML analytics to address social issues. For example, the US nonprofit Predict Align Prevent uses ML to predict which children are at risk of abuse or neglect, and climate-focused organizations have used ML to predict outcomes such as the amount of carbon a reforestation project might capture.
After choosing how to apply your ML, your next step is to get approval from stakeholders with decision-making power at your company. Focus on selling the gains ML can help your team make instead of overly fixating on the “cool technology” you’ll be using.
Step 2: Business and tech leaders collaborate to specify a prediction goal.
Define your prediction goal in detail, translating a broad business intention into the specific requirements needed for technical execution. To identify “viable prediction goals,” determine where the behaviors you can analytically predict overlap with the goals business leaders believe hold value. Adhere to the “Law of ML Planning” to avoid missing crucial details: Ensure that “deployment” — what the model will predict and how those predictions will shape business operations — stays at the forefront at every step in your ML project. Think, too, about potential ethical issues that may arise. For example, predictive policing models tend to inflate the likelihood that Black parolees will be rearrested after they leave prison because the models are trained on data that skews the predictions: These individuals usually reside in areas that receive more police attention in general, which results in a greater number of arrests.
“Your mission, should you choose to accept it, is to forge a rare collaboration, enlisting business leaders to weigh in on the caveats and qualifications that determine the prediction goal in all its detailed glory.”
If you’re launching a new ML project, consider creating a binary model, or “binary classifier,” that makes predictions by answering yes/no questions. For example, if you want to target customers more effectively with your advertising, your binary model could answer the question: “Will the customer buy if contacted?” One of the benefits of binary models is that they estimate the probability of each yes/no outcome — though an overly general question like this one will not yield particularly useful predictions. If the deployment goal is to boost your marketing efforts, you must consider which customers are worth contacting. So, the question might become, “If sent a brochure, will the customer buy within thirteen business days with a purchase value of at least $125 after shipping, and not return the product for a refund within forty-five days?”
You may also find alternate predictive models applicable, such as “numerical” or “continuous” models that predict numerical values and answer questions that start with phrases such as “how many” or “how much.” For example, you might use a numerical model to predict the profit or revenue you expect to gain from a customer over an extended period.
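The difference between a precisely scoped binary prediction goal and a numerical one can be sketched in code. This is an illustrative example only — the field names and the labeling function are hypothetical, not from the book:

```python
# Hypothetical customer-order record; all field names are illustrative.
def label_purchase(order):
    """Binary label for the refined question: 'If sent a brochure, will the
    customer buy within 13 business days, spend at least $125 after shipping,
    and not return the product within 45 days?'"""
    return (
        order["business_days_to_purchase"] <= 13
        and order["value_after_shipping"] >= 125.0
        and not order["returned_within_45_days"]
    )

# A numerical (continuous) target instead answers "how much?":
def numeric_target(order):
    return order["value_after_shipping"]

example = {
    "business_days_to_purchase": 9,
    "value_after_shipping": 150.0,
    "returned_within_45_days": False,
}
print(label_purchase(example))  # True — a positive case
```

Encoding the goal this explicitly forces the caveats (time window, minimum value, returns) into the open, which is exactly the collaboration Siegel asks business leaders to join.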
Step 3: Use the right evaluation metrics to determine how well your model makes predictions.
After you’ve specified what your ML project will predict, evaluate the model’s performance to discover how well it’s making predictions. Accuracy is not the best way to measure your model’s success. Many researchers misleadingly describe ML models as “high accuracy.” The public assumes such claims mean the models can reliably identify “positive and negative cases” — like whether something will or will not happen or if something is or isn’t true. However, in reality, “high accuracy” models only perform “better than random guessing.” For example, Stanford University researchers created a model that correctly identified whether a man was gay or straight 91% of the time based on a photo of his face. But test parameters specified that, for every pair of faces presented for scrutiny, one subject was straight and the other wasn’t. Employed in the real world, the model would incorrectly label someone as gay or straight more than half the time.
“Headlines about machine learning promise godlike predictive power… It’s all a lie.”
Focus on metrics such as “lift,” which calculates the ratio by which your model performs better than making guesses without the model, and “cost,” which tallies the price of false negatives (FNs) and false positives (FPs). The “fatal flaw” of using accuracy to evaluate your model is that accuracy treats FNs and FPs as equally bad, when in reality, one may be costlier than the other. For example, misclassifying a legitimate credit card transaction as “fraudulent” is an FP, which costs banks money given that the card user will likely use an alternate card to make the purchase. FNs can be even costlier for banks, as they could end up repaying users after criminals make large, fraudulent purchases using stolen cards.
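Both metrics reduce to simple arithmetic. A minimal sketch, with illustrative numbers (the rates and per-error costs below are assumptions, not figures from the book):

```python
def lift(precision_with_model, base_rate):
    """Lift: how many times better the model's hit rate is than
    targeting without the model."""
    return precision_with_model / base_rate

def expected_cost(fn_count, fp_count, fn_cost, fp_cost):
    """Cost weights false negatives and false positives differently,
    unlike plain accuracy, which treats every error the same."""
    return fn_count * fn_cost + fp_count * fp_cost

# If 1% of all customers buy, but 3% of the customers the model
# targets buy, the model delivers a lift of 3.
print(round(lift(0.03, 0.01), 6))  # 3.0

# Fraud example: missed fraud (FN) costs far more per case than a
# wrongly blocked transaction (FP).
print(expected_cost(fn_count=10, fp_count=100, fn_cost=500.0, fp_cost=5.0))
# 5500.0 — the 10 missed frauds dominate despite being 10x rarer
```

Because the two error types carry different prices, a model tuned to minimize total cost can look “less accurate” than one tuned for accuracy while saving more money.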
Step 4: Prepare the correct data, or “fuel,” to achieve your desired outcomes.
It doesn’t matter how sophisticated your algorithms are if you don’t use the right data to fuel your predictions, so make sure your data spreadsheet is:
- Long — You need a lengthy list of entries to ensure your data is representative. A bank using ML to spot fraudulent credit card activity would require a long and broad list of actual transactions.
- Wide — Note pertinent information about each entry in the row connected to the item. This information determines the factors the model will use for its predictions. For example, the bank would note merchant and cardholder behavioral characteristics when making an entry about a transaction.
- Labeled — Humans often need to manually label data to help train ML software to detect negative and positive cases. For the bank’s goal, this means going through the list and, in a separate column, noting whether each transaction in the training set is an example of fraud.
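The three requirements above can be pictured as a toy table. This is a hypothetical miniature of the bank’s fraud data — the column names and values are invented for illustration:

```python
# A toy "big ol' flat file": long (many rows), wide (many feature columns
# per row), and labeled (a human-assigned "fraud" column).
transactions = [
    {"amount": 12.50, "merchant_type": "grocery", "hour": 14,
     "cardholder_avg_spend": 40.0, "fraud": False},
    {"amount": 2300.0, "merchant_type": "electronics", "hour": 3,
     "cardholder_avg_spend": 55.0, "fraud": True},
    {"amount": 60.0, "merchant_type": "fuel", "hour": 9,
     "cardholder_avg_spend": 48.0, "fraud": False},
]

features = [k for k in transactions[0] if k != "fraud"]  # the "wide" part
labels = [t["fraud"] for t in transactions]              # the "labeled" part
print(len(transactions), features, labels)
```

A real training set would need many thousands of such rows (the “long” part) before the patterns it contains become representative.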
“Machine learning algorithms may be the fun, sexy part — everyone wants to crash that party — but improving the data is where you usually get the greatest payoff.”
Establish the type of data that most effectively trains your model. Most ML business applications rely on a “big ol’ flat file” or “BOFF”: a two-dimensional table featuring single-row examples labeled as positive or negative cases. If your model handles inputs such as audio or images, your raw data won’t fit neatly into one row. Still, the same rules apply when working with this sort of “unstructured” data as when using “structured” BOFF data: Each case needs a label stating whether it’s a positive or negative case.
Whether you’re using structured or unstructured data, be wary of “noise”: Data with incorrect values or “corrupt data” and data displaying seemingly random values. You may corrupt your data if someone mislabels fields, while seemingly random values may indicate that you don’t yet understand the factors affecting your data.
Step 5: Train your ML model, ensuring it detects patterns in a sensible way.
ML algorithms “learn” from your data, deriving functional predictive models. Your model emerges from a “rule” or “pattern” that an algorithm derives from your data and uses to make predictions. While it can be tempting to greenlight a model when it aligns with your human intuition, understanding your model is not typically a straightforward process. For example, many assume their models will determine causation when, in reality, the best you can often hope for is to establish correlation. However, if the patterns your model detects and uses to make predictions tend to be reliable, you do not necessarily need to establish causation.
“When a newborn model emerges, it absorbs all your attention. Like counting a baby’s fingers and toes, you examine it thoroughly, poking around to see how well it works and why — what makes it tick.”
Familiarize yourself with the different modeling methods you can use in ML processes, which may include (but are not limited to): decision trees, which feature nested “if-then” statements; linear regression, which combines input variables to create a weighted sum; and logistic regression, which is ideal for classification and predicting yes/no values.
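The three model families can be sketched as tiny functions. The weights, thresholds, and feature names below are made up for illustration; real models learn these values from training data:

```python
import math

# Decision tree: nested if-then statements over the inputs.
def tree_predict(amount, hour):
    if amount > 1000:
        if hour < 6:
            return True   # flag large late-night transactions
        return False
    return False

# Linear regression: a weighted sum of inputs yields a number.
def linear_predict(x1, x2, w1=0.5, w2=2.0, bias=1.0):
    return bias + w1 * x1 + w2 * x2

# Logistic regression: squash the weighted sum into a 0-to-1
# probability, suited to yes/no classification.
def logistic_predict(x1, x2, w1=0.5, w2=2.0, bias=-3.0):
    z = bias + w1 * x1 + w2 * x2
    return 1.0 / (1.0 + math.exp(-z))

print(tree_predict(2000, 3))      # True
print(linear_predict(2, 1))       # 4.0
```

The structural difference matters in practice: a tree’s if-then rules can be read and audited directly, while the regression forms trade that readability for smooth, graded outputs.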
Investigate your models to ensure there aren’t any “bugs.” Sometimes, models appear to work well but combine input variables in problematic ways. For example, University of Washington researchers designed a model to distinguish huskies from wolves using images. It appeared to be working, but when they looked into its decision-making process, they learned that the model was labeling all images with snow in them “wolves” and those without snow “huskies,” as it detected that wolves appeared more in snowy backgrounds.
Step 6: Deploy your ML model, getting full-stack buy-in throughout your organization.
Deploying your model means moving from the experimental phase to leveraging its predictive capabilities in the field to drive your operational decisions. Deployment will require full-stack cooperation and buy-in from team members at every level of your organization. Though your leaders may be the ones approving your model, it’s vital that you overcome any resistance and get buy-in from the staff tasked with deploying it in the field. People naturally fear change, so work to build trust in your model.
“ML delivers a rocket, but those in charge still must oversee its launch.”
While models may automate decision-making by deploying predictions to determine whether to take actions — such as contacting customers or filtering spam messages — humans are still in the picture. Many processes only partially involve automated decision-making. For example, while models may automatically approve or disapprove new lines of credit for bank customers, human loan operators still make final decisions in some instances. When you have a “human-in-the-loop,” you deploy your predictive model by giving a person the power to make operational decisions after integrating data from your model. Consider mitigating your deployment risk by using a control group. For example, perhaps you deploy incrementally, catching any potential bugs before fully scaling.
Commit to maintaining your model and a strong ethical compass.
Maintaining your model is essential, as models stagnate and degrade. The world is constantly changing, and if you don’t update your model, ensuring the data you use remains relevant, then it may devolve — a phenomenon known as “model drift.” Avoid model drift by monitoring your model’s performance and periodically updating it. This typically involves training an entirely new model using new data you deem more relevant. That said, some organizations have developed more adaptive models that they don’t need to update as frequently, given that they often update data inputs and have engineered their models to track changing patterns.
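A drift check can be as simple as comparing current performance against the level measured at deployment. A minimal sketch, assuming lift as the tracked metric and an invented 80% tolerance threshold:

```python
def should_retrain(recent_lift, baseline_lift, tolerance=0.8):
    """Flag model drift: recommend retraining when the lift measured on
    recent data falls below a fraction of the lift measured at deployment.
    The 0.8 tolerance is illustrative; pick one suited to your costs."""
    return recent_lift < tolerance * baseline_lift

# Baseline lift of 3.0 at deployment; monitor recent performance.
print(should_retrain(recent_lift=2.1, baseline_lift=3.0))  # True
print(should_retrain(recent_lift=2.8, baseline_lift=3.0))  # False
```

Whatever the threshold, the key is that monitoring runs continuously after launch, so degradation triggers retraining on fresher data rather than going unnoticed.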
“When you launch astronauts into space, you commit yourself to a new job: You’ve got to keep them alive. Likewise, once it’s in play, sustaining a model’s viability moving forward takes maintenance, monitoring, and vigilance.”
Remember that while you can use ML models as tools for good, they can also perpetuate systemic disparities. Take steps to ensure your model doesn’t operate in a discriminatory way. Be wary of machine bias, which refers to denying certain services or opportunities to users based on the social group to which they belong. When training your model, aim to equally represent different groups to avoid “representation bias,” which may prevent underrepresented groups from accessing services. Be mindful of the fact that your model’s predictions may infer sensitive attributes about users, such as their sexual orientation or whether they’re pregnant. Aspire to use your data ethically and responsibly, grounding decision-making in empathy.
About the Author
Eric Siegel is a consultant and former graduate computer science professor at Columbia University. He is the author of Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die.