Discover the keys to experimentation success with “The Experimentation Field Book: A Step-by-Step Project Guide.” Crafted by innovation experts Jeanne Liedtka, Natalie Foley, Elizabeth Chen, and David Kester, this invaluable resource empowers you to navigate the complexities of experimentation with confidence and precision.
Keep reading to learn how “The Experimentation Field Book” can transform your projects and propel your innovation efforts to new heights.
Table of Contents
- Genres
- Review
- Recommendation
- Take-Aways
- Summary
- Use experiments to test if new ideas are likely to succeed.
- The experimentation process includes defining what to test, designing how to test it, and analyzing what you learned.
- Step 1: Frame testable ideas to know what concept to test.
- Step 2: Define your evidence to know what to collect.
- Step 3: Select your test to find the right recruits.
- Step 4: Build a prototype to get the feedback you need.
- Step 5: Execute, analyze, and iterate to determine what to do next.
- About the Authors
Genres
Business, Innovation, Entrepreneurship, Project Management, Design Thinking, Lean Startup, Agile Methodology, Organizational Change, Product Development, Creativity
“The Experimentation Field Book” provides a comprehensive, step-by-step guide to designing and executing successful experiments in various contexts. The authors draw upon their extensive experience in innovation and design thinking to offer practical tools, frameworks, and real-world examples.
The book covers key topics such as defining the problem, generating hypotheses, designing experiments, collecting and analyzing data, and iterating based on insights. It emphasizes the importance of a structured yet flexible approach, encouraging readers to embrace uncertainty and learn from failures. The authors also highlight the role of collaboration, stakeholder engagement, and ethical considerations throughout the experimentation process.
Review
“The Experimentation Field Book” is an indispensable resource for anyone seeking to harness the power of experimentation for innovation and problem-solving. The authors’ expertise shines through in their clear, actionable guidance and relatable case studies. The book strikes a perfect balance between theory and practice, providing readers with a solid foundation in experimentation principles while offering concrete tools and templates for immediate application.
One of the book’s greatest strengths is its emphasis on iterative learning and adaptation. The authors recognize that experimentation is not a linear process, but rather a continuous cycle of testing, learning, and refining. They provide strategies for navigating the challenges and uncertainties inherent in experimentation, encouraging readers to embrace a growth mindset and view failures as opportunities for improvement.
The book also excels in its coverage of stakeholder engagement and collaboration. The authors underscore the importance of involving diverse perspectives and building buy-in throughout the experimentation process. They offer practical advice on communicating effectively with stakeholders, managing expectations, and creating a culture of experimentation within organizations.
While the book is comprehensive in scope, it remains accessible and user-friendly. The authors break down complex concepts into digestible chunks and provide ample visuals and examples to illustrate key points. The field book format makes it easy for readers to reference specific sections as needed during their own experimentation journeys.
Overall, “The Experimentation Field Book” is a must-read for anyone looking to unlock the potential of experimentation in their work. Whether you are an entrepreneur, innovator, or project manager, this book provides the tools and insights you need to design and execute impactful experiments. By following the authors’ guidance, you can approach challenges with greater confidence, creativity, and resilience, ultimately driving meaningful results and sustainable innovation.
Recommendation
Is your new idea for a product or service worth pursuing? To find out, treat your idea like a hypothesis and test it with an experiment. In this concise and practical guide to the experimentation process, a star team of educators and consultants outlines the five steps necessary to turn testable ideas into profitable realities. Grounded in the real stories of four organizations, the team explains how to make an idea testable, how to design the actual test, and what to do with the results that follow. Come prepared with an idea to work with, and this field book will guide you toward success.
Take-Aways
- Use experiments to test if new ideas are likely to succeed.
- The experimentation process includes defining what to test, designing how to test it, and analyzing what you learned.
- Step 1: Frame testable ideas to know what concept to test.
- Step 2: Define your evidence to know what to collect.
- Step 3: Select your test to find the right recruits.
- Step 4: Build a prototype to get the feedback you need.
- Step 5: Execute, analyze, and iterate to determine what to do next.
Summary
Use experiments to test if new ideas are likely to succeed.
Experimentation helps bridge the gap between imagining an idea and making it a reality. The data you collect from the experimentation process builds up an evidence base that lets you know whether to launch a new product or service and how to do it. Experimenting mitigates risk in a dynamic world, allowing you to address known unknowns — things you know you don’t know — and discover unknown unknowns — things you don’t know you don’t know.
“Well-designed, learning-oriented field experiments can add tremendous learning and risk reduction but take only hours instead of weeks and cost next to nothing.”
Unfortunately, experiments are underused when launching new product or service solutions. All too often, they get overlooked as organizations go full steam ahead with the mantra of “just build it, and they will come.” However, if you build your product or service and the customers don’t come, you waste a lot of resources. Experiments help you avoid that waste. They protect you from overspending on unwanted solutions, allow you to test many ideas to see what scales, and connect you with early adopters to guide future development.
In addition to being underused, experiments are often poorly designed. People assume gathering evidence is the starting point for testing when, in fact, you must first develop a hypothesis: your best guess as to what you think your customer wants and whether you can deliver it. You should gather evidence to prove or disprove that best guess only after forming a hypothesis. Hypothesis-driven thinking encourages you to test your assumptions about potential products or services, and it helps you make evidence-backed decisions about whether you should move forward with your new solution or set it aside for now.
For example, Nike believed parents needed a solution to a set of interconnected problems: Children constantly outgrow their shoes; shoe shopping in stores with children is more stressful than fun; and people are often unsure how to dispose of old, beat-up shoes. The company’s hypothesis that parents would welcome an easy way to obtain new shoes for their kids and get rid of used pairs served as the starting point for testing a new shoe subscription service, EasyKicks.
The experimentation process includes defining what to test, designing how to test it, and analyzing what you learned.
Whether you’re fixing a broken feature, developing a new product, or launching a new program, it helps to know how to design and execute experiments. The experimentation process has five steps:
- “Framing testable ideas” — Look at the specifics of your idea to ensure it’s truly testable.
- “Defining evidence” — Specify the kind of evidence you need to test your hypothesis and determine where to find that evidence.
- “Selecting your test” — Ascertain what type of test will allow you to collect the required evidence.
- “Building the prototype” — Develop the simplest prototype that will allow you to get feedback.
- “Executing, analyzing, iterating” — Execute the test, analyze the results, and iterate your way to success.
The first two steps clarify what you will test; the next two outline how you will test it; and the last asks what you learned. Although the steps are sequential, most experiments involve looping back and forth between steps as you iterate your way to your final offering.
Step 1: Frame testable ideas to know what concept to test.
A testable idea is specific, distinct, and worth the effort needed to test it. Rather than a vague idea for a new offering, a testable idea gives precise details about your intended customer, what problems it solves, how it works, and why it’s better than other available solutions.
Concepts are collections of ideas. For example, EasyKicks combined the idea of a subscription service with the idea of recycling old shoes. Not all concepts are created equal, however. Prioritize them by using the “Value/Effort Matrix.” This helps you determine how much value your concept creates and how much effort you need to expend to create that value.
To draw this matrix, write “VALUE” on the y-axis and “EFFORT” on the x-axis. Then, create four quadrants with “High Value Created/Easy to Execute” in the upper left, “High Value Created/Hard to Execute” in the upper right, “Low Value Created/Easy to Execute” in the lower left, and “Low Value Created/Hard to Execute” in the lower right. Place your ideas and concepts into the appropriate quadrants. If you have many discrete ideas, combine them into larger, strategic concepts.
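The quadrant logic above can be sketched in a few lines of code. This is a minimal illustration, not a tool from the book: the concept names and the 1–10 scores are hypothetical stand-ins for a team’s own judgment, and the midpoint that splits “high” from “low” is an assumed convention.

```python
# A minimal sketch of the Value/Effort Matrix described above.
# Concept names and scores are illustrative assumptions, not from the book.

def quadrant(value, effort, midpoint=5):
    """Place a concept in one of the four Value/Effort quadrants."""
    value_label = "High Value Created" if value > midpoint else "Low Value Created"
    effort_label = "Easy to Execute" if effort <= midpoint else "Hard to Execute"
    return f"{value_label}/{effort_label}"

# Hypothetical concepts scored by a team on 1-10 scales.
concepts = {
    "Shoe subscription": (9, 7),
    "Recycling drop-off bin": (4, 2),
    "In-store sizing kiosk": (8, 3),
}

for name, (value, effort) in concepts.items():
    print(f"{name}: {quadrant(value, effort)}")
```

In practice, of course, the scoring happens in discussion with your team; the point is only that every concept lands in exactly one quadrant before you prioritize.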
“An idea that cannot be tested is useless; if you can’t bring data to bear on it, it needs more development.”
Once your concepts are plotted on the matrix, pick one to work on. If you’re looking for quick wins, you may wish to prioritize those in the upper-left quadrant, but ultimately, your choice should reflect both your short-term project constraints and your long-term strategic goals. Consider the access you have to a target market, the resources you have to run an experiment, and the support you have to build your final offering.
With your prioritized concept in hand, complete a “Concept Snapshot.” This simple and concise document outlines who the target user is, their unmet needs, what you’ll offer them, why that offering benefits them, and how your offering is differentiated from others’. Finally, transform that static snapshot into a dynamic storyboard that depicts the user experience, allowing your team to share a visual understanding of what they’re creating.
Step 2: Define your evidence to know what to collect.
Defining the evidence you need to test your concept starts with uncovering your assumptions about that concept. Those assumptions are your underlying beliefs about why you think your concept exists in the “Wow Zone,” where desirability, feasibility, and viability meet. To uncover those assumptions, explain why your concept is desirable enough to attract an audience, feasible enough to execute, and viable enough to sustain over time. Once uncovered, prioritize those assumptions to know what to test. Identify the most critical ones — the ones that, if true, move the concept forward, and if false, stop it in its tracks — and consider if you already know enough to determine if they’re true or false.
“Almost all new concepts rest on a limited number of particularly significant assumptions.”
With your critical assumptions prioritized, define the evidence, sources, and targets needed to test them. Whether your evidence is qualitative (such as customer satisfaction), quantitative (such as ticket sales), or a mix of both, make sure you gather the right kind by considering what counts as meaningful in your sector and to the audience you need to convince. Then, note down the sources of your evidence: Will you find it in the archives or in the field?
Finally, clarify your target metrics. Include a minimum value needed to take the next step (your “threshold target”) and a hoped-for value you’d like to see in the future (your “aspirational target”).
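The threshold/aspirational distinction can be made concrete with a small sketch. This is an illustrative assumption, not an example from the book: the metric (sign-ups) and both target values are made up.

```python
# A sketch of threshold vs. aspirational targets for a hypothetical
# sign-up metric. Both target values are illustrative assumptions.

THRESHOLD_TARGET = 20     # minimum sign-ups needed to take the next step
ASPIRATIONAL_TARGET = 50  # hoped-for sign-ups you'd like to see in the future

def assess(observed_signups):
    """Compare an observed result against the two targets."""
    if observed_signups >= ASPIRATIONAL_TARGET:
        return "exceeds aspirational target"
    if observed_signups >= THRESHOLD_TARGET:
        return "meets threshold; proceed to the next step"
    return "below threshold; iterate or rethink the concept"

print(assess(27))
```

Setting both numbers before the test runs keeps the analysis honest: you decide in advance what result would justify moving forward.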
Step 3: Select your test to find the right recruits.
Consider four criteria to find the most appropriate test for your concept’s critical assumptions. The first is whether you can test your assumption with existing data. If you can, run a “thought test”: Guided by your hypothesis, find secondary evidence to confirm or reject it. If you can’t, you must gather new data from a field test. The second is whether you need to test individual components or the whole concept. If your critical assumptions belong to specific elements and the synergy between them is nonessential, test those components by themselves. If not, test the whole concept.
“Taking the extra time to design your test thoughtfully ensures that your ultimate test outcomes will be about the quality of your concept.”
The third consideration is whether you need “Say data” focused on what customers say about their needs and wants or “Do data” focused on what they actually do. Determining which data you need depends on your research goals, stakeholder requirements, the developmental stage of your concept, and whether it requires users to make significant changes to their daily routines.
“Plans are worthless, but planning is everything.” (General Dwight Eisenhower)
The fourth criterion is choosing the most appropriate Say or Do test. From most Say-like to most Do-like, the five field tests are as follows:
- “Cognitive walkthrough” — A simple storyboard or pitch deck that walks prospective users through the details of a concept to test if your value proposition speaks to their needs and pain points.
- “Lemonade stand” — A short-term pop-up venue that attracts users to better understand what draws them to your concept.
- “Smoke test” — A series of ads connected to landing pages that allow you to test specific value propositions.
- “Simulation” — A virtual or physical experience, such as a research session, that allows you to gather and evaluate data to “test a specific solution.”
- “Trial” — A fully functional prototype or rigorously measured test meant to boost confidence in the viability of your solution.
Summarize the details of the test you select. Include your budget, time, assumptions, evidence, and thresholds. In addition, create a “Recruiting Guide”: Outline who your needed research participants are and how you’ll locate and recruit them.
Step 4: Build a prototype to get the feedback you need.
A prototype shows how a concept works, giving you the feedback necessary to determine if it’s desirable, feasible, and viable. To build your prototype, start by assessing how similar your prototype needs to be to your final product, service, or solution. Your answer, ranging from low to high, should depend on form (what it needs to look like), function (how it needs to work), and interactivity (how users need to engage with it).
“Think about starting with the simplest prototype that will let you accomplish the job at hand — testing your prioritized assumptions.”
After determining your fidelity level, pick a prototype format that matches your experimental stage, the assumptions you’re testing, and the field test you selected in step three. For early-stage cognitive walkthroughs and lemonade stands, choose low-fidelity prototypes like storyboards, posters, and pitch decks. These let you test if people have the problem you assume they do and if they’re interested in your solution. For mid-stage smoke tests and simulations, choose mid-level fidelity prototypes like mock-ups and digital ads with landing pages. These test what specific form a prototype should take, how it should function, and whether it’s likely to sell. For late-stage simulations and trials, choose a high-fidelity prototype like a minimum viable product. This tests if all the components work together, how much effort is needed to deliver the solution, and whether it shows any signs that it will scale.
After selecting your prototype format, build it. Review earlier steps, sketch a design, gather required materials, make the prototype, and walk through it with your team. In addition, create a research guide to ensure test consistency. Outline what the tester needs to do (interview questions, consent forms, etc.) and how the participants should interact with the test.
Step 5: Execute, analyze, and iterate to determine what to do next.
With your prototype made, your next step is execution. Start by auditing the test design: Outline who your target audience is, why they need your solution, your assumptions and measurements, and what format, prototype, and tools you’ll use. Pre-test the design to see if it functions as intended. Ask team members to stand in for real users and do a cognitive walkthrough of the test to ensure your value proposition is clear, your data capture plan works, and the right information gets collected. With auditing and pretesting done, run the actual test. Capture any necessary data using raw capture tools like Otter.ai and data synthesizing tools like Airtable.
“Tests (especially early ones) are not about proof; they are about increased levels of confidence in your concept.”
After you execute the test, analyze what you find. Does your data confirm or reject your critical assumptions? Based on your results, iterate to a newer version, pivot to a new idea, or abandon the concept entirely. Share what you learned with key stakeholders. Use quotes and stories from testers and users to highlight significant findings and make your argument about the offering’s value more compelling.
Finally, decide what you should do next. You don’t need complete data, just enough to feel confident you’re moving in the right direction. Generally, the greater the investment, the greater the amount of confidence you need in the concept. On a scale from “very unlikely” to “very likely,” how confident are you that your assumption is valid? Should you design another test or move forward with commercialization?
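The end-of-test decision described above can be sketched as a simple mapping from confidence to next action. The five-point scale labels come from the text; the specific action assigned to each level, and the rule that a bigger investment demands one notch more confidence, are illustrative assumptions.

```python
# A sketch of the post-test decision: map confidence that a critical
# assumption is valid to a suggested next step. The action mapping and
# the investment rule are illustrative assumptions, not from the book.

CONFIDENCE_SCALE = ["very unlikely", "unlikely", "neutral", "likely", "very likely"]

def next_step(confidence, investment_high=False):
    """Suggest a next step given confidence that the assumption holds."""
    if confidence not in CONFIDENCE_SCALE:
        raise ValueError(f"unknown confidence level: {confidence}")
    rank = CONFIDENCE_SCALE.index(confidence)
    if rank <= 1:
        return "pivot or abandon the concept"
    # Greater investment demands greater confidence before moving forward.
    if rank == 2 or (investment_high and rank == 3):
        return "design another test"
    return "move toward commercialization"

print(next_step("very likely"))
print(next_step("likely", investment_high=True))
```

The point of the sketch is the asymmetry: the same “likely” rating that justifies commercializing a cheap offering only justifies another round of testing for an expensive one.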
About the Authors
Jeanne Liedtka is a management professor at Darden Graduate School of Business who specializes in inclusive strategies and corporate innovation. Elizabeth Chen is an associate professor at UNC Gillings School of Global Public Health who specializes in human-centered design in public health training. David Kester founded the strategic design consultancy DK&A and the executive training school Design Thinkers Academy London. Natalie Foley is a team leader at the start-up social enterprise Opportunity@Work and formerly served as CEO of the consulting firm Peer Insight.