There are two ways of making data-driven decisions: analytical and experimental. Only one of them is right for you at any given moment!

Motivation
We need more forecasting, more predictive analytics, generic before & after analysis frameworks, more AI… and yet 65% of managers report no visible value from any of those efforts. I think one important reason analytics projects fail is that companies don't understand when to use analytical data-driven decision making and when to use experimental data-driven decision making.
I claim that companies like Netflix, Airbnb, and Zynga are so successful because they understand these two approaches to “data-driven” decision making and excel at the experimental one, whereas most companies focus only on the analytical side. So let’s define them:
- Analytical: we analyze data from the past, derive an optimal decision, then act “optimally”. This is often what is meant by “data-driven decision making”. But it’s only one side of the coin.
- Experimental: data from the future is considered. How so? Data is built into the product, feedback is built into the decision. For both products & decisions we just probe, we don’t “act with all our chips”. We sense the data & feedback, deduce patterns, act, then repeat this tight cycle over and over until someday we might end up at an analytical approach.
Companies like Netflix, Airbnb, and Zynga choose the experimental approach more often than other companies, as far as I can see. In doing so, they effectively navigate the Cynefin framework and decide whether they are on the “complex” or the “complicated” side of things.
So let’s figure out how you can excel at both approaches as well!
- Let’s see why and when an analytical approach works.
- Let’s see why and when an experimental approach works, where an analytical would be disastrous.
- Let’s recap the cynefin framework which puts a nice context to this, and see, that depending on the situation you have to be able to use both approaches.
- Let’s finally see, why OKRs work so well with both of these approaches, but how you have to be careful to know your domain in OKRs as well!
Analytical data-driven decision making
Let’s consider an architect who builds bridges. He’s 40 and has built 50 bridges in his career so far. Now he decides to think about how to better build bridges in order to make them more sustainable.
He gathers his historical (past) data, checks all the bridges he has built, checks the weather data and the traffic data, and figures out a few things. Using an analytical process, he is able to decompose the deterioration into its main drivers:
- heavy storms
- heavy traffic load
Based on his observations, and the knowledge that those underlying causes will probably not change for the foreseeable future, the architect can now make a perfectly data-based decision and build lots of bridges reinforced either against the weather (in regions with lots of heavy storms) or against the traffic load where he expects it.

That’s an analytical data-driven decision. And it perfectly fits into this domain, one where we do need an expert architect and lots of data to properly analyze & build better bridges, but one where we have “known unknowns”. We have clear constraints like the typical weather on earth for the next 50 years, everything within those constraints is physics and thus relatively “stable & forecastable”.
Analytical data-driven decision-making examples
So what kinds of decisions fall into this category? It turns out that while companies like Airbnb are known for an experimental mindset, they have entire data-scientist tracks devoted to analytical data-driven decision making.
Everything that falls into the category of “optimization” is very much in this sector: operational workflows, Search Engine Optimization, anything that fits into a 5-sigma mindset. All of those things have clear constraints that we know, and are very forecastable.
The usual day-to-day business of a product owner fits here as well. An example from my experience: my team experienced quite a bit of stress and complained about too much “noise in the sprint”. We decided to take an analytical approach, gathered benchmarks for “noise”, which in our company apparently sits somewhere around 10–20%, and collected data from our own sprints. Based on those observations, we were able to reduce the “noise in the sprint” to under 5–10%, a figure that has held up to this day. This is a classical analytical approach.
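The metric itself is simple arithmetic. A minimal sketch, assuming “noise” is measured as the share of unplanned work in a sprint (the exact definition and the numbers here are illustrative):

```python
# A sketch of the "noise in the sprint" metric, assuming noise is the
# share of unplanned story points in a sprint. Numbers are invented.
def sprint_noise(planned_points: int, unplanned_points: int) -> float:
    """Fraction of the sprint taken up by unplanned work."""
    return unplanned_points / (planned_points + unplanned_points)

# (planned, unplanned) story points for three example sprints
for planned, unplanned in [(40, 9), (35, 7), (42, 3)]:
    print(f"noise: {sprint_noise(planned, unplanned):.0%}")
```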
Experimental data-driven decision making
The FX (foreign exchange) trader works in a totally different domain. Imagine a day trader. He takes a bunch of positions, then closes them and analyzes the data, the wins and the losses. He then identifies the perfect trade, analytically deduces the main drivers, forecasts them, and makes his next trade.
Only to see his position turn south…
What’s the difference to the architect? The key difference between the architect and the trader is, that the architect can actually deduce cause & effect relations, where the trader can only deduce patterns, and thus has to probe constantly, very much unlike the architect who, once deduced, can build great bridges for the next 20 years, exactly in the same way.
The architect deduced the cause & effect relation:
Heavy storms => lots of humidity & spray damage => deterioration of the steel parts of the bridge => deterioration of the overall state of the bridge.
The trader cannot. He can, of course, deduce the cause & effect relationship after the fact, but he cannot use it to forecast, because things will not stand in the same relation in the future, not even the next day. Far from it.

So what can the trader do? After all, quite a few traders make money in the markets, and large hedge funds like Bridgewater Associates and Renaissance Technologies even trade automatically/algorithmically. What the trader can do is deduce patterns. He can continually probe to adjust and refine the patterns, which become his trading strategy. Indeed, this is somewhat the way Ray Dalio describes the founding story of Bridgewater Associates.
But for that, the trader must not simply act; he has to probe all the time and act based on the probes. His process looks more like this (a code sketch follows the list):
- Probe T=0: Try a trailing stop-loss of 50 pips & some crossing of a bunch of moving averages as the entry point, using 1% of capital.
- Outcome T=1: Worked out fine, got stopped out after 200 pips. The trader can deduce a pattern: hard rallies after NFP (Non-Farm Payrolls) data releases tend to last in pairs trading against the USD.
- Probe again T=2: Use that strategy & continually probe, deduce more patterns, adjust the strategy to incorporate them.
- Probe, Sense, Act, Probe, …
- T=5: Now he got stopped out 3 times in a row with a 50-pip loss and notices that no hard news data was involved those times. So he sees a pattern relating news data to the rallies he’s trading on, and incorporates that into the strategy.
- …
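Here is what that probe-sense-act loop could look like in code, a minimal sketch built around the 50-pip trailing stop from the list above. The synthetic random-walk prices and the fixed long entry are stand-ins, not a real strategy:

```python
# A sketch of the probe-sense-act loop: enter on a probe, exit via a
# 50-pip trailing stop, record the outcome, adjust, probe again.
# Prices are a synthetic random walk; this is not a real strategy.
import random

PIP = 0.0001

def probe_trade(prices, trail_pips=50):
    """Go long at the first price; exit once price drops trail_pips below its peak."""
    entry = peak = prices[0]
    for price in prices[1:]:
        peak = max(peak, price)
        if peak - price >= trail_pips * PIP:
            return round((price - entry) / PIP)  # outcome in pips
    return round((prices[-1] - entry) / PIP)

random.seed(7)
outcomes = []
for episode in range(5):  # each probe risks only a small slice of capital
    prices = [1.1000]
    for _ in range(500):
        prices.append(prices[-1] + random.gauss(0, 5 * PIP))
    outcomes.append(probe_trade(prices))
    # Sense: inspect the outcomes for patterns (news events, stop width,
    # entry signal) and adjust the strategy before the next probe.
print(outcomes)
```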
And this is really what you see in lots of current tech companies.
Experimental data-driven decision making examples
The key to experimental data-driven work is to “build data into the product” or “build feedback into the decision”, and to start out with a hypothesis-driven approach to problem-solving.
A good example is the Netflix personalization engine, or really most of today’s machine learning products. A machine learning product with a CD4ML pipeline attached to it does exactly this: it experiments without assuming anything, deduces patterns, and then acts by deploying a new model. No one at Netflix tries to guess what to recommend to whom; instead, they rely on lots of experiments to figure it out.
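Netflix’s actual system is of course far more sophisticated, but the core loop can be sketched as a toy epsilon-greedy experiment: mostly serve what has worked so far, keep probing alternatives, and let the sensed feedback, not a human guess, decide what gets served next. The titles and click rates below are invented:

```python
# A toy epsilon-greedy recommendation loop (not Netflix's actual system):
# exploit the best-performing title most of the time, but keep probing,
# and let observed feedback update the policy.
import random

random.seed(1)
titles = ["show_a", "show_b", "show_c"]
clicks = {t: 0 for t in titles}
views = {t: 0 for t in titles}
true_rates = {"show_a": 0.10, "show_b": 0.30, "show_c": 0.20}  # hidden from the policy

def recommend(epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(titles)  # probe
    # Exploit the observed pattern; untried titles get served first.
    return max(titles, key=lambda t: clicks[t] / views[t] if views[t] else float("inf"))

for _ in range(10_000):
    title = recommend()
    views[title] += 1
    clicks[title] += random.random() < true_rates[title]  # sensed feedback

print({t: round(clicks[t] / views[t], 3) for t in titles})
```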
Another great example is most of the work the company Zynga does. They “build data into every decision”; they even invented a new kind of tracking for their games (ZTrack) to get feedback from them. In one experiment, they let users pay for progress on buildables in their “*Ville” games. When they evaluated the data, they realized users weren’t paying for everything, but they would pay for the last part of the progress (that’s a pattern). Based on that insight, Zynga optimized the game for buildables that encourage exactly that behavior.
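Spotting such a pattern in the tracking data is little more than grouping payment events by progress stage. A minimal sketch; the events and the 80% cutoff are invented for illustration:

```python
# A sketch of the buildables pattern: group "pay to finish" offers by
# how far the buildable had progressed. All event data is invented.
from collections import defaultdict

# (progress_when_offered, paid) — one tuple per tracked offer event
events = [(0.2, False), (0.3, False), (0.5, False), (0.6, True),
          (0.90, True), (0.95, True), (0.90, False), (0.97, True)]

buckets = defaultdict(lambda: [0, 0])  # stage -> [paid, total]
for progress, paid in events:
    stage = "last 20% of progress" if progress >= 0.8 else "earlier"
    buckets[stage][0] += paid
    buckets[stage][1] += 1

for stage, (paid, total) in buckets.items():
    print(f"{stage}: {paid}/{total} offers converted ({paid / total:.0%})")
```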
Of course, prototyping and the very nature of Scrum and agile are based on this kind of work. Failing to incorporate it into the product will hinder the success of such frameworks at your company. A great example from my work: I was missing data about the products we were developing, so we started to include a “track the users” story in all of our releases. After getting more & more feedback, we ran a couple of “probes” to see how shifting the direction would go, and it turned out pretty well. The new direction of the product is now a totally different one.
Cynefin & decision making
Cynefin is a framework developed by Kurtz & Snowden and published in an article in 2003. It is roughly a framework for determining your “surrounding circumstances”. It can be used in a variety of contexts, but here we care about the decision-making part.

The four key takeaways from the Cynefin framework are (for more details, read the paper!):
- The three assumptions usually baked into decision making don’t hold all the time, just sometimes: not everything involving humans & markets is ordered; rational choice in humans is quite often an illusion, even though it is a good approximation most of the time; and not every “blink” is a “wink”, i.e. not every signal someone sends is actually a signal.
- There are five domains that govern our world, not just the ordered “simple & complicated” ones implied by the usual assumptions above.
- You should act in a very different way in each of them.
- There are a bunch of ways of pushing things from one domain into another, like exploration, exploitation, and the like.
Neither the chaotic nor the simple domain is in the realm of data-driven decision making; the complex and the complicated, however, are the two that interest us.
When to use what?
Parts of your business will fall into the complicated domain, parts will fall into the complex one. Kurtz & Snowden describe a “context exercise” in their 2003 paper which will roughly produce a map with which you can determine which decisions should be analytically data-driven and which should be experimentally data-driven.
The process for a session is:
- Focus on one context, maybe your company or parts of it. Get a group of people into one room, and give them some preparatory material to get them into the context.
- Do a structured brainstorming on whatever is important to your “sense-making” process.
- Draw up a Cynefin frame, just with the four corners marked.
- Let the group place the items that need next to no discussion in those corners.
- People place all other items somewhere in the square.
- Together, people draw the lines that clearly mark things as belonging to the proper domains; this should leave a large “disorder” area, which is discussed next.
- People discuss all of the disorder items and “pull in” the borders until only the heavily disputed items are left in the disorder domain.

Once you’re here, you know which parts to handle with an experimental approach, and which ones with an analytical one. But wait, you’re doing OKRs? Let’s see how this fits perfectly into this framework.
Pair it with OKRs!
So how does this work together with data-based controlling mechanisms? I think it works great. OKRs, as a framework focused on larger objectives, actually leave just the right amount of space for both analytical and experimental decision making when applied correctly. In the process of formulating an OKR, you simply have to be aware of which domain you are acting in. Here’s an example:
A typical sales objective could be:
Objective analytical: Increase sales revenue for product X by 20%
KR1: Sales revenue increase by 5% in large company segment
KR2: 30% increase in small-medium sized company segment
KR3: Every sales rep contacted 100 customers in Q1 about product X
KR4: Customer survey reports satisfaction scores of 7.5+ on product X
If you’ve been running the company for a few years, know the markets, and have a decent sales team, the problem of getting more revenue out of a particular product will probably fall into the complicated domain. You might deduce that to sell such a product you usually have a closing rate of X%, so you can break things down into numbers of customers to be contacted, per segment, and possibly a quality metric in the end.
On the other hand, if your product is new, you don’t know the markets well, the customers are new, or you’ve made some major changes to it, then this problem will probably fall into the complex domain. The objective changes a bit to incorporate probing & sensing, but the really important change is in the OKR check-in you should do regularly!
Objective experimental: Increase sales revenue for product X by 20%
KR1: Every sales rep contacted 10 customers in weeks 1–2 of Q1 about product X
KR2: Customer survey reports recommendation scores of 7.5+ on the product X beta
KR3: Every contact is surveyed on pros/cons of the product.
KR4: We get 500 new customers to use product X
KR5: Every sales rep contacted 500 customers in Q1
See the difference? Experimental means we don’t know which segment to focus on, so we simply estimate the new customer numbers (KR4 replaces the old KR1 & KR2). But it also means we have to probe to get tight feedback fast; that’s why we now have KR1 & KR5. For a new product we want recommendations, not just satisfaction, so we changed KR2. KR3 focuses on getting more feedback.
But the OKR process is not done with this objective! Unlike with the analytical objective, with the experimental one we have no idea where to go or which customers to focus on; we don’t even know whether we might need further development efforts for product X. So here we will have bi-weekly check-ins and change the KRs as we learn, hopefully arriving at something like the analytical objective by the end of the quarter: a target group we know, tight segments, and reliable numbers. But that requires constant probing & sensing throughout the quarter.

Now it’s your turn. Do you know which of your problems & decisions fit into the complex or complicated domain? Go ahead and map them out! Do your OKRs cover that difference? Does your approach to data-driven decision making reflect the reality of your domain?
Resources
- Kurtz & Snowden (2003), “The New Dynamics of Strategy: Sense-Making in a Complex and Complicated World”, IBM Systems Journal — the Cynefin framework.
- There is a great book by J. Highsmith et al. from ThoughtWorks called EDGE. So far I’ve only read the free chapter, but the ideas go in the same direction. Indeed, their argument is that today “experimental data-driven” is the key to success.
- More about Airbnb and their data-informed decision making.
- A CD4ML implementation on GitLab by the author with some explanations and the link to the generic concept by ThoughtWorks.
- For more on OKRs read Measure What Matters.