Chapter 11. Eliminate Waste – Data Versus Opinions
Once product development is underway, new feature ideas will keep making their way into the product backlog. Having a data strategy can help us evaluate these ideas and understand how our product performs against its success metrics. As we saw in Chapter 7, Track, Measure, and Review Customer Feedback, we need to track, measure, and review customer feedback, seeking both qualitative and quantitative insights. However, teams are easily distracted by the wrong data and by their own biases. We need to eliminate wasteful processes in how we seek and interpret data, and product teams need to make a data strategy an inherent part of their way of working.
Accordingly, this chapter will address the following topics:
- Defining the hypothesis that we seek to validate
- The problems with data
Defining the hypothesis
"If you don't know where you're going, any road will take you there." - Lewis Carroll
Product feedback can come in many ways. Chapter 7, Track, Measure, and Review Customer Feedback, outlined some of the internal and external channels used to gather feedback. The problem is that with input pouring in from all these channels, we can get overwhelmed. Too many voices in our ears become a distraction, and we don't know which voice to respond to because we often don't know where we're headed. Many times, businesses respond by listening to the loudest voice they can hear.
After all, the squeaky wheel gets the grease! A complaint on social media, or a suggestion from a close friend or a respected advisor, can suddenly become our focus of interest. It is indeed important that we respond to an influential consumer, whose opinion can cause damage to our brand. However, not every instance requires that we change our product because one user (however influential) complained. We can manage expectations with messaging, PR, and customer service too.
In fact, early-stage products frequently fall into this trap of getting feedback and advice, and not knowing which to heed and which to ignore. If you're a parent, you probably know what I mean. Raising an infant is one of the most challenging times for a parent. When we're fumbling about with no experience in handling a delicate infant, any support seems welcome. We look to our parents, neighbors, doctors, friends, and the internet to constantly check if we're doing the right thing. However, one untimely burp from the baby is all that is needed for advice to flood in. Grandparents, neighbors, and random strangers on the street advise you on what your baby is going through, how to hold him, what to feed him or not feed him, and why you need to be a better parent. When we're in unsure territory and need all the help we can get, it's difficult to turn down advice. However, free advice soon becomes a bane. Knowing whose advice matters, and more importantly whether we even need advice, is important. Every parent figures this out sooner or later, and so should product teams!
Working around strongly opinionated stakeholders requires people management skills. Having a data strategy as an inherent way to make product decisions can help us navigate the quagmire of opinions. Knowing how to leverage data to drive decisions and curb opinions is a key skill for product managers today. Data is key to learning about a product's success, and finding the pulse of the consumer is an important aspect of product building. We need to know whether we're headed in the right direction, whether our features are being used or not, who is using them, and how to measure the success of what we're building. On the other hand, gathering data without an idea of what we seek to validate or falsify can be wasteful.
Success metrics are a good way to define what we want to achieve, based on our capacity/capability. The core DNA of the business teams determines whether we put out bold, ambitious success metrics and the necessary operations to ensure that we deliver them. We also need to track our progress in meeting those success metrics.
There are two key phases of a feature idea (or product) where data-driven decision-making is crucial. The first phase is before we start to build a feature idea, and the second is after we have launched it. The type of validation we seek at each stage varies. In the first phase (before we build), we try to validate whether our assumptions about the impact we expect the feature to have hold good. In the second (after we launch), we try to measure how well the product is performing against its success metrics. Let's explore these two phases of data collection.
#1 – data we need before we build the feature idea
In Chapter 3, Identifying the Solution and its Impact on Key Business Outcomes, we discussed how to derive impact scores for a feature idea. For any impact on a Key Business Outcome rated higher than 5 on a 0-10 scale, we came up with detailed success metrics. But how did stakeholders arrive at these ratings? What made them decide to rate an idea as having an impact of 2 on one key business outcome and 8 on another? Was it a gut feeling? Well, it is not entirely a bad idea to go with our gut feeling. There are always situations where we don't have data or there is no existing reference. So, we are essentially placing a bet that the feature idea will be able to meet our key business outcomes.
However, we cannot presume that this gut feeling is right and jump straight into building the feature. It is important to step back and analyze whether there are ways to find indicators that point us toward the accuracy or inaccuracy of our gut feeling. We need to find ways to validate the core assumptions we're making about the impact a feature will have on key business outcomes, without building the product or setting up elaborate operations. These could be small experiments that we run to test some hypotheses without spending much of our resources. I refrain from using the term minimum viable product here, because in many cases, technology or business viability isn't what we're going after. These experiments are more about getting a pulse of the market. They are similar to putting up posters for a movie to tease the interest of the audience, before making the movie itself.
We can activate the interest of our target group by introducing pricing plans with the proposed feature included in one bundle and excluded from another, and see whether customers show an interest in the new feature. We can also try out teaser campaigns, landing pages with sign-up options, and so on, to see whether the feature piques our customers' interest and whether they are willing to pay for it. Problem interviews with a targeted group of customers can also be a useful input. For instance, let's say ArtGalore seeks to find out whether introducing a gift-wrapping option will result in increased purchases of artworks during the festival season in India. We can add content to the ArtGalore website introducing the concept of gifting artworks for the festive season, and track the number of views, clicks, and so on, as sketched below. The entire gifting experience and the end-to-end process need not be built or thought through until we know that there is enough interest from customers.
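As a rough illustration, here is a minimal sketch of how interest in such a teaser could be measured. The event names, the gift_teaser label, and the event structure are hypothetical assumptions for this example; most analytics tools provide equivalent aggregations out of the box.

```python
from collections import Counter

# Hypothetical event stream captured by our analytics tooling.
# Each event is (event_name, page); the names are illustrative only.
events = [
    ("page_view", "gift_teaser"),
    ("page_view", "gift_teaser"),
    ("page_view", "gift_teaser"),
    ("teaser_click", "gift_teaser"),
    ("signup_interest", "gift_teaser"),
    ("page_view", "home"),
]

# Count only the events that relate to the gifting teaser.
counts = Counter(name for name, page in events if page == "gift_teaser")

views = counts["page_view"]
interested = counts["teaser_click"] + counts["signup_interest"]

# Conversion rate: what share of visitors who saw the teaser showed interest.
rate = interested / views if views else 0.0
print(f"Teaser views: {views}, interested: {interested}, conversion: {rate:.0%}")
```

The point is not the tooling but the discipline: decide up front which signal (views, clicks, sign-ups) will count as validation before spending anything on the full gifting experience.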
A big advantage of product experiments, especially in software, is that we can be Agile. We have the opportunity to make minor tweaks quickly, run multiple experiments in parallel, and respond fast to what we're observing. This allows us to conserve our resources and direct them toward the things that are working for us.
We need to figure out the best way to validate our bets. What doesn't work in the early stages of a product may work well in a later stage of maturity. What works well with early adopters may not work well with a scaling consumer base. What works in one demography may not work in another. If we hold onto our opinions without an open mind, we're in for trouble.
Agility and learnability are key when we're figuring out how to survive. Having a testable hypothesis is about validating our riskiest proposition. If our hypothesis is falsified, then it's time to pivot (if the feature idea is core to the business model) or to keep the idea out of the product backlog (if it is not). As author Ash Maurya puts it, "Life is too short to build something that nobody wants." We can keep our product backlog lean by adding only those feature ideas that have the backing of early validation metrics.
#2 – data we need after we launch a feature
Once we launch a feature, we also need to measure and track how our product responds under different scenarios. We defined success metrics to validate the bets we made about the feature idea's impact on key business outcomes. While we check these metrics, we are also evaluating other limitations of our product. Does our product still work well when there is a surge in demand? How does our app respond to peak-time demand? What if the peak demand period shifts? What if a new consumer base is adopting our product? Does our product work well in a different geography?
These are ongoing health checks needed to ensure that our product continues to deliver value to the consumer and to the business. The data we gather while a feature is live will be useful in the next phase of product building. If the product's performance is stopping us from meeting customer demand, then this is an important input for stakeholders when they decide which key business outcome to invest in. These metrics not only help us respond to change but also help us enhance our product's capabilities and identify its limitations. Individual health metrics may not provide enough data to drive a decision to pivot. Over time, however, they can provide ample data points to unearth trends, bottlenecks, and strengths, and they can help us understand the success or failure of an individual feature. Databases, user interaction analytics, volume and performance tracking tools, and so on, can be part of our data strategy to capture and analyze data and trends over time.
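As a small illustration of tracking such health metrics over time, the sketch below computes a rolling trend from daily order volumes and flags surge days. The figures, column names, and the 25% surge threshold are assumptions made purely for this example.

```python
import pandas as pd

# Illustrative daily order volumes; in practice these would come from our
# database or analytics pipeline (the column names here are assumptions).
daily = pd.DataFrame({
    "date": pd.date_range("2024-01-01", periods=14, freq="D"),
    "orders": [120, 130, 128, 150, 210, 220, 180,
               160, 170, 175, 240, 260, 230, 200],
}).set_index("date")

# A 7-day rolling average smooths daily noise and surfaces the underlying trend.
daily["orders_7d_avg"] = daily["orders"].rolling(window=7).mean()

# Flag days where demand runs well above the recent trend (threshold is arbitrary).
daily["surge"] = daily["orders"] > 1.25 * daily["orders_7d_avg"]
print(daily.tail(7))
```

Watching a handful of such trends week over week is often enough to spot the bottlenecks and capacity limits described above before they turn into problems for customers.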
The problems with data
- Accessing data: One of the bottlenecks with data (or information) is that it is usually hoarded. Access to data is often limited to technology/data teams or to a few exclusive users. So, stakeholders come to depend on the technology/data teams to provide them with data. They raise requests for data or reports, which data teams fulfill based on how much time they have on hand. The data/technology teams make decisions on the fly about when to share data, who to share data with, and in what formats to share it. When a more powerful stakeholder requests data, it is assumed that the need is urgent, and data teams may drop everything else to attend to it. When someone less powerful requests data, teams may deprioritize the task and not respond as swiftly. These requests also come in sporadically, so there can be redundant requests from different teams, and so on. Working on these requests takes time and requires technology/data teams to switch context from product development to addressing ad hoc requests. This is one instance of a feature black hole that we saw in Chapter 10, Eliminating Waste – Don't Build What We Can Buy.
It is imperative that today's product teams start with a data mindset. Data strategy and accessibility must be built into a product team's DNA. We cannot assume that we will handle this if the need arises. In many cases, stakeholders don't know the power of data until we show them. Stakeholders also hold themselves back from seeking data because the process of getting data is hard and cumbersome, especially when it feels like they are imposing on the technology team's time. So, it becomes a Catch-22: technology teams don't build a data strategy because they don't see stakeholders asking for data, and stakeholders don't ask for data because there isn't an easy way to access it.
A product strategy must proactively plan and set up ways to collect data and share it transparently, without elaborate procedures. The discussion on success metrics is a good indicator of the type of Key Performance Indicators that should be captured. An effective data strategy sometimes doesn't even need complicated digital tools to capture data; simple paper-based observations are sometimes enough. Key metrics around revenue, acquisitions, sales, and so on, can even be shared on a whiteboard, with a person assigned exclusively to doing this. This works for a small team with an early-stage product, but finding digital tools in the market that allow real-time visualization isn't very hard either.
- Running incorrect experiments: In the nonprofit organization where I worked, the finance team wanted us to build the ability for our investors to donate or invest money every month in the rural borrowers listed on our platform. The problem was that investments/donations were sporadic. There was no way to predict how many investors would invest every month. Because the investment amount was not predictable, we could not determine how many borrowers we should be onboarding. Indian businesses (with the exception of a few utility services) do not have the ability to automatically bill credit cards. So, our best option for getting consent once and receiving money automatically was to set up monthly direct auto-debits from bank accounts. However, the banks required paperwork to be signed and submitted before enabling this.
The finance team was convinced that investors were not investing every month because we hadn't made this process easier for them. The product team was asked to pick this up as a priority, and we started designing the feature. We soon realized that this was a huge feature to implement, purely based on the complexity of rolling it out and the dependencies on banks to deliver it successfully. We didn't have to estimate story points to figure out how big this was. Also, the paperwork aspect was a government regulation and outside of our control. So, while we could build requests for auto-debits into the workflow of the product, the paperwork still had to be done.
The team was being pressured into delivering this, so we started to gather some data. Why did the finance team think this feature would be so impactful in ensuring predictable monthly investments? The finance team insisted that every single customer they had spoken to wanted this option. Now, 100% of consumers wanting to invest every month is too compelling to ignore. Everyone in the leadership team was now convinced that implementing this feature was crucial for us to get repeat investments. Yet as we dug deeper and looked at our data, we found that only a minuscule percentage of our investors were investing through direct bank debits. The finance team had apparently spoken to only 15 people over the past three months. In a consumer base of over 9,000 folks, 15 (the numbers are only indicative and not actuals) was not a big enough sample to base our product decisions on. Essentially, this was a decision based not on facts, but on an opinion arising out of a limited context. Did it make sense for us to invest in a feature that affected so few of our consumers? If all our investors who were investing through other payment options, such as credit cards, debit cards, and payment wallets, had to transition to paying through auto-debit, it would have presented a huge operational burden for us, given the paperwork involved. It was clear that, given our finance team's capacity, this was not doable.
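To put the sample-size concern into perspective, here is a minimal sketch of a standard sample-size calculation for estimating a proportion. The 95% confidence level, the ±5% margin of error, and the population of 9,000 mirror the indicative numbers in the anecdote; they are assumptions, not the actual figures.

```python
import math

def required_sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
    """Sample size needed to estimate a proportion, with a finite
    population correction. p=0.5 is the most conservative assumption."""
    n_infinite = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    return math.ceil(n_infinite / (1 + (n_infinite - 1) / population))

# Indicative numbers from the anecdote: ~9,000 investors, 15 interviewed.
needed = required_sample_size(9000)
print(f"Responses needed for +/-5% at 95% confidence: {needed}")  # ~369
print("Responses the finance team actually had: 15")
```

Fifteen enthusiastic interviewees can be a useful source of qualitative insight, but they are nowhere near enough to claim that "100% of consumers" want a feature.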
Once we had invalidated the basis on which the impact on business outcomes had been made, we ran a new experiment. We were now trying to validate if our investors (who were investing through other payment options such as credit cards, debit cards, and payment wallets) were even inclined to invest in us every month. If so, how many such investors were ready?
We built something very simple to validate this. We introduced an option for users to tell us whether they wanted a reminder service that would nudge them to invest in rural entrepreneurs every month. It took us half a day to add this option to our investment workflow. If they chose this option, we informed them that we hadn't yet built the feature and thanked them for helping us to improve our product. After three months of observation, we found that ~12% (the numbers are only indicative and not actuals) of the consumer base (who transacted on our website) opted in.
This was a big improvement over our earlier target base. While it was a good enough indicator and worth exploring, we were still limited by our inability to automatically charge credit cards. So, we limited our solution to a reminder service that sent automated emails on specific dates to the customers who had opted in, and we tracked conversions from those emails. We explored our data to see if there was a trend of investments peaking on certain days or dates each month, and found that investments did peak on certain dates. We scheduled our reminder emails to be sent on the peak investment date of each month.
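A minimal sketch of such a peak-date analysis is shown below. The transaction data and column names are made up for illustration; the general idea is simply to group investments by day of the month and pick the busiest day.

```python
import pandas as pd

# Hypothetical transaction log: one row per investment (names are illustrative).
transactions = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2024-01-02", "2024-01-05", "2024-01-05", "2024-02-01",
        "2024-02-05", "2024-02-05", "2024-03-05", "2024-03-28",
    ]),
    "amount": [500, 1200, 800, 300, 950, 400, 700, 600],
})

# Group investments by day of the month to see whether activity clusters.
by_day = transactions.groupby(transactions["timestamp"].dt.day)["amount"].agg(
    total="sum", count="count"
)

peak_day = by_day["count"].idxmax()
print(by_day)
print(f"Most investments land on day {peak_day} of the month; "
      "schedule reminder emails just before that date.")
```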
After three months of observing conversions from reminder emails, we figured that this strategy was working well enough for us. We continued to sign up more investors and to socialize the payment reminder on our website.
- Learning from the wrong data: What if we have compelling data, but the data is flawed in how we chose to collect it? Design has a great influence on how people use products, for instance, coercive design versus persuasive design. These concepts boil down to simple things, such as which option presented to the user is checked by default. If the option to donate $1 to charity is checked by default and hidden at the bottom of a page where no user sees it, then we can't claim that visitors to our website are very generous.
Basing product decisions on data alone is not enough. It is necessary to collect ample verifiable evidence, but it is also important to capture this data at a time when the consumer is in the right context. For instance, asking for feedback on a website's payment process two weeks after a customer purchased something trivial may not work very well. Context, timing, content, and sample size are key to finding data that is relevant and usable.
- Bias: Gathering data is only half the battle. Interpreting data is the dangerous other half. Human cognitive biases account for a big part of the incorrect decisions we make based on data. We feel good that we have based our decisions on data, and so we don't even recognize the inherent biases we bring into making those decisions.
For instance, my biases influence how I configure my social feeds. I found that a lot of content on my feeds was not appealing to my tastes or opinions. I started unfollowing a lot of people. I got picky about the groups and people I followed. Voilà, my social feed was suddenly palatable and full of things I wanted to hear.
This personal bias can easily trickle into how we make recommendations on product platforms. We recommend songs, movies, products, and blogs based on our consumers' own likes and dislikes. This means that we are essentially appealing to the confirmation bias of our consumers. The more content we show them that appeals to their existing interests, the more likely they are to engage with us. This shows up as a positive trend in our engagement rates, and our recommendation strategy gets further reinforced. In the long run, though, we are slowly but silently creating highly opinionated individuals who have very little tolerance for anything but their own preferences.
Whether this is good or bad for business is dependent on the business intent itself. However, the bigger question to ask is: how do we learn something new about our customers, if we don't go beyond their current preferences?
Our bias also influences how we interpret data. For example, we might start with the hypothesis that women don't apply for core technology jobs. This might mean that our ads, websites, and social content have nothing that appeals to women. Yet, if the messaging and imagery on our careers website are attuned to middle-aged men in white-collar jobs, can we claim that we can't find women who are qualified to work with us? Does this prove our hypothesis correct?
Summary
In this chapter, we found out that formulating an effective data strategy right from the start can help businesses to build a healthy data culture. Product decisions and priorities can be made objectively. We can eliminate waste in our data strategy by following some simple tactics:
- Defining what to validate
- Defining our success metrics
- Digging deeper into data that sounds like facts, but is actually only opinions
We now have a lean product backlog and data to back up our success metrics. In the next chapter, we will find out whether our team processes are slowing down our delivery of the Impact Driven product.