Over the years, I've conducted dozens of research streams with hundreds of participants. This is a sampling of the questions I sought to answer through that research, with details anonymized to protect the confidentiality of both organizations and participants.
The Contextualized Question: We had been on a path of high growth and high innovation for a while, and our leadership team was eager to see that continue. How would our customers respond to a proposed roadmap with a heavy continued emphasis on innovation?
The Process: An in-person group meeting with 15 strategic, long-standing customers.
The Answer: Our customers were brutally honest with us: the innovation was interesting, but our defect rate had been increasing. They had no interest in new features until we improved quality. I turned off my presentation, sat down, and took notes. They outlined the problems and the effect that our system's reduced credibility had on their day-to-day operations.
The Impact: I introduced "swimlanes" in the roadmap to allow forward progress on strategic initiatives while devoting the majority of engineering time to quality. Defects went down, and customers responded extremely positively to our willingness to listen. We built a devoted core of customers whose renewals and references drove measurable additional business.
The Contextualized Question: Like everyone else, we were running hard at multiple AI-related projects but were unsure how customers would perceive the enhancements. Would AI-based insights and features be trusted?
The Process: Two rounds of phone interviews with 5 customers.
The Answer: Interestingly, although customers self-identified into early adopter and late adopter groups, both held similar perspectives. All were curious about the insights that ML, modeling, and automated visualizations could unlock from their data, while chatbot/LLM-style features seemed more whiz-bang than useful. But both early and late adopters expressed concern about the credibility of those insights.
The Impact: We turned the dials on our roadmap to accelerate visualizations, while slowing efforts on a planned chatbot. We also worked with our BI vendor to ensure that any AI-generated insights would offer full transparency into their sources.
The Contextualized Question: We had developed an innovative ML-based method for helping customers identify critical trends within giant data sets. We were ready to surface these insights in workflow tools serving a variety of users, but wanted to be sure that business users would act on the data. How could we maximize trust and adoption of the model's recommendations?
The Process: In person and video conference conversations with 10 users of varying seniority within the pilot organization.
The Answer: Although more senior leaders bought into the goal of the project, most users had a difficult time believing that the app could identify insights better than they could. It felt like a "black box" and risked being underutilized.
The Impact: We redesigned the data recommendations to offer smaller sets of insights, with clear documentation of why each datapoint was raised. This increased transparency allowed the released product to be well received by all user types, enabling the pilot to grow into a significant driver of new revenue.
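To make the design concrete, here is a minimal sketch of how a recommendation payload along these lines might look; the field names and the confidence-based cap are my illustrative assumptions, not the product's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Insight:
    metric: str        # e.g., "weekly order volume" -- illustrative only
    value: float       # the flagged datapoint
    rationale: str     # plain-language reason the model raised it
    confidence: float  # model confidence, surfaced to build trust

def top_insights(candidates: list[Insight], limit: int = 3) -> list[Insight]:
    """Return a small, high-confidence set instead of an overwhelming list."""
    return sorted(candidates, key=lambda i: i.confidence, reverse=True)[:limit]
```

Capping the list and pairing each datapoint with a rationale was the change that moved the product away from the "black box" feel.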
The Contextualized Question: We had little insight into our buyers or their purchase process, other than the operational details needed to fulfill the orders placed. What kinds of buyers do we serve, and how can we optimize messaging, product information, and features to serve them?
The Process: Phone conversations with 10 won customers as well as 5 "lookalikes" to ensure we didn't bias our decisions toward a self-selected sample.
The Answer: Buyers subdivided into two primary categories: those who were price-sensitive solo decision makers, and those who were making decisions as part of a larger buying center. That buying center was not addressed as thoroughly by our products as it could be, introducing a straightforward opportunity to add the information and services they needed.
The Impact: Providing the correct information for the complex buying center during the purchase process dramatically improved retention and revenue, while reducing the cost of our own staff to support these customers.
The Contextualized Question: Customers would reach the last stage of the funnel but would often fail to purchase. The leadership team had divergent theories on why, with different solutions: Did a complex buying center need better sharing mechanisms? Did the site's delayed calculation of the final price create mistrust? Was there insufficient information about the product on the site?
The Process: Instead of an engineering-intensive A/B experiment, it would be faster and lower-cost to analyze our data and interview customers about their experience. I used funnel analytics from the prior 12 months, supplemented by phone calls with 5 customers and 10 "lookalikes" so that we did not bias our results with a self-selected sample.
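For a sense of what the analytics half of that process looked like, here is a minimal sketch of a stage-to-stage funnel computation in Python; the export file, column names, and stage labels are hypothetical stand-ins rather than our actual data model.

```python
import pandas as pd

# Assumed funnel stage order -- illustrative labels, not the site's real stages.
STAGES = ["catalog", "cart", "checkout", "purchase"]

# Hypothetical export with one row per (user_id, stage) a shopper reached.
events = pd.read_csv("funnel_events.csv")

# Count distinct users reaching each stage, in funnel order.
reached = (
    events[events["stage"].isin(STAGES)]
    .groupby("stage")["user_id"]
    .nunique()
    .reindex(STAGES)
)

# Stage-to-stage conversion: the largest drop marks the problem stage.
conversion = reached / reached.shift(1)
print(pd.DataFrame({"users": reached, "conversion_from_prior": conversion}))
```

The biggest stage-to-stage drop told us where to focus the customer conversations.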
The Answer: Although all theories held some truth, the research revealed how sensitive customers were to changes in the final price. Surprises (especially regarding price) break trust. Customers found stable prices more important than other features -- and if stable prices weren't possible, it would be better to omit the price upfront than to display one number and then change it at checkout.
The Impact: Reducing price changes from catalog to checkout reduced funnel drop-off and doubled month-over-month bookings, boosting revenue and reducing the cost of acquisition.
The Contextualized Question: The outgoing legacy BI system could only support data refreshes once or twice a day. The upcoming data lake could support more frequent refreshes, but only by trading off other desired features. Would a 2-4 hour lag be acceptable?
The Process: Phone conversations with 5 strategic customers representing a variety of segments.
The Answer: No, a multi-hour lag would not be acceptable. Customers use data to drive operational decisions. Every customer cited multiple concrete instances where data no more than 15 minutes old would have been important to optimizing their internal resources. Although they would accept an hourly turnaround initially, they expected refreshes up to four times an hour to be possible in the near future.
The Impact: This information changed the selection criteria for the new BI system and the architecture of the underlying data lake. The roadmap shifted to accommodate the changes.
The Contextualized Question: Early versions of the product shipped with built-in dashboards and canned reports, and we believed customers would benefit from ad hoc reporting. Our power users were data-savvy decision makers, but would they readily adopt the slice-and-dice style of analytics we were designing?
The Process: A mix of in-person and phone conversations with 10 reporting users.
The Answer: No, even the most data-savvy of our users weren't sure how to get started with a blank slate. They loved the idea of full control, but needed the scaffolding of a few prebuilt templates to get "unstuck" when faced with the new analytics system.
The Impact: The simple addition of templates drove an immediate increase in adoption and stickiness, with no downside to users' perception of flexibility and analytical power.
The Contextualized Question: Our existing APIs were built according to the needs of customers several years ago. In the time since, our usage data showed that API consumers were calling the same endpoints repeatedly. What are the modern use cases that we can serve with either new or redesigned endpoints?
The Process: Usage analytics for a 30-day period, supplemented by conversations with the top 4 API users.
The Answer: Surprise twist -- our customers weren't our biggest users. They were confused by our outreach, because they weren't making these calls. Although the usage data showed specific customer names, it was really an application within our own portfolio that was calling the API on behalf of those customers. We pivoted our research to our peer Product Managers and Architects and identified a series of iterative changes that would address their needs.
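Here is a minimal sketch of the kind of attribution step that can surface this, assuming a hypothetical access-log export with api_key, user_agent, and endpoint columns; grouping by credential alone had mapped every call to a customer name, so the sketch also groups by a client signature.

```python
import pandas as pd

# Hypothetical 30-day access-log export: one row per call.
logs = pd.read_csv("api_access_log_30d.csv")

# Group by credential AND client signature -- the credential alone attributed
# every call to a named customer and hid the true caller.
calls = (
    logs.groupby(["api_key", "user_agent", "endpoint"])
    .size()
    .reset_index(name="call_count")
    .sort_values("call_count", ascending=False)
)

# "internal-portfolio-app" is an illustrative placeholder for the signature of
# the application in our own portfolio that turned out to be the top consumer.
calls["internal_caller"] = calls["user_agent"].str.contains("internal-portfolio-app")
print(calls.head(10))
```

Splitting traffic by client signature, not just by credential, is what exposed that a single internal application was generating the load attributed to customers.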
The Impact: Knowing the correct audience for our APIs allowed us to prioritize correctly, dramatically reducing system load and streamlining the movement of data across our portfolio.