Before we get started, let’s level set. This series focuses on evolving your Product team’s analytical competency. In the first post of the series, we began by creating a baseline of your organization’s data culture and a quick model of how to start a cultural shift. The underlying belief is that having the right data and a bulletproof analysis is useless if the organization isn’t prepared to use it. This post will move past culture and focus on the raw materials of the analytics craft: useful data. By the end of this post, you will be able to identify the fundamental data you need. Along the way, I’ll share some anecdotes and lessons learned.
Common Questions About Data Availability
Upon joining Cornerstone, I started working with individuals across Product to help create measurement plans. I heard the following questions and responses from all over the Product team:
- “What data do I need?”
- “How do I answer this question?”
- “Which systems have this data?”
- “Who has this data?”
- “Oh, we don’t have access to that.”
- “Oh, we have access to that?”
- “Can you download this report for me?”
- “We don’t have this data.”
The common threads across these questions are: (1) awareness of your data needs, (2) data sources, and (3) data availability and access. This post will focus on (1) and lay out the most fundamental data you’ll need. Mind that the list of questions and metrics here is not complete. We’ll get to those in a later post in this series on analytics acumen.
Awareness: Knowing What You Need to Know
My first order of business was to understand what data our team needed to evolve our analytics. One of the fundamental jobs of a Product Manager is to achieve product/market fit for a valuable problem. The PM needs to understand what it means to achieve fit and how to measure value creation for the client.
Cornerstone’s core business is allowing enterprises to effectively systematize workflows and achieve compliance. (Our primary users are system administrators, managers, recruiters, HR business partners, learning and development managers, and so on.) With that in mind, we could geek out with metrics and blend measures into scores. But the reality is that we need to keep it simple and figure out: (1) are clients using the product/feature, (2) how are they using it, and (3) are they satisfied with it?
Adoption: Are your users using the product?
“Are they using the product?” The question is binary for simplicity. However, the underpinning question — “What does ‘using the product’ mean?” — requires a careful answer. At first, Cornerstone was not so sophisticated in answering it. We often measured adoption in Page Views. That doesn’t make much sense given the complexity of our enterprise software. For context, we serve thousands of clients, many with several hundred thousand employees. Across products, we estimate several hundred features with different levels of configurability. On a slow day, we handle tens of millions of administrative actions like assigning training and processing job applications. That number doesn’t include “end-user” actions like taking training or submitting job applications.
Let that sink in for a bit… Imagine if your local Apple Store counted the number of people walking in, but not the number of purchases made. Kinda weird, right? (For what it’s worth, our own reporting and data systems were lacking, but more on that later.) In any case, we moved on from vanity metrics to something more insightful. For each functional area of the platform, we defined usage as the completion of a funnel. Adoption would count if a user demonstrated a successful completion of a workflow.
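If it helps to see the mechanics, here’s a rough sketch in Python of what “usage as funnel completion” could look like against a raw event log. The event shape and step names are hypothetical placeholders (not our actual taxonomy), and a production version would live in your analytics pipeline rather than a script.

```python
from collections import defaultdict

# Hypothetical funnel for one functional area; swap in your own event taxonomy.
FUNNEL = ["open_requisition", "review_applicants", "advance_candidate"]

def completed_funnel(user_events, funnel=FUNNEL):
    """True if the user's time-ordered events hit every funnel step, in order."""
    step = 0
    for event in sorted(user_events, key=lambda e: e["timestamp"]):
        if event["name"] == funnel[step]:
            step += 1
            if step == len(funnel):
                return True
    return False

def adopters(events):
    """Users with at least one full workflow completion in the event log."""
    by_user = defaultdict(list)
    for e in events:
        by_user[e["user_id"]].append(e)
    return {uid for uid, evts in by_user.items() if completed_funnel(evts)}

# Tiny illustrative log: u1 finishes the workflow, u2 drops off mid-funnel.
events = [
    {"user_id": "u1", "name": "open_requisition",  "timestamp": 1},
    {"user_id": "u1", "name": "review_applicants", "timestamp": 2},
    {"user_id": "u1", "name": "advance_candidate", "timestamp": 3},
    {"user_id": "u2", "name": "open_requisition",  "timestamp": 4},
]
print(adopters(events))  # {'u1'}
```

Rolling that set up from users to clients gives you a client-level adoption count instead of a user-level one.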
Sidebar: We could split hairs that “a single user completing a single funnel” does not constitute adoption. I mostly agree. In Cornerstone’s case, the jobs to be done by our users aren’t always daily tasks and, in some cases, occur just a handful of times per year. The cyclical nature of our client base — think annual performance reviews, quarterly compliance training, monthly onboarding, etc. — translates into relatively low-frequency, high-volume actions. We’ve since matured to distinguish between administrative adoption and end-user / employee-level adoption. Choose the usage definition that fits best with your business.
Engagement and Retention: Do they keep coming back for more?
Alright. Now we know that they’ve adopted the product or feature. The immediate follow-up question should be “How are they using it?” We can break this down into a few critical pieces.
- Engagement — Using the product as anticipated. You should be able to tell whether users are going “off-script” or not. You design the product and expect users to behave and interact with it a certain way. Articulating “off-script” usage helps you adjust your product or strategy.
- Engagement — Using the product without friction. Friction can be defined in many ways. Our good friends in UX research measure it during user tests through “time on task”, “expected vs. actual effort”, “task completion rate”, and so on. Measuring these helps you quantify friction. Qualitative information can also convey the amount of friction in your product.
- Retention — Returning to the product on a regular basis. Ideally, you have a sense of the timing of your users’ needs. That heuristic helps you look at your users’ behavior and see if they are “off-schedule” (there’s a small sketch of this check below). Based on your design, you expect users to complete tasks on a regular cadence. If they don’t, then you have a few questions to answer and experiments to run.
- Retention — Having a long usage history. On another temporal note, it’s nice to build a product with a lot of staying power. While I don’t have a full-blown model to share, it’s easy to see how a well-retained user has an increased likelihood of renewal / repurchase. Repeated usage over an extended period of time implies that the product is valuable, or at least valuable enough compared to the alternatives.
The idea is to be clear on what it means for our users to be “on-script and on-schedule”. Once we can articulate that, we can measure the success of our product more effectively.
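To make “off-schedule” concrete, here’s a small sketch assuming a hypothetical cadence table keyed by workflow. The cadences and the tolerance are placeholders you’d tune to your own product’s rhythm.

```python
from datetime import date, timedelta

# Hypothetical expected cadences per workflow; tune these to your product.
EXPECTED_CADENCE = {
    "compliance_training": timedelta(days=90),   # quarterly cycle
    "performance_review":  timedelta(days=365),  # annual cycle
}

def off_schedule(last_completed, workflow, today, tolerance=1.25):
    """Flag a user whose gap since their last completion exceeds the
    expected cadence by more than `tolerance` (25% grace by default)."""
    gap_days = (today - last_completed).days
    return gap_days > EXPECTED_CADENCE[workflow].days * tolerance

# A quarterly task last completed ~5 months ago is off-schedule...
print(off_schedule(date(2019, 1, 15), "compliance_training", today=date(2019, 6, 20)))  # True
# ...while an annual review completed 6 months ago is still fine.
print(off_schedule(date(2019, 1, 15), "performance_review", today=date(2019, 7, 15)))   # False
```

The point isn’t the arithmetic; it’s that you can’t flag “off-schedule” behavior until you’ve written down what the expected schedule actually is.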
Sidebar: I mentioned in the sidebar above that most Cornerstone users have a cyclical and compliance-based need for the product. This is great for us as Cornerstone provides a necessary solution for our clients’ business continuity. However, there are two notable instances where clients are not required to use our system.
The first is when we build new features that are meant to replace or extend older functionality. This usually happens when we see users taking shortcuts and workarounds. Once, we launched a page that was meant to blend key admin functions across the system with a snapshot of all activity. The idea was to simplify and provide a one-stop shop for administrators. It was a gorgeous page that demoed well. Clients adopted it almost immediately upon launch. After a couple of weeks, they abandoned it. Why? Because of muscle memory. After years of doing it the old way, users didn’t want to put in the mental effort to find and navigate to this new page. Their muscle memory won the battle.
The second instance may be a little less riveting, but is more commonplace. Cornerstone has shifted more towards building employee-centric products (read: less compliance-driven). With that, we now have to expect / encourage usage patterns for tens of millions of users. This means that we have to be even more specific about our personas and their variants. Moreover, we need to be extra specific about adoption, retention, and engagement.
Satisfaction: How happy are they really?
Product and client satisfaction are wonderful indicators of the longevity of your business. At face value, satisfaction data measures what clients are thinking and feeling. But the underlying value is in its prescience. The trajectory and tone of the data tell us about our relationship with the client. Every touch point impacts that relationship and their propensity to renew, from their first website visit, through using the product, to dealing with support.
Satisfaction data can be either quantitative or qualitative.
- Quantitative — the standard measures are CSAT (Customer Satisfaction) and NPS (Net Promoter Score). CSAT is a transactional measure that asks about a specific experience with a product; it is a snapshot. NPS asks “How likely are you to recommend ____?” and measures the ongoing relationship with a product, service, or organization. Check out these links from Qualtrics if you want to learn more: What is CSAT? and What is NPS? (There’s a quick calculation sketch after this list.)
- Qualitative — this data can come in the form of survey responses, reviews, and comments. This type of data is especially powerful when reviewed alongside the quant measures above. Qual data helps us identify the main drivers for specific ratings — high and low. If you want to be fancy, you can run the text data through topic modeling to categorize / theme everything. You can also do sentiment analysis to create pseudo-quantitative measures of how people feel about your product.
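As a reference point, here’s a quick sketch of the standard quantitative roll-ups using made-up responses. The thresholds below (promoters at 9–10, detractors at 0–6, “satisfied” at 4+ on a 5-point scale) are the conventional ones, but double-check how your survey tool defines them.

```python
def nps(scores):
    """Net Promoter Score on 0-10 responses: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def csat(scores, satisfied_threshold=4):
    """CSAT as the share of responses at or above the threshold on a 1-5 scale."""
    return 100 * sum(1 for s in scores if s >= satisfied_threshold) / len(scores)

# Made-up survey data for illustration only.
print(round(nps([10, 9, 9, 8, 7, 6, 3]), 1))   # 14.3 -> (3 promoters - 2 detractors) / 7
print(round(csat([5, 4, 4, 3, 5, 2]), 1))      # 66.7 -> 4 of 6 responses are 4 or 5
```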
Sidebar: I was lucky to have worked with data geek and lone Client Experience Manager, Yulia Burova. Her North Star was satisfaction across the client journey map. We partnered early on to (a) clean up data across the org, (b) share knowledge across teams, and (c) improve Product’s roadmap prioritization.
A few reflections / lessons from working with Yulia (list not exhaustive):
- The data you need might exist elsewhere in the organization. Go forth and partner.
- The data you have and the data you need might not be clean or reliable. Assume that it isn’t.
- The data you have might be biased**. Assume that it is.
- There is proverbial strength in numbers (yay, analytics) and literal strength in numbers (yay, collaboration). Yulia and I were way more convincing whenever we could pitch ideas / programs together.
**Quick story time… Yulia and I were reviewing Qualtrics data (CSAT and NPS) and noticed something funky. The current measurements from her instrumentation were noticeably and consistently lower than measurements before she joined the company. Her approach was to send a satisfaction survey to a random sample of clients. We asked around and learned that our Client Success Managers were previously the ones in charge of distributing the surveys. The CSMs also got to not-so-randomly pick who they sent the surveys to. Please don’t do this. Just don’t.
Other Data to Consider
Finance Data — In reality, product analytics doesn’t stop at the categories above. In most organizations, one true measure of success is straight up money. How much revenue does the product generate? Is the product line profitable? What’s the growth trajectory looking like? Make sure to team up with Finance to get the data. Set up a regular work stream with FP&A to do some deeper modeling and analysis.
Customer Support Data — PMs should also be aware of the quality of the product. Rather, PMs are responsible for the quality of the product and, thus, should have situational awareness. We break down support tickets into product questions, service requests, and bugs / defects. Normalizing the number of cases by usage helps us keep a pulse on product quality. We’ve found that there’s a strong correlation between case resolution and client satisfaction. Surprise!
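The normalization itself is simple. Here’s the kind of calculation I mean, with made-up numbers and a hypothetical monthly-active-user denominator; use whatever usage measure lines up with your adoption definition.

```python
def cases_per_thousand_active_users(case_count, monthly_active_users):
    """Normalize support volume by usage so quality trends aren't masked by growth."""
    return 1000 * case_count / monthly_active_users

# Made-up numbers: raw case volume grows, but the normalized rate actually improves.
print(cases_per_thousand_active_users(450, 90_000))   # 5.0
print(cases_per_thousand_active_users(600, 150_000))  # 4.0
```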
Laying the Foundation for More Strategic PMing
After a few meetings, the PMs were able to articulate what data they should have about their products. This formed their measurement plans. We didn’t have all the data in hand, but the PMs could now define usage well and describe expected behavior. The PMs also started to use CSAT and NPS as another viable font of insight. The plans would soon help direct strategic decisions (like prioritizing the roadmap) and coordinate execution across teams, from engineering to client readiness and training.
But first, we needed to figure out where the data would come from and how we could tap into those sources. That will be the topic of the next post.
If you’re looking for inspiration on what data you need, feel free to use the framework I presented above. Thanks for checking out the post. I’d love to hear your thoughts and see how you’ve thought about the fundamental data you need. Leave a response! Share with anyone who might find this useful!