Product analytics case study

Hi All,

Looking for some help regarding a business case study I am trying to solve. I am new to this company and am trying my best to impress, so would love some ideas. For the sake of anonymity, I will be vague but will try to add details where I can.

Goal: How can we measure the value/impact of integrations to our business?

Background: Imagine we work for a chat company like Slack. Slack integrates with many different apps: Calendar apps (Google Calendar/Outlook), Zoom, JIRA, etc.

How can we go about answering this question: what is the value of these integrations?

Some things I have thought about:

  • We obviously can’t just compare Group A (users who have these integrations installed) vs. Group B (users who do not), because users who install integrations likely differ systematically from those who don’t (e.g., more engaged teams) - there is a lot of noise and selection bias in that comparison.

  • It’s hard to A/B test these because you can’t just make someone use an integration.

  • Could we maybe run an A/B test where we send alerts nudging our test group to install/try integrations, and measure their engagement over time? (See the sketch after this list.)
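A minimal sketch of how that nudge experiment (an “encouragement design”) could be analyzed - the file and column names here are hypothetical:

```python
import pandas as pd
from scipy import stats

# Hypothetical export: one row per user in the experiment.
# notified = 1 if the user was sent the "try this integration" alert,
# installed = 1 if they went on to install it,
# msgs_per_week = whatever engagement metric we picked.
df = pd.read_csv("experiment_users.csv")  # assumed file and columns

# Intent-to-treat (ITT): compare everyone we notified vs. everyone we didn't,
# regardless of whether they actually installed - this preserves randomization,
# so we never have to force anyone to use the integration.
notified = df.loc[df["notified"] == 1, "msgs_per_week"]
control = df.loc[df["notified"] == 0, "msgs_per_week"]
itt = notified.mean() - control.mean()
t, p = stats.ttest_ind(notified, control, equal_var=False)
print(f"ITT effect: {itt:.2f} msgs/week (p = {p:.3f})")

# Not everyone complies (installs), so the ITT effect understates the effect
# of actually using the integration. A simple Wald/IV estimate rescales it by
# the difference in install rates between the two arms.
take_up = df.groupby("notified")["installed"].mean()
print(f"Effect on compliers (LATE): {itt / (take_up[1] - take_up[0]):.2f} msgs/week")
```

The key point is that the thing we randomize is the notification, not the install itself.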


I would try to split users into two similar groups and A/B test them.
There could be multiple meanings of “value”; I would start by looking at the KPIs the business currently cares about. Maybe it’s the number of new users recommended by the existing pool, the number of support calls received from existing users, or the number of new hashtags about the business shared by existing users.

What I feel is difficult is that an activity may require a collection of items to be in place all at once before it works, so A/B testing only one integration at a time may not show the effect of adding that integration alone - like how a bicycle needs wheels, a chain, and a rider shifting their weight before it balances. (See the interaction-term sketch below.)
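One way to express that “parts must work together” idea in an analysis is to include interaction terms, so a model can show whether two integrations together add more than the sum of each alone. A minimal sketch with statsmodels - all column names are made up:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-team table: indicator columns for two integrations plus an
# engagement outcome. The interaction term captures the "bicycle" effect -
# value that only shows up when both pieces are in place at once.
df = pd.read_csv("teams.csv")  # assumed columns: has_zoom, has_calendar, msgs_per_week

# "has_zoom * has_calendar" expands to both main effects plus their interaction.
model = smf.ols("msgs_per_week ~ has_zoom * has_calendar", data=df).fit()
print(model.summary())
# The has_zoom:has_calendar coefficient is the extra effect of having both
# together, beyond the sum of each integration alone.
```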

Thank you for the response! I think you’re hitting the nail on the head when you talk about the difficulty of A/B testing this.

But thinking more about the design of this A/B test:

  • Let’s say we are OK with A/B testing just one integration - say we work for Slack and want to test the Zoom integration.
  • Let’s say we have come up with a couple of metrics that we really like.
  • How do we actually run the test on this metric? For example, what exactly is our test group? It’s not like we can force people to use an integration.

Have you heard of a “natural experiment”? It occurs when the experiment designers cannot control who gets the treatment, for whatever reason. I have never conducted one, but looking at how other researchers deal with it may give you some ideas.

Thank you hanqi. This looks pretty great. I also found this link on difference-in-differences (DiD) analysis, which seems helpful: https://medium.com/analytics-vidhya/https-medium-com-kcpub21-did-analysis-a8317c5aa5e6

It looks like this method still requires a “manipulation” point, and I am still struggling to come up with a good way to introduce or define our manipulation for integrations.

For example, we cannot force someone to use an integration, so what could our manipulation be here? One option is to use marketing/notifications as the manipulation - e.g., send a notification saying “hey, come check out this integration” - and treat that as the intervention point (a DiD sketch around that idea is below).
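If the notification works as the manipulation point, the DiD estimate from that article boils down to one regression. A minimal sketch - the file and column names (including user_id) are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per user per week.
# treated = 1 for users who got the "check out this integration" nudge,
# post    = 1 for weeks after the notification went out,
# msgs_per_week = the engagement metric.
panel = pd.read_csv("user_weeks.csv")  # assumed file and columns

# Difference-in-differences: the treated:post coefficient estimates the
# integration's impact, net of (a) pre-existing differences between the two
# groups and (b) trends that hit everyone at the same time.
did = smf.ols("msgs_per_week ~ treated * post", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["user_id"]}
)
print(did.params["treated:post"], did.pvalues["treated:post"])
```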

Thoughts on how else we can do this?

If you didn’t care about everyone in the treatment group receiving treatment from exactly the same point in time, then there is no need to worry about when or how to manipulate.
It becomes purely observational: find out who is using the integration at a certain point/period in time, ignore how/why they began using it, and put them in the treatment group.
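Since people self-select into using integrations, one way to make that observational comparison fairer is to match each integration user to a similar non-user before comparing (propensity score matching). A minimal sketch - the file, covariates, and column names are all made up:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("users.csv")  # assumed file; all column names are hypothetical
covariates = ["team_size", "tenure_days", "baseline_msgs_per_week"]

# 1. Model each user's probability of being an integration user from
#    pre-existing traits (the propensity score).
ps = LogisticRegression().fit(df[covariates], df["uses_integration"])
df["pscore"] = ps.predict_proba(df[covariates])[:, 1]

# 2. For each integration user, find the non-user with the closest score.
treated = df[df["uses_integration"] == 1]
control = df[df["uses_integration"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched = control.iloc[idx.ravel()]

# 3. Compare outcomes on the matched pairs instead of the raw groups.
print(treated["msgs_per_week"].mean() - matched["msgs_per_week"].mean())
```

This only adjusts for traits we can observe, so some bias can remain, but it is much fairer than the raw Group A vs. Group B comparison.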

But then aren’t we ignoring biases and running a high risk of confounding? So we’d compare users who installed/use the integrations vs. those who didn’t - but isn’t there strong selection bias between users who choose to use integrations and those who don’t?

Or are you saying we can compare user activity before they install an integration vs. after they installed it?

I was thinking in terms of the stated goal, where “integrations” seems to describe a state of things - integrated vs. not integrated (with one tool or multiple tools) - rather than an action (e.g., making a company add an integration), so it didn’t occur to me that there was anything to manipulate, or any cause-effect (and therefore any confounding) to uncover.

I wasn’t thinking along the lines of a per-user study comparing before and after integration as you suggested, because I assumed the study was only being thought of now, so there would be no historical data on the pre-integration state. If there is data on each user and not too many users (or user types), then comparing before and after integration per user would be quite accurate - a minimal sketch of that comparison is below.
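Assuming that historical data does exist, each user can serve as their own control - a minimal paired pre/post sketch, with hypothetical file and column names:

```python
import pandas as pd
from scipy import stats

# Hypothetical table: one row per user who installed an integration, with
# average engagement over matching windows before and after the install date.
df = pd.read_csv("pre_post.csv")  # assumed columns: msgs_before, msgs_after

# Paired test: each user serves as their own control, which removes
# between-user differences (though not time trends or seasonality).
t, p = stats.ttest_rel(df["msgs_after"], df["msgs_before"])
print(f"Mean change: {(df['msgs_after'] - df['msgs_before']).mean():.2f} (p = {p:.3f})")
```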