Jul 27, 2018

Guide: set up your data team for success

Post by Michael Muse
We are living in the golden age of Data-Driven Organizations. Algorithms! Big Data! Why, you’ve probably even got a Data Scientist or two! But…

If your data people are spending most of their time doing simple arithmetic on tricky business concepts, then you’re being overly generous calling it Data Science. It’s really just Business Intelligence. So which is your team doing?


To be honest, it’s hopefully somewhere in between. The distinction doesn’t matter. Academic rigor is useless if it doesn’t inform business outcomes. What does matter, on the other hand, is that a lot of what they are doing likely has NO scientific OR business merit. And it probably has nothing to do with their competence at complex math. Know what’s really holding them back?

[Image: VP of Businessing]

It’s you. Leader of an operational team. Lover of endless dashboards:

[Image: How ‘bout them KPIs?]

Like an Engineer with a bad Product Manager, your data team will be set up to fail unless you, the business owner, have set them up for success upstream. Don’t be a fraud; be a real ‘numbers person’.

Eight fairly simple gut-check strategies can help you avoid the most dangerous mistakes.

The Tale of a Zombie Experiment

Our first strategy comes in story form:

1) Know if Analysis is Even Worth Doing

This seems obvious, but in my experience it is the most frequently missed one. It tends to sneak up on you. It snuck up on me, but I’ll show you how to catch it. This is a true story of a time I botched it, and why.

As our CRM admin, I was helping scale a support team that would handle over 22,000 threads through our CRM in under three years. Agents could tag a Case by Reason(s) when closing it. Here’s the key: at least one tag was required. Managers set the requirement without much concern for operational cost. And I wrote the code to validate this in our CRM. Operational cost be damned. We would be a data-driven org.

What was the operational cost? Over three years, people tagged roughly 22,000 of these threads. A conservative estimate of 5 seconds to tag a Case (it was probably more) puts this effort at around 30 hours of data entry. This might not seem like much, but imagine sitting someone down and telling them their job for a whole week was solely creating this data. We essentially did this, so what did the managers glean from this costly endeavor?
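(For reference, a quick back-of-envelope sketch of that estimate; the 5-second figure is the conservative assumption from above.)

```python
# Back-of-envelope cost of mandatory Case tagging.
# Assumptions from the story: ~22,000 tagged threads, ~5 seconds per tag.
threads = 22_000
seconds_per_tag = 5

total_hours = threads * seconds_per_tag / 3600
print(f"~{total_hours:.1f} hours of pure data entry")  # ~30.6 hours
```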

Nobody ever ran the report. How do I know? Had they done so, even once, they would have seen a pretty typical data pattern:

[Chart: Case volume by tag]

Telling them that our main problem was …

… ‘Other’.

To be fair, conversations are hard to taxonomize! To be unfair, our agents didn’t see the point. But let’s go a step further. Imagine a system of perfect tagging, free from both agent confusion and apathy.

An oversimplified proxy is to just exclude the ‘Other’ tag. What do we see in the remaining list? That our second most frequent variety of problem still completely dwarfed all others. It isn’t important for you to know what it was. But to us, it was an obvious doozy of a problem that most of the company was already working on. It was nothing we didn’t already know.

So, let’s read on in the chart. After our first two unhelpful tags, which accounted for 90% of the data, maybe we could find value in the long tail. In that tail, there were 5 or so mostly unsurprising tags that came up intermittently. And the rest of the tags (like issues with our Mobile App) were pretty rare. All stuff an agent could have told us anecdotally. Had the numbers been more evenly spread, maybe we could have scrutinized their ranking (like if mobile app problems were higher than expected). But the data was so lumpy, it was useless.

Frequently, data in the real world is far less interesting than the results you had imagined in advance.

Collecting the data was a wasted effort. Not just the 30 collective hours of tagging. But all the mental overhead of making this part of our Process™. Training agents. Changing taxonomies. Data maintenance. Opportunity costs.

What's worse, we were doing this with tons of other Custom Objects for Operations across our teams in Partner Management, Fulfillment, and Project Management in Salesforce. Tracking the why but never using the data (so painful to remember).

Often, ‘running the numbers’ isn’t worth the effort, particularly when it’s time-consuming. So what should I have been on the lookout for from the outset to catch this earlier?

The obvious one I’ve been harping on is to always seek Learning from analysis, and give up on an effort if there is nothing to learn. But beyond that, there was an even more important strategy we lacked: there wasn’t any Decision we intended to inform.

As an operator, I absolutely hate process without purpose.

More data is better than less data*, but data is often more expensive to get than you realize. Flippantly spending someone else’s time on data (often your data team’s or junior reps’) is treacherous territory. Always:

1) Have a Decision you intend to inform

2) Keep efforts accountable to Learning against that decision

Summarizing the story, fitted to this framework:

Decision: None. We weren’t doing this analysis to inform any specific decision or planned future decisions. Aspirations of usefulness never came to fruition.

Method: Tag every customer service Case with a MECE taxonomy of tags

Learning: None. We didn’t look at it. If we did, we would have learned only obvious things. We could get better anecdotal color from agents.

[Image: Know If It’s Worth Doing]

Problem: Obviously, we had no decision objective and no learning against it.

*Is more data always better than less? Can analysis be ruined before the data team even gets its hands on it? Even with a clear goal to inform a decision, talented mathematicians can be set up to fail. Here are seven other issues (using our new framework) to look out for:

Other Best Practices

In what follows, I’ve created fictional scenarios that illustrate common strategies managers should use to make sure they don’t undermine their data team.

2) Look Out for Survivorship Bias

(popularized by the excellent book How Not to Be Wrong)

Decision: A B2B software company is preparing their roadmap. Their primary goal is to cross the chasm from their first 200 early adopters to the next thousand customers.

Method: They build and distribute a customer survey. “20% of respondents voted to add this feature next”.

Learning: “This analysis will help us win more, by prioritizing a customer-driven feature set”

[Image: Survivorship Bias]

Problem: This methodology matches well to retention or upsell goals. But the goal was getting new customers, and the method measured exactly the wrong audience: it surveyed the people who bought the product, not those who didn’t. To increase hit rate, they instead need to hear why people didn’t buy.

3) Judge Effectiveness Against a Control Group

Decision: A manager wants to know if an SLA will improve SDRs’ average proposal response time

Method: The manager institutes an internal SLA that all inbound inquiries get a detailed proposal within 24 hours; the data team models business-hours-adjusted average response time and builds a dashboard.

Learning: “Response times went down! The SLA worked!”

[Image: No Control Group]

Problem: For all you know, your SDRs had fewer inbounds per rep recently, so their queues were just shorter, which would explain faster response times. Is it possible that they almost always meet the SLA anyway, and that all this does is punish them during high-stress periods?
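One way a data team might sanity-check this is to compare against some kind of control, say a holdout pod that wasn’t put on the SLA. A minimal sketch, assuming a hypothetical CSV export and made-up column names:

```python
import pandas as pd

# Hypothetical export: one row per inbound inquiry.
# Assumed columns: 'rep_group' ('sla' pod vs. 'control' holdout pod),
# 'response_hours' (business-hours-adjusted), 'queue_len' when the inquiry arrived.
cases = pd.read_csv("proposal_response_times.csv")

summary = cases.groupby("rep_group").agg(
    avg_response_hours=("response_hours", "mean"),
    avg_queue_len=("queue_len", "mean"),
    n=("response_hours", "size"),
)
print(summary)
# If the control pod sped up just as much as the SLA pod, shorter queues are
# the likelier explanation, not the SLA.
```

The specific cut matters less than the fact that a comparison group exists at all.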

4) Don’t Forget Opportunity Cost

Decision: A company wants to analyze the effectiveness of a ‘Related Items’ feature to increase add-on purchases

Method: Product A/B tests a feature on a Product page that suggests ordering a complementary item. Data Engineers build and dashboard this new KPI.

Learning: “People are adding the related item! We should invest in a ‘Related Items’ feature to maximize cart add-ons”

[Image: Opportunity Cost]

Problem: Say fully building the feature from the A/B test takes 3 weeks and creates 8% more multi-item carts, now across all products. We’ve justified investing in the feature while overlooking our opportunity cost. Every day, we are burning money on rent, salaries, etc. What if the same team could instead build a ‘Bundles’ shopping experience in 4 weeks, with the potential to create 200% more multi-item carts? What are we doing wasting time on Related Items?!

5) Always Acknowledge Assumptions

Decision: A finance leader needs to budget how many Payroll Managers to hire to support a new market (Canada)

Method: The data team uses sales numbers to project demand, then models headcount growth on the supply side and infers how many Payroll folks are needed.

Learning: “As long as we don’t blow sales targets out of the water, one incremental hire can manage incremental Payroll needs for new hires in this new market over the next six months”

Problem: We acknowledged that growing too fast might affect our estimate. But an important assumption we missed is [our new market will have Payroll needs at roughly the same rate as our existing markets]. What about currency? Tax laws? Maybe each employee will be twice as much work to support!
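One way to acknowledge the assumption is to make it an explicit, tunable input rather than an invisible default. A minimal sketch with made-up numbers:

```python
# Hypothetical headcount model with the hidden assumption made explicit.
projected_new_hires = 120            # from the sales-driven demand projection
employees_per_payroll_manager = 150  # observed ratio in existing markets

# The assumption we skipped: how much heavier is per-employee payroll work
# in the new market (currency, tax law, local compliance)? 1.0 = same as today.
relative_payroll_load = 2.0          # illustrative worst case

managers_needed = (projected_new_hires * relative_payroll_load
                   / employees_per_payroll_manager)
print(f"{managers_needed:.1f} incremental Payroll Managers")  # 1.6 here, vs. 0.8 if the load were 1.0
```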

6) Question Statistical Significance

Decision: Three Eng candidates in the same quarter reject offer letters. The hiring manager wants to figure out how to lose fewer candidates

Method: An HR rep asks each why they passed. All three describe choosing opportunities on a ‘smaller team where I can have more impact’. With consensus, who needs a data team this time?

Learning: “We only lose when candidates don’t get that our subteams are nimble and independent. Our 40 person engineering team sounds overwhelming. We should play up the smallness of the subteams candidates will be on, and avoid questions about the size of the overall engineering team”

Problem: Is there even enough data to draw a conclusion? The implied claim is: “If we had mentioned small subteams, these candidates would never have rejected.” You don’t need a refresher on p-values to ask: “Does that even smell right?” Sometimes you flip heads five times in a row.
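A quick gut-check you can do without a data team, assuming (hypothetically) that some share of declining candidates cite ‘smaller team, more impact’ no matter what you pitch:

```python
# If 40% of declining candidates mention "smaller team / more impact" anyway,
# how often would all three say it by coincidence?
base_rate = 0.40        # assumed share of decliners who cite this regardless
n_candidates = 3

p_unanimous = base_rate ** n_candidates
print(f"P(all {n_candidates} say it by chance) = {p_unanimous:.1%}")  # 6.4%
```

For comparison, five heads in a row is roughly a 3% event, and it still happens.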

7) Look for Missing Data

Decision: A CX manager’s team has early/late shifts, and they want to staff star agents during the shift with the hardest, most urgent problems.

Method: Support Case escalation rate is bucketed by the data team into shifts, and a dashboard shows those buckets over time

Learning: “Our most urgent issues are spread 55% night / 45% day. We need to actually spread out our best people on both shifts”

[Image: Missing Data]

Problem: Just looking at urgent emails misses the whole picture. We forgot to count phone calls to the agent helpline, which is likely how many urgent issues come in. Is it possible these skew towards the evening and, paired with the email data, argue strongly for staffing all-stars on the night shift?
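A minimal sketch of closing that gap; the file names and columns are hypothetical, and the point is simply to count urgent issues from both channels before comparing shifts:

```python
import pandas as pd

# Hypothetical exports: urgent email escalations and urgent helpline calls,
# each tagged with a 'shift' column ('day' or 'night').
emails = pd.read_csv("urgent_email_escalations.csv")
calls = pd.read_csv("urgent_helpline_calls.csv")

urgent = pd.concat(
    [emails.assign(channel="email"), calls.assign(channel="phone")],
    ignore_index=True,
)
print(urgent.groupby(["shift", "channel"]).size().unstack(fill_value=0))
# A 55/45 email-only split can easily flip once phone volume is counted.
```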

8) Beware of False Choices

Decision: A company wants to research customer appetite for potential new pricing models for its core product

Method: A survey is conducted asking customers which new option they like best.

Learning: “Of the 5 options, 80% said option 3. It’s a clear winner.”

[Image: False Choices]

Problem: What about all the options that weren’t in your survey? What if respondents wanted to pick more than one option (and 75% would also have picked option 2)? What if 100% like your current pricing model better than any of the new options?

Summary

Don’t think hiring a PhD and investing loads of time in fancy math will save your data team. During draft editing, my former colleague (and data all-star) Ryan Brennan wisely articulated that all these examples are simply problems of abstraction. When we represent ideas with everyday numbers, the onus should be on the business owner to ensure the representations are accurate. Don’t shirk your duty. Don’t set your team up to fail.
