Debunking Narrative Fallacies with Empirically-Justified Explanations


When we experience volatility in business metrics we tend to grasp for explanations.

We fall for availability bias, and the more visceral or intuitive the explanation, the quicker we latch on.

‘The cool weather is dissuading customers’, ‘customers are happier on Fridays because the weekend is coming’, ‘people are concerned with the economic downturn’, ‘competitor xyz is making a lot of noise in the market which is diluting our messaging’, … etc.

The list goes on and on.

Of all our many talents – bipedalism, opposable thumbs, etc. – one of humanity’s most remarkable traits is our tendency to infer meaning from what happens around us.

We understand the world through stories, and this is such a fundamental part of our nature that it is almost impossible to stop ourselves from inventing very reasonable-sounding explanations for what we see.

A lot of these stories are intuitive and a lot of them might be right (seasonality is real in many businesses!), but we’re not good at knowing when our stories are trustworthy and when they aren’t.

So how do we deal with these issues at Stitch Fix?

Modeling a Business Metric

At Stitch Fix, we try to delight our clients with the fashion choices we send them.

Human stylists, aided by an algorithmic recommendation system, select assortments of 5 articles of clothing or accessories that are personally chosen for each client.

Clients keep what they want and ship back what they don’t want, only paying for what they keep.

We have various metrics of our overall business performance and health, which we monitor on a weekly basis. This blog post describes how we have tried to understand the weekly fluctuations of one of these metrics, but the story told here is applicable to any effort to understand business metrics.

Some weeks, this metric (which we’ll call Y) is a little higher, some weeks it’s a little lower.

We know that even if nothing substantive changed about our business, our clients, or our inventory, one wouldn’t expect Y to be absolutely identical from one week to the next, because of statistical fluctuations – the same reason why, if you flip a fair coin 100 times, you wouldn’t be surprised if you didn’t get exactly 50 Heads.
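To make the coin analogy concrete, here’s a quick back-of-the-envelope check (ours, not a Stitch Fix calculation):

```python
import math

# For n = 100 flips of a fair coin (p = 0.5), the number of Heads has
# mean n*p and standard deviation sqrt(n*p*(1-p)) under the binomial
# distribution -- so counts anywhere in roughly 45-55 are unremarkable
# even though nothing about the coin has changed.
n, p = 100, 0.5
mean = n * p
std = math.sqrt(n * p * (1 - p))
print(mean, std)
```

The same arithmetic gives the null-hypothesis scatter for any count-like weekly metric.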

But it turns out that our typical actual weekly fluctuation in Y is about 5 times the scatter that would be expected from the null hypothesis that everything is identical from one week to the next.

This means that there are real drivers underlying most of the fluctuations we see.

But how can we tell accurate stories about our changing metrics, and not fall victim to “narrative fallacy” by using explanations that sound good but might not be empirically grounded?[1]

To address this issue, we decided to build a Metric Explainer – a model that predicts Y’s value as a function of other things we know about our business. What do we know about our business that might be relevant to Y?

Each week:

  • Different groups of clients receive shipments and some clients tend to consistently choose to keep more items than other clients.

  • We have different assortments of inventory and some pieces of clothing in our inventory tend to do better than others.

  • Different stylists work, and some stylists might be better at matching clothing to clients than others.

  • We have different numbers of new clients signing up, and new clients have different purchasing patterns on average than existing clients have.

  • The distribution of client tenures changes from week to week. Clients have different purchasing patterns throughout their tenure as members.

  • We have some operational issues that affect our business metrics, related to how many days per week our warehouses are open and processing returns.

  • Different promotions are going on. Promotions can affect our distribution of clients and can affect client incentives, which can affect our metrics.

  • There are seasonal variations.

  • And so on.

We can quantify what we know about the business with a list of features x_1, x_2, …, x_N, describing the factors listed above and more.

For instance, x_1 might quantify something about how well we have managed to delight a particular week’s client cohort in the past; x_2 might be a measure of the quality of that week’s inventory; …; x_N might be the fraction of clients who are new; and so on.

Using a specified set of features, we can try to build a model that maps these known quantities to a predicted value of Y.

Selecting a Model Type

What sort of model is a good one to build?

We want accuracy, but not only accuracy.

The goal is to use this model to diagnose problems and usefully direct actions within the business.

If we had a black box that perfectly predicted every week, it wouldn’t necessarily give us any insight.

We want accuracy and interpretability.

And we want the interpretation to actually be related to the mechanics of the business.

The first two types of models that came to mind with a view toward interpretability were linear regression models and decision tree regression models.

Linear models are simple and can be fairly accurate, but one can often get more accuracy with nonlinear models.

On the other hand, decision trees can be both accurate and easily interpretable by non-data-scientist partners, but they are prone to overfitting and don’t lead to smooth relationships between input and output variables.

Interpretable Random Forest Regression

In search of a better way, we considered variations on random forest regression models, which are basically black-box collections of lots of decision trees.

They can be more robust and accurate than single trees, but in general they are significantly less interpretable.

We’d like to be able to explain the variations in metrics and say things like, “This week, Y went up by Δ.

Of this Δ, δ_1 is attributable to the change in feature 1, δ_2 to the change in feature 2, …” etc., where “feature 1” and so on are replaced by English descriptions of the thing that each feature quantifies.

With a linear model, once the coefficients are learned, the way to allocate portions of the total Δ to each feature is trivial (each feature’s contribution is just its coefficient times the change in that feature’s value), but with a random forest regression model it’s less clear.

However, we realized that it’s possible to maintain interpretability with a random-forest regression model, too, as long as the weekly excursions in feature space aren’t too large. The procedure works as follows.

  • Suppose that the random forest regressor learns a function f(x_1, …, x_N), and we want to explain the change in f from week t_1, when the features have values x_i^(t_1), to week t_2, when the features have values x_i^(t_2).

  • We can assign a portion δ_1 of the total change to feature 1 as follows (note that the two lists of arguments below are identical, except the superscript of x_1 changes from t_1 to t_2):

    δ_1 = f(x_1^(t_2), x_2^(t_1), …, x_N^(t_1)) − f(x_1^(t_1), x_2^(t_1), …, x_N^(t_1))

We’re essentially calculating the partial difference of f from week t_1 to t_2 with respect to x_1.

We can also calculate δ_i’s that are attributable to each of the other features.

The sum of the δ_i’s does not in general exactly equal the total change in f in this procedure, but if the jump in feature space from week to week is not too extreme, this discrepancy is not large.
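The one-feature-at-a-time attribution can be sketched for any fitted regressor; the following is our own toy illustration (data and names invented), not Stitch Fix code:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def attribute_change(model, x_a, x_b):
    """For each feature i, move only feature i from its week-A value to
    its week-B value, holding the other features at their week-A values,
    and record the resulting change delta_i in the model's prediction."""
    x_a, x_b = np.asarray(x_a, float), np.asarray(x_b, float)
    base = model.predict(x_a.reshape(1, -1))[0]
    deltas = []
    for i in range(len(x_a)):
        x_step = x_a.copy()
        x_step[i] = x_b[i]
        deltas.append(model.predict(x_step.reshape(1, -1))[0] - base)
    return deltas  # sums only approximately to f(x_b) - f(x_a)

# Toy fit: the target depends on two features with known coefficients.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=500)
rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

deltas = attribute_change(rf, [0.0, 0.0], [1.0, -1.0])
print(deltas)
```

Because each δ_i is computed from the week-A baseline, interaction effects are not split between features, which is why the δ_i’s only approximately sum to the total change.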

With the above procedure in mind, therefore, we dabbled in “interpretable random forest models”.

Bayesian Linear Modeling

Ultimately the simple interpretability of linear models was attractive enough that we decided to return to them despite the potential slight improvement in accuracy from nonlinear models. Note that, with a linear model, we aren’t completely sacrificing the ability to capture nonlinear behavior or feature interactions – we can add terms that are linear in nonlinear functions of features or in feature interactions (i.e., terms like x_1^2, log x_2, and x_1·x_2).

Another benefit to linear models is that it’s easy to apply priors.

We have some prior notions about how the model output ought to depend on the features, and it’s easy to fold these into a linear model (and a lot harder to do so with a random forest model).

For instance, in weeks when a higher proportion of the clients are those who generally like to keep more items, this ought to lead the model, all else being equal, to predict that will be higher.

We can incorporate this expectation as a strong prior that the corresponding feature should have a positive coefficient.

After setting up sensible priors, we then sample from the posterior distribution of coefficients using the affine-invariant MCMC sampler emcee.

Selecting the maximum a posteriori (MAP) values of the coefficients produces a reasonable model, and one that is highly interpretable.[2]

Also, for a large class of priors on linear-model coefficients (any that produce a concave log-posterior), it’s not necessary to do any sampling to get a MAP estimate, since we can just solve a convex optimization problem.
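A minimal sketch of the MAP-by-optimization route, with made-up toy data and an assumed Gaussian prior that pushes the first coefficient toward positive values (our illustration, not the production model):

```python
import numpy as np
from scipy.optimize import minimize

# Toy data: the metric rises with feature 0 (think of it as the share
# of high-keep-rate clients in the week's cohort), and we hold a strong
# prior that its coefficient is positive.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
beta_true = np.array([1.5, -0.5, 0.0])
sigma = 0.3
y = X @ beta_true + rng.normal(scale=sigma, size=200)

prior_mean = np.array([1.0, 0.0, 0.0])
prior_std = np.array([0.5, 2.0, 2.0])   # tight prior on coefficient 0

def neg_log_posterior(beta):
    # Gaussian likelihood plus Gaussian priors gives a concave
    # log-posterior, so finding the MAP is a convex problem.
    nll = 0.5 * np.sum((y - X @ beta) ** 2) / sigma**2
    nlp = 0.5 * np.sum(((beta - prior_mean) / prior_std) ** 2)
    return nll + nlp

beta_map = minimize(neg_log_posterior, np.zeros(3)).x
print(beta_map)
```

With enough data the likelihood dominates and the MAP lands close to the true coefficients; the prior mostly matters in weeks (or features) where the data are thin.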

Feature Selection

If we start by thinking of N candidate features, should our final model have exactly those N features?

Not necessarily.

Some features might not be predictive.

Having too many features might lead to overfitting.

How can we figure out which features to preserve in the model?

For a given set of features, we used k-fold cross validation to estimate the mean and variance of the mean-squared error on data that the model was not trained on.

A set of N features has 2^N − 1 non-empty subsets, so if N isn’t too large it’s possible to evaluate the model on all of them.

For instance, if there are 10 features, then it’s tractable to evaluate the model’s performance on all 1023 possible groups of features (although perhaps not advisable because of the hazards of multiple comparisons).

If the feature set contains 20-30 features or more, brute-force checking all possible subsets of features is no longer tractable, and we used greedy forward feature selection or greedy backward feature selection to find reasonable, if not globally-optimal, subsets of features.[3]
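Greedy forward selection with cross-validated MSE can be sketched as follows (toy data and scikit-learn conventions assumed; not the production code):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def greedy_forward_select(X, y, n_folds=10):
    """Greedily add whichever remaining feature most improves the
    k-fold cross-validated MSE; stop when no feature helps."""
    remaining = list(range(X.shape[1]))
    chosen, best_mse = [], np.inf
    while remaining:
        scores = {}
        for j in remaining:
            cols = chosen + [j]
            s = cross_val_score(LinearRegression(), X[:, cols], y,
                                cv=n_folds,
                                scoring="neg_mean_squared_error")
            scores[j] = -s.mean()   # flip sign back to plain MSE
        j_best = min(scores, key=scores.get)
        if scores[j_best] >= best_mse:
            break  # no remaining feature improves held-out error
        best_mse = scores[j_best]
        chosen.append(j_best)
        remaining.remove(j_best)
    return chosen

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 6))          # 6 candidate features...
y = 2 * X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=300)
print(greedy_forward_select(X, y))     # ...but only two are predictive
```

Backward selection is the mirror image: start from the full set and greedily drop the feature whose removal hurts held-out error least.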


In the end, we have a linear model with sensible coefficients that has good accuracy when applied to data it was not trained on.

So what do we do with this model?

The final step was to “textify” the output of the model.[4]

This part was fun.

We wrote a Python wrapper for the model that finds the δ_i associated with each feature, writes an English sentence for each one, and presents the δ_i’s in a useful order (say, largest to smallest).

The textification function compares the actual total change Δ to the model’s predicted change and tells us the internal dynamics that led to the model’s prediction.

For instance, it writes sentences like, “Last week, metric Y increased by Δ from the previous week.

The model predicted an increase of Δ_model, which is explained by the following changes in features: …”
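A toy version of such a textification wrapper (feature names, wording, and formatting are ours, purely illustrative) might look like:

```python
def textify(metric_name, actual_change, feature_deltas):
    """Render the model's attribution as English sentences.
    feature_deltas maps a human-readable feature description to the
    delta the model attributes to it."""
    predicted = sum(feature_deltas.values())
    direction = "increased" if actual_change >= 0 else "decreased"
    lines = [
        f"Last week, metric {metric_name} {direction} by "
        f"{abs(actual_change):.2f} from the previous week.",
        f"The model predicted a change of {predicted:+.2f}, "
        "explained by the following changes in features:",
    ]
    # present contributions largest-magnitude first
    for name, d in sorted(feature_deltas.items(),
                          key=lambda kv: -abs(kv[1])):
        lines.append(f"  - {name}: {d:+.2f}")
    return "\n".join(lines)

print(textify("Y", 0.9, {"inventory quality": 0.6,
                         "share of new clients": -0.2,
                         "warehouse open days": 0.4}))
```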

If the model’s predicted change is close to the actual change in Y, this gives us some confidence that the model’s attribution of the reason why Y changed the way it did might be accurate. The textified model output is displayed on a dashboard where people in the company can easily access it.

Learning from “The Explainer”

Sometimes we see interesting dynamics that we might have entirely missed without the model.

One such example is that sometimes the metric’s value is nearly constant from one week to the next but there are large feature-related δ’s that happen to almost entirely cancel out.

For instance, maybe the quality of inventory is much higher one week than the previous, which would tend to predict a higher value of Y, but we also had a lot more new clients in the second week, which might tend to predict a lower value of Y.

Absent our Explainer model, we might have simply glanced at the overall values of both weeks and said, “Looks like nothing much changed…”, and missed that there were actually large changes that happened to cancel each other out.

Another key benefit of the Explainer model is that it keeps us honest and empirically-grounded when the metric moves upward more than we might naively have expected.

Maybe the key drivers of a positive fluctuation are things we can put resources behind and reproduce, but maybe they’re chance alignments of factors that aren’t directly under our control.

The Explainer helps us to understand which is the case, and to avoid facile or self-congratulatory explanations for good news.

Although the Explainer model often presents useful insights and explanations about why the business metric has varied, there are occasionally times when the Explainer’s prediction for Y doesn’t closely match the actual value of Y.

When this happens, it means that unmodeled factors have played a large role in determining the outcome variable.

In other words, these situations present opportunities for us to learn more about what drives our business, and ultimately to refine our model and get smarter.

Statistical models such as those described here do a better job of capturing the relationships that exist between business metrics than people can do by simply telling sensible-sounding narratives.

But there is still a core issue of differentiating correlation from causation in the business world.

Our automated Explainer model helps keep us honest about what statistical relationships exist in our data, but we haven’t automated away the need for human modelers who build and interpret the models, using the careful reasoning and skeptical perspectives of a scientist.

Much like the styling part of our business, our business-metric-explaining efforts are ultimately a blend between algorithmic processing and human judgment.

By facilitating a better understanding of what drives variations in business metrics, an interpretable model and an algorithmic textification of the model’s output can help allocate finite resources to improve business performance and client delight. If you’d like to be a part of a company that approaches understanding our business this way, please get in touch!

[1] We borrow the phrase “Narrative Fallacy” from Nassim Taleb’s The Black Swan.

[2] MAP values do have the downside that they aren’t invariant under reparameterization, but it’s a lot easier to find the MAP value than a reparameterization-invariant quantity like the median in a high-dimensional space, and in practice the difference between the two isn’t large.

[3] Though PCA is often used for reducing dimensionality, it didn’t seem ideal for our purposes because if we used PCA we’d end up with explanations like “Y went up by 7% because principal component 1 went down by 3%”, which isn’t very interpretable. Nevertheless, the same interpretability procedure that renders random forest regression interpretable could also make PCA regression interpretable.

[4] Companies such as Narrative Science have taken computer-generated prose to a high level. Our goal here was not top-notch writing, but rather statistically-grounded explanations for business-metric values, rendered in sentence form.

