How to avoid Model Decay

Engineering
September 7, 2021

Model decay is a known phenomenon in Machine Learning that occurs when the predictions of a model become less reliable over time. The reason for such a decay is that the data shifts due to changes in the underlying environment. To keep models from decaying, they need to be retrained - which means that further labeled data is needed. With programmatic labeling, additional labels can be crafted easily.

ML Lifecycle

Machine Learning-based solutions are known for their great ability to learn from the data created by the environment they are trained in. For instance, if a neural network is trained on labeled tweets (say, labeled with the sentiment towards a given hashtag), the model will be capable of making reliable predictions for new tweets - given that it has seen enough samples during training.

Once the model has been trained successfully and the evaluation has shown that it is in fact good enough to be put into production, we’re done working on it as soon as it goes live - right?

In some cases, the answer can in fact be “yes” - namely, if the environment creating the new data is not dynamic at all. However, if the underlying patterns can change, retraining the model is a must! For highly dynamic environments like public sentiment on a given topic, models may need to be retrained on a monthly or even weekly basis.

What are the consequences if you do not retrain your model? Let’s take a look at the types of shifts that can result in your model losing its reliability - the so-called model decay.

Types of Shifts

We’ll take a look at three types of shifts in this blog post:

  • A prior probability shift appears when the label distributions of two datasets differ. When splitting your data into a training and a test set during implementation, this would mean, for instance, that your training set has a label ratio of 80% / 20%, while your test set has a ratio of 50% / 50%. Now, let’s rename “test” into “production”. If the label distribution seen during production is significantly different from the one in the training set, your model will most likely make unexpected predictions.
  • A covariate shift is quite similar to a prior probability shift. It occurs when the distribution of the input features changes. Imagine this shift as follows: when you split your data during implementation, you can assign the split category to each record (such that, for instance, 80% of your records have the category “training” and 20% “test”). Now, if you were able to build a classifier that tells these categories apart based on the input features alone, you’d be facing a covariate shift (see the sketch below this list). The result of a covariate shift, again, is that your model might produce unreliable predictions.
  • A concept shift differs from a covariate shift or a prior probability shift in that it is not related to the data distribution. Instead, a concept shift occurs when the relationship between the input features and the label changes. This can happen due to seasonality effects or economic trends (or pandemics)! As you might already expect, this will also lead to your model performing poorly.
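
As a quick illustration of the second bullet, here is a minimal sketch of covariate-shift detection via such a “split classifier” (often called adversarial validation). It assumes scikit-learn and NumPy are available; the function name and the two feature matrices are placeholders for your own training and production data.

```python
# Minimal sketch: detect covariate shift by training a classifier that tries
# to distinguish training records from production records.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def covariate_shift_score(X_train, X_prod):
    """Return a cross-validated ROC AUC for telling training and production apart.

    An AUC close to 0.5 means the two feature distributions look alike;
    the closer it gets to 1.0, the stronger the covariate shift.
    """
    X = np.vstack([X_train, X_prod])
    y = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_prod))])
    clf = RandomForestClassifier(n_estimators=100, random_state=42)
    return cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
```

If the score stays near 0.5, the production data still looks like the data the model was trained on; a score well above that is a signal to investigate - and to retrain.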


Now, we don’t want our model to perform poorly. It should keep its performance - even better if the performance increases over time! That is why monitoring your Machine Learning model is important, so that you can keep track of its performance and of potential shifts. But monitoring alone won’t help you - you need to take action.
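
To make this a bit more tangible, here is a rough sketch of what such a monitoring check could look like: compare the model’s recent accuracy against the accuracy measured at deployment time and flag a possible decay. The baseline value, the threshold, and the function name are illustrative placeholders, not a fixed recipe.

```python
# Rough sketch of a performance monitor: compare recent accuracy against
# the accuracy measured at deployment time. Values are illustrative only.
from sklearn.metrics import accuracy_score

BASELINE_ACCURACY = 0.91   # accuracy measured on the hold-out set at deployment
MAX_DROP = 0.05            # tolerated absolute drop before we raise a flag

def check_for_decay(y_true_recent, y_pred_recent):
    """Return True if the model's recent accuracy dropped too far below baseline."""
    recent_accuracy = accuracy_score(y_true_recent, y_pred_recent)
    decayed = recent_accuracy < BASELINE_ACCURACY - MAX_DROP
    if decayed:
        print(f"Possible model decay: accuracy {recent_accuracy:.2f} "
              f"vs. baseline {BASELINE_ACCURACY:.2f} - time to retrain.")
    return decayed
```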

Labeling data to retrain your model

When a shift in your data appears, the best solution is to retrain your model on additional data. Now, in some scenarios you gain the labeled data automatically just by waiting - for instance when forecasting the weather or predicting cancellations. In many cases, however, you’ll have to label data manually again, such as when analyzing tweet streams.

To avoid such constant manual work, you can set up a pipeline with programmatic labeling. This way, you might still have to make adaptations to the labeling procedure (such as adding trending keywords), but you can do so at scale. Your data is labeled automatically, so that you can retrain your AI models on a regular basis without lots of manual work. You could even set up a CRON job to retrain your model on freshly labeled data periodically. This way, you can still monitor your models - but you don’t have to worry about large shifts leading to your model making poor predictions anymore.
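
To sketch what such a pipeline could look like, here is a small, hypothetical example: a keyword-based labeling function is applied to freshly collected tweets, and a sentiment model is retrained on the result. The keyword lists, function names, and file path are made up for illustration; only the scikit-learn and joblib calls are real library APIs, and the actual scheduling would live in a CRON entry.

```python
# Sketch of a periodic retraining job driven by programmatic labeling.
# The keyword lists are the part you would adapt over time (e.g. trending terms).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
import joblib

POSITIVE_KEYWORDS = {"love", "great", "awesome"}
NEGATIVE_KEYWORDS = {"hate", "awful", "terrible"}

def label_programmatically(text):
    """A single keyword-based labeling function; returns a label or None (abstain)."""
    tokens = set(text.lower().split())
    if tokens & POSITIVE_KEYWORDS and not tokens & NEGATIVE_KEYWORDS:
        return "positive"
    if tokens & NEGATIVE_KEYWORDS and not tokens & POSITIVE_KEYWORDS:
        return "negative"
    return None

def retrain(tweets):
    """Label the fresh tweets programmatically and retrain the sentiment model."""
    labeled = [(t, label_programmatically(t)) for t in tweets]
    labeled = [(t, y) for t, y in labeled if y is not None]
    texts, labels = zip(*labeled)
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(texts, labels)
    joblib.dump(model, "sentiment_model.joblib")

# Scheduled e.g. weekly via a CRON entry such as:
# 0 3 * * 1  /usr/bin/python /path/to/retrain.py
```

In practice, adapting the keyword lists (or whatever labeling heuristics you use) becomes the small, periodic manual effort that replaces relabeling everything by hand.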

