How machine learning improves customer segmentation

One of the key challenges marketing teams must solve is allocating their resources in a way that minimizes cost per acquisition (CPA) and increases ROI. This is possible through segmentation, that is, by dividing customers into different groups based on their behavior or characteristics.

Customer segmentation can help reduce waste in marketing campaigns. When you know which customers are similar to each other, you will be better positioned to target your campaigns to the right people.

Customer segmentation can also help with other marketing tasks such as product recommendations, pricing, and upselling strategies.

Customer segmentation was previously a challenging and time-consuming task that required hours of manually searching through tables and querying data in hopes of finding ways to group customers. In recent years, it has become much easier thanks to machine learning, a branch of artificial intelligence whose algorithms find statistical regularities in data. Machine learning models can process customer data and identify recurring patterns useful for various functions. In many cases, machine learning algorithms can help marketing analysts find customer segments that would be very difficult to spot through intuition and manual inspection of the data.

Customer segmentation is a perfect example of how the combination of artificial intelligence and human intuition can create something that is greater than the sum of its parts.

The k-means clustering algorithm

K-means clustering is a machine learning algorithm that arranges unlabeled data points into a specified number of clusters.

Machine learning algorithms come in several flavors, each suitable for specific types of tasks. One of the algorithms that are suitable for customer segmentation is k-means clustering.

K-means clustering is an unsupervised machine learning algorithm. Unsupervised algorithms have no ground truth or labeled data against which to evaluate their performance. The idea behind k-means clustering is very simple: group the data into clusters whose members are similar to each other.


For example, if your customer data includes age, income, and expenses, a well-configured k-means model can divide your customers into groups whose members have similar attribute values. In this setting, similarity is measured by calculating the distance between customers' age, income, and expense values.

When training a k-means model, you specify the number of clusters into which you want to divide your data. The model begins with randomly placed centroids, the variables that determine the center of each cluster. The model goes through the training data and assigns each instance to the cluster whose centroid is closest to it. Once all training instances are assigned, the centroids are moved to the center of their clusters. The same process is then repeated: the training instances are reassigned to the adjusted centroids, and the centroids are realigned based on the rearrangement of the data points. At some point, the model converges: further passes over the data no longer cause training instances to switch clusters or centroids to change their parameters.
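The loop described above can be sketched in a few lines of NumPy. This is a minimal, illustrative implementation, not a production one (libraries such as scikit-learn provide optimized versions):

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Minimal k-means: place centroids, assign points, recompute, repeat."""
    rng = np.random.default_rng(seed)
    # Initialize centroids at k randomly chosen data points.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assign each instance to the cluster whose centroid is closest.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the center of its assigned instances.
        new_centroids = np.array([X[labels == i].mean(axis=0) for i in range(k)])
        # Convergence: another pass no longer changes the centroids.
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids
```

In practice, real implementations also run the algorithm several times from different random initializations and keep the best result, since the final clusters depend on where the centroids start.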

Finding the right number of customer segments

One of the keys to using k-means successfully is determining the right number of clusters. While a k-means model will converge on any number of clusters you specify, not every configuration is suitable. In some cases, a quick visualization of the data can indicate the logical number of clusters. For example, in the following illustration, the training data has two features (x1 and x2), and mapping them onto a scatter plot reveals five easily identifiable clusters.

Ungrouped k-means training data

If your problem has three features (e.g., x1, x2, x3), your data can still be visualized in 3D space, though clusters become harder to see. Beyond three features, it is impossible to visualize everything in one picture, and you will have to use other tricks, such as a scatterplot matrix, to visualize the correlations between different pairs of features.

The scatterplot matrix visualizes correlations between different pairs of features. In this example, the problem space consists of four features.
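A scatterplot matrix takes one line with pandas. The column names below are illustrative customer features, not from any real dataset:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import numpy as np
import pandas as pd
from pandas.plotting import scatter_matrix

rng = np.random.default_rng(0)
# Synthetic customer data with four features.
df = pd.DataFrame({
    "age": rng.normal(40, 10, 200),
    "income": rng.normal(60_000, 15_000, 200),
    "spending": rng.normal(2_000, 500, 200),
    "visits": rng.poisson(5, 200),
})
# One scatter plot per pair of features; histograms on the diagonal.
axes = scatter_matrix(df, figsize=(8, 8), diagonal="hist")
```

Each off-diagonal panel plots one pair of features against each other, so correlated pairs and potential clusters stand out at a glance.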

Another trick that can help with clustering is dimensionality reduction, a family of machine learning techniques that examine the correlations between features and remove those that are redundant or carry little information. By reducing the dimensionality, you can simplify your problem space, making it easier to visualize the data and spot clustering opportunities.
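One common dimensionality reduction technique is principal component analysis (PCA). A sketch with scikit-learn, using synthetic data in which one feature is nearly a copy of another:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
# Make the fourth feature strongly correlated with the first,
# so PCA can compress the four features with little information loss.
X[:, 3] = 0.9 * X[:, 0] + rng.normal(scale=0.1, size=200)

# Project the four features down to two for visualization.
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)
```

The two resulting components can then be mapped onto an ordinary 2D scatter plot to look for clusters.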

In many cases, however, the number of clusters is not apparent even using the above techniques. In these cases you will have to experiment with different numbers of clusters until you find an optimal one.

But how do you find the optimal configuration? K-means models can be compared based on their inertia, the average distance between the instances in a cluster and the cluster's centroid. In general, models with lower inertia are more coherent.

However, inertia alone is not enough to evaluate the performance of your machine learning model. Increasing the number of clusters always reduces the distance between instances and their cluster centroids. And when each individual instance becomes its own cluster, the inertia drops to zero. However, you don’t want a machine learning model that allocates one cluster per customer.

An efficient technique for finding the optimal number of clusters is the elbow method, in which you gradually grow your machine learning model until you find the point where adding more clusters no longer produces a significant decrease in inertia. This point is called the elbow of the machine learning model. In the following picture, for example, the elbow is at four clusters. Adding more clusters beyond this point results in an inefficient machine learning model.
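The elbow method amounts to fitting one model per candidate cluster count and comparing their inertias. A sketch with scikit-learn's KMeans on synthetic data built around four well-separated groups (scikit-learn reports inertia as the sum of squared distances to the nearest centroid):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic training data: four well-separated groups of 50 points each.
centers = np.array([[0, 0], [5, 0], [0, 5], [5, 5]])
X = np.vstack([c + rng.normal(scale=0.4, size=(50, 2)) for c in centers])

# Fit a model per candidate k and record its inertia.
inertias = {}
for k in range(1, 8):
    model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    inertias[k] = model.inertia_
# Inertia drops sharply up to k=4, then flattens out: the "elbow".
```

Plotting `inertias` against k makes the elbow visible as the point where the curve bends and flattens.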

The elbow method finds the most efficient configuration of a k-means machine learning model by comparing how adding clusters trades off against decreasing inertia.

Use of k-means clustering and customer segments

Once trained, your machine learning model can determine which segment new customers belong to by measuring their distance from each cluster's centroid. There are many ways you can take advantage of this.
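Assigning a new customer to a segment is a single call on a fitted model. The features and values below are purely illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Illustrative training data: [age, income in $1000s, spending score].
X = np.vstack([
    rng.normal([25, 40, 70], [3, 5, 8], size=(50, 3)),   # younger, high-spend
    rng.normal([50, 90, 30], [5, 10, 8], size=(50, 3)),  # older, high-income
])
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# A new customer is assigned to the cluster with the nearest centroid.
new_customer = np.array([[27.0, 42.0, 65.0]])
segment = model.predict(new_customer)[0]
```

The returned segment index can then be used to look up the recommendations, offers, or campaigns associated with that cluster.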

For example, when you acquire a new customer, you may want to give them product recommendations. Your machine learning model can help you determine the customer's segment and the products most commonly purchased by customers in that segment.

In product marketing, your clustering algorithm can help you fine-tune your campaigns. For example, you can start an advertising campaign with a random sample of customers belonging to different segments. After you've run the campaign for a while, you can investigate which segments are more responsive and refine the campaign to show ads only to members of those segments. Alternatively, you can run multiple versions of your campaign and use machine learning to segment your customers based on their responses to the different versions. Beyond this, there are many other tools you can use to test and optimize your advertising campaigns.

The human and AI ensemble

K-means clustering is a fast and efficient machine learning algorithm. But it's not a magic wand that instantly turns your data into logical customer segments. You first need to define the scope of your marketing campaigns and the types of features that are relevant to them. For example, if a campaign targets a specific country, geographic location is not a relevant feature, and you're better off filtering your data down to that region. Likewise, when promoting a men's health product, filter your customer data to include only men rather than including gender as one of the features of your machine learning model.

In some cases, you may want to add additional information, such as the products customers have bought in the past. In that case, you'll need to create a customer-product matrix: a table with customers as rows and products as columns, where each cell contains the number of times that customer purchased that product. If the number of products is too large, you can instead create an embedding, in which products are represented as values in a multidimensional vector space.
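A customer-product matrix can be built from raw purchase records with one pandas call. The customer and product names below are made up for illustration:

```python
import pandas as pd

# Raw purchase log: one row per purchase event.
purchases = pd.DataFrame({
    "customer": ["ana", "ana", "ben", "ben", "ben", "cara"],
    "product":  ["soap", "shampoo", "soap", "soap", "razor", "shampoo"],
})

# Rows: customers; columns: products; cells: purchase counts
# (zero where a customer never bought that product).
matrix = pd.crosstab(purchases["customer"], purchases["product"])
```

The rows of this matrix can then be fed to a clustering algorithm as additional features describing each customer's purchasing behavior.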

Overall, machine learning is a very effective tool for marketing and customer segmentation. It likely won't replace human judgment and intuition in the near future, but it can augment human efforts to levels that were previously impossible.

This article was originally published by Mona Eslamijam on TechTalks, a publication that examines technology trends, how they affect the way we live and do business, and what problems they solve. But we also discuss the evil side of technology, the darker effects of the new technology, and what to look out for. You can read the original article here.

Published on January 20, 2021 – 10:00 UTC
