Now I’m going to show you another set of data that won’t work out quite so perfectly, but you can see how k-means clustering is still useful. The type of data that I’ll use in this example is uniform points. This is what uniform points look like: they’re just scattered everywhere. So I wouldn’t look at this and say there are clear clusters in here that I want to pick out, but I might still want to be able to describe that, say, these points over here are all more similar to each other than those points over there. And k-means clustering could be one way of mathematically describing that fact about the data.

So I don’t a priori have a number of centroids that I know I want to use here, so I’ll use two. Seems like a reasonable number. One, two. And then let’s see what happens in this case. A few points are going to be reassigned. Move the centroids. You can see there are a few more little adjustments here. But in the end, it basically just ends up splitting the data along this axis.

If I try this again, depending on the exact initial conditions that I have and the exact details of how these points are allocated, I can come up with something that looks a little bit different. You can see here that I ended up splitting the data vertically rather than horizontally. And the way you should think about this is that the initial placement of the centroids is usually pretty random and very important. So depending on what exactly the initial conditions are, you can get clustering in the end that looks totally different. Now, this might seem like a big problem, but there is one pretty powerful way to solve it. So let’s talk about that.
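To make the initialization sensitivity concrete, here is a minimal sketch of plain k-means run twice on uniform points with different random starting centroids. The `kmeans` helper and the seed values are my own illustration, not code from the lecture; on uniform data like this, the two runs can settle on quite different splits.

```python
import numpy as np

def kmeans(points, k, seed, iters=50):
    """Plain k-means: random initial centroids, then alternate
    assignment and centroid-update steps."""
    rng = np.random.default_rng(seed)
    # Pick k random data points as the initial centroids.
    centroids = points[rng.choice(len(points), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(points[:, None] - centroids[None, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids

# Uniform points: scattered everywhere, with no obvious clusters.
rng = np.random.default_rng(0)
points = rng.uniform(size=(300, 2))

# Two runs that differ only in the initial centroid placement.
labels_a, _ = kmeans(points, k=2, seed=1)
labels_b, _ = kmeans(points, k=2, seed=2)

# Compare the two partitions (up to swapping the cluster names).
agreement = max((labels_a == labels_b).mean(), (labels_a != labels_b).mean())
print(f"agreement between the two runs: {agreement:.2f}")
```

If the agreement printed here is well below 1.0, the two runs split the same data along different axes, which is exactly the effect described above: the random initial centroids, not the data, decided the final clustering.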