Clustering is usually done via the k-means algorithm. It works by assigning each observation to its closest centroid (the vector of means for each group), recalculating the centroids, and then repeating these two steps. The algorithm stops when no observation changes cluster. But k-means has a major flaw: because each observation is assigned to the closest centroid, typically under Euclidean distance, we are implicitly assuming that the clusters are spherical (equal variance in every direction, and hence no correlation between the variables). In many cases this is not a realistic assumption.
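To make the assign/update loop concrete, here is a minimal sketch of Lloyd's algorithm for k-means in NumPy. It is illustrative only (the function name `kmeans` and its parameters are my own, not from the text), but it shows the two alternating steps and the stopping rule described above.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal sketch of Lloyd's algorithm for k-means (illustrative only)."""
    rng = np.random.default_rng(seed)
    # Initialize centroids by picking k observations at random.
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = None
    for _ in range(n_iter):
        # Assignment step: each observation goes to its closest centroid
        # (Euclidean distance -- this is where the spherical assumption enters).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        if labels is not None and np.array_equal(new_labels, labels):
            break  # no observation changed cluster: converged
        labels = new_labels
        # Update step: recompute each centroid as the mean of its cluster.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids
```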
A different approach is to assume that each observation comes from one of three possible distributions. These distributions are assumed to be multivariate Gaussian, with possibly different covariance matrices. Of course, since the algorithm relies on estimating covariance matrices using standard techniques...
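As a hedged sketch of this mixture-model alternative (the text does not prescribe a particular library), scikit-learn's `GaussianMixture` with `covariance_type="full"` fits three multivariate Gaussians, each with its own covariance matrix, so the clusters need not be spherical. The synthetic data below is purely illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic 2-D data from three elongated, correlated Gaussians (illustrative).
X = np.vstack([
    rng.multivariate_normal([0, 0], [[2.0, 1.5], [1.5, 2.0]], size=200),
    rng.multivariate_normal([5, 5], [[1.0, -0.7], [-0.7, 1.0]], size=200),
    rng.multivariate_normal([0, 6], [[0.5, 0.0], [0.0, 3.0]], size=200),
])

# One component per assumed distribution; "full" allows a distinct
# covariance matrix for each component.
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
labels = gmm.fit_predict(X)     # hard cluster assignments
probs = gmm.predict_proba(X)    # soft membership probabilities per component
print(gmm.covariances_.shape)   # (3, 2, 2): one covariance matrix per component
```

Note that `GaussianMixture` estimates the means and covariances by maximum likelihood via EM, which is one instance of the "standard techniques" for estimating covariance matrices mentioned above.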