
Dimensionality Reduction

Manifold learning is an approach to non-linear dimensionality reduction. Algorithms for this task are based on the idea that the dimensionality of many data sets is only artificially high.

High-dimensional datasets can be very difficult to visualize. While data in two or three dimensions can be plotted to show the inherent structure of the data, equivalent high-dimensional plots are much less intuitive. To aid visualization of the structure of a dataset, the dimension must be reduced in some way.

The simplest way to accomplish this dimensionality reduction is by taking a random projection of the data. Though this allows some degree of visualization of the data structure, the randomness of the choice leaves much to be desired. In a random projection, it is likely that the more interesting structure within the data will be lost.
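
As a quick illustration, such a random projection can be sketched in a few lines with scikit-learn's GaussianRandomProjection. This is a minimal, illustrative example on placeholder data - the matrix X simply stands in for any feature matrix:

import numpy as np
from sklearn.random_projection import GaussianRandomProjection

# placeholder feature matrix: 200 samples with 5 features
X = np.random.rand(200, 5)

# project the 5-dimensional data onto 2 random directions
projector = GaussianRandomProjection(n_components=2, random_state=42)
X_2d = projector.fit_transform(X)
print(X_2d.shape)  # (200, 2) - now plottable as a simple scatter plot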

To address this concern, a number of supervised and unsupervised linear dimensionality reduction frameworks have been designed, such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Linear Discriminant Analysis (LDA).

Dataset

A multivariate study of variation in two species of rock crab of genus Leptograpsus: A multivariate approach has been used to study morphological variation in the blue and orange-form species of rock crab of the genus Leptograpsus. Objective criteria for the identification of the two species are established, based on the following characters:

  • SP: species (blue or orange)
  • Sex: male or female
  • FL: width of the frontal region of the carapace
  • RW: width of the posterior region of the carapace (rear width)
  • CL: length of the carapace along the midline
  • CW: maximum width of the carapace
  • BD: depth of the body

The dataset can be downloaded from Github:

import pandas as pd

pd.set_option('display.precision', 3)
leptograpsus_data = pd.read_csv('data/A_multivariate_study_of_variation_in_two_species_of_rock_crab_of_genus_Leptograpsus.csv')
leptograpsus_data.head()
   sp sex  index   FL   RW    CL    CW   BD
0   B   M      1  8.1  6.7  16.1  19.0  7.0
1   B   M      2  8.8  7.7  18.1  20.8  7.4
2   B   M      3  9.2  7.8  19.0  22.4  7.7
3   B   M      4  9.6  7.9  20.1  23.1  8.2
4   B   M      5  9.8  8.0  20.3  23.0  8.2

Preprocessing

data = leptograpsus_data.rename(columns={
    'sp': 'species',
    'FL': 'Frontal Lobe',
    'RW': 'Rear Width',
    'CL': 'Carapace Midline',
    'CW': 'Maximum Width',
    'BD': 'Body Depth'})

data['species'] = data['species'].map({'B':'Blue', 'O':'Orange'})
data['sex'] = data['sex'].map({'M':'Male', 'F':'Female'})

data.head()
  species   sex  index  Frontal Lobe  Rear Width  Carapace Midline  Maximum Width  Body Depth
0    Blue  Male      1           8.1         6.7              16.1           19.0         7.0
1    Blue  Male      2           8.8         7.7              18.1           20.8         7.4
2    Blue  Male      3           9.2         7.8              19.0           22.4         7.7
3    Blue  Male      4           9.6         7.9              20.1           23.1         8.2
4    Blue  Male      5           9.8         8.0              20.3           23.0         8.2
data.shape
# (200, 8)

data_columns = ['Frontal Lobe',
                'Rear Width',
                'Carapace Midline',
                'Maximum Width',
                'Body Depth']

data[data_columns].describe()
         index  Frontal Lobe  Rear Width  Carapace Midline  Maximum Width
count  200.000       200.000     200.000           200.000        200.000
mean    25.500        15.583      12.800            32.100         36.800
std     14.467         3.495       2.573             7.119          7.872
min      1.000         7.200       6.500            14.700         17.100
25%     13.000        12.900      11.000            27.275         31.500
50%     25.500        15.550      12.800            32.100         36.800
75%     38.000        18.050      14.300            37.225         42.000
max     50.000        23.100      20.200            47.600         54.600

The dataset now needs to be segmented into four classes combining sex (male, female) and species (blue, orange). We can add this identifier as an additional column to our dataset, in the form of a value concatenated from the species and sex features:

data['class'] = data.species + data.sex
data['class'].value_counts()

The entire dataset has a size of 200 and each class is equally represented with 50 specimens:

BlueMale        50
BlueFemale      50
OrangeMale      50
OrangeFemale    50
Name: class, dtype: int64

Visualization

Boxplots

import matplotlib.pyplot as plt

# plot features vs classes
fig, axes = plt.subplots(nrows=3, ncols=2, figsize=(18,18))
data[data_columns].boxplot(ax=axes[0,0])
data.boxplot(column='Frontal Lobe', by='class', ax=axes[0,1])
data.boxplot(column='Rear Width', by = 'class', ax=axes[1,0])
data.boxplot(column='Carapace Midline', by='class', ax=axes[1,1])
data.boxplot(column='Maximum Width', by = 'class', ax=axes[2,0])
data.boxplot(column='Body Depth', by = 'class', ax=axes[2,1])

(Figure: boxplots of all five features, and of each feature grouped by class)

While the orange and blue females show a good separation in several features, their male counterparts lie very close together. The Body Depth and Frontal Lobe dimensions are the best features to differentiate the two species within the male subclass.

Histograms

data[data_columns].hist(figsize=(12,6), layout=(2,3))

(Figure: histograms of the five measurement features)

import seaborn as sns

fig, axes = plt.subplots(nrows=5, ncols=1, figsize=(10,20))
sns.histplot(data, x='Frontal Lobe', hue='class', kde=True, element='step', bins=20, ax=axes[0])
sns.histplot(data, x='Rear Width', hue='class', kde=True, element='step', bins=20, ax=axes[1])
sns.histplot(data, x='Carapace Midline', hue='class', kde=True, element='step', bins=20, ax=axes[2])
sns.histplot(data, x='Maximum Width', hue='class', kde=True, element='step', bins=20, ax=axes[3])
sns.histplot(data, x='Body Depth', hue='class', kde=True, element='step', bins=20, ax=axes[4])

(Figure: per-class histograms of each feature)

Again, the orange and blue coloured distributions - representing the females of the orange and blue species - are well separated, but there is a large overlap between their male counterparts. While the boxplots still showed a visible difference in the Frontal Lobe and Body Depth mean values, it is much harder to differentiate the histograms.

Pairplot

sns.pairplot(data, hue='class')
# sns.pairplot(data, hue='class', diag_kind="hist")

(Figure: pairplot of all feature pairs, coloured by class)

The pairplot shows the relationship between each pair of features. Several panels separate the female from the male classes: for example, the Rear Width separates the green/blue (male) dots from the orange/red (female) ones. There is some separation between the two female species (red/orange dots) in the Frontal Lobe and Body Depth plots. But again, it is hard to separate the two male species - there is always a strong overlap between the blue and green dots.

Principal Component Analysis

Principal Component Analysis (PCA) is a dimensionality reduction technique that transforms a high-dimensional dataset into a new, lower-dimensional one while preserving as much of the information (variance) in the original data as possible.
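
Conceptually, PCA centres the data and finds the orthogonal directions of maximum variance, which can be computed via a singular value decomposition. A minimal NumPy sketch of the idea - the variable names here are illustrative and not part of the analysis below:

import numpy as np

# illustrative feature matrix: 200 samples, 5 features
X = np.random.rand(200, 5)

# 1. centre the data - PCA is defined on mean-centred features
X_centred = X - X.mean(axis=0)

# 2. singular value decomposition of the centred data
U, S, Vt = np.linalg.svd(X_centred, full_matrices=False)

# 3. the rows of Vt are the principal directions; projecting onto the
#    first k of them keeps the k directions with the largest variance
k = 2
X_reduced = X_centred @ Vt[:k].T   # shape (200, 2)

# fraction of the total variance captured by each direction
explained_variance_ratio = (S**2) / (S**2).sum()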

from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Normalize data columns before applying PCA
data_norm = data.copy()
data_norm[data_columns] = StandardScaler().fit_transform(data[data_columns])
data_norm.describe().T

Normalization sets the mean of all data columns to ~0 and the standard deviation to ~1:

                  count        mean     std     min     25%         50%     75%     max
index             200.0   2.550e+01  14.467   1.000  13.000   2.550e+01  38.000  50.000
Frontal Lobe      200.0  -7.105e-17   1.003  -2.404  -0.770  -9.465e-03   0.708   2.156
Rear Width        200.0   6.040e-16   1.003  -2.430  -0.677   2.396e-02   0.608   2.907
Carapace Midline  200.0   1.066e-16   1.003  -2.451  -0.680  -7.745e-04   0.721   2.182
Maximum Width     200.0  -4.974e-16   1.003  -2.460  -0.626   4.909e-02   0.711   2.316
Body Depth        200.0   0.000e+00   1.003  -2.321  -0.770  -3.820e-02   0.752   2.216
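
Standardizing first matters because PCA picks directions of maximum variance: without it, the features with the largest numeric ranges (such as Maximum Width) would dominate the components. A quick, hedged check using the dataframes defined above:

from sklearn.decomposition import PCA

# compare PCA on the raw vs. the standardized features
pca_raw = PCA(n_components=2).fit(data[data_columns])
pca_std = PCA(n_components=2).fit(data_norm[data_columns])

print(pca_raw.explained_variance_ratio_)
print(pca_std.explained_variance_ratio_)
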
# start with as many components as there are features (5)
no_components = 5
principal = PCA(n_components = no_components)
principal.fit(data_norm[data_columns])

data_transformed=principal.transform(data_norm[data_columns])
print(data_transformed.shape)
# (200, 5)

singular_values = principal.singular_values_
variance_ratio = principal.explained_variance_ratio_
# show variance vector for each dimension
print(variance_ratio)
print(variance_ratio.cumsum())
print(singular_values)
                         PC1             PC2             PC3             PC4             PC5
Explained Variance  9.57766957e-01  3.03370413e-02  9.32659482e-03  2.22707143e-03  3.42335531e-04
Cumulative Sum      0.95776696      0.988104        0.99743059      0.99965766      1.
Singular Values     30.94781021     5.50790717      3.05394742      1.49233757      0.58509446

Adding variables to a model can increase its performance if the added variable adds explanatory power, but too many variables - especially non-correlating or noisy dimensions - can lead to overfitting. As seen above, keeping only 2 (98.8%) or 3 (99.7%) of our 5 components already describes the dataset with high accuracy - the remaining components add little value.
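
Instead of reading the cumulative sum off manually, scikit-learn's PCA also accepts a float between 0 and 1 for n_components and then keeps just enough components to reach that fraction of explained variance. A short sketch reusing data_norm from above:

from sklearn.decomposition import PCA

# keep as many components as needed to explain 99% of the variance
pca_99 = PCA(n_components=0.99)
pca_99.fit(data_norm[data_columns])

print(pca_99.n_components_)                       # expected: 3, given the cumulative sums above
print(pca_99.explained_variance_ratio_.cumsum())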

Scree Plot

A scree plot graphs the eigenvalues (here: the singular values) of the principal components in descending order, from largest to smallest. It helps determine the number of principal components to retain by visualizing how much variation each additional component contributes to the dataset.

fig = plt.figure(figsize=(10, 6))
plt.plot(range(1, (no_components+1)), singular_values, marker='.')
y_label = plt.ylabel('Singular Values')
x_label = plt.xlabel('Principal Components')
plt.title('Scree Plot')

According to the scree test, we look for the "elbow" of the graph where the eigenvalues level off; the components to the left of this point are retained as significant - here this would be the first two or three components:

(Figure: scree plot of the singular values per principal component)

fig = plt.figure(figsize=(10, 6))

plt.plot(range(1, (no_components+1)), variance_ratio, marker='.', label='Explained Variance')
plt.plot(range(1, (no_components+1)), variance_ratio.cumsum(), marker='.', label='Cumulative Sum')

y_label = plt.ylabel('Explained Variance')
x_label = plt.xlabel('Principal Components')
plt.title('Percentage of Variance by Component')
plt.legend()

Plotting the amount of variance each component contributes to the dataset, together with its cumulative sum, shows the same "elbow" from which to pick our principal components:

(Figure: explained variance and cumulative sum per principal component)

Component PCA Weights

The Principal Component Analysis assigns a weight (loading) to each original feature within every component, showing which features drive each component and allowing us to discard components that do not help us classify the species in our dataset. Those weights can be visualized in a heatmap:

fig = plt.figure(figsize=(12, 8))
sns.heatmap(
    principal.components_,
    cmap='coolwarm',
    xticklabels=list(data.columns[3:-1]),
    annot=True)

(Figure: heatmap of the PCA component weights)
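
The same weights can also be inspected numerically by wrapping principal.components_ in a DataFrame, with one row per component and one column per original feature - a small sketch reusing the objects defined above:

# component loadings: rows = principal components, columns = original features
loadings = pd.DataFrame(
    principal.components_,
    columns=data_columns,
    index=['PC' + str(i+1) for i in range(no_components)])

loadings.round(3)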

Transformation and Visualization

# use 3 principal components out of the 5 components
print(data_transformed[:,:3])

# append the 3 principal components to the norm dataframe
data_norm[['PC1', 'PC2', 'PC3']] = data_transformed[:,:3]

data_norm.head()
  species   sex  index  Frontal Lobe  Rear Width  Carapace Midline  Maximum Width  Body Depth     class    PC1    PC2    PC3
0    Blue  Male      1        -2.146      -2.352            -2.254         -2.218      -2.058  BlueMale  4.928 -0.268 -0.122
1    Blue  Male      2        -1.945      -1.963            -1.972         -1.989      -1.941  BlueMale  4.386 -0.094 -0.039
2    Blue  Male      3        -1.831      -1.924            -1.846         -1.785      -1.853  BlueMale  4.129 -0.169  0.034
3    Blue  Male      4        -1.716      -1.885            -1.691         -1.696      -1.707  BlueMale  3.884 -0.246  0.015
4    Blue  Male      5        -1.659      -1.846            -1.662         -1.708      -1.707  BlueMale  3.834 -0.224 -0.015
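
To make the claim that three components are enough more tangible, we can fit a 3-component PCA, project the standardized features down, reconstruct them with inverse_transform, and measure the reconstruction error. A hedged sketch reusing data_norm and data_columns:

import numpy as np
from sklearn.decomposition import PCA

# project onto 3 components and reconstruct the 5 standardized features
pca_3 = PCA(n_components=3)
scores = pca_3.fit_transform(data_norm[data_columns])
reconstructed = pca_3.inverse_transform(scores)

# mean squared reconstruction error - small, since ~99.7% of the variance is kept
mse = np.mean((data_norm[data_columns].values - reconstructed) ** 2)
print(mse)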

2D Plot

fig = plt.figure(figsize=(12, 8))
_ = sns.scatterplot(x='PC1', y='PC2', hue='class', data=data_norm)

(Figure: scatter plot of PC1 vs PC2, coloured by class)

3D Plot

import plotly.express as px

class_colours = {
    'BlueMale': '#0027c4', #blue
    'BlueFemale': '#f18b0a', #orange
    'OrangeMale': '#0af10a', # green
    'OrangeFemale': '#ff1500', #red
}

colours = data['class'].apply(lambda x: class_colours[x])

x=data_norm.PC1
y=data_norm.PC2
z=data_norm.PC3

fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(projection='3d')

ax.scatter(xs=x, ys=y, zs=z, s=50, c=colours)

# the same plot as an interactive Plotly Express figure
plot = px.scatter_3d(
    data_norm,
    x = 'PC1',
    y = 'PC2',
    z='PC3',
    color='class')

plot.show()

(Figures: 3D scatter plots of PC1, PC2 and PC3, coloured by class - Matplotlib and interactive Plotly versions)

Separation! Nice :)