How To Implement Machine Learning Metrics From Scratch in Python

Machine learning models require thorough evaluation to assess their performance accurately, and we typically rely on a handful of standard metrics to quantify how well a model is doing. While many libraries offer pre-built functions for these metrics, implementing them from scratch is an instructive way to understand exactly what each one measures. In this article, we’ll walk through the process of implementing some common machine learning metrics in Python.

Setting Up the Environment

Before we start implementing machine learning metrics, let’s set up our Python environment by importing NumPy, which we’ll use throughout (scikit-learn and Matplotlib appear in later examples):

import numpy as np

We’ll also create some sample data for testing our metrics:

# Sample data for classification
true_labels = np.array([1, 0, 1, 1, 0, 1, 0, 1, 0, 0])
predicted_labels = np.array([1, 0, 0, 1, 1, 1, 0, 0, 0, 1])

# Sample data for regression
true_values = np.array([2.5, 3.0, 1.8, 4.2, 2.0])
predicted_values = np.array([2.7, 2.9, 1.5, 4.0, 2.2])

Implementing Classification Metrics

1. Accuracy

Accuracy measures the ratio of correctly predicted instances to the total instances.

def accuracy(y_true, y_pred):
    correct = np.sum(y_true == y_pred)
    total = len(y_true)
    return correct / total

accuracy_score = accuracy(true_labels, predicted_labels)
print("Accuracy:", accuracy_score)

2. Precision

Precision quantifies the number of true positives out of all predicted positives.

def precision(y_true, y_pred):
    true_positives = np.sum((y_true == 1) & (y_pred == 1))
    predicted_positives = np.sum(y_pred == 1)
    # Guard against division by zero when nothing is predicted positive
    if predicted_positives == 0:
        return 0.0
    return true_positives / predicted_positives

precision_score = precision(true_labels, predicted_labels)
print("Precision:", precision_score)

3. Recall

Recall calculates the number of true positives out of all actual positives.

def recall(y_true, y_pred):
    true_positives = np.sum((y_true == 1) & (y_pred == 1))
    actual_positives = np.sum(y_true == 1)
    # Guard against division by zero when there are no actual positives
    if actual_positives == 0:
        return 0.0
    return true_positives / actual_positives

recall_score = recall(true_labels, predicted_labels)
print("Recall:", recall_score)

4. F1-Score

The F1-score is the harmonic mean of precision and recall and provides a balanced measure.

def f1_score(y_true, y_pred):
    prec = precision(y_true, y_pred)
    rec = recall(y_true, y_pred)
    # Guard against division by zero when both precision and recall are zero
    if prec + rec == 0:
        return 0.0
    return 2 * (prec * rec) / (prec + rec)

f1_score_value = f1_score(true_labels, predicted_labels)
print("F1-Score:", f1_score_value)

Implementing Regression Metrics

1. Mean Absolute Error (MAE)

MAE calculates the average absolute difference between predicted and true values.

def mean_absolute_error(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

mae_score = mean_absolute_error(true_values, predicted_values)
print("Mean Absolute Error:", mae_score)

2. Mean Squared Error (MSE)

MSE calculates the average of the squared differences between predicted and true values.

def mean_squared_error(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

mse_score = mean_squared_error(true_values, predicted_values)
print("Mean Squared Error:", mse_score)

3. Root Mean Squared Error (RMSE)

RMSE is the square root of the MSE and provides a measure in the same units as the target variable.

def root_mean_squared_error(y_true, y_pred):
    return np.sqrt(mean_squared_error(y_true, y_pred))

rmse_score = root_mean_squared_error(true_values, predicted_values)
print("Root Mean Squared Error:", rmse_score)

Cross-Validation and Metric Evaluation

Evaluating your machine learning model’s performance on a single dataset may not provide a complete picture of its capabilities. Cross-validation is a technique used to assess model generalization by splitting your data into multiple subsets (folds) and training the model on different combinations of these subsets. You can then calculate metrics on each fold to get a more robust assessment of your model’s performance.

Here’s an example of how to perform cross-validation with a classification metric (e.g., accuracy) using scikit-learn:

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

# Synthetic data for illustration; in practice, use your own features X and labels y
X, y = make_classification(n_samples=200, n_features=5, random_state=42)

# Create a classification model (e.g., Logistic Regression)
model = LogisticRegression()

# Perform 5-fold cross-validation
cv_scores = cross_val_score(model, X, y, cv=5, scoring='accuracy')

# Calculate the average accuracy score
average_accuracy = np.mean(cv_scores)
print("Average Accuracy (Cross-Validation):", average_accuracy)

Custom Metrics for Specific Problems

While the common metrics covered earlier work well for most machine learning tasks, sometimes you may need to create custom metrics tailored to your specific problem. These metrics can capture domain-specific nuances and provide a better assessment of your model’s performance.

For instance, if you’re working on a medical diagnosis problem where false negatives are costly, you might prioritize recall over precision. Conversely, in a recommendation system, you may want to focus on a different set of metrics, such as Mean Average Precision (MAP) or Normalized Discounted Cumulative Gain (NDCG).
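As a concrete illustration, here is a minimal F-beta implementation built on the precision and recall functions above. With beta = 2 it weights recall more heavily than precision, which suits the medical diagnosis scenario:

def f_beta_score(y_true, y_pred, beta=2.0):
    # beta > 1 emphasizes recall; beta < 1 emphasizes precision
    prec = precision(y_true, y_pred)
    rec = recall(y_true, y_pred)
    if prec + rec == 0:
        return 0.0
    return (1 + beta**2) * (prec * rec) / (beta**2 * prec + rec)

print("F2-Score:", f_beta_score(true_labels, predicted_labels))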

Visualization of Metrics

Data visualization is a powerful tool for understanding the performance of your machine learning models. You can create various plots and charts to visualize how metrics change with different model configurations or hyperparameters. Some common visualization techniques include ROC curves for binary classification, precision-recall curves, and scatter plots to visualize regression metrics.

Here’s an example of creating a ROC curve using scikit-learn:

from sklearn.metrics import roc_curve, roc_auc_score
import matplotlib.pyplot as plt

# Example labels and predicted scores for illustration; in practice, use your
# model's predicted probabilities
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 0, 0])
y_scores = np.array([0.9, 0.2, 0.4, 0.8, 0.6, 0.7, 0.1, 0.3, 0.2, 0.5])

# Compute ROC curve and ROC area
fpr, tpr, _ = roc_curve(y_true, y_scores)
roc_auc = roc_auc_score(y_true, y_scores)

# Plot ROC curve
plt.figure()
plt.plot(fpr, tpr, color='darkorange', lw=2, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver Operating Characteristic')
plt.legend(loc='lower right')
plt.show()
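
The precision-recall curve mentioned above can be drawn the same way; this sketch reuses the y_true and y_scores arrays from the ROC example:

from sklearn.metrics import precision_recall_curve

# Compute precision-recall pairs across all decision thresholds
prec_vals, rec_vals, _ = precision_recall_curve(y_true, y_scores)

plt.figure()
plt.plot(rec_vals, prec_vals, color='darkorange', lw=2)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.title('Precision-Recall Curve')
plt.show()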

Conclusion

In this article, we’ve covered the implementation of common machine learning metrics from scratch in Python. We discussed classification metrics like accuracy, precision, recall, and F1-score, as well as regression metrics like MAE, MSE, and RMSE.

We also delved into the importance of cross-validation, the creation of custom metrics for specific problems, and the visualization of metrics using various plots and charts. Understanding and implementing these metrics will empower you to assess your machine learning models effectively and make informed decisions for model improvement.
