Models, MLFlow, and Microsoft Fabric
Fabric Madness part 5

A huge thanks to Martim Chaves, who co-authored this post and developed the example scripts.

So far in this series, we’ve looked at how to use Fabric for collecting data, feature engineering, and training models.

But now that we have our shiny new models, what do we do with them? How do we keep track of them, and how do we use them to make predictions? This is where MLFlow's Model Registry comes in, or what Fabric calls an ML Model.

A model registry allows us to keep track of different versions of a model and their respective performances. This is especially useful in production scenarios, where we need to deploy a specific version of a model for inference.

A Model Registry can be seen as source control for ML Models. Fundamentally, each version represents a distinct set of model files. These files contain the model’s architecture, its trained weights, as well as any other files necessary to load the model and use it.

In this post, we’ll discuss how to log models and how to use the model registry to keep track of different versions of a model. We’ll also discuss how to load a model from the registry and use it to make predictions.

Registering a Model

There are two ways to register a model in Fabric: via code or via the UI. Let’s look at both.

Registering a Model using code

In the previous post we looked at creating experiments and logging runs with different configurations. Logging or registering a model can be done using code within a run. To do that, we just have to add a couple of lines of code.

# Start the training job with `start_run()`
with mlflow.start_run(run_name="logging_a_model") as run:
  # Previous code...
  # Train model
  # Log metrics

  # Calculate predictions for training set
  predictions = model.predict(X_train_scaled_df)

  # Create Signature
  # Signature required for model loading later on
  signature = infer_signature(np.array(X_train_scaled_df), predictions)

  # Model File Name
  model_file_name = model_name + "_file"

  # Log model
  mlflow.tensorflow.log_model(best_model, model_file_name, signature=signature)

  # Get model URI
  model_uri = f"runs:/{run.info.run_id}/{model_file_name}"

  # Register Model
  result = mlflow.register_model(model_uri, model_name)

In this code snippet, we first calculate the predictions for the training set. Then, we create a signature, which essentially captures the input and output shapes of the model. This is necessary to ensure that the model can be loaded later on.

MLFlow has functions to log models made with different commonly used packages, such as TensorFlow, PyTorch, and scikit-learn. When mlflow.tensorflow.log_model is used, a folder containing the files needed to load and run the model is saved as an artifact attached to the run. These files include the architecture, the trained weights, and any other configuration necessary for reconstruction. This makes it possible to load the model later, either to do inference, fine-tune it, or perform any other regular model operations without having to re-run the original code that created it.

The model's URI is used as a "path" to the model file, and is made up of the run ID and the name of the file used for the model. Once we have the model's URI, we can register an ML Model using it.

What's neat about this is that if a model with the same name already exists, a new version is added. That way we can keep track of different versions of the same model, and see how they perform, without needing overly complex code to manage it all.
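
As a rough illustration (the run IDs and names below are placeholders, not values from this series):

import mlflow

# The first registration under a given name creates version 1
first = mlflow.register_model("runs:/<run_id_a>/example_model_file", "example_model")
print(first.version)  # "1"

# Registering a model from another run under the same name creates version 2
second = mlflow.register_model("runs:/<run_id_b>/example_model_file", "example_model")
print(second.version)  # "2"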

In our previous post, we ran three experiments, one for each model architecture, each tested with three different learning rates. For each model architecture, an ML Model was created, and for each learning rate, a version was saved. In total we now have 9 versions to choose from, each with a different architecture and learning rate.

Registering a Model using the UI

An ML Model can also be registered via Fabric’s UI. Model versions can be imported from the experiments that have been created.

Fig. 1 – Creating a ML Model using the UI. Image by author.

After creating an ML Model, we can import a model from an existing experiment. To do that, in a run, we have to select Save in the Save run as an ML Model section.

Fig. 2 – Creating a new version of the created ML Model from a run. Image by author.

Selecting Best Model

Now that we have registered all of the models, we can select the best one, either via the UI or via code. In the UI, this involves opening each experiment, switching to the list view, and selecting all of the available runs. After finding the best run, we would then have to check which ML Model and version it corresponds to.

Fig. 3 – Inspecting Experiment. Image by author.

Alternatively, it can also be done via code, by getting the performance of every version of every ML Model, and selecting the version with the best score.

from mlflow.tracking import MlflowClient

client = MlflowClient()

mlmodel_names = list(model_dict.keys())
# The Brier score is at most 1, so 2 is a safe "worst possible" starting point
best_score = 2
metric_name = "brier"
best_model = {"model_name": "", "model_version": -1}

for mlmodel in mlmodel_names:

    model_versions = client.search_model_versions(filter_string=f"name = '{mlmodel}'")

    for version in model_versions:

        # Get metric history for the Brier score using the version's run ID
        metric_history = client.get_metric_history(run_id=version.run_id,
                                                   key=metric_name)

        # If the score is better than the best score so far,
        # save the model name and version
        if metric_history:
            last_value = metric_history[-1].value
            if last_value < best_score:
                best_model["model_name"] = mlmodel
                best_model["model_version"] = version.version
                best_score = last_value
        else:
            continue

In this code snippet, we get a list of all of the available ML Models. Then, we iterate over this list and get all of the available versions of each ML Model.

Getting a list of the versions of an ML Model can be done using the following line:

model_versions = client.search_model_versions(filter_string=f"name = '{mlmodel}'")

Then, for each version, we simply have to get its metric history. That can be done with the following line:

metric_history = client.get_metric_history(run_id=version.run_id,
                                         key=metric_name)

After that, we simply have to keep track of the best performing version. At the end of this process, we have found the best performing model overall, regardless of architecture and hyperparameters.

Loading the Best Model

After finding the best model, using it to get the final predictions can be done using the following code snippet:

# Load the best model
loaded_best_model = mlflow.pyfunc.load_model(f"models:/{best_model['model_name']}/{best_model['model_version']}")

# Evaluate the best model
final_brier_score = evaluate_model(loaded_best_model, X_test_scaled_df, y_test)
print(f"Best final Brier score: {final_brier_score}")

Loading the model can be done using mlflow.pyfunc.load_model(), and the only argument that is needed is the model’s path. The path of the model is made up of its name and version, in a models:/[model name]/[version] format. After that, we just have to make sure that the input is the same shape and the features are in the same order as when it was trained – and that’s it!

Using the test set, we calculated the final Brier Score, 0.20.
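
The evaluate_model helper isn't shown in this series; here is a minimal sketch of what it boils down to, assuming the loaded model outputs a win probability per row and y_test holds the 0/1 outcomes:

import numpy as np

def evaluate_model(model, X_test, y_test):
    # Predicted win probabilities, flattened to a 1D array
    y_pred = np.array(model.predict(X_test)).reshape(-1)
    # Brier score: mean squared difference between predicted
    # probability and actual outcome (lower is better)
    return float(np.mean((y_pred - np.array(y_test).reshape(-1)) ** 2))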

Conclusion

In this post we discussed the ideas behind a model registry, and why it’s beneficial to use one. We showed how Fabric’s model registry can be used, through the ML Model tool, either via the UI or code. Finally, we looked at loading a model from the registry, to do inference.

This concludes our Fabric series. We hope you enjoyed it and that you learned something new. If you have any questions or comments, feel free to reach out to us. We'd love to hear from you! 👋


Originally published at https://nobledynamic.com on April 29, 2024.

Experimenting with MLFlow and Microsoft Fabric
Fabric Madness part 4

A huge thanks to Martim Chaves, who co-authored this post and developed the example scripts.

It’s no secret that Machine Learning (ML) systems require careful tuning to become truly useful, and it would be an extremely rare occurrence for a model to work perfectly the first time it’s run!

When first starting out on your ML journey, an easy trap to fall into is to try lots of different things to improve performance without recording these configurations along the way. This then makes it difficult to know which configuration (or combination of configurations) had the best performance.

When developing models, there are lots of "knobs" and "levers" that can be adjusted, and often the best way to improve is to try different configurations and see which one works best. These things include improving the features being used, trying different model architectures, adjusting the model’s hyperparameters, and others. Experimentation needs to be systematic, and the results need to be logged. That’s why having a good setup to carry out these experiments is fundamental in the development of any practical ML System, in the same way that source control is fundamental for code.

This is where experiments come into play. Experiments are a way to keep track of these different configurations, and the results that come from them.

What's great about experiments in Fabric is that they are actually a wrapper for MLFlow, a hugely popular, open-source platform for managing the end-to-end machine learning lifecycle. This means that we can use all of the great features that MLFlow has to offer, but with the added benefit of not having to worry about setting up the infrastructure that a collaborative MLFlow environment would require. This allows us to focus on the fun stuff 😎 !

In this post, we’ll be going over how to use experiments in Fabric, and how to log and analyse the results of these experiments. Specifically, we’ll cover:

  • How does MLFlow work?
  • Creating and Setting experiments
  • Running experiments and Logging Results
  • Analysing Results

At a high level, MLFlow is a platform that helps manage the end-to-end machine learning lifecycle. It’s a tool that helps with tracking experiments, packaging code into reproducible runs, and sharing and deploying models. It’s essentially a database that’s dedicated to keeping track of all the different configurations and results of the experiments that you run.

There are two main organisational structures in MLFlow – experiments and runs.

An experiment is a group of runs, where a run is the execution of a block of code, a function or a script. This could be training a model, but it could also be used to track anything where things might change between runs. An experiment is then a way to group related runs.

For each run, information can be logged and attached to it – these could be metrics, hyperparameters, tags, artifacts (like plots, files or other useful outputs), and even models! By attaching models to runs, we can keep track of which model was used in which run, and how it performed. Think of it like source control for models, which is something we’ll go into in the next post.

Runs can be filtered and compared. This allows us to understand which runs were more successful, and select the best performing run and use its setup (for example, in deployment).
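
Beyond the UI, runs can also be pulled into a pandas DataFrame with MLFlow's search API, which makes this kind of comparison easy to script. Here's a small sketch (assuming a reasonably recent MLFlow version; the experiment, metric, and parameter names match the ones used later in this post):

import mlflow

# Fetch every run of an experiment as a pandas DataFrame
runs_df = mlflow.search_runs(experiment_names=["experiment_v2_model_sma"])

# Logged metrics and parameters show up as "metrics.*" / "params.*" columns
best_runs = runs_df.sort_values("metrics.brier").head(5)
print(best_runs[["run_id", "metrics.brier", "params.lr"]])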

Now that we’ve covered the basics of how MLFlow works, let’s get into how we can use it in Fabric!

Creating and setting experiments

Like everything in Fabric, creating items can be done in a few ways: from the workspace + New menu, using the Data Science experience, or in code. In this case, we'll be using the Data Science experience.

Fig. 1 – Creating an Experiment using the UI. Image by author.

Once that is done, to use that experiment in a Notebook, we need to import mlflow and set up the experiment name:

import mlflow

experiment_name = "[name of the experiment goes here]"

# Set the experiment
mlflow.set_experiment(experiment_name)

Alternatively, an experiment can be created from code, which requires one extra command:

import mlflow

experiment_name = "[name of the experiment goes here]"

# First create the experiment
mlflow.create_experiment(name=experiment_name)

# Then select it
mlflow.set_experiment(experiment_name)

Note that, if an experiment with that name already exists, create_experiment will throw an error. We can avoid this by first checking for the existence of an experiment, and only creating it if it doesn’t exist:

# Check if experiment exists
# if not, create it
if not mlflow.get_experiment_by_name(experiment_name):
    mlflow.create_experiment(name=experiment_name)

Now that we have the experiment set in the current context, we can start running code that will be saved to that experiment.

Running experiments and logging results

To start logging our results to an experiment, we need to start a run. This is done using the start_run() function, which returns a run context manager. Here's an example of how to start a run:


# Start the training job with `start_run()`
with mlflow.start_run(run_name="example_run") as run:
    # rest of the code goes here

Once the run is started, we can then begin logging metrics, parameters, and artifacts. Here’s an example of code that would do that using a simple model and dataset, where we log the model’s score and the hyperparameters used:

# Set the hyperparameters
hyper_params = {"alpha": 0.5, "beta": 1.2}

# Start the training job with `start_run()`
with mlflow.start_run(run_name="simple_training") as run:
    # Create model and dataset
    model = create_model(hyper_params)
    X, y = create_dataset()

    # Train model
    model.fit(X, y)

    # Calculate score
    score = model.score(X, y)

    # Log metrics and hyper-parameters
    print("Log metric.")
    mlflow.log_metric("score", score)

    print("Log params.")
    mlflow.log_param("alpha", hyper_params["alpha"])
    mlflow.log_param("beta", hyper_params["beta"])

In our example above, a simple model is trained, and its score is calculated. Note how metrics can be logged by using mlflow.log_metric("metric_name", metric) and hyperparameters can be logged using mlflow.log_param("param_name", param).
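
If there are many hyperparameters, MLFlow also accepts a whole dictionary in a single call:

# Log every entry of the hyperparameter dictionary at once
mlflow.log_params(hyper_params)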

The Data

Let's now look at the code used for training our models, which are based on the outcome of basketball games. The data we are looking at is from the 2024 US college basketball tournaments, and was obtained from the March Machine Learning Mania 2024 Kaggle competition, the details of which can be found here. It is licensed under CC BY 4.0.

In our setup, we wanted to try three different models that used an increasing number of parameters. For each model, we also wanted to try three different learning rates (a hyperparameter that controls how much we adjust the weights of our network on each iteration). The goal was to find the model and learning rate combination that would give us the best Brier score on the test set.
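
For context, the Brier score is the mean squared difference between the predicted probability of a team winning and the actual outcome (1 for a win, 0 for a loss), so lower is better. A quick sketch of the calculation:

import numpy as np

def brier_score(predicted_probs, outcomes):
    # Mean squared difference between predicted win probability
    # and the actual result (1 = win, 0 = loss); lower is better
    return np.mean((np.array(predicted_probs) - np.array(outcomes)) ** 2)

print(brier_score([0.9, 0.4], [1, 0]))  # (0.01 + 0.16) / 2 = 0.085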

The Models

To define the model architecture, we used TensorFlow, creating three simple neural networks. Here are the functions that helped define the models.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

def create_model_small(input_shape):
    model = Sequential([
        Dense(64, activation='relu', input_shape=(input_shape,)),
        Dense(1, activation='sigmoid')
    ])
    return model

def create_model_medium(input_shape):
    model = Sequential([
        Dense(64, activation='relu', input_shape=(input_shape,)),
        Dense(64, activation='relu'),
        Dense(1, activation='sigmoid')
    ])
    return model

def create_model_large(input_shape):
    model = Sequential([
        Dense(128, activation='relu', input_shape=(input_shape,)),
        Dense(64, activation='relu'),
        Dense(64, activation='relu'),
        Dense(1, activation='sigmoid')
    ])
    return model

Creating our models in this way allows us to easily experiment with different architectures, and see how they perform. We can then use a dictionary to create a little model factory, that will allow us to easily create the models we want to experiment with.

We also defined the input shape, which was the number of features that were available. We decided to train the models for 100 epochs, which should be enough for convergence 🤞.

model_dict = {
    'model_sma': create_model_small,   # small
    'model_med': create_model_medium,  # medium
    'model_lar': create_model_large    # large
}

input_shape = X_train_scaled_df.shape[1]
epochs = 100

After this initial setup, it was time to iterate over the models' dictionary. For each model, an experiment was created. Note how we're using the code snippet from before, where we first check if the experiment exists and only create it if it doesn't. Either way, we then set it as the active experiment.

import mlflow

for model_name in model_dict:

    # create mlflow experiment
    experiment_name = "experiment_v2_" + model_name

    # Check if experiment exists
    # if not, create it
    if not mlflow.get_experiment_by_name(experiment_name):
        mlflow.create_experiment(name=experiment_name)

    # Set experiment
    mlflow.set_experiment(experiment_name)

Having set the experiment, we then performed three runs for each model, trying out different learning rates [0.001, 0.01, 0.1].

for model_name in model_dict:

    # Set the experiment
    ...

    learning_rate_list = [0.001, 0.01, 0.1]

    for lr in learning_rate_list:

        # Create run name for better identification
        run_name = f"{model_name}_{lr}"
        with mlflow.start_run(run_name=run_name) as run:
            ...
            # Train model
            # Save metrics

Then, in each run, we initialised a model, compiled it, and trained it. The compilation and training were done in a separate function, which we'll go into next. As we wanted to set the learning rate, we had to manually initialise the Adam optimiser. As our loss function we used Mean Squared Error (MSE), saving the model with the best validation loss, and we logged both the training and validation loss to ensure that the model was converging.

from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.models import load_model

def compile_and_train(model, X_train, y_train, X_val, y_val, epochs=100, learning_rate=0.001):
    # Instantiate the Adam optimiser with the desired learning rate
    optimiser = Adam(learning_rate=learning_rate)

    model.compile(optimizer=optimiser, loss='mean_squared_error', metrics=['mean_squared_error'])

    # Checkpoint to save the best model according to validation loss
    checkpoint_cb = ModelCheckpoint("best_model.h5", save_best_only=True, monitor='val_loss')

    history = model.fit(X_train, y_train, validation_data=(X_val, y_val),
                        epochs=epochs, callbacks=[checkpoint_cb], verbose=1)

    # Load and return the best model saved during training
    best_model = load_model("best_model.h5")
    return history, best_model

Having initialised, compiled, and trained a model, the next step was logging the training and validation losses, calculating the Brier score for the test set, and then logging the score and the learning rate used. Typically, we would also log the training and validation loss using the step argument in log_metric, like so:

# Log training and validation losses
for epoch in range(epochs):
    train_loss = history.history['loss'][epoch]
    val_loss = history.history['val_loss'][epoch]
    mlflow.log_metric("train_loss", train_loss, step=epoch)
    mlflow.log_metric("val_loss", val_loss, step=epoch)

However, we opted to create the training and validation loss plot ourselves using matplotlib and log that as an artifact.

Here’s the plot function:

import matplotlib.pyplot as plt

def create_and_save_plot(train_loss, val_loss, model_name, lr):
    epochs = range(1, len(train_loss) + 1)

    # Creating the plot
    plt.figure(figsize=(10, 6))
    plt.plot(epochs, train_loss, 'b', label='Training loss')
    plt.plot(epochs, val_loss, 'r', label='Validation loss')
    plt.title('Training and Validation Loss')
    plt.xlabel('Epochs')
    plt.ylabel('Loss')
    plt.legend()
    plt.grid(True)

    plt.title(f"Training and Validation Loss (M: {model_name}, LR: {lr})")

    # Save plot to a file
    plot_path = f"{model_name}_{lr}_loss_plot.png"
    plt.savefig(plot_path)
    plt.close()

    return plot_path

Putting everything together, here’s what the code for that looks like:


with mlflow.start_run(run_name=run_name) as run:
    # Create model and dataset
    model = model_dict[model_name](input_shape)

    # Train model
    history, best_model = compile_and_train(model,
                                            X_train_scaled_df, y_train,
                                            X_validation_scaled_df, y_validation,
                                            epochs,
                                            lr)

    # Log training and validation loss plot as an artifact
    train_loss = history.history['loss']
    val_loss = history.history['val_loss']

    plot_path = create_and_save_plot(train_loss, val_loss, model_name, lr)
    mlflow.log_artifact(plot_path)

    # Calculate score
    brier_score = evaluate_model(best_model, X_test_scaled_df, y_test)

    # Log metrics and hyper-parameters
    mlflow.log_metric("brier", brier_score)

    # Log hyper-param
    mlflow.log_param("lr", lr)

    # Log model
    ...

For each run we also logged the model, which will be useful later on.

The experiments were run, creating an experiment for each model, and three runs within each experiment, one for each learning rate.

Analysing results

Now that we’ve run some experiments, it’s time to analyse the results! To do this, we can go back to the workspace, where we’ll find our newly created experiments with several runs.

Fig. 2 – List of experiments. Image by author.

Clicking on one experiment, here’s what we’ll see:

Fig. 3 – The Experiment UI. Image by author.

On the left we'll find all of the runs related to that experiment. In this case, we're looking at the small model experiment. For each run, there are two artifacts: the validation loss plot and the trained model. There's also information about the run's properties – its status and duration – as well as the metrics and hyper-parameters logged.

By clicking on the View run list, under the Compare runs section, we can compare the different runs.

Fig. 4 – Comparing runs. Image by author.

Inside the run list view, we can select the runs that we wish to compare. In the metric comparison tab, we can find plots that show the Brier score against the learning rate. In our case, it looks like the lower the learning rate, the better the score. We could even go further and create more plots for the different metrics against other hyperparameters (if different metrics and hyperparameters had been logged).

Fig. 5 – Plot that shows Brier score against learning rate. Image by author.

Perhaps we would like to filter the runs – that can be done using Filters. For example, we can select the runs that have a Brier score lower than 0.25. Filters can be created based on logged metrics, parameters, and the runs' properties.

Fig. 6 – Filtering runs based on their Brier score. Image by author.

By doing this, we can visually compare the different runs and assess which configuration led to the best performance. This can also be done using code – this is something that will be further explored in the next post.

Using the experiment UI, we are then able to visually explore the different experiments and runs, comparing and filtering them as needed, to understand which configuration works best.

Conclusion

And that wraps up our exploration of experiments in Fabric!

Not only did we cover how to create and set up experiments, but we also went through how to run experiments and log the results. We also showed how to analyse the results, using the experiment UI to compare and filter runs.

In the next post, we’ll be looking at how to select the best model, and how to deploy it. Stay tuned!


Originally published at https://nobledynamic.com on April 22, 2024.

Feature Engineering with Microsoft Fabric and Dataflow Gen2
Fabric Madness part 3

In the previous post, we discussed how to use Notebooks with PySpark for Feature Engineering. While Spark offers a lot of flexibility and power, it can be quite complex and requires a lot of code to get started. Not everyone is comfortable with writing code or has the time to learn a new programming language, which is where Dataflow Gen2 comes in.

What is Dataflow Gen2?

Dataflow Gen2 is a low-code data transformation and integration engine that allows you to create data pipelines for loading data from a wide variety of sources into Microsoft Fabric. It's based on Power Query, which is integrated into many Microsoft products, such as Excel, Power BI, and Azure Data Factory. Dataflow Gen2 is a great tool for creating data pipelines without code via a visual interface, making it possible to build them quickly. If you are already familiar with Power Query or are not afraid of writing code, you can also use the underlying M ("Mashup") language to create more complex transformations.

In this post, we will walk through how to use Dataflow Gen2 to create the same features needed to train our machine learning model. We will use the same dataset as in the previous post, which contains data about college basketball games.

Fig. 1 – The final result. Image by author.

The Challenge

There are two datasets that we will be using to create our features: the regular season games and the tournament games. These two datasets are also split into the Men's and Women's tournaments, which will need to be combined into a single dataset. In total there are four csv files that need to be combined and transformed into two separate tables in the Lakehouse.

Using Dataflows, there are multiple ways to solve this problem, and in this post I want to show three different approaches: a no code approach, a low code approach, and finally a more advanced all code approach.

The no code approach

The first and simplest approach is to use the Dataflow Gen2 visual interface to load the data and create the features.

The Data

The data we are looking at is from the 2024 US college basketball tournaments, and was obtained from the ongoing March Machine Learning Mania 2024 Kaggle competition, the details of which can be found here. It is licensed under CC BY 4.0.

Loading the data

The first step is to get the data from the Lakehouse, which can be done by selecting the "Get Data" button in the Home ribbon and then selecting More… from the list of data sources.

Fig. 2 – Choosing a data source. Image by author.

From the list, select OneLake data hub to find the Lakehouse and then once selected, find the csv file in the Files folder.

Fig. 3 – Select the csv file. Image by author.

This will create a new query with four steps, which are:

  • Source: A function that queries the Lakehouse for all the contents.
  • Navigation 1: Converts the contents of the Lakehouse into a table.
  • Navigation 2: Filters the table to retrieve the selected csv file by name.
  • Imported CSV: Converts the binary file into a table.
Fig. 4 – Initial load. Image by author.

Now that the data is loaded we can start with some basic data preparation to get it into a format that we can use to create our features. The first thing we need to do is set the column names to be based on the first row of the dataset. This can be done by selecting the "Use first row as headers" option in either the Transform group on the Home ribbon or in the Transform menu item.

The next step is to rename the column "WLoc" to "location" by either selecting the column in the table view, or by right clicking on the column and selecting "Rename".

The location column contains the location of the game, which is either "H" for home, "A" for away, or "N" for neutral. For our purposes, we want to convert this to a numerical value, where "H" is 1, "A" is -1, and "N" is 0, as this will make it easier to use in our model. This can be done by selecting the column and then using the Replace values… transform in the Transform menu item.

Fig. 5 – Replace Values. Image by author.

This will need to be done for the other two location values as well.

Finally, we need to change the data type of the location column to be a Whole number instead of Text. This can be done by selecting the column and then selecting the data type from the drop down list in the Transform group on the Home ribbon.

Fig. 6 – Final data load. Image by author.

Instead of repeating the Replace values step for each of the location types, a little bit of M code can be used to replace all of the values in the location column. This can be done by selecting the previous transform in the query (Renamed columns) and then selecting the Insert step button in the formula bar. This will add a new step, and you can enter the following code to replace the values in the location column.

Table.ReplaceValue(#"Renamed columns", each [location], each if Text.Contains([location], "H") then "1" else if Text.Contains([location], "A") then "-1" else "0", Replacer.ReplaceText, {"location"})

Adding features

We’ve got the data loaded, but it’s still not right for our model. Each row in the dataset represents a game between two teams, and includes the scores and statistics for both the winning and losing team in a single wide table. We need to create features that represent the performance of each team in the game and to have a row per team per game.

To do this we need to split the data into two tables, one for the winning team and one for the losing team. The simplest way to do this is to create a new query for each team and then merge them back together at the end. There are a few ways that this could be done; however, to keep things simple and understandable (especially if we ever need to come back to this later), we will create two references to the source query and then append them together again, after doing some light transformations.

Referencing a query can be done either from the Queries panel on the left, or by selecting the context menu of the query if using Diagram view. This will create a new query that references the original query, and any changes made to the original query will be reflected in the new query. I did this twice, once for the winning team and once for the losing team, and then renamed the columns by prefixing them with "T1" and "T2" respectively.

Fig. 7 – Split the dataset. Image by author.

Once the column values are set, we can then combine the two queries back together by using Append Queries and then create our first feature, which is the point difference between the two teams. This can be done by selecting the T1_Score and T2_Score columns and then selecting "Subtract" from the "Standard" group on the Add column ribbon.

Now that’s done, we can then load the data into the Lakehouse as a new table. The final result should look something like this:

Fig. 8 – All joined up. Image by author.

There are a few limitations with the no code approach, the main one is that it’s not easy to reuse queries or transformations. In the above example we would need to repeat the same steps another three times to load each of the individual csv files. This is where copy / paste comes in handy, but it’s not ideal. Let’s look at a low code approach next.

The low code approach

In the low code approach we will use a combination of the visual interface and the M language to load and transform the data. This approach is more flexible than the no code approach, but still doesn’t require a lot of code to be written.

Loading the data

The goal of the low code approach is to reduce the number of repeated queries that are needed and to make it easier to reuse transformations. To do this we will take advantage of the fact that Power Query is a functional language and that we can create functions to encapsulate the transformations that we want to apply to the data. When we first loaded the data from the Lakehouse there were four steps that were created, the second step was to convert the contents of the Lakehouse into a table, with each row containing a reference to a binary csv file. We can use this as the input into a function, which will load the csv into a new table, using the Invoke custom function transformation for each row of the table.

Fig. 9 – Lakehouse query with the binary csv files in a column called Content. Image by author.

To create the function, select "Blank query" from the Get data menu, or right click the Queries panel and select "New query" > "Blank query". In the new query window, enter the following code:

(TableContents as binary) =>let
  Source = Csv.Document(TableContents, [Delimiter = ",", Columns = 34, QuoteStyle = QuoteStyle.None]),
  PromoteHeaders = Table.PromoteHeaders(Source, [PromoteAllScalars = true])
in
  PromoteHeaders

The code of this function has been copied from our initial no code approach, but instead of loading the csv file directly, it takes a parameter called TableContents, reads it as a csv file with Csv.Document, and then sets the first row of the data to be the column headers with Table.PromoteHeaders.

We can then use the Invoke custom function transformation to apply this function to each row of the Lakehouse query. This can be done by selecting the "Invoke custom function" transformation from the Add column ribbon and then selecting the function that we just created.

Fig. 10 – Invoke custom function. Image by author.

This will create a new column in the Lakehouse query, with the entire contents of the csv file loaded into a table, which is represented as [Table] in the table view. We can then use the expand function on the column heading to expand the table into individual columns.

Fig. 11 – Expand columns. Image by author.

The result effectively combines the two csv files into a single table, which we can then continue to create our features from as before.

There are still some limitations with this approach, while we’ve reduced the number of repeated queries, we still need to duplicate everything for both the regular season and tournament games datasets. This is where the all code approach comes in.

The all code approach

The all code approach is the most flexible and powerful approach, but also requires the most amount of code to be written. This approach is best suited for those who are comfortable with writing code and want to have full control over the transformations that are applied to the data.

Essentially what we’ll do is grab all the M code that was generated in each of the queries and combine them into a single query. This will allow us to load all the csv files in a single query and then apply the transformations to each of them in a single step. To get all the M code, we can select each query and then click on the Advanced Editor from the Home ribbon, which displays all the M code that was generated for that query. We can then copy and paste this code into a new query and then combine them all together.

To do this, we need to create a new blank query and then enter the following code:

(TourneyType as text) => let
  Source = Lakehouse.Contents(null){[workspaceId = "..."]}[Data]{[lakehouseId = "..."]}[Data],
  #"Navigation 1" = Source{[Id = "Files", ItemKind = "Folder"]}[Data],
  #"Filtered rows" = Table.SelectRows(#"Navigation 1", each Text.Contains([Name], TourneyType)),
  #"Invoked custom function" = Table.AddColumn(#"Filtered rows", "Invoked custom function", each LoadCSV([Content])),
  #"Removed columns" = Table.RemoveColumns(#"Invoked custom function", {"Content", "Name", "Extension", "Date accessed", "Date modified", "Date created", "Attributes", "Folder Path", "ItemKind", "IsLeaf"}),
  #"Expanded Invoked custom function" = Table.ExpandTableColumn(#"Removed columns", "Invoked custom function", {"Season", "DayNum", "WTeamID", "WScore", "LTeamID", "LScore", "WLoc", "NumOT", "WFGM", "WFGA", "WFGM3", "WFGA3", "WFTM", "WFTA", "WOR", "WDR", "WAst", "WTO", "WStl", "WBlk", "WPF", "LFGM", "LFGA", "LFGM3", "LFGA3", "LFTM", "LFTA", "LOR", "LDR", "LAst", "LTO", "LStl", "LBlk", "LPF"}, {"Season", "DayNum", "WTeamID", "WScore", "LTeamID", "LScore", "WLoc", "NumOT", "WFGM", "WFGA", "WFGM3", "WFGA3", "WFTM", "WFTA", "WOR", "WDR", "WAst", "WTO", "WStl", "WBlk", "WPF", "LFGM", "LFGA", "LFGM3", "LFGA3", "LFTM", "LFTA", "LOR", "LDR", "LAst", "LTO", "LStl", "LBlk", "LPF"}),
  #"Renamed columns" = Table.RenameColumns(#"Expanded Invoked custom function", {{"WLoc", "location"}}),
  Custom = Table.ReplaceValue(#"Renamed columns", each [location], each if Text.Contains([location], "H") then "1" else if Text.Contains([location], "A") then "-1" else "0", Replacer.ReplaceText, {"location"}),
  #"Change Types" = Table.TransformColumnTypes(Custom, {{"Season", Int64.Type}, {"DayNum", Int64.Type}, {"WTeamID", Int64.Type}, {"WScore", Int64.Type}, {"LTeamID", Int64.Type}, {"LScore", Int64.Type}, {"location", Int64.Type}, {"NumOT", Int64.Type}, {"WFGM", Int64.Type}, {"WFGA", Int64.Type}, {"WFGM3", Int64.Type}, {"WFGA3", Int64.Type}, {"WFTM", Int64.Type}, {"WFTA", Int64.Type}, {"WOR", Int64.Type}, {"WDR", Int64.Type}, {"WAst", Int64.Type}, {"WTO", Int64.Type}, {"WStl", Int64.Type}, {"WBlk", Int64.Type}, {"WPF", Int64.Type}, {"LFGM", Int64.Type}, {"LFGA", Int64.Type}, {"LFGM3", Int64.Type}, {"LFGA3", Int64.Type}, {"LFTM", Int64.Type}, {"LFTA", Int64.Type}, {"LOR", Int64.Type}, {"LDR", Int64.Type}, {"LAst", Int64.Type}, {"LTO", Int64.Type}, {"LStl", Int64.Type}, {"LBlk", Int64.Type}, {"LPF", Int64.Type}}),
  Winners = Table.TransformColumnNames(#"Change Types", each if Text.StartsWith(_, "W") then Text.Replace(_, "W", "T1_") else Text.Replace(_, "L", "T2_")),
  #"Rename L" = Table.TransformColumnNames(#"Change Types", each if Text.StartsWith(_, "W") then Text.Replace(_, "W", "T2_") else Text.Replace(_, "L", "T1_")),
  #"Replaced Value L" = Table.ReplaceValue(#"Rename L", each [location], each if [location] = 1 then -1 else if Text.Contains([location], -1) then 1 else [location], Replacer.ReplaceValue, {"location"}),
  Losers = Table.TransformColumnTypes(#"Replaced Value L", {{"location", Int64.Type}}),
  Combined = Table.Combine({Winners, Losers}),
  PointDiff = Table.AddColumn(Combined, "PointDiff", each [T1_Score] - [T2_Score], Int64.Type)
in
  PointDiff

Note: the Lakehouse connection values have been removed

What’s happening here is that we’re:

  1. Loading the data from the Lakehouse;
  2. Filtering the rows to only include the csv files that match the TourneyType parameter;
  3. Loading the csv files into tables;
  4. Expanding the tables into columns;
  5. Renaming the columns;
  6. Changing the data types;
  7. Combining the two tables back together;
  8. Calculating the point difference between the two teams.

Using the query is then as simple as selecting it, and then invoking the function with the TourneyType parameter.

Fig. 12 – Invoke function. Image by author.

This will create a new query with the function as its source, and the data loaded and transformed. It's then just a case of loading the data into the Lakehouse as a new table.

Fig. 13 – Function load. Image by author.

As you can see, the LoadTournamentData function is invoked with the parameter "RegularSeasonDetailedResults" which will load both the Men’s and Women’s regular season games into a single table.

Conclusion

And that’s it!

Hopefully this post has given you a good overview of how to use Dataflow Gen2 to prepare data and create features for your machine learning model. Its low code approach makes it easy to create data pipelines quickly, and it contains a lot of powerful features that can be used to create complex transformations. It’s a great first port of call for anyone who needs to transform data, but more importantly, has the benefit of not needing to write complex code that is prone to errors, is hard to test, and is difficult to maintain.

At the time of writing, Dataflow Gen2 is not supported by the Git integration, so it's not possible to version control or share dataflows. This feature is expected to be released in Q4 2024.


Originally published at https://nobledynamic.com on April 15, 2024.

Feature Engineering with Microsoft Fabric and PySpark
Fabric Madness part 2

A huge thanks to Martim Chaves, who co-authored this post and developed the example scripts.

In our previous post we took a high level view of how to train a machine learning model in Microsoft Fabric. In this post we wanted to dive deeper into the process of feature engineering.

Feature engineering is a crucial part of the development lifecycle for any Machine Learning (ML) system. It is a step in the development cycle where raw data is processed to better represent its underlying structure and provide additional information that enhances our ML models. Feature engineering is both an art and a science. Even though there are specific steps that we can take to create good features, sometimes it is only through experimentation that good results are achieved. Good features are crucial in guaranteeing good system performance.

As datasets grow exponentially, traditional feature engineering may struggle with the size of very large datasets. This is where PySpark can help, as it is a scalable and efficient platform for processing massive datasets. A great thing about Fabric is that it makes using PySpark easy!

In this post, we’ll be going over:

  • How does PySpark Work?
  • Basics of PySpark
  • Feature Engineering in Action

By the end of this post, hopefully you’ll feel comfortable carrying out feature engineering with PySpark in Fabric. Let’s get started!

How does PySpark work?

Spark is a distributed computing system that allows for the processing of large datasets with speed and efficiency across a cluster of machines. It is built around the concept of a Resilient Distributed Dataset (RDD), which is a fault-tolerant collection of elements that can be operated on in parallel. RDDs are the fundamental data structure of Spark, and they allow for the distribution of data across a cluster of machines.

PySpark is the Python API for Spark. It allows for the creation of Spark DataFrames, which are similar to Pandas DataFrames, but with the added benefit of being distributed across a cluster of machines. PySpark DataFrames are the core data structure in PySpark, and they allow for the manipulation of large datasets in a distributed manner.

At the core of PySpark is the SparkSession object, which is what fundamentally interacts with Spark. This SparkSession is what allows for the creation of DataFrames, and other functionalities. Note that, when running a Notebook in Fabric, a SparkSession is automatically created for you, so you don’t have to worry about that.

Having a rough idea of how PySpark works, let’s get to the basics.

Basics of PySpark

Although Spark DataFrames may remind us of Pandas DataFrames due to their similarities, the syntax when using PySpark can be a bit different. In this section, we’ll go over some of the basics of PySpark, such as reading data, combining DataFrames, selecting columns, grouping data, joining DataFrames, and using functions.

The Data

The data we are looking at is from the 2024 US college basketball tournaments, and was obtained from the ongoing March Machine Learning Mania 2024 Kaggle competition, the details of which can be found here. It is licensed under CC BY 4.0 [1].

Reading data

As mentioned in the previous post of this series, the first step is usually to create a Lakehouse and upload some data. Then, when creating a Notebook, we can attach it to the created Lakehouse, and we’ll have access to the data stored there.

PySpark Dataframes can read various data formats, such as CSV, JSON, Parquet, and others. Our data is stored in CSV format, so we’ll be using that, like in the following code snippet:

# Read women's data
w_data = (
    spark.read.option("header", True)
    .option("inferSchema", True)
    .csv(f"Files/WNCAATourneyDetailedResults.csv")
    .cache()
)

In this code snippet, we’re reading the detailed results data set of the final women’s basketball college tournament matches. Note that the "header" option being true means that the names of the columns will be derived from the first row of the CSV file. The inferSchema option tells Spark to guess the data types of the columns – otherwise they would all be read as strings. .cache() is used to keep the DataFrame in memory.

If you’re coming from Pandas, you may be wondering what the equivalent of df.head() is for PySpark – it’s df.show(5). The default for .show() is the top 20 rows, hence the need to specifically select 5.
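
For instance, to take a quick look at the data we just read:

# Peek at the first 5 rows (the default for .show() is 20)
w_data.show(5)

# Check the column names and the data types that were inferred
w_data.printSchema()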

Combining DataFrames

Combining DataFrames can be done in multiple ways. The first we will look at is a union, where the columns are the same for both DataFrames:

# Read women's data
...

# Read men's data
m_data = (
    spark.read.option("header", True)
    .option("inferSchema", True)
    .csv(f"Files/MNCAATourneyDetailedResults.csv")
    .cache()
)

# Combine (union) the DataFrames
combined_results = m_data.unionByName(w_data)

Here, unionByName joins the two DataFrames by matching the names of the columns. Since both the women’s and the men’s detailed match results have the same columns, this is a good approach. Alternatively, there’s also union, which combines two DataFrames, matching column positions.

Selecting Columns

Selecting columns from a DataFrame in PySpark can be done using the .select() method. We just have to indicate the name or names of the columns that are relevant as a parameter.

Selecting one or more columns looks like this:

# Selecting a single column
w_scores = w_data.select("WScore")

# Selecting multiple columns
teamid_w_scores = w_data.select("WTeamID", "WScore")

Here’s the output for w_scores.show(5):

+------+
|Season|
+------+
|  2010|
|  2010|
|  2010|
|  2010|
|  2010|
+------+
only showing top 5 rows

The columns can also be renamed when being selected using the .alias() method:

winners = w_data.select(
    w_data.WTeamID.alias("TeamID"),
    w_data.WScore.alias("Score")
)

Grouping Data

Grouping allows us to carry out certain operations on the groups that exist within the data, and is usually combined with aggregation functions. We can use .groupBy() for this:

# Grouping and aggregating
winners_average_scores = winners.groupBy("TeamID").avg("Score")

In this example, we are grouping by "TeamID", meaning we’re considering the groups of rows that have a distinct value for "TeamID". For each of those groups, we’re calculating the average of the "Score". This way, we get the average score for each team.

Here’s the output of winners_average_scores.show(5), showing the average score of each team:

+------+-----------------+
|TeamID|       avg(Score)|
+------+-----------------+
|  3125|             68.5|
|  3345|             74.2|
|  3346|79.66666666666667|
|  3376|73.58333333333333|
|  3107|             61.0|
+------+-----------------+

Joining Data

Joining two DataFrames can be done using the .join() method. Joining is essentially extending the DataFrame by adding the columns of one DataFrame to another.

# Joining on Season and TeamID
final_df = matches_df.join(stats_df, on=['Season', 'TeamID'], how='left')

In this example, both stats_df and matches_df were using Season and TeamID as unique identifiers for each row. Besides Season and TeamID, stats_df has other columns, such as statistics for each team during each season, whereas matches_df has information about the matches, such as date and location. This operation allows us to add those interesting statistics to the matches information!

Functions

There are several functions that PySpark provides that help us transform DataFrames. You can find the full list here.

Here’s an example of a simple function:

from pyspark.sql import functions as F

w_data = w_data.withColumn("HighScore", F.when(F.col("Score") > 80, "Yes").otherwise("No"))

In the code snippet above, a "HighScore" column is created when the score is higher than 80. For each row in the "Score" column (indicated by the .col() function), the value "Yes" is chosen for the "HighScore" column if the "Score" value is larger than 80, determined by the .when() function. .otherwise(), the value chosen is "No".

Feature Engineering in Action

Now that we have a basic understanding of PySpark and how it can be used, let’s go over how the regular season statistics features were created. These features were then used as inputs into our machine learning model to try to predict the outcome of the final tournament games.

The starting point was a DataFrame, regular_data, that contained match by match statistics for the regular seasons, which is the United States College Basketball Season that happens from November to March each year.

Each row in this DataFrame contained the season, the day the match was held, the ID of team 1, the ID of team 2, and other information such as the location of the match. Importantly, it also contained statistics for each team for that specific match, such as "T1_FGM", meaning the Field Goals Made (FGM) for team 1, or "T2_OR", meaning the Offensive Rebounds (OR) of team 2.

The first step was selecting which columns would be used. These were columns that strictly contained in-game statistics.

# Columns that we'll want to get statistics from
boxscore_cols = [
    'T1_FGM', 'T1_FGA', 'T1_FGM3', 'T1_FGA3', 'T1_OR', 'T1_DR', 'T1_Ast', 'T1_Stl', 'T1_PF', 
    'T2_FGM', 'T2_FGA', 'T2_FGM3', 'T2_FGA3', 'T2_OR', 'T2_DR', 'T2_Ast', 'T2_Stl', 'T2_PF'
]

If you’re interested, here’s what each statistic’s code means:

  • FGM: Field Goals Made
  • FGA: Field Goals Attempted
  • FGM3: Field Goals Made from the 3-point-line
  • FGA3: Field Goals Attempted for 3-point-line goals
  • OR: Offensive Rebounds. A rebound is when the ball bounces off the board after a missed shot attempt. If the team that attempted the shot regains possession of the ball, it's called an "Offensive" rebound. Otherwise, it's called a "Defensive" rebound.
  • DR: Defensive Rebounds
  • Ast: Assist, a pass that led directly to a goal
  • Stl: Steal, when the possession of the ball is stolen
  • PF: Personal Foul, when a player makes a foul

From there, a dictionary of aggregation expressions was created. Basically, for each column name in the previous list of columns, a function was stored that would calculate the mean of the column and rename it by adding the suffix "mean".

from pyspark.sql import functions as F
from pyspark.sql.functions import col  # select a column

agg_exprs = {col: F.mean(col).alias(col + 'mean') for col in boxscore_cols}

Then, the data was grouped by "Season" and "T1_TeamID", and the aggregation functions of the previously created dictionary were used as the argument for .agg().

season_statistics = regular_data.groupBy(["Season", "T1_TeamID"]).agg(*agg_exprs.values())

Note that the grouping was done by season and the ID of team 1 – this means that "T2_FGAmean", for example, will actually be the mean of the Field Goals Attempted made by the opponents of T1, not necessarily of a specific team. So, we actually need to rename the columns that are something like "T2_FGAmean" to something like "T1_opponent_FGAmean".

# Rename columns for T1
for col in boxscore_cols:
    if 'T1_' in col:
        season_statistics = season_statistics.withColumnRenamed(col + 'mean', 'T1_' + col[3:] + 'mean')
    else:
        season_statistics = season_statistics.withColumnRenamed(col + 'mean', 'T1_opponent_' + col[3:] + 'mean')

At this point, it’s important to mention that the regular_data DataFrame actually has two rows per each match that occurred. This is so that both teams can be "T1" and "T2", for each match. This little "trick" is what makes these statistics useful.

Note that we "only" have the statistics for "T1". We "need" the statistics for "T2" as well – "need" in quotations because there are no new statistics being calculated. We just need the same data, but with the columns having different names, so that for a match with "T1" and "T2", we have statistics for both T1 and T2. So, we created a mirror DataFrame, where, instead of "T1…mean" and "T1opponent…mean", we have "T2…mean" and "T2opponent…mean". This is important because, later on, when we’re joining these regular season statistics to tournament matches, we’ll be able to have statistics for both team 1 and team 2.

season_statistics_T2 = season_statistics.select(
    *[F.col(col).alias(col.replace('T1_opponent_', 'T2_opponent_').replace('T1_', 'T2_')) if col not in ['Season'] else F.col(col) for col in season_statistics.columns]
)

Now, there are two DataFrames, with season statistics for "both" T1 and T2. Since the final DataFrame will contain the "Season", the "T1_TeamID" and the "T2_TeamID", we can join these newly created features with a join!

tourney_df = tourney_df.join(season_statistics, on=['Season', 'T1_TeamID'], how='left')
tourney_df = tourney_df.join(season_statistics_T2, on=['Season', 'T2_TeamID'], how='left')

Elo Ratings

First created by Arpad Elo, Elo is a rating system for zero-sum games (games where one player wins and the other loses), like basketball. With the Elo rating system, each team has an Elo rating, a value that generally conveys the team’s quality. At first, every team has the same Elo, and whenever they win, their Elo increases, and when they lose, their Elo decreases. A key characteristic of this system is that this value increases more with a win against a strong opponent than with a win against a weak opponent. Thus, it can be a very useful feature to have!

We wanted to capture the Elo rating of a team at the end of the regular season, and use that as a feature for the tournament. To do this, we calculated the Elo for each team on a per-match basis. To calculate Elo for this feature, we found it more straightforward to use Pandas.

Central to Elo is calculating the expected score for each team. It can be described in code like so:

# Function to calculate expected score
def expected_score(ra, rb):
    # ra = rating (Elo) team A
    # rb = rating (Elo) team B
    # Elo function
    return 1 / (1 + 10 ** ((rb - ra) / 400))

Considering a team A and a team B, this function computes the expected score of team A against team B.
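
For example, a 1700-rated team facing a 1500-rated opponent gets expected_score(1700, 1500) = 1 / (1 + 10^(-200/400)) ≈ 0.76, i.e. roughly a 76% expected score.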

For each match, we would update the teams’ Elos. Note that the location of the match also played a part – winning at home was considered less impressive than winning away.

# Function to update Elo ratings, keeping T1 and T2 terminology
def update_elo(t1_elo, t2_elo, location, T1_Score, T2_Score):
    expected_t1 = expected_score(t1_elo, t2_elo)
    expected_t2 = expected_score(t2_elo, t1_elo)

    actual_t1 = 1 if T1_Score > T2_Score else 0
    actual_t2 = 1 - actual_t1

    # Determine K based on game location
    # The larger the K, the bigger the impact
    # team1 winning at home (location=1) less impressive than winning away (location = -1)
    if actual_t1 == 1:  # team1 won
        if location == 1:
            k = 20
        elif location == 0:
            k = 30
        else:  # location = -1
            k = 40
    else:  # team2 won
        if location == 1:
            k = 40
        elif location == 0:
            k = 30
        else:  # location = -1
            k = 20

    new_t1_elo = t1_elo + k * (actual_t1 - expected_t1)
    new_t2_elo = t2_elo + k * (actual_t2 - expected_t2)

    return new_t1_elo, new_t2_elo
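
As a quick sanity check with made-up numbers: two evenly rated teams that both start at 1500 and play on a neutral court (location = 0, so k = 30) each shift by 15 points.

# Evenly rated teams, neutral court, team 1 wins 80-70
new_t1, new_t2 = update_elo(1500, 1500, location=0, T1_Score=80, T2_Score=70)
print(new_t1, new_t2)  # 1515.0 1485.0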

To apply the Elo rating system, we iterated through each season’s matches, initializing teams with a base rating and updating their ratings match by match. The final Elo available for each team in each season will, hopefully, be a good descriptor of the team’s quality.

import pandas as pd  # Pandas is used for the row-by-row Elo iteration
# Note: spark below is the SparkSession already available in the Fabric notebook

def calculate_elo_through_seasons(regular_data):

    # For this feature, using Pandas
    regular_data = regular_data.toPandas()

    # Set value of initial elo
    initial_elo = 1500

    # List to collect each team's final Elo rating per season
    final_elo_list = []

    for season in sorted(regular_data['Season'].unique()):
        print(f"Processing Season: {season}")
        # Initialize elo ratings dictionary
        elo_ratings = {}

        # Get the teams that played in the season
        season_teams = set(regular_data[regular_data['Season'] == season]['T1_TeamID']).union(set(regular_data[regular_data['Season'] == season]['T2_TeamID']))

        # Initialize season teams' Elo ratings
        for team in season_teams:
            if (season, team) not in elo_ratings:
                elo_ratings[(season, team)] = initial_elo

        # Update Elo ratings per game
        season_games = regular_data[regular_data['Season'] == season]
        for _, row in season_games.iterrows():
            t1_elo = elo_ratings[(season, row['T1_TeamID'])]
            t2_elo = elo_ratings[(season, row['T2_TeamID'])]

            new_t1_elo, new_t2_elo = update_elo(t1_elo, t2_elo, row['location'], row['T1_Score'], row['T2_Score'])

            # Overwrite so that only each team's most recent rating is kept
            elo_ratings[(season, row['T1_TeamID'])] = new_t1_elo
            elo_ratings[(season, row['T2_TeamID'])] = new_t2_elo

        # Collect final Elo ratings for the season
        for team in season_teams:
            final_elo_list.append({'Season': season, 'TeamID': team, 'Elo': elo_ratings[(season, team)]})

    # Convert list to DataFrame
    final_elo_df = pd.DataFrame(final_elo_list)

    # Separate DataFrames for T1 and T2
    final_elo_t1_df = final_elo_df.copy().rename(columns={'TeamID': 'T1_TeamID', 'Elo': 'T1_Elo'})
    final_elo_t2_df = final_elo_df.copy().rename(columns={'TeamID': 'T2_TeamID', 'Elo': 'T2_Elo'})

    # Convert the pandas DataFrames back to Spark DataFrames
    final_elo_t1_df = spark.createDataFrame(final_elo_t1_df)
    final_elo_t2_df = spark.createDataFrame(final_elo_t2_df)

    return final_elo_t1_df, final_elo_t2_df

Ideally, we wouldn't have to iterate through every match, one by one, to determine each team's final Elo for the season. However, we couldn't come up with a better approach. Do you have any ideas? If so, let us know!

Value Added

The Feature Engineering steps demonstrated show how we can transform raw data – regular season statistics – into valuable information with predictive power. It is reasonable to assume that a team’s performance during the regular season is indicative of its potential performance in the final tournaments. By calculating the mean of observed match-by-match statistics for both the teams and their opponents, along with each team’s Elo rating in their final match, we were able to create a dataset suitable for modelling. Then, models were trained to predict the outcome of tournament matches using these features, among others developed in a similar way. With these models, we only need the two team IDs to look up the mean of their regular season statistics and their Elos to feed into the model and predict a score!
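
As a rough sketch of what that lookup could be, assuming the Elo DataFrames returned by calculate_elo_through_seasons are named final_elo_t1_df and final_elo_t2_df (the season and team IDs below are placeholders):

# Hypothetical matchup: assemble a single feature row via joins
match = spark.createDataFrame([(2024, 1101, 1102)], ['Season', 'T1_TeamID', 'T2_TeamID'])

match = (match
    .join(season_statistics, on=['Season', 'T1_TeamID'], how='left')
    .join(season_statistics_T2, on=['Season', 'T2_TeamID'], how='left')
    .join(final_elo_t1_df, on=['Season', 'T1_TeamID'], how='left')
    .join(final_elo_t2_df, on=['Season', 'T2_TeamID'], how='left'))
# match now holds both teams' season statistics and Elos, ready for the model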

Conclusion

In this post, we looked at some of the theory behind Spark and PySpark, how that can be applied, and a concrete practical example. We explored how feature engineering can be done in the case of sports data, creating regular season statistics to use as features for final tournament games. Hopefully you’ve found this interesting and helpful – happy feature engineering!

The full source code for this post and others in the series can be found here.


Originally published at https://nobledynamic.com on April 8, 2024.

References

[1] Jeff Sonas, Ryan Holbrook, Addison Howard, Anju Kandru. (2024). March Machine Learning Mania 2024. Kaggle. https://kaggle.com/competitions/march-machine-learning-mania-2024

The post Feature Engineering with Microsoft Fabric and PySpark appeared first on Towards Data Science.

Fabric Madness https://towardsdatascience.com/fabric-madness-96b84dc5f241/ Mon, 01 Apr 2024 19:01:29 +0000 https://towardsdatascience.com/fabric-madness-96b84dc5f241/ Predicting basketball games with Microsoft Fabric

Image by author and ChatGPT. "Design an illustration, focusing on a basketball player in action, the design integrates sports and data analytics themes in a graphic novel style" prompt. ChatGPT, 4, OpenAI, 28 March. 2024. https://chat.openai.com.

A Huge thanks to Martim Chaves who co-authored this post and developed the example scripts.

At the time of writing, it’s Basketball season in the United States, and there is a lot of excitement around the men’s and women’s college basketball tournaments. The format is single elimination, so over the course of several rounds, teams are eliminated, till eventually we get a champion. This tournament is not only a showcase of upcoming basketball talent, but, more importantly, a fertile ground for data enthusiasts like us to analyse trends and predict outcomes.

One of the great things about sports is that there is lots of data available, and we at Noble Dynamic wanted to take a crack at it πŸ€“.

In this series of posts titled Fabric Madness, we’re going to be diving deep into some of the most interesting features of Microsoft Fabric, for an end-to-end demonstration of how to train and use a machine learning model.

In this first blog post, we’ll be going over:

  • A first look at the data using Data Wrangler
  • Exploratory Data Analysis (EDA) and Feature Engineering
  • Tracking the performance of different Machine Learning (ML) Models using Experiments
  • Selecting the best performing model using the ML Model functionality

The Data

The data used was obtained from the ongoing Kaggle competition, the details of which can be found here; it is licensed under CC BY 4.0 [1].

Among all of the interesting data available, our focus for this case study was on the match-by-match statistics. This data was available for both the regular seasons and the tournaments, going all the way back to 2003. For each match, besides the date, the teams that were playing, and their scores, other relevant features were made available, such as field goals made and personal fouls by each team.

Loading the Data

The first step was creating a Fabric Workspace. Workspaces in Fabric are one of the fundamental building blocks of the platform, and are used for grouping together related items and for collaboration.

After downloading all of the CSV files available, a Lakehouse was created. A Lakehouse, in simple terms, is a mix between a Database of Tables (structured) and a Data Lake of Files (unstructured). The big benefit of a Lakehouse is that data is available for every tool in the workspace.

Uploading the files was done using the UI:

Fig. 1 – Uploading Files. Image by Martim Chaves

Now that we have a Lakehouse with the CSV files, it was time to dig in, and get a first look at the data. To do that, we created a Notebook, using the UI, and attached the previously created Lakehouse.

Fig. 2 – Adding Lakehouse to Notebook. Image by Martim Chaves

First Look

After some quick data wrangling, it was found that, as expected with data from Kaggle, the quality was great, with no duplicates or missing values.

For this task we used Data Wrangler, a tool built into Microsoft Fabric notebooks. Once an initial DataFrame has been created (Spark or Pandas supported), Data Wrangler becomes available to use and can attach to any DataFrame in the Notebook. What’s great is that it allows for easy analysis of loaded DataFrames.

In a Notebook, after reading the files into PySpark DataFrames, the "Transform DataFrame in Data Wrangler" option was selected from the "Data" section, and from there the several DataFrames were explored. A specific DataFrame can be chosen for a careful inspection.

Fig. 3 – Opening Data Wrangler. Image by Martim Chaves
Fig. 4 – Analysing the DataFrame with Data Wrangler. Image by Martim Chaves

In the centre, we have access to all of the rows of the loaded DataFrame. On the right, a Summary tab shows that there are indeed no duplicates or missing values. Clicking on a particular column shows summary statistics for that column.

On the left, in the Operations tab, there are several pre-built operations that can be applied to the DataFrame. These cover many of the most common data wrangling tasks, such as filtering, sorting, and grouping, and are a quick way to generate boilerplate code for those tasks.

In our case, the data was already in good shape, so we moved on to the EDA stage.

Exploratory Data Analysis

A short Exploratory Data Analysis (EDA) followed, with the goal of getting a general idea of the data. Charts were plotted to get a sense of the distribution of the data and if there were any statistics that could be problematic due to, for example, very long tails.

Fig. 5 – Histogram of field goals made. Image by Martim Chaves

At a quick glance, it was found that the data available from the regular season had normal distributions, suitable to use in the creation of features. Knowing the importance that good features have in creating solid predictive systems, the next sensible step was to carry out feature engineering to extract relevant information from the data.

The goal was to create a dataset where each sample’s input would be a set of features for a game, containing information of both teams. For example, both teams average field goals made for the regular season. The target for each sample, the desired output, would be 1 if Team 1 won the game, or 0 if Team 2 won the game (which was done by subtracting the scores). Here’s a representation of the dataset:
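
(The column names and values below are illustrative placeholders for the kind of features described above, not actual competition data.)

Season  T1_TeamID  T2_TeamID  T1_FGMmean  T2_FGMmean  ...  target
2024    1101       1102       26.4        24.1        ...  1
2024    1103       1104       23.8        27.5        ...  0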

Feature Engineering

The first feature that we decided to explore was win rate. Not only would it be an interesting feature to explore, but it would also provide a baseline score. This initial approach employed a simple rule: the team with the higher win rate would be predicted as the winner. This method provides a fundamental baseline against which the performance of more sophisticated predictive systems can be compared.

To evaluate the accuracy of our predictions across different models, we adopted the Brier score. The Brier score is the mean of the square of the difference between the predicted probability (p) and the actual outcome (o) for each sample, and can be described by the following formula:

Brier Score = (1/N) Σᵢ (pᵢ - oᵢ)²

The predicted probability will vary between 0 and 1, and the actual outcome will either be 0 or 1. Thus, the Brier score will always be between 0 and 1. As we want the predicted probability to be as close to the actual outcome as possible, the lower the Brier score, the better, with 0 being the perfect score, and 1 the worst.
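
As a small, self-contained example of the metric itself (the numbers here are arbitrary):

import numpy as np
from sklearn.metrics import brier_score_loss

y_true = np.array([1, 0, 1, 1])          # actual outcomes
y_prob = np.array([0.9, 0.2, 0.6, 0.4])  # predicted probabilities

print(np.mean((y_prob - y_true) ** 2))  # 0.1425
print(brier_score_loss(y_true, y_prob)) # 0.1425, same result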

For the baseline, the previously mentioned dataset structure was followed. Each sample of the dataset was a match, containing the win rates for the regular season for Team 1 and Team 2. The actual outcome was considered 1 if Team 1 won, or 0 if Team 2 won. To simulate a probability, the prediction was a normalised difference between T1’s win rate and T2’s win rate. For the maximum value of the difference between the win rates, the prediction would be 1. For the minimum value, the prediction would be 0.

After calculating the win rate, and then using it to predict the outcomes, we got a Brier score of 0.23. Considering that guessing at random leads to a Brier score of 0.25, it’s clear that this feature alone is not very good 😬 .
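
A minimal sketch of that baseline, using hypothetical column names (T1_win_rate, T2_win_rate, target) and made-up values:

import pandas as pd

# Toy example rows; in practice this is the tournament dataset
df = pd.DataFrame({
    'T1_win_rate': [0.80, 0.55, 0.60],
    'T2_win_rate': [0.50, 0.70, 0.60],
    'target':      [1,    0,    1],
})

# Min-max normalise the win rate difference into a pseudo-probability
diff = df['T1_win_rate'] - df['T2_win_rate']
df['pred'] = (diff - diff.min()) / (diff.max() - diff.min())

baseline_brier = ((df['pred'] - df['target']) ** 2).mean()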

Starting with a simple baseline clearly highlighted that more complex patterns were at play. We went ahead and developed another 42 features, in preparation for utilising more complex algorithms, machine learning models, that might have a better chance.

It was then time to create machine learning models!

Models & Machine Learning Experiments

For the models, we opted for simple Neural Networks (NN). To determine which level of complexity would be best, we created three different NNs, with an increasing number of layers and hyper-parameters. Here’s an example of a small NN, one that was used:

Fig. 6 – Diagram of a Neural Network. Image by Martim Chaves using draw.io
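
In code, a small network along these lines can be put together with Keras' Sequential API. The snippet below is an illustrative sketch – the number of input features is a stand-in, not the exact figure used:

import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

num_features = 42  # stand-in; use the actual number of input features

model = Sequential([
    tf.keras.Input(shape=(num_features,)),
    Dense(64, activation='relu'),    # hidden layer with 64 neurons (ReLU)
    Dense(1, activation='sigmoid'),  # single output neuron: probability that Team 1 wins
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])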

If you’re familiar with NNs, feel free to skip to the Experiments! If you’re unfamiliar with NNs think of them as a set of layers, where each layer acts as a filter for relevant information. Data passes through successive layers, in a step-by-step fashion, where each layer has inputs and outputs. Data moves through the network in one direction, from the first layer (the model’s input) to the last layer (the model’s output), without looping back, hence the Sequential function.

Each layer is made up of several neurons, that can be described as nodes. The model’s input, the first layer, will contain as many neurons as there are features available, and each neuron will hold the value of a feature. The model’s output, the last layer, in binary problems such as the one we’re tackling, will only have 1 neuron. The value held by this neuron should be 1 if the model is processing a match where Team 1 won, or 0 if Team 2 won. The intermediate layers have an ad hoc number of neurons. In the example in the code snippet, 64 neurons were chosen.

In a Dense layer, as is the case here, each neuron in the layer is connected to every neuron in the preceding layer. Fundamentally, each neuron processes the information provided by the neurons from the previous layer.

The processing of the previous layer’s information requires an activation function. There are many types of activation functions – ReLU, standing for Rectified Linear Unit, is one of them. It allows only positive values to pass and sets negative values to zero, making it effective for many types of data.

Note that the final activation function is a sigmoid function – this converts the output to a number between 0 and 1. This is crucial for binary classification tasks, where you need the model to express its output as a probability.

Besides these small models, medium and large models were created, with an increasing number of layers and parameters. The size of a model affects its ability to capture complex patterns in the data, with larger models generally being more capable in this regard. However, larger models also require more data to learn effectively – if there’s not enough data, issues may occur. Finding the right size is sometimes only possible through experimentation, by training different models and comparing their performance to identify the most effective configuration.

The next step was running the experiments βš—!

What is an Experiment?

In Fabric, an Experiment can be seen as a group of related runs, where a run is an execution of a code snippet. In this context, a run is a training of a model. For each run, a model will be trained with a different set of hyper-parameters. The set of hyper-parameters, along with the final model score, is logged, and this information is available for each run. Once enough runs have been completed, the final model scores can be compared, so that the best version of each model can be selected.

Creating an Experiment in Fabric can be done via the UI or directly from a Notebook. The Experiment is essentially a wrapper for MLFlow Experiments. One of the great things about using Experiments in Fabric is that the results can be shared with others. This makes it possible to collaborate and allow others to participate in experiments, either writing code to run experiments, or analysing the results.

Creating an Experiment

To create an Experiment using the UI, simply select Experiment from the + New button and choose a name.

Fig. 7 – Creating an Experiment using the UI. Image by Martim Chaves
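
Alternatively, an Experiment can be created and written to directly from a Notebook with a few lines of MLFlow. The snippet below is a rough sketch – the experiment name, parameter, and metric values are illustrative:

import mlflow

mlflow.set_experiment("fabric-madness-nn")  # hypothetical name; created if it doesn't already exist

with mlflow.start_run(run_name="small_nn"):
    mlflow.log_param("hidden_units", 64)
    # ... build and train the model here ...
    brier = 0.25  # placeholder for the model's Brier score on the validation set
    mlflow.log_metric("brier_score", brier)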

When training each of the models, the hyper-parameters are logged with the experiment, as well as the final score. Once completed we can see the results in the UI, and compare the different runs to see which model performed best.

Fig. 8 – Comparing different runs. Image by Martim Chaves

After that we can select the best model and use it to make the final prediction. When comparing the three models, the best Brier score was 0.20, a slight improvement πŸŽ‰ !

Conclusion

After loading and analysing data from this year’s US major college basketball tournament, and creating a dataset with relevant features, we were able to predict the outcome of the games using a simple Neural Network. Experiments were used to compare the performance of different models. Finally, the best performing model was selected to carry out the final prediction.

In the next post we will go into detail on how we created the features using PySpark. Stay tuned for more! 👋

The full source code for this post can be found here.


Originally published at https://nobledynamic.com on April 1, 2024.

References

[1] Jeff Sonas, Ryan Holbrook, Addison Howard, Anju Kandru. (2024). March Machine Learning Mania 2024. Kaggle. https://kaggle.com/competitions/march-machine-learning-mania-2024

The post Fabric Madness appeared first on Towards Data Science.
