Moving files with Azure Data Factory

The storage and backup company Backblaze publishes some neat data on hard drive reliability. My goal today is to create an Azure Data Factory that reads these raw files and loads them into Azure SQL.

Backblaze data

Data Schema

Backblaze provides a schema for the data; I edited the data types slightly to fit Azure SQL and Data Factory better. Each of the data files provided for 2013 follows this format.

CREATE TABLE drive_stats (
    date DATE NOT NULL,
    serial_number NVARCHAR(MAX) NOT NULL,
    model NVARCHAR(MAX) NOT NULL,
    capacity_bytes BIGINT NOT NULL,
    failure INTEGER NOT NULL,
    smart_1_normalized INTEGER,
    smart_1_raw INTEGER,
    smart_2_normalized INTEGER,
    smart_2_raw INTEGER,
    smart_3_normalized INTEGER,
    smart_3_raw INTEGER,
    smart_4_normalized INTEGER,
    smart_4_raw INTEGER,
    smart_5_normalized INTEGER,
    smart_5_raw INTEGER,
    smart_7_normalized INTEGER,
    smart_7_raw INTEGER,
    smart_8_normalized INTEGER,
    smart_8_raw INTEGER,
    smart_9_normalized INTEGER,
    smart_9_raw INTEGER,
    smart_10_normalized INTEGER,
    smart_10_raw INTEGER,
    smart_11_normalized INTEGER,
    smart_11_raw INTEGER,
    smart_12_normalized INTEGER,
    smart_12_raw INTEGER,
    smart_13_normalized INTEGER,
    smart_13_raw INTEGER,
    smart_15_normalized INTEGER,
    smart_15_raw INTEGER,
    smart_183_normalized INTEGER,
    smart_183_raw INTEGER,
    smart_184_normalized INTEGER,
    smart_184_raw INTEGER,
    smart_187_normalized INTEGER,
    smart_187_raw INTEGER,
    smart_188_normalized INTEGER,
    smart_188_raw INTEGER,
    smart_189_normalized INTEGER,
    smart_189_raw INTEGER,
    smart_190_normalized INTEGER,
    smart_190_raw INTEGER,
    smart_191_normalized INTEGER,
    smart_191_raw INTEGER,
    smart_192_normalized INTEGER,
    smart_192_raw INTEGER,
    smart_193_normalized INTEGER,
    smart_193_raw INTEGER,
    smart_194_normalized INTEGER,
    smart_194_raw INTEGER,
    smart_195_normalized INTEGER,
    smart_195_raw INTEGER,
    smart_196_normalized INTEGER,
    smart_196_raw INTEGER,
    smart_197_normalized INTEGER,
    smart_197_raw INTEGER,
    smart_198_normalized INTEGER,
    smart_198_raw INTEGER,
    smart_199_normalized INTEGER,
    smart_199_raw INTEGER,
    smart_200_normalized INTEGER,
    smart_200_raw INTEGER,
    smart_201_normalized INTEGER,
    smart_201_raw INTEGER,
    smart_223_normalized INTEGER,
    smart_223_raw INTEGER,
    smart_225_normalized INTEGER,
    smart_225_raw INTEGER,
    smart_240_normalized INTEGER,
    smart_240_raw INTEGER,
    smart_241_normalized INTEGER,
    smart_241_raw INTEGER,
    smart_242_normalized INTEGER,
    smart_242_raw INTEGER,
    smart_250_normalized INTEGER,
    smart_250_raw INTEGER,
    smart_251_normalized INTEGER,
    smart_251_raw INTEGER,
    smart_252_normalized INTEGER,
    smart_252_raw INTEGER,
    smart_254_normalized INTEGER,
    smart_254_raw INTEGER,
    smart_255_normalized INTEGER,
    smart_255_raw INTEGER
    );

Azure Resources

I have an Azure SQL Database, an Azure Data Factory, and an Azure Storage Account set up in one resource group for today’s purposes.

Azure Resources

File Storage

I am uploading the files to blobs in the Azure storage account. Here are the first two delimited files uploaded.

File Storage

Data Factory V2

The new data factory doesn’t have anything yet and just shows the quickstart.

Factory Creation

Adding Git Storage

The first thing I’m going to do is click the pencil icon and then the dropdown here to select a new Git repository.

Git Selection


I have an empty Azure DevOps repo ready.

Empty Repo


Azure Data Factory can point to this repo to store its settings and definitions, which allows for Git-based collaboration.

Empty Repo


There wouldn’t be any data yet in an empty project, but this is what it might look like later.

Git Data


Back in Data Factory, you’ll see the option to switch branches, which is handy for working on feature branches.

Feature Branches

Datasets

I’m going to create two new datasets, one for Azure SQL and one for Azure Blob Storage.

Blob

Click the plus sign on Factory Resources and select Dataset. A side window will appear where you can search through the connectors and pick Azure Blob Storage.

New Blob


Pick the delimited text format.

Delimited


Next, in the blob properties, indicate that the first row contains the header, and then create a new linked service.

Blob Properties


The new linked service window will look like this:

Linked Service


Allow the dataset to import the schema from the connection, which uses one of the uploaded data files to infer the columns.

Blob Schema


SQL

The Azure SQL dataset setup is very similar to the Blob Storage one.

SQL Connection


The schema comes from the SQL table defined far above.

SQL Schema

Dataflow

To move data from blobs to SQL, I’ve created a dataflow that uses Blob Storage as a source and SQL as a sink.

Dataflow


Source:

DataflowSource


Sink:

DataflowSink


Projection of the source data is necessary so that the source’s data types will match the sink’s data types.

Projection
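
If the same typing were done in SQL instead of in the dataflow projection, it would look roughly like the sketch below. This is only an illustration of what the projection accomplishes; raw_drive_stats is a hypothetical staging object holding the untyped text columns from the delimited files.

-- Sketch only: explicit casts standing in for the dataflow projection.
-- raw_drive_stats is hypothetical; the real conversion happens inside the dataflow.
SELECT
    TRY_CAST([date] AS DATE)             AS [date],
    serial_number,
    model,
    TRY_CAST(capacity_bytes AS BIGINT)   AS capacity_bytes,
    TRY_CAST(failure AS INT)             AS failure,
    TRY_CAST(smart_1_normalized AS INT)  AS smart_1_normalized
    -- ...and so on for the remaining smart_* columns
FROM raw_drive_stats;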


Once data types and names match, the source and the sink can be automapped.

Automap


The sink has settings that allow the table to be truncated and new rows inserted. I’m rebuilding the entire table’s data from scratch.

Sink settings
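
Conceptually, with that option enabled, each run clears the table before the dataflow writes the new rows. In plain SQL that step is roughly equivalent to:

-- Rough SQL equivalent of the sink's truncate option: empty the target
-- table so the dataflow's inserts rebuild it from scratch.
TRUNCATE TABLE drive_stats;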

Running

Now that I have a dataflow, I’ve created a new pipeline and added a single dataflow activity to it.

Pipeline


Click Debug and let’s run this!

Debug


Debug status will be at the bottom of the screen.

Debug status


Upon success, look at the details page found here:

Details Debug


I can see that a total of 42,390 rows were found in the source.

Details Source
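
A quick sanity check, assuming the drive_stats table from the schema above, is to confirm the sink ended up with the same number of rows:

-- After the debug run, the row count in the sink should match the
-- 42,390 rows the dataflow reported reading from the source.
SELECT COUNT(*) AS row_count
FROM drive_stats;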

Summary

I have a working pipeline that moves data from delimited files to SQL, and below is a sample of a few rows from SQL. To make this automatic, adding a trigger that executes on blob creation would be a good next step. To continue on with continuous integration, I’d follow this document for a comprehensive guide. I’ve also created a second blog post that does it simply with PowerShell.

Sample Data
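
A query along these lines, with the column list trimmed for readability, pulls a sample like the one shown above:

-- Spot-check a handful of the loaded rows.
SELECT TOP (5)
    [date],
    serial_number,
    model,
    capacity_bytes,
    failure
FROM drive_stats
ORDER BY [date];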