Adding StructType columns to Spark DataFrames

StructType objects define the schema of Spark DataFrames. StructType objects contain a list of StructField objects that define the name, type, and nullable flag for each column in a DataFrame.

Let’s start with an overview of StructType objects and then demonstrate how StructType columns can be added to DataFrame schemas (essentially creating a nested schema).

StructType columns are a great way to eliminate order dependencies from Spark code.

StructType overview

The StructType case class can be used to define a DataFrame schema as follows.

The DataFrame schema method returns a StructType object.
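Here is a minimal sketch of defining a DataFrame schema with StructType and reading it back; the column names (city, population) and the sample data are illustrative assumptions, not from the original example.

```scala
import org.apache.spark.sql.{Row, SparkSession}
import org.apache.spark.sql.types._

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("StructTypeDemo")
  .getOrCreate()

// A StructType is a list of StructFields: name, type, nullable flag
// (column names and types here are illustrative)
val schema = StructType(
  List(
    StructField("city", StringType, true),
    StructField("population", IntegerType, true)
  )
)

val data = Seq(Row("Bogota", 8081000), Row("Medellin", 2508000))
val df = spark.createDataFrame(spark.sparkContext.parallelize(data), schema)

// df.schema hands back the StructType we defined above
println(df.schema)
```

Defining the schema explicitly like this, rather than letting Spark infer it, also makes the nullable flags and types deliberate choices instead of accidents of the data.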

Let’s look at another example to see how StructType columns can be appended to DataFrames.

Appending StructType columns

Let’s use the struct() function to append a StructType column to a DataFrame.
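A sketch of appending a StructType column with the struct() function from org.apache.spark.sql.functions; the city and population columns are illustrative assumptions.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, struct}

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("StructTypeDemo")
  .getOrCreate()
import spark.implicits._

val df = Seq(("Bogota", 8081000), ("Medellin", 2508000)).toDF("city", "population")

// struct() bundles existing columns into a single StructType column
val withStruct = df.withColumn("city_info", struct(col("city"), col("population")))
```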

Let’s take a look at the schema.

The appended column has a StructType type, so this DataFrame has a nested schema.

It’s easier to view the schema with the printSchema method.
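Continuing the illustrative example above, printSchema renders the nesting as a tree:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, struct}

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("StructTypeDemo")
  .getOrCreate()
import spark.implicits._

val df = Seq(("Bogota", 8081000), ("Medellin", 2508000)).toDF("city", "population")
val withStruct = df.withColumn("city_info", struct(col("city"), col("population")))

// Prints the schema as an indented tree, roughly:
// root
//  |-- city: string (nullable = true)
//  |-- population: integer (nullable = true)
//  |-- city_info: struct (nullable = false)
//  |    |-- city: string (nullable = true)
//  |    |-- population: integer (nullable = true)
withStruct.printSchema()
```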

We can flatten the DataFrame as follows.
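One way to flatten is to select the nested fields back up to the top level with dotted column paths (again with the illustrative city_info struct):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, struct}

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("StructTypeDemo")
  .getOrCreate()
import spark.implicits._

val df = Seq(("Bogota", 8081000), ("Medellin", 2508000)).toDF("city", "population")
val nested = df
  .withColumn("city_info", struct(col("city"), col("population")))
  .drop("city", "population")

// Dotted paths reach inside the struct; the selected columns
// keep only the leaf names (city, population)
val flattened = nested.select(col("city_info.city"), col("city_info.population"))
```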

Using StructTypes to eliminate order dependencies

Let’s demonstrate some order dependent code and then use a StructType column to eliminate the order dependencies.

Let’s consider three custom transformations that each add a column to a DataFrame.

Notice that the first two transformations must be run before the third transformation can be run. The functions have an order dependency because they must be run in a certain order for the code to work.

Let’s build a DataFrame and execute the functions in the right order so the code will run.
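A sketch of what order-dependent transformations look like, using hypothetical withFirstName / withLastName / withFullName functions (the original post's names and columns aren't preserved here). withFullName reads columns created by the other two, so it must run last.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions.{col, concat_ws, lit}

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("OrderDemo")
  .getOrCreate()
import spark.implicits._

// Hypothetical transformations; the third depends on the first two
def withFirstName(df: DataFrame): DataFrame =
  df.withColumn("first_name", lit("Maria"))

def withLastName(df: DataFrame): DataFrame =
  df.withColumn("last_name", lit("Silva"))

// Fails with an unresolved-column error if run before the other two
def withFullName(df: DataFrame): DataFrame =
  df.withColumn("full_name", concat_ws(" ", col("first_name"), col("last_name")))

val df = Seq("a").toDF("id")

// The transformations only work in this exact order
val result = df
  .transform(withFirstName)
  .transform(withLastName)
  .transform(withFullName)
```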

Let’s use the struct() function to append a StructType column to the DataFrame and remove the order dependencies from this code.
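One way this refactor can look (names remain hypothetical): a single transformation computes every field inside one struct() call, so there is no longer any cross-function ordering to get right.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions.{concat_ws, lit, struct}

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("OrderDemo")
  .getOrCreate()
import spark.implicits._

// All three fields are computed together inside one StructType column,
// so no separate transformation has to run "before" another
def withNameStruct(df: DataFrame): DataFrame = {
  val first = lit("Maria")
  val last = lit("Silva")
  df.withColumn(
    "name",
    struct(
      first.as("first_name"),
      last.as("last_name"),
      concat_ws(" ", first, last).as("full_name")
    )
  )
}

val result = Seq("a").toDF("id").transform(withNameStruct)
```

The fields live under one name column, and downstream code reads them with dotted paths like name.full_name.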

Order dependencies can be a big problem in large Spark codebases

If your code is organized as DataFrame transformations, order dependencies can become a big problem.

You might need to figure out how to call 20 functions in exactly the right order to get the desired result.

StructType columns are one way to eliminate order dependencies from your code. I’ll discuss other strategies in more detail in a future blog post!

Spark coder, live in Colombia / Brazil / US, love Scala / Python / Ruby, working on empowering Latinos and Latinas in tech