This library provides utilities to work with Protobuf objects in SparkSQL. It provides a way to read a Parquet file written by SparkSQL back as an RDD of compatible protobuf objects, and it can also convert an RDD of protobuf objects into a DataFrame.
For sbt 0.13.6+
resolvers += Resolver.jcenterRepo

libraryDependencies ++= Seq(
  "com.github.saurfang" %% "sparksql-protobuf" % "0.1.3",
  "org.apache.parquet" % "parquet-protobuf" % "1.8.3"
)
SparkSQL is very powerful and easy to use. However, it has a few limitations: because the schema is only detected at runtime, developers are a lot less confident that they will get things right the first time. Static typing helps a lot! This is where protobuf comes in:
- Protobuf defines nested data structure easily
- It doesn't constrain you to the 22-field limit of case classes (no longer true once we upgrade to Scala 2.11+)
- It is language agnostic and generates code that gives you native objects
Hence you get all the benefits of type checking and code completion, unlike operating on untyped Row objects directly.
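For concreteness, the examples below assume a hypothetical person.proto along these lines; the Person class they reference is the Java class protoc generates from it:

syntax = "proto2";

option java_package = "com.example";
option java_outer_classname = "PersonProtos";

message Person {
  required string name = 1;
  optional int32 age = 2;

  // nested messages come naturally, with no 22-field ceiling
  message Address {
    optional string city = 1;
    optional string zip = 2;
  }

  repeated Address addresses = 3;
}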
Read Parquet file as an RDD of protobuf objects
val personsPB = new ProtoParquetRDD(sc, "persons.parquet", classOf[Person])
where we need a SparkContext, the Parquet file path, and the protobuf class.
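Since the result is a plain RDD of generated protobuf objects, the usual typed getters are available right away. A minimal sketch, assuming the hypothetical Person message above:

// names is an Array[String]; getName comes from the generated Person class
val names = personsPB.map(_.getName).collect()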
This converts the existing workflow:
- Ingest raw data as DataFrame with nested data structure
- Create awkward runtime type checking udfs
- Transform the raw DataFrame using the above udfs into a tabular DataFrame for data analytics

into:

- Ingest raw data as DataFrame with nested data structure and persist as a Parquet file
- Read the Parquet file back as an RDD of protobuf objects
- Perform any data transformation and extraction by working with compile-time typesafe Protobuf getters
- Create a DataFrame out of the above transformation and perform additional downstream data analytics on the tabular DataFrame
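Putting it together, here is a minimal sketch of the new workflow, assuming the hypothetical Person message above and an existing nested DataFrame rawDF whose schema is compatible with it:

// 1. persist the nested DataFrame as a Parquet file
rawDF.write.parquet("persons.parquet")

// 2. read the Parquet file back as an RDD of protobuf objects
val persons = new ProtoParquetRDD(sc, "persons.parquet", classOf[Person])

// 3. transform and extract with compile-time typesafe getters
val adults = persons.filter(_.getAge >= 18)

// 4. turn the result back into a tabular DataFrame for analytics
import com.github.saurfang.parquet.proto.spark.sql._
val adultsDF = sqlContext.createDataFrame(adults)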
Infer SparkSQL Schema from Protobuf Definition
val personSchema = ProtoReflection.schemaFor[Person].dataType.asInstanceOf[StructType]
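StructType offers printTreeString for a quick sanity check of the inferred schema; for the hypothetical Person message above it would print something like:

personSchema.printTreeString()
// root
//  |-- name: string ...
//  |-- age: integer ...
//  |-- addresses: array ...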
Convert RDD of Protobuf objects into DataFrame

import com.github.saurfang.parquet.proto.spark.sql._
val personsDF = sqlContext.createDataFrame(protoPersons)
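From here the DataFrame behaves like any other; for example, registering it as a temp table and querying it with SQL (Spark 1.x style, and again assuming the hypothetical Person fields):

personsDF.registerTempTable("persons")
sqlContext.sql("SELECT name, age FROM persons WHERE age >= 18").show()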
For more information, please see test cases.
Under the hood
ProtoMessageConverter has been improved to read from the LIST specification according to the latest parquet documentation. This implementation should be backwards compatible and is able to read repeated fields generated by writers like SparkSQL.

ProtoMessageParquetInputFormat helps the above process by correctly returning the built protobuf object as the value.

ProtoParquetRDD abstracts away the Hadoop input format and returns an RDD of your protobuf objects from parquet files directly.

ProtoReflection infers a SparkSQL schema from any Protobuf message class.

ProtoRDDConversions converts Protobuf objects into SparkSQL rows.
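For illustration, the schema inference and row conversion compose into roughly what createDataFrame does for an RDD of protobuf objects. A hand-rolled sketch, where the messageToRow entry point is an assumption about the internal API rather than a documented signature:

// infer the schema, convert each message to a Row, then assemble a DataFrame
// (messageToRow is an assumed name for ProtoRDDConversions' converter)
val schema = ProtoReflection.schemaFor[Person].dataType.asInstanceOf[StructType]
val rowRDD = protoPersons.map(ProtoRDDConversions.messageToRow)
val df = sqlContext.createDataFrame(rowRDD, schema)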