Azure Cosmos DB Connector for Apache Spark


azure-cosmosdb-spark is the official connector for Azure Cosmos DB and Apache Spark. The connector allows you to easily read from and write to Azure Cosmos DB via Apache Spark DataFrames in Python and Scala. It also allows you to easily create a lambda architecture for batch processing, stream processing, and a serving layer while being globally replicated and minimizing the latency involved in working with big data.

Table of Contents



Jump Start

Reading from Cosmos DB

Below are Python and Scala excerpts showing how to create a Spark DataFrame that reads from Cosmos DB.

# Read Configuration
readConfig = {
  "Endpoint" : "",
  "Masterkey" : "SPSVkSfA7f6vMgMvnYdzc1MaWb65v4VQNcI2Tp1WfSP2vtgmAwGXEPcxoYra5QBHHyjDGYuHKSkguHIz1vvmWQ==",
  "Database" : "DepartureDelays",
  "preferredRegions" : "Central US;East US2",
  "Collection" : "flights_pcoll",
  "SamplingRatio" : "1.0",
  "schema_samplesize" : "1000",
  "query_pagesize" : "2147483647",
  "query_custom" : "SELECT c.date, c.delay, c.distance, c.origin, c.destination FROM c WHERE c.origin = 'SEA'"
}

# Connect via azure-cosmosdb-spark to create Spark DataFrame
flights ="com.microsoft.azure.cosmosdb.spark").options(**readConfig).load()
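Because the connector takes all of these options as plain strings, a mistyped or missing key only surfaces when the load fails. A small, hypothetical helper (not part of the connector; the required key names are taken from the excerpts in this README) can sanity-check a config dict up front:

```python
# Hypothetical sanity check for a Cosmos DB connector config dict.
# The key names below come from the excerpts in this README; this
# helper is illustrative only and is not part of azure-cosmosdb-spark.
REQUIRED_KEYS = {"Endpoint", "Masterkey", "Database", "Collection"}

def validate_config(config):
    """Raise ValueError if any required connection key is missing or empty."""
    missing = [k for k in REQUIRED_KEYS if not config.get(k)]
    if missing:
        raise ValueError("Missing Cosmos DB config keys: %s" % sorted(missing))
    return config

# Aside: the "query_pagesize" value "2147483647" in the excerpt above is
# simply the largest 32-bit signed integer, i.e. "use the maximum page size".
assert 2147483647 == 2**31 - 1
```

Passing the validated dict straight into `.options(**readConfig)` then keeps the failure mode local to configuration rather than the remote call.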
Scala excerpt:

// Import Necessary Libraries
import com.microsoft.azure.cosmosdb.spark.schema._
import com.microsoft.azure.cosmosdb.spark._
import com.microsoft.azure.cosmosdb.spark.config.Config
// Configure connection to your collection
val readConfig = Config(Map(
  "Endpoint" -> "",
  "Masterkey" -> "SPSVkSfA7f6vMgMvnYdzc1MaWb65v4VQNcI2Tp1WfSP2vtgmAwGXEPcxoYra5QBHHyjDGYuHKSkguHIz1vvmWQ==",
  "Database" -> "DepartureDelays",
  "PreferredRegions" -> "Central US;East US2;",
  "Collection" -> "flights_pcoll",
  "SamplingRatio" -> "1.0",
  "query_custom" -> "SELECT c.date, c.delay, c.distance, c.origin, c.destination FROM c WHERE c.origin = 'SEA'"
))

// Connect via azure-cosmosdb-spark to create Spark DataFrame
val flights =

Writing to Cosmos DB

Below are Python and Scala excerpts showing how to write a Spark DataFrame to Cosmos DB.

# Write configuration
writeConfig = {
 "Endpoint" : "",
 "Masterkey" : "SPSVkSfA7f6vMgMvnYdzc1MaWb65v4VQNcI2Tp1WfSP2vtgmAwGXEPcxoYra5QBHHyjDGYuHKSkguHIz1vvmWQ==",
 "Database" : "DepartureDelays",
 "Collection" : "flights_fromsea",
 "Upsert" : "true"
}

# Write to Cosmos DB from the flights DataFrame
flights.write.format("com.microsoft.azure.cosmosdb.spark").options(**writeConfig).save()
Scala excerpt:

// Configure connection to the sink collection
val writeConfig = Config(Map(
  "Endpoint" -> "",
  "Masterkey" -> "SPSVkSfA7f6vMgMvnYdzc1MaWb65v4VQNcI2Tp1WfSP2vtgmAwGXEPcxoYra5QBHHyjDGYuHKSkguHIz1vvmWQ==",
  "Database" -> "DepartureDelays",
  "PreferredRegions" -> "Central US;East US2;",
  "Collection" -> "flights_fromsea",
  "WritingBatchSize" -> "100"
))

// Upsert the dataframe to Cosmos DB
import org.apache.spark.sql.SaveMode
flights.write.mode(SaveMode.Overwrite).cosmosDB(writeConfig)

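For parity with the Scala excerpt, the Python write configuration can also carry the batching option. A sketch (option names are taken from the excerpts in this README; the batch size of 100 is just an example value, and "WritingBatchSize" controls how many documents are sent per batch during the write):

```python
# Write configuration mirroring the Scala excerpt above.
# "Upsert" replaces documents sharing the same id instead of failing;
# "WritingBatchSize" batches documents per write call (example value).
writeConfig = {
    "Endpoint": "",        # your Cosmos DB account URI
    "Masterkey": "",       # your Cosmos DB account key
    "Database": "DepartureDelays",
    "Collection": "flights_fromsea",
    "Upsert": "true",
    "WritingBatchSize": "100",
}
```

As with the read path, this dict is passed to the writer via `.options(**writeConfig)`.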

See other sample Jupyter and Databricks notebooks as well as PySpark and Spark scripts.



azure-cosmosdb-spark has been regularly tested using HDInsight 3.6 (Spark 2.1) and 3.7 (Spark 2.2), and Azure Databricks Runtime 3.5 (Spark 2.2.1) and 4.0 (Spark 2.3.0).

Review supported component versions

Component | Versions Supported
--- | ---
Apache Spark | 2.2.1, 2.3
Scala | 2.11
Python | 2.7, 3.6
Azure Cosmos DB Java SDK | 1.16.1, 1.16.2


Working with the connector

You can build the connector yourself or use its Maven coordinates to work with azure-cosmosdb-spark.

Review the connector's maven versions

Spark | Scala | Latest version
--- | --- | ---
2.3.0 | 2.11 | azure-cosmosdb-spark_2.3.0_2.11_1.3.2
2.2.0 | 2.11 | azure-cosmosdb-spark_2.2.0_2.11_1.1.1
2.1.0 | 2.11 | azure-cosmosdb-spark_2.1.0_2.11_1.2.2

Using spark-cli

To work with the connector from the Spark CLI tools (i.e., spark-shell, pyspark, spark-submit), you can use the --packages parameter with the connector's Maven coordinates.

spark-shell --master yarn --packages "com.microsoft.azure:azure-cosmosdb-spark_2.3.0_2.11:1.3.2"

Using Jupyter notebooks

If you're using Jupyter notebooks within HDInsight, you can use the sparkmagic %%configure cell to specify the connector's Maven coordinates.

{ "name":"Spark-to-Cosmos_DB_Connector",
  "conf": {
    "spark.jars.packages": "com.microsoft.azure:azure-cosmosdb-spark_2.3.0_2.11:1.3.2",
    "spark.jars.excludes": "org.scala-lang:scala-reflect"
  }
}

Note: spark.jars.excludes is included to remove potential conflicts between the connector, Apache Spark, and Livy.

Using Databricks notebooks

Please create a library within your Databricks workspace by following the guidance in the Azure Databricks Guide > Use the Azure Cosmos DB Spark connector.

Note: the Use the Azure Cosmos DB Spark Connector page is currently not up to date; the issue is assigned to @dennyglee. Instead of downloading the six separate JARs into six different libraries, you can download the uber JAR from Maven and install that single JAR as one library.

Build the connector

This connector project currently uses Maven, so to build it without dependencies, you can run:

mvn clean package


Working with our samples

Included in this GitHub repository are a number of sample notebooks and scripts that you can utilize:


More Information

We have more information in the azure-cosmosdb-spark wiki including:

Configuration and Setup



Change Feed



Contributing & Feedback

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact with any additional questions or comments.

See for contribution guidelines.

To give feedback and/or report an issue, open a GitHub Issue.

Apache®, Apache Spark, and Spark® are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.