SynapseML


SynapseML (previously known as MMLSpark) is an open-source library that simplifies the creation of massively scalable machine learning (ML) pipelines. SynapseML provides simple, composable, and distributed APIs for a wide variety of machine learning tasks such as text analytics, vision, and anomaly detection. SynapseML is built on the Apache Spark distributed computing framework and shares the same API as the SparkML/MLLib library, allowing you to seamlessly embed SynapseML models into existing Apache Spark workflows.

With SynapseML, you can build scalable and intelligent systems to solve challenges in domains such as anomaly detection, computer vision, deep learning, and text analytics. SynapseML can train and evaluate models on single-node, multi-node, and elastically resizable clusters, letting you scale your work without wasting resources. SynapseML is usable across Python, R, Scala, Java, and .NET. Furthermore, its API abstracts over a wide variety of databases, file systems, and cloud data stores to simplify experiments no matter where your data is located.

SynapseML requires Scala 2.12, Spark 3.4+, and Python 3.8+.
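
A quick way to sanity-check the Python and Spark side of these requirements (a minimal sketch; it does not verify the Scala version) is:

import sys

import pyspark

# Check the interpreter and Spark versions against the stated requirements.
assert sys.version_info >= (3, 8), "SynapseML requires Python 3.8+"
spark_major, spark_minor = (int(x) for x in pyspark.__version__.split(".")[:2])
assert (spark_major, spark_minor) >= (3, 4), "SynapseML requires Spark 3.4+"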


Features

  • Vowpal Wabbit on Spark: Fast, Sparse, and Effective Text Analytics
  • The Cognitive Services for Big Data: Leverage the Microsoft Cognitive Services at Unprecedented Scales in your existing SparkML pipelines
  • LightGBM on Spark: Train Gradient Boosted Machines with LightGBM
  • Spark Serving: Serve any Spark Computation as a Web Service with Sub-Millisecond Latency
  • HTTP on Spark: An Integration Between Spark and the HTTP Protocol, enabling Distributed Microservice Orchestration
  • ONNX on Spark: Distributed and Hardware Accelerated Model Inference on Spark
  • Responsible AI: Understand Opaque-box Models and Measure Dataset Biases
  • Spark Binding Autogeneration: Automatically Generate Spark Bindings for PySpark and SparklyR
  • Isolation Forest on Spark: Distributed Nonlinear Outlier Detection
  • CyberML: Machine Learning Tools for Cyber Security
  • Conditional KNN: Scalable KNN Models with Conditional Queries

Documentation and Examples

For quickstarts, documentation, demos, and examples please see our website.

Setup and installation

First, select the platform you are installing SynapseML on:

Microsoft Fabric

In Microsoft Fabric notebooks, SynapseML is already installed. To change the version, place the following in the first cell of your notebook:

%%configure -f
{
  "name": "synapseml",
  "conf": {
      "spark.jars.packages": "com.microsoft.azure:synapseml_2.12:<THE_SYNAPSEML_VERSION_YOU_WANT>",
      "spark.jars.repositories": "https://mmlspark.azureedge.net/maven",
      "spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12,com.fasterxml.jackson.core:jackson-databind",
      "spark.yarn.user.classpath.first": "true",
      "spark.sql.parquet.enableVectorizedReader": "false"
  }
}
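
After the session restarts, you can confirm which build is loaded. Recent SynapseML releases expose the attribute used below; treat it as an assumption if you pin an older version:

import synapse.ml.core

# Report the Spark package version of the SynapseML build in this session.
print(synapse.ml.core.__spark_package_version__)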

Synapse Analytics

In Azure Synapse notebooks, place the following in the first cell of your notebook:

  • For Spark 3.4 Pools:
%%configure -f
{
  "name": "synapseml",
  "conf": {
      "spark.jars.packages": "com.microsoft.azure:synapseml_2.12:1.0.8",
      "spark.jars.repositories": "https://mmlspark.azureedge.net/maven",
      "spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12,com.fasterxml.jackson.core:jackson-databind",
      "spark.yarn.user.classpath.first": "true",
      "spark.sql.parquet.enableVectorizedReader": "false"
  }
}
  • For Spark 3.3 Pools:
%%configure -f
{
  "name": "synapseml",
  "conf": {
      "spark.jars.packages": "com.microsoft.azure:synapseml_2.12:0.11.4-spark3.3",
      "spark.jars.repositories": "https://mmlspark.azureedge.net/maven",
      "spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12,com.fasterxml.jackson.core:jackson-databind",
      "spark.yarn.user.classpath.first": "true",
      "spark.sql.parquet.enableVectorizedReader": "false"
  }
}

To install at the pool level instead of the notebook level, add the Spark properties listed above to the pool configuration.

Databricks

To install SynapseML on the Databricks cloud, create a new library from Maven coordinates in your workspace.

For the coordinates use: com.microsoft.azure:synapseml_2.12:1.0.8 with the resolver: https://mmlspark.azureedge.net/maven. Ensure this library is attached to your target cluster(s).

Finally, ensure that your Spark cluster runs at least Spark 3.4 with Scala 2.12. If you encounter Netty dependency issues, use DBR 10.1.

You can use SynapseML in both your Scala and PySpark notebooks. To get started with our example notebooks, import the following Databricks archive:

https://mmlspark.blob.core.windows.net/dbcs/SynapseMLExamplesv1.0.8.dbc

Python Standalone

To try out SynapseML on a Python (or Conda) installation, first install Spark via pip with pip install pyspark. You can then use pyspark as in the above example, or from Python:

import pyspark

# Start a Spark session with the SynapseML package on the classpath.
spark = pyspark.sql.SparkSession.builder.appName("MyApp") \
            .config("spark.jars.packages", "com.microsoft.azure:synapseml_2.12:1.0.8") \
            .getOrCreate()
import synapse.ml
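
To verify that the package resolved correctly, you can fit a small model. The sketch below uses SynapseML's LightGBM wrapper on a tiny in-memory DataFrame; the column names and data are purely illustrative:

from pyspark.ml.feature import VectorAssembler
from synapse.ml.lightgbm import LightGBMClassifier

# Build a toy two-feature binary classification dataset.
df = spark.createDataFrame([(0.0, 1.0, 0), (1.0, 0.0, 1)] * 50, ["f1", "f2", "label"])
assembled = VectorAssembler(inputCols=["f1", "f2"], outputCol="features").transform(df)

# Train a distributed LightGBM classifier and inspect its predictions.
model = LightGBMClassifier(featuresCol="features", labelCol="label").fit(assembled)
model.transform(assembled).select("label", "prediction").show(5)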

Spark Submit

SynapseML can be conveniently installed on existing Spark clusters via the --packages option. For example:

spark-shell --packages com.microsoft.azure:synapseml_2.12:1.0.8
pyspark --packages com.microsoft.azure:synapseml_2.12:1.0.8
spark-submit --packages com.microsoft.azure:synapseml_2.12:1.0.8 MyApp.jar
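
The same flag works for Python applications. A minimal script (the file name my_app.py is hypothetical) might look like the following, submitted with spark-submit --packages com.microsoft.azure:synapseml_2.12:1.0.8 my_app.py; as in the standalone example above, the jars fetched by --packages also provide the synapse.ml Python bindings:

from pyspark.sql import SparkSession

# --packages places the SynapseML jars on the classpath before the app starts.
spark = SparkSession.builder.appName("SynapseMLApp").getOrCreate()

import synapse.ml  # succeeds once the SynapseML jars are on the classpath

print(spark.version)
spark.stop()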

SBT

If you are building a Spark application in Scala, add the following line to your build.sbt:

libraryDependencies += "com.microsoft.azure" % "synapseml_2.12" % "1.0.8"

Apache Livy and HDInsight

To install SynapseML from within a Jupyter notebook served by Apache Livy, use the following configure magic. You will need to start a new session after this configure cell is executed.

The package exclusions below may be necessary due to current issues with Livy 0.5.

%%configure -f
{
    "name": "synapseml",
    "conf": {
        "spark.jars.packages": "com.microsoft.azure:synapseml_2.12:1.0.8",
        "spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12,com.fasterxml.jackson.core:jackson-databind"
    }
}

Docker

The easiest way to evaluate SynapseML is via our pre-built Docker container. To do so, run the following command:

docker run -it -p 8888:8888 -e ACCEPT_EULA=yes mcr.microsoft.com/mmlspark/release jupyter notebook

Navigate to http://localhost:8888/ in your web browser to run the sample notebooks. See the documentation for more on Docker use.

To read the EULA for using the Docker image, run: docker run -it -p 8888:8888 mcr.microsoft.com/mmlspark/release eula

R

To try out SynapseML using the autogenerated R wrappers, see our instructions. Note: this feature is still under development, and some necessary custom wrappers may be missing.

Building from source

SynapseML has recently transitioned to a new build infrastructure. For detailed developer docs, please see the Developer Readme.

If you are an existing SynapseML developer, you will need to reconfigure your development setup. We now support platform-independent development and better integration with IntelliJ and SBT. If you encounter issues, please reach out to our support email!


Contributing & feedback

This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

See CONTRIBUTING.md for contribution guidelines.

To give feedback and/or report an issue, open a GitHub Issue.


Apache®, Apache Spark, and Spark® are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.