
John Snow Labs Spark-NLP is a natural language processing library built on top of Apache Spark ML. It provides simple, performant & accurate NLP annotations for machine learning pipelines that scale easily in a distributed environment.

Project's website

Take a look at our official Spark NLP page for user documentation and examples.

Slack community channel

Join Slack

Apache Spark Support

Spark-NLP 2.0.4 has been built on top of Apache Spark 2.4.0

Note that Spark 2.4 is not backwards compatible with Spark 2.3.x, so models and environments might not work across versions.

If you are still stuck on Spark 2.3.x, feel free to use this assembly JAR instead; support is limited. For the OCR module, use this JAR built for Spark 2.3.x.

Spark NLP   Spark 2.3.x   Spark 2.4
2.x.x       YES           YES
1.8.x       Partially     YES
1.7.3       YES           N/A
1.6.3       YES           N/A
1.5.0       YES           N/A

Find out more about Spark-NLP versions from our release notes.

Spark Packages

Command line (requires internet connection)

This library has been uploaded to the spark-packages repository.

The benefit of spark-packages is that it makes the library available for both Scala/Java and Python.

To use the most recent version, just add --packages JohnSnowLabs:spark-nlp:2.0.4 to your spark command:

spark-shell --packages JohnSnowLabs:spark-nlp:2.0.4
pyspark --packages JohnSnowLabs:spark-nlp:2.0.4
spark-submit --packages JohnSnowLabs:spark-nlp:2.0.4

This can also be used to create a SparkSession manually by using the spark.jars.packages option in both Python and Scala
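For example, in Python (a minimal sketch; the application name is our own, and the same spark.jars.packages option works from Scala):

from pyspark.sql import SparkSession

# the package is resolved from spark-packages when the session starts
spark = SparkSession.builder \
    .appName("spark-nlp-example") \
    .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.0.4") \
    .getOrCreate()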

Compiled JARs

Build from source

Spark NLP

  • FAT-JAR for CPU
sbt assembly
  • FAT-JAR for GPU
sbt -Dis_gpu=true assembly
  • Packaging the project
sbt package


Spark NLP OCR

Requires native Tesseract 4.x+ for image-based OCR. Spark-NLP is not required for the OCR module to work, but it is highly suggested.

  • FAT-JAR
sbt ocr/assembly
  • Packaging the project
sbt ocr/package

Using the jar manually

If for some reason you need to use the JAR directly, you can either download the fat JARs provided here or get them from Maven Central.

To add JARs to spark programs use the --jars option:

spark-shell --jars spark-nlp.jar

The preferred way to use the library when running spark programs is using the --packages option as specified in the spark-packages section.


Our package is deployed to Maven Central. In order to add this package as a dependency in your application:


Maven (spark-nlp):

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.11</artifactId>
    <version>2.0.4</version>
</dependency>

Maven (spark-nlp-ocr):

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-ocr_2.11</artifactId>
    <version>2.0.4</version>
</dependency>

SBT (spark-nlp):

libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp" % "2.0.4"

SBT (spark-nlp-ocr):

libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp-ocr" % "2.0.4"



Python without explicit Pyspark installation


If you installed pyspark through pip, you can install spark-nlp through pip as well.

pip install spark-nlp==2.0.4

PyPI spark-nlp package


If you are using Anaconda/Conda for managing Python packages, you can install spark-nlp as follows:

conda install -c johnsnowlabs spark-nlp

Anaconda spark-nlp package

Then you'll have to create a SparkSession manually, for example:

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .config("spark.driver.maxResultSize", "2G") \
    .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.0.4") \
    .config("spark.kryoserializer.buffer.max", "500m") \
    .getOrCreate()
If using local JARs, you can use the spark.jars option instead, with a comma-delimited list of JAR files. For cluster setups, of course, you'll have to put the JARs in a location reachable by all driver and executor nodes.
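For example (a minimal sketch; the JAR paths are placeholders):

# spark.jars takes a comma-delimited list of local JAR files
spark = SparkSession.builder \
    .config("spark.jars", "/path/to/spark-nlp.jar,/path/to/other-dependency.jar") \
    .getOrCreate()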

Apache Zeppelin

Use either one of the following options

  • Add the Maven coordinates com.johnsnowlabs.nlp:spark-nlp_2.11:2.0.4 to the interpreter's library list
  • Add the path to a pre-built JAR from here to the interpreter's library list, making sure the JAR is available on the driver path

Python in Zeppelin

Apart from the previous step, install the Python module through pip:

pip install spark-nlp==2.0.4

Or you can install spark-nlp from inside Zeppelin by using Conda:

%python.conda install -c johnsnowlabs spark-nlp

Configure Zeppelin properly and use cells with %spark.pyspark or whatever interpreter name you chose.

Finally, in the Zeppelin interpreter settings, make sure you properly set zeppelin.python to the Python binary you want to use and installed the pip library with (e.g. python3).

An alternative option would be to set SPARK_SUBMIT_OPTIONS (zeppelin-env.sh) and make sure --packages is there, as shown earlier, since it covers both the Scala and Python sides of the installation.
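For instance, once the interpreter is set up, a note cell using %spark.pyspark can load a pretrained pipeline directly (a minimal sketch; the pipeline name comes from the Models and Pipelines section below, and the sample sentence is our own):

from sparknlp.pretrained import PretrainedPipeline

# works only if the spark-nlp jars were made available to the interpreter
pipeline = PretrainedPipeline('explain_document_ml', lang='en')
print(pipeline.annotate('Spark NLP runs inside Zeppelin.'))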

Jupyter Notebook (Python)

The easiest way to get this done is to make Jupyter Notebook run using pyspark, as follows:

export SPARK_HOME=/path/to/your/spark/folder
export PYSPARK_PYTHON=python3
export PYSPARK_DRIVER_PYTHON=jupyter
export PYSPARK_DRIVER_PYTHON_OPTS=notebook

pyspark --packages JohnSnowLabs:spark-nlp:2.0.4

Alternatively, you can mix in using the --jars option for pyspark together with pip install spark-nlp.

If not using pyspark at all, you'll have to follow the instructions pointed out here.

S3 Cluster

With no hadoop configuration

If your distributed storage is S3 and you don't have a standard Hadoop configuration (i.e. fs.defaultFS), you need to specify where in the cluster's distributed storage you want to store Spark-NLP's tmp files. First, decide where you want to put your application.conf file:

import com.johnsnowlabs.util.ConfigLoader
ConfigLoader.setConfigPath("/somewhere/to/put/application.conf")

Then, put the following content in that application.conf:

sparknlp {
  settings {
    cluster_tmp_dir = "somewhere in s3n:// path to some folder"
  }
}

Models and Pipelines


Pipeline               Name                   English
Explain Document ML    explain_document_ml    Download
Explain Document DL    explain_document_dl    Download
Entity Recognizer DL   entity_recognizer_dl   Download



Model                                      Name                      English
LemmatizerModel (Lemmatizer)               lemma_antbnc              Download
PerceptronModel (POS)                      pos_anc                   Download
NerCRFModel (NER with GloVe)               ner_crf                   Download
NerDLModel (NER with GloVe)                ner_dl                    Download
WordEmbeddings (GloVe)                     glove_100d                Download
WordEmbeddings (BERT)                      bert_uncased              Download
NerDLModel (NER with BERT)                 ner_dl_bert               Download
DeepSentenceDetector                       ner_dl_sentence           Download
ContextSpellCheckerModel (Spell Checker)   spellcheck_dl             Download
SymmetricDeleteModel (Spell Checker)       spellcheck_sd             Download
NorvigSweetingModel (Spell Checker)        spellcheck_norvig         Download
ViveknSentimentModel (Sentiment)           sentiment_vivekn          Download
DependencyParser (Dependency)              dependency_conllu         Download
TypedDependencyParser (Dependency)         dependency_typed_conllu   Download


Model                           Name            Italian
LemmatizerModel (Lemmatizer)    lemma_dxc       Download
SentimentDetector (Sentiment)   sentiment_dxc   Download


Model                          Name         French
PerceptronModel (POS UD)       pos_ud_gsd   Download
LemmatizerModel (Lemmatizer)   lemma        Download

How to use Models and Pipelines


To use Spark NLP pretrained pipelines, you can call PretrainedPipeline with the pipeline's name and its language (default is en):

from sparknlp.pretrained import PretrainedPipeline

pipeline = PretrainedPipeline('explain_document_dl', lang='en')

Same in Scala

val pipeline = PretrainedPipeline("explain_document_dl", lang="en")
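Once loaded, a pipeline can annotate plain text directly. A minimal Python sketch (the sample sentence is our own; the available result keys depend on the pipeline's output columns):

result = pipeline.annotate('Harry Potter is a great movie.')
# result is a dict keyed by output column, e.g. result['token'], result['pos'], result['ner']
print(result['ner'])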

You can follow the same approach to use Spark NLP pretrained models:

from sparknlp.annotator import NerDLModel

# load NER model trained by deep learning approach and GloVe word embeddings
ner_dl = NerDLModel.pretrained('ner_dl')
# load NER model trained by deep learning approach and BERT word embeddings
ner_bert = NerDLModel.pretrained('ner_dl_bert')

The default language is English, so for other languages you should set the language:

// load French POS tagger model trained by Universal Dependencies
val french_pos = PerceptronModel.pretrained("pos_ud_gsd", lang="fr")
// load Italian LemmatizerModel
val italian_lemma = LemmatizerModel.pretrained("lemma_dxc", lang="it")


If you have any trouble using online pipelines or models in your environment (maybe it's air-gapped), you can directly download them for offline use.

After downloading offline models/pipelines and extracting them, here is how you can use them inside your code (the path could be a shared storage like HDFS in a cluster); a Python sketch follows the list:

  • Loading PerceptronModel annotator model inside Spark NLP Pipeline
val french_pos = PerceptronModel.load("/tmp/pos_ud_gsd_fr_2.0.2_2.4_1556531457346/")
      .setInputCols("document", "token")
      .setOutputCol("pos")
  • Loading Offline Pipeline
val advancedPipeline = PipelineModel.load("/tmp/explain_document_dl_en_2.0.2_2.4_1556530585689/")
// To use the loaded Pipeline for prediction (predictionDF is your input DataFrame)
advancedPipeline.transform(predictionDF)
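The same works from Python through Spark ML's PipelineModel. A minimal sketch reusing the path above (df is assumed to be a DataFrame with the text input column the pipeline expects):

from pyspark.ml import PipelineModel

advanced_pipeline = PipelineModel.load("/tmp/explain_document_dl_en_2.0.2_2.4_1556530585689/")
result = advanced_pipeline.transform(df)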


Need more examples? Check out our dedicated repository to showcase Spark NLP use cases!



Check our Articles and FAQ page here



  • Q: I am getting a Java Core Dump when running OCR transformation

    • A: Set the LC_ALL=C environment variable
  • Q: Getting org.apache.pdfbox.filter.MissingImageReaderException: Cannot read JPEG2000 image: Java Advanced Imaging (JAI) Image I/O Tools are not installed when running an OCR transformation

    • A: Add --packages com.github.jai-imageio:jai-imageio-jpeg2000:1.3.0 to your command. This library is non-free, thus we can't include it as a Spark-NLP dependency by default


Special community acknowledgments

Thanks in general to the community, who have lately been reporting important issues and submitting pull requests with bugfixes. The community has been key in the last releases, with feedback from various Spark-based environments.

Here are a few specific mentions for recurring feedback and Slack participation:

  • @maziyarpanahi - For contributing with testing and valuable feedback
  • @easimadi - For contributing with documentation and valuable feedback


We appreciate any sort of contributions:

  • ideas
  • feedback
  • documentation
  • bug reports
  • nlp training and testing corpora
  • development and testing

Clone the repo and submit your pull-requests! Or directly create issues in this repo.


John Snow Labs