Lolo is a random forest-centered machine learning library in Scala.
The core of Lolo is bagging simple base learners, like decision trees, to produce models that can generate robust uncertainty estimates.
Lolo supports:
- continuous and categorical features
- regression, classification, and multi-task trees
- bagged learners to produce ensemble models, e.g. random forests
- linear and ridge regression
- regression leaf models, e.g. ridge regression trained on the leaf data
- random rotation ensembles
- recalibrated bootstrap prediction interval estimates
- bias-corrected jackknife-after-bootstrap and infinitesimal jackknife confidence interval estimates
- bias models trained on out-of-bag residuals
- feature importances computed via variance reduction or Shapley values (which are additive and per-prediction); see the sketch after this list
- model-based feature importance
- distance correlation
- hyperparameter optimization via grid or random search
- parallel training via Scala parallel collections
- validation metrics for accuracy and uncertainty quantification
- visualization of predicted-vs-actual validations
- deterministic training via random seeds
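
As an illustration of the feature-importance support, here is a minimal sketch. The `featureImportance` accessor on the training result and the `io.citrine.lolo.api.TrainingRow` import path are assumptions; check the scaladoc for your version before relying on them.

```scala
import io.citrine.lolo.api.TrainingRow // import path is an assumption
import io.citrine.lolo.learners.RandomForestRegressor

// Toy data: y = 2 * x0, plus a second, uninformative feature.
val features: Seq[Vector[Any]] = Seq.tabulate(64)(i => Vector(i.toDouble, (i % 7).toDouble))
val labels: Seq[Double] = features.map(row => 2.0 * row(0).asInstanceOf[Double])

val trainingData: Seq[TrainingRow[Double]] = TrainingRow.build(features.zip(labels))
val trainingResult = RandomForestRegressor().train(trainingData)

// Assumed API: None when the learner does not compute importances.
trainingResult.featureImportance.foreach { scores =>
  scores.zipWithIndex.foreach { case (score, idx) =>
    println(f"feature $idx: $score%.3f")
  }
}
```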
Lolo is available on Maven Central and can be used by adding the following dependency block to your pom file:
```xml
<dependency>
    <groupId>io.citrine</groupId>
    <artifactId>lolo</artifactId>
    <version>6.0.0</version>
</dependency>
```
Lolo provides higher-level wrappers for common learner combinations. For example, you can use a random forest with:

```scala
import io.citrine.lolo.learners.RandomForestRegressor

val trainingData: Seq[TrainingRow[Double]] = TrainingRow.build(features.zip(labels))
val model = RandomForestRegressor().train(trainingData).model
val predictions: Seq[Double] = model.transform(testInputs).expected
```
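
Because the wrapper bags its trees, the same prediction result can also report per-prediction uncertainty. Here is a minimal sketch continuing the snippet above; the `uncertainty` accessor and its `Option` return type are assumptions to check against the scaladoc for your version:

```scala
// Continuing from the quick-start snippet above.
val predictionResult = model.transform(testInputs)

val means: Seq[Double] = predictionResult.expected
// Assumed API: None when the model cannot estimate uncertainty.
val sigmas: Option[Seq[Any]] = predictionResult.uncertainty()
sigmas.foreach { s =>
  means.zip(s).foreach { case (mu, sigma) => println(s"$mu +/- $sigma") }
}
```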
Lolo prioritizes functionality over performance, but it is still quite fast. In its random forest use case, the complexity scales as:
| Time complexity | Training rows | Features | Trees |
|---|---|---|---|
| `train` | O(n log n) | O(n) | O(n) |
| `loss` | O(n log n) | O(n) | O(n) |
| `expected` | O(log n) | O(1) | O(n) |
| `uncertainty` | O(n) | O(1) | O(n) |
On an Ivy Bridge test platform, the (1024 row, 1024 tree, 8 feature) performance test took 1.4 sec to train and 2.3 ms per prediction with uncertainty.
We welcome bug reports, feature requests, and pull requests.
Pull requests should be made following the feature branch workflow: branching off of and opening PRs into `main`.
Production releases are triggered by tags. The sbt-ci-release plugin will use the tag as the `lolo` version. On the other hand, `lolopy` versions are still read from `setup.py`, so version bumps are needed for successful releases. Failing to bump the `lolopy` version number will result in a skipped `lolopy` release rather than a build failure.
- Consistent formatting is enforced by scalafmt.
- The easiest way to check whether scalafmt is satisfied is to run it from the command line: `sbt scalafmtCheckAll`. This will check whether any files need to be reformatted. Pull requests are gated on this running successfully.
- You can automatically check whether code is formatted properly before pushing to an upstream repository using a git hook. To set this up, install the pre-commit framework by following the instructions here. Then enable the hooks in `.pre-commit-config.yaml` by running `pre-commit install --hook-type pre-push` from the root directory. This will run `scalafmtCheckAll` before pushing to a remote repo.
- To ensure code is formatted properly, you can run `sbt scalafmtAll` from the command line or configure your IDE to format files on save.
See Contributors
- randomForestCI is an R-based implementation of jackknife variance estimates by S. Wager