novakov-alexey / http4s-spnego   0.2.3

Apache License 2.0

http4s middleware for HTTP SPNEGO Authentication

Scala versions: 2.13, 2.12


This library provides SPNEGO Authentication as a middleware for http4s.

This project is an adaptation of akka-http-spnego for http4s.

How to use

  1. Add the library to your dependencies:
libraryDependencies += "io.github.novakov-alexey" % "http4s-spnego_2.13" % "<version>"
libraryDependencies += "io.github.novakov-alexey" % "http4s-spnego_2.12" % "<version>"
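Equivalently, sbt's %% operator can derive the Scala binary version suffix automatically, so a single line covers both Scala versions:

```scala
libraryDependencies += "io.github.novakov-alexey" %% "http4s-spnego" % "<version>"
```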
  2. Instantiate Spnego using the SpnegoConfig and JaasConfig case classes:
import scala.concurrent.duration._

import cats.effect.IO
import io.github.novakovalexey.http4s.spnego.Spnego
import io.github.novakovalexey.http4s.spnego.SpnegoConfig
import io.github.novakovalexey.http4s.spnego.JaasConfig

val realm = "EXAMPLE.ORG"
val principal = s"HTTP/myservice@$realm"
val keytab = "/etc/krb5.keytab"
val debug = true
val domain = Some("myservice")
val path: Option[String] = None
val tokenValidity: FiniteDuration = 3600.seconds
val cookieName = "http4s.spnego"

val cfg = SpnegoConfig(
  realm,
  principal,
  cookieName,
  domain,
  path,
  tokenValidity,
  Some(JaasConfig(keytab, debug, None)) // option 1
)

val spnegoIO: IO[Spnego[IO]] = Spnego[IO](cfg) // creation is side-effectful

JaasConfig can also be set to None (option 2) in order to pass the JAAS configuration via a standard JAAS file. For example:

System.setProperty("java.security.auth.login.config", "test-server/resources/server-jaas.conf")

See an example of a standard JAAS file at test-server/resources/server-jaas.conf
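For orientation, a typical server-side Krb5LoginModule entry in a JAAS file looks roughly like the sketch below. The entry name and option values here are illustrative only; the authoritative example ships in test-server/resources/server-jaas.conf:

```
com.sun.security.jgss.accept {
  com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/krb5.keytab"
    principal="HTTP/myservice@EXAMPLE.ORG"
    debug=true;
};
```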

  3. Wrap AuthedRoutes with spnego#middleware to get access to the SPNEGO token. The wrapped routes are called only if SPNEGO authentication succeeds.
import cats.effect.Sync
import cats.implicits._
import org.http4s.{AuthedRoutes, HttpRoutes}
import org.http4s.dsl.Http4sDsl
import io.github.novakovalexey.http4s.spnego.{Spnego, Token}

class LoginEndpoint[F[_]: Sync](spnego: Spnego[F]) extends Http4sDsl[F] {

  val routes: HttpRoutes[F] =
    spnego(AuthedRoutes.of[Token, F] {
      case GET -> Root as token =>
        Ok(s"This page is protected using HTTP SPNEGO authentication; logged in as ${token.principal}")
    })
}
  4. Use the routes in your server:
def stream[F[_]: ConcurrentEffect: ContextShift: Timer]: Stream[F, ExitCode] =
  for {
    spnego <- Stream.eval(spnegoIO)

    httpApp = Router("/auth" -> new LoginEndpoint[F](spnego).routes).orNotFound
    finalHttpApp = Logger.httpApp(logHeaders = true, logBody = true)(httpApp)

    stream <- BlazeServerBuilder[F]
      .bindHttp(8080, "0.0.0.0")
      .withHttpApp(finalHttpApp)
      .serve
  } yield stream

Add property to the Token

If you need to add more fields to the JWT token, there is a special String field for that, Token.attributes:

// this route is used to create a cookie once SPNEGO is done
case GET -> Root as token =>
  val id = "groupId=1"
  Ok(s"logged in as ${token.principal}")
    .map(_.addCookie(spnego.signCookie(token.copy(attributes = id))))

// this route takes an already authenticated user and its token
case POST -> Root as token =>
  val id = token.attributes
  // do something with id
  Ok(s"attributes: $id")

The added field is then included in the JWT signature.
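To illustrate why the attributes field must be covered by the signature, here is a minimal, self-contained sketch (not the library's actual implementation) that HMAC-signs a principal together with its attributes using only the JDK. All names here are hypothetical:

```scala
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec
import java.util.Base64

object SignatureSketch {
  // Hypothetical helper: sign "principal|attributes" with HMAC-SHA256,
  // mimicking how a JWT signature covers every claim it protects.
  def sign(secret: String, principal: String, attributes: String): String = {
    val mac = Mac.getInstance("HmacSHA256")
    mac.init(new SecretKeySpec(secret.getBytes("UTF-8"), "HmacSHA256"))
    val payload = s"$principal|$attributes"
    Base64.getEncoder.encodeToString(mac.doFinal(payload.getBytes("UTF-8")))
  }

  def main(args: Array[String]): Unit = {
    val withAttrs    = sign("secret", "HTTP/myservice@EXAMPLE.ORG", "groupId=1")
    val withoutAttrs = sign("secret", "HTTP/myservice@EXAMPLE.ORG", "")
    // Changing the attributes changes the signature, so a tampered
    // attribute value cannot pass verification.
    println(withAttrs != withoutAttrs) // prints: true
  }
}
```

Because the signature covers the attributes, a client cannot alter the cookie's attribute value without invalidating it.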

See tests and test-server module for more examples.

Testing with test server


  1. Make sure Kerberos is installed and configured for your server and client machines.
  2. Configure the test server with the proper realm, principal, and keytab path (see the config above).
  3. Authenticate the client via the kinit CLI tool to the same realm used on the server side.
  4. Start the test server: sbt 'project test-server' run
  5. Use curl or a web browser to initiate a negotiation request. To test with curl, run:
curl -k --negotiate -u : -b ~/cookiejar.txt -c ~/cookiejar.txt http://<yourserver>:8080/

Using Kerberos Operator

The Kerberos Operator allows you to spin up a KDC instance in Kubernetes via a CRD. See the operator's repository for more details.

First of all, you need a Kubernetes cluster. Then use the existing Makefile to run the following commands:

make deploy-krb-operator
make create-principals

Once the operator and the Kerberos server (KDC container) are up and running, check that a new Kubernetes secret named test-keytab has been created. If the secret is not there yet, wait a minute and check again. Once it exists, deploy the client and server pods by running:

make deploy-client-server

Once the client and server pods are up and running, tail the server pod's log to see what is going on on the http4s server side. Then open a shell in the client pod, for example with kubectl exec -it .... In the client pod's shell run:

sh /opt/docker/

The expected result is an Ok status in the curl output in the client pod's shell.

Remove Kubernetes Test setup

Run the following commands:

make delete-principals
make undeploy-client-server
make undeploy-krb-operator