Containerization

How to use Docker, Kubernetes, and Helm Charts to deploy your VLINGO XOOM platform services.

The following shows how to set up a Reactive, scalable, event-driven application based on VLINGO XOOM, deployed on Kubernetes and packaged with a Helm chart.

Quick start with VLINGO XOOM

First, we need a project structure that allows us to start building our application. That's where XOOM Designer comes into play. It saves a lot of effort by providing a web/graphical user interface to generate initial development resources such as application files, the directory structure, a Dockerfile, and much more. Once it is installed, you can launch the project generator wizard by running the following command:

$ ./xoom gui

This command will open your preferred browser. Just fill in the wizard steps and the project will be generated, ready for development.

Building the Docker image

If you choose either Docker or Kubernetes in the deployment step, a Dockerfile will be placed in the project's root folder:

FROM adoptopenjdk/openjdk11-openj9:jdk-11.0.1.13-alpine-slim
COPY target/xoom-example-*.jar xoom-example.jar
EXPOSE 8080
CMD java -Dcom.sun.management.jmxremote -noverify ${JAVA_OPTS} -jar xoom-example.jar

That means the image is ready to be built along with the executable jar. Both tasks are performed through a single Starter CLI command:

$ ./xoom docker package

Now, let's tag and publish this local image to Docker Hub.

$ ./xoom docker push

You can find more information on xoom docker push and other containerization shortcut commands here.

Alternative Without VLINGO XOOM

The previous steps are similar for a service or application built without VLINGO XOOM. The executable jar, including the dependency jars, can be generated with the following Maven plugin configuration:

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <version>1.6.0</version>
  <executions>
    <execution>
      <goals>
        <goal>java</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <mainClass>io.vlingo.xoom.app.infra.Bootstrap</mainClass>
  </configuration>
</plugin>
<plugin>
  <artifactId>maven-assembly-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>single</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <finalName>vlingo-xoom-app</finalName>
    <descriptors>
      <descriptor>assembly.xml</descriptor>
    </descriptors>
    <archive>
      <manifest>
        <addClasspath>true</addClasspath>
        <mainClass>io.vlingo.xoom.app.infra.Bootstrap</mainClass>
        <classpathPrefix>dependency-jars/</classpathPrefix>
      </manifest>
    </archive>
  </configuration>
</plugin>
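
The assembly plugin above references an assembly.xml descriptor that is not shown. A minimal sketch of such a descriptor follows, assuming the goal is a single executable jar with all runtime dependencies unpacked into it; the `withdeps` id matches the `vlingo-xoom-app-withdeps.jar` name used in the Dockerfile below:

```xml
<assembly xmlns="http://maven.apache.org/ASSEMBLY/2.1.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/ASSEMBLY/2.1.0
                              http://maven.apache.org/xsd/assembly-2.1.0.xsd">
  <!-- appended to finalName: vlingo-xoom-app-withdeps.jar -->
  <id>withdeps</id>
  <formats>
    <format>jar</format>
  </formats>
  <includeBaseDirectory>false</includeBaseDirectory>
  <dependencySets>
    <dependencySet>
      <outputDirectory>/</outputDirectory>
      <!-- include the project's own classes plus unpacked runtime dependencies -->
      <useProjectArtifact>true</useProjectArtifact>
      <unpack>true</unpack>
      <scope>runtime</scope>
    </dependencySet>
  </dependencySets>
</assembly>
```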

The Dockerfile requires the jar with dependencies:

FROM adoptopenjdk/openjdk11-openj9:jdk-11.0.1.13-alpine-slim
COPY target/vlingo-xoom-app-withdeps.jar vlingo-xoom-app.jar
EXPOSE 8082
CMD java -Dcom.sun.management.jmxremote -noverify ${JAVA_OPTS} -jar vlingo-xoom-app.jar

Now, besides the application itself, the Docker image is ready to be built and published:

$ mvn clean package && docker build . -t vlingo-xoom-app:latest
$ docker tag vlingo-xoom-app:latest [publisher]/vlingo-xoom-app:latest
$ docker push [publisher]/vlingo-xoom-app
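
As an optional variation, the jar can also be assembled inside Docker with a multi-stage build, removing the dependency on a local Maven installation. This is a sketch, assuming the official maven base image and the same jar name as above:

```dockerfile
# Build stage: compile and assemble the jar inside the container
FROM maven:3-jdk-11 AS build
WORKDIR /build
COPY pom.xml assembly.xml ./
COPY src ./src
RUN mvn -q clean package

# Runtime stage: copy only the assembled jar into the slim JDK image
FROM adoptopenjdk/openjdk11-openj9:jdk-11.0.1.13-alpine-slim
COPY --from=build /build/target/vlingo-xoom-app-withdeps.jar vlingo-xoom-app.jar
EXPOSE 8082
CMD java ${JAVA_OPTS} -jar vlingo-xoom-app.jar
```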

Kubernetes Deployment

Kubernetes is the chosen tool for container orchestration. In this scenario, it will run a single-node cluster serving the VLINGO XOOM application. Once kubeadm is installed, the cluster is initialized as shown below:

$ sudo kubeadm init --pod-network-cidr=192.168.0.0/16

Next, the cluster configuration should be copied into your home directory:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Kubernetes supports multiple networking model implementations; for this example, we choose Calico. Its manifest can be applied using kubectl, the command-line tool for controlling Kubernetes clusters:

$ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
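
With a network plugin such as Calico in place, Kubernetes NetworkPolicy resources become enforceable. This is not required for this example, but as a minimal illustration, the following hypothetical policy would restrict ingress to the application's Pods to TCP port 8080:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: xoom-example-ingress
spec:
  # select the Pods this policy applies to (labels are illustrative)
  podSelector:
    matchLabels:
      app.kubernetes.io/name: xoom-example
  ingress:
    # allow inbound traffic only on the application's HTTP port
    - ports:
        - protocol: TCP
          port: 8080
```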

Since a single-node cluster is the option for this example, the last step is to prepare the master node by removing the taint that, in short, prevents Pods (the deployable units) from being scheduled on it.

$ kubectl taint nodes --all node-role.kubernetes.io/master-

Management and Deployment With Helm Chart

At this point, we need to tell Kubernetes the application's desired state: how we want to expose our services, the number of replicas, the allocated resources, and so on. The simplest way is through Helm, a tool dedicated to Kubernetes application management that simplifies installation, upgrades, scaling, and other common tasks. To get started, let's create a chart, which is a collection of files inside a directory:

$ helm create xoom-example

The output looks like the following structure:

xoom-example/
  Chart.yaml          # A YAML file containing information about the chart
  LICENSE             # OPTIONAL: A plain text file containing the license for the chart
  README.md           # OPTIONAL: A human-readable README file
  values.yaml         # The default configuration values for this chart
  values.schema.json  # OPTIONAL: A JSON Schema for imposing a structure on the values.yaml file
  charts/             # A directory containing any charts upon which this chart depends.
  crds/               # Custom Resource Definitions
  templates/          # A directory of templates that, when combined with values,
                      # will generate valid Kubernetes manifest files.
  templates/NOTES.txt # OPTIONAL: A plain text file containing short usage notes

In this basic scenario, all we need to do is edit values.yaml, setting the Docker image repository, the service type and port, and the number of replicas:

# Default values for xoom-example.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 3

image:
  repository: [publisher]/xoom-example
  
  ...
  
service:
  type: ClusterIP
  port: 8080
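
One caveat: the chart generated by helm create probes the application with an HTTP GET on the root path (visible in the livenessProbe and readinessProbe sections of the helm template output below). If the application does not answer on /, the probe definitions in templates/deployment.yaml should point at a route it does serve. This is a sketch, assuming a hypothetical /health endpoint:

```yaml
livenessProbe:
  httpGet:
    path: /health   # hypothetical endpoint; adjust to a route your app serves
    port: http
  initialDelaySeconds: 10
readinessProbe:
  httpGet:
    path: /health
    port: http
  initialDelaySeconds: 5
```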

Using lint, we can check that the chart is still well-formed after the changes:

$ helm lint xoom-example

==> Linting xoom-example
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, 0 chart(s) failed

With the template command, we can see all the files that Helm will generate and install into Kubernetes:

# Source: xoom-example/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: RELEASE-NAME-xoom-example
  labels:
    helm.sh/chart: xoom-example-0.1.0
    app.kubernetes.io/name: xoom-example
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
---
# Source: xoom-example/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: RELEASE-NAME-xoom-example
  labels:
    helm.sh/chart: xoom-example-0.1.0
    app.kubernetes.io/name: xoom-example
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 8080
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: xoom-example
    app.kubernetes.io/instance: RELEASE-NAME
---
# Source: xoom-example/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: RELEASE-NAME-xoom-example
  labels:
    helm.sh/chart: xoom-example-0.1.0
    app.kubernetes.io/name: xoom-example
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: xoom-example
      app.kubernetes.io/instance: RELEASE-NAME
  template:
    metadata:
      labels:
        app.kubernetes.io/name: xoom-example
        app.kubernetes.io/instance: RELEASE-NAME
    spec:
      serviceAccountName: RELEASE-NAME-xoom-example
      securityContext:
        {}
      containers:
        - name: xoom-example
          securityContext:
            {}
          image: "xoom-example:1.16.0"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {}
---
# Source: xoom-example/templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "RELEASE-NAME-xoom-example-test-connection"
  labels:
    helm.sh/chart: xoom-example-0.1.0
    app.kubernetes.io/name: xoom-example
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    "helm.sh/hook": test-success
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['RELEASE-NAME-xoom-example:8080']
  restartPolicy: Never

We finish the deployment step by executing the install command:

$ helm install xoom-example

A new Pod is created by Kubernetes to run the xoom-example app. Let's check that it is running:

$ kubectl get pods

NAME                           READY   STATUS    RESTARTS   AGE
xoom-example-765bf4c7b4-26z48  1/1     Running   0          64s

It is also recommended to check the application logs:

$ kubectl logs xoom-example-765bf4c7b4-26z48

Helm also supports a packaging and versioning mechanism: a set of commands that lets us package the chart structure and files so they can be shared. First, an index.yaml file should be created for the Git repository that will serve as the chart repository:

$ helm repo index chart-repo/ --url https://<username>.github.io/chart-repo

Next, the remote repository is added:

$ helm repo add chart-repo https://<username>.github.io/chart-repo

At last, the chart can be installed from the repository:

$ helm install chart-repo/hello-world --name=hello-world

More information

Find a complete code example on GitHub, built with a DDD microservices architecture, combining Kubernetes, Helm charts, and VLINGO XOOM.
