Author: Phil Thomson


What is Rocket.Chat?

Rocket.Chat is a free, open-source, enterprise team chat platform. It’s a great Slack alternative that comes in a variety of install and hosting options, including a free community install. That’s the option we’ll look at today.

Rocket.Chat offers a containerized deployment option, which means we can get an enterprise-grade, highly available configuration deployed quickly and easily. If you are running a container platform such as OpenShift, the powerful enterprise Kubernetes distribution, scaling and application availability become that much easier to manage.

Why a Central Chat Platform?

Communication. Something all teams struggle with and something great teams do well.

When Arctiq first engages with an organization, we sense whether there is a culture of effective communication or whether our emails are going into a black hole of an inbox, never to be seen again. A central chat platform isn’t a silver bullet for team communication, but it’s a great start. Getting ALL team members chatting in a central place where they can get updates on tasks and projects, share ideas, and ask for advice is a HUGE step in the right direction.

When we begin a client project, onboarding all team members into a chat platform is one of the first steps. For members of the project who are new to this type of collaboration, it is often an impactful experience to see how quickly things can be discussed, decided, and actioned. And beyond human-to-human discussion, a central chat platform allows teams to tap into the world of ChatOps as well, so we can get notifications about code PRs and server status, and interact with bots in more ways than you can imagine. All in a centralized and radically open space.
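As a taste of that ChatOps world, a CI job or bot can post into a channel through a Rocket.Chat incoming webhook with a single curl call. This is a sketch; the hostname, token path, and channel below are placeholders you would generate under Administration > Integrations:

```shell
# Post a message via a Rocket.Chat incoming webhook.
# URL and token are placeholders; create a real incoming webhook
# in the Rocket.Chat admin UI and paste its URL here.
curl -X POST \
  -H 'Content-Type: application/json' \
  -d '{"text": "Build #42 passed", "channel": "#devops"}' \
  https://chat.example.com/hooks/TOKEN/SECRET
```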

Why Rocket.Chat is a great choice for this:

  • Plenty of configuration options and features.
  • It’s open source; if you want to change something, pull down the code and go for it.
  • You can host your own server and data. If your organization has data residency issues, Rocket.Chat gives you full control.
  • Running on a container platform makes HA and scaling easy. Designing and testing took weeks; deploying the whole platform took minutes.
  • You have complete control of the platform, where and how you run it is your choice.

How to use Rocket.Chat on OpenShift

OpenShift templates are one simple tool for deploying Rocket.Chat onto our OpenShift platform. We can parameterize variable values in the template and pass them in as needed for each of our separate environments.
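Before processing a template, you can ask `oc` to list the parameters it exposes, which is a handy way to see what each environment file needs to supply:

```shell
# List the parameters a template accepts (name, description, default value)
oc process -f template-rocketchat-mongodb.yaml --parameters
```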

Let’s have a look at snippets of the template and focus in on a few sections:

  • MongoDB related parameters.
kind: Template
apiVersion: v1
metadata:
  name: rocket-chat-mongodb
  annotations:
    description: "MongoDB database running as a replica set"
    iconClass: "icon-nodejs"
    tags: "nodejs,mongodb,replication,instant-app"

parameters:
  - name: MONGODB_IMAGE
    displayName: "MongoDB Docker Image"
    description: "A reference to a supported MongoDB Docker image."
    value: "docker-registry.default.svc:5000/openshift/mongodb"
    required: true

  - name: MONGODB_IMAGE_TAG
    description: Name of the MongoDB tag that should be used
    displayName: MongoDB Tag
    value: "3.6"
    required: true
    
  - name: MONGODB_SERVICE_NAME
    description: Name of the MongoDB Service
    displayName: MongoDB Service Name
    value: "mongodb"
    required: true

  - name: MONGODB_REPLICAS
    description: Number of MongoDB replica pods
    displayName: MongoDB Replicas
    value: "3"
    required: true
  • OpenShift object definitions using the parameter values. Notice the anti-affinity configuration, which ensures the MongoDB pods are scheduled on separate nodes to increase availability.
# MongoDB StatefulSet
  - kind: StatefulSet
    apiVersion: apps/v1beta1
    metadata:
      name: "${MONGODB_SERVICE_NAME}"
    spec:
      serviceName: "${MONGODB_SERVICE_NAME}-internal"
      replicas: "${{MONGODB_REPLICAS}}"
      template:
        metadata:
          labels:
            name: "${MONGODB_SERVICE_NAME}"
        spec:
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                  - key: name
                    operator: In
                    values:
                    - "${MONGODB_SERVICE_NAME}"
                topologyKey: "kubernetes.io/hostname"
          containers:
            - name: mongo-container
              image: "${MONGODB_IMAGE}:${MONGODB_IMAGE_TAG}"
              ports:
                - containerPort: 27017
              args:
                - "run-mongod-replication"
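The StatefulSet above names a governing service, "${MONGODB_SERVICE_NAME}-internal", which must exist as a headless service so each MongoDB pod gets a stable DNS name for the replica set. A sketch of what that object could look like (field values mirror the template parameters; the tolerate-unready-endpoints annotation lets pods find each other before they pass readiness):

```yaml
# Sketch of the headless governing service for the MongoDB StatefulSet
  - kind: Service
    apiVersion: v1
    metadata:
      name: "${MONGODB_SERVICE_NAME}-internal"
      annotations:
        service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
    spec:
      clusterIP: None          # headless: DNS resolves to the pod IPs directly
      ports:
        - name: mongodb
          port: 27017
      selector:
        name: "${MONGODB_SERVICE_NAME}"
```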
  • One great feature of Rocket.Chat is that its configuration can be stored in environment variables, simplifying deployment across multiple environments and reducing the need for persistent volumes.
# Rocket.Chat Environment Variables
  - name: FILEUPLOAD_ENABLED
  - name: FILEUPLOAD_STORAGE_TYPE
  - name: FILEUPLOAD_PROTECTFILES
  - name: FILEUPLOAD_FILESYSTEMPATH
  - name: FILEUPLOAD_MAXFILESIZE
  - name: FILEUPLOAD_MEDIATYPEWHITELIST

objects:
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: rocketchat-config
    data:
      #Admin user
      ADMIN_USERNAME: "admin"
      ADMIN_PASS: "${ROCKETCHAT_ADMIN_PASSWORD}"
      #File Upload Settings
      OVERWRITE_SETTING_FileUpload_Enabled: "${FILEUPLOAD_ENABLED}"
      OVERWRITE_SETTING_FileUpload_Storage_Type: "${FILEUPLOAD_STORAGE_TYPE}"
      OVERWRITE_SETTING_FileUpload_ProtectFiles: "${FILEUPLOAD_PROTECTFILES}" 
      OVERWRITE_SETTING_FileUpload_FileSystemPath: "${FILEUPLOAD_FILESYSTEMPATH}"
      OVERWRITE_SETTING_FileUpload_MaxFileSize: "${FILEUPLOAD_MAXFILESIZE}"
      OVERWRITE_SETTING_FileUpload_MediaTypeWhiteList: "${FILEUPLOAD_MEDIATYPEWHITELIST}"

  - apiVersion: v1
    kind: DeploymentConfig
    spec:
      template:
        metadata:
          labels:
            app: "${APPLICATION_NAME}"
            deploymentConfig: "${APPLICATION_NAME}"
        spec:
          volumes:
            - name: rocketchat-uploads
              persistentVolumeClaim:
                claimName: "rocketchat-uploads"
          containers:
          - env:
            - name: MONGO_URL
              valueFrom:
                secretKeyRef:
                  key: mongo-url
                  name: "${MONGODB_SECRET_NAME}"
            - name: MONGO_OPLOG_URL
              valueFrom:
                secretKeyRef:
                  key: mongo-oplog-url
                  name: "${MONGODB_SECRET_NAME}"
            envFrom: 
              - configMapRef:
                  name: rocketchat-config
            image: "${ROCKETCHAT_IMAGE_REGISTRY}:${ROCKETCHAT_IMAGE_TAG}"
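The MONGO_URL and MONGO_OPLOG_URL values above come from a Secret. A sketch of what that Secret could contain for a three-member replica set; the hostnames, credentials, database names, and the replica-set name "rs0" are all illustrative:

```yaml
# Sketch of the MongoDB connection Secret (all values illustrative)
  - kind: Secret
    apiVersion: v1
    metadata:
      name: "${MONGODB_SECRET_NAME}"
    stringData:
      mongo-url: "mongodb://rocketchat:userpass@mongodb-0.mongodb-internal:27017,mongodb-1.mongodb-internal:27017,mongodb-2.mongodb-internal:27017/rocketdb?replicaSet=rs0"
      mongo-oplog-url: "mongodb://admin:adminpass@mongodb-0.mongodb-internal:27017,mongodb-1.mongodb-internal:27017,mongodb-2.mongodb-internal:27017/local?replicaSet=rs0&authSource=admin"
```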
  • An environment-specific configuration file holds all the variable information needed for the Rocket.Chat deployment.
#----- OpenShift Settings

FILE_UPLOAD_STORAGE_SIZE=5Gi
HOSTNAME_HTTPS=chat.lab.openshift.example
ROCKETCHAT_REPLICAS=3
ROCKETCHAT_MIN_HPA=3
ROCKETCHAT_MAX_HPA=5
MEMORY_REQUEST=512Mi
MEMORY_LIMIT=2Gi
CPU_REQUEST=500m
CPU_LIMIT=2
VOLUME_CAPACITY=10Gi
SC_MONGO=gluster-file
SC_FILE_UPLOAD=nfs
ROCKETCHAT_IMAGE_TAG=1.1.1

#----- Rocket.Chat App Settings

#File Upload Settings
FILEUPLOAD_ENABLED=True
FILEUPLOAD_STORAGE_TYPE=FileSystem
FILEUPLOAD_PROTECTFILES=False
FILEUPLOAD_FILESYSTEMPATH=/app/uploads

Once the configuration settings are determined, the template can be processed and the application stack deployed:

oc process -f template-rocketchat-mongodb.yaml --param-file=dev.env | oc create -f -
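Once processed, it takes a minute or two for everything to come up. A few commands to check on the rollout (the label and DeploymentConfig names are assumptions based on the template snippets above):

```shell
# Overall status of the stack
oc get statefulset,dc,pods

# The three MongoDB replicas, selected by the label from the StatefulSet
oc get pods -l name=mongodb

# Wait for the Rocket.Chat rollout to finish
# ("rocketchat" assumes APPLICATION_NAME=rocketchat)
oc rollout status dc/rocketchat
```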

High Availability

High availability is achieved through the MongoDB replicas created by the OpenShift StatefulSet and through the multiple replicas of the Rocket.Chat Node.js application. The default replica count for both is three. With this configuration, up to two Rocket.Chat pods can fail and the application remains operational; on the MongoDB side, one pod can fail while the replica set still holds the majority it needs to elect a primary. For increased redundancy the replica counts can be set higher, but only do this if you have more than three app nodes in your OpenShift cluster.
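The ROCKETCHAT_MIN_HPA and ROCKETCHAT_MAX_HPA values in the environment file suggest a HorizontalPodAutoscaler scales Rocket.Chat between three and five replicas under load. A sketch of such an object targeting the DeploymentConfig; the CPU threshold is illustrative:

```yaml
# Sketch of an HPA for the Rocket.Chat DeploymentConfig
  - kind: HorizontalPodAutoscaler
    apiVersion: autoscaling/v1
    metadata:
      name: "${APPLICATION_NAME}"
    spec:
      scaleTargetRef:
        apiVersion: apps.openshift.io/v1
        kind: DeploymentConfig
        name: "${APPLICATION_NAME}"
      minReplicas: "${{ROCKETCHAT_MIN_HPA}}"
      maxReplicas: "${{ROCKETCHAT_MAX_HPA}}"
      targetCPUUtilizationPercentage: 80   # illustrative threshold
```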

Storage

Rocket.Chat will require persistent storage for:

  • the MongoDB databases and
  • file uploads to Rocket.Chat

As we’re using a StatefulSet for the MongoDB layer, the storage is defined within the StatefulSet. Each MongoDB pod is provided a dedicated Persistent Volume with ReadWriteOnce access mode.

Though MongoDB stores quite a bit of Rocket.Chat information (settings, channel information, channel chat history), not a lot of space is required. One example from the BCDevExchange: a Rocket.Chat instance with 600+ users, 200,000+ messages, and 100+ channels uses about 1.1 GB of MongoDB storage.

  • Storage snippet from MongoDB StatefulSet.
volumeClaimTemplates:
  - metadata:
      name: mongo-data
      labels:
        name: rc-mongodb
    spec:
      accessModes: [ ReadWriteOnce ]
      storageClassName: "gluster-block"
      resources:
        requests:
          storage: "10Gi"
  • File uploads to Rocket.Chat also need persistent storage. The single Persistent Volume will be accessed by multiple Rocket.Chat pods, so a shared filesystem with ReadWriteMany access mode is required. The file retention settings you configure in Rocket.Chat will determine how large this volume needs to be.
- kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: rocketchat-uploads
  spec:
    storageClassName: gluster-file
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 20Gi
    volumeName: rocketchat-uploads

Authentication

This can be the tricky bit, depending on your organization. You’ll want to make it easy for users to authenticate to Rocket.Chat against some central authority, and Rocket.Chat has a lot of configuration options to choose from. A quick win is to point it at GitHub.
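GitHub OAuth can be pre-seeded through the same OVERWRITE_SETTING_ mechanism used in the ConfigMap earlier. The setting IDs below are assumed to match Rocket.Chat’s GitHub OAuth admin settings, and the client ID/secret are placeholders you would register as a GitHub OAuth App:

```yaml
# GitHub OAuth via environment overrides (client values are placeholders;
# setting IDs assumed from Rocket.Chat's OAuth admin settings)
      OVERWRITE_SETTING_Accounts_OAuth_Github: "true"
      OVERWRITE_SETTING_Accounts_OAuth_Github_id: "<github-oauth-client-id>"
      OVERWRITE_SETTING_Accounts_OAuth_Github_secret: "<github-oauth-client-secret>"
```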

Mongo Backups

Once the MongoDB and Rocket.Chat pods are up and running, we probably want to back up the database. Rocket.Chat stores all configuration and chat history in MongoDB, which makes restoring after a disaster quite simple.

We’ll need some storage that we can store the database backup on. A Persistent Volume that is separate from your primary storage solution and is backed up by your existing enterprise backup tooling is a good option.

Once there is a Persistent Volume in the OpenShift project, deploy a CronJob template that schedules a job to run at a defined interval. The job starts a MongoDB pod, runs mongodump against the MongoDB service, and writes the backup files to the Persistent Volume.

objects:
  - apiVersion: batch/v1beta1
    kind: CronJob
    metadata:
      name: mongodb-backup
    spec:
      schedule: "${MONGODB_BACKUP_SCHEDULE}"
      concurrencyPolicy: Forbid
      jobTemplate:             
        spec:
          template:
            spec:
              volumes:
                - name: mongodb-backup
                  persistentVolumeClaim:
                    claimName: ${MONGODB_BACKUP_VOLUME_CLAIM}
              containers:
                - name: mongodb-backup
                  image: 'docker-registry.default.svc:5000/openshift/mongodb:latest'
                  command:
                    - 'bash'
                    - '-c'
                    - >-
                      ls -rdt /var/lib/mongodb-backup/dump-* |
                      head -n -$MONGODB_BACKUP_KEEP |
                      xargs rm -rf &&
                      DIR=/var/lib/mongodb-backup/dump-`date +%F-%T` &&
                      mongodump -j 1 -u admin -p $MONGODB_ADMIN_PASSWORD --host $MONGODB_SERVICE_HOST --port $MONGODB_SERVICE_PORT --authenticationDatabase=admin --gzip --out=$DIR &&
                      echo &&
                      echo "To restore, use:" &&
                      echo "~# mongorestore -u admin -p \$MONGODB_ADMIN_PASSWORD --authenticationDatabase admin --gzip $DIR/DB_TO_RESTORE -d DB_TO_RESTORE_INTO"
  • To load the CronJob template into your project:
oc process -f mongodb-backup-template.yaml MONGODB_ADMIN_PASSWORD=adminpass MONGODB_BACKUP_VOLUME_CLAIM=nfs-pvc MONGODB_BACKUP_KEEP=7 MONGODB_BACKUP_SCHEDULE='1 0 * * *' | oc create -f -

Restore Backups

To restore the database, start another MongoDB instance and then copy or mount the backup files to this instance and issue this restore command.

mongorestore -u admin -p $MONGODB_ADMIN_PASSWORD --authenticationDatabase admin --gzip $DIR/DB_TO_RESTORE -d DB_TO_RESTORE_INTO

Wrapping Up

This tool has been deployed within one of our customer environments that promotes the extensive use of Open Source technologies and loves to contribute to the community in any way they can. The BCDevExchange proudly hosts all of its code on GitHub and welcomes comments, feedback, and support. The code used for their Rocket.Chat deployment is hosted in the BCDevOps org and can be found in the platform-services repo.

Interested in learning more about deploying scalable applications on an OpenShift platform? Take the first step and give us a shout or comment below.

Thanks!!

Cheers,

Phil
