Getting Started

A guide to install, set up, and manage Logging Operator

1 - Installation

Logging Operator installation and upgrade guide

Logging Operator is based on the CRD (Custom Resource Definition) framework of Kubernetes; for more information about the CRD framework, please refer to the official documentation. In a nutshell, CRDs are a feature through which we can develop our own custom APIs inside Kubernetes.

The APIs available for Logging Operator are:-

  • ElasticSearch
  • Fluentd
  • Kibana

Logging Operator requires a Kubernetes cluster of version >=1.16.0. If you have just started with CRDs and operators, it's highly recommended to use the latest version of Kubernetes.
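
A quick way to confirm that the cluster meets this requirement, assuming kubectl is already configured against the target cluster:

# Check the Kubernetes server version (must be >= 1.16.0)
$ kubectl version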

Setup of the Logging Operator can easily be done by using simple helm and kubectl commands.

Setup using Helm tool

The setup can be done by using helm; the logging-operator can easily be installed using the helm commands below.

# Add the helm chart
$ helm repo add ot-helm https://ot-container-kit.github.io/helm-charts/
...
"ot-helm" has been added to your repositories
# Deploy the Logging Operator
$ helm upgrade logging-operator ot-helm/logging-operator \
  --install --namespace ot-operators
...
Release "logging-operator" does not exist. Installing it now.
NAME:          logging-operator
LAST DEPLOYED: Sun May 29 01:06:58 2022
NAMESPACE:     ot-operators
STATUS:        deployed
REVISION:      1

After the deployment, verify the installation of the operator.

# Testing Operator
$ helm test logging-operator --namespace ot-operators
...
NAME:           logging-operator
LAST DEPLOYED:  Sun May 29 01:06:58 2022
NAMESPACE:      ot-operators
STATUS:         deployed
REVISION:       1
TEST SUITE:     logging-operator-test-connection
Last Started:   Sun May 29 01:07:56 2022
Last Completed: Sun May 29 01:08:02 2022
Phase:          Succeeded

Verify the deployment of the Logging Operator using the kubectl command.

# List the pod and status of logging-operator
$ kubectl get pods -n ot-operators -l name=logging-operator
...
NAME                               READY   STATUS    RESTARTS   AGE
logging-operator-fc88b45b5-8rmtj   1/1     Running   0          21d

Setup using Kubectl

In case using the helm chart is not a possibility, the Logging Operator can be installed with kubectl commands as well.

As a first step, we need to set up a namespace and then deploy the CRD definitions inside Kubernetes.
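
A minimal example of creating the namespace, assuming the same ot-operators namespace used in the Helm-based setup:

# Create the namespace for the operator
$ kubectl create namespace ot-operators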

# Setup of CRDs
$ kubectl apply -f https://raw.githubusercontent.com/OT-CONTAINER-KIT/logging-operator/master/config/crd/bases/logging.logging.opstreelabs.in_elasticsearches.yaml
$ kubectl apply -f https://raw.githubusercontent.com/OT-CONTAINER-KIT/logging-operator/master/config/crd/bases/logging.logging.opstreelabs.in_fluentds.yaml
$ kubectl apply -f https://raw.githubusercontent.com/OT-CONTAINER-KIT/logging-operator/master/config/crd/bases/logging.logging.opstreelabs.in_kibanas.yaml
$ kubectl apply -f https://github.com/OT-CONTAINER-KIT/logging-operator/raw/master/config/crd/bases/logging.logging.opstreelabs.in_indextemplates.yaml
$ kubectl apply -f https://github.com/OT-CONTAINER-KIT/logging-operator/raw/master/config/crd/bases/logging.logging.opstreelabs.in_indexlifecycles.yaml
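
Once the CRDs are applied, their registration can be confirmed by listing them; the group name below is taken from the manifest URLs above:

# Verify that the CRDs are registered
$ kubectl get crds | grep opstreelabs.in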

Once the namespace and CRDs are in place, we need to set up the RBAC resources:- ServiceAccount, ClusterRole, and ClusterRoleBinding.

# Setup of RBAC account
$ kubectl apply -f https://raw.githubusercontent.com/OT-CONTAINER-KIT/logging-operator/main/config/rbac/service_account.yaml
$ kubectl apply -f https://raw.githubusercontent.com/OT-CONTAINER-KIT/logging-operator/main/config/rbac/role.yaml
$ kubectl apply -f https://raw.githubusercontent.com/OT-CONTAINER-KIT/logging-operator/main/config/rbac/role_binding.yaml

As the last part of the setup, we can now deploy the Logging Operator as a Kubernetes Deployment.

# Deployment for Logging Operator
$ kubectl apply -f https://github.com/OT-CONTAINER-KIT/logging-operator/raw/main/config/manager/manager.yaml

Verify the deployment of the Logging Operator using the kubectl command.

# List the pod and status of logging-operator
$ kubectl get pods -n ot-operators -l name=logging-operator
...
NAME                               READY   STATUS    RESTARTS   AGE
logging-operator-fc88b45b5-8rmtj   1/1     Running   0          21d
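
If the pod is not in a Running state, the operator logs are the first place to look; this assumes the Deployment created by the manifest is named logging-operator:

# Check the operator logs
$ kubectl logs deployment/logging-operator -n ot-operators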

2 - Elasticsearch Setup

Elasticsearch setup and management using logging operator

The operator is capable of setting up an Elasticsearch cluster with all the best practices in terms of security, performance, and reliability.

There are different types of Elasticsearch nodes supported by this operator:-

  • Master Node: A node that has the master role (default), which makes it eligible to be elected as the master node that controls the cluster.
  • Data Node: A node that has the data role (default). Data nodes hold data and perform data-related operations such as CRUD, search, and aggregations.
  • Ingestion Node: A node that has the ingest role (default). Ingest nodes are able to apply an ingest pipeline to a document in order to transform and enrich the document before indexing. With a heavy ingest load, it makes sense to use dedicated ingest nodes and to not include the ingest role on nodes that have the master or data roles.
  • Client or Coordinator Node: Requests like search requests or bulk-indexing requests may involve data held on different data nodes. A search request, for example, is executed in two phases, which are coordinated by the node that receives the client request, known as the coordinating node.

There are a few additional functionalities supported in the Elasticsearch CRD:-

  • TLS support and xpack support
  • Multi node cluster setup - master, data, ingestion, client
  • Custom configuration for each type of elasticsearch node

Setup using Helm (Deployment Tool)

Add the helm repository, so that the Elasticsearch chart is available for installation. The repository can be added by:-

# Adding helm repository
$ helm repo add ot-helm https://ot-container-kit.github.io/helm-charts/
...
"ot-helm" has been added to your repositories

Once the repository is added, make sure it has been updated with the latest information.

# Updating ot-helm repository
$ helm repo update

Once all these steps are complete, we can install the Elasticsearch cluster by using:-

# Install the helm chart of Elasticsearch
$ helm install elasticsearch ot-helm/elasticsearch --namespace ot-operators \
  --set esMaster.storage.storageClass=do-block-storage \
  --set esData.storage.storageClass=do-block-storage
...
NAME: elasticsearch
LAST DEPLOYED: Mon Jun  6 15:06:45 2022
NAMESPACE:     ot-operators
STATUS:        deployed
REVISION:      1
TEST SUITE:    None
NOTES:
  CHART NAME:    elasticsearch
  CHART VERSION: 0.3.1
  APP VERSION:   0.3.0

The helm chart for Elasticsearch setup has been deployed.

Get the list of pods by executing:
    kubectl get pods --namespace ot-operators -l 'role in (master,data,ingestion,client)'

For getting the credential for admin user:
    kubectl get secrets -n ot-operators elasticsearch-password -o jsonpath="{.data.password}" | base64 -d
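
The do-block-storage storage class used in the install command above is specific to DigitalOcean; list the storage classes available in your own cluster and substitute the values accordingly:

# List the available storage classes
$ kubectl get storageclass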

Verify the pod status and secret value by using:-

# Verify the status of the pods
$ kubectl get pods --namespace ot-operators -l 'role in (master,data,ingestion,client)'
...
NAME                     READY   STATUS    RESTARTS   AGE
elasticsearch-data-0     1/1     Running   0          77s
elasticsearch-data-1     1/1     Running   0          77s
elasticsearch-data-2     1/1     Running   0          77s
elasticsearch-master-0   1/1     Running   0          77s
elasticsearch-master-1   1/1     Running   0          77s
elasticsearch-master-2   1/1     Running   0          77s
# Verify the secret value
$ kubectl get secrets -n ot-operators elasticsearch-password -o jsonpath="{.data.password}" | base64 -d
...
EuDyr4A105EjqaNW

The Elasticsearch cluster can be listed and verified using the kubectl CLI as well.

$ kubectl get elasticsearch -n ot-operators
...
NAME            VERSION   STATE   SHARDS   INDICES
elasticsearch   7.17.0    green   2        2
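
For more detail than the summary columns above, the custom resource can also be described:

# Describe the elasticsearch custom resource
$ kubectl describe elasticsearch elasticsearch -n ot-operators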

Setup by Kubectl (Kubernetes CLI)

This is not the recommended way of setting up the Elasticsearch cluster; it can be used for POC and learning of the Logging Operator deployment.

All the kubectl-related manifests are located inside the examples folder and can be applied using kubectl apply -f.

For example:-

$ kubectl apply -f examples/elasticsearch/basic-cluster/basic-elastic.yaml -n ot-operators
...
elasticsearch/elasticsearch is created
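
The fields accepted by the Elasticsearch custom resource can be inspected directly from the cluster; this assumes the installed CRD publishes a structural schema, which is standard for operators built with recent operator-sdk/kubebuilder versions:

# Inspect the schema of the Elasticsearch custom resource
$ kubectl explain elasticsearch --recursive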

Validation of Elasticsearch

To validate the state of the Elasticsearch cluster, we can get shell access to the Elasticsearch pod and verify the Elasticsearch version and details using the curl command.

# Verify endpoint of elasticsearch
$ export ELASTIC_PASSWORD=$(kubectl get secrets -n ot-operators \
  elasticsearch-password -o jsonpath="{.data.password}" | base64 -d)

$ kubectl exec -it elasticsearch-master-0 -c elastic -n ot-operators \
  -- curl -u elastic:$ELASTIC_PASSWORD -k https://localhost:9200
...
{
  "name" : "elasticsearch-master-0",
  "cluster_name" : "elastic-prod",
  "cluster_uuid" : "vPtAZQt9SEWsl8NSfNVYzw",
  "version" : {
    "number" : "7.17.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "bee86328705acaa9a6daede7140defd4d9ec56bd",
    "build_date" : "2022-01-28T08:36:04.875279988Z",
    "build_snapshot" : false,
    "lucene_version" : "8.11.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Node status and health

Once the version details are verified, we can list the nodes connected to the Elasticsearch cluster and their health status. Also, we can verify the health status of the complete Elasticsearch cluster.

# Cluster health of elasticsearch cluster
$ kubectl exec -it elasticsearch-master-0 -c elastic -n ot-operators \
  -- curl -u elastic:$ELASTIC_PASSWORD -k https://localhost:9200/_cluster/health
...
{
  "cluster_name": "elastic-prod",
  "status": "green",
  "timed_out": false,
  "number_of_nodes": 6,
  "number_of_data_nodes": 3,
  "active_primary_shards": 1,
  "active_shards": 2,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 0,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 100
}
# Node status of elasticsearch
$ kubectl exec -it elasticsearch-master-0 -c elastic -n ot-operators \
  -- curl -u elastic:$ELASTIC_PASSWORD -k https://localhost:9200/_cat/nodes
...
ip           heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
10.244.1.69            54          19   0    0.00    0.00     0.01 m         -      elasticsearch-master-2
10.244.0.82            43          20   0    0.00    0.00     0.00 d         -      elasticsearch-data-2
10.244.0.150           28          19   0    0.00    0.12     0.12 d         -      elasticsearch-data-1
10.244.0.13            57          19   1    0.00    0.00     0.00 m         -      elasticsearch-master-0
10.244.1.72            13          20   0    0.00    0.00     0.01 d         -      elasticsearch-data-0
10.244.0.161           61          20   2    0.00    0.12     0.12 m         *      elasticsearch-master-1

3 - Fluentd Setup

Fluentd setup and management using logging operator

The operator is capable of setting up Fluentd as a log shipper to trace, collect, and ship logs to the Elasticsearch cluster. There are a few additional functionalities added to this CRD:-

  • Namespace and application name based indexes
  • Custom and additional configuration support
  • TLS and authentication support

Setup using Helm (Deployment Tool)

Add the helm repository, so that the Fluentd chart is available for installation. The repository can be added by:-

# Adding helm repository
$ helm repo add ot-helm https://ot-container-kit.github.io/helm-charts/
...
"ot-helm" has been added to your repositories

Once the repository is added, make sure it has been updated with the latest information.

# Updating ot-helm repository
$ helm repo update

Once all these steps are complete, we can install Fluentd by using:-

# Install the helm chart of Fluentd
$ helm install fluentd ot-helm/fluentd --namespace ot-operators
...
NAME:          fluentd
LAST DEPLOYED: Mon Jun  6 19:37:11 2022
NAMESPACE:     ot-operators
STATUS:        deployed
REVISION:      1
TEST SUITE:    None
NOTES:
  CHART NAME:    fluentd
  CHART VERSION: 0.3.0
  APP VERSION:   0.3.0

The helm chart for Fluentd setup has been deployed.

Get the list of pods by executing:
    kubectl get pods --namespace ot-operators -l 'app=fluentd'

For getting the credential for admin user:
    kubectl get fluentd fluentd -n ot-operators

Verify the pod status by using:-

# Verify the status of the pods
$ kubectl get pods --namespace ot-operators -l 'app=fluentd'
...
NAME            READY   STATUS    RESTARTS   AGE
fluentd-7w48q   1/1     Running   0          3m9s
fluentd-dgcwx   1/1     Running   0          3m9s
fluentd-kq52c   1/1     Running   0          3m9s

The Fluentd daemonset can be listed and verified using the kubectl CLI as well.

$ kubectl get fluentd -n ot-operators
...
NAME      ELASTICSEARCH HOST     TOTAL AGENTS
fluentd   elasticsearch-master   3
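
The underlying Kubernetes DaemonSet created by the operator can be listed as well, to cross-check the total agent count:

# List the daemonsets in the operator namespace
$ kubectl get daemonset -n ot-operators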

Setup by Kubectl (Kubernetes CLI)

This is not the recommended way of setting up Fluentd; it can be used for POC and learning of the Logging Operator deployment.

All the kubectl-related manifests are located inside the examples folder and can be applied using kubectl apply -f.

For example:-

$ kubectl apply -f examples/fluentd/basic/fluentd.yaml -n ot-operators
...
fluentd/fluentd is created

Validation of Fluentd

To validate the state of Fluentd, we can check the logs of the Fluentd pods managed by the daemonset.

# Validation of fluentd logs
$ kubectl logs fluentd-7w48q -n ot-operators
...
2022-06-06 14:07:28 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/fluentd-7w48q_ot-operators_fluentd-f49b48f7f447d05139819861b8b17c30e2bf2de094e25e23d1e9c5a274fd3d7e.log
2022-06-06 14:07:28 +0000 [info]: #0 fluentd worker is now running worker=0
2022-06-06 14:07:54 +0000 [info]: #0 [filter_kube_metadata] stats - namespace_cache_size: 5, pod_cache_size: 32, namespace_cache_api_updates: 16, pod_cache_api_updates: 16, id_cache_miss: 16, pod_cache_host_updates: 32, namespace_cache_host_updates: 5

Also, we can list the indices using the curl command from the Elasticsearch pod/container. If indices are available inside Elasticsearch, that means Fluentd is shipping the logs to Elasticsearch without any issues.

$ export ELASTIC_PASSWORD=$(kubectl get secrets -n ot-operators \
  elasticsearch-password -o jsonpath="{.data.password}" | base64 -d)

$ kubectl exec -it elasticsearch-master-0 -c elastic -n ot-operators \
  -- curl -u elastic:$ELASTIC_PASSWORD -k "https://localhost:9200/_cat/indices?v"
...
health status index                              uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .geoip_databases                   _GEkcekFSr2KY1Z4jFmWRQ   1   1         40            0     76.4mb         38.2mb
green  open   kubernetes-ot-operators-2022.06.06 QlS_dyjzQ8qIXQi2PgpABA   1   1      20665            0      7.9mb          3.9mb
green  open   kubernetes-kube-system-2022.06.06  vWQ5IzoHQWW9zl8bQk0jlw   1   1      12006            0        7mb          4.3mb
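
As a further check, a single document can be fetched from one of the kubernetes-* indices (index pattern taken from the listing above) to confirm that log events are queryable:

# Fetch one indexed log document from the namespace-based indices
$ kubectl exec -it elasticsearch-master-0 -c elastic -n ot-operators \
  -- curl -u elastic:$ELASTIC_PASSWORD -k "https://localhost:9200/kubernetes-*/_search?size=1&pretty"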

4 - Kibana Setup

Kibana setup and management using logging operator

The operator is capable of setting up Kibana as a visualization and dashboard tool for the Elasticsearch cluster. There are a few additional functionalities added to this CRD as well.

Setup using Helm (Deployment Tool)

Add the helm repository, so that the Kibana chart is available for installation. The repository can be added by:-

# Adding helm repository
$ helm repo add ot-helm https://ot-container-kit.github.io/helm-charts/
...
"ot-helm" has been added to your repositories

Once the repository is added, make sure it has been updated with the latest information.

# Updating ot-helm repository
$ helm repo update

Once all these steps are complete, we can install Kibana by using:-

# Install the helm chart of Kibana
$ helm upgrade kibana ot-helm/kibana --install --namespace ot-operators
...
NAME: kibana
LAST DEPLOYED: Sat Aug  6 23:51:28 2022
NAMESPACE: ot-operators
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
  CHART NAME: kibana
  CHART VERSION: 0.3.2
  APP VERSION: 0.3.0

The helm chart for Kibana setup has been deployed.

Get the list of pods by executing:
    kubectl get pods --namespace ot-operators -l 'app=kibana'

For getting the credential for admin user:
    kubectl get kibana kibana -n ot-operators

Verify the pod status by using:-

# Verify the status of the pods
$ kubectl get pods --namespace ot-operators -l 'app=kibana'
...
NAME                      READY   STATUS    RESTARTS   AGE
kibana-7b649df777-nkr2p   1/1     Running   0          3m27s

The Kibana deployment can be listed and verified using the kubectl CLI as well.

$ kubectl get kibana -n ot-operators
NAME     VERSION   ES CLUSTER
kibana   7.17.0    elasticsearch

Setup by Kubectl (Kubernetes CLI)

This is not the recommended way of setting up Kibana; it can be used for POC and learning of the Logging Operator deployment.

All the kubectl-related manifests are located inside the examples folder and can be applied using kubectl apply -f.

For example:-

$ kubectl apply -f examples/kibana/basic/kibana.yaml -n ot-operators
...
kibana.logging.logging.opstreelabs.in/kibana is created

Validation of Kibana

To validate the state of Kibana, we can check the logs of the Kibana pods managed by the deployment.

# Validation of kibana logs
$ kubectl logs kibana-7bc5cd8747-pgtzc -n ot-operators
...
{"type":"log","@timestamp":"2022-08-06T18:22:04+00:00","tags":["info","plugins-service"],"pid":8,"message":"Plugin \"metricsEntities\" is disabled."}
{"type":"log","@timestamp":"2022-08-06T18:22:04+00:00","tags":["info","http","server","Preboot"],"pid":8,"message":"http server running at http://0.0.0.0:5601"}
{"type":"log","@timestamp":"2022-08-06T18:22:04+00:00","tags":["warning","config","deprecation"],"pid":8,"message":"Starting in 8.0, the Kibana logging format will be changing. This may affect you if you are doing any special handling of your Kibana logs, such as ingesting logs into Elasticsearch for further analysis. If you are using the new logging configuration, you are already receiving logs in both old and new formats, and the old format will simply be going away. If you are not yet using the new logging configuration, the log format will change upon upgrade to 8.0. Beginning in 8.0, the format of JSON logs will be ECS-compatible JSON, and the default pattern log format will be configurable with our new logging system. Please refer to the documentation for more information about the new logging format."}

Also, the UI can be accessed on port 5601 for validation.
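
For a quick local check, the Kibana service can be port-forwarded to a workstation; the service name kibana below is an assumption based on the resource names above:

# Forward the Kibana UI to localhost:5601
$ kubectl port-forward svc/kibana 5601:5601 -n ot-operators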