Configure Cluster Monitoring with the Operator

HTTPS Connections

Use the following instructions to configure monitoring with HTTPS connections. To use HTTP connections, skip to Configure the Exporter Process.

Create an SSL Secret

Create a Secret containing SSL certificates that will be used for HTTPS connections. The Secret must be named <cluster-name>-additional-secrets to be automatically mounted to each pod of the cluster.

Option 1: Use kubectl

Use kubectl to create the Secret.

kubectl create secret generic <cluster-name>-additional-secrets \
--from-file=ssl-crt=<path_to_server-cert.pem> \
--from-file=ssl-key=<path_to_server-key.pem> \
--from-file=ssl-ca=<path_to_ca-cert.pem>
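
Optionally, confirm that the Secret was created before proceeding:

kubectl get secret <cluster-name>-additional-secrets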

Option 2: Declare an SSL Secret in a YAML File

The data section of the secret must have the following key/value pairs:

  • ssl-crt: The Base64-encoded server certificate

  • ssl-key: The Base64-encoded server private key

  • ssl-ca: The Base64-encoded Certificate Authority (CA) certificate
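
To produce these values, Base64-encode each PEM file without line wrapping. For example, with GNU coreutils (the file names assume the paths used in Option 1):

base64 -w 0 server-cert.pem   # ssl-crt value
base64 -w 0 server-key.pem    # ssl-key value
base64 -w 0 ca-cert.pem       # ssl-ca value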

For example:

apiVersion: v1
kind: Secret
metadata:
  name: <cluster-name>-additional-secrets
type: Opaque
data:
  ssl-ca: ...WdNQWtOQk1SWXdGQ...
  ssl-crt: ...U5wYzJOdk1ROHdEU...
  ssl-key: ...HaVBOTytQaEh2QSt...

Note: Replace <cluster-name> with your SingleStore cluster name.

Confirm that the Keys are Mounted to the Cluster

  1. Exec into the Master Aggregator (MA) pod.

kubectl exec -it node-<cluster-name>-master-0 -c node -- bash
  2. Confirm that the following files are present in the /etc/memsql/extra-secret directory.

    ssl-crt
    ssl-key
    ssl-ca
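
Alternatively, list the directory without opening an interactive shell (this assumes the same pod and container names as in step 1):

kubectl exec node-<cluster-name>-master-0 -c node -- ls /etc/memsql/extra-secret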

Refer to SSL Secure Connections for more information.

Add the Exporter SSL Args

  1. In the sdb-operator.yaml file on the Source cluster, add the following argument to the args list in the sdb-operator section.

    "--master-exporter-parameters",
    "--config.ssl-cert=/etc/memsql/extra-secret/ssl-crt
    --config.ssl-key=/etc/memsql/extra-secret/ssl-key --config.use-https --config.user=root --no-cluster-collect.info_schema.tables
    --no-cluster-collect.info_schema.tablestats
    --no-collect.info_schema.tables --no-collect.info_schema.tablestats"

    Note that this is a single --master-exporter-parameters argument and the remainder is its value. When modified, the file will resemble the following.

    If the cluster is configured to use the root user with SSL, an additional --config.ssl-ca=/etc/memsql/ssl/ca-cert.pem argument must be added to the --master-exporter-parameters value.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: sdb-operator
      labels:
        app.kubernetes.io/component: operator
    spec:
      replicas: 1
      selector:
        matchLabels:
          name: sdb-operator
      template:
        metadata:
          labels:
            name: sdb-operator
        spec:
          serviceAccountName: sdb-operator
          containers:
            - name: sdb-operator
              image: operator_image_tag
              imagePullPolicy: Always
              args: [
                # Cause the operator to merge rather than replace annotations on services
                "--merge-service-annotations",
                # Allow the process inside the container to have read/write access to the `/var/lib/memsql` volume.
                "--fs-group-id", "5555",
                "--cluster-id", "sdb-cluster",
                "--master-exporter-parameters",
                "--config.ssl-cert=/etc/memsql/extra-secret/ssl-crt --config.ssl-key=/etc/memsql/extra-secret/ssl-key --config.use-https --config.user=root --no-cluster-collect.info_schema.tables --no-cluster-collect.info_schema.tablestats --no-collect.info_schema.tables --no-collect.info_schema.tablestats" ]
              env:
                - name: WATCH_NAMESPACE
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
                - name: POD_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
                - name: OPERATOR_NAME
                  value: "sdb-operator"
  2. Apply the changes to the cluster.

    kubectl apply -f sdb-operator.yaml
  3. Confirm that the Operator pod is running.

    kubectl get pods
    sdb-operator-758ffb66c8-5sn4l      1/1     Running
  4. Run the following command to force a restart of the memsql_exporter container on the master pod.

    kubectl exec -it node-<memsql-cluster-name>-master-0 -c exporter -- /bin/sh -c "kill 1"
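
To confirm that the exporter container restarted, check its restart count, which should have incremented (a quick check using kubectl's JSONPath support; the container name exporter matches the -c flag above):

kubectl get pod node-<memsql-cluster-name>-master-0 -o jsonpath='{.status.containerStatuses[?(@.name=="exporter")].restartCount}'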

Configure the Exporter Process

The monitoring exporter should already be running in a container in the Master node Pod on the Source cluster.
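
You can confirm this by listing the containers in the master pod; the exporter container should appear alongside node:

kubectl get pod node-<cluster-name>-master-0 -o jsonpath='{.spec.containers[*].name}'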

If the Metrics and Source clusters are the same or are located in the same Kubernetes cluster (in different namespaces, for example), no further action is required, and you may skip to the next step.

If the Metrics and Source clusters are located in different Kubernetes clusters, the exporter process must be exposed outside of the cluster as a service (such as a LoadBalancer service), and this service must be accessible from all nodes in the Metrics cluster.

For example:

  1. Retrieve the ownerReferences UID.

    kubectl get svc svc-<cluster-name>-ddl -o jsonpath='{.metadata.ownerReferences}'
  2. Modify the svc-k8s-cluster-exporter.yaml file using the UID value retrieved in the above step.

    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        custom: annotations
        service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "4000"
      labels:
        app.kubernetes.io/component: master
        app.kubernetes.io/instance: <memsql-cluster-name>
        app.kubernetes.io/name: memsql-cluster
        custom: label
      name: svc-<memsql-cluster-name>-exporter
      namespace: default
      ownerReferences:
        - apiVersion: memsql.com/v1alpha1
          controller: true
          kind: MemsqlCluster
          name: <memsql-cluster-name>
          uid: <ownerReferences-UID> # Update with ownerReferences UID
    spec:
      externalTrafficPolicy: Cluster
      ipFamilies:
        - IPv4
      ipFamilyPolicy: SingleStack
      ports:
        - name: prometheus
          port: 9104
          protocol: TCP
      selector:
        app.kubernetes.io/instance: <memsql-cluster-name>
        app.kubernetes.io/name: memsql-cluster
        statefulset.kubernetes.io/pod-name: node-<memsql-cluster-name>-master-0
      sessionAffinity: None
      type: LoadBalancer
  3. Create the exporter service.

    kubectl create -f svc-k8s-cluster-exporter.yaml
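
Once the cloud provider provisions the load balancer, retrieve the service's external address (the EXTERNAL-IP column may show <pending> for a short time):

kubectl get svc svc-<memsql-cluster-name>-exporter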

Configure the Metrics Database

Determine the Hostname of the Exporter

From the previous step, if the Metrics and Source clusters are the same or are located in the same Kubernetes cluster, then <name of the master pod>.svc-<cluster name>.<namespace containing the Source cluster master pod>.svc.cluster.local can be used as the exporter hostname in this section.

However, if the Metrics and Source clusters are located in different Kubernetes clusters, then a hostname/IP address of the created service that can be reached by each node of the Metrics cluster can be used as the exporter hostname in this section.
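
In either case, you can sanity-check the chosen hostname by requesting the exporter's endpoint on port 9104 (the port from the service definition above), assuming the exporter serves the standard Prometheus /metrics path; use https:// and pass the CA certificate if the exporter was configured for HTTPS:

curl http://<exporter-hostname>:9104/metrics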

Automatically Configure the Metrics Database

Create and Apply the Tools RBAC

Either use an existing account with sufficient permissions or create a service account that can be used for running the configuration Pod.

  1. Save the following to a tools-rbac.yaml file.

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: tools
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      namespace: default
      name: tools-role
    rules:
      - apiGroups:
          - ""
        resources:
          - pods
          - services
          - namespaces
        verbs:
          - get
          - list
      - apiGroups: [ "" ]
        resources: [ "pods/exec" ]
        verbs: [ "create" ]
      - apiGroups:
          - apps
        resources:
          - statefulsets
        verbs:
          - get
          - list
      - apiGroups:
          - memsql.com
        resources:
          - '*'
        verbs:
          - get
          - list
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: tools
      namespace: default
    subjects:
      - kind: ServiceAccount
        name: tools
    roleRef:
      kind: Role
      name: tools-role
      apiGroup: rbac.authorization.k8s.io
  2. Apply the tools-rbac.yaml file to the cluster. This creates a tools service account with the required permissions.

    kubectl apply -f tools-rbac.yaml
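
To verify the binding, impersonate the new service account (this assumes the default namespace used above):

kubectl auth can-i list pods --as=system:serviceaccount:default:tools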

Create and Apply the Start Monitoring Job

Note

Existing cluster monitoring instances can be configured to collect event traces after upgrading a cluster to SingleStore v8.5 or later. Refer to Query History for more information on how to fully enable this feature.

  1. Add --collect-event-traces to your existing start-monitoring-job.yaml file.

    HTTP Connections

    [...]
    command: ["sdb-admin",
    "start-monitoring-kube",
    "--user=<database-user>",
    "--password=<database-user-password>",
    "--collect-event-traces",
    "--exporter-host=<exporter-hostname>",
    "--yes"
    <other options…>
    ]
    [...]

    HTTPS Connections

    [...]
    command: ["sdb-admin",
    "start-monitoring-kube",
    "--user=<database-user>",
    "--password=<database-user-password>",
    "--collect-event-traces",
    "--exporter-host=<exporter-hostname>",
    "--ssl-ca=/etc/memsql/extra-secret/ssl-ca",
    "--yes"
    <other options…>
    ]
    [...]
  2. Restart monitoring.

    kubectl apply -f start-monitoring-job.yaml

The following YAML creates a job that sets up the metrics database and the associated pipelines.

With Internet Access
  1. Modify the start-monitoring-job.yaml file so that it resembles the following. Note that:

    1. <database-user> must be replaced with the desired database user, such as the admin user

    2. <database-user-password> must be replaced with this database user’s password

    3. <exporter-hostname> must be replaced with the exporter hostname from the Determine the Hostname of the Exporter step

    4. <other options…> must be removed or replaced with the options available in sdb-admin start-monitoring-kube

    HTTP Connections

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: toolbox-start-monitoring
    spec:
      template:
        spec:
          serviceAccountName: tools
          containers:
            - name: toolbox-start-monitoring
              image: singlestore/tools:alma-v1.11.6-1.17.2-cc87b449d97fd7cde78fdc4621c2aec45cc9a6cb
              imagePullPolicy: IfNotPresent
              command: ["sdb-admin",
                        "start-monitoring-kube",
                        "--user=<database-user>",
                        "--password=<database-user-password>",
                        "--collect-event-traces",
                        "--exporter-host=<exporter-hostname>",
                        "--yes"
                        <other options…>
                        ]
          restartPolicy: Never
      backoffLimit: 2

    HTTPS Connections

    Update the following lines from the above definition:

    command: ["sdb-admin",
    "start-monitoring-kube",
    "--user=<database-user>",
    "--password=<database-user-password>",
    "--collect-event-traces",
    "--exporter-host=<exporter-hostname>",
    "--yes"
    <other options…>
    ]

    to:

    command: ["sdb-admin",
    "start-monitoring-kube",
    "--user=<database-user>",
    "--password=<database-user-password>",
    "--collect-event-traces",
    "--exporter-host=<exporter-hostname>",
    "--ssl-ca=/etc/memsql/extra-secret/ssl-ca",
    "--yes"
    <other options…>
    ]
  2. Run the following command to apply the changes in the start-monitoring-job.yaml file.

    kubectl apply -f start-monitoring-job.yaml

Confirm that the Start Monitoring Job is Running

Run the following command to confirm that the job has finished successfully. The output will display COMPLETIONS 1/1 for toolbox-start-monitoring.

kubectl get jobs
NAME                       COMPLETIONS   DURATION   AGE
toolbox-start-monitoring   1/1           13s        21s
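
If the job does not reach 1/1 completions, inspect the logs of its pod; the job-name label is added to the pod automatically by the Job controller:

kubectl logs -l job-name=toolbox-start-monitoring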

You may delete this job by running the following command.

kubectl delete -f start-monitoring-job.yaml

As of Kubernetes 1.23, ttlSecondsAfterFinished: <seconds> may be added to the Job spec to automatically delete the finished job after the specified number of seconds.
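
For example, a minimal sketch (the 600-second TTL is an arbitrary value):

apiVersion: batch/v1
kind: Job
metadata:
  name: toolbox-start-monitoring
spec:
  ttlSecondsAfterFinished: 600
  template:
    # ... as defined above ...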
