Configure Audit Logging
To enable audit logging, configure the audit log settings in the `globalVariables` section of your cluster custom resource (CR).

Add the audit log configuration to your `sdb-cluster.yaml` file:
```yaml
apiVersion: memsql.com/v1alpha1
kind: MemsqlCluster
metadata:
  name: sdb-cluster
spec:
  license: <license_key>
  adminHashedPassword: "<hashed_password>"
  globalVariables:
    auditlog_level: ADMIN-ONLY
    auditlog_disk_sync: "OFF"
    auditlog_rotation_size: 134217728
    auditlog_rotation_time: 3600
  nodeImage:
    repository: singlestore/node
    tag: alma-8.7.10-28804d3b1b
  redundancyLevel: 2
  aggregatorSpec:
    count: 2
    cores: 8
    memoryMB: 32768
    storageGB: 256
  leafSpec:
    count: 2
    cores: 8
    memoryMB: 32768
    storageGB: 512
```
Apply the configuration:
```shell
kubectl apply -f sdb-cluster.yaml
```
You can configure the following audit log variables:

| Variable | Description | Default | Example |
|---|---|---|---|
| `auditlog_level` | Audit logging level. | `OFF` | `ADMIN-ONLY` |
| `auditlog_disk_sync` | Sync to disk after each write. | `OFF` | `OFF` |
| `auditlog_retention_period` | Retention period (in days) for audit log files. | | |
| `auditlog_rotation_size` | Maximum log file size, in bytes. | `134217728` | `134217728` |
| `auditlog_rotation_time` | Maximum time, in seconds, before rotation. | `3600` | `3600` |
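The rotation values above are easier to read in familiar units. This quick shell calculation (illustrative only, not part of the cluster configuration) converts the example settings:

```shell
# Convert the example audit log rotation settings to familiar units.
ROTATION_SIZE_BYTES=134217728
ROTATION_TIME_SECONDS=3600

# 134217728 bytes -> MiB
echo "rotation size: $((ROTATION_SIZE_BYTES / 1024 / 1024)) MiB"

# 3600 seconds -> hours
echo "rotation time: $((ROTATION_TIME_SECONDS / 3600)) hour(s)"
```

With these example settings, audit log files rotate at 128 MiB or after one hour.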
Connect to your cluster and verify the settings:

```sql
SHOW GLOBAL VARIABLES LIKE 'audit%';
```
You can collect audit logs from your Kubernetes cluster using a Kubernetes Job. To automate report collection and upload to your storage backend, follow these steps:
1. Create a storage credentials secret (one-time setup).

   For S3-compatible storage:

   ```shell
   kubectl create secret generic storage-credentials \
     --from-literal=access-key-id=YOUR_ACCESS_KEY \
     --from-literal=secret-access-key=YOUR_SECRET_KEY
   ```

   For other storage backends, create the appropriate secrets for your authentication method.
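If you prefer declarative manifests, the same credentials can be expressed as a Secret resource. This is a sketch: `stringData` (which avoids manual base64 encoding) is standard Kubernetes, and the secret name must match the `secretKeyRef` name that the collection Job references (`storage-credentials`):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: storage-credentials
type: Opaque
stringData:
  access-key-id: YOUR_ACCESS_KEY
  secret-access-key: YOUR_SECRET_KEY
```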
2. Create `cluster-collection-job.yaml`:

   ```yaml
   apiVersion: batch/v1
   kind: Job
   metadata:
     name: singlestore-report-collection
   spec:
     template:
       spec:
         serviceAccountName: tools
         containers:
         - name: report-collector
           image: singlestore/tools:latest
           command: ["/bin/bash", "-c"]
           args:
           - |
             # Collect the cluster report
             sdb-report collect-kube --cluster-name sdb-cluster --namespace default --output-path /tmp/report
             REPORT_FILE=$(ls -t /tmp/report/*.tar.gz | head -1)
             # Upload to S3-compatible object storage (e.g., MinIO)
             aws s3 cp $REPORT_FILE s3://${BUCKET_NAME}/cluster-reports/ --endpoint-url ${S3_ENDPOINT}
             # On-premises storage options:
             # - NFS: cp $REPORT_FILE /mnt/nfs/cluster-reports/
             # - Local PV: cp $REPORT_FILE /mnt/storage/cluster-reports/
           env:
           # S3-compatible storage configuration (e.g., MinIO)
           - name: AWS_ACCESS_KEY_ID
             valueFrom:
               secretKeyRef:
                 name: storage-credentials
                 key: access-key-id
           - name: AWS_SECRET_ACCESS_KEY
             valueFrom:
               secretKeyRef:
                 name: storage-credentials
                 key: secret-access-key
           - name: BUCKET_NAME
             value: "your-bucket-name"
           - name: S3_ENDPOINT
             value: "http://minio:9000"
           # Optional: mount on-premises storage
           # volumeMounts:
           # - name: nfs-storage
           #   mountPath: /mnt/nfs
           # - name: local-storage
           #   mountPath: /mnt/storage
         restartPolicy: Never
         # Optional: define on-premises volumes
         # volumes:
         # - name: nfs-storage
         #   nfs:
         #     server: your-nfs-server
         #     path: /path/to/storage
         # - name: local-storage
         #   hostPath:
         #     path: /path/to/local/storage
         #     type: Directory
     backoffLimit: 3
   ```
3. Run the job:

   ```shell
   kubectl apply -f cluster-collection-job.yaml
   ```
4. Check progress and view the logs:

   ```shell
   # Check job status
   kubectl get jobs

   # View logs
   kubectl logs job/singlestore-report-collection

   # Verify the upload to external storage.
   # For S3-compatible object storage (for example, MinIO):
   aws s3 ls s3://<your-bucket>/cluster-reports/ --endpoint-url <your-endpoint>

   # For NFS or local persistent storage (from a mounted node or pod):
   ls /mnt/nfs/cluster-reports/
   ```
5. Clean up (optional):

   ```shell
   kubectl delete job singlestore-report-collection
   ```
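To run the collection on a schedule instead of on demand, the same pod template can be wrapped in a CronJob. This is a sketch under the assumption that you reuse the `command`, `args`, and `env` from the Job manifest above; the schedule shown is only an example:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: singlestore-report-collection
spec:
  schedule: "0 2 * * *"  # example: daily at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: tools
          containers:
          - name: report-collector
            image: singlestore/tools:latest
            # ... same command, args, and env as the Job above ...
          restartPolicy: Never
      backoffLimit: 3
```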
Note: Ensure the `tools` service account has the required RBAC permissions.
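The exact permissions depend on what `sdb-report collect-kube` inspects in your environment. As a rough sketch only (the resource and verb lists here are assumptions to adapt, not a verified minimum), a namespaced Role and RoleBinding for the `tools` service account might look like:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tools-report-collector
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log", "pods/exec", "services", "events"]
  verbs: ["get", "list", "create"]
- apiGroups: ["memsql.com"]
  resources: ["memsqlclusters"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tools-report-collector
subjects:
- kind: ServiceAccount
  name: tools
roleRef:
  kind: Role
  name: tools-report-collector
  apiGroup: rbac.authorization.k8s.io
```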