Deployment Using YAML File - Tarball
Introduction
Installing SingleStore on bare metal, on virtual machines, or in the cloud can be done through the use of popular configuration management tools or through SingleStore’s management tools.
In this guide, you will deploy a SingleStore cluster onto physical or virtual machines and connect to the cluster using a SQL client.
A four-node cluster is the minimal recommended cluster size for showcasing SingleStore as a distributed database with high availability; however, you can use the procedures in this tutorial to scale out to additional nodes for increased performance over large data sets or to handle higher concurrency loads.
Note
There are no licensing costs for using up to four license units for the leaf nodes in your cluster.
Prerequisites
For this tutorial you will need:
- One (for a single-host cluster-in-a-box for development) or four physical or virtual machines (hosts) with the following:
  - Each SingleStore node requires at least four (4) x86_64 CPU cores and eight (8) GB of RAM per host
  - Eight (8) vCPUs and 32 GB of RAM are recommended for leaf nodes to align with license unit calculations
  - Running a 64-bit version of RHEL/AlmaLinux 7 or later, or Debian 8 or later, with kernel 3.10 or later. For SingleStore 8.1 or later, glibc 2.17 or later is also required.
  - Port 3306 open on all hosts for intra-cluster communication. Based on the deployment method, this default can be changed either from the command line or via the cluster file.
  - Port 8080 open on the main deployment host for the cluster
- A non-root user with sudo privileges available on all hosts in the cluster that can be used to run SingleStore services and own the corresponding runtime state
- SSH access to all hosts
  - Installing and using ssh-agent is recommended for SSH keys with passwords. Refer to ssh-agent and ssh-add and Use ssh-agent to Manage Private Keys for more information.
  - If your environment does not support the use of ssh-agent, make sure the identity key used on the main deployment host can be used to log in to each host in the cluster. Refer to How to Setup Passwordless SSH Login for more information.
- A connection to the Internet to download required packages
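For SSH keys protected by a passphrase, a typical ssh-agent session on the main deployment host can be sketched as follows; the key path is an assumed example and should be adjusted to your own deployment key.

```bash
# Start an SSH agent for this shell session
eval "$(ssh-agent -s)"

# Add your passphrase-protected deployment key (example path; adjust as needed)
ssh-add ~/.ssh/id_rsa

# Verify the key has been loaded into the agent
ssh-add -l
```

Once the key is loaded, Toolbox commands run from this session can reach each host without prompting for the passphrase again.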
If running this in a production environment, it is highly recommended that you follow our host configuration recommendations for optimal cluster performance.
Duplicate Hosts

As of SingleStore Toolbox 1., host checks will fail with a message similar to the following when two hosts present identical SSH host keys, as Toolbox does not support registering the same host twice:

✘ Host check failed. host 172.26.212.166 has the same ssh
host keys as 172.16.212.165, toolbox doesn't support
registering the same host twice

Confirm that all specified hosts are indeed different and aren't using identical SSH host keys. When a host is cloned, its SSH host keys (stored under /etc/ssh/) will also be cloned.
As each cloned host will have the same host key, an SSH client cannot verify that it is connecting to the intended host.
The following steps demonstrate a potential remedy for the duplicate hosts message.

sudo root
ls -al /etc/ssh/
rm /etc/ssh/<your-ssh-host-keys>
ssh-keygen -f /etc/ssh/<ssh-host-key-filename> -N '' -t rsa1
ssh-keygen -f /etc/ssh/<ssh-host-rsa-key-filename> -N '' -t rsa
ssh-keygen -f /etc/ssh/<ssh-host-dsa-key-filename> -N '' -t dsa
For more information about SSH host keys, including the equivalent steps for Ubuntu-based systems, refer to Avoid Duplicating SSH Host Keys.
As of SingleStore Toolbox 1., sdb-deploy setup-cluster supports an --allow-duplicate-host-fingerprints option that can be used to ignore duplicate SSH host keys.
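Before deploying, you can check for cloned host keys yourself by collecting each host's public host key and comparing checksums. This is an illustrative sketch: the hostkeys/ directory and the key contents below are hypothetical stand-ins for files copied from /etc/ssh/ on each host.

```shell
# Gather each host's public host key into hostkeys/<host>.pub (simulated here
# with two identical placeholder keys, as produced by cloning a host)
mkdir -p hostkeys
echo "ssh-rsa AAAAB3...exampleKey" > hostkeys/172.16.212.165.pub
echo "ssh-rsa AAAAB3...exampleKey" > hostkeys/172.26.212.166.pub

# A checksum that appears more than once indicates a duplicated (cloned) host key
sha256sum hostkeys/*.pub | awk '{print $1}' | sort | uniq -d
```

If the final command prints anything, the corresponding hosts share a host key and should be re-keyed before running setup-cluster.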
Network Configuration
Depending on the host and its function in deployment, some or all of the following port settings should be enabled on hosts in your cluster.
These routing and firewall settings must be configured to:
- Allow database clients (e.g. your application) to connect to the SingleStore aggregators
- Allow all nodes in the cluster to talk to each other over the SingleStore protocol (3306)
- Allow you to connect to management and monitoring tools
| Protocol | Default Port | Direction | Description |
|---|---|---|---|
| TCP | 22 | Inbound and Outbound | For host access. |
| TCP | 443 | Outbound | To get the public repo key for package verification. |
| TCP | 3306 | Inbound and Outbound | Default port used by SingleStore. |
The service port values are configurable if the default values cannot be used in your deployment environment. They can be changed via either of the following:

- The cluster file template provided in this guide
- The sdb-toolbox-config register-host command
We also highly recommend configuring your firewall to prevent other hosts on the Internet from connecting to SingleStore.
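As an illustration, on RHEL-family hosts running firewalld the ports above could be opened as follows. These commands require root and assume the default firewall zone; adjust them for your own firewall tooling.

```bash
# Allow SSH (22) and the SingleStore protocol port (3306) inbound on all hosts
sudo firewall-cmd --permanent --add-port=22/tcp
sudo firewall-cmd --permanent --add-port=3306/tcp

# Port 8080 inbound on the main deployment host only
sudo firewall-cmd --permanent --add-port=8080/tcp

# Apply the changes
sudo firewall-cmd --reload
```

Outbound HTTPS (443) is typically allowed by default; confirm this if your environment restricts egress traffic.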
Install SingleStore Tools
The first step in deploying your cluster is to download and install the SingleStore Tools on one of the hosts in your cluster.
These tools perform all major cluster operations including downloading the latest version of SingleStore onto your hosts, assigning and configuring nodes in your cluster, and other management operations.
Installation - Tarball
Download SingleStore Files
Download the singlestoredb-toolbox, singlestore-client, and singlestoredb-server files onto the main deployment host, or onto a device with access to the main deployment host.
To obtain the latest version of each file, use the following:
curl https://release.memsql.com/production/index/<singlestore-file>/latest.json
Replace <singlestore-file> with memsqltoolbox, memsqlclient, and singlestoredbserver to download the list of available file types.
- To download the latest patch release of a major version, substitute the desired major version for latest. For example: curl https://release.memsql.com/production/index/singlestoredbserver/8.7.json
- To download a specific patch release of a major version, substitute the desired patch release for latest. For example: curl https://release.memsql.com/production/index/singlestoredbserver/8.7.4.json
The JSON you receive contains relative file paths in the following format:
"Path": "production/tar/x86_64/<singlestore-file>-<version>-<commit-hash>.x86_64.tar.gz"
Use wget to download the file by copying, pasting, and appending the path (minus the quotes) to https://release.memsql.com/.
wget https://release.memsql.com/production/tar/x86_64/<singlestore-file>-<version>-<commit-hash>.x86_64.tar.gz
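The download URL can also be assembled without copying by hand. The sketch below simulates the index response with a hypothetical file name (version 8.7.4, commit hash abc123) and extracts the "Path" field with sed; in practice you would save the curl output to index.json instead.

```shell
# Simulated index response; in practice: curl ... > index.json
cat > index.json <<'EOF'
{"Path": "production/tar/x86_64/singlestoredb-server-8.7.4-abc123.x86_64.tar.gz"}
EOF

# Extract the "Path" value and prepend the release host to form the download URL
path=$(sed -n 's/.*"Path": "\([^"]*\)".*/\1/p' index.json)
echo "https://release.memsql.com/$path"
```

The printed URL can then be passed directly to wget as shown above.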
Alternatively, download the following SingleStore tarball files onto a device with access to the main deployment host.
Transfer SingleStore Files
Transfer the singlestoredb-toolbox, singlestore-client, and singlestoredb-server tarball files into a dedicated singlestore directory on the main deployment host, such as /opt/singlestore, that has been configured so that non-sudo users can access it.
Unpack SingleStore Files
Note: For the remainder of this document, <version>-<commit-hash> will be written simply as <version>.

Unpack singlestoredb-toolbox and singlestore-client into the singlestore directory.
tar xzvf singlestoredb-toolbox-<version>.tar.gz && \
tar xzvf singlestore-client-<version>.tar.gz
You do not need to unpack the singlestoredb-server file in this step.
Deploy SingleStore
Prerequisites
Warning
Before deploying a SingleStore cluster in a production environment, please review and follow the host configuration recommendations.
In addition, SingleStore recommends that each Master Aggregator and child aggregator reside on its own host when deploying SingleStore in a production environment.
Notes on Users and Groups
The user that deploys SingleStore via SingleStore Toolbox must be able to SSH to each host in the cluster. As singlestoredb-server is installed via an RPM or Debian package when deploying SingleStore, a memsql user and group are also created on each host in the cluster.
This memsql user does not have a shell, and attempting to log in or SSH as this user will fail.

The user that deploys SingleStore is added to the memsql group. This group has limited sudo privileges, and members of this group can perform many Toolbox operations without the need to escalate to sudo. Deploying SingleStore adds this user to the memsql group on each host in the cluster.

Manually creating a memsql user and group is only recommended in a sudo-less environment when performing a tarball-based deployment of SingleStore. In this case, the memsql user must be configured so that it can SSH to each host in the cluster.
Minimal Deployment
SingleStore has been designed to be deployed with at least two nodes:

- A Master Aggregator node that runs SQL queries and aggregates the results, and
- A single leaf node, which is responsible for storing and processing data

These two nodes can be deployed on a single host (via the cluster-in-a-box option), or on two hosts, with one SingleStore node on each host.
While additional aggregators and leaf nodes can be added and removed as required, a minimal deployment of SingleStore always consists of at least these two nodes.
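A minimal single-host (cluster-in-a-box) deployment can be sketched as a cluster file like the following; the license path, password, tarball path, and leaf port are illustrative assumptions, not official values.

```yaml
# Minimal cluster-in-a-box sketch: one host, one Master Aggregator, one leaf.
# All values below are placeholders; adjust to your environment.
license: /opt/singlestore/license.txt
high_availability: false
memsql_server_file_path: /opt/singlestore/singlestoredb-server-<version>.tar.gz
root_password: "ChangeMe123!"
hosts:
- hostname: 127.0.0.1
  localhost: true
  nodes:
  - register: false
    role: Master
    config:
      port: 3306
  - register: false
    role: Leaf
    config:
      port: 3307
```

Note that the two nodes share one host, so the leaf must use a port other than the Master Aggregator's 3306.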
Deployment Using YAML File - Tarball
As of SingleStore Toolbox 1., the sdb-deploy setup-cluster command accepts a YAML-based cluster configuration file (or simply "cluster file"), the format of which is validated before attempting to set up the specified cluster.
The command is designed to be consistent: re-running the sdb-deploy setup-cluster command with the same cluster file will always produce the same cluster. This allows the cluster file to be modified, and sdb-deploy setup-cluster re-run, in order to generate the desired cluster.
Complete Cluster File Template
license: <LICENSE | /path/to/LICENSE-file> [Required to bootstrap Master Aggregator]
high_availability: <true | false>
memsql_server_version: <the version of memsql you want to install (6.7+)>
memsql_server_file_path: <path to the downloaded memsql server file>
memsql_server_preinstalled_path: <equivalent to using the '--preinstalled-path' option;
                                  the path to the unpacked singlestoredb-server file
                                  where the unpacked folder name must be of the form
                                  'singlestoredb-server-<version>*' or
                                  'memsql-server-<version>*'>
skip_install: <true | false> [ADVANCED]
skip_validate_env: <true | false> [ADVANCED]
allow_duplicate_host_fingerprints: <true | false> [ADVANCED]
assert_clean_state: <true | false> [ADVANCED]
package_type: <rpm | deb | tar> [Required if multiple package managers are present]
root_password: <default password to be used for all nodes>
optimize: <true | false>
optimize_config:
  memory_percentage: <percentage of memory you want memsql to use>
  no_numa: <true | false>
sync_variables: [ADVANCED]
  <variable's name>: <variable's value>
hosts:
- hostname: <host-name> [Required]
  localhost: <true | false>
  skip_auto_config: <true | false>
  memsqlctl_path: <path to memsqlctl> [ADVANCED]
  memsqlctl_config_path: <path to memsqlctl config> [ADVANCED]
  tar_install_dir: <path to tar install dir> [ADVANCED]
  tar_install_state: <path to tar install state> [ADVANCED]
  ssh: [Required for remote Hosts]
    host: <ssh host name>
    port: <ssh port>
    user: <ssh user>
    private_key: <path to your identity key>
  nodes:
  - register: <true | false>
    force_registration: <true | false> [ADVANCED]
    role: <Unknown | Master | Leaf | Aggregator> (case sensitive) [Required]
    availability_group: <availability group>
    no_start: <true | false>
    config:
      auditlogsdir: <path to auditlogs directory> [ADVANCED]
      baseinstalldir: <path to base install directory> [ADVANCED]
      configpath: <path to configuration path> [ADVANCED] [Required if register is true]
      datadir: <path to data directory> [ADVANCED]
      disable_auto_restart: <true | false>
      password: <password>
      plancachedir: <path to plancache directory> [ADVANCED]
      port: <port number> [Required for node creation]
      tracelogsdir: <path to tracelogs directory> [ADVANCED]
      bind_address: <bind address> [ADVANCED]
      ssl_fips_mode: <true | false> [ADVANCED]
      variables:
        <variable's name>: <variable's value>
Deploy a Cluster
You can deploy your own SingleStore cluster with your desired cluster configuration using the cluster file template above, and/or the example cluster files in the following sections.
After creating the cluster file, you can deploy the corresponding SingleStore cluster via the sdb-deploy setup-cluster
command.
Run the following from the singlestoredb-toolbox
directory with the path to the cluster file as input.
./sdb-deploy setup-cluster --cluster-file </path/to/cluster-file>
Cluster File Notes
- high_availability: Used to enable high availability on the cluster.
  - If set to true, each node may be assigned an availability group via the availability_group field.
  - Refer to Availability Groups for more information.
- license: Use your license from the Cloud Portal. This can be the license itself, or the full path to a text file with the license in it.
- singlestoredb-server_version: You may specify either a major release of SingleStore (such as 7.3) or a specific release (such as 7.3.10). When a major release is specified, the latest patch level of that release will be deployed.
- Setting a Password: There are two ways to set a password in the cluster file YAML:
  - A global root password: Including the root_password field with a password will ensure that each node uses the same root password. Recommended. See Example 1.
  - A node-specific root password: Including a password field in each node definition. This is only recommended if your security protocols require each node to have its own root password. See Example 2.
- register: Set the value of this field to false to create a new node. Set the value to true if the node is already present and you want to register it with SingleStore Toolbox. The configpath field and value are also required when register is set to true. Do not set this value to true to create a new node. For more information, refer to the sdb-deploy setup-cluster reference page.
- Indicating a Host: You may use either an IP address or a hostname when indicating a host in the cluster file.
- Aggregator Hosts: When deploying SingleStore, SingleStore recommends that you deploy each aggregator to its own individual host. If the Master Aggregator goes down, the child aggregators can keep running queries, and coordinating and executing writes. In this scenario, the only operations that can't be performed are DDL commands and reference table management, which must be done on the Master Aggregator.
- Optimize the Cluster: SingleStore recommends that you include the optimize field in the cluster file and set it to true. Doing so checks your current cluster configuration against a set of best practices and either makes changes to maximize performance or provides recommendations for you. For hosts with NUMA support, this command will bind the leaf nodes to specific NUMA nodes.
Cluster File Examples
SingleStore uses a combination of aggregator and leaf nodes that are typically configured in a specific ratio.
The examples below deploy two different types of SingleStore cluster:
- A multi-host, multi-node SingleStore cluster with four hosts, two aggregators, and two leaf nodes
- A multi-host, multi-node SingleStore cluster with two hosts, a single aggregator, and two leaf nodes
These cluster file examples can be used as a starting point for deploying a SingleStore cluster that fulfills your specific requirements.
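As a starting point, here is an illustrative sketch of a cluster file for the first configuration (four hosts, two aggregators, two leaf nodes); all hostnames, the license path, the server version, and the password are placeholder assumptions.

```yaml
# Sketch only: four hosts, HA enabled, one node per host. Adjust all values.
license: /opt/singlestore/license.txt
high_availability: true
memsql_server_version: "8.7"
root_password: "ChangeMe123!"
optimize: true
hosts:
- hostname: host-agg-1
  nodes:
  - register: false
    role: Master
    config:
      port: 3306
- hostname: host-agg-2
  nodes:
  - register: false
    role: Aggregator
    config:
      port: 3306
- hostname: host-leaf-1
  nodes:
  - register: false
    role: Leaf
    availability_group: 1
    config:
      port: 3306
- hostname: host-leaf-2
  nodes:
  - register: false
    role: Leaf
    availability_group: 2
    config:
      port: 3306
```

Placing each leaf in a different availability group lets high availability pair them for redundancy.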
Create the memsql.service File

By creating the following memsql.service file and enabling the memsql service, all nodes on a host will be restarted after the host is rebooted.
Perform the following steps on each host in the cluster.
- Create a memsql.service file in the /etc/systemd/system directory.

  sudo vi /etc/systemd/system/memsql.service

- Using the following example, replace the directory in the ExecStart and ExecStop lines with the directory in which the memsqlctl file resides on the host. In this example, the directory is /opt/singlestore/singlestoredb-server.

  [Unit]
  Description=SingleStore
  After=network.target

  [Service]
  Type=oneshot
  RemainAfterExit=yes
  ExecStart=/opt/singlestore/singlestoredb-server/memsqlctl start-node --yes --all
  ExecStop=/opt/singlestore/singlestoredb-server/memsqlctl stop-node --yes --all
  Slice=system-memsql.slice
  TasksMax=128000
  LimitNICE=-10
  LimitNOFILE=1024000
  LimitNPROC=128000

  [Install]
  WantedBy=multi-user.target
- Ensure that this file is owned by root.

  sudo chown root:root /etc/systemd/system/memsql.service

- Set the requisite file permissions.

  sudo chmod 644 /etc/systemd/system/memsql.service

- Enable the memsql service to start all of the nodes on the host after the host is rebooted.

  sudo systemctl enable memsql.service
  Created symlink from /etc/systemd/system/multi-user.target.wants/memsql.service to /etc/systemd/system/memsql.service.
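After enabling the service, you can confirm that systemd has registered it. These commands assume a systemd-based host:

```bash
# Reload unit files if memsql.service was edited after being enabled
sudo systemctl daemon-reload

# Confirm the service is enabled and inspect its current state
systemctl is-enabled memsql.service
systemctl status memsql.service
```

is-enabled should print "enabled" once the symlink shown above has been created.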
Additional Deployment Options
Note
If this deployment method is not ideal for your target environment, you can choose one that fits your requirements from the Deployment Options.
Connect to Your Cluster
The singlestore-client package contains a lightweight client application that allows you to run SQL queries against your database from a terminal window.
After you have installed singlestore-client, use the singlestore application as you would use the mysql client to access your database.
For more connection options, help is available through singlestore --help.
singlestore -h <Master-or-Child-Aggregator-host-IP-address> -P <port> -u <user> -p<secure-password>
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 12
Server version: 5.5.58 MemSQL source distribution (compatible; MySQL Enterprise & MySQL Commercial)
Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
singlestore>
Refer to Connect to SingleStore for additional options for connecting to SingleStore.
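Once connected, a quick way to confirm the cluster shape is to list its nodes with SHOW AGGREGATORS and SHOW LEAVES. The host, port, and credentials below are placeholders:

```bash
# List the aggregators and leaves registered in the cluster
singlestore -h <Master-Aggregator-host> -P 3306 -u root -p<secure-password> \
  -e "SHOW AGGREGATORS; SHOW LEAVES;"
```

For the four-node cluster described in this guide, you should see two aggregators and two leaves, all in the online state.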
Next Steps After Deployment
Now that you have installed SingleStore, check out the following resources to learn more about SingleStore:
- Optimizing Table Data Structures: Learn the difference between rowstore and columnstore tables, when you should pick one over the other, how to pick a shard key, and so on.
- How to Load Data into SingleStore: Describes the different options you have when ingesting data into a SingleStore cluster.
- How to Run Queries: Provides example schema and queries to begin exploring the potential of SingleStore.
- Configure Monitoring: SingleStore's native monitoring solution is designed to capture and reveal cluster events over time. By analyzing this event data, you can identify trends and, if necessary, take action to remediate issues.
- Tools Reference: Contains information about SingleStore Tools, including Toolbox and related commands.
Last modified: September 18, 2023