UI Deployment Using YAML File - Tarball

Introduction

Installing SingleStore on bare metal, on virtual machines, or in the cloud can be done through the use of popular configuration management tools or through SingleStore’s management tools.

In this guide, you will deploy a SingleStore cluster onto physical or virtual machines and connect to the cluster using a SQL client.

A four-node cluster is the minimal recommended cluster size for showcasing SingleStore as a distributed database with high availability; however, you can use the procedures in this tutorial to scale out to additional nodes for increased performance over large data sets or to handle higher concurrency loads. To learn more about SingleStore’s design principles and topology concepts, see Distributed Architecture.

Note

There are no licensing costs for using up to four license units for the leaf nodes in your cluster. If you need a larger cluster with more/larger leaf nodes, please create an Enterprise License trial key.

Prerequisites

For this tutorial you will need:

  • One host (for a single-host cluster-in-a-box for development) or four hosts (physical or virtual machines) with the following:

    • Each SingleStore node requires at least four (4) x86_64 CPU cores and eight (8) GB of RAM per host

    • Eight (8) vCPU and 32 GB of RAM are recommended for leaf nodes to align with license unit calculations

    • Running a 64-bit version of RHEL/AlmaLinux 7 or later, or Debian 8 or later, with kernel 3.10 or later

      For SingleStore 8.1 or later, glibc 2.17 or later is also required.

    • Port 3306 open on all hosts for intra-cluster communication. Depending on the deployment method, this default can be changed either from the command line or via the cluster file.

    • Port 8080 open on the main deployment host for the cluster

    • A non-root user with sudo privileges available on all hosts in the cluster that can be used to run SingleStore services and own the corresponding runtime state

  • SSH access to all hosts

  • A connection to the Internet to download required packages

If running this in a production environment, it is highly recommended that you follow our host configuration recommendations for optimal cluster performance.

Duplicate Hosts

As of SingleStore Toolbox 1.4.4, a check for duplicate hosts is performed before SingleStore is deployed; a message similar to the following is displayed if more than one host has the same SSH host key:

✘ Host check failed. host 172.26.212.166 has the same ssh host keys as 172.16.212.165, toolbox doesn't support registering the same host twice

Confirm that all specified hosts are indeed different and aren’t using identical SSH host keys. Identical host keys can be present if you have instantiated your host instances from images (AMIs, snapshots, etc.) that contain existing host keys. When a host is cloned, the host key (typically stored in /etc/ssh/ssh_host_<cipher>_key) will also be cloned.

As each cloned host will have the same host key, an SSH client cannot verify that it is connecting to the intended host. The script that deploys SingleStore will interpret a duplicate host key as an attempt to deploy to the same host twice, and the deployment will fail.
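Before removing anything, you can confirm that two hosts really do share a host key by comparing fingerprints. The sketch below generates a throwaway key pair and copies the public key to simulate a cloned image; in practice you would compare the /etc/ssh/ssh_host_*_key.pub files (or ssh-keyscan output) gathered from each host.

```shell
# Illustration only: simulate two hosts whose images were cloned from the
# same template, then compare host key fingerprints.
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$tmp/hostA"
cp "$tmp/hostA.pub" "$tmp/hostB.pub"    # hostB is a clone of hostA

# ssh-keygen -lf prints "<bits> <fingerprint> <comment> (<type>)".
fpA=$(ssh-keygen -lf "$tmp/hostA.pub" | awk '{print $2}')
fpB=$(ssh-keygen -lf "$tmp/hostB.pub" | awk '{print $2}')

# Matching fingerprints indicate duplicate host keys.
[ "$fpA" = "$fpB" ] && echo "duplicate host keys detected"
```

When run against real hosts, replace the generated files with each host's public host key files; identical fingerprints on two different hosts mean deployment will fail the duplicate-host check.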

The following steps demonstrate a potential remedy for the duplicate hosts message. Note that these steps may differ slightly depending on your Linux distribution and configuration.

sudo -i
ls -al /etc/ssh/
rm /etc/ssh/<your-ssh-host-keys>
ssh-keygen -f /etc/ssh/<ssh-host-rsa-key-filename> -N '' -t rsa
ssh-keygen -f /etc/ssh/<ssh-host-ecdsa-key-filename> -N '' -t ecdsa
ssh-keygen -f /etc/ssh/<ssh-host-ed25519-key-filename> -N '' -t ed25519
systemctl restart sshd

Alternatively, running ssh-keygen -A as root regenerates all missing default host key types in one step.

For more information about SSH host keys, including the equivalent steps for Ubuntu-based systems, refer to Avoid Duplicating SSH Host Keys.

As of SingleStore Toolbox 1.5.3, sdb-deploy setup-cluster supports an --allow-duplicate-host-fingerprints option that can be used to ignore duplicate SSH host keys.

Network Configuration

Depending on the host and its function in deployment, some or all of the following port settings should be enabled on hosts in your cluster.

These routing and firewall settings must be configured to:

  • Allow database clients (e.g. your application) to connect to the SingleStore aggregators

  • Allow all nodes in the cluster to talk to each other over the SingleStore protocol (3306)

  • Allow you to connect to management and monitoring tools

The protocol, default port, direction, and description for each required port setting are as follows:

  • TCP, port 22, inbound and outbound: For host access. Required between nodes in SingleStore tool deployment scenarios. Also useful for remote administration and troubleshooting on the main deployment host.

  • TCP, port 443, outbound: To get the public repo key for package verification. Required for nodes downloading SingleStore APT or YUM packages.

  • TCP, port 3306, inbound and outbound: Default port used by SingleStore. Required on all nodes for intra-cluster communication. Also required on aggregators for client connections.

The service port values are configurable if the default values cannot be used in your deployment environment.

We also highly recommend configuring your firewall to prevent other hosts on the Internet from connecting to SingleStore.
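As a quick sanity check of these firewall rules, a TCP connection attempt can show whether the default SingleStore port is reachable from a given machine. This sketch uses bash's built-in /dev/tcp redirection; the loopback address is a stand-in for one of your hosts.

```shell
# Probe a host/port pair; substitute an aggregator's address for 127.0.0.1.
host=127.0.0.1
port=3306

# /dev/tcp/<host>/<port> is a bash feature: opening it attempts a TCP connect.
if timeout 2 bash -c ">/dev/tcp/$host/$port" 2>/dev/null; then
  status=open
else
  status=closed
fi
echo "port $port on $host: $status"
```

A "closed" result can mean either that no SingleStore node is listening yet or that a firewall is filtering the port; run the probe from another host in the cluster to distinguish local from network issues.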

Install SingleStore Tools

The first step in deploying your cluster is to download and install the SingleStore Tools on one of the hosts in your cluster. This host will be designated as the main deployment host for deploying SingleStore across your other hosts and setting up your cluster.

These tools perform all major cluster operations including downloading the latest version of SingleStore onto your hosts, assigning and configuring nodes in your cluster, and other management operations. For the purpose of this guide, the main deployment host is the same as the designated Master Aggregator of the SingleStore cluster.

Installation - Tarball

Download SingleStore Files

Download the singlestoredb-toolbox, singlestore-client, and singlestoredb-server files onto the main deployment host, or onto a device with access to the main deployment host.

To obtain the latest version of each file, use the following:

curl https://release.memsql.com/production/index/<singlestore-file>/latest.json

Replace <singlestore-file> with memsqltoolbox, memsqlclient, or singlestoredbserver to retrieve the latest release information for each file type.

  • To download the latest patch release of a major version, substitute the desired major version for latest. For example:

    curl https://release.memsql.com/production/index/singlestoredbserver/8.9.json

  • To download a specific patch release of a major version, substitute the desired patch release for latest. For example:

    curl https://release.memsql.com/production/index/singlestoredbserver/8.9.1.json

The JSON you receive contains relative file paths in the following format:

"Path": "production/tar/x86_64/<singlestore-file>-<version>-<commit-hash>.x86_64.tar.gz"

Use wget to download the file by copying, pasting, and appending the path (minus the quotes) to https://release.memsql.com/. Examples are shown below.

wget https://release.memsql.com/production/tar/x86_64/<singlestore-file>-<version>-<commit-hash>.x86_64.tar.gz
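As a worked example, the snippet below extracts the "Path" value from a saved JSON response and appends it to the release server URL. The version and commit hash shown are placeholders for illustration, not a real release.

```shell
# A single "Path" entry as it appears in the release-index JSON
# (hypothetical version 8.9.1 and commit hash abc123).
json='"Path": "production/tar/x86_64/singlestoredb-server-8.9.1-abc123.x86_64.tar.gz"'

# Pull out the quoted value and prepend the release server base URL.
path=$(printf '%s' "$json" | sed -n 's/.*"Path": *"\([^"]*\)".*/\1/p')
url="https://release.memsql.com/$path"

echo "$url"
# The resulting URL is what you would pass to wget.
```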

Alternatively, download the SingleStore tarball files onto a device with access to the main deployment host.

Transfer SingleStore Files

Transfer the singlestoredb-toolbox, singlestore-client, and singlestoredb-server tarball files into a dedicated singlestore directory on the main deployment host that non-sudo users can access, such as /opt/singlestore.

Unpack SingleStore Files

Note: For the remainder of this document, <version>-<commit-hash> will be written simply as <version>.

Unpack singlestoredb-toolbox and singlestore-client into the singlestore directory.

tar xzvf singlestoredb-toolbox-<version>.tar.gz && \
tar xzvf singlestore-client-<version>.tar.gz

You do not need to unpack the singlestoredb-server file in this step. It will be installed as part of deployment, which is shown in the next step.

Deploy SingleStore

Prerequisites

Warning

Before deploying a SingleStore cluster in a production environment, please review and follow the host configuration recommendations. Failing to follow these recommendations will result in sub-optimal cluster performance.

In addition, SingleStore recommends that each Master Aggregator and child aggregator reside on its own host when deploying SingleStore in a production environment.

Notes on Users and Groups

The user that deploys SingleStore via SingleStore Toolbox must be able to SSH to each host in the cluster. When singlestoredb-server is installed via an RPM or Debian package during deployment, a memsql user and group are also created on each host in the cluster.

This memsql user does not have a shell, and attempting to log in or SSH as this user will fail. The user that deploys SingleStore is added to the memsql group, whose members can perform most Toolbox operations without escalating to sudo. Any user who needs to run SingleStore Toolbox commands must be added to the memsql group on each host in the cluster and must also be able to SSH to each host.

Manually creating a memsql user and group is only recommended in a sudo-less environment when performing a tarball-based deployment of SingleStore. In order to run SingleStore Toolbox commands against a cluster, this manually-created memsql user must be configured so that it can SSH to each host in the cluster.
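Once singlestoredb-server has been installed, you can verify group membership on a host with standard tools. This sketch assumes nothing beyond the id utility; on a host where the package has not yet been installed, the memsql group will not exist and the check reports accordingly.

```shell
# Check whether the current user belongs to the memsql group on this host.
u=$(id -un)

# id -nG lists the user's group names; match "memsql" as a whole word.
if id -nG "$u" | tr ' ' '\n' | grep -qx memsql; then
  echo "$u is in the memsql group"
else
  echo "$u is NOT in the memsql group"
fi
```

Run this on each host in the cluster; any host where the check fails will reject Toolbox operations from that user.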

Minimal Deployment

SingleStore has been designed to be deployed with at least two nodes:

  • A Master Aggregator node that runs SQL queries and aggregates the results, and

  • A single leaf node, which is responsible for storing and processing data

These two nodes can be deployed on a single host (via the cluster-in-a-box option), or on two hosts, with one SingleStore node on each host.

While additional aggregators and nodes can be added and removed as required, a minimal deployment of SingleStore always consists of at least these two nodes.

UI Deployment Using YAML File - Tarball

Note

The user that deploys SingleStore via the UI must also be able to SSH into each host in the cluster without using a password.

As of SingleStore Toolbox 1.6, SingleStore can be deployed via a browser-based UI. This option describes how to deploy SingleStore using this UI. Please review the prerequisites prior to deploying SingleStore.

In order to use the UI, the user account that will deploy SingleStore must:

  • Be able to install SingleStore and SingleStore Toolbox 1.6 or later via tarball.

  • Deploy a standard SingleStore configuration. Advanced options, such as those available with a cluster deployment via a YAML file, are not available in the UI.

Start the UI

Perform the following steps to start the UI.

  1. Change to the directory where the SingleStore Toolbox was uncompressed.

  2. Run the following command.

    ./sdb-deploy ui

This command will display a link with a secure token that you can use to deploy SingleStore via the UI.

For additional options that can be used with ./sdb-deploy ui, refer to the associated reference page.

Access the UI

Copy and paste this link into a Chrome or Firefox browser to access the UI.

Note: You may need to modify the URL by changing localhost to a hostname or IP address depending on how and where you installed SingleStore Tools. The hostname or IP address must be that of the main deployment host, which is typically the Master Aggregator.

Create a YAML File Using the UI and Deploy a Cluster

In lieu of deploying a cluster immediately, a cluster can be configured using the UI and the configuration saved to a YAML file. The YAML file can then be used to deploy a cluster by copying the YAML file to the Master Aggregator and running the following command.

Run the following command from the directory in which the singlestoredb-toolbox tarball file was uncompressed.

./sdb-deploy setup-cluster --cluster-file </path/to/cluster-file>

Note

You may use the UI to create a "base" cluster configuration YAML file that can be saved and further customized prior to deploying a cluster, or create a YAML file by hand. Refer to the YAML-based deployment guides listed in the deployment overview for the YAML file format and example cluster configuration files.
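For orientation, a minimal two-host cluster file might look like the sketch below. The field names and values here (license placeholder, hostnames, ports) are illustrative assumptions, not a verified schema; use a file generated by the UI, or the example configurations in the YAML-based deployment guides, as the authoritative template.

```yaml
# Illustrative sketch only; consult the YAML-based deployment guides for the
# authoritative cluster-file format. Hostnames and IPs are hypothetical.
license: <license-key>
high_availability: false
root_password: <secure-password>
hosts:
  - hostname: 172.16.212.165
    localhost: true
    nodes:
      - role: Master
        config:
          port: 3306
  - hostname: 172.16.212.166
    localhost: false
    nodes:
      - role: Leaf
        config:
          port: 3306
```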

Troubleshooting

  • Message: unknown command "ui" for "sdb-deploy"

    Solution: Confirm that SingleStore Toolbox v1.6 or later has been installed on the main deployment host.

  • Message: sdb-deploy ui is not currently supported by SingleStore.

    Solution: The installed version of SingleStore Toolbox does not support deploying SingleStore via the UI. Please select another deployment option.

  • Message: Registered hosts detected. SingleStore Toolbox supports managing only one cluster per instance. To view them, run './sdb-toolbox-config list-hosts'. To remove them, run './sdb-toolbox-config unregister-host'

    Solution: SingleStore Toolbox can manage only one cluster per instance. Run ./sdb-toolbox-config list-hosts to view the registered hosts, and ./sdb-toolbox-config unregister-host to remove them before deploying a new cluster.

Create the memsql.service File

Creating the following memsql.service file and enabling the memsql service ensures that all nodes on a host are restarted after the host is rebooted.

Perform the following steps on each host in the cluster.

  1. Create a memsql.service file in the /etc/systemd/system directory.

    sudo vi /etc/systemd/system/memsql.service
  2. Using the following example, replace the directory in the ExecStart and ExecStop lines with the directory in which the memsqlctl file resides on the host.

    In this example, the directory is /opt/singlestore/singlestoredb-server.

    [Unit]
    Description=SingleStore
    After=network.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/opt/singlestore/singlestoredb-server/memsqlctl start-node --yes --all
    ExecStop=/opt/singlestore/singlestoredb-server/memsqlctl stop-node --yes --all
    Slice=system-memsql.slice
    TasksMax=128000
    LimitNICE=-10
    LimitNOFILE=1024000
    LimitNPROC=128000

    [Install]
    WantedBy=multi-user.target
  3. Ensure that this file is owned by root.

    sudo chown root:root /etc/systemd/system/memsql.service
  4. Set the requisite file permissions.

    sudo chmod 644 /etc/systemd/system/memsql.service
  5. Enable the memsql service to start all of the nodes on the host after the host is rebooted.

    sudo systemctl enable memsql.service
    Created symlink from /etc/systemd/system/multi-user.target.wants/memsql.service to /etc/systemd/system/memsql.service.

Additional Deployment Options

Note

If this deployment method is not ideal for your target environment, you can choose one that fits your requirements from the Deployment Options.

Connect to Your Cluster

The singlestore-client package contains a lightweight client application that allows you to run SQL queries against your database from a terminal window.

After you have installed singlestore-client, use the singlestore application as you would use the mysql client to access your database.

For more connection options, help is available through singlestore --help.

singlestore -h <Master-or-Child-Aggregator-host-IP-address> -P <port> -u <user> -p<secure-password>
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 12
Server version: 5.5.58 MemSQL source distribution (compatible; MySQL Enterprise & MySQL Commercial)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

singlestore> 

Refer to Connect to SingleStore for additional options for connecting to SingleStore.

Next Steps After Deployment

Now that you have installed SingleStore, check out the following resources to learn more about SingleStore:

  • Optimizing Table Data Structures: Learn the difference between rowstore and columnstore tables, when you should pick one over the other, how to pick a shard key, and so on.

  • How to Load Data into SingleStore: Describes the different options you have when ingesting data into a SingleStore cluster.

  • How to Run Queries: Provides example schema and queries to begin exploring the potential of SingleStore.

  • Configure Monitoring: SingleStore’s native monitoring solution is designed to capture and reveal cluster events over time. By analyzing this event data, you can identify trends and, if necessary, take action to remediate issues.

  • Tools Reference: Contains information about SingleStore Tools, including Toolbox and related commands.

Last modified: September 18, 2023
