Replicate an Unlimited Storage Database

Note

Unlimited storage databases are not available in all editions of SingleStore. For more information, see SingleStore Editions.

Replicating an unlimited storage database allows you to recover from a disaster in which the database at the primary site is unavailable for an extended period, or permanently.

SingleStore does not have built-in functionality to replicate an unlimited storage database. As an alternative, you can set up replication using your object store, from one region or data center to another. The next section explains this procedure.

Replication Procedure

The following example demonstrates how to replicate an unlimited storage database, using Amazon S3 as the object store. The Setup section contains the prerequisite steps for creating the database and a milestone before performing the replication.

The object store must replicate blobs in the correct sequence. For example, if an application thread PUTs a blob B1 on the source and then PUTs a blob B2, B1 must be replicated to the target site before B2.

Setup

1. Create the unlimited storage database bottomless_db in the object storage bucket bottomless_db_bucket in the folder bottomless_db_folder.

The following definition assumes you are using Amazon S3 as the object storage provider.

Note that aws_session_token is required only if your credentials in the CREDENTIALS clause are temporary.

CREATE DATABASE bottomless_db ON S3 "bottomless_db_bucket/bottomless_db_folder"
CONFIG '{"region":"us-east-1"}'
CREDENTIALS '{"aws_access_key_id":"your_access_key_id","aws_secret_access_key":"your_secret_access_key","aws_session_token":"your_session_token"}';

The following definition assumes you are using Azure as the object storage provider.

CREATE DATABASE bottomless_db ON AZURE "bottomless_db_bucket/bottomless_db_folder"
CONFIG ''
CREDENTIALS '{"account_name":"your_account_name","account_key":"your_account_key"}';

The following definition assumes you are using GCS as the object storage provider.

CREATE DATABASE bottomless_db ON GCS "bottomless_db_bucket/bottomless_db_folder"
CONFIG ''
CREDENTIALS '{"access_id":"your_access_key_id","secret_key":"your_secret_access_key"}';

2. Make some updates to bottomless_db. In this example, you update the database by creating a table and inserting some data:

USE bottomless_db;
CREATE TABLE t(a INT);
INSERT INTO t(a) VALUES (10);
INSERT INTO t(a) VALUES (20);

3. Create a milestone (a restore point):

CREATE MILESTONE "after_second_insert" FOR bottomless_db;

4. Make more updates to bottomless_db:

INSERT INTO t(a) VALUES (30);
INSERT INTO t(a) VALUES (40);
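
At this point, table t contains four rows (10, 20, 30, and 40). Optionally, run a quick query to confirm the state that should later be recoverable at the latest point in time:

SELECT * FROM t ORDER BY a;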

Replication Steps

In remote object storage, create a new bucket replicated_bottomless_db_bucket where you will replicate the objects from bottomless_db_bucket.

Next, replicate the bottomless_db_folder folder in bottomless_db_bucket to replicated_bottomless_db_bucket. Refer to your remote object storage provider's documentation for replicating objects from one bucket to another. For example, Amazon S3 supports replicating objects between buckets; see the Amazon S3 replication documentation.

Suppose that bottomless_db_bucket is no longer accessible and you want to fail over to replicated_bottomless_db_bucket. You can use either of two methods: detach the database on the original bucket before stopping the replication (as shown in the sketch below), or remove the storage locks in the replicated bucket before attempting the attach. The storage lock files are located at <root-path>/<storage-id>/storage_locks/; each lock file is named <number>_<number> for some pair of numbers.
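
The following is a minimal sketch of the first method, assuming the original cluster is still reachable. Run it on the original cluster before stopping the replication:

DETACH DATABASE bottomless_db;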

After stopping the replication, attach bottomless_db from replicated_bottomless_db_bucket. Because you are not specifying AT MILESTONE or AT TIME, the database is attached at the point in time of the latest update that was replicated from the source database.

ATTACH DATABASE bottomless_db ON S3 "replicated_bottomless_db_bucket/bottomless_db_folder"
CONFIG '{"region":"us-east-1"}'
CREDENTIALS '{"aws_access_key_id":"your_access_key_id","aws_secret_access_key":"your_secret_access_key"}';

Caution

Replication must be stopped before the database is attached from the objects in the destination bucket. Otherwise, the database may become unusable.
