
BACKUP DATABASE

Info

In SingleStore Managed Service, creating or restoring a backup to/from a local filesystem is not supported.

Backs up a database. The backup files are saved on the master aggregator and on the leaves. Refer to the Backing Up and Restoring Data guide for additional information.

Syntax

Backup to a Local or Network Drive

BACKUP [DATABASE] <database_name> [WITH { INIT | DIFFERENTIAL | SPLIT PARTITIONS [BY 2] }] TO "backup_path" [(OPTION resource_pool = resource_pool_name)]

Backup to S3

BACKUP [DATABASE] <database_name> [WITH { INIT | DIFFERENTIAL | SPLIT PARTITIONS [BY 2] }] TO S3 "bucket/path" [CONFIG configuration_json] CREDENTIALS credentials_json [(OPTION resource_pool = resource_pool_name)]

configuration_json:
'{"region":"your_region",
  "endpoint_url":"http://other_endpoint"
  [,"x-amz-server-side-encryption":"<encryption_type>" [, "x-amz-server-side-encryption-aws-kms-key-id":"<optional_key>" ] |
  "x-amz-server-side-encryption-customer-algorithm":"<encryption_type>",
  "x-amz-server-side-encryption-customer-key":"<encrypted_or_unencrypted_key>",
  "x-amz-server-side-encryption-customer-key-MD5":"<key>"
  ]
}'

credentials_json:
'{"aws_access_key_id": "replace_with_your_access_key_id",
  "aws_secret_access_key": "replace_with_your_secret_access_key",
  ["aws_session_token": "replace_with_your_temp_session_token",]
  ["role_arn":"replace_with_your_role_arn"]
}'

Backup to an S3 Compatible Storage Provider

BACKUP [DATABASE] <database_name> [WITH { INIT | DIFFERENTIAL | SPLIT PARTITIONS [BY 2] }] TO S3 "bucket/path" [CONFIG configuration_json] CREDENTIALS credentials_json [(OPTION resource_pool = resource_pool_name)]

configuration_json:
'{"endpoint_url":"http://storage_provider_endpoint"}'

credentials_json:
'{"aws_access_key_id": "replace_with_your_access_key_id",
  "aws_secret_access_key": "replace_with_your_secret_access_key"
}'

Backup to Google Cloud Storage

BACKUP [DATABASE] <database_name> [WITH { INIT | DIFFERENTIAL | SPLIT PARTITIONS [BY 2] }] TO GCS "bucket/path" [CONFIG configuration_json] CREDENTIALS credentials_json [(OPTION resource_pool = resource_pool_name)]

configuration_json:
'{}'

credentials_json:
'{"access_id": "replace_with_your_google_access_key",
  "secret_key": "replace_with_your_google_secret_key"
}'

See the remarks for more details.

Backup to Azure Blobs

BACKUP DATABASE <database_name> [WITH { INIT | DIFFERENTIAL | SPLIT PARTITIONS [BY 2] }] TO AZURE "container/blob-prefix" CREDENTIALS credentials_json [(OPTION resource_pool = resource_pool_name)]

credentials_json:
'{"account_name": "your_account_name_here",
  "account_key": "your_account_key_here"
}'

Backup Using a Connection Link

BACKUP [DATABASE] <database_name> TO LINK [link_database_name.]connection_name "backup_path";

General Arguments

  • <database_name> is the name of the SingleStore DB database to back up, while link_database_name is the name of the database that stores the connection link.
  • resource_pool_name is optional. It is the name of the resource pool where the backup operation runs.
  • Use the [WITH { INIT | DIFFERENTIAL }] clause to make an incremental backup. If you don’t specify this clause, a full backup is made.
  • Use the [WITH { SPLIT PARTITIONS [BY 2] }] clause to double the number of partitions in your database in an offline manner; see the sketch after this list. Refer to the partition split topic for details.
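
For instance, a minimal sketch of a SPLIT PARTITIONS backup, based on the syntax above (the database name and path are placeholders):

-- Creates a backup with each partition split in two; restoring it
-- produces a database with double the partition count.
BACKUP DATABASE memsql_demo WITH SPLIT PARTITIONS TO "/my-backup-dir";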

General Remarks

  • The BACKUP DATABASE command writes a consistent snapshot of the database to disk. This means a backup stores the data in the database as it existed at the time the BACKUP DATABASE operation started (all committed transactions at the time the BACKUP DATABASE started will be stored in the backup). Backup files containing the database’s reference tables are saved on the master aggregator. Backup files are also saved on the leaves.
  • This command can be run on the master aggregator node, or a child aggregator node. For more information, see Node Requirements for SingleStore DB Commands.
  • See Backing Up and Restoring Data for a list of items that are included in or excluded from a backup.
  • BACKUP DATABASE is an online operation, which means that writes will continue to run while the BACKUP DATABASE is in progress. Any writes executed after the BACKUP DATABASE has started will not be reflected in the stored backup.
  • All partitions of the database need to be in the online or replicating state to run a BACKUP DATABASE operation; you can check partition states as shown in the sketch after this list. The backup file is created on the server, not the client, unlike mysqldump.
  • The aggregator needs to briefly block new queries from running across the cluster to start a BACKUP DATABASE. Once the BACKUP DATABASE is running, queries can run against the cluster normally. If there is a long-running write query executing at the time a BACKUP DATABASE operation is run, the BACKUP DATABASE will wait for the query to finish.
  • Incremental backups contain incremental data only for columnstore tables. For rowstore tables, an incremental backup contains a full copy of the rowstore data (instead of incremental data).
  • A backup of system databases cannot be created.
  • Remote databases, i.e. databases on a secondary cluster to which the primary database is being replicated, cannot be backed up with the BACKUP DATABASE command.
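
For example, a quick way to inspect partition and database states before starting a backup; a minimal sketch, assuming the SHOW PARTITIONS and SHOW DATABASES EXTENDED commands and a database named memsql_demo:

-- Lists each partition of the database with its host, port, and role.
SHOW PARTITIONS ON memsql_demo;

-- Shows per-database state (for example, online or replicating).
SHOW DATABASES EXTENDED;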

Backup to a Local or Network Drive

  • The backup_path is relative to the directory stored in the datadir engine variable, by default. You can include / at the front of the backup_path to override the default and use an absolute path.
  • When a backup is made to local disk, each node writes its portion of the backup to the destination directory separately. For this reason, the destination should be a network drive mounted on every node, so that all nodes have the required access.
Info

Incremental backups cannot be stored on a local file system.

Backup to S3

  • BACKUP ... TO S3 writes the backup files to the specified S3 bucket.

  • SingleStore DB backup will add database_name.backup to the path and put the objects in the path path/database_name.backup/.

  • bucket/path/database_name.backup must not currently exist; otherwise, an error will be returned.

  • aws_session_token is optional. This is only needed if your AWS account uses the AWS Security Token Service.

  • region is optional. It is the AWS region where the S3 bucket resides.

  • endpoint_url is optional.

  • role_arn is optional and can be used instead of specifying aws_access_key_id and aws_secret_access_key values. This is only needed if your AWS security is set up to require a role.

  • You should not specify the compatibility_mode setting, as you would when backing up to Google Cloud Storage using the S3 interface.

  • Backup to an S3 bucket requires the following permissions:

    s3:GetObject
    s3:PutObject
    s3:ListBucket
    

Backup to an S3 Compatible Storage Provider

  • BACKUP ... TO S3 with the endpoint_url set to the URL of a storage provider creates an S3-compatible backup at the storage provider; see the sketch after this list.
  • SingleStore DB backup will add database_name.backup to the path and put the objects in the path path/database_name.backup/.
  • bucket/path/database_name.backup must not currently exist; otherwise, an error will be returned.
  • aws_access_key_id is optional. It is the access key id for accessing the storage provider.
  • aws_secret_access_key is optional. It is the secret access key for accessing the storage provider.
  • You should not specify the compatibility_mode setting, as you would when backing up to Google Cloud Storage using the S3 interface.
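
For example, a minimal sketch of a backup to an S3-compatible provider; the endpoint URL, bucket, and keys shown are placeholders:

BACKUP DATABASE memsql_demo TO S3 "backup_bucket/backups/6_1_2018"
  CONFIG '{"endpoint_url":"http://storage-provider.example.com:9000"}'
  CREDENTIALS '{"aws_access_key_id":"your_access_key_id","aws_secret_access_key":"your_secret_access_key"}';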

Backup to Google Cloud Storage

  • The CONFIG clause may optionally specify an endpoint_url.
  • The CREDENTIALS clause is required.
  • Only HMAC keys are supported.
  • The CREDENTIALS clause should be a JSON object with two fields:

access_id: usually a 24- or 60-character alphanumeric string linked to the Google account, typically all uppercase and starting with GOOG.

secret_key: usually a 40-character, Base64-encoded string linked to a specific access_id.

Backup to Azure Blobs

  • BACKUP ... TO AZURE writes the backup files to the specified Azure container.
  • SingleStore DB backup will append database_name.backup to the blob prefix and put the objects in the path blob-prefix/database_name.backup/.
  • container/blob-prefix/database_name.backup must not currently exist; otherwise, an error will be returned.

Backup Using a Connection Link

  • The BACKUP ... TO LINK command writes the backup files to the specified container using a connection link. To use this command, you only need to know the connection link name, not the connection details and configuration.
  • This command supports backup to S3, Azure, and GCS connections only.

Examples

Making an Incremental Backup

The Backing Up and Restoring Data topic provides an example of making incremental backups.
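
As a quick illustration, here is a minimal sketch of an incremental backup sequence. It assumes an S3 bucket, since incremental backups cannot be stored on a local file system; the bucket and credentials are placeholders:

-- Take the initial full backup that starts the incremental series.
BACKUP DATABASE memsql_demo WITH INIT TO S3 "backup_bucket/backups/incremental"
  CREDENTIALS '{"aws_access_key_id":"your_access_key_id","aws_secret_access_key":"your_secret_access_key"}';

-- Later, take a differential backup of the changes since the previous backup.
BACKUP DATABASE memsql_demo WITH DIFFERENTIAL TO S3 "backup_bucket/backups/incremental"
  CREDENTIALS '{"aws_access_key_id":"your_access_key_id","aws_secret_access_key":"your_secret_access_key"}';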

All of the examples shown below make full backups.

Local Backup

The following example backs up a database to /my-backup-dir.

BACKUP DATABASE memsql_demo TO "/my-backup-dir";

The following example backs up a database to the directory stored in the datadir engine variable.

BACKUP DATABASE memsql_demo TO "./";

The following example backs up a database to the my-relative-backup-dir directory, which is relative to the directory stored in the datadir engine variable. In this example, if datadir contained the directory /var/lib/memsql/leaf-3308-aaaaaaaaaa/data, the backup would be saved to the /var/lib/memsql/leaf-3308-aaaaaaaaaa/data/my-relative-backup-dir directory.

BACKUP DATABASE memsql_demo TO "./my-relative-backup-dir";

S3 Backup

The following example backs up a database to an S3 bucket.

BACKUP DATABASE memsql_demo TO S3 "backup_bucket/backups/6_1_2018"
  CONFIG '{"region":"us-east-1"}'
  CREDENTIALS '{"aws_access_key_id":"your_access_key_id","aws_secret_access_key":"your_secret_access_key"}';

The following example backs up a database to an S3 bucket using an Amazon Resource Name (ARN) for AWS Identity and Access Management (IAM).

BACKUP DATABASE mydatabase TO S3 'my-bucket/memsql-backup/'
  CONFIG '{"region":"us-east-1"}'
  CREDENTIALS '{"role_arn": "arn:aws:iam::<AccountID>:role/EC2AmazonS3FullAccess"}';

The following examples demonstrate the CONFIG clause for backing up to encrypted buckets.

--- For Server-Side Encryption (SSE) ---
CONFIG '{"region":"us-east-1", "x-amz-server-side-encryption":"AES256"}'
--- For Server-Side Encryption - Key Management Service (SSE-KMS) ---
CONFIG '{"region":"us-east-1",
"x-amz-server-side-encryption":"aws:kms",
"x-amz-server-side-encryption-aws-kms-key-id": "<optional_key>"}'
--- For Server-Side Encryption with Customer-Provided Encryption Keys (SSE-C) ---
CONFIG '{"region":"us-east-1", "endpoint_url":"<target_endpoint_url>",
"x-amz-server-side-encryption-customer-algorithm":"AES256",
"x-amz-server-side-encryption-customer-key":"<key>",
"x-amz-server-side-encryption-customer-key-MD5":"<key>"}'

Google Cloud Storage Backup

The following example backs up a database to a Google Cloud Storage bucket.

BACKUP DATABASE memsql_demo TO GCS "backup_bucket/backups/6_1_2018"
  CREDENTIALS '{"access_id":"your_google_access_key","secret_key":"your_google_secret_key"}';

Azure Blobs Backup

The following example backs up a database to an Azure container.

BACKUP DATABASE memsql_demo TO AZURE "backup_container/backups/6_1_2018"
  CREDENTIALS '{"account_name":"your_account_name_here","account_key":"your_account_key_here"}';

Backup Using a Resource Pool

The following example backs up a database to /my-backup-dir, using an existing resource pool named general.

BACKUP DATABASE memsql_demo TO "/my-backup-dir" OPTION (resource_pool = general);
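
If the pool does not already exist, you can create one first. A minimal sketch, assuming the CREATE RESOURCE POOL command; the memory limit shown is an arbitrary placeholder:

-- Create a resource pool capped at 50% of memory, then run the backup in it.
CREATE RESOURCE POOL general WITH MEMORY_PERCENTAGE = 50;
BACKUP DATABASE memsql_demo TO "/my-backup-dir" OPTION (resource_pool = general);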

The following example backs up the database db1 to the bucket at the specified path, using an S3 connection link S3_link stored in the database db2. Ensure that S3_link already exists in db2. Refer to the CREATE_LINK topic for details on creating a connection link.

BACKUP DATABASE db1 TO LINK db2.S3_link "backup_path/backups/6_1_2020";

Note: db1 is the SingleStore DB database while db2 is the database that stores the connection link S3_link.
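
For reference, a minimal sketch of creating such a connection link beforehand; this assumes a CREATE LINK form like the following, with placeholder credentials (refer to the CREATE_LINK topic for the exact syntax):

-- Create an S3 connection link named S3_link in database db2.
CREATE LINK db2.S3_link AS S3
  CREDENTIALS '{"aws_access_key_id":"your_access_key_id","aws_secret_access_key":"your_secret_access_key"}'
  CONFIG '{"region":"us-east-1"}';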
