Schedule periodic backups
Schedule backups of your databases to make sure you always have valid backups.
Periodic backups provide a way to restore data with minimal data loss. With Redis Enterprise Software, you can schedule periodic backups to occur once a day (every 24 hours), twice a day (every 12 hours), every 4 hours, or every hour.
As of v6.2.8, you can specify the start time for 24-hour or 12-hour backups.
To make an on-demand backup, export your data.
You can schedule backups to a variety of locations, including:
- FTP server
- SFTP server
- Local mount point
- Amazon Simple Storage Service (S3)
- Azure Blob Storage
- Google Cloud Storage
The backup process creates compressed (.gz) RDB files that you can import into a database.
When you back up a database configured for database clustering, Redis Enterprise Software creates a backup file for each shard in the configuration. All backup files are copied to the storage location.
- Make sure that you have enough space available in your storage location. If there is not enough space in the backup location, the backup fails.
- The backup configuration only applies to the database it is configured on.
- To limit the number of parallel shard backups, set both `tune cluster max_simultaneous_backups` and `tune node max_redis_forks`. `max_simultaneous_backups` is set to 4 by default. See the example after this list.
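The following rladmin commands show one possible way to apply these settings; the values and the node ID are placeholders, and you should confirm the exact syntax for your version with `rladmin help tune`:

rladmin tune cluster max_simultaneous_backups 2
rladmin tune node 1 max_redis_forks 4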
Schedule periodic backups
Before scheduling periodic backups, verify that your storage location exists and is available to the user running Redis Enterprise Software (`redislabs` by default). You should verify that:
- Permissions are set correctly.
- The user running Redis Enterprise Software is authorized to access the storage location.
- The authorization credentials work.
Storage location access is verified before periodic backups are scheduled.
To schedule periodic backups for a database:
1. Sign in to the Redis Enterprise Software Cluster Manager UI using admin credentials.
2. From the Databases list, select the database, then select Configuration.
3. Select the Edit button.
4. Expand the Scheduled backup section.
5. Select Add backup path to open the Path configuration dialog.
6. Select the tab that corresponds to your storage location type, enter the location details, and select Done.
   See Supported storage locations for more information about each storage location type.
7. Set the backup Interval and Starting time.

   | Setting | Description |
   |---------|-------------|
   | Interval | Specifies the frequency of the backup; that is, the time between each backup snapshot. Supported values include Every 24 hours, Every 12 hours, Every 4 hours, and Every hour. |
   | Starting time | v6.2.8 or later: Specifies the start time for the backup; available when Interval is set to Every 24 hours or Every 12 hours. If not specified, defaults to a time selected by Redis Enterprise Software. |

8. Select Save.
Access to the storage location is verified when you apply your updates. This means the location, credentials, and other details must exist and function before you can enable periodic backups.
Default backup start time
If you do not specify a start time for 24-hour or 12-hour backups, Redis Enterprise Software chooses a random start time for you.
This choice assumes that your database is deployed to a multi-tenant cluster containing multiple databases, so default start times are staggered (offset) to ensure availability. The cluster calculates a random offset, which specifies a number of seconds added to the start time.
Here's how it works:
- Assume you're enabling the backup at 4:00 pm (1600 hours).
- You choose to back up your database every 12 hours.
- Because you didn't set a start time, the cluster randomly chooses an offset of 4,320 seconds (or 72 minutes).
This means your first periodic backup occurs 72 minutes after the time you enabled periodic backups (4:00 pm + 72 minutes). Backups repeat every 12 hours at roughly the same time.
Backup times are approximate because backups are started by a trigger process that runs every five minutes. When the process wakes, it compares the current time to the scheduled backup time. If that time has passed, it triggers a backup.
If the previous backup fails, the trigger process retries the backup until it succeeds.
In addition, throttling and resource limits also affect backup times.
For help with specific backup issues, contact support.
Supported storage locations
Database backups can be saved to a local mount point, transferred to a URI using FTP/SFTP, or stored on cloud provider storage.
When saved to a local mount point or a cloud provider, backup locations need to be available to the group and user running Redis Enterprise Software (`redislabs:redislabs` by default).
Redis Enterprise Software needs the ability to view permissions and update objects in the storage location. Implementation details vary according to the provider and your configuration. To learn more, consult the provider's documentation.
The following sections provide general guidelines. Because provider features change frequently, use your provider's documentation for the latest info.
FTP server
Before enabling backups to an FTP server, verify that:
- Your Redis Enterprise cluster can connect and authenticate to the FTP server.
- The user specified in the FTP server location has read and write privileges.
To store your backups on an FTP server, set its Backup Path using the following syntax:
<protocol>://[username]:[password]@[hostname]:[port]/[path]/
Where:
- protocol: the server's protocol, either `ftp` or `ftps`.
- username: your username, if needed.
- password: your password, if needed.
- hostname: the hostname or IP address of the server.
- port: the port number of the server, if needed.
- path: the backup path, if needed.
Example: ftp://username:password@10.1.1.1/home/backups/
The user account needs permission to write files to the server.
SFTP server
Before enabling backups to an SFTP server, make sure that:
- Your Redis Enterprise cluster can connect and authenticate to the SFTP server.
- The user specified in the SFTP server location has read and write privileges.
- The SSH private keys are specified correctly. You can use the key generated by the cluster or specify a custom key.

To use the cluster's auto-generated key:

1. Go to Cluster > Security > Certificates.
2. Expand Cluster SSH Public Key.
3. Download or copy the cluster SSH public key to the appropriate location on the SFTP server.
   Use the server documentation to determine the appropriate location for the SSH public key; see the example at the end of this section.

To back up to an SFTP server, enter the SFTP server location in the format:

sftp://user:password@host<:custom_port>/path/

For example: sftp://username:password@10.1.1.1/home/backups/
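If the SFTP server runs OpenSSH, the appropriate location for the cluster SSH public key is typically the backup user's authorized_keys file; the key filename below is a placeholder:

# On the SFTP server, as the user specified in the backup location:
cat cluster_ssh_public_key.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys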
Local mount point
Before enabling periodic backups to a local mount point, verify that:
- Each node can connect to the destination server that hosts the mount point.
- The `redislabs:redislabs` user has read and write privileges on the local mount point and on the destination server.
- The backup location has enough disk space for your backup files. Backup files are saved with filenames that include the timestamp, which means that earlier backups are not overwritten.
To back up to a local mount point:
1. On each node in the cluster, create the mount point:
   1. Connect to a shell running on the Redis Enterprise Software server hosting the node.
   2. Mount the remote storage to a local mount point.
      For example:
      sudo mount -t nfs 192.168.10.204:/DataVolume/Public /mnt/Public
2. In the path for the backup location, enter the mount point.
   For example: /mnt/Public
3. Verify that the user running Redis Enterprise Software has permissions to access and update files in the mount location; one way to check this is shown below.
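The following commands check that the `redislabs` user can create and remove a file in the mount location; the mount path is a placeholder:

sudo -u redislabs touch /mnt/Public/backup-write-test
sudo -u redislabs rm /mnt/Public/backup-write-test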
AWS Simple Storage Service
To store backups in an Amazon Web Services (AWS) Simple Storage Service (S3) bucket:
1. Sign in to the AWS Management Console.
2. Create an S3 bucket if you do not already have one.
3. Create an IAM user with permission to add objects to the bucket.
4. Create an access key for that user if you do not already have one.
5. In the Redis Enterprise Software Cluster Manager UI, when you enter the backup location details:
   1. Select the AWS S3 tab on the Path configuration dialog.
   2. In the Path field, enter the path of your bucket.
   3. In the Access Key ID field, enter the access key ID.
   4. In the Secret Access Key field, enter the secret access key.
You can also connect to a storage service that uses the S3 protocol but is not hosted by Amazon AWS. The storage service must have a valid SSL certificate. To connect to an S3-compatible storage location, run `rladmin cluster config`:
rladmin cluster config s3_url <URL>
Replace `<URL>` with the hostname or IP address of the S3-compatible storage location.
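For example, if your S3-compatible service were reachable at the hypothetical hostname storage.example.com:

rladmin cluster config s3_url storage.example.com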
Google Cloud Storage
For Google Cloud subscriptions, store your backups in a Google Cloud Storage bucket:
1. Sign in to the Google Cloud Platform console.
2. Create a JSON service account key if you do not already have one.
3. Create a bucket if you do not already have one.
4. Add a principal to your bucket:
   1. In the New principals field, add the `client_email` from the service account key.
   2. Select "Storage Legacy Bucket Writer" from the Role list.
5. In the Redis Enterprise Software Cluster Manager UI, when you enter the backup location details:
   1. Select the Google Cloud Storage tab on the Path configuration dialog.
   2. In the Path field, enter the path of your bucket.
   3. In the Client ID field, enter the `client_id` from the service account key.
   4. In the Client Email field, enter the `client_email` from the service account key.
   5. In the Private Key ID field, enter the `private_key_id` from the service account key.
   6. In the Private Key field, enter the `private_key` from the service account key. Replace `\n` with new lines; see the example after this procedure.
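For example, if the service account key file is saved as service-account.json (a placeholder filename), you can print each field with jq; the -r flag outputs the private key with real newlines instead of `\n` escapes:

jq -r .client_id service-account.json
jq -r .client_email service-account.json
jq -r .private_key_id service-account.json
jq -r .private_key service-account.json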
Azure Blob Storage
To store your backup in Microsoft Azure Blob Storage, sign in to the Azure portal and then:
1. Create an Azure Storage account if you do not already have one.
2. Create a container if you do not already have one.
3. Manage storage account access keys to find the storage account name and account keys; see the example at the end of this section.
4. In the Redis Enterprise Software Cluster Manager UI, when you enter the backup location details:
   1. Select the Azure Blob Storage tab on the Path configuration dialog.
   2. In the Path field, enter the path of your container.
   3. In the Azure Account Name field, enter your storage account name.
   4. In the Azure Account Key field, enter the storage account key.
To learn more, see Authorizing access to data in Azure Storage.
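For example, if you use the Azure CLI, you can list the account keys for a storage account; the resource group and account names below are placeholders:

az storage account keys list --resource-group my-resource-group --account-name mystorageaccount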