Error from web console when creating a Redis Enterprise database: "memory limit is larger than "

Last updated 18 Apr 2024

Question

How do I troubleshoot an error returned from the web console when creating a Redis Enterprise database: "memory limit is larger than "?

Answer

Check the output of rladmin status shown below and refer to the Redis Enterprise documentation on memory management.

$ rladmin status
CLUSTER NODES:
NODE:ID   ROLE     ADDRESS   EXTERNAL_ADDRESS   HOSTNAME       SHARDS   CORES   FREE_RAM        PROVISIONAL_RAM   VERSION    STATUS
*node:1   master   x.x.x.x                      dd4d656fd6d1   2/100    4       4.69GB/9.73GB   2.02GB/7.98GB     6.2.8-41   OK

In this example, the error would occur when attempting to create a 4GB database. Free_RAM on this node is 4.69GB out of 9.73GB total. Redis Enterprise only allows a portion of the host RAM (roughly 70-80%) to be provisioned to databases, so that the node does not run out of memory; this is why the Provisional_RAM total is 7.98GB rather than 9.73GB. However, only 2.02GB of that provisional RAM is currently free, which is less than the 4GB requested. Remember that:

  • Free_RAM - The amount of RAM that is available for system use out of the total RAM on the host.
  • Provisional_RAM - The amount of RAM that is available for provisioning to databases out of the total RAM allocated for databases (a fraction of Free_RAM)
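In a multi-node cluster, you can check these figures for every node with rladmin status nodes. As a small sketch based on the example above (the exact column layout may differ between versions):

$ rladmin status nodes
NODE:ID   ROLE     ADDRESS   EXTERNAL_ADDRESS   HOSTNAME       SHARDS   CORES   FREE_RAM        PROVISIONAL_RAM   VERSION    STATUS
*node:1   master   x.x.x.x                      dd4d656fd6d1   2/100    4       4.69GB/9.73GB   2.02GB/7.98GB     6.2.8-41   OK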

You can also check the available memory at the OS level and make sure the system is not swapping by running:

free -m

In this example, everything looks fine at the OS level: free RAM is available and the system is not swapping. Redis Enterprise, however, will still only allow a fraction of that memory to be provisioned to databases, even though around 4.7GB is available:

$ free -m
              total        used        free      shared  buff/cache   available
Mem:           9964        4607        1353         341        4002        4720
Swap:          2047           0        2047
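If you want to confirm that the system is not actively swapping (rather than only checking the Swap totals above), one option is to watch the si/so columns reported by vmstat; values consistently above zero indicate pages being swapped in or out:

$ vmstat 1 5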

Solution

  1. Add more memory to the node(s).
  2. Free up existing memory by stopping other non-Redis processes running on the node.
  3. Alternatively, create a database with a memory limit smaller than the available provisional RAM (see the example after this list).
  4. Create a clustered database, so that smaller shards can be placed on different nodes.
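As an illustration of option 3, a database can also be created through the Redis Enterprise REST API (port 9443), where the memory limit is passed explicitly as memory_size in bytes. The credentials, database name, and 1GB size below are placeholders for this sketch; choose a memory_size that fits within the free provisional RAM reported by rladmin status:

$ curl -k -u "admin@example.com:password" \
       -H "Content-Type: application/json" \
       -X POST https://localhost:9443/v1/bdbs \
       -d '{ "name": "example-db", "type": "redis", "memory_size": 1073741824 }'

For a clustered database (option 4), the request would additionally enable sharding and set a shard count (for example "sharding": true and "shards_count": 2, plus a hashing policy such as shard_key_regex, depending on your configuration); refer to the REST API documentation for the exact fields required by your version.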