We have identified that the default EFS performance settings can be insufficient for HA Cluster use, creating a significant bottleneck given the sheer number of file requests both nodes send to the EFS store to keep themselves synced in near real time.
This bottleneck adds latency to EFS sync operations and needlessly inflates CPU load due to I/O wait.
This update to the cluster components introduces the ability to fully migrate your data from your current EFS store to a new EFS store with the proper performance parameters.
We STRONGLY recommend that all HA Cluster users update their cluster components and then migrate to an EFS store with the Performance Mode set to Max I/O.
The Max I/O Performance Mode setting for EFS does NOT increase the cost of the EFS store; it simply changes the prioritization of resources to the type of I/O the cluster taxes the most.
No change to the default Bursting Throughput Mode is required.
Unfortunately, AWS provides no way to change the EFS Performance Mode after a store has been created, so you must migrate to a new EFS store.
Rest assured, we have a virtually painless process prepared for you! This guide will walk you through the following:
The creation of a new EFS store with the proper Max I/O Performance Mode set
Update of your cluster components
Use of cluster-mgr to migrate your data from the old EFS to the new EFS and update the cluster configuration on the Primary node
Use of cluster-mgr to update the EFS endpoint information on the Backup node
Deletion of your old EFS store
Keep the following considerations in mind:
The file copy process between EFS stores WILL take several hours, depending on the number of voicemails and call recordings stored on the Cluster. Consider using the S3 Sync and Auto File Deletion Services in SmartUpgrade EXPERT-MODE to archive and remove excess or old call recordings from your Cluster before proceeding
While this will not place any extra load on your Running cluster node (it actually incurs less load than normal cluster-agent operations), it WILL leave the cluster in a SEMI-vulnerable state for the duration of the file copy: a fail-over would have to be initiated manually and might result in lost call recordings or voicemails. For an otherwise healthy HA Cluster, this is considered a minor risk.
To be safe, you will want to complete this during your period of least call activity.
Call Services on the Primary/Running node will NOT be affected in any way
You must be able to connect to both Cluster nodes via SSH as part of this process, as commands will need to be run on both.
You will perform the actual migration of data ONLY ON THE PRIMARY/RUNNING NODE and skip the migration step on the Backup node. DO NOT PERFORM THE DATA MIGRATION VIA THE BACKUP/STANDBY NODE OR DATA LOSS MAY OCCUR!!!
Again, Call Services on the Running node will NOT be affected in any way.
Do NOT attempt to perform this migration on a Cluster that is currently in a broken state: Primary node down/Backup node serving calls, Backup node missing, cluster-agent won't start, errors being reported, random fail-overs occurring, etc. Run cluster-mgr show-agent-log to see how your Cluster is behaving.
As always, we recommend the use of tmux to ensure persistent SSH sessions for long processes like these in case you get disconnected
YOU MAY WISH TO EMPLOY PROVISIONED THROUGHPUT MODE TEMPORARILY ON BOTH THE NEW AND OLD EFS IF YOU HAVE MORE THAN 10GB STORED THERE. THIS WILL DRASTICALLY SHORTEN THE COPY TIME BUT CARRIES THE FOLLOWING CAVEATS:
PROVISIONED THROUGHPUT IS VERY EXPENSIVE AND SHOULD NOT BE USED CONTINUOUSLY ON A CLUSTER!!!
YOU SHOULD ONLY ENABLE PROVISIONED THROUGHPUT FOR A SINGLE DAY (AWS ONLY LETS YOU CHANGE THROUGHPUT MODES ONCE PER 24 HOURS!!!) AND THEN RETURN IT TO BURSTING MODE
A 100 MB/s PROVISIONED SPEED SHOULD BE SUFFICIENT TO DRASTICALLY SPEED UP THIS PROCEDURE AND INCURS A COST OF ABOUT US$20 PER DAY PER EFS IT IS ENABLED ON, WHICH CAN QUICKLY ADD UP TO US$600/MONTH PER EFS IF LEFT ON FOR THE ENTIRE BILLING CYCLE!!!
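If you do opt for temporary Provisioned Throughput, both the mode change and the cost math can be sanity-checked from the shell. The snippet below is a sketch only: the filesystem ID is a placeholder, and the US$6 per MB/s-month rate is an assumption based on typical us-east-1 pricing, so check the current EFS pricing for your region.

```shell
# Toggling throughput modes via the AWS CLI (placeholder filesystem ID;
# remember AWS only allows one throughput-mode change per 24 hours):
#   aws efs update-file-system --file-system-id fs-0123456789abcdef0 \
#       --throughput-mode provisioned --provisioned-throughput-in-mibps 100
#   ...and once the copy completes:
#   aws efs update-file-system --file-system-id fs-0123456789abcdef0 \
#       --throughput-mode bursting

# Rough cost check, assuming ~US$6 per MB/s-month (an assumed us-east-1 rate):
RATE_PER_MBPS_MONTH=6
PROVISIONED_MBPS=100
DAYS_IN_MONTH=30

MONTHLY=$((PROVISIONED_MBPS * RATE_PER_MBPS_MONTH))  # cost per EFS per month
DAILY=$((MONTHLY / DAYS_IN_MONTH))                   # cost per EFS per day

echo "Provisioned at ${PROVISIONED_MBPS} MB/s: ~US\$${DAILY}/day, ~US\$${MONTHLY}/month per EFS"
```

This reproduces the figures quoted above: roughly US$20/day, or US$600 per EFS if left enabled for a full month.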
Migrating to a New EFS with Max I/O enabled
First, you need to create a new EFS store, in the SAME AWS Region as your current EFS store and the Instances of your Cluster, via the AWS Console:
Change your Security Groups by DELETING the "default" and adding your proper Cluster Services group:
Give your new EFS store a Name, ideally distinguishing it from your current Cluster EFS by including "MaxIO" in the name:
Scroll down and change the Performance Mode to Max I/O, then click Next Step:
Confirm that you have, indeed, selected Max I/O Performance Mode, as this CANNOT be changed after creation (that's why you're here!), then click Create File System:
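If you prefer scripting over the Console, the same store can be created with the AWS CLI. This is a sketch only: the region, filesystem, subnet, and security group IDs below are placeholders for your own values, and maxIO is the Performance Mode value that cannot be changed after creation.

```shell
# Create the new EFS store with the Max I/O Performance Mode
# (the region and Name tag below are example values):
aws efs create-file-system \
    --performance-mode maxIO \
    --throughput-mode bursting \
    --tags Key=Name,Value=Cluster-EFS-MaxIO \
    --region us-east-1

# Create a mount target in your cluster's subnet, attaching the
# Cluster Services security group rather than the default
# (all IDs are placeholders):
aws efs create-mount-target \
    --file-system-id fs-0123456789abcdef0 \
    --subnet-id subnet-0123456789abcdef0 \
    --security-groups sg-0123456789abcdef0 \
    --region us-east-1
```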
Initially, you will see the Mount Target State say Creating while the EFS store is set up. Wait about 3 minutes and then refresh the page...
Once you see the Mount Target State show Available, you may proceed...
Make note of the new EFS DNS name, as we'll need to supply this to the Cluster Nodes next:
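The DNS name always follows the pattern <filesystem-id>.efs.<region>.amazonaws.com. If you want a quick sanity check before handing it to cluster-mgr, something like the following works (the filesystem ID and region shown are hypothetical examples):

```shell
# Hypothetical example values -- substitute your real filesystem ID and region.
FS_ID="fs-0123456789abcdef0"
REGION="us-east-1"
EFS_DNS="${FS_ID}.efs.${REGION}.amazonaws.com"

# Verify the DNS name matches the expected EFS pattern before using it.
if echo "${EFS_DNS}" | grep -Eq '^fs-[0-9a-f]+\.efs\.[a-z0-9-]+\.amazonaws\.com$'; then
    echo "DNS name looks valid: ${EFS_DNS}"
else
    echo "DNS name looks malformed: ${EFS_DNS}" >&2
fi
```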
Now you will connect to your Primary/Running Node via the Elastic IP Address of your cluster. If you haven't enabled tmux-on-login, go ahead and fire up a tmux session so you don't experience any problems should your SSH session get disconnected. Tmux will retain the session after disconnect, continuing to run commands and allowing you to reconnect to the session in progress. Then run the following commands to update cluster components and change EFS Endpoints, making sure to replace <DNSname> with your new EFS DNS Name from above:
cluster-mgr upgrade-cluster
cluster-mgr ClusterEFSendpoint <DNSname>
You will be asked if you wish to migrate data from the old EFS to the new EFS. You will want to answer "yes" to this question on the first node, so that your data is relocated. This process will take a long while to complete. If you have a large deployment, we're talking HOURS! Trust us, the performance gains are well worth this wait.
STOP: DO NOT PROCEED UNTIL THE PRIMARY NODE HAS FINISHED ("EFS DATA MIGRATION SUCCESSFUL")!
Once the Primary node has completed, you'll connect to the Backup node via its temporary IP address to run the same commands, EXCEPT you will answer "no" when asked if you want to migrate data. This will only take a couple of minutes to complete on the Backup node, as the cluster-agent still needs to be restarted to switch Endpoints:
cluster-mgr upgrade-cluster
cluster-mgr ClusterEFSendpoint <DNSname>
Your EFS Migration is now complete! You will now place the Cluster back into Normal mode. This can be done from either node:
You may now delete your OLD EFS Store from the AWS Console. As always, if you have any questions or run into any problems, we're always here to help! Simply click on the blue button floating on the right of this page and contact us. If you have Paid Support Credits, you may also call the Telephone Support line for immediate assistance.