
High Availability Fail-Over Support for Instant Recovery

Updated: 2021-06-05


Now you can have TRUE HA clustering of FreePBX on AWS! Inspired by Sangoma's Local HA Cluster support for FreePBX, we have developed our very own HA Cluster solution, custom-designed from the ground up for AWS. This represents over a year of constant development and rigorous testing. Our solution provides the same level of robust fail-over support for all services as you will find in the Local FreePBX HA solution, and it leverages AWS services in place of the stringent hardware requirements you would find in a local HA setup. We use the Elastic File System (EFS) - the AWS managed NFS solution - in place of DRBD, and a Relational Database Service (RDS) Aurora MySQL instance in place of a physical shared SQL datastore, which also provides live cluster monitoring and management even from outside the cluster itself! By using EFS and RDS, you have the same near-infinite ability to scale up the size of your cluster to meet your organization's needs now and in the future!

The best part of our HA Cluster solution is the price: FREE! That's right; we do not charge for our HA Cluster solution. You only pay the normal charges associated with the second (Backup) node instance and the RDS and EFS instances AND, per Sangoma licensing policies, for a duplicate set of your paid Commercial module licenses from Sangoma for the Backup server node's Deployment ID. With Amazon's Reserved Instances and our Annual Subscriptions, you can save up to 56% on the instance charges for your cluster by paying for them annually instead of hourly; a savings that can entirely pay for the Backup node instance!

Now, before we get started, there are several VERY IMPORTANT things you must know:

  • If you will be converting an existing single-server production instance to a cluster, it is important to note that you will likely have to increase the instance size slightly over what you are currently using in order to accommodate the additional overhead involved in the automated cluster management and synchronization. For example, if you are running a t2.large, you may need to grow to a t2.xlarge instance size if you notice performance degradation. This generally isn't an issue in larger instance types with an abundance of vCPUs, as that is the resource most taxed by cluster management. 

  • The AWS Elastic File System (EFS), and therefore CLUSTER SUPPORT, IS ONLY AVAILABLE IN THE FOLLOWING REGIONS AT THIS TIME: us-east-1 (N.Virginia), us-east-2 (Ohio), us-west-1 (N.California), us-west-2 (Oregon), ca-central-1 (Canada), eu-west-1 (Ireland), eu-west-2 (London), eu-west-3 (Paris), eu-central-1 (Frankfurt), ap-southeast-1 (Singapore), ap-southeast-2 (Sydney), ap-northeast-1 (Tokyo), ap-northeast-2 (Seoul), ap-south-1 (Mumbai). There is no way to work around this limitation: per AWS policy and the inherent security limitations of the underlying NFS, an EFS file system is not accessible from outside its own region. AWS is expanding EFS support to all regions systematically, so if your region is not currently supported, it likely will be soon. We will update this page as new regions are added.

  • If you are trying to convert an existing production server into a cluster and this existing production server instance is NOT located in one of the above regions, you will need to move your instance to a supported region. You can find well-written instructions for accomplishing this here:

  • Plan your time appropriately!!! This process WILL take 1-5 hours to fully complete, depending mostly on whether this is a new clean cluster or an existing Production conversion, AND ALL SERVICES ON AN EXISTING PRODUCTION SERVER WILL BE DISRUPTED FOR THE ENTIRE FIRST HALF OF THE INSTALLATION PROCESS.

  • Given that this process will take so long, we strongly recommend that you launch into a tmux session before starting the cluster-install-wizard to protect you from accidental disconnects, which would interrupt the wizard and can cause problems. You can view our primer on tmux here:
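A minimal tmux workflow for this purpose looks like the following sketch (the session name cluster-install is an arbitrary example):

```shell
# Start a named tmux session BEFORE launching the cluster-install-wizard
tmux new -s cluster-install

# If your SSH connection drops mid-install, reconnect to the server
# and reattach to the still-running session:
tmux attach -t cluster-install

# If you forget the session name, list all sessions:
tmux ls
```

Because the wizard keeps running inside the tmux session even if your SSH client disconnects, an accidental drop no longer interrupts the installation.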


(We will walk you through each of these components below. This is just a summary for quick reference.)

To help you keep track of all of the cluster information as we create each component, here is a handy worksheet. Simply copy this into a text editor and fill in each field as you complete this guide. Then, when you are ready to begin the Cluster Install Wizard down below, you can copy/paste the information more easily from just one location. The fields below are in the same order as you'll supply them to the Cluster Install Wizard:
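For example, a worksheet covering the parameters used throughout this guide might look like the following (field order follows the order in which the Cluster Install Wizard requests them; the Mail* fields are optional and may be left blank):

```
AWSaccessKeyID=
AWSsecretAccessKey=
AWSregion=
ElasticIP=
ElasticAllocationID=
ClusterRDSusername=
ClusterRDSpassword=
ClusterRDSendpoint=
ClusterEFSendpoint=
MailHost=
MailUser=
MailPass=
MailFrom=
MailTo=
```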


NOTE: If you wish to receive email alerts from the Cluster server nodes (strongly recommended!!), you will need to know the SMTP server Hostname/IP, username, and password for an appropriate account on your company's email server/service. If you use Google's GMail or GSuite for Business/Non-Profit AND you use multi-factor authentication on the sending account, the MailPass= MUST be a Google App Password. MailFrom= usually must be the same as the MailUser= unless there are multiple aliases assigned to the account. MailTo= can be any normal email recipient address and can even include multiple addresses separated by a comma (,), but do NOT include any spaces between the addresses. If you do NOT wish to receive email alerts, or you wish to skip this for now (you can add it at any time later), simply leave the MailHost= parameter blank during the Cluster Install Wizard and the rest of the Mail parameters will be skipped.


GENERAL: AWS Access Key and Secret Key
First, we will create a set of IAM user credentials for AWS so that your cluster server nodes can manage themselves and the ElasticIP assignment automatically. You'll start by going to the following AWS page:

On this page, you will click Add User. Enter a Username (this is for your reference only) and choose the Programmatic Access option. Then select the Attach Existing Policies Directly tab and search for the AmazonEC2FullAccess permission. Once you click the Create User button on the last page, you MUST save the Access Key ID and Secret Access Key (click the 'show' link) for use during the cluster setup. You can also download this information in csv format for your records. EITHER WAY, YOU MUST BE CERTAIN TO SAFEGUARD THIS INFORMATION AS IT GRANTS FULL ACCESS TO YOUR AWS EC2 CONSOLE AND WOULD BE VERY DANGEROUS IN THE WRONG HANDS!!! If this information does become compromised in the future, you can return to this page, delete the user, create a new one, and reprogram your cluster with the new keys. Copy/paste this information into your Worksheet for the AWSaccessKeyID= and AWSsecretAccessKey= parameters.
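If you prefer the AWS CLI to the console, a sketch of the same IAM setup follows (the user name cluster-manager is an arbitrary example; run these commands from an account that already has IAM administration rights):

```shell
# Create the IAM user the cluster nodes will authenticate as
aws iam create-user --user-name cluster-manager

# Attach the AmazonEC2FullAccess managed policy directly to the user
aws iam attach-user-policy \
    --user-name cluster-manager \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess

# Generate the key pair; record the AccessKeyId and SecretAccessKey
# from the JSON response immediately -- the secret is shown only once
aws iam create-access-key --user-name cluster-manager
```

The AccessKeyId and SecretAccessKey values in the last command's output are what go on your Worksheet as AWSaccessKeyID= and AWSsecretAccessKey=.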

NOTE: If you are creating multiple clusters across different AWS regions (or even within the same region), you only need to create one set of IAM Keys; this one account can be used by all of your clusters across all of your regions. However, if you ever revoke this key pair (and rotating keys on a schedule is generally recommended for optimal security), you will have to update all of your clusters simultaneously. Using separate key pairs per cluster or per region minimizes the number of clusters that need to be changed at once when you rotate keys.

The remaining requisite components are AWS Region specific!
From this point forward, you must commit to using one of the following AWS Regions for your cluster and all components must be housed within this single region. Make note of this region ID on your Worksheet for the AWSregion= parameter:

  • us-east-1 == N.Virginia

  • us-east-2 == Ohio

  • us-west-1 == N.California

  • us-west-2 == Oregon

  • ca-central-1 == Canada

  • eu-west-1 == Ireland

  • eu-west-2 == London

  • eu-west-3 == Paris

  • eu-central-1 == Frankfurt

  • ap-northeast-1 == Tokyo

  • ap-northeast-2 == Seoul

  • ap-southeast-1 == Singapore

  • ap-southeast-2 == Sydney

  • ap-south-1 == Mumbai


NOTE: GovCloud support is coming soon to US-Gov-West!

Again, if you are trying to convert an existing production server into a cluster and this existing production server instance is not located in one of the above regions, you will need to move your instance to a supported region. You can find well-written instructions for accomplishing this here:

If you are adding cluster support to an existing production server, plan your time appropriately!!! This process WILL take 2-5 hours to fully complete depending on how much spool data is currently stored AND ALL SERVICES ON YOUR EXISTING PRIMARY SERVER WILL BE DISRUPTED FOR THE ENTIRE FIRST HALF OF THE INSTALLATION PROCESS!



Elastic IP Allocation

Once you have decided on your cluster region, we need information regarding the Elastic IP you wish to use with your cluster. If you are converting an existing Production instance, you should already have an Elastic IP assigned to it. If this is a brand new setup, you will need to create an Elastic IP for cluster use. Go to your EC2 Console and choose Elastic IPs under Network & Security or click here:

Adding an Elastic IP is very simple. Click the Allocate New Address button, then click Allocate on the next page. That's it! You will be told if the allocation is successful. This can fail if you already have 5 Elastic IPs allocated in this region; you will need to contact AWS at the link they provide if you are denied for reaching this limit. AWS will approve most requests for more Elastic IPs, but can deny requests in regions with limited availability or if abuse of allocation (hoarding) is suspected. HOWEVER, IPv4 ADDRESS AVAILABILITY IS LIMITED INTERNET-WIDE AND YOU SHOULD REQUEST ONLY AS MANY IPv4 ADDRESSES AS YOU ABSOLUTELY NEED! You can learn more about IPv4 address scarcity and the struggle to move the internet to IPv6 here:

With an Elastic IP allocated for your cluster, you'll want to make note of the following information from the main Elastic IPs page: Elastic IP and Allocation ID. Copy/paste this information into your Worksheet (ElasticIP= and ElasticAllocationID=) for easy reference during install.
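The console steps above map to a single CLI call, which returns both Worksheet values at once (the --region value below is only an example; substitute your AWSregion=):

```shell
# Allocate a new Elastic IP in your chosen cluster region
aws ec2 allocate-address --domain vpc --region us-east-1

# The JSON response contains "PublicIp" (your ElasticIP=) and
# "AllocationId" (your ElasticAllocationID=) for the Worksheet
```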

Cluster Services Security Group

The Cluster Services Security Group (SG) is used to simultaneously secure and interconnect all components of your cluster so they can freely communicate with one another. This SG is very basic, as it points only to itself. However, in order to create a self-sourced entry, the SG must first exist. Then, you will edit it to add the self-sourced entry. Go to the Security Groups page under Network & Security or click here:

First, click Create Security Group, enter the Name "Cluster Services" and a desired Description and then click Create. Once created, you'll want to Edit Inbound Rules and add a single entry with All Traffic set as the Type. In the Source field Custom option, type "sg" and your existing Security Groups will be listed. Choose the Cluster Services SG and it will fill in the rest of the ID for you. Save the rule.

You'll use this Cluster SG when we create the RDS, EFS and EC2 instances below.
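A CLI sketch of the same two-step process (create the group first, then add the self-sourced rule), assuming your default VPC; the sg-0123... ID is a placeholder for the GroupId returned by the first command:

```shell
# Step 1: create the (initially empty) security group
aws ec2 create-security-group \
    --group-name "Cluster Services" \
    --description "Interconnects all FreePBX cluster components"

# Step 2: add the self-sourced All Traffic rule, using the GroupId
# returned above as BOTH the target group and the traffic source
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol=-1 \
    --source-group sg-0123456789abcdef0
```

Pointing the rule's source at the group itself is what lets every instance attached to the SG talk freely to every other member while remaining closed to everything else.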

Relational Database System (RDS) Aurora MySQL Instance

The RDS Instance will store all Asterisk and FreePBX DBs, including your CDR/CEL, as well as the main Cluster Management DB. This allows both members of the cluster to share configuration information in real-time. Setting up a new RDS instance is also rather easy. You only have to concern yourself with the parameters and options we outline below. You should leave ALL unmentioned fields at their defaults for optimal results. Start by accessing the RDS management section of the EC2 Console by clicking here:

Click Launch DB Instance:

  1. Select Engine: Leave the default Aurora and MySQL 5.6 options

  2. Specify DB Details:

    • -Instance Specifications-
      • DB Instance Class: If your EC2 Instances will be of a Medium or Large size of any type (t2, m4, etc.), you may use one of the db.t2 Classes. If you are using a larger instance size on EC2, use a db.r3 or db.r4 Class. This does NOT have to match your EC2 Instance size (Large, XLarge, etc.) but may have to be LARGER than the EC2 Instance size, especially if you have a high call volume, use queues, or engage in heavy amounts of call recording.

      • DB Instance Identifier: Give this a name that makes sense.

    • -Settings-

      • Master Username: This can be whatever you want, but it should not be something as easy to guess as "admin". You will want to make a note of this on your Worksheet under ClusterRDSusername=

      • Master Password: This should be a secure password of more than 8 characters, and it must NOT include any of the following special characters: [ ] { } \ | / ; " ' & # or @. You will want to make a note of this on your Worksheet under ClusterRDSpassword=

  3. Configure Advanced Settings:

    • -Network & Security-
      • Public Accessibility: YES - This ensures full proper IP access for your cluster. It is protected by the Cluster SG and will NOT be visible on the internet.

      • VPC Security Groups: You will change this to Choose existing..., REMOVE the default entry listed, then add the Cluster Services SG you created earlier.

    • -Maintenance-

      • Auto Minor Version Upgrade: Change this to Disable...

      • Maintenance Window: Change this to Select Window and specify a day of the week and time of the day to perform minor maintenance operations on your RDS instance.

  4. Click the Launch DB Instance button: It can take 5-10 minutes for the RDS instance to become fully ready. You can monitor the progress and obtain the final piece of RDS information for your Worksheet by clicking the View DB Instance Details button on the confirmation page. Once RDS is live, the Endpoint in the -Connect- section will display the endpoint address of the server. Copy this to your Worksheet for the ClusterRDSendpoint= parameter.
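For reference, the equivalent Aurora setup via the CLI takes two calls, since an Aurora MySQL deployment is a DB cluster plus at least one instance. The identifiers, password, instance class, and security-group ID below are example placeholders; substitute your own values:

```shell
# Create the Aurora MySQL cluster (holds the data and the endpoint)
aws rds create-db-cluster \
    --db-cluster-identifier freepbx-cluster-db \
    --engine aurora \
    --master-username clusteradmin \
    --master-user-password 'YourSecurePassword' \
    --vpc-security-group-ids sg-0123456789abcdef0

# Add the actual DB instance to the cluster, publicly accessible
# and with auto minor version upgrades disabled, per the guide
aws rds create-db-instance \
    --db-instance-identifier freepbx-cluster-db-1 \
    --db-cluster-identifier freepbx-cluster-db \
    --engine aurora \
    --db-instance-class db.t2.medium \
    --publicly-accessible \
    --no-auto-minor-version-upgrade
```

Once the cluster is available, the Endpoint field reported by aws rds describe-db-clusters is the value for your ClusterRDSendpoint= parameter.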

Elastic File System (EFS)

The Elastic File System will house a copy of all of the synchronized files between the cluster nodes. In the Cluster Management Guide, you can learn how to add virtually any custom directory on your server nodes to be synchronized, ensuring that even your custom applications can be protected by fail-over to the Backup node. To get started, navigate to the EFS service on the EC2 Console or click here:

Click the Create File System button, provide a Name for the EFS instance, then click the CUSTOMIZE button. Leave Automatic Backups enabled to help protect your cluster data. Change the Lifecycle Management from 30 Days to 7 Days. Change the Performance Mode to Max I/O. Then click Next.


On the Network Access step, remove all Default Security Groups (blue boxes), then choose the Cluster Services Security Group you created earlier from the drop-down for each of the Availability Zones listed. On the File System Policy step, leave everything default (unchecked and blank). Finally, review and click Create File System on Step 3. 

After the File System is created, you can click the Name in the list, then the Attach button to display the EFS Endpoint address (highlighted in orange in the NFS client command line example). You will want to copy this to your Worksheet for the ClusterEFSendpoint= parameter.
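The same EFS setup can be sketched with the CLI; the file-system, subnet, and security-group IDs below are placeholders, and a create-mount-target call is needed for each Availability Zone subnet your cluster uses:

```shell
# Create the file system in Max I/O performance mode
aws efs create-file-system \
    --creation-token freepbx-cluster-efs \
    --performance-mode maxIO \
    --tags Key=Name,Value=freepbx-cluster-efs

# Create one mount target per Availability Zone subnet, attached
# to the Cluster Services security group instead of the default SG
aws efs create-mount-target \
    --file-system-id fs-0123456789abcdef0 \
    --subnet-id subnet-0123456789abcdef0 \
    --security-groups sg-0123456789abcdef0
```

The resulting EFS endpoint for your ClusterEFSendpoint= parameter takes the form fs-xxxxxxxx.efs.REGION.amazonaws.com, matching the address shown by the console's Attach button.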