Site-to-site Virtual Private Network (VPN) has been used to connect distributed networks for decades. This post describes how to use a VPC VPN Gateway to connect an on-premises (enterprise) network to the IBM Cloud VPC in a transit hub-and-spoke architecture:
Each spoke can be operated by a different business unit or team. The team can allow enterprise access to VPC resources like Virtual Server Instances running applications or Red Hat OpenShift on IBM Cloud clusters. Private enterprise access to VPE-enabled services, like databases, is also possible through the VPN gateway. With this approach, you enjoy the ease of use and elasticity of cloud resources, pay for just what you need, and access the resources securely over VPN.
The Centralize communication through a VPC Transit Hub and Spoke architecture tutorial was published a few months ago. The companion GitHub repository was modified to optionally support a policy-mode VPC VPN gateway to replace the IBM Direct Link simulation.
Multi-zone region (MZR) design
The transit hub design integrates with IBM multi-zone regions (MZRs), and the VPN Gateways are zone-specific. After some careful study, the zonal architecture shown below was implemented. It shows only two zones but can be expanded to three:
- A VPN Gateway is deployed in each zone. Each enterprise CIDR block is connected to the VPN Gateway in a specific cloud zone. Notice the enterprise CIDR block is narrow: 192.168.0.0/24. The cloud CIDR block is broad, covering the entire cloud (all VPCs and all zones): 10.0.0.0/8.
- A VPC Address Prefix representing the enterprise zone is added to the transit VPC. See how phantom address prefixes allow the spokes to route traffic to the enterprise in the tutorial.
- A VPC ingress route table is added to the transit VPC as described in this example. It will automatically route all ingress traffic from the spokes heading to the enterprise through the VPN gateway appliances.
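The pieces above can be sketched in Terraform with the IBM provider. This is a minimal illustration, not the companion repository's actual code: resource names, the subnet reference, and the `var.*` inputs are hypothetical, and the connection attributes shown (`local_cidrs`/`peer_cidrs`) may differ by provider version.

```hcl
# Phantom address prefix: represents the enterprise zone 1 CIDR in the
# transit VPC so spokes learn a route toward the enterprise.
resource "ibm_is_vpc_address_prefix" "enterprise_zone1" {
  name = "enterprise-zone1"
  vpc  = ibm_is_vpc.transit.id
  zone = "us-south-1"
  cidr = "192.168.0.0/24" # narrow: a single enterprise zone
}

# Policy-mode VPN gateway in the transit VPC zone 1 subnet.
resource "ibm_is_vpn_gateway" "zone1" {
  name   = "transit-vpn-zone1"
  subnet = ibm_is_subnet.transit_zone1.id
  mode   = "policy"
}

# Connection: narrow enterprise CIDR on the peer side, broad cloud CIDR
# (all VPCs, all zones) on the local side, as in the diagram.
resource "ibm_is_vpn_gateway_connection" "zone1" {
  name          = "transit-vpn-zone1-conn"
  vpn_gateway   = ibm_is_vpn_gateway.zone1.id
  peer_address  = var.enterprise_vpn_public_ip # hypothetical variable
  preshared_key = var.preshared_key            # hypothetical variable
  local_cidrs   = ["10.0.0.0/8"]      # broad: the entire cloud
  peer_cidrs    = ["192.168.0.0/24"]  # narrow: enterprise zone 1
}
```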
Follow the steps in the TLDR section of the companion GitHub repository. When editing the config_tf/terraform.tfvars file, make sure the following variables are configured:
enterprise_phantom_address_prefixes_in_transit = true
vpn = true
firewall = false
Also consider setting make_redis = true to allow provisioning Redis instances for the transit and spoke with associated Virtual Private Endpoint Gateway connections. If configured, even the private Redis instance in the spoke can be accessed from the enterprise. The details of private DNS configuration and forwarding are covered in this section of part 2 of the tutorial.
When all of the layers have been applied, run the tests (see special notes in the GitHub repository README.md on configuring Python if needed). All the tests should pass:
pip install -r requirements.txt
pytest
A note on enterprise-to-transit cross-zone routing
The initial design worked well for enterprise <> spoke traffic. Enterprise <> transit traffic within the same zone also worked. But additional configuration is required to resolve enterprise <> transit cross-zone routing failures:
Without the additional cross-zone VPN Gateway Connections, the default route table in the transit VPC had no return routes to the cross-zone enterprise (see the red line). VPN Gateway Connections automatically add routes to the default route table in the transit VPC, but only in the zones containing the VPN Gateway. In the diagram above, the worker 10.2.0.4 had no route back to 192.168.0.4.
The extra cross-zone connections for the transit VPC zones resolved this issue, as shown by the blue line.
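The fix can be sketched as one extra policy-mode connection per transit-zone gateway, whose peer CIDR is an enterprise CIDR homed in the other zone. Again, this is an illustrative sketch, not the repository's code; resource names and `var.*` inputs are hypothetical, and it assumes a zone 2 gateway defined like the zone 1 gateway.

```hcl
# Extra cross-zone connection: created on the zone 2 gateway, it adds a
# route for the zone 1 enterprise CIDR to the transit VPC default route
# table in zone 2, giving workers there (e.g. 10.2.0.4) a return path
# to 192.168.0.0/24.
resource "ibm_is_vpn_gateway_connection" "zone2_cross" {
  name          = "transit-vpn-zone2-cross-conn"
  vpn_gateway   = ibm_is_vpn_gateway.zone2.id   # assumed zone 2 gateway
  peer_address  = var.enterprise_vpn_public_ip  # hypothetical variable
  preshared_key = var.preshared_key             # hypothetical variable
  local_cidrs   = ["10.0.0.0/8"]
  peer_cidrs    = ["192.168.0.0/24"]  # enterprise CIDR homed in zone 1
}
```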
Site-to-site VPN might be just the technology you need to connect your enterprise to the IBM Cloud VPC in a multi-zone region. Using the steps described in this post, you can minimize the number of VPN Gateways required to fully connect the enterprise to the cloud. Enjoy the private connectivity to VPC resources like Virtual Server Instances and resources from the catalog that can be accessed through a Virtual Private Endpoint Gateway.
Learn more about IBM Cloud VPC