An interesting by-product of the 2.1.3 release

My great-grandma lived 93 years, long enough to have some great-great-grandkids. She immigrated from Germany during WW I, had a really interesting life, and was full of wise sayings. As I think about cloud computing, which she was never around to see, two of her sayings come to mind. “All the fun’s in the gettin’ there” is sage advice for people rushing through life, but seems completely off for cloud computing, unless cloud migration is your profession or hobby. Mistakes in cloud networking are super hard to isolate from apps, diagnose, and debug. Not fun.

On the other hand, “Nothing good ever came easy” applies perfectly. In cloud migration projects, nobody thinks about the huge network redesign costs in money, time, and security. There are many promises but few examples of phased app migration to cloud environments. Try to find any that don’t involve significantly refactoring (if not entirely rewriting) the app in the process. Rewriting the app (or re-architecting the network) is not migration.

Running in a lightweight VM like a container helps, as does container orchestration, but when it comes to east-west communication within a single container host, you just don’t have a lot of control (after all, you may not have written all those apps yourself…who knows how they might affect each other, accidentally or on purpose, if allowed onto the same network, one you can’t segment with familiar tools?). Perhaps even more challenging is that you must typically “lift and shift” a large complement of servers/services in one big step so they can communicate, because spreading them north-south over a WAN seems unthinkable.

Networking inside the cloud is a great big mystery. Cloud providers present an interface to virtual workloads that makes them “feel like” they are on an L2 or L3 network. We at least know that AWS and Azure keep the details private (presumably so they can change proprietary aspects later, and possibly to avoid certain exploits). In the cloud, certain networking “givens” do not apply, like whether the software-defined L2/L3 “switch” honors ARP or routing requests for IP addresses outside the defined subnet. In other words, it’s hard to make a NAT gateway in the VPC that captures your protected device traffic. Fortunately, we have figured out how to do this in all three major cloud environments.
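For the curious, here is a minimal sketch of what the “capture” trick looks like in AWS specifically (boto3, with made-up instance and route table IDs and an arbitrary region): you have to turn off the source/destination check on the gateway instance and point the protected subnet’s default route at it. Azure and Google have their own, different knobs for the same idea.

```python
# Illustrative sketch only (not our product code): the two AWS-specific steps
# needed before an EC2 instance can act as an in-VPC gateway that captures
# other instances' traffic. All IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

GATEWAY_INSTANCE_ID = "i-0123456789abcdef0"       # hypothetical gateway instance
PRIVATE_ROUTE_TABLE_ID = "rtb-0123456789abcdef0"  # route table of the protected subnet

# 1. By default AWS drops packets an instance forwards on behalf of others;
#    disabling the source/dest check lets the gateway forward that traffic.
ec2.modify_instance_attribute(
    InstanceId=GATEWAY_INSTANCE_ID,
    SourceDestCheck={"Value": False},
)

# 2. Point the protected subnet's default route at the gateway instance, so
#    outbound traffic is captured by it instead of going straight to an IGW.
ec2.create_route(
    RouteTableId=PRIVATE_ROUTE_TABLE_ID,
    DestinationCidrBlock="0.0.0.0/0",
    InstanceId=GATEWAY_INSTANCE_ID,
)
```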

The latest 2.1.3 release of Tempered Networks Conductor and HIPservices is a profound step forward for organizations trying to migrate existing applications to the cloud. We make it easy to create segments from the existing larger network, and to control precisely which systems can be in a given segment. The really magical part, though, is that a Tempered network can span data centers, service providers, and link media such as wires, WiFi, and cellular. Essentially, your connected things feel like they are on an unrestricted LAN even though they may be separated over the WAN, behind NAT or other tools designed to create perimeters that we Zero Trust-ers know don’t work anymore (we have the tools to prove it).

This flexibility lets you migrate services from old to new networks, locations, and environments without having to worry about protocol semantics or network security. The entire virtual LAN is encapsulated and encrypted for you, orchestrated to all locations, and easy to manage by people who are not security or networking experts. And best of all, data path communication is point to point (and PCI compliant) rather than backhauled (and in some cases decrypted) through a proprietary cloud service that you must “just trust”.

Every time our engineering team adds a new gateway platform, we bring you the same point-and-click simplicity we did for physical things (we spun out from a Boeing project protecting factory tools and robots on WiFi) and private virtual environments (ESXi, KVM, Hyper-V, etc.). We protect things that can’t be altered or can’t protect themselves, and we can make bold claims about being the best at handling the “last mile” (or what I call the “forgotten last mile” for OT/SCADA networks -- they have to wait for the IT organization to decide their needs are a priority, and worth the risk).

In 2.1.3 we support AWS, Azure, and Google clouds, along with several flavors of Linux HIPserver (hint: run that in your container). Deploy HIPswitches in your legacy infrastructure, and also in the cloud(s) of your choice, and we can make it appear that your physical or virtual systems are on the same LAN as your cloud instances. You can define as many segments as you want, as granular as you want, and the HIPservices will figure it out. You only need to “burn” one of your precious elastic/public/external IP addresses on the HIPservice gateway. Because we use HIP, IP address or device location changes have zero effect on the communication/security policy you define in Conductor. Because HIPswitch gateways do NAT and SNAT, you can connect multiple VPCs or other environments with overlapping IP addresses into a single LAN. If you are already using Tempered, you know that the security policy IS the communication policy, and is defined in terms of your devices or servers, not their IP networking details.
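If the overlapping-address claim sounds too good to be true, here is a toy illustration (plain Python, not HIPswitch internals) of why 1:1 translation at the gateway keeps two VPCs with identical CIDRs distinguishable; the prefixes below are made up for the example.

```python
# Conceptual sketch only: two VPCs both use 10.0.0.0/24, but each is given its
# own translated prefix at the gateway, so their hosts never collide.
from ipaddress import ip_address, ip_network

TRANSLATIONS = {
    "vpc-east": (ip_network("10.0.0.0/24"), ip_network("192.168.10.0/24")),
    "vpc-west": (ip_network("10.0.0.0/24"), ip_network("192.168.20.0/24")),
}

def translate(vpc: str, addr: str) -> str:
    """Map a host's real (overlapping) address to its unique translated address."""
    real, mapped = TRANSLATIONS[vpc]
    offset = int(ip_address(addr)) - int(real.network_address)
    return str(ip_address(int(mapped.network_address) + offset))

print(translate("vpc-east", "10.0.0.5"))  # -> 192.168.10.5
print(translate("vpc-west", "10.0.0.5"))  # -> 192.168.20.5
```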

Did I say clouds? Yep, plural. You can easily connect multiple VPCs (aka Virtual Networks) together with cloud HIPservices. Why would you want to do that? One obvious use case is DR (disaster recovery). Sure, you could pick multiple regions with one cloud provider, but one outage can span multiple regions, or take out your primary and DR site when a common infrastructure component fails. A single cloud vendor is only acceptable now because the status quo says it would be too complicated (thus error-prone and insecure) to manage the peculiarities of multiple vendors. Forget the scripts; just look at the screenshots in this…I rest my case.

The user experience of a single cloud portal (pick any of AWS, Azure, Google) is … not simple. You need to be fairly technical and patient (and have a Rosetta Stone) to deploy and manage anything significant through their UI. This problem helped spawn numerous companies (like one down the street from us in Seattle), along with their own tooling and/or clouds, and countless armies of consulting experts to help you figure it out. So painful. Don’t expect the cloud providers to change soon; there’s natural vendor lock-in for DevOps teams that know one cloud environment better than the others.

Tempered is all about simple. We tried to deploy HIPservices instances in VPCs using the cloud provider portals, and it was way too complicated for us (despite several of us having innovated in networking for 20+ years). How could we expect our customers, who have better things to do than waste brain cells learning complex UIs and single-vendor concepts, to figure out cloud networking? Heck, we even had a major city fire department ask us to clarify what we meant by “AWS” (but they can sure fight fires). So in 2.1.3 we orchestrated the entire thing inside Conductor! One simple workflow that applies to all cloud providers.

You create a provider for each of your cloud(s) with your credentials (e.g. project ID, client email, API secret). Then you create a HIPservice template for each provider; we put a lot of work into retrieving all valid/necessary choices for machine type, region, zone, image ID, network configuration, etc., from each provider’s API, so you don’t have to!
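To give a flavor of what that saves you, here is the sort of per-provider lookup a template builder has to perform behind the scenes, sketched for AWS with boto3 (the region is an arbitrary example, and Azure and Google each answer the same questions through entirely different APIs).

```python
# Sketch of the per-provider lookups needed to populate template choices
# (AWS/boto3 shown). Credentials come from your normal AWS configuration.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Valid regions for this account
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

# Availability zones within the chosen region
zones = [z["ZoneName"] for z in ec2.describe_availability_zones()["AvailabilityZones"]]

# Image IDs you own (the "image ID" choice in a template)
images = [i["ImageId"] for i in ec2.describe_images(Owners=["self"])["Images"]]

print(regions, zones, images[:5])
```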

Once your providers and templates are set up, all you need to do to protect a VPC is click Create HIPservice on the Conductor HIPservice page, select the provider + template, override any defaults if you wish, and then sit back and watch. The Conductor will take care of everything: create the HIPservice in your cloud, connect it to the right subnet(s), reconfigure the network so the cloud servers are now using the HIPservice as a gateway to the Internet, and even take extra steps to secure your VPC now that you are protecting it with HIP! Since this may take a while in a cloud environment (typically 2-5 minutes), the Conductor will show you progress specific to that cloud environment. If you have ever tried to navigate the several lists, pages, links, and workflows needed to set up your VPC instances, security, subnets, routing, etc., you will probably be smiling. After you watch all this orchestration complete, try to ping from cloud to prem, to ESXi, to the other cloud, and to the HIPclient on your laptop or phone. With HIPrelay, you won’t even need to worry about who initiates the connections, WAN routing, or firewalls. You could even ping from your cloud instances into your phone running the HIPclient over only a cellular connection. HIPrelay...another blog topic.
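For comparison, here is a rough sketch of a few of the manual steps the Conductor replaces, again for AWS with boto3 and placeholder IDs: launch the gateway into the protected subnet, wait for it to come up, and lock the security group down to the HIP NAT-traversal port. Combined with the route-capture steps sketched earlier, that is the busywork you no longer do by hand (and then you get to repeat it, differently, for each other cloud).

```python
# Sketch of the manual AWS steps the orchestration takes care of.
# All IDs are placeholders; the image ID in particular is hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # hypothetical gateway image
    InstanceType="t3.small",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",    # the subnet you want to protect
    SecurityGroupIds=["sg-0123456789abcdef0"],
)
instance_id = resp["Instances"][0]["InstanceId"]

# Poll until the instance is running (part of the "typically 2-5 minutes"
# the Conductor tracks and reports for you)
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])

# Tighten the security group: allow only UDP 10500 inbound
# (the IANA-registered HIP NAT-traversal port)
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "udp",
        "FromPort": 10500,
        "ToPort": 10500,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```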

But wait, there’s more. With smart device groups, the Conductor will configure the new HIPservice in your cloud with communication policies, defining the segments of devices that are able to communicate whether they are in the cloud, on prem, in virtual environments, or even in a totally separate cloud provider. Diverse environments add no additional policy management complexity (and complexity is the root of security problems -- another blog topic). And if you merely connect or disconnect any devices in a cloud environment, the Conductor will update the routing tables in your VPC(s) as necessary, automatically, without you having to worry about it. This is true WAN micro-segmentation, a cornerstone of the Zero Trust (*) architecture. Communication policy remains point and click, regardless of the crazy complexity of the networking environments where your devices run. That is the Tempered promise…one we aim to keep forever.

(*) The rollout/migration recommendations in the article don’t consider that innovations such as a HIPswitch can be used as a gateway for legacy/non-cloud equipment, achieving a close approximation of Zero Trust for old systems whose network stacks can’t be altered.

P.S. Spoiler alert…we are adding a “provider” for OpenStack to Conductor in the next release cycle, which will require zero change in your workflow to connect OpenStack, ESXi, AWS, Azure, Google, laptops, mobile devices, and on-prem equipment into virtual private LAN segments.
