Most organizations use both on-prem data centers and cloud-based IaaS services, often employing multiple IaaS platforms.
For some, this multicloud reality has come about as part of a steady, one-way migration to the cloud, and they may have intentionally kept their cloud networks distinct as part of that goal. Others may have a business strategy for keeping them distinct, such as providing services for a stand-alone division or a particular geography.
Either way, they are almost certainly already tying their on-premises and cloud infrastructure networks together in some way, or are about to.
Those with limited integration among their networks are often dealing with a patchwork of solutions that evolved haphazardly as cloud systems went from being experimental and isolated to being developmental and peripheral and then to being central and in-production.
For those planning to bring these networks together or looking to architect and engineer their current infrastructure more intentionally, there are some fundamental points to consider.
Treat external clouds separately or together?
One model for cloud adoption treats each external cloud as another data center, connected only as additional WAN destinations, and leaves them otherwise distinct. That would mean routing-level connections only, with separate network management and controls for each. The other model is allowing deeper integration, including tunneling Layer 2 protocols and centralizing control not only between on-prem data centers and cloud but among and across clouds.
Keeping things separate has virtues:
- Easier network isolation of workloads from each other for security and compliance reasons
- Easier implementation of network policies within each environment thanks to a more limited scope
- Smaller skill set required for network engineers focused on a single environment
However, it also has significant drawbacks:
- Less agility
- Less portability across environments
- More limited integration options
- Greater complexity in implementing network and security policies across environments, with increased risk of error
Most organizations seem to be following the path of bringing all their environments together, from the network up. Either way, they are faced with a second major consideration: whether and how to make the environments as similar as possible in terms of what can be done on the networks within them or to allow them to remain different.
Allow all features or only those common across clouds?
When solutions get deployed across multiple platforms that do not have identical feature sets, IT has long chosen one of two approaches:
- Use each platform separately and take advantage of all the “special sauce” features in each to get the best possible performance from them.
- Add a layer of abstraction between IT workloads and the underlying platforms and give up those functions not common to them all in order to get maximum consistency and portability.
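The second approach amounts to exposing only the intersection of every platform's feature set behind a thin facade. A minimal sketch of the idea, where the environment names and feature catalogs are illustrative placeholders, not real provider capabilities:

```python
# Sketch of a least-common-denominator abstraction across environments.
# Environment names and feature catalogs are illustrative placeholders.

CLOUD_FEATURES = {
    "cloud_a": {"vpc_peering", "private_dns", "flow_logs", "l2_extension"},
    "cloud_b": {"vpc_peering", "private_dns", "flow_logs", "transit_hub"},
    "on_prem": {"vpc_peering", "private_dns", "vlan_trunking"},
}

# The portable catalog is whatever every environment supports.
COMMON_FEATURES = set.intersection(*CLOUD_FEATURES.values())


class PortableNetwork:
    """Facade that permits only features available in every environment."""

    def __init__(self, environment: str):
        self.environment = environment

    def enable(self, feature: str) -> str:
        if feature not in COMMON_FEATURES:
            raise ValueError(
                f"{feature!r} is not portable across all environments"
            )
        return f"enabled {feature} in {self.environment}"
```

The cost shows up immediately: `l2_extension` is unusable through the facade even though one platform supports it natively, which is exactly the trade of "special sauce" for portability described above.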
The great thing about each cloud being a distinct island of functionality with respect to on-prem data centers and each other is that the networking team has less to do in each. And the modes of interaction among the clouds and on-prem data centers are well understood.
The terrible thing about each cloud being distinct and different is that each cloud is distinct and different. IT folks managing these environments develop custom skill sets, and there is less ability to have cross coverage. As a result, each environment has a shallower bench of support and less resilience at the staff level. When there is turnover, the skill set sought from replacements is more specialized too.
Application and cybersecurity teams must also understand the differences among the environments in order to allow both the flexible placement of workloads within them and the movement of workloads among them. In the age of containerization and microservices, portability is considered a key virtue. Teams can lose track of basic differences like whether an environment defaults to “deny all” or “allow all” on connections among networks—with the potential for disaster.
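That kind of drift can be caught mechanically with a simple guardrail check. A sketch, assuming each environment's default inter-network posture can be queried or recorded somewhere (the environment names and postures here are hypothetical):

```python
# Guardrail check: flag environments whose default inter-network posture
# differs from the expected baseline. All names here are hypothetical.

DEFAULT_POSTURE = {
    "on_prem_dc1": "deny-all",
    "cloud_a": "deny-all",
    "cloud_b": "allow-all",  # a dangerous surprise if deny-all is assumed
}


def posture_drift(expected: str, postures: dict[str, str]) -> list[str]:
    """Return the environments that do not match the expected default."""
    return sorted(env for env, posture in postures.items() if posture != expected)
```

Run against the table above, `posture_drift("deny-all", DEFAULT_POSTURE)` would single out `cloud_b` for review before any workload lands there.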
For these reasons, some organizations decide instead to minimize differences in the application-facing environments by implementing tools to abstract away differences.
Sometimes adding a layer of consistency, via an overlay or a new standard, enormously amplifies the power of a technology. SQL and TCP/IP are good examples of the standards-driven approach; SD-WAN is a great example of the overlay approach, standardizing network functionality atop disparate underlays.
Implementing a standard across all environments allows interoperability, defines a common skill set, and makes it easier to design and deploy applications that leverage those standards. Extensions beyond a standard are possible, as is support for competing standards. So an environment's "special sauce" functionality can still get a look in, and because implementations of a standard can vary, vendors can still compete on performance.
An important and powerful approach to providing a consistent, abstracted platform across environments is to shim up the low spots. That is, rather than hide functionality from the common catalog of network services or design options if it is not available across all platforms, instead add missing functionality to the platforms that lack it. SD-WAN solutions and multi-cloud network solutions can work this way.
Shimming up the low spots in each platform’s catalog is distinct from simply porting an alien environment into each platform. It keeps each environment as close to its native state as possible, to leverage its strengths and reduce the amount of one-off development required to fit the standard environment into it.
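The shim idea can be expressed the same way as the intersection sketch, but inverted: instead of shrinking the catalog to what everyone supports, each platform's gaps relative to a target standard are identified and backfilled by the overlay. A sketch using illustrative feature names:

```python
# Shim sketch: backfill each platform's catalog toward a target standard
# rather than restricting all platforms to the common subset.
# Feature names and platform catalogs are illustrative placeholders.

TARGET_CATALOG = {"vpc_peering", "private_dns", "flow_logs"}

NATIVE = {
    "cloud_a": {"vpc_peering", "private_dns", "flow_logs"},
    "on_prem": {"vpc_peering", "private_dns"},  # lacks flow_logs natively
}


def shims_needed(native: dict[str, set], target: set) -> dict[str, set]:
    """For each platform, the features an overlay must supply."""
    return {platform: target - features for platform, features in native.items()}
```

Here the overlay would be asked to supply `flow_logs` on the on-prem side only, leaving `cloud_a` running its native implementation untouched.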
Multicloud networking is either already a reality or in the works for most organizations. In considering the next phase of their network strategy and architecture, they should go back to these fundamental questions and make sure they are clear on how they are answering them and why, so the answers can guide the rest of their decisions.
Copyright © 2023 IDG Communications, Inc.