The History – and Challenges – of Next-Generation Firewalls
About sixteen years ago, in 2007, Palo Alto Networks introduced their first product, forever altering network security. The term “Next-Generation Firewall,” or NGFW, had yet to be coined; Gartner would introduce it in 2009. At the time, Palo Alto’s products fell under the earlier term “Unified Threat Management,” or UTM. While Palo Alto did not invent the UTM or the NGFW, they have since become the defining vendor in that space, against which all competitors are measured. For the purposes of this post, we’re just going to use the NGFW moniker for simplicity.
While what we now call “the cloud” certainly did exist in 2007, data centers were still largely being built in the traditional manner: a hard crunchy shell of security around a soft gooey center of data and compute. Only the very boldest of organizations were moving assets into AWS (Amazon Web Services) and other new cloud services. The data center still had a defined and known boundary. The lines were clear between the assets inside the data center and those on the outside.
NGFWs: Perfect for protecting the data center perimeter
The NGFW, with its bevy of powerful security features, was a beautiful fit for this model. NGFWs quickly became the gatekeepers of data centers, carefully controlling what came in (and to a far lesser extent, what came out) of a given network. Powerful custom CPUs analyzed and moved packets quickly on their way, and people in charge of security slept just a little bit better at night, knowing their assets were protected by a suite of new tools.
NGFW vendors such as Palo Alto Networks, Check Point, and Fortinet now set themselves a new goal: dominance inside of the data center, monitoring and controlling traffic between applications, as network segmentation came into vogue as the “hot new thing.”
Here, though, they faced a far greater challenge. NGFWs were an obvious choice for protecting the data center perimeter: the prohibitive cost of hardware, licensing, and support was justified by clear benefits, and the price was kept in check by the fact that Internet connections into data centers were relatively slow compared to internal connectivity, and firewall pricing scales with bandwidth. On the inside, however, the cost/benefit ratio was nowhere near as clear-cut. A significant percentage of NGFW features, DDoS mitigation and traffic shaping among them, had no value on the inside. But you paid for them regardless.
As if the eye-watering cost of 10G of throughput was not enough of a deterrent for internal use, every cost was doubled for the sake of redundancy. It was extremely hard to justify five times the spend of a traditional firewall for a set of features that was overkill and often not relevant for the task at hand.
That said, industry and government regulations require certain controls on the inside, at least for certain classes of applications; HIPAA and PCI come to mind as stand-out examples. So, customers settled on hybrid solutions, with NGFW devices protecting the boundary and a few critical, specific applications, and traditional stateful (non-NGFW) firewalls taking up the slack of internal protection. This unhappy marriage of old and new worked for a while. All the way up until the mainstream acceptance of the public cloud.
Managing NGFW complexity in the cloud
The public cloud turned the security world on its head. But not for everybody, and not always in obvious ways.
Remember that NGFWs perform a staggering amount of processing on every packet that passes through their chassis. The Intel architecture, as ubiquitous as it is, is a poor choice for the epic amounts of low-level work to be done within an NGFW. Palo Alto chose Cavium network processors to do all that low-level packet inspection and manipulation. Fortinet designed their own in-house custom processors to do the work.
Today’s NGFW is, by the standards of only a decade ago, a government-agency-class supercomputer, which certainly accounts for some of the cost. The NGFW vendors quickly responded to the move to the cloud with virtual versions of their products, but performance was abysmal across the board due to the limitations of the Intel architecture.
This resulted in major changes to the way security was handled at the border of cloud networks. Often, dozens of virtual firewalls were load balanced. Networks were re-architected to have far more Internet peering points than in the brick-and-mortar days, lowering per-firewall performance requirements. And firewall vendors started selling their VM (Virtual Machine) implementations in six-packs and ten-packs, because one or two firewalls could no longer do the job.
If that sounds complex to build and manage, pity the more typical company that moved only a portion of its assets to the cloud. As both IaaS (Infrastructure-as-a-Service) and PaaS (Platform-as-a-Service) proliferated, network boundaries became increasingly indistinct. Whereas the IT-related definition of “cloud” had been derived from the idea of a large number of computers (analogous to water vapor) seen together as one from a distance, it started becoming more appropriate to use a different definition: “something that obscures or blemishes.”
As data centers started hosting a random selection of applications and parts of applications in the cloud, with other applications (and parts of applications) remaining on-site, it became incredibly difficult to protect them and enforce security policy. This is largely because it became nearly impossible to define boundaries, where security is typically applied. And even in the cases where boundaries were clear, the sheer volume of security hardware, software, and configuration became overwhelming.
As a result, security took a big step backwards.
Looking to the future: NGFW features in a microsegmentation environment
Thus began the era of what was known in the early days as microsegmentation, and what is now more often called Zero Trust Segmentation (ZTS).
The concept of microsegmentation is simple: policy enforcement (i.e., a firewall) on every server, controlling both inbound and outbound traffic to and from every other server. Fundamentally, microsegmentation is simply the idea of network segmentation taken to its ultimate conclusion: one firewall per server. Microsegmentation gave security teams a powerful new tool to deal with the “fuzzy boundaries” around and within our data centers by addressing security on a server-by-server (or at least application-by-application) basis.
Historically, microsegmentation has dealt with ports and protocols without venturing into the deep packet inspection territory that NGFW features require.
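To make that port-and-protocol model concrete, here is a minimal, hypothetical sketch in Python of what a per-server, default-deny policy might look like and how it could be rendered into host firewall rules. The policy structure, the example addresses and ports, and the render_nftables_rules helper are illustrative assumptions for this post, not Illumio’s implementation or any product’s API.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AllowRule:
    """One allow-list entry in a hypothetical per-server policy."""
    source_cidr: str  # which peers may connect (e.g., the app tier's subnet)
    protocol: str     # "tcp" or "udp"
    port: int         # destination port on this server


# Example policy for a database server: only the app tier may reach the
# database port, and only the management subnet may reach SSH.
# Everything else is implicitly denied.
DB_SERVER_POLICY = [
    AllowRule(source_cidr="10.1.2.0/24", protocol="tcp", port=5432),
    AllowRule(source_cidr="10.9.0.0/24", protocol="tcp", port=22),
]


def render_nftables_rules(rules):
    """Render the allow-list as nftables statements with a default-drop policy.

    This sketches only the enforcement step; a real agent would also handle
    outbound rules, IPv6, rule updates, and distribution to every server.
    """
    lines = [
        "add table inet microseg",
        "add chain inet microseg input "
        "{ type filter hook input priority 0; policy drop; }",
        # Keep return traffic and loopback working under default-deny.
        "add rule inet microseg input ct state established,related accept",
        "add rule inet microseg input iif lo accept",
    ]
    for r in rules:
        lines.append(
            f"add rule inet microseg input ip saddr {r.source_cidr} "
            f"{r.protocol} dport {r.port} accept"
        )
    return "\n".join(lines)


if __name__ == "__main__":
    print(render_nftables_rules(DB_SERVER_POLICY))
```

The design point to notice is default-deny at the host: everything not explicitly allowed is dropped, and the policy is expressed purely in terms of sources, protocols, and ports, which is exactly the layer where microsegmentation has traditionally operated.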
Within the Office of the CTO, my job is to play “what if?” and try to look ahead to potential future problems and their possible solutions. Thus, current research at Illumio includes looking at the possibilities of implementing NGFW features in a microsegmentation environment.
Read my blog next week to learn more about Illumio’s research and these potential future developments to microsegmentation.