Enterprises and service providers are increasingly experimenting
with Network Functions Virtualization (NFV) as a means to achieve greater
efficiency, scalability, and agility in the core and the data center.
NFV promises a host of benefits in the way networks are
created, managed, and evolved. Compute virtualization has, of course,
redefined data centers, transforming servers from physical computers into virtual
processing nodes that can run on one or many physical machines. This separation
of processing hardware from the abstract “ability to process” that defines a
server allows considerable flexibility in how data centers are run and how
workloads are placed, especially in multi-tenant environments.
Network Functions Virtualization (NFV) is a similar concept,
applied to networking. But haven’t switches and appliances always been
distributed network “processing” nodes? NFV proposes replacing the integrated,
purpose-built software/hardware boxes, such as routers and switches, with
commodity processing platforms running software that performs the actual network
function. Thus, rather than having a box with its own network OS, processing
power, memory, and network ports that together function as a router, NFV
proposes general-purpose hardware with processing power, memory, and
ports, running software that transforms it into a router. Granted, in some cases it is
more costly and less efficient to hand a networking job to a general-purpose
processor. The advantage of the virtualized router, though, is that its software layer
can be changed on the fly to turn the same box into a switch, a gateway, or a
load balancer. This flexibility enables a kind of polymorphism within the network
infrastructure and promises a more nimble design that can be dynamically
repurposed as the needs of the network change, thus future-proofing
the investment made in acquiring the infrastructure.
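To make the idea concrete, here is a minimal, purely illustrative sketch of that polymorphism: one commodity host changes roles simply by swapping the software layer it runs. The classes and names below are hypothetical and do not correspond to any real NFV framework.

```python
# Illustrative only: the "role" of the box is just the software loaded on it.

class NetworkFunction:
    """Abstract network function running on general-purpose hardware."""
    def process(self, packet):
        raise NotImplementedError

class Router(NetworkFunction):
    def __init__(self, routes):
        self.routes = routes  # destination prefix -> next hop

    def process(self, packet):
        packet["next_hop"] = self.routes.get(packet["dst"], "default-gw")
        return packet

class LoadBalancer(NetworkFunction):
    def __init__(self, backends):
        self.backends, self.i = backends, 0

    def process(self, packet):
        packet["dst"] = self.backends[self.i % len(self.backends)]
        self.i += 1
        return packet

class CommodityHost:
    """Generic compute, memory, and ports; the function is interchangeable."""
    def __init__(self, function):
        self.function = function

    def repurpose(self, function):
        self.function = function  # change the box's role on the fly

    def handle(self, packet):
        return self.function.process(packet)

host = CommodityHost(Router({"10.1.0.0/16": "core-1"}))
print(host.handle({"dst": "10.1.0.0/16"}))
host.repurpose(LoadBalancer(["app-1", "app-2"]))  # same hardware, new role
print(host.handle({"dst": "vip-1"}))
```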
Today, switching and routing functions can be virtualized, with
some tradeoffs. More sophisticated functions
for security and for network/application monitoring still require hardware
acceleration. Tools for network performance monitoring (NPM) and application
performance monitoring (APM), and security systems such as intrusion prevention
systems (IPS), which operate on real-time data, have arrived in a virtual form
factor for some use cases. Technologically, this seems to be the logical next step
after the virtualization of so much of the data center infrastructure. While there remains
debate as to whether the tool vendors will embrace or attempt to stymie this
evolution, the more critical question is: which elements require optimized
processing and hardware acceleration?
From the customer’s viewpoint, virtualization reduces the CAPEX
allocated to such tools and systems. As virtualized tools become available, it might
become easier for customers to scale their tool deployments to match their
growing networks. The hope of scaling out without needing to buy additional
costly hardware-based appliances is an obvious attraction. They can instead
just increase the compute power of their existing infrastructure and possibly
buy more instances of the virtualized probes, as necessary. In a multi-tenant
situation, these probes may even be dynamically shared as the traffic load of
individual tenants varies. But what if those tools and probes cannot function
without hardware acceleration? What if running them on general-purpose compute
proves more expensive than running them on optimized systems?
There’s no reason to adopt virtual tools and systems that
can’t get the job done or that increase costs.
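Setting those caveats aside for a moment, the scale-out model itself is easy to picture. The sketch below assumes a made-up per-probe capacity and a hypothetical sizing rule; it only illustrates how a shared pool of virtual probes might be resized as per-tenant traffic shifts, not any vendor's actual mechanism.

```python
# Hypothetical sizing rule for a shared pool of virtual probes.
import math

PACKETS_PER_SEC_PER_PROBE = 100_000  # assumed capacity of one virtual probe

def probes_needed(tenant_load_pps):
    """Probe instances needed for the current per-tenant load (packets/sec)."""
    total = sum(tenant_load_pps.values())
    return max(1, math.ceil(total / PACKETS_PER_SEC_PER_PROBE))

# Load shifts between tenants; the shared pool is resized, not re-purchased.
print(probes_needed({"tenant-a": 150_000, "tenant-b": 40_000}))  # -> 2
print(probes_needed({"tenant-a": 20_000, "tenant-b": 60_000}))   # -> 1
```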
Further, while routing and switching are well-understood
functions that even nascent players can virtualize, there is a significant
operational cost to any such changeover. Advanced monitoring functions are far
more complex and sophisticated. In contrast to infrastructure elements,
tools and security systems demand a greater development investment and more
often rely on highly integrated hardware to function efficiently.
I think the driving force behind this transformation will have to come from the customers, especially the large ones, who have the economic wherewithal to push vendors toward virtualization. An example of such a shift is AT&T's Domain 2.0 project. As John Donovan put it, “No army can hold back an economic principle whose time has come.”
As large customers put pressure on vendors to move
towards virtualization, I think we will start to see movement towards NFV in the
more advanced products of the networking space. One element of this change is
already occurring in forensics, or “historical” (as opposed to real-time)
network analysis. Historical analysis functions, such as IDS or network
forensics, can be virtualized to a great degree, but today these systems tend
to be monolithic devices that combine capture, storage, and analysis.
As has been shown repeatedly, there is certainly value in specialization,
especially when line-rate performance is required. Capturing network data,
storing it efficiently for retrieval, and building smart analytics are diverse
functions that have traditionally been tightly coupled.
Today, just as we consider decoupling network functions from
underlying hardware, we should also look at the benefits of decoupling network
data from analysis software and hardware appliances. After all, these systems
are hardware, software, and data. Ultimately, NFV provides an opportunity for the
analytics tools and security systems to offload the data capture and storage duties
to other elements, enabling hardware optimization (if required) and freeing the
data to be used by a variety of systems. A move towards NFV by the analytics
vendors would bring with it all the advantages of scalability and cost-effectiveness
that NFV promises in other networking domains, but those vendors need to decouple
the data from the processing as much as they need to virtualize functionality.
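As a rough illustration of that decoupling, here is a minimal sketch, not any vendor's actual architecture, in which the capture/storage element (which could stay hardware-accelerated where required) exposes stored packets through a simple query interface that any number of virtualized analysis tools can share. All interfaces and names are hypothetical.

```python
# Hypothetical interfaces: capture/storage decoupled from analysis.
from collections import deque

class PacketStore:
    """Capture and storage element; could be hardware-optimized if required."""
    def __init__(self):
        self._packets = deque()

    def capture(self, packet):
        self._packets.append(packet)

    def query(self, start_ts, end_ts):
        """Retrieval interface shared by every analysis tool."""
        return [p for p in self._packets if start_ts <= p["ts"] <= end_ts]

class ForensicsTool:
    """One of several virtualized consumers of the same captured data."""
    def __init__(self, store):
        self.store = store

    def flows_to(self, dst, start_ts, end_ts):
        return [p for p in self.store.query(start_ts, end_ts) if p["dst"] == dst]

store = PacketStore()
store.capture({"ts": 1.0, "src": "10.0.0.5", "dst": "10.0.0.9"})
store.capture({"ts": 2.0, "src": "10.0.0.7", "dst": "10.0.0.9"})
print(ForensicsTool(store).flows_to("10.0.0.9", 0.0, 3.0))  # both packets
```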