Wednesday, April 29, 2015

It's Visibly Clear from the RSA Conference 2015

Everyone is looking for more visibility at the RSA Conference, even the FBI's

Everyone at this year’s RSA Conference was speaking the same language of needing and providing more operational visibility. Even the weather seemed to agree with the visibility discussion as the clouds cleared away each afternoon. 

At VSS Monitoring, our mission has always been centered on delivering total network visibility to optimize the effectiveness of your security and network monitoring tools. InfoSec professionals around the world rely on VSS to give their monitoring and security tools access and visibility to traffic across networks without requiring physical reconfiguration. We’ll talk more about that later. Right now, let’s focus on our top takeaways from this year’s RSA conference.

Moving security tools in line is a key step for many attendees

Get in line 

The rate at which new malware is being introduced into corporate networks leaves no choice but to place security tools inline. That much is clear. We heard from many attendees that bringing their security tools inline was critical. For some, this will be a first, and concerns about relying on SPAN ports and avoiding disruption to the network were top-of-mind issues to solve in 2015.

Sandboxes provide a safe environment to analyze

Sandboxes are Popular

For others, sandboxing is viewed as the next step toward getting ahead of emerging attack vectors. Combining endpoint security with a secure sandbox environment to further analyze unknown files and malware is a popular deployment scenario we discussed. In this scenario, attendees were interested in learning how they could direct traffic to multiple tools while also accommodating behavioral sandboxing. We spoke with many attendees who needed a safe, contained environment in which to isolate, analyze and ultimately address malware.

RSA attendees are focused on closing the loop for security analysis

Creating Closed Loops

Another way attendees are responding to the problem of unknown malware is with cloud-based threat monitoring and intelligence services. Attendees were keen to integrate cloud-based threat intelligence feeds and architect a closed monitoring loop using on-premise appliances as well as cloud-based services. We had several discussions on different ways traffic could be directed through their tool chains and then forwarded out to a cloud-based security service for analysis.

While security tools will always be the darlings of the RSA Conference, we spoke to a number of people who were not planning to deploy any new tools in 2015. Instead, these attendees wanted to focus on how they could collect, analyze and direct the right data, in real time, to their existing tools. A truly refreshing thought.

It was good to see the industry growing up and changing so quickly. Hearing sound thinking about security architecture, and about how everything (and everyone) needs to play together for security to be effective, was a welcome change.

Thursday, July 10, 2014

Is Tapping Low Optical Budget Links Making you Pull Your Hair (or the TAPs) Out?

By: Gina Fallon, VSS Product Management

If you have ever had to do split-ratio calculations for passively tapping network links, you probably wanted to pull your hair out over the mathematical Olympics required. Low optical budgets push the challenge even closer to the tipping point where there is no budget left to establish link with the network device and/or the probe attached to the passive tap. The most common offenders are 10G and 40G multimode, where Cisco's 40G multimode BiDi (40GbaseSR2) budget is so tight that passive tapping is not recommended at all.
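To see why the arithmetic gets painful, here is a minimal sketch of the split-ratio math in Python. The budget, split ratio and insertion-loss figures are illustrative assumptions only, not vendor or standards values; substitute the numbers from your own optics' and taps' data sheets.

    import math

    def split_loss_db(fraction):
        # Loss from sending only `fraction` of the light down one leg of a passive tap.
        return -10 * math.log10(fraction)

    # Illustrative assumptions: a 70/30 passive tap on a link with a 6.0 dB optical
    # budget and roughly 1.0 dB of excess/connector insertion loss from the tap.
    BUDGET_DB = 6.0
    EXCESS_LOSS_DB = 1.0

    for leg, fraction in [("network leg (70%)", 0.70), ("monitor leg (30%)", 0.30)]:
        remaining = BUDGET_DB - EXCESS_LOSS_DB - split_loss_db(fraction)
        print(f"{leg}: {remaining:+.2f} dB of budget remaining")

With these example numbers the monitor leg comes out slightly negative, which is exactly the situation where there is no budget left to establish link.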

The solution is to move to active optical tapping. Active taps don't employ optical splitters (which have inherent insertion loss); instead, they regenerate the optical signal on both the network and monitor transmit sides. VSS Monitoring offers vBroker Series products with PowerSafe chassis modules, which also provide layer 1 Fail Open/Close state configurability on power loss, a manual force Fail Open option during power-on, and support for a full range of optic technologies (10G MM, 40G MM, 40G MM BiDi, 40G SM, etc.).

Check out our full technical write up here: Tapping Low Optical Budget Links

Wednesday, June 25, 2014

Optimizing Monitoring Tools and Security Systems for 100Gbps & 40Gbps Networks

Most large organizations are either considering or have already begun to adopt higher bandwidth network infrastructure, typically 40G in the Enterprise and 100G in the carrier domain. Whenever a network undergoes a migration of that magnitude, the network monitoring and security architecture has to be revisited to ensure it’s ready to scale with the network.

Here are the top three goals to keep in mind when redesigning the management infrastructure:
  1. Leverage what you have
  2. Maximize ROI
  3. Make it future proof
If there’s already a network packet broker (intelligent TAP) system in place—and in most large networks there will be—it should be used to properly “tune” the network traffic to the existing monitoring tools and security systems. Assuming the NPB system is sufficiently scalable and modular (and again, it should be), adding 100G or 40G capture interfaces/appliances will be fairly straightforward.
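As a conceptual illustration of that "tuning," the sketch below shows flow-aware, symmetric load balancing: hashing a flow's endpoints so that both directions of a session land on the same lower-speed tool port. The port names and the choice of hash are assumptions made for illustration, not a description of any particular packet broker's implementation.

    import hashlib

    # Hypothetical pool of 10G tool ports fed from a 40G/100G capture interface.
    TOOL_PORTS = ["tool-1", "tool-2", "tool-3", "tool-4"]

    def tool_port_for_flow(src_ip, dst_ip, src_port, dst_port, proto):
        # Sort the endpoints so A->B and B->A hash identically (symmetric hashing),
        # keeping both directions of a session on the same tool.
        endpoints = sorted([f"{src_ip}:{src_port}", f"{dst_ip}:{dst_port}"])
        key = "|".join(endpoints + [proto]).encode()
        digest = int(hashlib.sha1(key).hexdigest(), 16)
        return TOOL_PORTS[digest % len(TOOL_PORTS)]

    # Both directions of the same session map to the same tool port.
    print(tool_port_for_flow("10.0.0.5", "192.0.2.9", 49152, 443, "tcp"))
    print(tool_port_for_flow("192.0.2.9", "10.0.0.5", 443, 49152, "tcp"))

In a real packet broker this hashing happens in hardware at line rate; the point of the sketch is simply that each legacy 1G/10G tool only ever needs to see a consistent slice of the higher-speed traffic.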

Once the physical capture interfaces have been added, most of the functions needed to accomplish tool optimization are reasonably simple, but they deserve some emphasis. Check out this solution guide outlining the essentials of leveraging 1G and 10G toolsets across 40G and 100G networks:


Thursday, April 3, 2014

Packet Captures DO Matter

By Adwait Gupte, Product Manager

The other day, I overheard a discussion between a colleague and a market analyst over the value of packet-level information. The analyst didn’t think full packet capture made sense for NPM/APM tools, because they could perform their functions effectively using only metadata and/or flow statistics.

So, are network recorders and their ilk really on the way out? Is complete packet capture useless?

I argue “no.” And here’s why: APM tools can generally identify issues for a given application (e.g. Lync calls are dropping). These issues might arise from the compute infrastructure (slow processors, insufficient RAM), but they could also lie within the network infrastructure (link overload, badly tuned TCP parameters, etc.). In the latter case, the root cause would be extremely difficult to identify and debug without having a complete, packet-level record.

When investigating a breach or “exfiltration” (such as Target’s), you absolutely need the full packet data, not just flow-level metrics (which show only that some activity occurred, not exactly “what” activity took place) or metadata (which shows “some data” was sent out, not “which data” was sent out). Summarized flow statistics (or metadata) are an inherently lossy approach to “compressing” monitoring data. True, they take up less space and can be processed faster than full packets, but they omit information that could be critical to a discovery process.

While full packet capture is not required to show that application infrastructure is faultless when performance issues arise, it is certainly required when the problem is caused by the network, or when the exact data that was transmitted is needed for troubleshooting or security purposes. Full packet capture makes sense for both APM and security use cases. However, full packet capture for everything, all the time, is ridiculously cost prohibitive. Network engineers and security analysts need to capture just the data they need and no more.
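One way to capture just the data you need is to push a capture filter as close to the wire as possible. Below is a minimal, hypothetical sketch using Scapy and a BPF filter as a stand-in for whichever capture mechanism is actually deployed; the host, port, packet count and file name are illustrative assumptions.

    # Targeted capture: record only the traffic relevant to the investigation.
    # (Requires sufficient privileges to sniff on the capture interface.)
    from scapy.all import sniff, wrpcap

    # Illustrative BPF filter for a single suspect host and service.
    BPF_FILTER = "host 10.1.2.3 and tcp port 443"

    packets = sniff(filter=BPF_FILTER, count=1000, timeout=60)
    wrpcap("suspect_host_443.pcap", packets)
    print(f"captured {len(packets)} packets matching '{BPF_FILTER}'")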

Aside from the obvious compliance mandates, continuous packet capture prevents data gaps. Implemented efficiently, full packet capture is also feasible in terms of cost and management. One of the key elements of such efficiency is decoupling the data from vertically integrated tools. I covered probe virtualization in a previous post, but some of these points are worth repeating in the context of making full packet capture scalable:
  • Tools that integrate capture, storage, and analysis of packet data are expensive. They also have limited storage and compute capacity. If you run out of either, the only way to expand is to buy a new appliance. An open capture and storage infrastructure makes the scaling of at least those parts of the equation more cost effective.
  • NPM/APM tools already make use of complete packets in the sense that they hook into a network tap/span port and accept these packets. Whether they store them internally or process them on the fly and discard them depends on the tool. The point is, if we are able to separate the collection of the data (packet capture) from the consumption of the data (the NPM/APM analytics, forensics etc.), it makes the data a lot more versatile. We can collect the data once and use it for multiple purposes, anytime, anywhere.
  • The exact tool that will consume this data need not be known at the time of collection, since the data can be collected in an open format (e.g. PCAP). Such a format makes the data future proof.
  • Virtualized analytics tools are on the horizon (customers are demanding them). Once they arrive, these virtualized appliances will need to be fed data from a separate capture/storage infrastructure, although some of these functions can be handled by the Network Packet Brokers (NPBs) that collect the data across the network.
In addition to these straightforward benefits, preserving packet data for use by more than a single tool enables network data to be managed and utilized with “big data” systems. Decoupling packet capture from tools enables security analysts and network engineers to glean insights by unifying the siloed data. Network packet capture tools allow network data (which, hitherto, has been missing from the big data applications) to be brought into the big data fold and help uncover even more insights.
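As a toy illustration of the "collect the data once and use it for multiple purposes" point above, the snippet below reads an open-format PCAP, the same kind of file any decoupled capture infrastructure could produce, and derives per-flow packet counts from it. The file name is hypothetical, and Scapy is just one of many libraries that can consume PCAP; the same trace could equally be replayed into an NPM/APM tool, an IDS or a big-data pipeline.

    from collections import Counter
    from scapy.all import rdpcap, IP, TCP, UDP

    # Read a previously captured, open-format trace (hypothetical file name).
    packets = rdpcap("capture.pcap")

    flows = Counter()
    for pkt in packets:
        if IP not in pkt:
            continue
        l4 = TCP if TCP in pkt else UDP if UDP in pkt else None
        sport = pkt[l4].sport if l4 else 0
        dport = pkt[l4].dport if l4 else 0
        flows[(pkt[IP].src, sport, pkt[IP].dst, dport, pkt[IP].proto)] += 1

    # Top talkers by packet count, derived without touching the original tools.
    for flow, count in flows.most_common(5):
        print(flow, count)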

A full, historical record of packets (based on continuous capture as a separate network function) is not only useful, but will remain relevant for the foreseeable future. A system that uses programmability to trigger packet capture based on external events, then forwards packets in real time while simultaneously recording the flows of interest for asynchronous analysis, increases the value of such capture even further. Now, that’s something only VSS Monitoring can do today (a post for another day).

Friday, March 28, 2014

Network Functions Virtualization Meets Network Monitoring and Forensics

By Adwait Gupte, Product Manager

Enterprises and service providers are increasingly flirting with Network Functions Virtualization (NFV) as a means to achieve greater efficiency, scalability and agility in the core and datacenter.

NFV promises a host of benefits in the way networks are created, managed and how they evolve. Compute virtualization has, of course, redefined data centers, transforming servers from computers to virtual processing nodes that can run on one or many physical servers. This separation of processing hardware from the abstract “ability to process” definition of servers allows a lot of flexibility in the way datacenters are managed and how workloads are managed, especially in multi-tenant environments.

Network Functions Virtualization (NFV) is a similar concept, applied to networking. But haven’t switches and appliances always been distributed network “processing” nodes? NFV proposes replacing integrated, purpose-built software/hardware boxes, such as routers and switches, with commodity processing platforms running software that performs the actual network function. Thus, rather than having a box with its own network OS, processing power, memory and network ports that together function as a router, NFV proposes general-purpose hardware with processing power, memory and ports, running software that transforms it into a router. In some cases, it is more costly and less efficient to hand a networking job to a general-purpose processor. The advantage of this virtualized router, though, is that the software layer can be changed on the fly to turn the router into a switch, a gateway or a load balancer. This flexibility enables polymorphism within network infrastructure and promises a more nimble design that can be dynamically repurposed as the needs of the network change, thus future proofing the investment made in acquiring the infrastructure.

Today, switching and routing functions can be virtualized, with some tradeoffs. More sophisticated functions for security and network/application monitoring still require hardware acceleration. Tools such as NPM and APM and security systems such as IPS, which operate on real-time data, have arrived in a virtual form factor for some use cases. Technologically speaking, this seems to be the logical evolution following the virtualization of much of the data center infrastructure. While there remains debate as to whether the tool vendors will embrace or attempt to stymie this evolution, the more critical question is: which elements require optimized processing and hardware acceleration?

From the customer’s viewpoint, virtualization reduces the CAPEX allocated to such tools and systems. As virtualized tools become available, it might become easier for customers to scale their tool deployments to match their growing networks. The hope of scaling out without needing to buy additional costly hardware-based appliances is an obvious attraction. Customers can instead increase the compute power of their existing infrastructure and, as necessary, buy more instances of the virtualized probes. In a multi-tenant situation, these probes may even be dynamically shared as the traffic load of individual tenants varies. But what if those tools and probes cannot function without hardware acceleration? What if running them on general-purpose compute proves more expensive than running them on optimized systems?

There’s no reason to adopt virtual tools and systems that can’t get the job done or that increase costs.

Further, while routing/switching are very well understood functions that even nascent players can virtualize, there is a significant operational cost to any such changeover. Advanced monitoring features are much more complicated and sophisticated. In contrast to infrastructure elements, tools and security systems require a greater development investment and more often require highly integrated hardware to function efficiently. 

I think the driving force behind this transformation will have to come from the customers, especially large ones, who have the economic wherewithal to force the vendors to toe the line towards virtualization. An example of such a shift is AT&T’s Domain 2.0 project. As John Donovan put it, “No army can hold back an economic principle whose time has come.”

As large customers build pressure on vendors to move towards virtualization, I think we will start seeing some movement towards NFV within the more advanced products of the networking space. One element of this change is already occurring in forensics, or “historical” (as opposed to real-time) network analysis. Historical analysis functions, such as IDS or network forensics, can be virtualized to a great degree, but these systems today tend to be monolithic devices that combine capture, storage and analysis. As has been shown repeatedly in the past, there is certainly value to specialization, especially when line-rate performance is required. Capturing network data, storing it efficiently for retrieval, and building smart analytics are diverse functions that have been coupled in the past.

Today, just as we consider decoupling network functions from underlying hardware, we should also look at the benefits of decoupling network data from analysis software and hardware appliances. After all, these systems are hardware, software, and data. Ultimately, NFV provides an opportunity for the analytics tools and security systems to offload data capture and storage duties to other elements, enabling hardware optimization (where required) and freeing the data to be used by a variety of systems. A move towards NFV by the analytics vendors would bring with it all the advantages of scalability and cost-effectiveness that NFV promises in other networking domains, but analytics vendors need to decouple data from processing as much as they need to virtualize functionality.