Thursday, March 10, 2016

What European Soccer Can Teach Us About Defending the Network



Real Madrid, the star of European soccer, is one of the best teams in the world and, as such, attracts top talent from all over the planet. The team has won the most coveted club trophy, the Champions League, ten times. However, the road to glory has been a bumpy one, and within those failures there are lessons for network security professionals.

Real Madrid won its ninth Champions League trophy in 2002, thanks in large part to its defensive midfielder Claude Makelele. He left the team the following year, and Real Madrid, unable to acquire an adequate replacement, didn't win the trophy again until 2014, after it finally found a world-class successor in Xabi Alonso.

Why were Real Madrid's defensive midfielders so critical to the team's success, and why are they relevant to defending the network? The defensive midfielder's job is to break up the other team's attacks, win the ball back, and pass it to the offensive players so they can score. He has the skills and the right perspective on the field to provide visibility to the whole team. Not coincidentally, visibility is one of the defining features of a mature security posture and key to enhancing cybersecurity capabilities.

Combining a comprehensive traffic delivery strategy with advanced security capabilities creates a pervasive defense system against a broad range of attacks.

A mature cyber-security approach takes into account both the internal enterprise network and the external world of threats; both are dynamic environments that are always evolving. Protection therefore requires a dynamic security architecture built in from the start, not added after the fact, and it calls for combinations of security solutions. Some of the most common mixes are:

  • Active, inline network analysis
  • Passive, out-of-band network forensics
  • Active payload analysis

For this architecture to be effective, it needs to have access to all traffic that moves through the network, and it should be flexible enough so that changes can be made at a moment’s notice. Even today, most network changes are done during a maintenance window, when the volume of traffic is low and the threat of disrupting the business is small. However, imagine a world where, as a network administrator or security professional, you are able to have visibility into all network traffic, and enhance and modify your security infrastructure without any disruptions to the business. This is the promise that unified visibility, enabled by the VSS Network Packet Brokers (NPBs), can deliver on for an organization.

NPBs aggregate traffic from various network links, creating a Unified Visibility Plane. This allows organizations to collect relevant traffic from many locations, at speeds from 1Gbps to 100Gbps, and deliver it to a centralized security architecture that inspects and analyzes the traffic, generating alerts and, if necessary, blocking traffic in real time. It also allows the network operator to construct a chain of security devices that inspect network traffic in sequence, with only the traffic of interest sent to each device.
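To make the filter-and-chain idea concrete, here is a minimal sketch in Python. It is not VSS's actual API; the packet fields, filter format, and tool names are illustrative assumptions. Each tool in the chain sees only the traffic that matches its filter:

```python
# Hypothetical model of an NPB-style tool chain (illustrative only, not a
# real vendor API). Packets are dicts; filters are dicts of required fields.

def matches(pkt, flt):
    """Return True if every field in the filter matches the packet."""
    return all(pkt.get(k) == v for k, v in flt.items())

def chain(pkt, tools):
    """Pass a packet through an ordered tool chain; each tool inspects only
    traffic matching its filter. Returns the names of tools that saw it."""
    inspected = []
    for name, flt in tools:
        if matches(pkt, flt):
            inspected.append(name)
    return inspected

# Example chain: an IPS sees all TCP, a WAF sees only HTTPS, a DNS monitor
# sees only DNS. Names and filters are hypothetical.
tools = [
    ("ips",     {"proto": "tcp"}),
    ("web_waf", {"proto": "tcp", "dport": 443}),
    ("dns_mon", {"proto": "udp", "dport": 53}),
]
```

The point of the sketch is the delivery model: the broker, not each tool, decides which traffic reaches which device, so adding or re-ordering tools does not require re-cabling the network.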


Imagine being able to deploy inline, active, real-time security inspection without any risk to network performance (no more worrying about being fired over a network outage!). Imagine being able to constantly exercise the application stack of a security system so you know it is working as expected. Go beyond simple pings that tell you whether the security system's port is up or down; they are insufficient in a world of real-time traffic inspection.
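The difference between a port-level ping and exercising the application stack can be sketched in a few lines. This is a toy model under assumed interfaces (the `link` flag and `inspect` callback are hypothetical): a tool's link can be up while its inspection engine is hung, and only a synthetic "canary" packet with a verdict deadline catches that.

```python
# Toy health-check model: port-level check vs. application-stack check.
# The tool representation here is hypothetical, for illustration only.
import time

def port_up(tool):
    """What a ping-style check sees: is the link up?"""
    return tool.get("link", False)

def canary_healthy(tool, timeout=1.0):
    """Inject a known test packet and require a verdict within the timeout."""
    start = time.monotonic()
    verdict = tool["inspect"]({"payload": "canary"})  # exercise the app stack
    elapsed = time.monotonic() - start
    return verdict is not None and elapsed <= timeout

healthy_tool = {"link": True, "inspect": lambda pkt: "pass"}
hung_tool    = {"link": True, "inspect": lambda pkt: None}  # port up, engine hung
```

A ping-based check reports both tools as fine; the canary check exposes the hung one, which is exactly the failure mode that matters for inline inspection.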

Network and security professionals have been fighting an uneven match against cyber-criminals. While the "bad guys" can change weapons in a matter of minutes, you, in most cases, have to wait for a maintenance window to upgrade your architecture. The result is a modification that arrives late and, perhaps worse, is no longer relevant. What makes this battle a more even one is a mature cyber-security posture based on pervasive visibility.

The VSS ActiveProtection Suite and the Unified Visibility Plane deliver these benefits, and more. Now network administrators, like soccer coaches, can adjust their arsenal in real time without having to worry about disrupting the flow of the game or the business operations of the company. Just like Real Madrid’s all-star midfielders, a Unified Visibility Plane provides visibility to all traffic and allows security systems to do what they do best: inspect and block potentially malicious traffic while other systems search for threats inside the network. 

Not a bad world to live in, don’t you think?

Learn how VSS can help you be the best midfielder on your security team. Support multiple layers of defense without risk to network performance or network uptime with our inline tool-chaining capability. See the on-demand demo.

Wednesday, April 29, 2015

It's Visibly Clear from the RSA Conference 2015

Everyone is looking for more visibility at the RSA Conference, even the FBI's Fido!

Everyone at this year’s RSA Conference was speaking the same language of needing and providing more operational visibility. Even the weather seemed to agree with the visibility discussion as the clouds cleared away each afternoon. 


At VSS Monitoring, our mission has always been centered on delivering total network visibility to optimize the effectiveness of your security and network monitoring tools. InfoSec professionals around the world rely on VSS to give their monitoring and security tools access and visibility to traffic across networks without requiring physical reconfiguration. We’ll talk more about that later. Right now, let’s focus on our top takeaways from this year’s RSA conference.

Moving security tools inline is a key step for many attendees

Get in line 

The rate at which new malware is being introduced into corporate networks leaves no choice but to place security tools inline. That is clear. We heard from many attendees that bringing their security tools inline was critical. For some, this will be a first, and concerns about relying on SPAN ports, and about not disrupting the network, were top-of-mind issues to be solved in 2015.



Sandboxes provide a safe environment to analyze malware

Sandboxes are Popular

For others, sandboxing is viewed as the next step towards getting ahead of emerging attack vectors. Combining endpoint security with a secure sandbox environment to further analyze unknown files and malware is a popular deployment scenario we discussed. In this scenario, attendees were interested in learning how they could direct traffic to multiple tools while also accommodating behavioral sandboxing. We spoke with many attendees that needed a safe environment to isolate, analyze and ultimately address malware in a contained environment. 



RSA attendees are focused on closing the loop for security analysis

Creating Closed Loops

Another way attendees are responding to the problem of unknown malware is with cloud-based threat monitoring and intelligence services. Attendees were keen to integrate cloud-based threat intelligence feeds and architect a closed monitoring loop, using on-premise appliances as well as cloud-based services. We had several discussions on different ways traffic could be directed through their tool chain and then forwarded out to a cloud-based security service for analysis.

While security tools will always be the darlings of the RSA Conference, we spoke to a number of people who were not planning to deploy any new tools in 2015. Instead these attendees wanted to focus on how they could collect, analyze and direct the right data in real-time to the existing tools. A truly refreshing thought. 

It was good to see that the industry is quickly growing up and changing. Hearing sound thinking about security architecture, and about how everything (and everyone) needs to play together for effective security, was welcome.

Thursday, July 10, 2014

Is Tapping Low Optical Budget Links Making you Pull Your Hair (or the TAPs) Out?

By: Gina Fallon, VSS Product Management

If you have ever had to do split-ratio calculations for passively tapping network links, you know the urge to pull your hair out over the mathematical Olympics required. When you have to deal with low optical budgets, the challenge moves even closer to the tipping point where there is no budget left to establish a link with the network device and/or probe attached to the passive tap. The most common offenders are 10G and 40G multimode, where Cisco's 40G multimode BiDi (40GBASE-SR2) budget is so tight that passive tapping is not recommended at all.
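The split-ratio arithmetic itself is short, which is part of what makes a failed budget so frustrating. The sketch below uses illustrative numbers, not vendor specs (check your transceiver datasheets for real link budgets and connector losses): a splitter leg that passes a fraction of the light costs -10·log10(fraction) dB, and whatever is left after splitter and connector losses is the budget you have to close the link.

```python
# Illustrative passive-tap budget math. The 2.6 dB budget and 0.5 dB
# connector loss are example figures, not datasheet values.
import math

def split_loss_db(ratio):
    """Insertion loss (dB) on a splitter leg passing `ratio` of the light."""
    return -10 * math.log10(ratio)

def remaining_budget_db(link_budget_db, ratio, connector_loss_db=0.5):
    """Optical budget left on a tap leg after splitter and connector losses."""
    return link_budget_db - split_loss_db(ratio) - connector_loss_db

# Example: a short-reach multimode link with a hypothetical 2.6 dB budget
# and a 70/30 tap. The network leg keeps 70% of the light.
network_leg = remaining_budget_db(2.6, 0.70)   # positive: link survives
monitor_leg = remaining_budget_db(2.6, 0.30)   # negative: no budget left
```

Run with a tight budget, the monitor leg goes negative, which is exactly the "no budget left" tipping point described above.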

The solution is to go to active optical tapping. Active taps don't employ optical splitters (which have inherent insertion loss); instead, they regenerate the optical signal on both the network and monitor transmit sides. VSS Monitoring offers vBroker Series products with PowerSafe chassis modules, which also provide layer-1 Fail Open/Close state configurability on power loss (or as a manually forced Fail Open during power-on) and support a full range of optic technologies (10G MM, 40G MM, 40G MM BiDi, 40G SM, etc.).

Check out our full technical write up here: Tapping Low Optical Budget Links

Wednesday, June 25, 2014

Optimizing Monitoring Tools and Security Systems for 100Gbps & 40Gbps Networks

Most large organizations are either considering or have already begun to adopt higher bandwidth network infrastructure, typically 40G in the Enterprise and 100G in the carrier domain. Whenever a network undergoes a migration of that magnitude, the network monitoring and security architecture has to be revisited to ensure it’s ready to scale with the network.

Here are top three goals to keep in mind when redesigning the management infrastructure:
  1. Leverage what you have
  2. Maximize ROI
  3. Make it future proof
If there’s already a network packet broker (intelligent TAP) system in place—and in most large networks there will be—it should be used to properly “tune” the network traffic to the existing monitoring tools and security systems. Assuming the NPB system is sufficiently scalable and modular (and again, it should be), adding 100G or 40G capture interfaces/appliances will be fairly straightforward.
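One common way an NPB "tunes" a 40G or 100G link to existing 10G tools is flow-aware load balancing: hash each flow's identifying tuple and assign it to one of N lower-rate tool ports, so every packet of a flow lands on the same tool. The sketch below is a generic illustration of that technique, not VSS's implementation; the hash choice and tuple format are assumptions.

```python
# Hypothetical flow-aware load balancing: map each 5-tuple to one of
# n_ports tool ports deterministically, so flows are never split.
import zlib

def tool_port(five_tuple, n_ports):
    """Deterministically map a flow to a tool port index in [0, n_ports)."""
    key = "|".join(str(f) for f in five_tuple).encode()
    return zlib.crc32(key) % n_ports

# Example flow: (src IP, dst IP, protocol, src port, dst port)
flow = ("10.0.0.1", "10.0.0.2", 6, 49152, 443)
```

Because the mapping is a pure function of the flow tuple, each 10G tool sees complete flows, which is what session-aware analysis requires.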

Once the physical capture interfaces have been added, most of the functions needed to accomplish tool optimization are reasonably simple, but a few deserve emphasis. Check out this solution guide outlining the essentials of leveraging 1G and 10G toolsets across 40G and 100G networks:

 

Thursday, April 3, 2014

Packet Captures DO Matter

By Adwait Gupte, Product Manager

The other day, I overheard a discussion between a colleague and a market analyst over the value of packet-level information. The analyst didn’t think full packet capture made sense for NPM/APM tools, because they could perform their functions effectively using only metadata and/or flow statistics.

So, are network recorders and their ilk really on the way out? Is complete packet capture useless?

I argue “no.” And here’s why: APM tools can generally identify issues for a given application (e.g. Lync calls are dropping). These issues might arise from the compute infrastructure (slow processors, insufficient RAM), but they could also lie within the network infrastructure (link overload, badly tuned TCP parameters, etc.). In the latter case, the root cause would be extremely difficult to identify and debug without having a complete, packet-level record.

When investigating a breach or "exfiltration" (such as Target's), you absolutely need the full packet data, not just flow-level metrics (which show only that some activity occurred, not exactly "what" activity took place) or metadata (which shows that "some data" was sent out, not "which data" was sent out). Summarized flow statistics (or metadata) are inherently a lossy approach to "compressing" monitoring data. True, they take up less space and can be processed faster than full packets, but they omit information that could be critical to a discovery process.
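The lossiness is easy to demonstrate. In this toy example (the packets and fields are invented for illustration), aggregating packets into per-flow counters preserves "activity happened" while discarding the payload, which is precisely the evidence a breach investigation needs:

```python
# Toy demonstration of why flow summaries are lossy: the summary keeps
# packet and byte counts but drops the payload entirely.
from collections import defaultdict

packets = [
    {"flow": ("10.0.0.5", "203.0.113.9", 443), "payload": b"card#4111..."},
    {"flow": ("10.0.0.5", "203.0.113.9", 443), "payload": b"card#4222..."},
]

def summarize(pkts):
    """Collapse packets into NetFlow-style per-flow statistics."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for p in pkts:
        rec = flows[p["flow"]]
        rec["packets"] += 1
        rec["bytes"] += len(p["payload"])
    return dict(flows)

summary = summarize(packets)
```

The summary tells you 24 bytes left for 203.0.113.9; only the full capture tells you those bytes were card numbers.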

While full packet capture is not required to show that the application infrastructure is faultless when performance issues arise, it is certainly required when the problem is caused by the network, or when the exact data that was transmitted is needed for troubleshooting or security purposes. Full packet capture makes sense for both APM and security use cases. However, full packet capture for everything, all the time, is ridiculously cost prohibitive. Network engineers and security analysts need to capture just the data they need and no more.

Aside from the obvious compliance mandates, continuous packet capture prevents data gaps. Implemented efficiently, full packet capture is also feasible in terms of cost and management. One of the key elements of such efficiency is decoupling the data from vertically integrated tools. I covered probe virtualization in a previous post, but some of these points are worth repeating in the context of making full packet capture scalable:
  • Tools that integrate capture, storage, and analysis of packet data are expensive. They also have limited storage and compute capacity. If you run out of either, the only way to expand is to buy a new appliance. An open capture and storage infrastructure makes the scaling of at least those parts of the equation more cost effective.
  • NPM/APM tools already make use of complete packets in the sense that they hook into a network tap/span port and accept these packets. Whether they store them internally or process them on the fly and discard them depends on the tool. The point is, if we are able to separate the collection of the data (packet capture) from the consumption of the data (the NPM/APM analytics, forensics etc.), it makes the data a lot more versatile. We can collect the data once and use it for multiple purposes, anytime, anywhere.
  • The exact tool that will consume this data need not be known at the time of collection, since the data can be collected in an open format (e.g. PCAP). Such a format makes the data future proof. 
  • Virtualized analytics tools are on the horizon (customers are demanding them). When they arrive, these virtualized appliances will need to be fed data from a separate capture/storage infrastructure, although some of those functions can be handled by the Network Packet Brokers (NPBs) that collect the data across the network.
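The open-format point above can be made concrete: the classic PCAP layout is simple enough to write and read with the standard library alone, which is why captures stay usable by tools that did not exist at collection time. The sketch below is a minimal illustration (timestamps and packet bytes are made up; real captures should use a maintained library):

```python
# Minimal PCAP writer/reader using only the standard library, to show how
# simple and open the format is. Values are illustrative.
import struct
import io

def write_pcap(stream, packets, linktype=1):  # linktype 1 = Ethernet
    # Global header: magic, version 2.4, tz offset, sigfigs, snaplen, linktype
    stream.write(struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, linktype))
    for ts_sec, data in packets:
        # Per-record header: ts_sec, ts_usec, captured length, original length
        stream.write(struct.pack("<IIII", ts_sec, 0, len(data), len(data)))
        stream.write(data)

def read_pcap(stream):
    magic = struct.unpack("<IHHiIII", stream.read(24))[0]
    assert magic == 0xA1B2C3D4  # native byte order, microsecond timestamps
    out = []
    while True:
        hdr = stream.read(16)
        if len(hdr) < 16:
            return out
        ts_sec, _, incl_len, _ = struct.unpack("<IIII", hdr)
        out.append((ts_sec, stream.read(incl_len)))

buf = io.BytesIO()
write_pcap(buf, [(1700000000, b"\x00" * 60)])
buf.seek(0)
parsed = read_pcap(buf)
```

Because every field is documented and fixed-layout, a capture written today can be replayed into any future analytics tool, virtualized or otherwise.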
In addition to these straightforward benefits, preserving packet data for use by more than a single tool enables network data to be managed and utilized with “big data” systems. Decoupling packet capture from tools enables security analysts and network engineers to glean insights by unifying the siloed data. Network packet capture tools allow network data (which, hitherto, has been missing from the big data applications) to be brought into the big data fold and help uncover even more insights.

A full, historical record of packets (based on continuous capture as a separate network function) is not only useful but will remain relevant for the foreseeable future. A system that uses programmability to trigger packet capture on external events, forwarding packets in real time while simultaneously recording the flow of interest for asynchronous analysis, increases the value of such capture even further. Now, that's something only VSS Monitoring can do today (a post for another day).