
Friday, March 18, 2016

Confidence and Control at RSAC '16

A View of RSA from the Hall

RSAC ’16 hit San Francisco with a record number of attendees, topping out at 40,000, a 15% increase over 2015. The security conference by the Bay, “where the world talks security,” has seen steady growth over the past few years. The increase in attendance mirrors the growth of the industry and fears around cyber crime, cyber espionage and, well, anything cyber.

The exhibition hall was no different as vendors packed in, illustrating not only ongoing investment from the big guys like Fortinet, FireEye, Palo Alto and Cisco but also the more than $4.6 billion of venture capital that has flowed into start-ups over the past two years. There are a lot of solutions out there, as organizations strive to gain visibility into what is going on in their environments.

With all this attention, money and great parties at the W, are we any closer to achieving the very reason we are here? Not to get existential, as in the proverbial "why are we here," but where do our networks stand today? Are we any safer than we were just a few years ago? And who is managing all these shiny new boxes full of blinky lights? Isn't there a drought in the security talent pool as big as California's, some million strong? California is counting on El Niño to save the day. When is the info-sec rain coming, and will it bring with it much-needed talent? The only clouds we see drive a lack of control and visibility, and create an application and access nightmare.

Before we fall into the familiar pit of pessimism, let's not forget what we are all doing here. We are keeping the connection alive. Keeping the connected workforce on the go, bringing moms closer to their children, doctors to their patients and driving unprecedented economic growth. Guarding the connection is kind of cool, and it matters.

At RSA, visibility and control reigned supreme, combined with ease of management. A number of "single pane of glass" solutions aggregate your visibility at the management plane. These are great for seeing what is happening, or what has happened, within your network, and they even provide cool graphs. But they do nothing to feed the tools with the data that supplies that visibility, and they provide little control.

After visibility, the underlying issue of time to detection was everywhere. Plugging every hole and building a massive wall around our perimeter is no longer a viable form of defense in today's connected world. With every new device comes a new IP address and a new point of access. Time to detection measured in weeks, months or years is not something we can afford in the "it's not a matter of if but when" era of security incidents.

We need to see who has entered our network, where they have gone and what they have done. We must react and deploy a response quickly. Recognizing that failures will happen while establishing a well-orchestrated response is a sign of a maturing security posture. The ability to respond quickly and stay poised under pressure instills confidence in our systems and in the craft of securing the connected. Our security teams and systems need confidence more than anything, in response and in deployment. Yet many of these expensive tools are never deployed in active blocking mode for fear of disrupting the connection; where is the confidence in a partially implemented solution?

The exhibitors’ hall at RSA is full of possibilities for investment. But no single pane of glass, magic bullet or high-priced tool is going to be effective if we do not provide the proper support. A lack of personnel and a fear of automated systems are compounding a passive approach to prevention and detection. Teams are managing and deploying shiny new boxes and fighting for access to traffic and visibility, instead of actively protecting the connected.

A wise person once said, "Judge me not by the mistakes I make but by the lessons I learn." With these post-incident lessons, how do we respond not only with the right internal behavioral change but with the appropriate technology as well? The speed of deployment and confidence in implementation are essential factors in incident response. We need to be able to provision new solutions with confidence, with all available active in-line services up and running, while reducing management and provisioning overhead, freeing our teams from the deployment and management cycle and redeploying them to the protection cycle. This way we can not only be good, we can also be cool, until we all meet again in the City by the Bay.


Learn more about how you can confidently deploy security in your environment and mature your security posture without disrupting the network connection.

 http://www.vssmonitoring.com/security/

Wednesday, April 29, 2015

It's Visibly Clear from the RSA Conference 2015

Everyone is looking for more visibility at the RSA Conference, even the FBI's Fido!
 

Everyone at this year’s RSA Conference was speaking the same language: the need for, and the delivery of, more operational visibility. Even the weather seemed to agree with the visibility discussion as the clouds cleared away each afternoon.


At VSS Monitoring, our mission has always been centered on delivering total network visibility to optimize the effectiveness of your security and network monitoring tools. InfoSec professionals around the world rely on VSS to give their monitoring and security tools access and visibility to traffic across networks without requiring physical reconfiguration. We’ll talk more about that later. Right now, let’s focus on our top takeaways from this year’s RSA conference.
                                                                                                                

Moving security tools in line is a key step for many attendees

Get in line 

The rate at which new malware is being introduced into corporate networks leaves no choice but to place security tools inline. That much is clear. We heard from many attendees that bringing their security tools inline was critical. For some, this will be a first, and concerns about relying on SPAN ports, and about not disrupting the network, were top-of-mind issues to be solved in 2015.



Sandboxes provide a safe environment to analyze malware

Sandboxes are Popular

For others, sandboxing is viewed as the next step toward getting ahead of emerging attack vectors. Combining endpoint security with a secure sandbox environment to further analyze unknown files and malware is a popular deployment scenario we discussed. In this scenario, attendees were interested in learning how they could direct traffic to multiple tools while also accommodating behavioral sandboxing. We spoke with many attendees who needed to isolate, analyze and ultimately address malware in a safe, contained environment.



RSA attendees are focused on closing the loop for security analysis

Creating Closed Loops

Another way attendees are responding to the problem of unknown malware is with cloud-based threat monitoring and intelligence services. Attendees were keen to integrate cloud-based threat intelligence feeds and architect a closed monitoring loop using on-premise appliances as well as cloud-based services. We had several discussions on the different ways traffic could be directed through their tool chains and then forwarded out to a cloud-based security service for analysis.

While security tools will always be the darlings of the RSA Conference, we spoke to a number of people who were not planning to deploy any new tools in 2015. Instead, these attendees wanted to focus on how they could collect, analyze and direct the right data in real time to their existing tools. A truly refreshing thought.

It was good to see that the industry is quickly growing up and changing. Sound decision-making about security architecture, and about how everything (and everyone) needs to play together for effective security, was a welcome sight.

Friday, March 14, 2014

SDN Applications Alone Do Not Meet Customer Needs for Visibility and Security on Large-Scale Networks

By: Andrew R. Harding, Vice President of Products

Last week I wrote about how the term “network TAP” is being misused in the SDN world. I explained how engineers might combine TAPs, NPBs, and SDN in a solution, using the joint IBM and VSS Monitoring “Converged Monitoring Fabric” as an example. And, in the past week, the leading SDN proponent announced a "TAP"--that is, an automated SPAN configuration tool that works with OpenFlow switches. It's an interesting announcement, which you can read about here: http://www.sdncentral.com/news/onf-debuts-network-tapping-hands-on-openflow-education/2014/03/. The announcement was made at the Open Networking Summit, the annual meeting of the Open Networking Foundation (ONF). (http://www.opennetsummit.org/)

ONF, which has been led by Dan Pitt since 2011, and which some say has been driven by Nick McKeown from behind the curtains at Stanford, is moving from shepherding the OpenFlow specification to delivering an open-source project. (https://www.opennetworking.org/) This event is worth noting because their first SDN application is an “aggregation tap” that works with an OpenFlow controller and OpenFlow switches. This is quite a development for the ONF, which had spurned open source in the past and left white space in the SDN arena for single-vendor projects (like the languishing Project Floodlight) and multi-vendor projects like OpenDaylight (ODL). (http://www.opendaylight.org/) But it's not a TAP. This application requires tapping and TAPs to access traffic. An OpenFlow switch, alone, can't get traffic from the production network, and spanning ports directly from a production OpenFlow switch encounters precisely the same issues as traditional attempts at using SPAN.

Dan Pitt, speaking for the ONF, asserts that the project is merely an educational tool and that the open-source project, called “SampleTap,” is a “non-invasive, experimental project.” That sounds very much like the initial positioning of some embattled SDN startups, which touted their own "tapping" SDN application as “your first production SDN application.” The passive nature of tapping traffic and then aggregating that tapped traffic does make the use case a safe starting point for SDN. TAPs don't perturb the network. Combining TAPs with NPBs delivers visibility into network data. For simple use cases, such as educational and lab deployments, this open-source SDN application might provide a starting point for software engineers who need to learn about the network or for network engineers who are investigating SDN; a minimal sketch of the kind of flow rule such an application installs follows the list below. SDN code alone, however, fails to provide visibility and fails to improve security on large-scale networks. SDN applications alone, open-source or commercialized, do not meet those customer needs because:
  • Tools must be optimized. Switches can’t do this. They are limited to link aggregation, and very few production OpenFlow switches even support LAG.
  • Traffic must be groomed. Current switches cannot re-write packets. They cannot support port and time stamping. They can only support basic aggregation and filtering.
  • Monitoring fabric = hardware-accelerated meshed forwarding system. OpenFlow cannot do this today. NPBs and the vMesh architecture do this today.
  • Initial tapping is required. No SDN offering supports a complete solution from TAPs to passive NPB to active use cases.
  • Latency of white-box & commodity silicon switches is unacceptable for many applications.
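
To make that contrast concrete, here is a minimal sketch, written for illustration only, of the kind of flow rule such an "SDN tap" application pushes. It is not SampleTap itself: SampleTap is Java atop OpenDaylight, while this sketch uses Ryu, a Python OpenFlow 1.3 controller framework, and the port numbers are assumptions. Everything arriving on a monitored port is simply forwarded out a tool-facing port, which is SPAN-style aggregation, not tapping.

# A minimal, illustrative Ryu app (OpenFlow 1.3), not SampleTap: install one
# "aggregation" rule so everything arriving on a monitored port is forwarded
# to a tool-facing port. Port numbers are hypothetical.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

MONITORED_PORT = 1   # assumed port that receives the tapped/mirrored feed
TOOL_PORT = 2        # assumed port cabled to an analysis tool

class SpanStyleAggregator(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Match anything arriving on the monitored port...
        match = parser.OFPMatch(in_port=MONITORED_PORT)
        # ...and send it out the tool port. No packet rewriting, no port or
        # time stamping, no load balancing -- exactly the limitations above.
        actions = [parser.OFPActionOutput(TOOL_PORT)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))

Note that nothing in this sketch puts traffic onto the monitored port in the first place; a physical TAP, or a SPAN session with all its limits, still has to feed it.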


SDN apps alone are incomplete and must rely on NPBs and TAPs. Still, this sample application remains an intriguing development. It runs atop the OpenDaylight SDN Controller, the same platform as the converged monitoring fabric from VSS Monitoring and IBM. In a demo of the application, which is based on Java and HTML5, a multi-switch system supporting aggregation and OpenFlow filters was shown, along with very basic unidirectional service insertion, a simple approach to augmenting switch functionality with functions available only on remote systems. This kind of service insertion is a precursor to tool chaining and service chaining, which have been something of a holy grail in networking. The idea of “insertion” and “chaining” goes all the way back to Cisco’s venerable Service Insertion Architecture and Juniper’s “service chaining vision,” announced in 2013 with meager results thereafter. Using complex routing configurations or overloading ancient protocols such as WCCP in pursuit of chaining has been a bugaboo that led to many a network outage over the years, so a cleaner approach to chaining remains an important goal for networks.
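
To illustrate what "very basic unidirectional service insertion" amounts to, the sketch below expresses it as two flow entries using plain Python dictionaries rather than any particular controller API; all the port numbers are hypothetical.

# Conceptual sketch of unidirectional service insertion as two flow entries.
# The dictionaries stand in for OpenFlow match/action pairs; the ports are
# hypothetical. Real tool chaining also needs health checks and fail-safe
# bypass, which is what purpose-built NPBs layer on top of this idea.
INGRESS_PORT = 1      # production traffic enters here
SERVICE_TX_PORT = 3   # cabled to the remote service/tool input
SERVICE_RX_PORT = 4   # cabled to the remote service/tool output
EGRESS_PORT = 2       # traffic continues toward its destination

service_insertion_flows = [
    # 1. Divert traffic arriving on the ingress port out to the service.
    {"match": {"in_port": INGRESS_PORT},
     "actions": [("output", SERVICE_TX_PORT)]},
    # 2. Forward whatever the service returns on toward the network.
    {"match": {"in_port": SERVICE_RX_PORT},
     "actions": [("output", EGRESS_PORT)]},
]

# Chaining a second tool means inserting another divert/return pair between
# these two entries -- and deciding what happens when that tool fails.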

Robust service chaining can actually be delivered today, and it is deployed in many large-scale networks. In monitoring networks, the chaining of performance tools and passive IDS systems uses VSS Monitoring’s vBrokers. In active security deployments, service chaining for production traffic uses VSS vProtector, which was designed to provide simple, fail-safe service chaining. Today, to deliver the functionality that can be demonstrated in an educational application such as SampleTap from the ONF, network engineers need commercial systems.

As such applications evolve from OpenFlow 1.0 to the more recent and far more robust OpenFlow 1.3 standard, projects such as this sample application represent a new tool for investigating SDN in a well-known use case. This application can also help us clarify the differences between TAPs, NPBs, and SDN aggregation applications, and it might just foreshadow a method for combining SDN systems with NPBs. In discussions about the sample application, Dan Pitt has assured his listeners that there are no plans to turn SampleTap into a product. His goal is advancing OpenFlow, not delivering products, he said.

This announcement might just be a milestone: maybe OpenFlow 1.3, or a follow-up version of the specification, will mark the point at which users really need to consider integrating OpenFlow support more broadly and expecting that it will be used widely. The announcement stated that the application was tested with available OpenFlow switches and that the source will be available on ONF’s GitHub repository soon, licensed under the Apache 2.0 open-source license. The ONF has sponsored the job of building an application atop the OpenDaylight controller, which might be the death knell of earlier projects, such as the Floodlight controller, which seems to be languishing as its sponsor pivots to a new business focus. I look forward to further use of OpenFlow and integration between OpenFlow monitoring points and network packet brokers, such as that available from IBM and VSS Monitoring today. As for the open-source sample application, we all just need to wait a few days to get past the demo and get access to the code…

Tuesday, March 11, 2014

Definitions of SDN, TAPs, and What's Required to Monitor Large-Scale Networks

By: Andrew R. Harding, Vice President of Products 

Even if you don't know what a network TAP is, you should read this post, because a recent announcement from the Open Networking Foundation may have caused some confusion about the definitions of SDN, TAPs, and what’s required to monitor large-scale networks. (You can read the announcement here: http://www.sdncentral.com/news/onf-debuts-network-tapping-hands-on-openflow-education/2014/03/ .)

A network TAP is a tool that enables network engineers to access the data on networks to complete performance analysis, troubleshooting, security, and compliance tasks. Engineers tap the network with a TAP, and as networks grow in scale and complexity, tapping systems have evolved into monitoring switches and packet brokering fabrics. Such fabrics require TAPs, and other more sophisticated elements, to aggregate, filter, and optimize the tapped traffic.  These other elements are called Network Packet Brokers (NPBs) or Network Visibility Controllers. I will use the NPB moniker.

Using a TAP is an alternative to configuring mirror or SPAN ports on network switches. SPAN "mirrors" are switch ports that carry copies of network traffic. (SPAN stands for "Switched Port Analyzer" or "Switch Port for Analysis," depending on whom you talk to.) They have performance constraints, have physical limits, and perturb the system under analysis (as they are a subordinate function within a network switch), so most folks prefer to use TAPs in large-scale networks. Lately, some software engineers—or their collaborators in marketing—have been calling their software-defined networking applications "TAPs." This naming scheme is clever marketing, but it's not accurate.

If you step back and think about SPAN for a moment, while it's useful for ad-hoc data access, it is a fundamentally limited approach. Yes, it's integrated with the switch, but configuring a switch to copy every packet from several ports to another port on that switch is a silly idea. The switch will hit performance limits and start to drop packets, because that is what switches are designed to do when oversubscribed. Each switch also has only a limited number of SPAN ports. A passive TAP doesn't perturb the system, doesn't consume a switch port, and doesn't require switch configuration changes. A TAP simply splits off a copy of the traffic an engineer needs to access. Cisco itself recognizes the limits of SPAN ports and recommends: "the best strategy is to make decisions based on the traffic levels of the configuration and, when in doubt, to use the SPAN port only for relatively low-throughput situations." (http://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/san-consolidation-solution/net_implementation_white_paper0900aecd802cbe92.pdf)
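
As a rough illustration of why that advice matters, the back-of-the-envelope check below uses assumed link speeds and utilization figures to show how quickly a single SPAN destination oversubscribes when several busy ports are mirrored into it.

# Back-of-the-envelope SPAN oversubscription check. The link speeds and
# utilization figures are assumptions for illustration only.
span_port_gbps = 1.0                         # capacity of the single SPAN destination
mirrored_ports_gbps = [1.0, 1.0, 1.0, 1.0]   # four 1 Gb/s source ports
avg_utilization = 0.40                       # 40% average load on each source

offered_gbps = sum(mirrored_ports_gbps) * avg_utilization   # 1.6 Gb/s offered
dropped_gbps = max(0.0, offered_gbps - span_port_gbps)      # 0.6 Gb/s dropped

print(f"Offered to SPAN: {offered_gbps:.1f} Gb/s, "
      f"dropped: {dropped_gbps:.1f} Gb/s "
      f"({dropped_gbps / offered_gbps:.0%} of the monitored traffic)")

At 40% average utilization the mirror sources already offer 1.6 Gb/s to a 1 Gb/s SPAN port, and the switch silently discards the rest; a passive TAP on each link avoids that contention entirely.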

It’s obvious that a clever SDN marketeer would avoid calling the latest application "auto-SPAN" or "programmable SPAN" because that would limit it to the use cases where SPAN can meet the need: low-utilization scenarios. Networks need TAPs: no argument there. TAPs do not modify the system or the data under test. With TAPs, Heisenberg uncertainty does not apply. You get what's on the wire from a TAP, including physical-layer errors, which are sometimes required to sort out network issues. And TAPs don't drop packets. If you operate a network, you’re likely evaluating the benefits of a system that aggregates and filters network monitoring data to simplify delivering that data to performance tools and security systems. That's what network packet brokers do, at the most basic level. Optimizing that traffic, maximizing the use of performance tools, and simplifying large-scale security deployments are more advanced features. VSS Monitoring offers TAPs as well as basic and advanced NPBs. The SDN gang, for some reason, didn't choose to call their systems software-defined NPBs--maybe because they can't do what NPBs do? Or maybe because that’s a mouthful: SDN-NPBs. TLA2! And so these systems that use OpenFlow (or other means) to program a switch to aggregate monitored traffic have been called “taps.” (They could more accurately have been called “SDN aggregators.”)
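
For readers who have never touched an NPB, the toy sketch below illustrates the logical operation in plain Python: merge several tapped feeds into one stream, then filter out just the traffic a particular tool cares about. The packet records and filter rules are hypothetical, and real NPBs do this in purpose-built hardware at line rate.

# Toy, purely illustrative sketch of NPB-style aggregation and filtering.
from typing import Dict, Iterable, List, Set

def aggregate(feeds: Iterable[Iterable[Dict]]) -> List[Dict]:
    """Merge packet records from several tapped links into one stream."""
    return [pkt for feed in feeds for pkt in feed]

def filter_for_tool(packets: Iterable[Dict], tcp_ports: Set[int]) -> List[Dict]:
    """Keep only the traffic a particular tool cares about (here, web traffic)."""
    return [p for p in packets
            if p.get("proto") == "tcp" and p.get("dst_port") in tcp_ports]

tap_feed_a = [{"proto": "tcp", "dst_port": 443}, {"proto": "udp", "dst_port": 53}]
tap_feed_b = [{"proto": "tcp", "dst_port": 80}]

to_web_tool = filter_for_tool(aggregate([tap_feed_a, tap_feed_b]), {80, 443})
print(to_web_tool)   # only the HTTP/HTTPS records reach the web security tool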

These SDN aggregation applications might more accurately be described as an SDN “forwarding system,” because they don’t actually support tapping at all. IBM and VSS Monitoring, in fact, have qualified a solution that combines SDN technology with network packet brokers. This solution supports TAPs and NPBs, and it integrates with SDN systems, too. You can learn more about this "converged monitoring fabric" here: http://public.dhe.ibm.com/common/ssi/ecm/en/qcs03022usen/QCS03022USEN.PDF and here: http://www.vssmonitoring.com/resources/SolutionBriefs/VSS-IBM%20SDN_Solution%20Brief.pdf


Furthermore, VSS offers the option to have TAP port pairs integrated into the NPBs themselves. SDN switches do not support integrated TAPs and require additional products to actually TAP the network. The vMesh architecture is a network fabric (though not a general-purpose fabric, as it is optimized for monitoring networks and security deployments). To deploy such a fabric, SDN or otherwise, you cannot make progress without TAPs. You can't forget Layer 1.