You can’t make it run better if you don’t know where the problem lies II

Riverbed executive outlines how the company has assembled a portfolio of tools focused on improving IT resource performance.

In an IT organization you’ll often end up with four silos, each of which has its own tools that do a great job of proving a problem is not that silo’s fault. This is the CIO’s frustration. “I don’t want four tools that prove the innocence of this or that organization. I want one tool that identifies where the problem is so we can go fix it.” So that’s what we are doing with our integration of OPNET. We had great network performance management, but we did not have application performance management. We needed to address performance end-to-end and be able to say: with our performance management tools we can tell you where the problem is, whether it’s on the client, in the network, in the data center, in one of the servers, in storage, or in the application code.
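
To make that concrete, here is a minimal sketch of the kind of segment-by-segment blame assignment an end-to-end tool performs. The segment names and timing values are hypothetical illustrations, not Riverbed’s actual data model or algorithm:

```python
# Minimal sketch: given per-segment latency for one slow transaction,
# name the segment to go fix. Segment names and numbers are
# hypothetical illustrations, not Riverbed's actual data model.

def slowest_segment(timings_ms: dict) -> str:
    """Return the segment contributing the most latency."""
    return max(timings_ms, key=timings_ms.get)

transaction = {
    "client":      40,   # browser rendering / client-side processing
    "network":     120,  # WAN round trips
    "server":      35,   # app-tier CPU time
    "storage":     300,  # back-end I/O wait
    "application": 60,   # time spent inside the application code
}

print(slowest_segment(transaction))  # -> storage: go fix the I/O path
```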

Competitors like F5 are already addressing application performance issues in many big shops and can keep adding functions, like WAN optimization, to their ADC platform, so how do you differentiate yourselves?

With F5 it’s actually easy because they’re not really in the WAN optimization game. They don’t have a remote site box. They only do data-center-to-data-center replication optimization. So you can add a module to your BIG-IP and do some WAN optimization data-center-to-data-center: two sites that have high bandwidth between them moving big chunks of data. There they have a chance to compete with us. But at the remote site, we don’t ever see them. We are, however, going into their space with our software-based application delivery controller. And they would say, “Well, Riverbed isn’t in the whole application delivery controller market; they don’t even have an appliance.” And they would be correct.

Our contention is that the market for application delivery controllers in the data center is moving, consistent with all the trends in the industry, to software. Network Function Virtualization (NFV) is the hot term, and ADC is a perfect example of NFV. We see it firsthand: we have 80% growth in that business and did over $10 million in the quarter. Eighty percent growth on something that big is meaningful.

And architecturally there is a move to place ADCs closer to applications. There are so many applications and they have different requirements of ADCs, but if the ADCs are all in software you can achieve massive density, all elastic, with cloud-like scale, and that is very attractive in this day and age.

If they are hardware-based, every time you add more apps you add more boxes. Some very large banks have hundreds of ADCs. Looking forward, that model is less appealing. So we believe we’re riding that wave. While we’re not in the entire market, we are in the fastest growing segment, and it is consistent with SDN and network function virtualization.
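
For readers who want the “ADC in software” point made concrete, below is a toy sketch of the core of a software load balancer: a TCP reverse proxy that spreads incoming connections across a back-end pool round-robin. The addresses and ports are hypothetical, and this is nothing like a full ADC (no health checks, TLS offload, or L7 rules), let alone Stingray itself:

```python
import asyncio
import itertools

# Hypothetical back-end pool; a real ADC adds health checks, TLS
# offload, L7 routing rules, and much more.
BACKENDS = itertools.cycle([("10.0.0.11", 8080), ("10.0.0.12", 8080)])

async def pipe(reader, writer):
    """Copy bytes one way until the sender closes."""
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle_client(client_reader, client_writer):
    host, port = next(BACKENDS)  # round-robin selection
    backend_reader, backend_writer = await asyncio.open_connection(host, port)
    # Shuttle bytes in both directions until either side closes.
    await asyncio.gather(
        pipe(client_reader, backend_writer),
        pipe(backend_reader, client_writer),
    )

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", 8000)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```

Because each instance is just a process, you can run one per application and scale elastically, which is exactly the density argument above.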

I was going to bring up SDN. Provided SDN emerges as a force, which appears increasingly likely, what will it mean to you guys?
The architecture is appealing. The implementations of the architecture haven’t quite nailed it yet. So what does SDN mean to us? We either love it or it is orthogonal to us. We love it because the architecture suggests an increased concentration of stuff into data centers, and any time the data center increases in its power, control, authority, functionality, that’s great for us because the workforce continues to be massively distributed so you’re going to have performance problems. And you’re also going to have visibility problems because SDN turns all these things into tunnels and makes application visibility very opaque. Three hundred applications suddenly become one tunnel. You’ll need performance management tools to manage that environment.
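
To illustrate the visibility point, assume the overlay is VXLAN (one common SDN tunneling scheme, RFC 7348). From the outside, every flow in the tunnel shares the tunnel endpoints’ addresses and UDP port 4789; a monitoring tool has to decapsulate to recover the inner, per-application flow. A minimal sketch, assuming an inner IPv4 packet:

```python
import struct

def vxlan_inner_flow(udp_payload: bytes):
    """Recover the inner application flow from a VXLAN-encapsulated
    UDP payload (RFC 7348). Assumes the inner packet is IPv4 with no
    IP options; a real probe also handles IPv6, VLAN tags, fragments,
    and other overlay encapsulations."""
    if not udp_payload[0] & 0x08:
        raise ValueError("VXLAN 'valid VNI' flag not set")
    vni = int.from_bytes(udp_payload[4:7], "big")   # 24-bit segment ID
    eth = udp_payload[8:]                           # inner Ethernet frame
    (ethertype,) = struct.unpack("!H", eth[12:14])
    if ethertype != 0x0800:
        raise ValueError("sketch only handles inner IPv4")
    ip = eth[14:]
    proto = ip[9]                                   # 6 = TCP, 17 = UDP
    src_ip = ".".join(str(b) for b in ip[12:16])
    dst_ip = ".".join(str(b) for b in ip[16:20])
    sport, dport = struct.unpack("!HH", ip[20:24])  # assumes 20-byte IP header
    return vni, src_ip, dst_ip, proto, sport, dport
```

Without that decapsulation step, those three hundred applications really do look like one opaque UDP flow between two tunnel endpoints.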

It’s a bit orthogonal to us in that we do all of our work in Layers 4 through 7. And SDN, in its initial instantiation, is really focused on Layer 2 inside of a data center. I realize in concept it applies to Layer 3 as well, and it has been extended with network function virtualization and service-chaining to include our stuff as well. So at Layer 2 and Layer 3 it’s orthogonal to us. But we’ll work great with it. We’ll add visibility to it.

As it extends in architecture and concept into network function virtualization, all of our products need to be software. They all need to be managed by orchestration systems and a variety of controllers. Part of our Stingray application delivery controller is already designed for that model, and over the past four years we’ve taken our Steelhead WAN optimization product and created four different software flavors to fit into that environment: Virtual Steelhead, Cloud Steelhead, Steelhead Cloud Accelerator, and Steelhead Mobile. These are all software versions of Steelhead that can fit that architecture, and we’re fully committed to RESTful APIs, so we will work nicely in that environment.
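
As a rough illustration of what orchestration over RESTful APIs looks like, here is a sketch using a hypothetical endpoint and payload; the host, resource path, and fields are invented for illustration and are not Riverbed’s published API:

```python
import json
import urllib.request

# Hypothetical host, path, and payload for illustration only; real
# resource paths belong in the vendor's published REST API docs.
BASE = "https://steelhead.example.com/api/v1"

def set_optimization(service: str, enabled: bool, token: str) -> dict:
    """Let an orchestration system toggle a feature over REST."""
    body = json.dumps({"service": service, "enabled": enabled}).encode()
    req = urllib.request.Request(
        f"{BASE}/optimization",
        data=body,
        method="PUT",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# e.g. set_optimization("http-acceleration", True, token="...")
```

The point is simply that any controller or cloud orchestrator that can issue HTTP requests can drive the appliance, which is what makes the software form factor composable.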

But SDN, in its initial instantiation, where it is trying to overcome the east/west bottlenecks inside the data center at Layer 2, we add visibility where visibility was lost and we work great with it, because we’re Layers 4 through 7.

Will Steelhead ever talk OpenFlow?
Yeah. But we integrate with the network in so many different ways that it would just be adding another way of integrating. We aren’t trailblazing OpenFlow; we’re not carrying the flag for it. As the market adopts and uses it, we will make sure it is an interface that is available.
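
For context, OpenFlow is essentially a controller pushing match/action flow rules down to the data path. The sketch below models that match/action idea in plain Python; it is a conceptual toy, not the binary OpenFlow wire protocol:

```python
# A toy match/action flow table in the OpenFlow style: a controller
# installs prioritized rules, and the data path applies the first
# match. This models the concept only; real OpenFlow is a binary
# wire protocol spoken between controller and switch.

flow_table = []  # (match_dict, action), highest priority first

def install_flow(match: dict, action: str) -> None:
    flow_table.append((match, action))

def apply_flows(packet: dict) -> str:
    for match, action in flow_table:
        if all(packet.get(field) == value for field, value in match.items()):
            return action
    return "send-to-controller"  # table miss: punt to the controller

install_flow({"dst_port": 443}, "forward:port2")
install_flow({"src_ip": "10.1.1.5"}, "drop")

print(apply_flows({"src_ip": "10.2.2.2", "dst_port": 443}))  # forward:port2
print(apply_flows({"src_ip": "10.3.3.3", "dst_port": 22}))   # send-to-controller
```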

OK. Any closing thoughts?
One thing. We just did a Steelhead announcement that I think is going to be a big deal for the next four years in our space: something called hybrid networking. The prevailing architecture at remote sites today is one MPLS connection, but that is going to give way to an MPLS connection plus an Internet connection, and that Internet connection will carry two paths: one where you VPN back to the data center and one where you go straight out to the Internet. So we’ll have three paths: a path that’s private, a path that’s virtual private, and a path to the Internet.

The Internet used to be a toy. Employees would be going to ESPN or shopping online, so many companies were cutting that off because they wanted people working. Well, YouTube is a phenomenal business tool now, and social networking is important to doing your job. And the economics are staggering if you go from an MPLS connection to the Internet: you can provide far more capacity at a lower cost. So there’s a variety of benefits that I think will make hybrid networking a bigger deal.

And the reason I give that little spiel is that our latest release is step one of a multistep process of adding path selection to all Steelheads. That means I can take important apps and put them on the MPLS connection, take less important apps and put them on the VPN connection, and offer direct-to-Internet connections for still other traffic. And if the VPN connection or Internet connection goes down, traffic can roll over to the MPLS connection, all managed by QoS. Or vice versa: if you lose the MPLS connection, that traffic can roll over to the Internet connections. So you get the ability to do application performance management over a hybrid network with a lot of visibility and control. I think that is a very big deal, and it’s consistent with where things are headed.
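
A minimal sketch of that path-selection-with-failover logic, with hypothetical application classes, path names, and preference orders (not Riverbed’s actual policy engine):

```python
# Path selection with failover, as described in the interview. The
# app classes, path names, and preference orders are hypothetical.

PATH_PREFERENCES = {
    "critical":    ["mpls", "vpn", "internet"],   # e.g. ERP, VoIP
    "standard":    ["vpn", "mpls", "internet"],   # e.g. file shares
    "best-effort": ["internet", "vpn", "mpls"],   # e.g. YouTube, SaaS
}

def select_path(app_class: str, path_up: dict) -> str:
    """Return the first preferred path that is currently up."""
    for path in PATH_PREFERENCES[app_class]:
        if path_up.get(path):
            return path
    raise RuntimeError("no path available")

status = {"mpls": False, "vpn": True, "internet": True}  # MPLS is down
print(select_path("critical", status))  # -> "vpn": rolled over off MPLS
```

In practice a QoS layer would sit on top of this, so that traffic rolling onto a surviving path doesn’t starve the applications already on it.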

