SDN: Capability or Context?

Does software define software-defined?

Why does the definition of SDN continue to be debated?

I think the definition of SDN remains a bit squishy. And while I am not entirely certain that it matters (people shouldn’t be buying SDN; they should be building networks), it is an interesting phenomenon, and understanding it better could help with the education process.

When most people talk about what SDN is, they tend to fall into two camps: principles and protocols. You will frequently hear SDN described as the separation of control and forwarding planes. You probably hear people talking about SDN needing to be “open” (a horribly imprecise term as I have argued before). These are the people who fall on the principles side. They point less to specific instantiations of technologies and more to the guidelines that define SDN.

The other camp will point to specific protocols and technologies. They rally around the OpenFlow banner for sure, but they might include other technologies like BGP-TE, PCE, ALTO, and I2RS. They see SDN as an architecture with specific building blocks, and the presence of those building blocks determines the SDN-ness of a solution.

I actually don’t think that either of these positions is correct.

I was debating last week whether GMPLS was SDN. It certainly focuses on the separation of the control and forwarding planes. It is an open standard. It is absolutely implemented in software. It seems to hit most of the framework criteria for inclusion in the SDN camp. The conclusion of whether GMPLS is SDN or not is less interesting than the discussion that surrounded it.

Does software define software-defined? Claiming something is software-defined because it is implemented in software is probably among the lamest definitional requirements around. The reality is that the vast majority of traditional networking features are implemented in software. In fact, the major vendors spend north of 80% of their R&D on software-related efforts. By this definition, everything is software-defined.

The real distinction people seem to be trying to make when they talk about software implementations is whether the functionality is resident on a networking device, or whether it sits somewhere on top of the network (as with a controller). But we should be clear about this. Whether some application runs on or off the box is a packaging detail, not some core attribute. Networking devices all have some forwarding ASIC and a general-purpose processor. Whether you write something to run natively within the sheet metal or on some server somewhere is irrelevant. Put differently, if your vendor of choice decided to ship their boxes with the central processing card physically separated (it sits a half micron on top, with separate sheet metal, power, and cooling), would you suddenly brand the solution software-defined?

[Special callout to Mike Dvorkin (@dvorkinista) who frequently makes this argument on social channels.]

Is the separation of control and forwarding the meaningful determinant? Network device behavior is all state-driven. Whether that state is determined by persistent configuration or learned through some protocol is secondary. Put more simply, how important is it how state gets onto the device? If you set the state via an on-box CLI or via a controller, does that make the solution any more or less SDN?

When most people talk about control and forwarding, they are really having a discussion about management planes. Controller-based solutions certainly separate the management plane. But so do policy servers, OSS/BSS solutions, and even well-written Perl scripts that pull information from a version control system as part of device management.
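To make that last point concrete, here is a minimal sketch of an off-box "management plane" of exactly that kind: a script that pulls the desired device state from version control and pushes it down over NETCONF. Everything in it is illustrative; it assumes the ncclient library, a hypothetical device reachable at 10.0.0.1 that accepts NETCONF sessions, and a local git checkout holding a ready-to-send config payload.

    # Illustrative sketch only: the device address, credentials, repo path,
    # and file name are made-up placeholders. Assumes the ncclient library
    # and a device that speaks NETCONF over SSH on port 830.
    import subprocess
    from pathlib import Path

    from ncclient import manager

    REPO = Path("/opt/network-configs")          # hypothetical git checkout
    CONFIG_FILE = REPO / "leaf1" / "config.xml"  # a ready-to-send <config> payload

    # Step 1: pull the latest intended state from version control.
    subprocess.run(["git", "-C", str(REPO), "pull", "--ff-only"], check=True)
    config_xml = CONFIG_FILE.read_text()

    # Step 2: push that state to the device over NETCONF.
    # (A real device might require the candidate datastore plus a commit;
    # writable-running is assumed here to keep the sketch short.)
    with manager.connect(
        host="10.0.0.1",
        port=830,
        username="admin",
        password="change-me",
        hostkey_verify=False,
    ) as conn:
        conn.edit_config(target="running", config=config_xml)

Whether that logic lives in a cron job, a policy server, or a controller, the device ends up with the same state; the question is where the decision about that state gets made.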

My point here is not to say that separation is not important, but rather that it is likely not enough by itself to determine the SDN-ness of a particular solution.

Does Open make something SDN? No one will say that merely being open (for whatever definition of open you mean) is enough to make something SDN. The real question is whether something can be SDN and not be open. The answer here gets pretty religious, but that is largely dependent on how people have defined SDN. Can you build a software-implemented, controller-based solution that uses proprietary protocols? Absolutely. If that solution is deployed for 8 years and then the IETF ratifies a standard for the base protocol, has your deployed solution gone from non-SDN to SDN despite the lack of solution changes?

So where did all of this conversation land?

It’s not that I think there are no important principles to consider before labeling something SDN. I just think that it is less about technology and more about context. It is absolutely conceivable to me that a particular technology can exist in both SDN and non-SDN architectures. How a protocol is used determines whether it is SDN or not. The examples are virtually endless, but I would start with things like BGP, XMPP, NETCONF, YANG, and yes, even GMPLS. Similarly, I think there are controller-based solutions that are non-SDN, just as there might be non-controller-based solutions that could be SDN.
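As one concrete illustration of "same protocol, different context": BGP spoken hop-by-hop between routers is plain routing, but the very same protocol driven from a central process becomes a programming channel. The sketch below is purely illustrative; it assumes ExaBGP (which launches a helper process and reads announce/withdraw commands from its stdout) and made-up prefixes and next hops, and it simply lets an off-box policy decide where a prefix should point.

    #!/usr/bin/env python3
    # Illustrative ExaBGP helper process (prefix and next hops are made up).
    # ExaBGP starts this script from a "process" block in its configuration
    # and reads one command per line from our stdout, turning a central
    # policy decision into routes injected via ordinary BGP.
    import sys
    import time

    PREFIX = "203.0.113.0/24"   # prefix the policy wants to steer
    PRIMARY_NH = "192.0.2.1"    # next hop chosen by the central policy
    BACKUP_NH = "192.0.2.2"     # alternative next hop

    def announce(prefix, next_hop):
        sys.stdout.write("announce route %s next-hop %s\n" % (prefix, next_hop))
        sys.stdout.flush()

    # Steer traffic toward the primary next hop, then pretend some global
    # input (telemetry, a schedule, an operator) shifts it to the backup.
    announce(PREFIX, PRIMARY_NH)
    time.sleep(30)
    announce(PREFIX, BACKUP_NH)

    # Stay alive so ExaBGP keeps the announced routes in place.
    while True:
        time.sleep(60)

Nothing about BGP itself changed here; only the context in which it is being used did.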

This means that the conversation needs to move away from the technological building blocks and more to the contexts that matter. I’ll offer up three here:

  • Delegation – OSS/BSS systems have already addressed the management problems inherent in networks built from different devices delivered by different vendors. Couldn’t the solution simply be to implement master translators that push configuration down to however many devices there are? It seems to me that SDN is about removing the complexity of managing individual elements, and that can only happen through delegation. Central controllers are great, but only if they can pass requirements to individual elements rather than having to manage them all in detail. The analogy I like here is the modern corporation: imagine how effective your company would be if your CEO told every individual what to do. Delegation matters.
  • Abstraction – And delegation depends on abstraction. If the goal of SDN is to make workflows more manageable and networks better (more easily managed, more responsive to applications, more intelligent, more whatever), then we need to abstract out some of the complexity. We need to work less in device-specific directives (read: configuration knobs) and more in overarching intent; a toy sketch of this division of labor follows this list. The only way that different parts of the IT infrastructure can ever collaborate is through a common language, and that will require abstraction. Expecting compute, storage, or applications to speak in terms of VLANs and ACLs is no more practical than turning network admins into storage or compute junkies.
  • Globality – Centralizing control is not about where software runs; it is about what that software can do. The whole premise of controller-based solutions is that having a global view of the available resources allows for more intelligent decisions to be made. If your network behaves exactly the same way with or without OpenFlow (meaning all traffic effectively uses the same paths), then does it even matter if you call it SDN or not? We need to be in the business of doing things better, not just different. And that requires globality.
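To give the delegation and abstraction points a concrete shape, here is a toy sketch. Everything in it is invented for illustration (the intent fields, switch names, and rendering rules are not any real product’s API): a "controller" hands each element one piece of overarching intent, and each element renders that intent into its own device-specific directives using knowledge only it has.

    # Toy illustration only: the intent schema, switch names, and rendering
    # rules are made up. The point is the division of labor, not the syntax.
    from dataclasses import dataclass

    @dataclass
    class Intent:
        # One piece of overarching intent, free of device-specific knobs.
        name: str
        src_group: str
        dst_group: str
        action: str  # e.g. "deny" or "permit"

    def render_for_switch(intent, local_ports):
        # Each element translates the intent into its own directives; the
        # controller never sees or manages these per-device lines.
        rules = []
        for port in local_ports.get(intent.src_group, []):
            rules.append("acl %s %s from-port %d to-group %s"
                         % (intent.name, intent.action, port, intent.dst_group))
        return rules

    # The "controller" only delegates intent; each switch keeps its own
    # knowledge of which local ports belong to which group.
    intent = Intent(name="isolate-dev", src_group="dev", dst_group="prod", action="deny")
    switches = {
        "leaf1": {"dev": [1, 2], "prod": [10]},
        "leaf2": {"dev": [3], "prod": [11, 12]},
    }
    for name, ports in switches.items():
        print(name, render_for_switch(intent, ports))

The controller stays out of the port-level details entirely; that is the delegation, and the intent object is the abstraction that makes the delegation possible.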

These might not be the only (or even right) contexts to think about, but they at least start to frame the discussion differently. I think it is entirely possible to build open, controller-based systems that fail to deliver against any of the promises of SDN, just as it is possible to use existing technologies in new ways. Ultimately, it is the context – not the capability – that determines whether the promises of SDN are achievable.

[Today's fun fact: A car that shifts manually gets 2 miles more per gallon of gas than a car with automatic shift. Of course all that extra work requires more sustenance, so it's about a wash environmentally.]

The post SDN: Capability or Context? appeared first on Plexxi.


More Stories By Michael Bushong

The best marketing efforts leverage deep technology understanding with a highly approachable means of communicating. Plexxi's Vice President of Marketing Michael Bushong has acquired these skills having spent 12 years at Juniper Networks, where he led product management, product strategy and product marketing organizations for Juniper's flagship operating system, Junos. Michael spent his last several years at Juniper leading their SDN efforts across both service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase, and ASIC design tool companies Synopsys and Magma Design Automation. Michael's undergraduate work at the University of California Berkeley in advanced fluid mechanics and heat transfer lends new meaning to the marketing phrase "This isn't rocket science."
