Musings on Neural Networking by @DaveGraham

Given that my last post was in November of 2013 (trust me, I’ve been busy), I figured I’d start out with a heady topic like “Neural Networking” in an age where Deep Machine Learning and perhaps its lesser cousin, assisted Machine Learning (I’ll define both in a bit), seem to be all the rage. However, before we begin, I want to make a few things clear:

  • I’m no expert in these fields.
  • I’m musing out loud here.  You’re my audience and what you determine to be salient and what you deem junk is, well, your problem, not mine.
  • DML/AML, Neural Networking, and a whole host of other terms, acronyms, mindf**k level events, etc. are here. Deal with it.

So with such an illustrious preface, I suppose we should let the party begin.

I’ve always had a fascination with the way information is acquired and processed. Reading back through the history of this site, you can see this tendency towards more fanciful thinking, e.g., GPGPU-assisted network analytics, or future storage systems using Torrenza-style processing. What was once theory has made its way into the realm of praxis; one need look no further than ICML 2015, for example, to see the forays into DML that NVIDIA is making with its GPUs. And on the story goes. Having said all this, there are elements of data, of data networking, and of data processing which, to date, have NOT gleaned all the benefits of this type of acceleration. To that end, what I am going to attempt to posit today is an area where Neural Networking (or at least the benefits therein) can be usefully applied to something we interact with every single nanosecond of every day: the network.

Glossary:
Before we get much further, we should probably define some of the terms I will be using:

  • Deep Machine Learning (DML): a burgeoning area of machine learning research focused on machine intelligence built upon the underlying principles of neural networking
  • Assisted Machine Learning (aka Hybrid; AML): a half-step towards DML where prepended processing is done by fixed systems within a rough grid approach, and learning takes place on these processed chunks of data
  • Neural Networking: “a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs” (from “Neural Network Primer: Part I” by Maureen Caudill, AI Expert, Feb. 1989); a minimal sketch of such a processing element follows this list
  • Packet Forwarding Engines (PFE): the base level of packet-processing hardware in a contemporary network switch
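
To make Caudill’s definition a little more concrete, here is a minimal sketch (in Python, with made-up weights) of one such simple processing element, whose output is nothing more than a dynamic response to its weighted external inputs:

```python
import math

def neuron(inputs, weights, bias):
    """One processing element: its output is a dynamic response
    to its weighted external inputs (here, a sigmoid)."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Two interconnected elements: the output of one feeds the next.
hidden = neuron([0.5, 0.9], weights=[0.4, -0.6], bias=0.1)
output = neuron([hidden], weights=[1.2], bias=-0.3)
print(round(output, 3))
```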

State of the Union: Networks
To talk about the future, some mention is needed of the current state of affairs in systems networking.

Packet Forwarding Engines (PFEs) are the muscle of networking switches. Today’s PFEs are routinely more powerful, both custom and mainline/merchant. Companies like Cisco, Broadcom, XPliant, Intel, Marvell, Juniper, etc. have propagated designs and delivered increasingly scalable devices that can process billions of bits of information at a time. The traceable curve here closely follows an analog of Moore’s law while not exactly staying within the same bounds (e.g., Broadcom’s Trident/Trident+ and the currently shipping Trident 2 are not all that far removed from each other in frequency, scale, latency, or processing power). If we allow for cross-vendor comparisons, the story changes somewhat and, to my mind, the curve becomes even more pronounced. Comparing custom silicon from Juniper or Cisco to that of Broadcom, for example, shows a higher level of capability in these more custom designs, albeit with a slower time to market. All this is said by way of pointing out that, compared to host-level processor development (like Intel’s Xeon/Core and AMD’s APU/CPU line-ups), these specialized processing units have a different scale-in/scale-out process. Consequently, their application has been mostly stagnant: a switch line or two released at a regular cadence of roughly 18 months, interspersed by the next important part of networking: the software.

Software development is as critical to the current state of networking as the hardware is. Relying on fixed-pipeline devices (as the Trident 2 is) requires a certain level of determinism to be designed into the software that controls them. With the seminal development of software development kits (SDKs), this decoupling has allowed vendors to write against a known set of functions with a healthy separation from the underlying hardware. This abstraction has both increased the functionality and capability of these systems (e.g., Broadcom’s concept of a programmable unified forwarding table (UFT)) and allowed for agile development of the overlaying software (e.g., quicker time to market for a network operating system (NOS) built on top of said SDK). Having this level of functionality is important, as it allows more agile decisions to be made as standards or protocols are ratified for implementation. An NOS is only as capable as the hardware it runs upon, however, and that leads us to the third part of the current network: the control plane processing.
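
Before moving on, here is a minimal sketch of that SDK-style decoupling. The class and method names below are hypothetical illustrations, not Broadcom’s (or anyone’s) actual SDK API; the point is only that the NOS codes against an abstract interface while the hardware backend varies underneath it.

```python
from abc import ABC, abstractmethod

class ForwardingSDK(ABC):
    """Hypothetical hardware-abstraction layer: the NOS writes
    against these functions, never against PFE registers."""

    @abstractmethod
    def program_route(self, prefix: str, next_hop: str) -> None: ...

    @abstractmethod
    def read_counters(self, port: int) -> dict: ...

class TridentBackend(ForwardingSDK):
    """One possible backend; a different ASIC gets a different
    backend while the NOS above stays unchanged."""
    def program_route(self, prefix: str, next_hop: str) -> None:
        print(f"programming {prefix} -> {next_hop} into the UFT")
    def read_counters(self, port: int) -> dict:
        return {"port": port, "rx_packets": 0, "tx_packets": 0}

# The NOS is written once, against the abstract interface.
def install_default_route(sdk: ForwardingSDK) -> None:
    sdk.program_route("0.0.0.0/0", "10.0.0.1")

install_default_route(TridentBackend())
```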

The control plane of a network switch is the brain of the operation. A PFE is useless as a commodity processor: examine its structure closely and you’ll see its functional blocks are designed for very purpose-driven applications. This type of processing, while important for the datagrams it functionally serves, is useless for running more banal applications like an NOS. However, generic processing hardware, like PowerPC, MIPS, ARM, or even x86 cores, can be harnessed to manage this type of workload very effectively. In recent years, there has been increasing momentum toward moving these control plane processing entities from more archaic and proprietary architectures like PPC and MIPS to more modern, commercially available standards like ARM and x86. This move has allowed the control plane to be modernized from an embedded system to a discrete “system on a switch” running a modern operating system and either virtualizing the NOS (as in Juniper’s QFX5100 switch line) or partitioning it via containers or some other level of abstraction. The benefits of such systems cannot be ignored: again, time to market and feature development become more agile. (Side note: the role of ARM as a valid control plane foundation cannot be overlooked and will be the subject of another post in the not-so-distant future.)

In summary, the current networking switch present in the data center comprises a PFE, a network operating system (NOS), and a control plane to run the NOS. This is not unlike a commodity server with lots of physical interfaces designed for the ingress and egress of data. These switches are increasingly complex and performant, and they provide a robust foundation upon which to build neural networks.

Becoming Neural, not Neurotic
When you walk into your living room, tell your Xbox One to turn itself on (“Xbox On!”), and watch as the always-listening machine powers up your TV and itself and then quickly scans you to determine your identity, you’re watching machine learning in action. This process makes use of both audio and visual cueing and localization of data (a core component of neural networking) to derive identity and causality. You had to walk through a setup process to capture both your image and your vocalization. This was stored in a local database and used as a reference point. The system is given rough control points to operate against but is functionally able to interact against this baseline; case in point, depending on my level of beard growth, my Xbox has varying levels of success in determining who I am by sight. The same goes for my iPhone, my Android, my Amazon Echo, etc. Each of these machines has a minimal database connected to a backend process (the “cloud” or another hosted platform) and performs a fixed function (voice recognition, facial recognition). All this explanation is to demonstrate that we’re in the throes of neural networks without even realizing it. If we look at the network as a necessary part of this process, it becomes the springboard for incredible capability.
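
This is not how the Xbox actually implements recognition, but a toy sketch of the baseline idea is easy to write: enroll a reference feature vector per user during setup, then match new observations against those baselines with a distance threshold. Every name and number below is invented for illustration.

```python
import math

# Hypothetical enrolled baselines captured during setup
# (e.g., a face/voice feature vector per user).
BASELINES = {
    "dave": [0.82, 0.10, 0.55],
    "guest": [0.20, 0.90, 0.35],
}

def identify(sample, threshold=0.5):
    """Match a new observation against stored baselines;
    beard growth and the like shows up as added distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    name, d = min(((n, dist(sample, b)) for n, b in BASELINES.items()),
                  key=lambda t: t[1])
    return name if d < threshold else "unknown"

print(identify([0.78, 0.15, 0.50]))  # close to the "dave" baseline
print(identify([0.50, 0.50, 0.50]))  # too far from both -> "unknown"
```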

So how can a transport layer become “neural”? Looking back at our definition of “neural networks,” we see that at its very foundation is the concept of connectedness. A network is a collection of interconnected devices using some sort of medium, whether copper, optical, or radio frequency, that allows them to interoperate or exchange data. Transporting data, whether electrical, radio frequency, or optical, is just that: transport. It implies neither intelligence nor insight. The sender and the receiver, however, can operate on data and make decisions with some level of determinism, and this is where we will focus. Historically, one would look to the systems attached to the transport layer as the true members of the network. However, as noted previously, with the advent of “system on a switch” control planes, we suddenly have the appearance of systems as joining points, not just transport pipes.

Moving further: if these transport junctions or pipes suddenly develop the intelligence, based on no inputs but data, to route “conversations” in ways that logically make sense and deliver value to the sender, the receiver, or both, have we achieved a neural network? We can see a basic interworking of this in the use of LLDP (Link Layer Discovery Protocol) as a low-level exchange of “who are you?” information, but this is derived from extant specifications of what a datagram should look like. This doesn’t flout the concepts of neural networking, but it does reveal that the data, exclusive of content and context, is already known. So, the next logical leap is how that data is interpreted.
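
LLDP itself is specified in IEEE 802.1AB, and its messages are built from type-length-value (TLV) fields: a 7-bit type and a 9-bit length, followed by the value. The sketch below encodes and decodes a single TLV in that style; it is a simplified illustration, not a conformant LLDP implementation.

```python
import struct

def encode_tlv(tlv_type: int, value: bytes) -> bytes:
    """LLDP-style TLV header: 7-bit type, 9-bit length, then value."""
    header = (tlv_type << 9) | len(value)
    return struct.pack("!H", header) + value

def decode_tlv(data: bytes):
    (header,) = struct.unpack("!H", data[:2])
    tlv_type, length = header >> 9, header & 0x1FF
    return tlv_type, data[2 : 2 + length]

# Type 5 is the System Name TLV in 802.1AB; the value is illustrative.
tlv = encode_tlv(5, b"leaf-switch-01")
print(decode_tlv(tlv))  # (5, b'leaf-switch-01')
```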

Let’s presuppose that LLDP has provided two neighboring switches with each other’s identity, capabilities, and proximity. What then? As hosts are connected on either side, data will flow based on the hosts’ requirements for connectedness and data. The transport layer, at that point, is nothing more than transport: simple forwarding devices. However, let’s also assume that these two switches each have a system attached to their respective control planes that is constantly watching traffic as it flows across and is “learning.” What these switches are learning can be treated as raw input and can be manipulated and quantified as such. In a neural networking world, these systems are nascent: raw, with no heuristic capability yet designed.
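
A nascent learner of that kind could be as simple as a passive observer on the control plane that accumulates raw counts of what crosses the switch, with no heuristics applied yet. The sketch below is a hypothetical illustration of that idea; the class name and addresses are invented.

```python
from collections import Counter

class FlowObserver:
    """Nascent learner: accumulates raw observations of the traffic
    crossing the switch, with no heuristics applied yet."""
    def __init__(self):
        self.flows = Counter()

    def observe(self, src: str, dst: str, nbytes: int) -> None:
        self.flows[(src, dst)] += nbytes

    def raw_input(self):
        """What a later learning stage would consume."""
        return self.flows.most_common()

obs = FlowObserver()
obs.observe("10.0.0.5", "10.0.1.9", 1500)
obs.observe("10.0.0.5", "10.0.1.9", 9000)
obs.observe("10.0.2.2", "10.0.1.9", 64)
print(obs.raw_input())
```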

The situation described above is precisely why networking systems function so reliably today: they’re not tasked with anything beyond fixed parameters or inspection. Think of it: the IETF and IEEE have specified what a datagram should look like. It should have Layer-2 source and destination media access control (MAC) addresses along with a payload, for example. But beyond this, what is accomplished? The PFE looks for datagrams that conform to these standards and passes them along; anything else is malformed and dropped. You quickly reach a situation where, heuristically, you’re limiting these machines to being simple engines that receive parameters and do as they’re told. What, then, could be done?
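
To make the fixed-parameter point concrete, here is a toy version of that conformance check: parse the Layer-2 header the standards prescribe and drop anything that doesn’t fit. Nothing is learned; the device simply does as told.

```python
import struct

ETH_HEADER = struct.Struct("!6s6sH")  # dst MAC, src MAC, EtherType

def forward_or_drop(frame: bytes):
    """Fixed-pipeline behavior in miniature: conformant frames pass,
    anything malformed is dropped, and nothing is learned."""
    if len(frame) < ETH_HEADER.size:
        return None  # malformed: drop
    dst, src, ethertype = ETH_HEADER.unpack_from(frame)
    if ethertype < 0x0600:
        return None  # below 0x0600 is an 802.3 length, not an EtherType
    return dst.hex(":"), src.hex(":"), hex(ethertype)

frame = bytes.fromhex("ffffffffffff" "020000000001" "0800") + b"payload"
print(forward_or_drop(frame))       # parsed header fields
print(forward_or_drop(b"\x00\x01"))  # too short -> None (dropped)
```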

Vision Casting
I can sit here and postulate any number of ideas that my peers have already proposed. I’m more interested in what we can do with the data that is already present. We can argue that daemons running in the kernel, statistics packages that collect PFE-published data points, and other such utilities are useful. In a way they are, but they represent a subset of capabilities and are mostly human-driven (AML at its finest). What if, however, each time a request is made, the switch learned which data points are being requested and viewed, and could selectively feed only the most salient points back to its consumers without flooding them with useless information? What if this were a priori knowledge for the receiver (in the classic SNMP use case)? What if this were machine-driven (DML) and became part of the flow?
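
As a hypothetical sketch of that request-driven salience, imagine the switch keeping a running count of which counters its consumers actually ask for and publishing only those, rather than flooding every PFE-published data point at them. The counter names and the top-N cutoff below are invented.

```python
from collections import Counter

class SalientTelemetry:
    """Hypothetical sketch: the switch learns which counters its
    consumers actually request and pushes only those, instead of
    flooding every PFE-published data point (the SNMP-walk model)."""
    def __init__(self, top_n=2):
        self.requests = Counter()
        self.top_n = top_n

    def record_request(self, counter_name: str) -> None:
        self.requests[counter_name] += 1  # learn what is salient

    def publish(self, all_counters: dict) -> dict:
        salient = {name for name, _ in self.requests.most_common(self.top_n)}
        return {k: v for k, v in all_counters.items() if k in salient}

t = SalientTelemetry(top_n=2)
for name in ["rx_errors", "rx_errors", "queue_depth", "fan_rpm"]:
    t.record_request(name)
print(t.publish({"rx_errors": 12, "queue_depth": 77,
                 "fan_rpm": 5400, "temp_c": 41}))
```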

For a network to become “aware” and fully realized as neural in nature (presupposing, as my conclusion, the eventual coupling of machine state to machine state through a hyperaware network), it must be able to functionally process data on its own, either by simple heuristic learning (profiling, as noted above, is just one method) or through the contrived mechanisms of its NOS in a non-rigid manner (i.e., not L2 learning, etc.). Certainly the use of standardized protocols for initial communication is encouraged, since it lets heterogeneous systems communicate without proprietary lower-level protocols like HiGig, but beyond this initial negotiation, the hope and desire is that learning, forwarding, reporting, and engaging become autonomous and self-forming. As systems interact, decisions will be made based on what the datagram contains, how the PFE is responding to traffic flows and utilization, and what the next connected device is doing. This capability is present, to some extent, in today’s systems that use a network management system (NMS) that can see the network holistically for what it is, but this external intelligence is, again, driven from the outside in and not organic to the devices themselves.
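
A deliberately naive sketch of that kind of autonomous decision, combining what the datagram contains, how loaded the local PFE is, and what each connected neighbor reports about itself, might look like the following. Every field, name, and threshold is invented for illustration.

```python
def choose_next_hop(datagram: dict, pfe_util: float, neighbors: list):
    """Toy autonomous forwarding decision: weigh the datagram's
    contents, the local PFE's utilization, and neighbor state.
    All thresholds are illustrative, not engineered."""
    candidates = [n for n in neighbors if n["reachable"]]
    if datagram.get("latency_sensitive") and pfe_util > 0.8:
        # under local pressure, prefer the least-loaded neighbor
        return min(candidates, key=lambda n: n["utilization"])["name"]
    # otherwise prefer the shortest advertised path
    return min(candidates, key=lambda n: n["hops"])["name"]

neighbors = [
    {"name": "spine-1", "reachable": True, "utilization": 0.9, "hops": 1},
    {"name": "spine-2", "reachable": True, "utilization": 0.3, "hops": 2},
]
print(choose_next_hop({"latency_sensitive": True}, 0.85, neighbors))   # spine-2
print(choose_next_hop({"latency_sensitive": False}, 0.20, neighbors))  # spine-1
```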

Conclusion
I’ve laid out what I hope is the framework for an ongoing discussion of neural networks (without delving into AML/DML this go-around) and their role within the actual network space. I’m curious to hear your thoughts (constructive, please).

More Stories By Dave Graham

Dave Graham is a Technical Consultant with EMC Corporation, where he focuses on designing and architecting private cloud solutions for commercial customers.

@ThingsExpo Stories
Recently, REAN Cloud built a digital concierge for a North Carolina hospital that had observed that most patient call button questions were repetitive. In addition, the paper-based process used to measure patient health metrics was laborious, not in real-time and sometimes error-prone. In their session at 21st Cloud Expo, Sean Finnerty, Executive Director, Practice Lead, Health Care & Life Science at REAN Cloud, and Dr. S.P.T. Krishnan, Principal Architect at REAN Cloud, discussed how they built...
No hype cycles or predictions of a gazillion things here. IoT is here. You get it. You know your business and have great ideas for a business transformation strategy. What comes next? Time to make it happen. In his session at @ThingsExpo, Jay Mason, an Associate Partner of Analytics, IoT & Cybersecurity at M&S Consulting, presented a step-by-step plan to develop your technology implementation strategy. He also discussed the evaluation of communication standards and IoT messaging protocols, data...
With tough new regulations coming to Europe on data privacy in May 2018, Calligo will explain why in reality the effect is global and transforms how you consider critical data. EU GDPR fundamentally rewrites the rules for cloud, Big Data and IoT. In his session at 21st Cloud Expo, Adam Ryan, Vice President and General Manager EMEA at Calligo, examined the regulations and provided insight on how it affects technology, challenges the established rules and will usher in new levels of diligence arou...
To get the most out of their data, successful companies are not focusing on queries and data lakes, they are actively integrating analytics into their operations with a data-first application development approach. Real-time adjustments to improve revenues, reduce costs, or mitigate risk rely on applications that minimize latency on a variety of data sources. In his session at @BigDataExpo, Jack Norris, Senior Vice President, Data and Applications at MapR Technologies, reviewed best practices t...
Smart cities have the potential to change our lives at so many levels for citizens: less pollution, reduced parking obstacles, better health, education and more energy savings. Real-time data streaming and the Internet of Things (IoT) possess the power to turn this vision into a reality. However, most organizations today are building their data infrastructure to focus solely on addressing immediate business needs vs. a platform capable of quickly adapting emerging technologies to address future ...
In his Opening Keynote at 21st Cloud Expo, John Considine, General Manager of IBM Cloud Infrastructure, led attendees through the exciting evolution of the cloud. He looked at this major disruption from the perspective of technology, business models, and what this means for enterprises of all sizes. John Considine is General Manager of Cloud Infrastructure Services at IBM. In that role he is responsible for leading IBM’s public cloud infrastructure including strategy, development, and offering m...
"Evatronix provides design services to companies that need to integrate the IoT technology in their products but they don't necessarily have the expertise, knowledge and design team to do so," explained Adam Morawiec, VP of Business Development at Evatronix, in this SYS-CON.tv interview at @ThingsExpo, held Oct 31 – Nov 2, 2017, at the Santa Clara Convention Center in Santa Clara, CA.
In his session at 21st Cloud Expo, Raju Shreewastava, founder of Big Data Trunk, provided a fun and simple way to introduce Machine Leaning to anyone and everyone. He solved a machine learning problem and demonstrated an easy way to be able to do machine learning without even coding. Raju Shreewastava is the founder of Big Data Trunk (www.BigDataTrunk.com), a Big Data Training and consulting firm with offices in the United States. He previously led the data warehouse/business intelligence and B...
The 22nd International Cloud Expo | 1st DXWorld Expo has announced that its Call for Papers is open. Cloud Expo | DXWorld Expo, to be held June 5-7, 2018, at the Javits Center in New York, NY, brings together Cloud Computing, Digital Transformation, Big Data, Internet of Things, DevOps, Machine Learning and WebRTC to one location. With cloud computing driving a higher percentage of enterprise IT budgets every year, it becomes increasingly important to plant your flag in this fast-expanding busin...
Nordstrom is transforming the way that they do business and the cloud is the key to enabling speed and hyper personalized customer experiences. In his session at 21st Cloud Expo, Ken Schow, VP of Engineering at Nordstrom, discussed some of the key learnings and common pitfalls of large enterprises moving to the cloud. This includes strategies around choosing a cloud provider(s), architecture, and lessons learned. In addition, he covered some of the best practices for structured team migration an...
22nd International Cloud Expo, taking place June 5-7, 2018, at the Javits Center in New York City, NY, and co-located with the 1st DXWorld Expo will feature technical sessions from a rock star conference faculty and the leading industry players in the world. Cloud computing is now being embraced by a majority of enterprises of all sizes. Yesterday's debate about public vs. private has transformed into the reality of hybrid cloud: a recent survey shows that 74% of enterprises have a hybrid cloud ...
22nd International Cloud Expo, taking place June 5-7, 2018, at the Javits Center in New York City, NY, and co-located with the 1st DXWorld Expo will feature technical sessions from a rock star conference faculty and the leading industry players in the world. Cloud computing is now being embraced by a majority of enterprises of all sizes. Yesterday's debate about public vs. private has transformed into the reality of hybrid cloud: a recent survey shows that 74% of enterprises have a hybrid cloud ...
DevOps at Cloud Expo – being held June 5-7, 2018, at the Javits Center in New York, NY – announces that its Call for Papers is open. Born out of proven success in agile development, cloud computing, and process automation, DevOps is a macro trend you cannot afford to miss. From showcase success stories from early adopters and web-scale businesses, DevOps is expanding to organizations of all sizes, including the world's largest enterprises – and delivering real results. Among the proven benefits,...
@DevOpsSummit at Cloud Expo, taking place June 5-7, 2018, at the Javits Center in New York City, NY, is co-located with 22nd Cloud Expo | 1st DXWorld Expo and will feature technical sessions from a rock star conference faculty and the leading industry players in the world. The widespread success of cloud computing is driving the DevOps revolution in enterprise IT. Now as never before, development teams must communicate and collaborate in a dynamic, 24/7/365 environment. There is no time to wait...
Cloud Expo | DXWorld Expo have announced the conference tracks for Cloud Expo 2018. Cloud Expo will be held June 5-7, 2018, at the Javits Center in New York City, and November 6-8, 2018, at the Santa Clara Convention Center, Santa Clara, CA. Digital Transformation (DX) is a major focus with the introduction of DX Expo within the program. Successful transformation requires a laser focus on being data-driven and on using all the tools available that enable transformation if they plan to survive ov...
SYS-CON Events announced today that T-Mobile exhibited at SYS-CON's 20th International Cloud Expo®, which will take place on June 6-8, 2017, at the Javits Center in New York City, NY. As America's Un-carrier, T-Mobile US, Inc., is redefining the way consumers and businesses buy wireless services through leading product and service innovation. The Company's advanced nationwide 4G LTE network delivers outstanding wireless experiences to 67.4 million customers who are unwilling to compromise on qua...
SYS-CON Events announced today that Cedexis will exhibit at SYS-CON's 21st International Cloud Expo®, which will take place on Oct 31 - Nov 2, 2017, at the Santa Clara Convention Center in Santa Clara, CA. Cedexis is the leader in data-driven enterprise global traffic management. Whether optimizing traffic through datacenters, clouds, CDNs, or any combination, Cedexis solutions drive quality and cost-effectiveness. For more information, please visit https://www.cedexis.com.
SYS-CON Events announced today that Google Cloud has been named “Keynote Sponsor” of SYS-CON's 21st International Cloud Expo®, which will take place on Oct 31 – Nov 2, 2017, at the Santa Clara Convention Center in Santa Clara, CA. Companies come to Google Cloud to transform their businesses. Google Cloud’s comprehensive portfolio – from infrastructure to apps to devices – helps enterprises innovate faster, scale smarter, stay secure, and do more with data than ever before.
SYS-CON Events announced today that Vivint to exhibit at SYS-CON's 21st Cloud Expo, which will take place on October 31 through November 2nd 2017 at the Santa Clara Convention Center in Santa Clara, California. As a leading smart home technology provider, Vivint offers home security, energy management, home automation, local cloud storage, and high-speed Internet solutions to more than one million customers throughout the United States and Canada. The end result is a smart home solution that sav...
SYS-CON Events announced today that Opsani will exhibit at SYS-CON's 21st International Cloud Expo®, which will take place on Oct 31 – Nov 2, 2017, at the Santa Clara Convention Center in Santa Clara, CA. Opsani is the leading provider of deployment automation systems for running and scaling traditional enterprise applications on container infrastructure.