Musings on Neural Networking By @DaveGraham | @CloudExpo #Cloud

Given my last post was in November of 2013 (trust me, I’ve been busy), I figured I’d start out with a heady topic like “Neural Networking” in an age where Deep Machine Learning and perhaps its lesser cousin, assisted Machine Learning (I’ll define both in a bit), seem to be all the rage. However, before we begin, I want to make a few things clear:

  • I’m no expert in these fields.
  • I’m musing out loud here.  You’re my audience and what you determine to be salient and what you deem junk is, well, your problem, not mine.
  • DML/AML, Neural Networking, and a whole host of other terms, acronyms, mindf**k level events, etc. are here. Deal with it.

So with such an illustrious preface, I suppose we should let the party begin.

I’ve always had a fascination with the way information is acquired and processed. Reading back through the history of this site, you can see this tendency towards more fanciful thinking, e.g., GPGPU-assisted network analytics, future storage systems using Torrenza-style processing. What was once theory has made its way into the realm of praxis; look no further than ICML 2015, for example, to see the forays into DML that nVidia is making with their GPUs. And on the story goes. Having said all this, there are elements of data, of data networking, of data processing, which, to date, have NOT reaped all the benefits of this type of acceleration. To that end, what I am going to attempt to posit today is an area where Neural Networking (or at least the benefits therein) can be usefully applied to something we interact with every single nanosecond of every day: the network.

Glossary:
Before we get much further, we should probably define some of the terms that I will be using:

  • Deep Machine Learning (DML): a burgeoning area of machine learning research focused on machine intelligence, utilizing the underlying principles of neural networking
  • Assisted Machine Learning (aka Hybrid; AML): a half-step towards DML where prepended processing is done by fixed systems within a rough grid approach, and learning takes place on these processed chunks of data.
  • Neural Networking: “a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs.” (In “Neural Network Primer: Part I” by Maureen Caudill, AI Expert, Feb. 1989; see the sketch after this list)
  • Packet Forwarding Engines (PFE): the base level of hardware in a contemporary network switch
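
To make that definition of neural networking a bit more tangible, here’s a minimal sketch (plain Python/NumPy, entirely my own illustration, nothing vendor-specific) of “simple, highly interconnected processing elements” producing a dynamic response to an external input:

```python
import numpy as np

def layer(inputs, weights, bias):
    """One layer of simple processing elements: each element sums its
    weighted inputs and responds through a nonlinearity."""
    return np.tanh(inputs @ weights + bias)

rng = np.random.default_rng(0)

# Two fully interconnected layers: 4 external inputs -> 8 hidden elements -> 2 outputs.
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

external_input = np.array([0.2, -1.0, 0.5, 0.3])
# The "dynamic state response to external inputs" from Caudill's definition.
output = layer(layer(external_input, w1, b1), w2, b2)
print(output)
```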

State of the Union: Networks
To talk about the future, some mention is needed of the current state of affairs in systems networking.

Packet Forwarding Engines (PFEs) are the muscle of networking switches. Today we routinely see more powerful PFEs, both custom as well as mainline/merchant. Companies like Cisco, Broadcom, Xpliant, Intel, Marvell, Juniper, etc. have propagated designs and delivered increasingly scalable devices that can process billions of bits of information at a time. The traceable curve here closely follows an analog of Moore’s law while not exactly staying within the same bounds (e.g., Broadcom’s Trident/Trident+ and the currently shipping Trident 2 are not all that far removed from each other in frequency, scale, latency, or processing power). If we allow for interstitial comparisons cross-vendor, the story changes somewhat and, to my mind, the curve becomes even more pronounced. Comparing custom silicon from Juniper or Cisco to that of Broadcom, for example, shows a higher level of capability present in these more custom designs, albeit with a slower time to market. All this is said by way of pointing out that, compared to host-level development of processors (like Intel’s Xeon/Core and AMD‘s APU/CPU lineups), these specialized processing units have a different scale-in/scale-out process. Consequently, their application has been mostly stagnant: a switch line or two released with a regular cadence of roughly 18 months or so, interspersed by the next important part of networking: the software.

Software development is as critical to the current state of networking as the hardware is. Relying on fixed-pipeline devices (as the Trident 2 is) requires a certain level of determinism to be designed into the software that controls them. With the seminal development of software development kits (SDKs), this decoupling has allowed vendors to write against a known set of functions with a healthy separation from the underlying hardware. This abstraction has both enabled increasing functionality and capability within the systems (e.g., Broadcom’s concept of a programmable unified forwarding table (UFT)) and allowed for agile development of the overlaying software (e.g., quicker time to market for a network operating system (NOS) built on top of said SDK). Having this level of functionality is important, as it allows more agile decisions to be made as standards or protocols are ratified for implementation. An NOS is only as capable as the hardware it runs upon, however, and that leads us to the third part of the current network: the control plane processing.
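
To illustrate the decoupling I’m describing, here’s a hypothetical sketch. The names (ForwardingSdk, program_route, Trident2Sdk) are mine, not any vendor’s actual API; the shape is the point: the NOS writes against a known set of functions and never touches the PFE directly.

```python
from abc import ABC, abstractmethod

class ForwardingSdk(ABC):
    """Hypothetical SDK boundary: the NOS codes against these functions
    and stays healthily separated from the underlying hardware."""

    @abstractmethod
    def program_route(self, prefix: str, next_hop: str) -> None: ...

    @abstractmethod
    def read_counters(self, port: int) -> dict: ...

class Trident2Sdk(ForwardingSdk):
    """One illustrative vendor-specific implementation behind the same interface."""

    def program_route(self, prefix: str, next_hop: str) -> None:
        print(f"writing {prefix} -> {next_hop} into the fixed pipeline's tables")

    def read_counters(self, port: int) -> dict:
        return {"port": port, "rx_packets": 0, "tx_packets": 0}

# The NOS stays hardware-agnostic: swap the SDK, keep the code.
def nos_install_default_route(sdk: ForwardingSdk) -> None:
    sdk.program_route("0.0.0.0/0", "10.0.0.1")

nos_install_default_route(Trident2Sdk())
```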

The control plane of a network switch is the brain of the operation. A PFE is useless as a commodity processor; if you examine its structure closely, its functional blocks are designed for very purpose-driven applications. This type of processing, while important for the datagrams it will functionally serve, is useless for running more banal applications like an NOS. However, generic processing hardware, like PowerPC, MIPS, ARM, or even x86 cores, can be harnessed to manage this type of workload very effectively. In recent years, there has been increasing momentum toward moving these control plane processing entities from more archaic and proprietary architectures like PPC and MIPS to more modern and commercially available standards like ARM and x86. This move has allowed for modernizing the control plane from an embedded system to a discrete “system on a switch” running modern operating systems and either virtualizing the NOS (e.g., Juniper’s QFX5100 switch line) or partitioning via containers or some other level of abstraction. The benefits of such systems cannot be ignored, as, again, time to market and feature development become more agile in nature. (Side note: the role of ARM as a valid control plane foundation cannot be overlooked and will be the subject of another post at some point in the not-so-distant future.)

In summary, the current networking switch present in the data center comprises a PFE, a network operating system (NOS), and a control plane to run the NOS. This is not unlike a commodity server with lots of physical interfaces designed for ingress and egress of data. These switches are increasingly complex and performant, and they provide a robust foundation upon which to build neural networks.

Becoming Neural, not Neurotic
When you walk into your living room, tell your Xbox One to turn itself on (“Xbox On!”), and watch as the always-listening machine powers up your TV and itself and then quickly scans you to determine identity, you’re watching machine learning in action. This process makes use of both audio and visual cueing and localization of data (a core component of neural networking) to derive identity and causality. You had to walk through a setup process to capture both your image and your vocalization. This was stored in a local database and used as a reference point. The system is given rough control points to operate against but is functionally able to interact against this baseline; case in point, depending on my level of beard growth, my Xbox has varying levels of success in determining who I am by sight. The same goes for my iPhone, my Android, my Amazon Echo, etc. Each of these machines has a minimal database connected to a backend process (the “cloud” or another hosted platform) and performs a fixed function (voice recognition, facial recognition). All this explanation is to demonstrate that we’re in the throes of neural networks without even realizing it. If we look at the network as a necessary part of this process, it becomes the springboard for incredible capability.
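
The actual recognition pipelines in these devices are proprietary, so consider this only a hedged sketch of the shape of the idea: enroll a reference vector in a local database during setup, then match later observations against that baseline. The vectors and threshold here are invented for illustration.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Enrollment: store a reference embedding per user (a stand-in for the
# face/voice features captured during the setup process).
local_db = {"dave": np.array([0.9, 0.1, 0.4, 0.3])}

def identify(observation: np.ndarray, threshold: float = 0.8) -> str | None:
    """Match a new observation against the local baseline; a heavy beard
    shifts the vector and the similarity score drops, as described above."""
    best_user, best_score = None, threshold
    for user, reference in local_db.items():
        score = cosine_similarity(observation, reference)
        if score > best_score:
            best_user, best_score = user, score
    return best_user

print(identify(np.array([0.85, 0.15, 0.42, 0.28])))  # -> 'dave'
print(identify(np.array([-0.5, 0.9, -0.2, 0.1])))    # -> None (unrecognized)
```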

So how can a transport layer become “neural”? Looking back at our definition of “neural networks,” we see that at its very foundation is the concept of connectedness. A network is a collection of interconnected devices using some sort of medium, whether copper, optical, or radio frequency, that allows them to interoperate or exchange data. Transporting data, whether electrical, radio frequency, or optical, is just that: transport. It implies neither intelligence nor insight. The sender and the receiver, however, can operate on data and make decisions with some level of determinism, and this is where we will focus. Historically, one would look to the systems attached to the transport layer as the true members of the network. However, as noted previously, with the advent of “system on a switch” control planes, suddenly we have the appearance of systems as joining points, not just transport pipes.

Moving further, if these transport junctions or pipes suddenly develop the intelligence, based on no other inputs but data, to route “conversations” or data in ways that logically make sense and have derived value to either the sender, the receiver, or both, have we achieved a neural network? We can see some basic inner workings of this in the use of LLDP (Link Layer Discovery Protocol) as a low-level exchange of “who are you?” information, but this is derived from extant specifications of what a datagram should look like. This isn’t flouting the concepts of neural networking, but it does mean the data, exclusive of content and context, is known already. So, the next logical leap is how that data is interpreted.
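
For the curious, that “who are you?” exchange is built from type-length-value (TLV) triples, each with a 7-bit type and a 9-bit length, which is exactly the kind of extant specification I mean. A simplified sketch of encoding and decoding such TLVs (not a full IEEE 802.1AB implementation) might look like:

```python
import struct

def encode_tlv(tlv_type: int, value: bytes) -> bytes:
    """LLDP packs a 7-bit type and a 9-bit length into a 16-bit header,
    followed by the value itself."""
    header = (tlv_type << 9) | len(value)
    return struct.pack("!H", header) + value

def decode_tlvs(payload: bytes):
    offset = 0
    while offset < len(payload):
        (header,) = struct.unpack_from("!H", payload, offset)
        tlv_type, length = header >> 9, header & 0x1FF
        offset += 2
        yield tlv_type, payload[offset:offset + length]
        offset += length

# "Who are you?" -- advertise a system name (TLV type 5 in 802.1AB);
# a type-0, zero-length TLV marks the end of the list.
frame_payload = encode_tlv(5, b"leaf-switch-01") + encode_tlv(0, b"")
for tlv_type, value in decode_tlvs(frame_payload):
    print(tlv_type, value)
```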

Let’s presuppose that LLDP has provided two neighboring switches with each other’s identity, capability, and proximity. What then? As hosts are connected, one side to another, data will flow based on the hosts’ requirements for connectedness and data. The transport layer, at that point, is nothing more than transport: simple forwarding devices. However, let’s also assume that these two switches each have a system attached to their respective control planes that is constantly watching traffic as it flows across and is “learning.” What these switches are learning can be perceived as raw input and can be manipulated and quantified as such. In a neural networking world, these systems are nascent: raw, with no heuristic capability yet designed.
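
What might that attached, constantly watching system look like at its most nascent? Something as raw as this sketch, which is entirely hypothetical: no heuristics applied, just accumulation of raw input for a learning system to consume later.

```python
from collections import defaultdict

class FlowProfiler:
    """Hypothetical control-plane agent: watches traffic crossing the
    switch and accumulates raw per-flow state -- input for later
    learning, with no heuristic capability yet designed."""

    def __init__(self):
        self.flows = defaultdict(lambda: {"packets": 0, "bytes": 0})

    def observe(self, src: str, dst: str, length: int) -> None:
        stats = self.flows[(src, dst)]
        stats["packets"] += 1
        stats["bytes"] += length

    def snapshot(self) -> dict:
        """Raw input that a learning system could manipulate and quantify."""
        return dict(self.flows)

profiler = FlowProfiler()
profiler.observe("10.0.0.1", "10.0.0.2", 1500)
profiler.observe("10.0.0.1", "10.0.0.2", 64)
print(profiler.snapshot())
```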

The situation described above is precisely why networking systems function so reliably today: they’re not tasked with anything beyond fixed parameters or inspection. Think of it: the IETF and IEEE have specified what a datagram should look like. It should have Layer-2 source and destination media access control (MAC) addresses along with a payload, for example. But beyond this, what is accomplished? The PFE is looking for datagrams that conform to these standards to pass along; anything else is malformed and dropped. You quickly reach a situation where, heuristically, you’re limiting the overall potential of these machines to that of simple engines, receiving parameters and doing as told. What, then, could be done?
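
A toy model of that fixed-parameter behavior, assuming just two illustrative conformance rules (a minimum frame length and a valid unicast source MAC), might look like:

```python
ETH_HEADER_LEN = 14   # dst MAC (6) + src MAC (6) + EtherType (2)
ETH_MIN_FRAME = 60    # minimum Ethernet frame size, excluding the FCS

def conforms(frame: bytes) -> bool:
    """Pass frames that match the specified shape; drop everything else.
    This is the 'doing as told' behavior: no interpretation, just rules."""
    if len(frame) < ETH_MIN_FRAME:
        return False                  # runt frame: malformed, drop
    src = frame[6:12]
    if src[0] & 0x01:
        return False                  # multicast source MAC is invalid, drop
    return True

frame = b"\xff" * 6 + bytes.fromhex("aabbccddeeff") + b"\x08\x00" + bytes(50)
print(conforms(frame))         # True: well-formed, forwarded
print(conforms(b"\x00" * 10))  # False: malformed, dropped
```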

Vision Casting
I could sit here and postulate any number of ideas that my peers have already explored. I’m more interested in what we can do with the data that is already present. We can argue that daemons that run in the kernel, statistics packages that collect PFE-published data points, or other such utilities are useful. In a way they are, but they represent a subset of capabilities and are mostly human-driven (AML at its finest). What if, however, each time a request is made, the switch learns which data points are being requested and viewed, and is able to selectively feed only the most salient points back to its consumers without flooding them with tons of useless information? What if this is a priori knowledge for a receiver (in the classic SNMP use case)? What if this is machine-driven (DML) and becomes part of the flow?
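
As a hedged first cut, here’s what “learning what’s requested” might look like: the switch counts which statistics its consumers actually ask for and streams back only the most salient ones instead of the full flood. All names here are hypothetical.

```python
from collections import Counter

class TelemetryLearner:
    """Hypothetical switch-side agent: remembers which statistics its
    consumers actually request and feeds back only the salient ones."""

    def __init__(self, top_k: int = 3):
        self.requests = Counter()
        self.top_k = top_k

    def on_request(self, counter_name: str) -> None:
        self.requests[counter_name] += 1   # learn from consumer behavior

    def salient_points(self, all_stats: dict) -> dict:
        """Return only the most-requested points, not the full flood."""
        wanted = [name for name, _ in self.requests.most_common(self.top_k)]
        return {name: all_stats[name] for name in wanted if name in all_stats}

learner = TelemetryLearner()
for name in ["if_in_octets", "if_in_octets", "crc_errors", "if_out_octets"]:
    learner.on_request(name)

stats = {"if_in_octets": 10**9, "if_out_octets": 9 * 10**8,
         "crc_errors": 2, "fan_rpm": 8000}
print(learner.salient_points(stats))  # only what consumers have asked about
```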

For a network to become “aware” and fully realized as neural in nature (presupposing, as my conclusion, the eventual coupling of machine state to machine state through a hyperaware network), it must be able to functionally process data on its own, either by simple heuristic learning (profiling, as noted above, is just one method) or through the contrived mechanisms of its NOS in a non-rigid manner (i.e., not L2 learning, etc.). Certainly the use of standardized protocols for initial communication is encouraged, since it can bring heterogeneous systems into communication without other proprietary lower-level protocols like HiGig, but beyond this initial negotiation, the hope and desire is that learning, forwarding, reporting, and engaging become autonomous and self-forming. As systems interact, then, decisions will be made based on what the datagram contains, the way the PFE is responding to traffic flows and utilization, and what the next connected device is doing. This capability is present, to some extent, today in systems that use a network management system (NMS) that can holistically see the network for what it is, but this external intelligence is, again, driven from the outside in and not organic to the devices themselves.
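
To make that concrete, here’s a toy composition of the three signals named above: datagram contents, local PFE utilization, and neighbor state. Everything in it is invented for illustration; the point is only that the decision is made organically on the device rather than pushed down from an external NMS.

```python
def choose_next_hop(datagram_class: str, pfe_utilization: float,
                    neighbor_load: dict[str, float]) -> str:
    """Toy autonomous decision: weigh what the datagram contains, how busy
    the local PFE is, and what the connected neighbors report."""
    candidates = dict(neighbor_load)
    # Latency-sensitive traffic avoids the busiest neighbor outright.
    if datagram_class == "latency_sensitive":
        busiest = max(candidates, key=candidates.get)
        candidates.pop(busiest)
    # Under local pressure, prefer the least-loaded remaining neighbor;
    # otherwise keep the first (stable) choice to avoid reordering flows.
    if pfe_utilization > 0.8:
        return min(candidates, key=candidates.get)
    return next(iter(candidates))

neighbors = {"spine-1": 0.9, "spine-2": 0.4, "spine-3": 0.6}
print(choose_next_hop("latency_sensitive", 0.85, neighbors))  # -> 'spine-2'
print(choose_next_hop("bulk", 0.3, neighbors))                # -> 'spine-1'
```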

Conclusion
I’ve laid out what I hope is the framework for an ongoing discussion of neural networks (without delving into AML/DML this go around) and their role within the actual network space.  I’m curious as to your thoughts (constructive, please).

More Stories By Dave Graham

Dave Graham is a Technical Consultant with EMC Corporation, where he focuses on designing and architecting private cloud solutions for commercial customers.
