
Cloud and Big Data Come Together to Give Climate Insights

OOI aims to provide an unprecedented ability to study the Earth's oceans

A fascinating global ocean studies initiative helps best define some of the IT superlatives around big data, cloud computing, and middleware integration capabilities.

The Ocean Observatories Initiative (OOI) and its accompanying Cyberinfrastructure Program aim to provide an unprecedented ability to study the Earth's oceans and climate using myriad distributed data centers and literally oceans' worth of data.



The scale and impact of the science's importance is closely followed by the magnitude of the computer science needed to make that data accessible and actionable by scientists. In a sense, the OOI and its infrastructure program, a major undertaking by the National Science Foundation, are constructing a big data-scale programmable and integratable cloud fabric for oceanography.

We’ve gathered three leaders to explain the OOI and how the Cyberinfrastructure Program may not only solve this set of data and compute problems, but perhaps establish a path to how future massive data and analysis problems are solved.

Here to share their story on OOI are:

 

  • Matthew Arrott, Project Manager at the OOI Cyberinfrastructure. Matthew's career spans more than 20 years in design leadership and engineering management for software and network systems. He’s held leadership positions at Currenex, DreamWorks SKG, Autodesk, and the National Center for Supercomputing Applications. His most recent work has been with the University of California as e-Science Program Manager while focusing on delivering the OOI Cyberinfrastructure capabilities.
  • Michael Meisinger, Managing Systems Architect for the Ocean Observatories Initiative Cyberinfrastructure. Since 2007, Michael has been employed by the University of California, San Diego. He leads a team of systems architects on the OOI Project. Prior to UC San Diego, Michael was a lead developer in an Internet startup, developing a platform for automated customer interactions and data analysis. Michael holds a master's degree in computer science from the Technical University of Munich and will soon complete a PhD in formal services-oriented computing and distributed systems architecture.

The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Meisinger: The Ocean Observatories Initiative is a large, US National Science Foundation project intended to build a platform for ocean sciences with an operational life span of 30 years.

It comprises a construction period of five years and will integrate a large number of resources and assets. These range from typical oceanographic assets, like instruments that are mounted on buoys deployed in the ocean, to networking infrastructure on the cyberinfrastructure side. It also includes a large number of sophisticated software systems.

I'm the managing architect for the cyberinfrastructure, so I'm primarily concerned with the interfaces to the oceanographic infrastructure, including data and networking interfaces, and above all with the design of the networked hardware and software system that comprises the cyberinfrastructure.

OOI’s goals include serving the science and education communities with their needs for receiving, analyzing, and manipulating ocean sciences and environmental data. This will have a large impact on the science community and on the public overall, because ocean sciences data is very important in understanding the changes and processes of the Earth, the environment, and the climate as a whole.

Ocean sciences, as a discipline, hasn't yet received as much infrastructure and central attention as other communities, so the OOI is a very important initiative for bringing this to the community. It has an almost $400 million construction budget, and an annual operations budget of $70 million for a planned lifetime of 25 to 30 years.

Gardner: What are the big hurdles here in terms of compute requirements? What makes this so challenging?

Arrott: It has a number of key aspects that we had to address. It's best to start at the top of the functional requirements, which is to provide interactive mission planning and control of the overall instrumentation on the 65 independent platforms that are deployed throughout the ocean.

The issue there is how to provide a standard command-and-control infrastructure over a core set of 800 instruments, about 50 different classes of instrumentation, as well as be able to deploy -- over the 30-year lifecycle -- new instrumentation brought to us by different scientific communities for experimentation.

The next is that the mission planning and control is meant to be interactive and respond to emergent changes. So we needed an event-response infrastructure that allowed us to operate on scales from microseconds to hours in being able to detect and respond to the changes. We needed an ability to move computing throughout the network to deal with the different latency requirements that were needed for the event-response analysis.

Finally, we have computational nodes all the way down in the ocean, as well as on the shore stations, that are accepting or acquiring the data coming off the network. And we're distributing that data in real time to anyone who wants to listen to the signals to develop their own sense-and-response mechanisms, whether they're in the cloud, in their local institutions, or on their laptop.
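
To make the event-response idea concrete, here is a minimal sketch in Python of a detect-and-respond loop, with hypothetical names throughout; it is not the OOI codebase, just an illustration of the pattern Arrott describes: a reading arrives, a condition is checked, and registered handlers fire so the instrumentation can be refocused.

    import time
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Reading:
        instrument_id: str
        value: float
        timestamp: float

    class EventResponder:
        """Hypothetical event-response loop: detect a condition, dispatch handlers."""
        def __init__(self, threshold: float):
            self.threshold = threshold
            self.handlers: List[Callable[[Reading], None]] = []

        def on_event(self, handler: Callable[[Reading], None]) -> None:
            self.handlers.append(handler)

        def observe(self, reading: Reading) -> None:
            # Detection: fire whenever the measured value crosses the threshold,
            # whether the reading arrived milliseconds or hours after the last one.
            if reading.value > self.threshold:
                for handler in self.handlers:
                    handler(reading)

    responder = EventResponder(threshold=30.0)
    responder.on_event(lambda r: print(f"refocus {r.instrument_id} at {r.timestamp}"))
    responder.observe(Reading("ctd-007", 31.4, time.time()))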

Domain of control

The fundamental challenge was the ability to create a domain of control over instrumentation that is deployed by its operators, and for processing and data distribution to be agile in their deployment anywhere in the global network.

Gardner: Why is this a good time to try to solve this from a software distribution and data distribution perspective?

Richardson: It's the scale that's changed the architecture and deployment patterns that people have been using for these applications.

We can see that the OOI project is essentially bringing together vast numbers of sensors and signals with a comparatively smaller number of scientists, research institutions, and scientific applications doing analytics, in a similar way to how Facebook combines what people say, what pictures they post, and what music they listen to with everybody’s friends, and then allows applications to be attached to that.

So it’s a huge technology challenge that would have been simply infeasible 12 years ago in the year 2000, when we thought things were big, but they were not. Now, when we talk about big data being masses of terabytes and petabytes that need to be analyzed all the time, then we’re starting to glimpse what's possible with the technology that’s been created in the last 10 years.

If we had been talking about this 12 years ago, in the year 2000, we would have been talking about companies like Google and Yahoo, which by today's standards we would consider to be of only moderate scale.

Since then, many companies have appeared. For example, Facebook, which has many hundreds of millions of users connecting throughout the world, shares vast amounts of data all the time.

In addition to that, many of these companies have brought out essentially a platform capability, whereby others, such as Zynga, in the case of Facebook, can create applications that run inside these networks -- social networks in the case of Facebook.

Arrott: The challenge goes beyond just the big data challenge. It also now introduces, as Alexis said, the concept of the instrument as an equal partner with the human in the participation in the network.

So you now have to think about what it means to have a device that’s acting like a human in the network, and the notion that the instrument is, in fact, owned by someone and must be governed by someone, which is not the case with the human, because the human governs themselves. So it represents the notion of an autonomous agent in the network, as well as that agent having a notion of control that has to stay on the network.
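
A rough sketch of that idea, an instrument agent that enforces an ownership and governance policy before executing commands, might look like the following Python; all names and the policy model are hypothetical, intended only to illustrate the notion of a governed, autonomous device on the network.

    class InstrumentAgent:
        """Hypothetical autonomous agent: every command is checked against an owner policy."""
        def __init__(self, instrument_id: str, owner: str, shared_commands: set):
            self.instrument_id = instrument_id
            self.owner = owner
            self.shared_commands = shared_commands  # commands the owner has opened to others

        def execute(self, requester: str, command: str) -> str:
            # Governance: the owner can do anything; others only what the owner allows.
            if requester != self.owner and command not in self.shared_commands:
                return f"denied: {requester} may not run '{command}' on {self.instrument_id}"
            return f"{self.instrument_id} executing '{command}' for {requester}"

    agent = InstrumentAgent("glider-12", owner="ops-team", shared_commands={"read_status"})
    print(agent.execute("ops-team", "dive"))          # owner: allowed
    print(agent.execute("guest-lab", "dive"))         # non-owner, restricted command: denied
    print(agent.execute("guest-lab", "read_status"))  # non-owner, shared command: allowed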

Gardner: I’d like to try to explain for our audience a bit more about what is going on here. We understand that we have a tremendous diversity of sensors gathering in real-time a tremendous scale of data. But we’re also talking about automating the gathering and distribution of that data to a variety of applications.

Numerical framework

We’re talking about having applications within this fabric, so that the output is not necessarily data, but is a computational numerical framework that’s then distributed. So there's a lot of data, a lot of logic, and a lot of scale. Can one of you help step me through it all a bit more to understand the architecture of what’s being conducted here?

Meisinger: The challenge, as you mentioned, is very heterogeneous. We deal with various classes of sensors, classes of data, classes of users, or even communities of users, and with classes of technological problems and solution spaces.

So the architecture is based on a tiered, or layered, model with the most invariant things at the bottom: things that shouldn't change over the lifetime of 30 years and that deserve the highest level of attention.

Then, we go into our more specialized layered architecture where we try to find optimal solutions using today’s technologies for high-speed messaging, big data, and so on. Then, we go into specialized solutions for specific groups of users and specific sensors that are there as last-mile technologies to integrate them into the system.

So you basically see an onion-layer model of the architecture, with the externalization on the outside. Then, as you go toward the core, you approach the invariants of the system.

This architecture is based on defining a common interaction format. It’s based on defining a common data format. Our architecture is strongly communication-oriented, service-oriented, message-oriented, and federated.

As Matthew mentioned, it’s important that the individual resources and agents provide their own policies, rather than having a central bottleneck or a central governing entity in the system that defines policies.
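
As a concrete, if simplified, illustration of what a common interaction format buys you, here is a short Python sketch of a message envelope in which every exchange carries the same fields; the field names and format are assumptions for illustration, not the OOI wire format.

    import json
    import time
    import uuid

    def make_message(sender: str, receiver: str, op: str, content: dict) -> str:
        """Hypothetical common interaction format: every message carries the same envelope."""
        envelope = {
            "id": str(uuid.uuid4()),
            "ts": time.time(),
            "sender": sender,      # the federated party that owns the sending resource
            "receiver": receiver,  # the resource or service being addressed
            "op": op,              # the requested operation
            "content": content,    # payload expressed in the common data format
        }
        return json.dumps(envelope)

    msg = make_message("agent.glider-12", "service.ingest", "publish_sample",
                       {"salinity": 35.1, "depth_m": 120})
    print(msg)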

Strongly federated


Arrott: Think of it as four core layers. There is the underlying network resource management layer. We talk about agents; they supply that capability to any process in the system, and we treat devices as processes.

The next layer up is the data layer, and the data layer consists of two core parts. One is the distribution system that allows data to be moved in real time from the source to the interested parties. It's fundamentally a publish-subscribe (pub-sub) model. We're currently using point-to-point as well as topic-based subscriptions, but we're quickly moving toward content-based routing, which is based on a selector provided by the consumer to direct traffic toward them.
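
RabbitMQ and AMQP come up later in the discussion; the sketch below uses the pika Python client to show what topic-based subscription looks like, plus an AMQP headers exchange as a rough stand-in for consumer-supplied selectors. The broker address, exchange names, and routing keys are assumptions for illustration, not the OOI configuration.

    import pika  # assumed AMQP client for RabbitMQ; names below are illustrative

    conn = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    ch = conn.channel()

    # Topic-based subscription: consumers bind a queue with a routing-key pattern.
    ch.exchange_declare(exchange="ooi.data", exchange_type="topic")
    q = ch.queue_declare(queue="", exclusive=True).method.queue
    ch.queue_bind(exchange="ooi.data", queue=q, routing_key="ctd.*.salinity")
    ch.basic_publish(exchange="ooi.data", routing_key="ctd.buoy42.salinity", body=b"35.1")

    # Selector-style routing: a headers exchange matches message attributes against
    # the attributes the consumer asked for, a rough analogue of content-based routing.
    ch.exchange_declare(exchange="ooi.selectors", exchange_type="headers")
    q2 = ch.queue_declare(queue="", exclusive=True).method.queue
    ch.queue_bind(exchange="ooi.selectors", queue=q2,
                  arguments={"x-match": "all", "instrument_class": "CTD", "region": "coastal"})
    ch.basic_publish(exchange="ooi.selectors", routing_key="", body=b"35.1",
                     properties=pika.BasicProperties(
                         headers={"instrument_class": "CTD", "region": "coastal"}))

    print(ch.basic_get(queue=q, auto_ack=True)[2])   # consume the topic-routed sample
    print(ch.basic_get(queue=q2, auto_ack=True)[2])  # consume the selector-routed sample
    conn.close()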

The other part of the data layer is the traditional harvesting or retrieval of data from historical repositories.

The next layer up is the analytic layer. It looks a lot like the device layer, but is focused on the management of processes that are using the big data and responding to new arrival of data in the network or change in data in the network. Finally, there is the fourth layer, which is the mission planning and control layer, which we’ll talk about later.

Gardner: Alexis, when you saw the problem that needed to be solved here, you had a lot of experience with the Advanced Message Queuing Protocol (AMQP). Why did this problem seem to be the right fit for that particular technology, RabbitMQ, and a messaging infrastructure in general?

Richardson: What Matthew and Michael have described can be broken down into three fundamental pieces of technology.

Lot of chatter

Number one, you have a lot of chatter coming from these devices -- machines, people, and other kinds of processes -- and that needs to get to the right place. It's being chattered or twittered away and possibly at high rates and high frequencies. It needs to get to just the set of receivers following that stream, very similar to how we understand distribution to our computers. So you need what’s called pub-sub, which is a fundamental technology.

In addition, that data needs to be stored somewhere. People need to go back and audit it, to pull it out of the archive and replay it, or view it again. So you need some form of storage and reliability built into your messaging network.

Finally, you need the ability to attach applications that will be written by autonomous groups, scientists, and other people who don’t necessarily talk to one another, may choose different programming languages, and may be deploying their applications, as Matthew said, on their own servers or on multiple different clouds that they choose through what you would like to be a common platform. So you need this to be done in a standard way.

AMQP is unique in bringing together pub-sub with reliable messaging and with standards, so that this can happen. That is precisely why AMQP is important. It's like HTTP and email's SMTP, but it's aimed at messaging: publish-subscribe and reliable message delivery in a standard way. And RabbitMQ is one of the first implementations, and that's how we ended up working with the OOI team -- because RabbitMQ provides these and does it well.
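
To show the storage-and-reliability half of that in practice, here is a small sketch, again using the pika client against RabbitMQ, of a durable queue with persistent messages and explicit acknowledgments, so data survives a broker restart and can be replayed later. The queue name, payload, and broker address are assumptions for illustration.

    import pika  # assumed AMQP client; broker address and names are illustrative

    conn = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    ch = conn.channel()

    # Reliability: a durable queue survives broker restarts...
    ch.queue_declare(queue="ooi.archive", durable=True)
    ch.confirm_delivery()  # publisher confirms: the broker acknowledges receipt

    # ...and persistent messages (delivery_mode=2) are written to disk.
    ch.basic_publish(
        exchange="",
        routing_key="ooi.archive",
        body=b'{"instrument": "ctd-007", "salinity": 35.1}',
        properties=pika.BasicProperties(delivery_mode=2),
    )

    # A later consumer can replay from the queue; manual acks keep a message
    # on the broker until it has actually been processed.
    method, props, body = ch.basic_get(queue="ooi.archive", auto_ack=False)
    if method:
        print("replayed:", body)
        ch.basic_ack(method.delivery_tag)
    conn.close()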

Gardner: I’d also like to go back to the project itself, and give our listeners a sense of what this can accomplish. I’ve heard it described as "the Hubble Telescope of oceans."

Let’s go back to the oceanography and the climate science. What can we accomplish with this, when this data is delivered in the fashion we’ve been discussing, where the programmability is there, where certain scientists can interact with these sensors and data, ask it to do things, and then get that information back in a format that’s not raw, but is in fact actionable intelligence?

Matthew, what could possibly happen in terms of the change in our understanding of the oceans from this type of undertaking?

Meisinger: The primary mission of our project is to provide this platform, the space telescope in the ocean. And it's not a single telescope. In our case, it's a set of 65 buoys and locations in the ocean, and even a cable that runs 1,000 miles along the seafloor off the Pacific Northwest and provides 10 Gigabit Ethernet connectivity and high power to the instruments.

It’s a model where scientists have to compete. They have to compete for a slot on that infrastructure. They'll have to apply for grants and they'll have to reserve the spot, so that they can accomplish the best scientific discoveries out of that system.

It’s kind of the analogy of the space telescope that will bring ocean scientists to the next level. This is our large platform, our large infrastructure, that lets the best scientists develop their research to the best results. That’s the fascination that I see as part of this project.

Arrott: The way to think about this can be summed up as continual presence in the oceans at multiple scales through multiple perspectives.

The scope of the OOI is such that it is considered to be observing the ocean at multiple scales -- coastal, regional, and global. It is an expandable model. One of the largest classes of applications that we’ll attach to the network is modeling, in particular nowcast and forecast modeling.

Happening at scale

Once you have that ability to actually model the oceans and predict where it’s going, you can use that to refocus the instrumentation on emergent events. It's this ability to have long-term presence in the ocean, and the ability to refocus the instrumentation on emergent events, that really represents the revolutionary change in the formation of this infrastructure.

Gardner: Is this in some ways taking the weather of the oceans?

Arrott: There's a movement to instrument the Earth, so that we can understand from observation, as opposed to speculation, what the Earth is actually doing, and from a notion of climate and climate change, what we might be doing to the Earth as participants on it.

The weather community, because of the demand for commercial need for that weather data, has been well in advance of the other environmental sciences in this regard. What you'll find is that OOI is just one of several ongoing initiatives to do exactly what weather has done.

Science more mature


Gardner: How is it that cloud computing is being brought to bear, making this productive, and perhaps even ahead of where the whole weather and predicting weather has been?

Richardson: Happily, that’s an easy one. Imagine if a person or scientist wanted to process very quickly a large amount of data that’s come from the oceans to build a picture of the climate, the ocean, or anything to do with the coastal properties of the North American coast. They might need to borrow 10,000 or 20,000 machines for an hour, and they might need to have a vast amount of data readily accessible to those machines.

In the cloud, you can do that, and with big data technologies today, that is a realistic proposition. It was not five to 10 years ago. It’s that simple.

Obviously, you need to have the technologies, like the messaging that we talked about, to get that data to those machines so it can be processed. But the cloud is really there to bring it all together and to make it seem to the application owner like something that’s just ready for them to acquire, and when they don’t need it anymore, they can put it back and someone else can use it.
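
The borrow-and-return pattern Richardson describes maps directly onto infrastructure-as-a-service APIs. Below is a minimal sketch using the boto3 client against Amazon EC2 purely as an example of acquiring and releasing capacity; the OOI is not tied to any single provider, and the image ID, instance type, and counts are placeholders.

    import boto3  # illustrative provider SDK; the project is not tied to a specific cloud

    ec2 = boto3.client("ec2", region_name="us-west-2")

    # "Borrow" a fleet of workers for the duration of the analysis...
    fleet = ec2.run_instances(
        ImageId="ami-00000000000000000",  # placeholder image with the analysis stack baked in
        InstanceType="c5.4xlarge",
        MinCount=200,
        MaxCount=200,
    )
    instance_ids = [i["InstanceId"] for i in fleet["Instances"]]

    # ...run the job against data staged near the compute (job submission omitted here)...

    # ...then "put it back" so someone else can use the capacity.
    ec2.terminate_instances(InstanceIds=instance_ids)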

Gardner: How are cloud models enabling this at an unprecedented scale, but also at an efficient cost?

Meisinger: It does enable computing at unprecedented scale. A lot of the earth's environment is changing. Assume that you’re interested in tracking the effect of a hurricane somewhere in the ocean and you’re interested in computing a very complex numerical model that provides certain predictions about currents and other variables of the ocean. You want to do that when the hurricane occurs and you want to do it quickly. Part of the strategy is to enable quick computation on demand.

The OOI architecture, in particular, its common execution infrastructure subsystem, is built in order to enable this access to computation and big data very quickly. You want to be able to make use of execution provider’s infrastructure as a service very quickly to run your own models with the infrastructure that the OOI provides.

Then, there are other users that want to do things more regularly, and they might have their own hardware. They might run their own clusters, but in order to be interoperable, and in order to have excess overflow capabilities, it’s very important to have cloud infrastructure as a means of making the system more homogeneous.

So the cloud is a way of abstracting compute resources of the various participants of the system, be they commercial or academic cloud computing providers or institutions that provide their own clusters as cloud systems, and they all form a large compute network, a compute fabric, so that they can run the computation in a predictable way, but also then in a very episodic way.
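
A toy version of that decision, run locally when there is room and overflow to a cloud provider when there is not, might look like this; the thresholds and costs are invented for illustration and are not the OOI execution engine's actual policy.

    def place_job(cores_needed: int, local_free_cores: int, cloud_budget_usd: float,
                  usd_per_core_hour: float = 0.05) -> str:
        """Toy placement policy: prefer the local cluster, overflow to cloud if budget allows."""
        if cores_needed <= local_free_cores:
            return "local-cluster"
        overflow = cores_needed - local_free_cores
        if overflow * usd_per_core_hour <= cloud_budget_usd:
            return f"local-cluster + cloud overflow ({overflow} cores)"
        return "queued until capacity frees up"

    # Episodic demand, such as a model run triggered by a hurricane, overflows to the cloud.
    print(place_job(cores_needed=5000, local_free_cores=1200, cloud_budget_usd=500.0))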

Cloud as enabler


I really see that the cloud paradigm is one of the enablers of doing this very efficiently, and it enables us as a software infrastructure project to develop the systems, the architecture, to actually manage this computation from a system’s point of view in a central way.

Gardner: Alexis, because of AMQP and the VMware cloud application platform, it seems to me that you’ve been able to shop around for cloud resources, using the marketplace, because you’ve allowed for interoperability among and between platforms, applications, tools, and frameworks.

Is it the case that leveraging AMQP has given you the opportunity to go to where the compute resources are available at the lowest cost when that’s in your best interest?

Richardson: The dividend of interoperability for the end-user and the end-customer in this platform environment is ultimately portability -- portability through being able to choose where your application will run.

Michael described it very well. A hurricane is coming. Do you want to use the machines provided by the cloud provider here for this price? Do you want to use your own servers? Maybe your neighboring data center has servers available to you, provided those are visible and provided there is this fundamental interoperability through cloud platforms of the type that we are investing in. Then, you will be able to have that choice. And that lets you make these decisions in a way that you could not do before.

Gardner: It’s been mentioned by Alexis and others that this has some common features to Twitter or Facebook.

We think of the social environment because of the scale, complexity, and the use of cloud models. But we’re doing far more advanced computational activities here. This is simply not a display of 140 characters, based on a very rudimentary search, for example. These are at the high performance computing (HPC) level, supercomputer-level types of requests and analysis.

So are we combining the best of a social fabric approach and the architecture behind that to what we’ve been traditionally exposed to in high-performance computing and supercomputing, and what does that mean for the future?

Meisinger: This is the direction in which the future will evolve, and it’s the combination of proven patterns of interaction that are emerging out of how humans interact applied to high-performance computing. Providing a strong platform or a strong technological footprint that’s not specific to any technology is a great benefit to the community out there.

Providing a reference architecture and a reference implementation that can solve these problems, that social network for sensor networks and for device computation, will be a pattern that can be leveraged by other interested participants, either by participating in the system directly or indirectly, by taking that pattern and the technologies that come with it and bringing them to the next level in the future. Developing it as one large project, in one coherent set, really yields a technology stack and an architecture that will carry us far into the future.

Arrott: The incremental change that we're introducing takes the concepts of Facebook and Twitter and the notion of Dropbox, which is the ability to move a file to a shared place so someone else can pick it up later, something that was not really possible long ago. I would have had to set up an FTP server or an HTTP server to accomplish that.

Sharing processes

What we are now adding to the mix is not sharing just artifacts, but we're actually sharing processes with one another, and then specifically sharing instrumentation. I can say to you, "Here, have a look through my telescope." You can move it around and focus it.

Basically, we introduced the concept of artifacts, or information resources, as well as the concept of a taskable resource, and the new thing that can now be shared is the taskable resource.
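
A small sketch of a taskable resource with a time-limited sharing lease, hypothetical names and mechanics throughout, illustrates the "have a look through my telescope" idea in code:

    import time

    class TaskableResource:
        """Hypothetical taskable resource: the owner can grant time-limited tasking rights."""
        def __init__(self, name: str, owner: str):
            self.name = name
            self.owner = owner
            self.leases = {}  # user -> lease expiry, as an epoch timestamp

        def share_with(self, user: str, seconds: int) -> None:
            self.leases[user] = time.time() + seconds

        def task(self, user: str, command: str) -> str:
            if user == self.owner or self.leases.get(user, 0) > time.time():
                return f"{self.name}: running '{command}' for {user}"
            return f"{self.name}: {user} has no active lease"

    camera = TaskableResource("seafloor-camera-3", owner="observatory-ops")
    camera.share_with("visiting-scientist", seconds=3600)  # "have a look through my telescope"
    print(camera.task("visiting-scientist", "pan_left"))
    print(camera.task("someone-else", "pan_left"))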

Meisinger: This pattern is very applicable, and it’s not often that a research and construction project of this size has the ability to provide an end-to-end technology solution to this challenge of big data combined with real-time analysis and real-time command and control of the infrastructure.

What I see that evolving into is, first of all, that you can take the solutions built in this project and apply them to other communities that are in need of such a solution. But then it could go further. Why not combine these communities into a larger system? Why not federate or connect all these communities into a larger infrastructure that is all based on common ideas and common standards, and that still enables open participation?

It’s a platform where you can plug in your own system or subsystem that you can then make available to whoever is connected to that platform, whoever you trust. So it can evolve into a large ecosystem, and that does not have to happen under the umbrella of one organization such as OOI.

Larger ecosystem

It can happen in a larger ecosystem of connected computing based on your own policies, your own technologies, and your own standards, but where everyone shares a common piece of the same idea and can take whatever they want and not consume what they’re not interested in.

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.



