Scalability of J2EE Applications: Effective Caching's Key

Sooner or later, all architects and developers of large-scale J2EE products face the same problem: their software's response times grow and grow, and scalability hits a wall. This article investigates caching solutions that promise to help, sheds some light on their limitations, and describes an easy, lightweight, and effective caching mechanism that solves most of the issues.

Note: This article does not assess all possible ways of caching nor does it take solutions such as commercial external caching products into account.

The Problem
Whenever we build large-scale distributed software - whether it's J2EE or not - we face the same challenge: keeping response and transaction times low while user load increases. The main problem is that essentially all software systems reach a point where response times grow exponentially with load (see Figure 1). Architecting and implementing a solution that keeps scalability linear and leaves enough room for increasing load as the business grows is a difficult task that requires experience.

A good architect keeps traffic, transaction times and volume, persistence layer design, and caching in mind when he or she drafts the first layout of a new architecture. Understanding concurrent access by n users on m data items is one of the major things an architect looks for.

Possible Solutions
Minimizing traffic in all tiers is the primary objective when creating a scalable solution. Figure 2 shows a typical three-tier system.

While the persistence tier in modern databases already provides significant caching capabilities, it's rarely enough for large-scale systems. What do other mechanisms do to increase performance and scalability, and to what tier/layer do they apply?

Stored Procedures?
I mention this because aside from caching, one suggestion I always hear is using stored procedures. I'd like to encourage everyone to consider different options. Using stored procedures splits the persistence layer over two physical tiers and usually improves only single user performance.

If you look at your application server's console, you might see, for example, that of the 500ms a servlet or JSP request takes, only 100ms are spent on the DB transaction side. Squeezing another 30ms out by using stored procedures rarely makes your system scale - you still need DB connections, cursors, and other resources.

Persistence Layer Caching
The easiest way to cache in J2EE systems is with entity beans (for the moment, let's restrict "entity beans" to CMP). I can hear readers moan, but the fact remains: they are the only "good" way of caching built into J2EE. Why? Because the maximum cache size is controllable by setting the maximum number of beans, and because the resource is under the container's control: beans can be passivated if memory runs short. Usually, they are also the only resource that is clusterable.

Why would most developers and architects say entity beans are bad for your performance? Because they are. In a single request use case, they have significant overhead compared to direct JDBC. But even in scalability assessments, entity beans often come out last, because their usage as a cache is determined by the possible cache hit rate, just like any other cache. The cache hit rate is determined by the number of reads versus the number of data items versus the number of writes.

Ultimately, if you use entity beans you really need to know what you're doing. While that might be true for any out-of-the-box mechanisms a container provides, it's especially true for entity beans. It's easy to get it wrong and a lot of containers have less than mediocre support for entity beans.

Entity beans make sense if:

  • Your reads and writes are few: then scalability is not your concern anyway, and CMP EJBs are just as convenient as anything else.
  • Your reads are many, while your writes and number of data items are few: this means a maximum cache hit rate. You have just a few items to cache (most containers only perform well with a few thousand entity bean instances per CPU), and the cache rarely becomes stale because you hardly ever write.
In all other cases, entity beans just make things worse due to their management overhead. Figure 3 shows that cache efficiencies (like entity beans) depend on the number of reads versus the number of writes versus the number of rows (which is an oversimplified perception and not real math). Caching with entity beans works well within the green area.

One important fact needs to be considered as well: some application servers (WLS 6, WebSphere) do not support entity bean clustering/caching in clustered infrastructures. In other words, they often support only the caching of read-only entity beans if you run a cluster, which rules out straight CMP completely as a way to increase scalability.

Let's have a quick look at BMP (mainly read-only or read-mostly BMP). This type of entity bean can solve the problem of too many entity bean instances by letting you change the caching granularity: while CMP caches on a per-data-row basis, read-only BMPs can cache at essentially any desired granularity and are basically similar to the caching mechanism I'll discuss later. However, they still have a few disadvantages, such as the entity bean management overhead or (depending on your container implementation) the fact that they usually are - like all entity beans - single threaded: only one request at a time can access the cache.

In all other cases (mixed reads/writes, lots of data, few reads and many writes, etc.), how do we make our software scalable?

Web Tier Caching Using HTTP Session
If persistence layer caching through entity beans is ruled out, we have two tiers left where we could cache.

The most obvious choice developers often make is HTTP session caching. Since it caches at the uppermost tier, it should be most effective at minimizing traffic, right? However, using the HTTP session as a cache makes architects of large-scale systems shudder.

First, it caches on a per-session basis: it helps if one user performs the same or similar action 5,000 times but not if 5,000 users perform one action.

Second, the cache invalidation and GC is based on the session time-out - usually something like 60 minutes. Even if a user works for 10 minutes in your system, the data is cached for 60 minutes, which makes the cache size six times as big as it needs to be, unless you invalidate your session manually.

Finally, it removes one important task from the container: resource management. Since this cache cannot be cleared by the container, it causes problems: the container cannot GC these objects even if memory resources run short. The GC cycles become more frequent, and the GC has to walk over a large set of mainly stale objects in your session, making the cycles longer than they need to be.

Singletons and Application Context
The last place to cache is in the business layer (the following mechanism could be used in the Web tier as well). Since the HTTP session is not very effective at caching in high-traffic systems, the next best choice is using singletons to cache objects or data from the database.

Singletons (just like the application context) have the advantage that they again cache for all requests, but still are not a container-managed resource. Frequently singleton caches are implemented as a plain Hashtable and are unlimited in size, which causes almost the same problems as HTTP session caching.

I'd like to recall a simple but effective caching strategy that is singleton-based and uses a container-like mechanism of resource management to keep resource usage to a minimum.

LRU Caching
The strategy used is called LRU (least recently used): the cache size is limited to a fixed number of items, and when the cache is full, the least recently used item is evicted (hence the name). Just like a container pool size for EJBs, this keeps resource utilization controlled while only frequently used objects stay cached.

How does this work? Essentially it's a stack: if an object is requested from the stack and it's not there (cache miss), it's inserted at the very top. If your cache size is 1,000 items and the cache is full, the last item will fall off the stack and effectively be removed from the cache (see Figure 4).

In case an object is on the cache, it will be removed and reinserted at the top (see Figure 5).

This way, the most frequently used items remain at the top, and the least used items eventually drop off the stack. You can easily keep track of your hits and misses, and either query this information to reconfigure your cache or grow and shrink the maximum size dynamically. This way, you minimize resource usage and maximize cache effectiveness. The stack implementation depends on your needs: choose an unsynchronized implementation if you need to allow concurrent reads and minimize overhead.
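As a sketch of this idea, java.util.LinkedHashMap can be run in access order and asked to evict its eldest entry, which yields a minimal LRU cache in a few lines. The class name and sizes below are illustrative, not from the article:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: a LinkedHashMap in access order evicts the
// least recently used entry once maxSize is exceeded.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    public LruCache(int maxSize) {
        super(16, 0.75f, true); // accessOrder = true -> LRU ordering
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize; // drop the item at the bottom of the "stack"
    }

    public static void main(String[] args) {
        LruCache<String, String> cache = new LruCache<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");        // touch "a": it moves to the top
        cache.put("c", "3");   // evicts "b", the least recently used
        System.out.println(cache.keySet()); // prints [a, c]
    }
}
```

In a singleton cache accessed by many request threads, you would additionally need to guard this map (for instance with Collections.synchronizedMap or an explicit lock), since LinkedHashMap mutates its internal order even on reads.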

Cache Invalidation
This cache works best in read-only or read-mostly scenarios. Unless you implement write-back or other write cache synchronization schemes or don't care that the cache is out of sync with the data source, you'll have to invalidate the cache, which decreases the cache hit rate and efficiency. For example, you can implement write-through caches fairly easily using dynamic proxy classes (JDK 1.3 introduced support for dynamic proxies) but that is a topic for another article.
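To make the dynamic-proxy idea concrete, the sketch below wraps a data-access interface behind java.lang.reflect.Proxy and keeps a map in sync on every write, a simple write-through. The ItemDao interface and InMemoryDao class are invented for this example; a real system would delegate to JDBC or an EJB instead:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

// Hypothetical data-access interface; stands in for a real persistence layer.
interface ItemDao {
    String load(String key);
    void save(String key, String value);
}

// Hypothetical backing store; a real implementation would hit the database.
class InMemoryDao implements ItemDao {
    private final Map<String, String> store = new HashMap<>();
    public String load(String key) { return store.get(key); }
    public void save(String key, String value) { store.put(key, value); }
}

// Write-through caching proxy: reads are served from the cache when possible,
// writes update the cache and then pass through to the underlying DAO.
public class WriteThroughProxy implements InvocationHandler {
    private final ItemDao target;
    private final Map<String, String> cache = new HashMap<>();

    private WriteThroughProxy(ItemDao target) { this.target = target; }

    public static ItemDao wrap(ItemDao target) {
        return (ItemDao) Proxy.newProxyInstance(
                ItemDao.class.getClassLoader(),
                new Class<?>[] { ItemDao.class },
                new WriteThroughProxy(target));
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Exception {
        if ("load".equals(method.getName())) {
            String key = (String) args[0];
            String cached = cache.get(key);
            if (cached != null) return cached;          // cache hit
            String value = (String) method.invoke(target, args);
            if (value != null) cache.put(key, value);   // populate on miss
            return value;
        }
        if ("save".equals(method.getName())) {
            cache.put((String) args[0], (String) args[1]); // update cache...
            return method.invoke(target, args);            // ...then write through
        }
        return method.invoke(target, args); // everything else passes through
    }

    public static void main(String[] args) {
        ItemDao dao = wrap(new InMemoryDao());
        dao.save("42", "answer");
        System.out.println(dao.load("42")); // prints answer (served from the cache)
    }
}
```

The appeal of the proxy approach is that callers keep coding against the plain ItemDao interface; the caching concern stays in one place and can be removed without touching client code.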

Singleton-based LRU caching still has the typical problem of all singleton-based caches: a singleton is not a singleton in distributed systems (J2EE for that matter) but unique per classloader or server context (if you're lucky), and it's not kept in sync in clustered environments. There are, of course, mechanisms to implement the synchronization of distributed resources; some of them are difficult to implement or have scalability or availability issues; some work just fine. Distributed caching is not easy and if your requirements force you to go down this path, you might be well served choosing a commercial caching product.

The fact that you have several unsynchronized cache copies in clustered environments can be a big problem. The easy solution is using timed caches (just like read-only entity beans), which means that if a cached object is a certain age, it's considered stale and will be dropped from the cache. This is sufficient in most cases, but let's look at the following scenario.

Let's assume our invalidation time is 30 minutes (an object older than 30 minutes is considered stale). Cache A caches an object at 11:15, Cache B at 11:35. If the data item the cache is referring to is refreshed in the database at 11:40, Cache A will have the correct value at 11:45 when it expires but Cache B won't have it until 12:05 (see Figure 6). The problem now is that for 20 minutes you get different results - depending on which server you hit and on the use case this can be a big problem.

The solution for these cases is a timed cache that is refreshed at fixed points in time every n minutes, like at 12:00, 12:30, 1:00, etc. The advantage is that now all your caches are somewhat in sync (as in sync as the clocks on your servers are). The disadvantage is that the load on your servers increases quite a bit every time the caches are cleared, because they're cleared completely.
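One way to sketch such fixed refresh points is to align the flush time to multiples of the interval since the epoch, so every node computes the same boundaries (modulo clock skew). The class below is an illustrative sketch under that assumption, not production code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Timed cache flushed at fixed wall-clock boundaries: with a 30-minute
// interval every node clears at :00 and :30, keeping the copies roughly
// in sync (as in sync as the server clocks are).
public class AlignedRefreshCache<K, V> {
    private final Map<K, V> map = new ConcurrentHashMap<>();
    private final long intervalMillis;
    private volatile long nextFlush;

    public AlignedRefreshCache(long intervalMillis) {
        this.intervalMillis = intervalMillis;
        this.nextFlush = nextBoundary(System.currentTimeMillis());
    }

    // Next multiple of the interval since the epoch:
    // 11:42 with a 30-minute interval maps to 12:00 on every node.
    private long nextBoundary(long now) {
        return ((now / intervalMillis) + 1) * intervalMillis;
    }

    private void flushIfDue() {
        long now = System.currentTimeMillis();
        if (now >= nextFlush) {
            synchronized (this) {
                if (now >= nextFlush) {
                    map.clear(); // everything is dropped at the boundary
                    nextFlush = nextBoundary(now);
                }
            }
        }
    }

    public void put(K key, V value) { flushIfDue(); map.put(key, value); }
    public V get(K key)            { flushIfDue(); return map.get(key); }
}
```

Note that clearing everything at the boundary is exactly what produces the load spike described above; staggering the flushes per cache region is one way to soften it at the cost of looser synchronization.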

Which way you go depends on your business requirements; adjusting your refresh cycles largely depends on your data update frequency versus the cache hit rate you would like to achieve.

Of course, there are a variety of other ways to keep distributed copies of caches in sync, but these are not easy to implement and have a variety of side effects to consider.

Open Source and Commercial Caching Implementations
If your caching needs are more complex, or if you just don't want to "roll your own," you might want to give JSR 107 a look. This is the JCache JSR that specifies a distributed, resource-managed, Java-based cache. Even though little progress has been made to provide a production-ready implementation, there are several open source projects and products that are close to a JCache implementation and might provide what you need.

Commercial caching products should be considered if your caching requirements are complex (clustered environments, etc.). As mentioned earlier, distributed caching is not as easy as it seems, and relying on an enterprise-class product often saves time and trouble.

Building a scalable solution often depends on making the right decisions in persistence mechanism and in caching. How, when, and where to cache is the trick; I hope this article helped you make the right decision.

References

  • JSR 107: www.jcp.org/en/jsr/detail?id=107
  • JCS and JCache at Apache: http://jakarta.apache.org/turbine/jcs/JCSandJCACHE.html
About the Author

Stefan Piesche is a Principal Architect for the Cobalt Group (headquartered in Seattle), responsible for large-scale, distributed systems based on J2EE. In past years he worked on several large-scale systems in Europe in the financial and airline industries.


    Most Recent Comments
    Cameron 05/19/04 03:22:24 PM EDT

    Hi Jamal, cluster-wide means that any cluster node can request a lock, optionally queue for it (if it's taken), and thus have the ability to safely work with data in-memory in the cluster, just like having "synchronized" as a keyword allows you to write thread-safe Java code. So just as "synchronized" is a necessary ingredient of scaling to multiple threads, the ability to manage locks and coherent data in a cluster is a necessary ingredient of horizontal scaling in a cluster.

    Jamal 05/05/04 04:53:20 PM EDT

    Perhaps I am misinterpreting the meaning of the "cluster-wide" lock. It would seem that if you were locking a cache at the record level for a read/write operation, as the case may be, the overall performance would degrade as the number of nodes increases, because each node would effectively be locked until another node finished its work. This in turn creates a virtual cluster-wide transaction (- ick -). A transaction of this nature can only decrease in performance as more nodes are added to the cluster.

    Cluster wide locks sound like a really, really bad idea for large-scale systems where scalability is key.

    Rob Misek 05/04/04 09:48:58 AM EDT

    Hi Jamal,

    The in-cluster operations generally start off more performant because they occur in-memory (and in Java) in the cluster in which the application is running, and do not have to go through the inefficiencies of translating through the JDBC API. The icing on the cake is that cluster performance does not degrade as more and more machines enter the cluster, while database performance decreases in the same situation. The operation stays at a relatively constant speed, regardless of whether you have 2 servers or 20 servers in the cluster. Yet with 20 servers you can manage 10x the throughput (compared to 2 servers).

    Jamal 05/03/04 09:19:11 PM EDT

    Rob, you indicate that Coherence provides cluster-wide locking as the concurrency mechanism. Can you please explain how a cluster-wide lock "increases" performance?

    Rob Misek 04/28/04 04:30:37 PM EDT

    "For the data that are modified frequently and concurrently, they are best not to be cached."

    Actually, that is the area where you will see the largest performance improvement by caching. Coherence is used in this exact scenario since it provides cluster-wide locking as the concurrency mechanism.

    That coupled with the use of our "read-through/write-behind" technology when using an underlying datastore increases performance further. The huge advantage of a read-through/write-behind cache in a cluster is that the database is almost completely unloaded of repetitive read operations, and more importantly the database is unloaded from repetitive writes, which are usually much more expensive than read operations. Combined with the HA features of Coherence, even the write-behind data is managed safely during failover, allowing an application server to unexpectedly fail without any loss of uncommitted (write-behind) data.

    Stanley 04/28/04 12:41:54 AM EDT

    The more important thing is to determine what to cache and how much to cache. All applications use some kind of reference data that is used application-wide and changes very infrequently. This can be cached at a global level, in the web/business tier, using either a singleton or the application context, and refreshed overnight and/or on demand. We have successfully implemented this type of cache in a number of projects, with a custom notification mechanism to synchronize the caches on clustered servers.

    The Session object also has a place for caching certain types of data, such as data applicable mainly to the user session, hardly shared with other users, and infrequently modified (rarely modified by other users while the user session is active). However, we should not store too many objects in the Session, or the application will not scale.

    For the data that are modified frequently and concurrently, they are best not to be cached.

    I agree with Stefan that Entity EJB is not a good place to implement caches.

    Jim Carroll 04/21/04 12:00:15 PM EDT

    Your article is interesting but if scalability is your main aim (as the first paragraphs imply) then a thorough discussion of how to design out server maintained state information can be more important than sophisticated caching. Rather than recommending the use of Entity beans, designing a system that eliminates (or minimizes) their need (since they are, by definition, server managed state) will go much further toward a truly "linearly scalable" distributed system.

    Regards,
    Jim

    Cameron 04/15/04 02:13:39 PM EDT

    Hi John, those limitations do not apply to system libraries, such as JDBC drivers, which *must* open sockets in order to communicate with a database running in a different process or on a different machine.

    The rules for EJBs, particularly the limitations, are there to restrict the component developer (someone writing an EJB that may be deployed to any application server) so that they are assured of the highest level of portability. The spec now explains a lot of the limitations. For example, if you access the file system, and the application server is clustered, how can you be sure that the file will be there when you fail over (since it might be a local file system, or the machine that is failed over to could be running on a completely different site!)

    Similarly, managing threads and sockets may be disabled for the EJB components from a security perspective, to allow for controlled environments for third party hosting of applications.

    IBM is working to standardize APIs so that the container can expose things like thread pooling to the component developer. In the mean time, those things should be considered off limits to an EJB developer if you want to make your EJBs as portable as possible. However, system libraries are still able to (and often must) use those APIs to implement features like database connectivity.

    Peace.

    John Segrave 04/15/04 01:21:27 PM EDT

    It strikes me that there's little difference between an EJB Bean class calling a prohibited API and the Bean class calling an intermediate class which calls the prohibited API.

    Does EJB make a provision for when there is no alternative but to call these APIs? e.g. You can call them, as long as the call is routed through an XXX? (e.g. a resource adapter? I don't really know much about resource adapters, apologies if this is a meaningless suggestion). How is a JDBC driver supposed to access a remote DB without making socket calls?

    Cheers,

    J

    Cameron 04/13/04 11:38:22 AM EDT

    A system library, such as Coherence or a JDBC driver or a JMS implementation, is allowed to manage socket communication, while an EJB component is not. Optionally, you can use Coherence as a transactional resource through the J2CA (Java 2 Enterprise Edition Connector Architecture) API.

    As far as compatibility, Coherence has hundreds of production deployments on numerous application server platforms (including WebSphere 3.5.4 - 5.1 and WebLogic 4.5.1 - 8.1) on JDK 1.2 - 1.4.

    Peace.

    John Segrave 04/13/04 02:53:36 AM EDT

    Hi Cameron, thanks for the replies.

    To my reading, the EJB 2.0 spec (section 24.1.2 to be precise!) appears to prohibit the use of multicast sockets: "An enterprise bean must not attempt to listen on a socket, accept connections on a socket, or use a socket for multicast."

    I'm genuinely not trying to be argumentative here, just trying to work out whether or not Coherence does or doesn't contravene the spec, and if it does, was it done intentionally (in which case, what was the reason)?

    Normally, I'd expect an intentional contravention of a spec to be accompanied by an explanation (so customers can make up their own minds whether or not they think it may cause them problems). I'd like to give Coherence (and several other caching products) the benefit of the doubt, but at present that would be based on faith rather than evidence!

    Cameron 04/08/04 10:12:04 AM EDT

    Tangosol Coherence is built on top of a peer-to-peer cluster and uses the TCMP protocol, which is an asynchronous, dynamically-switched unicast/multicast datagram protocol, with reliable in-order delivery of cluster messages. Unlike simple invalidation strategies, Coherence maintains a coherent image of the data across the cluster, and supports fully replicated caches, cluster-partitioned caches, near caches, overflow (to NIO buffers and/or disk) caches, and database-backed caches.

    John Segrave 04/08/04 08:31:32 AM EDT

    What does it use for propagating cache messages in clusters - JMS, sockets? If sockets, does it do blocking reads?

    Cameron 04/07/04 04:47:18 PM EDT

    Tangosol Coherence is supported on basically all Java / J2EE application servers, and doesn't screw with anything like setMessageListener(). It's also the most mature and widely deployed clustered caching solution in the market.

    John Segrave 04/07/04 08:36:04 AM EDT

    I recently investigated several open source and commercial caching products and hit somewhat of a brick wall with the J2EE & EJB specs.

    I needed a cache to serve read-mostly data in a clustered EJB application. The application runs on several app servers, so portability was a key requirement.

    In my experience, the key element in ensuring portability is standards compliance. The J2EE and EJB specs forbid EJBs from using certain APIs, namely those that create threads, block on socket operations, etc. They go even further by explicitly naming forbidden APIs, such as JMS's setMessageListener().

    However, pretty much all the caching products I investigated were using these APIs.

    In one example, I queried the vendor (who shall remain nameless!) about the use of setMessageListener() in their product. They replied by stating that (a) it works and (b) it was fine to use those APIs in classes that the EJB *uses*, just not in the *actual* EJB bean class itself.

    My feeling on (a) was that, if it contravenes the spec, then working on AppServer version X doesn't guarantee it will work on version X+1. It also doesn't mean it will work on any other app server. On (b), I don't see that it makes any difference whether the call is in class A or in class B which gets called from A (in the same thread)?!... The vendor in question had only recently started marketing their product as a J2EE cache (previously it had been best known as a servlet cache). I don't know if those restrictions apply to the web tier.

    In the end, due to portability concerns, we decided to roll our own. Our invalidation strategy was simple and was achievable using JMS and MDBs (no spec contravention required).

    I'd be interested in other people's experiences and perspectives on this topic.
