Scalability of J2EE Applications: Effective Caching's Key


Sooner or later, all architects and developers of large-scale J2EE products face the same problem: their software's response times get slower and slower, and their solution stops scaling. This article investigates caching solutions that promise to help, sheds some light on their limitations, and describes an easy, lightweight, and effective caching mechanism that solves most of these issues.

Note: This article does not assess all possible ways of caching nor does it take solutions such as commercial external caching products into account.

The Problem
Whenever we build distributed software for a large scale - whether it's J2EE or not - we face the same challenge: keep response or transaction times low while increasing user load. The main problem is that in essentially all software systems, response time eventually grows exponentially with load (see Figure 1). Architecting and implementing a solution that keeps scalability linear and leaves enough room for increasing load as the business grows is a difficult task that requires experience.

A good architect keeps traffic, transaction times and volume, persistence layer design, and caching in mind when he or she drafts the first layout of a new architecture. Understanding concurrent access by n users on m data items is one of the major things an architect looks for.

Possible Solutions
Minimizing traffic in all tiers is the primary objective when creating a scalable solution. Figure 2 shows a typical three-tier system.

While the persistence tier in modern databases already provides significant caching capabilities, it's rarely enough for large-scale systems. What do other mechanisms do to increase performance and scalability, and to what tier/layer do they apply?

Stored Procedures?
I mention this because aside from caching, one suggestion I always hear is using stored procedures. I'd like to encourage everyone to consider different options. Using stored procedures splits the persistence layer over two physical tiers and usually improves only single user performance.

If you look at your application server's console, you might see, for example, that of the 500ms a servlet or JSP request takes, only 100ms are spent on the DB transaction side. Squeezing another 30ms out by using stored procedures rarely makes your system scale - you still need DB connections, cursors, and other resources.

Persistence Layer Caching
The easiest way to cache in J2EE systems is with entity beans (for the moment, let's talk only about CMP). I can hear readers moan, but the fact remains: they are the only "good" way of caching in J2EE solutions. Why? Because the maximum cache size can be controlled by setting the maximum number of beans, and because the resource is under the container's control: beans can be passivated if memory runs short. Usually, they are also the only resource that is clusterable.

Why would most developers and architects say entity beans are bad for performance? Because they are. In a single-request use case, they have significant overhead compared to direct JDBC. But even in scalability assessments, entity beans often come out last, because their usefulness as a cache is determined by the achievable cache hit rate, just like any other cache. The cache hit rate is determined by the number of reads versus the number of data items versus the number of writes.

Ultimately, if you use entity beans you really need to know what you're doing. While that might be true for any out-of-the-box mechanisms a container provides, it's especially true for entity beans. It's easy to get it wrong and a lot of containers have less than mediocre support for entity beans.

Entity beans make sense if:

  • Your reads and writes are few - then scalability is not your concern anyway, and CMP EJBs are just as convenient as anything else.
  • Your reads are many, while your writes and number of data items are few - this means the maximum cache hit rate: you have just a few items to cache (most containers only perform well with a few thousand entity bean instances per CPU), and the cache rarely becomes stale because you hardly ever write.

In all other cases, entity beans just make things worse due to their management overhead. Figure 3 shows that cache efficiency (for entity beans or any other cache) depends on the number of reads versus the number of writes versus the number of rows (an oversimplified picture, not real math). Caching with entity beans works well within the green area.

One important fact needs to be considered as well: some application servers (WLS 6, WebSphere) do not support entity bean clustering/caching in clustered infrastructures. In other words, they often support only the caching of read-only entity beans if you run a cluster, which completely rules out straight CMP as a means of increasing scalability.

Let's have a quick look at BMP (mainly read-only or read-mostly BMP). This type of entity bean can solve the problem of too many entity bean instances by letting you change the caching granularity: while CMP caches on a per-data-row basis, read-only BMPs can cache at essentially any desired granularity level and are basically similar to the caching mechanism I'll discuss later. However, they still have a few disadvantages, such as the entity bean management overhead or (depending on your container implementation) the fact that they usually are - like all entity beans - single-threaded: only one request at a time can access the cache.

In all other cases (mixed reads/writes, lots of data, few reads and many writes, etc.), how do we make our software scalable?

Web Tier Caching Using HTTP Session
If persistence layer caching through entity beans is ruled out, we have two tiers left where we could cache.

The most obvious choice developers often make is HTTP session caching. Since it caches at the uppermost tier, it should be most effective at minimizing traffic, right? However, using the HTTP session as a cache makes architects of large-scale systems shudder.

First, it caches on a per-session basis: it helps if one user performs the same or similar action 5,000 times, but not if 5,000 users each perform one action.

Second, the cache invalidation and GC is based on the session time-out - usually something like 60 minutes. Even if a user works for 10 minutes in your system, the data is cached for 60 minutes, which makes the cache size six times as big as it needs to be, unless you invalidate your session manually.

Finally, it takes one important task away from the container: resource management. Since this cache cannot be cleared by the container, it causes problems: the container cannot GC these objects even when memory runs short. The container's GC cycles become more frequent, and the GC has to walk over a large set of mostly stale objects in your sessions, making the cycles longer than they need to be.

Singletons and Application Context
The last place to cache is in the business layer (the following mechanism could be used in the Web tier as well). Since the HTTP session is not very effective at caching in high-traffic systems, the next best choice is using singletons to cache objects or data from the database.

Singletons (just like the application context) have the advantage that they again cache for all requests, but they are still not a container-managed resource. Frequently, singleton caches are implemented as a plain Hashtable and are unlimited in size, which causes almost the same problems as HTTP session caching.
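A minimal sketch of this anti-pattern shows why it is dangerous (the class name and API are illustrative, not from any real product): nothing ever bounds or evicts entries, so the map grows for the lifetime of the JVM, outside the container's resource management.

```java
import java.util.Hashtable;

// A typical naive singleton cache, sketched for illustration only.
// Nothing bounds its size or evicts entries, so it grows until memory
// runs out -- the container can neither passivate nor GC its contents.
final class NaiveSingletonCache {
    private static final NaiveSingletonCache INSTANCE = new NaiveSingletonCache();
    private final Hashtable<Object, Object> data = new Hashtable<Object, Object>();

    private NaiveSingletonCache() {}

    static NaiveSingletonCache getInstance() { return INSTANCE; }

    void put(Object key, Object value) { data.put(key, value); }

    Object get(Object key) { return data.get(key); }

    int size() { return data.size(); } // only ever grows
}
```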

I'd like to recall a simple but effective caching strategy that is singleton-based and uses a container-like resource-management mechanism to keep resource usage to a minimum.

LRU Caching
The strategy used is called LRU (least recently used): the cache size is limited to a fixed number of items, just like a container pool for EJBs, and when the cache is full, the least recently used item is evicted first (hence the name). The effect is that only frequently used objects stay cached, which keeps resource utilization controlled.

How does this work? Essentially, it's a stack: if an object is requested and it's not there (a cache miss), it's loaded and inserted at the very top. If your cache size is 1,000 items and the cache is full, the last item falls off the stack and is effectively removed from the cache (see Figure 4).

If an object is already in the cache (a cache hit), it's removed and reinserted at the top (see Figure 5).

This way, the most frequently used items remain near the top, and the least used items eventually drop off the stack. You can also easily keep track of your hits and misses, and either query this information to reconfigure your cache or grow and shrink the maximum size dynamically. Thus you minimize resource usage and maximize cache effectiveness. The stack implementation depends on your needs: if necessary, choose an unsynchronized model to allow concurrent reads and minimize overhead.
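As an illustration, a minimal LRU cache with hit/miss tracking can be sketched on top of `LinkedHashMap`'s access-order mode (available since JDK 1.4). The class and method names are hypothetical, and the coarse `synchronized` methods are the simplest choice, not the fastest:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache sketch: a LinkedHashMap in access order evicts the
// eldest (least recently used) entry once the fixed maximum is exceeded.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;
    private long hits, misses;

    LruCache(int maxEntries) {
        super(16, 0.75f, true); // true = access order, i.e. LRU
        this.maxEntries = maxEntries;
    }

    // Called by LinkedHashMap after each put: evict once the cache is full.
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }

    // get() reorders entries in access-order mode, so it must be guarded
    // just like put() when the cache is shared across requests.
    @Override
    public synchronized V get(Object key) {
        V value = super.get(key);
        if (value == null) { misses++; } else { hits++; }
        return value;
    }

    @Override
    public synchronized V put(K key, V value) {
        return super.put(key, value);
    }

    synchronized long getHits()   { return hits; }
    synchronized long getMisses() { return misses; }
}
```

With a maximum of two entries, putting a third evicts whichever of the first two was touched least recently, exactly the stack behavior described above.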

Cache Invalidation
This cache works best in read-only or read-mostly scenarios. Unless you implement write-back or other write cache synchronization schemes or don't care that the cache is out of sync with the data source, you'll have to invalidate the cache, which decreases the cache hit rate and efficiency. For example, you can implement write-through caches fairly easily using dynamic proxy classes (JDK 1.3 introduced support for dynamic proxies) but that is a topic for another article.
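To illustrate the idea, a write-through cache over a hypothetical `DataStore` interface can be sketched with `java.lang.reflect.Proxy` roughly as follows. The interface, the name-based dispatch on `load`/`save`, and the unbounded `HashMap` are all simplifying assumptions made for this sketch:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

// Hypothetical persistence interface used only for this sketch.
interface DataStore {
    String load(String key);
    void save(String key, String value);
}

// A write-through cache as a dynamic proxy: reads are served from the
// cache when possible; writes update the cache and the backing store.
class WriteThroughHandler implements InvocationHandler {
    private final DataStore target;
    private final Map<String, String> cache = new HashMap<String, String>();

    WriteThroughHandler(DataStore target) { this.target = target; }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        if (method.getName().equals("load")) {
            String key = (String) args[0];
            String cached = cache.get(key);
            if (cached != null) return cached;              // cache hit
            String value = (String) method.invoke(target, args);
            if (value != null) cache.put(key, value);       // fill on miss
            return value;
        }
        if (method.getName().equals("save")) {
            cache.put((String) args[0], (String) args[1]);  // update cache...
            return method.invoke(target, args);             // ...and write through
        }
        return method.invoke(target, args);                 // pass everything else on
    }

    static DataStore wrap(DataStore target) {
        return (DataStore) Proxy.newProxyInstance(
                DataStore.class.getClassLoader(),
                new Class[] { DataStore.class },
                new WriteThroughHandler(target));
    }
}
```

Because the proxy implements the same interface as the real store, callers need no changes; the cache stays consistent with the data source for writes that go through this proxy, though not for writes made elsewhere.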

Singleton-based LRU caching still has the typical problem of all singleton-based caches: a singleton is not a singleton in a distributed system such as J2EE, but unique per classloader or server context (if you're lucky), and it's not kept in sync in clustered environments. There are, of course, mechanisms for synchronizing distributed resources; some are difficult to implement or have scalability or availability issues; some work just fine. Distributed caching is not easy, and if your requirements force you down this path, you might be well served by choosing a commercial caching product.

The fact that you have several unsynchronized cache copies in clustered environments can be a big problem. The easy solution is using timed caches (just like read-only entity beans), which means that if a cached object is a certain age, it's considered stale and will be dropped from the cache. This is sufficient in most cases, but let's look at the following scenario.

Let's assume our invalidation time is 30 minutes (an object older than 30 minutes is considered stale). Cache A caches an object at 11:15, Cache B at 11:35. If the data item the cache is referring to is refreshed in the database at 11:40, Cache A will have the correct value at 11:45 when it expires but Cache B won't have it until 12:05 (see Figure 6). The problem now is that for 20 minutes you get different results - depending on which server you hit and on the use case this can be a big problem.

The solution for these cases is a timed cache that is refreshed at fixed points in time every n minutes, like at 12:00, 12:30, 1:00, etc. The advantage is that now all your caches are somewhat in sync (as in sync as the clocks on your servers are). The disadvantage is that the load on your servers increases quite a bit every time the caches are cleared, because they're cleared completely.
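The fixed-point scheme boils down to a small piece of clock arithmetic: an entry is stale once the wall clock passes the first multiple of the refresh interval after the time it was cached. A sketch (class and method names are illustrative):

```java
// Staleness check against fixed wall-clock refresh points. An entry
// cached at time t is stale once the clock passes the next multiple of
// the refresh interval after t -- the same instants on every server,
// as long as the server clocks agree.
final class FixedPointExpiry {
    private final long intervalMillis;

    FixedPointExpiry(long intervalMillis) {
        this.intervalMillis = intervalMillis;
    }

    // The first refresh point strictly after the caching timestamp.
    long nextRefreshPoint(long cachedAtMillis) {
        return (cachedAtMillis / intervalMillis + 1) * intervalMillis;
    }

    boolean isStale(long cachedAtMillis, long nowMillis) {
        return nowMillis >= nextRefreshPoint(cachedAtMillis);
    }
}
```

Revisiting the example above with this scheme: Cache A's 11:15 entry expires at 11:30 and Cache B's 11:35 entry at 12:00, so from 12:00 on both caches agree, at the cost of all entries in a cache expiring at the same instant.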

Which way you go depends on your business requirements; adjusting your refresh cycles largely depends on your data update frequency versus the cache hit rate you would like to achieve.

Of course, there are a variety of other ways to keep distributed copies of caches in sync, but these are not easy to implement and have a variety of side effects to consider.

Open Source and Commercial Caching Implementations
If your caching needs are more complex, or if you just don't want to "roll your own," you might want to give JSR 107 a look. This is the JCache JSR that specifies a distributed, resource-managed, Java-based cache. Even though little progress has been made to provide a production-ready implementation, there are several open source projects and products that are close to a JCache implementation and might provide what you need.

Commercial caching products should be considered if your caching requirements are complex (clustered environments, etc.). As mentioned earlier, distributed caching is not as easy as it seems, and relying on an enterprise-class product often saves time and trouble.

Building a scalable solution often depends on making the right decisions about the persistence mechanism and about caching. How, when, and where to cache is the trick; I hope this article helps you make the right decisions.


  • JSR 107 (JCache):
  • JCS and JCache at Apache:

Stefan Piesche is a principal architect for the Cobalt Group (headquartered in Seattle), responsible for large-scale, distributed systems based on J2EE. In past years, he worked on several large-scale systems in Europe in the financial and airline industries.


    Most Recent Comments
    Cameron 05/19/04 03:22:24 PM EDT

    Hi Jamal, cluster-wide means that any cluster node can request a lock, optionally queue for it (if it's taken), and thus have the ability to safely work with data in-memory in the cluster, just like having "synchronized" as a keyword allows you to write thread-safe Java code. So just as "synchronized" is a necessary ingredient of scaling to multiple threads, the ability to manage locks and coherent data in a cluster is a necessary ingredient of horizontal scaling in a cluster.

    Jamal 05/05/04 04:53:20 PM EDT

    Perhaps I am misinterpreting the meaning of the "cluster-wide" lock. It would seem that if you were locking a cache at the record level for a read/write operation, as the case may be, that the overall performance would degrade as the number of nodes increases, because each node would effectively be locked until another node finished its work. This in turn creates a virtual cluster-wide transaction (- ick -). A transaction of this nature can only decrease in performance as more nodes are added to the cluster.

    Cluster wide locks sound like a really, really bad idea for large-scale systems where scalability is key.

    Rob Misek 05/04/04 09:48:58 AM EDT

    Hi Jamal,

    The in-cluster operations generally start off more performant because they occur in-memory (and in Java) in the cluster in which the application is running, and do not have to go through the inefficiencies of translating through the JDBC API. The icing on the cake is that the cluster performance does not degrade as more and more machines enter the cluster, while the database performance decreases in the same situation. The operation stays at a relatively constant speed, regardless of whether you have 2 servers or 20 servers in the cluster. Yet, with 20 servers you can manage 10x the throughput of 2 servers.

    Jamal 05/03/04 09:19:11 PM EDT

    Rob, you indicate that Coherence provides cluster-wide locking as the concurrency mechanism. Can you please explain how a cluster-wide lock "increases" performance?

    Rob Misek 04/28/04 04:30:37 PM EDT

    "For the data that are modified frequently and concurrently, they are best not to be cached."

    Actually, that is the area where you will see the largest performance improvement by caching. Coherence is used in this exact scenario since it provides cluster-wide locking as the concurrency mechanism.

    That coupled with the use of our "read-through/write-behind" technology when using an underlying datastore increases performance further. The huge advantage of a read-through/write-behind cache in a cluster is that the database is almost completely unloaded of repetitive read operations, and more importantly the database is unloaded from repetitive writes, which are usually much more expensive than read operations. Combined with the HA features of Coherence, even the write-behind data is managed safely during failover, allowing an application server to unexpectedly fail without any loss of uncommitted (write-behind) data.

    Stanley 04/28/04 12:41:54 AM EDT

    The more important thing is to determine what to cache and how much to cache. All applications use some kind of reference data that is used application-wide and changes very infrequently. This can be cached at a global level, in the web/business tier, using either a singleton or the application context, and refreshed overnight and/or on demand. We have successfully implemented this type of cache in a number of projects, with a custom notification mechanism to synchronize the caches on clustered servers.

    The Session object also has a place for caching certain types of data, such as data that is applicable mainly to the user session, hardly shared with other users, and infrequently modified (rarely modified by other users while the user session is active). However, we should not store too many objects in the Session, or the application will not scale.

    For the data that are modified frequently and concurrently, they are best not to be cached.

    I agree with Stefan that Entity EJB is not a good place to implement caches.

    Jim Carroll 04/21/04 12:00:15 PM EDT

    Your article is interesting but if scalability is your main aim (as the first paragraphs imply) then a thorough discussion of how to design out server maintained state information can be more important than sophisticated caching. Rather than recommending the use of Entity beans, designing a system that eliminates (or minimizes) their need (since they are, by definition, server managed state) will go much further toward a truly "linearly scalable" distributed system.


    Cameron 04/15/04 02:13:39 PM EDT

    Hi John, those limitations do not apply to system libraries, such as JDBC drivers, which *must* open sockets in order to communicate with a database running in a different process or on a different machine.

    The rules for EJBs, particularly the limitations, are there to restrict the component developer (someone writing an EJB that may be deployed to any application server) so that they are assured of the highest level of portability. The spec now explains a lot of the limitations. For example, if you access the file system, and the application server is clustered, how can you be sure that the file will be there when you fail over (since it might be a local file system, or the machine that is failed over to could be running on a completely different site!)

    Similarly, managing threads and sockets may be disabled for the EJB components from a security perspective, to allow for controlled environments for third party hosting of applications.

    IBM is working to standardize APIs so that the container can expose things like thread pooling to the component developer. In the mean time, those things should be considered off limits to an EJB developer if you want to make your EJBs as portable as possible. However, system libraries are still able to (and often must) use those APIs to implement features like database connectivity.


    John Segrave 04/15/04 01:21:27 PM EDT

    It strikes me that there's little difference between an EJB Bean class calling a prohibited API and the Bean class calling an intermediate class which calls the prohibited API.

    Does EJB make a provision for when there is no alternative but to call these APIs? E.g., you can call them as long as the call is routed through an XXX? (e.g. a resource adapter? I don't really know much about resource adapters; apologies if this is a meaningless suggestion). How is a JDBC driver supposed to access a remote DB without making socket calls?



    Cameron 04/13/04 11:38:22 AM EDT

    A system library, such as Coherence or a JDBC driver or a JMS implementation, is allowed to manage socket communication, while an EJB component is not. Optionally, you can use Coherence as a transactional resource through the J2CA (Java 2 Enterprise Edition Connector Architecture) API.

    As far as compatibility, Coherence has hundreds of production deployments on numerous application server platforms (including WebSphere 3.5.4 - 5.1 and WebLogic 4.5.1 - 8.1) on JDK 1.2 - 1.4.


    John Segrave 04/13/04 02:53:36 AM EDT

    Hi Cameron, thanks for the replies.

    To my reading, the EJB 2.0 spec (section 24.1.2 to be precise!) appears to prohibit the use of multicast sockets: "An enterprise bean must not attempt to listen on a socket, accept connections on a socket, or use a socket for multicast."

    I'm genuinely not trying to be argumentative here, just trying to work out whether or not Coherence does or doesn't contravene the spec, and if it does, was it done intentionally (in which case, what was the reason)?

    Normally, I'd expect an intentional contravention of a spec to be accompanied by an explanation (so customers can make up their own minds whether or not they think it may cause them problems). I'd like to give Coherence (and several other caching products) the benefit of the doubt, but at present that would be based on faith rather than evidence!

    Cameron 04/08/04 10:12:04 AM EDT

    Tangosol Coherence is built on top of a peer-to-peer cluster and uses the TCMP protocol, which is an asynchronous, dynamically-switched unicast/multicast datagram protocol, with reliable in-order delivery of cluster messages. Unlike simple invalidation strategies, Coherence maintains a coherent image of the data across the cluster, and supports fully replicated caches, cluster-partitioned caches, near caches, overflow (to NIO buffers and/or disk) caches, and database-backed caches.

    John Segrave 04/08/04 08:31:32 AM EDT

    What does it use for propagating cache messages in clusters - JMS, sockets? If sockets, does it do blocking reads?

    Cameron 04/07/04 04:47:18 PM EDT

    Tangosol Coherence is supported on basically all Java / J2EE application servers, and doesn't screw with anything like setMessageListener(). It's also the most mature and widely deployed clustered caching solution in the market.

    John Segrave 04/07/04 08:36:04 AM EDT

    I recently investigated several open source and commercial caching products and hit somewhat of a brick wall with the J2EE & EJB specs.

    I needed a cache to serve read-mostly data in a clustered EJB application. The application runs on several app servers, so portability was a key requirement.

    In my experience, the key element in ensuring portability is standards compliance. The J2EE and EJB specs forbid EJBs from using certain APIs, namely those that create threads, block on socket operations, etc. They go even further by explicitly naming forbidden APIs, such as JMS's setMessageListener().

    However, pretty much all the caching products I investigated were using these APIs.

    In one example, I queried the vendor (who shall remain nameless!) about the use of setMessageListener() in their product. They replied by stating that (a) it works and (b) it was fine to use those APIs in classes that the EJB *uses*, just not in the *actual* EJB bean class itself.

    My feeling on (a) was that, if it contravenes the spec, then working on AppServer version X doesn't guarantee it will work on version X+1. It also doesn't mean it will work on any other app server. On (b), I don't see that it makes any difference whether the call is in class A or in class B which gets called from A (in the same thread)?!... The vendor in question had only recently started marketing their product as a J2EE cache (previously it had been best known as a servlet cache). I don't know if those restrictions apply to the web tier.

    In the end, due to portability concerns, we decided to roll our own. Our invalidation strategy was simple and was achievable using JMS and MDBs (no spec contravention required).

    I'd be interested in other people's experiences and perspectives on this topic.
