Scalability of J2EE Applications: Effective Caching's Key

Sooner or later, all architects and developers of large-scale J2EE products face the same problem: their software's response time gets slower and slower, and their solution stops scaling. This article investigates caching solutions that promise to help, sheds some light on their limitations, and describes an easy, lightweight, and effective caching mechanism that solves most of the issues.

Note: This article does not assess all possible ways of caching nor does it take solutions such as commercial external caching products into account.

The Problem
Whenever we build distributed software for a large scale - whether it's J2EE or not - we face the same challenge: keeping response or transaction times low while user load increases. The main problem is that the response time of essentially all software systems starts growing exponentially at some point (see Figure 1). Architecting and implementing a solution that keeps scalability linear and leaves enough room for increasing load as the business grows is a difficult task that requires experience.

A good architect keeps traffic, transaction times and volume, persistence layer design, and caching in mind when he or she drafts the first layout of a new architecture. Understanding concurrent access by n users on m data items is one of the major things an architect looks for.

Possible Solutions
Minimizing traffic in all tiers is the primary objective when creating a scalable solution. Figure 2 shows a typical three-tier system.

While the persistence tier in modern databases already provides significant caching capabilities, it's rarely enough for large-scale systems. What do other mechanisms do to increase performance and scalability, and to what tier/layer do they apply?

Stored Procedures?
I mention this because aside from caching, one suggestion I always hear is using stored procedures. I'd like to encourage everyone to consider different options. Using stored procedures splits the persistence layer over two physical tiers and usually improves only single user performance.

If you look at your application server's console, you might see, for example, that of the 500ms a servlet or JSP request takes, only 100ms are spent on the DB transaction side. Squeezing another 30ms out by using stored procedures rarely makes your system scale - you still need DB connections, cursors, and other resources.

Persistence Layer Caching
The easiest way to cache in J2EE systems is with entity beans (for the moment, let's talk only about CMP); I can hear the readers moan, but the fact remains: they are the only "good" way of caching in J2EE solutions. Why? Because the maximum cache size is controllable by setting the maximum number of beans, and because the resource is under the container's control: beans can be passivated if memory runs short. Usually, they are the only resource that is clusterable as well.

Why would most developers and architects say entity beans are bad for your performance? Because they are. In a single request use case, they have significant overhead compared to direct JDBC. But even in scalability assessments, entity beans often come out last, because their usage as a cache is determined by the possible cache hit rate, just like any other cache. The cache hit rate is determined by the number of reads versus the number of data items versus the number of writes.

Ultimately, if you use entity beans you really need to know what you're doing. While that might be true for any out-of-the-box mechanisms a container provides, it's especially true for entity beans. It's easy to get it wrong and a lot of containers have less than mediocre support for entity beans.

Entity beans make sense if:

  • Your reads and writes are few: scalability is not your concern anyway, and CMP EJBs are as convenient as anything else.
  • Your reads are many while your writes and the number of data items are few: this means a maximum cache hit rate. You have just a few items to cache (most containers only perform well with a few thousand entity bean instances per CPU), and the cache rarely becomes stale because you hardly ever write.
In all other cases, entity beans just make things worse due to their management overhead. Figure 3 shows that cache efficiencies (like entity beans) depend on the number of reads versus the number of writes versus the number of rows (which is an oversimplified perception and not real math). Caching with entity beans works well within the green area.

One important fact needs to be considered as well: some application servers (WLS 6, WebSphere) do not support entity bean clustering/caching in clustered infrastructures. In other words, they often support only the caching of read-only entity beans if you run a cluster, which rules out straight CMP completely as a way to increase scalability.

Let's have a quick look at BMP (mainly read-only or read-mostly BMP). This type of entity bean can be used to solve the problem of too many entity bean instances by allowing you to change the caching granularity: while CMP caches on a per-data-row basis, RO BMPs can essentially cache on any desired granularity level and are basically similar to the caching mechanism I'll discuss later. However, they still have a few disadvantages, such as the entity bean management overhead or (depending on your container implementation) the fact that they usually are - like all entity beans - single threaded: only one request at a time can access the cache.

In all other cases (mixed reads/writes, lots of data, few reads and many writes, etc.), how do we make our software scalable?

Web Tier Caching Using HTTP Session
If persistence layer caching through entity beans is ruled out, we have two tiers left where we could cache.

The most obvious choice developers often make is HTTP session caching. Since it caches at the uppermost tier, it should be most effective at minimizing traffic, right? However, using the HTTP session as a cache makes architects of large-scale systems shudder.

First, it caches on a per-session basis: it helps if one user performs the same or similar action 5,000 times but not if 5,000 users perform one action.

Second, the cache invalidation and GC is based on the session time-out - usually something like 60 minutes. Even if a user works for 10 minutes in your system, the data is cached for 60 minutes, which makes the cache size six times as big as it needs to be, unless you invalidate your session manually.

Finally, it removes one important task from the container: resource management. Because this cache cannot be cleared by the container, the container cannot release these objects even when memory resources become short. The container's GC cycles become more frequent, and the GC has to walk over a large set of mainly stale objects in your session, making the cycles longer than they need to be.

Singletons and Application Context
The last place to cache is in the business layer (the following mechanism could be used in the Web tier as well). Since the HTTP session is not very effective at caching in high-traffic systems, the next best choice is using singletons to cache objects or data from the database.

Singletons (just like the application context) have the advantage that they again cache for all requests, but still are not a container-managed resource. Frequently singleton caches are implemented as a plain Hashtable and are unlimited in size, which causes almost the same problems as HTTP session caching.

I'd like to recall a simple but effective caching strategy that is singleton-based and uses a container-like resource-management mechanism to keep resource usage to a minimum.

LRU Caching
The strategy used is called LRU (least recently used): the cache evicts its least recently used entries first (hence the name). Essentially, it only caches objects that are used frequently by limiting the cache size to a fixed number of items - just like a container's pool size for EJBs - thus keeping resource utilization controlled.

How does this work? Essentially it's a stack: if an object is requested from the stack and it's not there (cache miss), it's inserted at the very top. If your cache size is 1,000 items and the cache is full, the last item will fall off the stack and effectively be removed from the cache (see Figure 4).

If an object is already in the cache (cache hit), it is removed and reinserted at the top (see Figure 5).

This way, the most frequently used items remain at the top, and the least used items eventually drop off the stack. You can easily keep track of your hits and misses, and either query this information to reconfigure your cache or grow and shrink the maximum size dynamically. This minimizes resource usage and maximizes cache effectiveness. The stack implementation depends on your needs: an unsynchronized model allows concurrent reads and minimizes overhead, at the cost of strict thread safety.
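
As a sketch of this strategy, the LRU stack can be built on java.util.LinkedHashMap (available since JDK 1.4), whose access-order mode and removeEldestEntry() hook provide exactly the move-to-top and drop-off-the-end behavior described above. The class and field names are illustrative, and generics are used for clarity:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal singleton-friendly LRU cache sketch with hit/miss tracking.
class LruCache<K, V> {
    private final int maxSize;
    private final Map<K, V> map;
    private int hits, misses;

    LruCache(int maxSize) {
        this.maxSize = maxSize;
        // accessOrder=true moves an entry to the most-recently-used
        // position on each access, like reinserting it at the top.
        this.map = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                // Drop the least recently used entry when the cache is full.
                return size() > LruCache.this.maxSize;
            }
        };
    }

    synchronized V get(K key) {
        V value = map.get(key);
        if (value == null) misses++; else hits++;
        return value;
    }

    synchronized void put(K key, V value) {
        map.put(key, value);
    }

    // Query this to reconfigure or resize the cache at runtime.
    synchronized double hitRate() {
        int total = hits + misses;
        return total == 0 ? 0.0 : (double) hits / total;
    }

    synchronized int size() {
        return map.size();
    }
}
```

The synchronized methods trade some concurrency for safety; an unsynchronized variant would follow the trade-off discussed above.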

Cache Invalidation
This cache works best in read-only or read-mostly scenarios. Unless you implement write-back or other write cache synchronization schemes or don't care that the cache is out of sync with the data source, you'll have to invalidate the cache, which decreases the cache hit rate and efficiency. For example, you can implement write-through caches fairly easily using dynamic proxy classes (JDK 1.3 introduced support for dynamic proxies) but that is a topic for another article.
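
A write-through variant can be sketched with the dynamic proxies mentioned above: the proxy intercepts calls to a data-access interface, serves reads from the cache, and on writes updates the data source first and then the cache. The ItemDao interface and its method names are invented for illustration:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

// Hypothetical DAO interface; stands in for any persistence-layer facade.
interface ItemDao {
    String load(int id);
    void store(int id, String value);
}

// Dynamic proxy that adds write-through caching to any ItemDao.
class CachingHandler implements InvocationHandler {
    private final ItemDao target;
    private final Map<Integer, String> cache = new HashMap<>();

    CachingHandler(ItemDao target) { this.target = target; }

    public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
        if ("load".equals(m.getName())) {
            Integer id = (Integer) args[0];
            String cached = cache.get(id);
            if (cached != null) return cached;           // cache hit
            String loaded = (String) m.invoke(target, args);
            cache.put(id, loaded);                       // fill on miss
            return loaded;
        }
        if ("store".equals(m.getName())) {
            // Write-through: update the data source first, then the cache.
            Object result = m.invoke(target, args);
            cache.put((Integer) args[0], (String) args[1]);
            return result;
        }
        return m.invoke(target, args);                   // pass everything else through
    }

    static ItemDao wrap(ItemDao target) {
        return (ItemDao) Proxy.newProxyInstance(
            ItemDao.class.getClassLoader(),
            new Class<?>[] { ItemDao.class },
            new CachingHandler(target));
    }
}
```

Because the proxy implements the same interface as the real DAO, callers need no code changes to pick up the caching behavior.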

Singleton-based LRU caching still has the typical problem of all singleton-based caches: a singleton is not a singleton in distributed systems (J2EE for that matter) but unique per classloader or server context (if you're lucky), and it's not kept in sync in clustered environments. There are, of course, mechanisms to implement the synchronization of distributed resources; some of them are difficult to implement or have scalability or availability issues; some work just fine. Distributed caching is not easy and if your requirements force you to go down this path, you might be well served choosing a commercial caching product.

The fact that you have several unsynchronized cache copies in clustered environments can be a big problem. The easy solution is using timed caches (just like read-only entity beans), which means that if a cached object is a certain age, it's considered stale and will be dropped from the cache. This is sufficient in most cases, but let's look at the following scenario.

Let's assume our invalidation time is 30 minutes (an object older than 30 minutes is considered stale). Cache A caches an object at 11:15, Cache B at 11:35. If the data item the cache is referring to is refreshed in the database at 11:40, Cache A will have the correct value at 11:45 when it expires but Cache B won't have it until 12:05 (see Figure 6). The problem now is that for 20 minutes you get different results - depending on which server you hit and on the use case this can be a big problem.

The solution for these cases is a timed cache that is refreshed at fixed points in time every n minutes, like at 12:00, 12:30, 1:00, etc. The advantage is that now all your caches are somewhat in sync (as in sync as the clocks on your servers are). The disadvantage is that the load on your servers increases quite a bit every time the caches are cleared, because they're cleared completely.
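
Aligning refresh points to the wall clock is simple integer arithmetic on the epoch time: round the current time up to the next multiple of the interval. A sketch (class name illustrative):

```java
// Computes fixed wall-clock refresh points every n minutes
// (e.g. 12:00, 12:30, ...), so all servers clear their caches
// at roughly the same instants, clock skew aside.
class FixedPointRefresh {
    // Next refresh instant (epoch millis) aligned to the interval.
    static long nextRefresh(long nowMillis, long intervalMillis) {
        return ((nowMillis / intervalMillis) + 1) * intervalMillis;
    }

    // Delay to use when scheduling the first cache-clearing task.
    static long millisUntilRefresh(long nowMillis, long intervalMillis) {
        return nextRefresh(nowMillis, intervalMillis) - nowMillis;
    }
}
```

A java.util.Timer (or a background thread) scheduled with this initial delay and a repeat period equal to the interval would then clear the cache at the same fixed points on every server.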

Which way you go depends on your business requirements; adjusting your refresh cycles largely depends on your data update frequency versus the cache hit rate you would like to achieve.

Of course, there are a variety of other ways to keep distributed copies of caches in sync, but these are not easy to implement and have a variety of side effects to consider.

Open Source and Commercial Caching Implementations
If your caching needs are more complex, or if you just don't want to "roll your own," you might want to give JSR 107 a look. This is the JCache JSR that specifies a distributed, resource-managed, Java-based cache. Even though little progress has been made to provide a production-ready implementation, there are several open source projects and products that are close to a JCache implementation and might provide what you need.

Commercial caching products should be considered if your caching requirements are complex (clustered environments, etc.). As mentioned earlier, distributed caching is not as easy as it seems, and relying on an enterprise-class product often saves time and trouble.

Building a scalable solution often depends on making the right decisions in persistence mechanism and in caching. How, when, and where to cache is the trick; I hope this article helped you make the right decision.

References

  • JSR 107: www.jcp.org/en/jsr/detail?id=107
  • JCS and JCache at Apache: http://jakarta.apache.org/turbine/jcs/JCSandJCACHE.html
About the Author

Stefan Piesche is a principal architect for the Cobalt Group (headquartered in Seattle), responsible for large-scale, distributed systems based on J2EE. In past years he worked on several large-scale systems in Europe in the financial and airline industries.


    Most Recent Comments
    Cameron 05/19/04 03:22:24 PM EDT

    Hi Jamal, cluster-wide means that any cluster node can request a lock, optionally queue for it (if it's taken), and thus have the ability to safely work with data in-memory in the cluster, just like having "synchronized" as a keyword allows you to write thread-safe Java code. So just as "synchronized" is a necessary ingredient of scaling to multiple threads, the ability to manage locks and coherent data in a cluster is a necessary ingredient of horizontal scaling in a cluster.

    Jamal 05/05/04 04:53:20 PM EDT

    Perhaps I am misinterpreting the meaning of the "cluster-wide" lock. It would seem that if you were locking a cache at the record level for a read/write operation, as the case may be, the overall performance would degrade as the number of nodes increases, because each node would effectively be locked until another node finished its work. This in turn creates a virtual cluster-wide transaction (- ick -). A transaction of this nature can only decrease in performance as more nodes are added to the cluster.

    Cluster wide locks sound like a really, really bad idea for large-scale systems where scalability is key.

    Rob Misek 05/04/04 09:48:58 AM EDT

    Hi Jamal,

    The in-cluster operations generally start off more performant because they occur in-memory (and in Java) in the cluster in which the application is running, and do not have to go through the inefficiencies of translating through the JDBC API. The icing on the cake is that cluster performance does not degrade as more and more machines enter the cluster, while database performance decreases in the same situation. The operation stays at a relatively constant speed, regardless of whether you have 2 servers or 20 servers in the cluster. Yet, with 20 servers you can manage 10x the throughput of 2 servers.

    Jamal 05/03/04 09:19:11 PM EDT

    Rob, you indicate that Coherence provides cluster-wide locking as the concurrency mechanism. Can you please explain how a cluster-wide lock "increases" performance?

    Rob Misek 04/28/04 04:30:37 PM EDT

    "For the data that are modified frequently and concurrently, they are best not to be cached."

    Actually, that is the area where you will see the largest performance improvement by caching. Coherence is used in this exact scenario since it provides cluster-wide locking as the concurrency mechanism.

    That coupled with the use of our "read-through/write-behind" technology when using an underlying datastore increases performance further. The huge advantage of a read-through/write-behind cache in a cluster is that the database is almost completely unloaded of repetitive read operations, and more importantly the database is unloaded from repetitive writes, which are usually much more expensive than read operations. Combined with the HA features of Coherence, even the write-behind data is managed safely during failover, allowing an application server to unexpectedly fail without any loss of uncommitted (write-behind) data.

    Stanley 04/28/04 12:41:54 AM EDT

    The more important thing is to determine what to cache and how much to cache. All applications use some kind of reference data that is used application-wide and changes very infrequently. This can be cached at a global level, at the web/business tier, using either a singleton or the application context. It can be refreshed overnight and/or on demand. We have successfully implemented this type of cache in a number of projects, with a custom notification mechanism to synchronize the caches on clustered servers.

    The Session object has also a place to cache certain types of data, such as those that are applicable mainly to the user session, hardly shared with other users, infrequently modified (rarely modified by other users while the user session is active). However we shall not store too many objects in the Session or the application will not scale.

    For the data that are modified frequently and concurrently, they are best not to be cached.

    I agree with Stefan that Entity EJB is not a good place to implement caches.

    Jim Carroll 04/21/04 12:00:15 PM EDT

    Your article is interesting but if scalability is your main aim (as the first paragraphs imply) then a thorough discussion of how to design out server maintained state information can be more important than sophisticated caching. Rather than recommending the use of Entity beans, designing a system that eliminates (or minimizes) their need (since they are, by definition, server managed state) will go much further toward a truly "linearly scalable" distributed system.

    Regards,
    Jim

    Cameron 04/15/04 02:13:39 PM EDT

    Hi John, those limitations do not apply to system libraries, such as JDBC drivers, which *must* open sockets in order to communicate with a database running in a different process or on a different machine.

    The rules for EJBs, particularly the limitations, are there to restrict the component developer (someone writing an EJB that may be deployed to any application server) so that they are assured of the highest level of portability. The spec now explains a lot of the limitations. For example, if you access the file system, and the application server is clustered, how can you be sure that the file will be there when you fail over (since it might be a local file system, or the machine that is failed over to could be running on a completely different site!)

    Similarly, managing threads and sockets may be disabled for the EJB components from a security perspective, to allow for controlled environments for third party hosting of applications.

    IBM is working to standardize APIs so that the container can expose things like thread pooling to the component developer. In the mean time, those things should be considered off limits to an EJB developer if you want to make your EJBs as portable as possible. However, system libraries are still able to (and often must) use those APIs to implement features like database connectivity.

    Peace.

    John Segrave 04/15/04 01:21:27 PM EDT

    It strikes me that there's little difference between an EJB Bean class calling a prohibited API and the Bean class calling an intermediate class which calls the prohibited API.

    Does EJB make a provision for when there is no alternative but to call these APIs? e.g. You can call them, as long as the call is routed through an XXX? (eg resource adapter? don't really know much about resource adapters, apologies if this is a meaningless suggestion). How is a JDBC driver supposed to access a remote DB without making socket calls?

    Cheers,

    J

    Cameron 04/13/04 11:38:22 AM EDT

    A system library, such as Coherence or a JDBC driver or a JMS implementation, is allowed to manage socket communication, while an EJB component is not. Optionally, you can use Coherence as a transactional resource through the J2CA (Java 2 Enterprise Edition Connector Architecture) API.

    As far as compatibility, Coherence has hundreds of production deployments on numerous application server platforms (including WebSphere 3.5.4 - 5.1 and WebLogic 4.5.1 - 8.1) on JDK 1.2 - 1.4.

    Peace.

    John Segrave 04/13/04 02:53:36 AM EDT

    Hi Cameron, thanks for the replies.

    To my reading, the EJB 2.0 spec (section 24.1.2 to be precise!) appears to prohibit the use of multicast sockets: "An enterprise bean must not attempt to listen on a socket, accept connections on a socket, or use a socket for multicast."

    I'm genuinely not trying to be argumentative here, just trying to work out whether or not Coherence does or doesn't contravene the spec, and if it does, was it done intentionally (in which case, what was the reason)?

    Normally, I'd expect an intentional contravention of a spec to be accompanied by an explanation (so customers can make up their own minds whether or not they think it may cause them problems). I'd like to give Coherence (and several other caching products) the benefit of the doubt, but at present that would be based on faith rather than evidence!

    Cameron 04/08/04 10:12:04 AM EDT

    Tangosol Coherence is built on top of a peer-to-peer cluster and uses the TCMP protocol, which is an asynchronous, dynamically-switched unicast/multicast datagram protocol, with reliable in-order delivery of cluster messages. Unlike simple invalidation strategies, Coherence maintains a coherent image of the data across the cluster, and supports fully replicated caches, cluster-partitioned caches, near caches, overflow (to NIO buffers and/or disk) caches, and database-backed caches.

    John Segrave 04/08/04 08:31:32 AM EDT

    What does it use for propagating cache messages in clusters - JMS, sockets? If sockets, does it do blocking reads?

    Cameron 04/07/04 04:47:18 PM EDT

    Tangosol Coherence is supported on basically all Java / J2EE application servers, and doesn't screw with anything like setMessageListener(). It's also the most mature and widely deployed clustered caching solution in the market.

    John Segrave 04/07/04 08:36:04 AM EDT

    I recently investigated several open source and commercial caching products and hit somewhat of a brick wall with the J2EE & EJB specs.

    I needed a cache to serve read-mostly data in a clustered EJB application. The application runs on several app servers, so portability was a key requirement.

    In my experience, the key element in ensuring portability is standards compliance. The J2EE and EJB specs forbid EJBs from using certain APIs, namely those that create threads, block on socket operations, etc. They go even further by explicitly naming forbidden APIs, such as JMS's setMessageListener().

    However, pretty much all the caching products I investigated were using these APIs.

    In one example, I queried the vendor (who shall remain nameless!) about the use of setMessageListener() in their product. They replied by stating that (a) it works and (b) it was fine to use those APIs in classes that the EJB *uses*, just not in the *actual* EJB bean class itself.

    My feeling on (a) was that, if it contravenes the spec, then working on AppServer version X doesn't guarantee it will work on version X+1. It also doesn't mean it will work on any other app server. On (b) I don't see that it makes any difference whether the call is in class A or in class B which gets called from A (in the same thread)?!... The vendor in question had only recently started marketing their product as a J2EE cache (previously it had been best known as a servlet cache). I don't know if those restrictions apply to the web tier.

    In the end, due to portability concerns, we decided to roll our own. Our invalidation strategy was simple and was achievable using JMS and MDBs (no spec contravention required).

    I'd be interested in other people's experiences and perspectives on this topic.
