


Case Study: Integrating Redshift, DynamoDB and JasperSoft

JasperSoft and Redshift are touted as the new way to do data warehousing in the cloud

We recently completed a proof-of-concept (POC) that involved pulling data out of DynamoDB and into Redshift so that business users could analyze the data in an ad hoc manner with JasperSoft. JasperSoft and Redshift are touted as the new way to do data warehousing in the cloud, and we were using DynamoDB as a sort of alternative to something Hadoop-based.

We were amazed at how quickly and easily we could get a business-user-friendly view of the data we had stored in DynamoDB using Redshift and JasperSoft. The actual human effort required to copy some initial data from DynamoDB into Redshift and then view it in JasperSoft was barely a few hours. There were a few unforeseen technical challenges, but none proved insurmountable. Ultimately we will continue with this technology combination because, as well as being easy to deploy and use, it gives us confidence that we can scale the POC into a "real" solution and that our growing data needs will be taken care of.

Redshift's built-in COPY from DynamoDB makes getting data into Redshift fast, but has limitations
Redshift provides an out-of-the-box COPY command that loads data from DynamoDB into Redshift without the need to set up servers or write any code beyond a few simple lines of SQL. We were able to copy many of our tables straight out of DynamoDB into Redshift and start running ad hoc queries without having to fire up servers or create an entire data-translation layer of software.
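The COPY itself really is only a few lines of SQL. A minimal sketch of the pattern (the table name, columns and credentials below are illustrative placeholders, not from our POC):

```sql
-- Create a Redshift table whose columns match the DynamoDB item attributes,
-- then load it directly from the DynamoDB table with COPY.
CREATE TABLE events (
    event_id   VARCHAR(64),
    user_id    VARCHAR(64),
    created_at BIGINT      -- Unix timestamp stored as a number in DynamoDB
);

COPY events
FROM 'dynamodb://events'
CREDENTIALS 'aws_access_key_id=<key>;aws_secret_access_key=<secret>'
READRATIO 50;  -- consume at most 50% of the table's provisioned read throughput
```

The READRATIO option matters when, as in our case, the DynamoDB table is also serving a live application: it caps how much of the provisioned read capacity the copy is allowed to use.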

We couldn't copy all of our tables, however, as DynamoDB's String Set field type is not currently supported by Redshift's copy function. After attempts at various SQL hacks and closely reading the Redshift manual we realized that we would not be able to work with these tables in the POC. In the future we will write some simple copy scripts (unless Amazon beats us to it and updates the copy command which, given their continuous product improvement, is likely).

Copying takes time
As it was a POC we were only copying across 3.5GB, or about 50 million rows, of data, but the process still took 37 hours to complete. Both Redshift and DynamoDB were running on the lowest performance settings, and the DynamoDB instance was also servicing the live beta application we were extracting data from. We suspect the copy could easily be made quicker by increasing the provisioned throughput of DynamoDB and the power of the Redshift cluster, but we did not test this.

The time it takes to copy 3.5GB of data showed us that a considered approach is necessary for getting data from DynamoDB into Redshift, especially since the live data will be much larger in volume. For example, when this goes into production we will copy only new records on a daily basis rather than clearing the entire Redshift database and reloading it to stay up to date.
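One way to implement that daily incremental load on the Redshift side is to COPY into a staging table and then insert only the rows not already present. A hedged sketch (table and column names are hypothetical; note that COPY from DynamoDB always scans the whole source table, so a production pipeline would more likely extract only the new items before loading):

```sql
-- Stage the extracted records, then insert only rows Redshift hasn't seen.
CREATE TEMP TABLE events_staging (LIKE events);

COPY events_staging
FROM 'dynamodb://events'
CREDENTIALS 'aws_access_key_id=<key>;aws_secret_access_key=<secret>'
READRATIO 25;

-- Anti-join on the key so re-running the load never duplicates rows.
INSERT INTO events
SELECT s.*
FROM events_staging s
LEFT JOIN events e ON e.event_id = s.event_id
WHERE e.event_id IS NULL;

DROP TABLE events_staging;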

Working in SQL with Redshift makes life easy
Once we had pulled our initial copy of data into Redshift we needed to manipulate the data to get it into a form that business users could analyze and create reports with.

The great thing about Redshift is that you are working in an environment you are already familiar with: SQL. Redshift, at the time of writing, is based on PostgreSQL 8.0.2, so we were able to apply familiar string-manipulation and math functions, as well as create and join new tables, to make the data much easier for a non-technical business user to understand.

Some SQL functions aren't yet supported by Redshift, so we had to read through the documentation every now and then to find a suitable alternative. Sometimes it was just a matter of finding the alternate name Redshift uses for a function we were used to; other times it meant writing some interesting workaround SQL. For example, Redshift doesn't provide a function that converts a Unix timestamp to a date, so we had to convert our timestamps to dates manually using a mathematical formula.
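A common workaround of this kind, shown here against a hypothetical `created_at` column of epoch seconds, is to add the timestamp as an interval to the epoch and cast the result:

```sql
-- No from_unixtime()-style function, so build the timestamp with
-- interval arithmetic from the Unix epoch, then cast down to a date.
SELECT created_at,
       TIMESTAMP 'epoch' + created_at * INTERVAL '1 second'          AS created_ts,
       (TIMESTAMP 'epoch' + created_at * INTERVAL '1 second')::date  AS created_date
FROM events;
```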

Instant, business-user-friendly data access with JasperSoft
We spun up a JasperSoft OnDemand instance and connected it to Redshift quite quickly, and were then creating ad hoc views and reports in a matter of minutes.

We did have some issues analyzing one of our tables straight out of the box: at almost 3.5GB, attempting to view reports on it caused JasperSoft to crash. With some tweaking of the way the reports ran we were able to stop analysis of this table from crashing JasperSoft.

More Stories By Scott Middleton

Scott Middleton is the CEO and Principal Consultant at Terem Technologies, a company that specializes in custom software development for innovative companies and high-tech ventures.
