By David Smith
April 10, 2014 11:30 AM EDT
by Joseph Rickert
The seven lightning talks presented to the Bay Area useR Group on Tuesday night were not only really interesting (in some cases downright entertaining) in their own right, but they also illustrated the diversity of R applications and the extent to which R has become embedded in the corporate world. Two presentations with a whimsical touch were Gaston Sanchez’s talk on Arc Diagrams with R and Ram Narasimhan’s presentation on comparing the weather of various cities. Gaston showed a statistical text analysis of the movie scripts from three Star Wars episodes using arc-diagram representations. Gaston did some original work here in creating the arc-diagram plots and showed how to use R’s tm and igraph packages to extract text and compute adjacency matrices. The Star Wars analysis code and the arc-diagram code are both available.
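The general tm-plus-igraph pipeline Gaston described might be sketched as follows; this is a minimal illustration of the approach (the input text and variable names are invented), not his actual code, which is linked above.

```r
# Sketch: build a term co-occurrence adjacency matrix with tm,
# then turn it into a graph with igraph.
library(tm)
library(igraph)

script_lines <- c("may the force be with you",
                  "the force is strong with this one")

corpus <- Corpus(VectorSource(script_lines))
tdm <- as.matrix(TermDocumentMatrix(corpus))

# Terms appearing in the same line become edges;
# edge weight = number of co-occurrences
adj <- tdm %*% t(tdm)
diag(adj) <- 0

g <- graph.adjacency(adj, mode = "undirected", weighted = TRUE)
plot(g)  # Gaston's arc-diagram layout would replace this default plot
```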
Ram’s talk was based on his weatherData package (V0.3 on CRAN and V0.4 on GitHub), which has become a very useful and popular tool for scraping weather data from airports and weather stations around the world. The following plot shows how various cities rank according to his wife’s personal comfort score.
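A basic pull from the package looks something like this; the call below follows the CRAN (V0.3) documentation as I understand it, requires a network connection, and the column names in the comment are typical rather than guaranteed.

```r
# Sketch: fetch a week of daily weather summaries for
# San Francisco International (airport code "SFO")
library(weatherData)

sfo <- getWeatherForDate("SFO",
                         start_date = "2014-04-01",
                         end_date   = "2014-04-07")
head(sfo)  # typically Date plus Max/Mean/Min temperature columns
```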
Also have a look at Ram’s Shiny app next time you are wondering whether you should visit San Francisco or Honolulu.
Presentations from Sara Brumbaugh on Running R from Excel, Winston Chen on Data Analysis with RStudio and MongoDB, and Cliff Click and Nidhi Mehta on Using H2O with R all made cases for integrating R with other corporate tools. Sara showed how to combine R scripts and Excel VBA code to pass inputs and parameters from a worksheet to a batch process, and back again. She showed several practical examples as well as quite a few virtuoso Excel tricks, like storing an R script in a hidden Excel worksheet.
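The worksheet-to-batch round trip might look like this on the R side: VBA exports the inputs to a CSV, shells out to Rscript, and reads the results back into the worksheet. This is only a sketch of the pattern, with made-up file names and columns, not Sara's actual code.

```r
#!/usr/bin/env Rscript
# batch_model.R -- invoked from Excel VBA with something like:
#   Shell "Rscript batch_model.R inputs.csv results.csv"
args <- commandArgs(trailingOnly = TRUE)
infile  <- args[1]   # CSV written by VBA from the worksheet
outfile <- args[2]   # CSV that VBA reads back into Excel

inputs  <- read.csv(infile)
# Illustrative computation: fit a simple model and add fitted values
results <- transform(inputs, fitted = predict(lm(y ~ x, data = inputs)))
write.csv(results, outfile, row.names = FALSE)
```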
Winston’s talk emphasized how R’s visualization capabilities alone are enough to earn it a place in a big-league machine learning shop. The platform stack at Winston’s company, Fliptop, is built around Java/Scala, MongoDB/MySQL and Python. But with all of that power they still didn’t have a good way to do data visualization and exploratory data analysis. Winston showed some examples, with code, of how they use RStudio to pull data from MongoDB into an R data frame where they can plot it.
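One way to do that pull from R is with the rmongodb package; the sketch below is not Winston's code (the database, collection, and field names are invented) and assumes a mongod instance is running locally.

```r
# Sketch: pull MongoDB documents into an R data frame for plotting
library(rmongodb)

mongo <- mongo.create(host = "localhost")
if (mongo.is.connected(mongo)) {
  # Fetch all documents from a hypothetical "analytics.events" collection
  docs <- mongo.find.all(mongo, "analytics.events")

  # Flatten the list of documents into a data frame
  df <- do.call(rbind, lapply(docs, as.data.frame))

  plot(df$timestamp, df$value)  # ordinary R graphics from here on
  mongo.destroy(mongo)
}
```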
Cliff, 0xdata’s CTO, gave a succinct overview of how the H2O JVM can free R from its memory and speed limitations and make it possible to run machine learning algorithms from the R environment on huge data sets. According to Cliff, if you built a 16-node cluster of machines, each with 64GB of RAM and all running H2O, you would have a terabyte cluster for H2O’s in-memory analytics and could run logistic regression, GBM, neural nets, random forests and other machine learning algorithms through the R-to-H2O interface. Cliff emphasized that H2O implements a "group-by" feature very similar to plyr’s ddply function, making it possible to do R-style analyses on big data. Nidhi followed up by running several of the examples that can be found on the 0xdata website. Nidhi showed real grace under pressure, and made the speed of the H2O algorithms seem all the more impressive by running live demos one after the other while the clock on the 12-minute presentation time limit was running out.
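The basic shape of the R-to-H2O workflow is sketched below; argument names have varied between h2o package releases, and the file path and column names here are purely illustrative.

```r
# Sketch: logistic regression on an H2O cluster, driven from R.
# The data lives in the H2O JVM, not in R's memory.
library(h2o)
h2o.init()  # starts or connects to a local H2O instance

# Import a (hypothetical) large file directly into the cluster
hex <- h2o.importFile("hdfs://cluster/big_data.csv")

fit <- h2o.glm(x = c("x1", "x2"), y = "clicked",
               training_frame = hex, family = "binomial")
summary(fit)
```

The point of the design is that `hex` is only a handle: the terabyte-scale frame and the model fitting stay inside the JVM cluster, while R issues commands and receives summaries.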
Finally, two presentations, the first by Raman Kapur on Managing Enterprise Cyber Risk through Big Data & Analytics, and the second by Giovanni Seni on Intuit’s new Rego package, showed how R applications can form the foundation of a production system. After providing some background information on the prevalence of information security breaches, Raman talked about how Foundation’s Edge has built Avana, an R-based system to model the risk profile of a corporation’s business units.
Giovanni gave a brief introduction to the rule-based ensemble methods developed by Friedman and Popescu and worked through an example using the Rego package, which is newly available on GitHub. Giovanni, who has considerable experience with ensemble methods (have a look at the book he wrote with John Elder), said that he favors rule-based methods because of their interpretability. He stressed that in addition to building predictive models, data scientists are often seeking insight into how complex systems work. Rule-based ensemble models are useful for both purposes, often outperforming tree-based classifiers for prediction. A notable feature of the Rego package is that it has a command-line, batch interface. Here we have an R package that is meant to do the heavy lifting in a production system.
key link: BARUG presentations