Providing adequate software and tools for researchers has always been important to organizations, but it has often come at a steep cost. In an era of constantly evolving technology and rapidly dwindling budgets, my IT team has had to work with a large pool of researchers to provide cost-effective solutions that meet an ever-growing demand for innovation and computing power.
I am an Information Technologist for the Department of Statistics and Probability at Michigan State University. The Department is home to award-winning faculty with a wide variety of expertise in fundamental and interdisciplinary research, and to over 100 graduate students from all over the world. Keeping the faculty and students ahead in their research is a constantly evolving challenge for my team and me.
Evolution of Statistical Software
Lesson #1: The Shortcomings of Open Source
As more people began to use R and the analyses became increasingly complex, researchers ran into a serious problem: time. Processing jobs were taking several months to complete. Calculations often need to be run several times to ensure accuracy, and waiting three months for a single run to finish was simply not feasible. R took this long to process the jobs because the iterations were computed serially, one right after another, using only one processor core at a time.
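To illustrate the bottleneck, here is a minimal sketch contrasting a serial loop with a multicore alternative using R's built-in parallel package. The simulation function and iteration count are hypothetical stand-ins for the department's actual workloads, not code from our environment.

library(parallel)

# Hypothetical stand-in for one expensive simulation iteration
run_iteration <- function(i) {
  x <- matrix(rnorm(1e6), nrow = 1000)
  mean(solve(crossprod(x) + diag(1000)))  # some heavy linear algebra
}

iterations <- 1:100

# Serial: each iteration waits for the previous one, on a single core
serial_results <- lapply(iterations, run_iteration)

# Parallel: iterations are spread across the available cores
# (mclapply relies on forking, so on Windows use parLapply with a cluster instead)
parallel_results <- mclapply(iterations, run_iteration, mc.cores = detectCores())

With a hundred independent iterations like these, the wall-clock time scales down roughly with the number of cores, which is exactly the gap a serial-only workflow leaves on the table.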
Bo Cowgill from Google once said, "The best thing about R is that it was developed by statisticians. The worst thing about R ... is that it was developed by statisticians." Even though R was, and still is, constantly evolving, the department needed a solution that could keep pace with hardware technology and compute calculations in an efficient, scalable manner.
Lesson #2: Find Commercial Enhancements for Open Source
Our search for a more effective version of R ultimately led us to Revolution R Enterprise from Revolution Analytics, which provides commercial support and software built on open source R. It takes advantage of multi-core hardware by using optimized assembly code and efficient multi-threaded algorithms that use all of a machine's processor cores simultaneously. Although this addressed many of the issues with open source R, professors were only using Revolution R on their desktops. The next question was how we could combine the power of our servers to dramatically decrease computation times.
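Much of the single-machine gain comes from the multithreaded math libraries the product links against: ordinary matrix code is unchanged, but the heavy linear algebra runs on every core. The sketch below is purely illustrative; the matrix sizes are arbitrary, and the setMKLthreads() call assumes the RevoUtilsMath helper package that, to the best of my knowledge, ships with Revolution R builds.

# Plain R code; under a multithreaded BLAS/LAPACK (as in Revolution R)
# these operations use all cores with no code changes.
n <- 2000
a <- matrix(rnorm(n * n), n, n)
b <- matrix(rnorm(n * n), n, n)

system.time(a %*% b)              # dense matrix multiply
system.time(chol(crossprod(a)))   # Cholesky factorization of a positive-definite matrix

# Assumed Revolution R helper for controlling the thread count:
if (requireNamespace("RevoUtilsMath", quietly = TRUE)) {
  RevoUtilsMath::setMKLthreads(4)  # e.g., cap the math library at 4 threads
}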
Lesson #3: Expanding to Infinity and Beyond
Open source R is a memory-bound language: all of the data, matrices, lists, and other objects must be stored in memory. Issues quickly arose when data sets grew to several gigabytes and became too large to fit into memory. Handling them required parallel external memory algorithms and data structures. Revolution Analytics tackled these challenges as it extended R for a High Performance Computing (HPC) environment.
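Conceptually, an external memory algorithm streams the data in chunks and accumulates only small sufficient statistics, so the full data set never has to fit in RAM. The sketch below fits a least-squares regression that way in base R; the file name and column layout are hypothetical placeholders, and Revolution's own implementations layer parallelism and an optimized on-disk format on top of this basic idea.

# Fit y ~ x1 + x2 by accumulating X'X and X'y one chunk at a time.
# "bigdata.csv" and its columns (y, x1, x2) are hypothetical placeholders.
fit_out_of_core <- function(file, chunk_rows = 1e5) {
  con <- file(file, open = "r")
  on.exit(close(con))
  header <- strsplit(readLines(con, n = 1), ",")[[1]]
  xtx <- matrix(0, 3, 3)
  xty <- numeric(3)
  repeat {
    chunk <- tryCatch(
      read.csv(con, header = FALSE, col.names = header, nrows = chunk_rows),
      error = function(e) NULL)        # no lines left to read
    if (is.null(chunk) || nrow(chunk) == 0) break
    X <- cbind(1, chunk$x1, chunk$x2)  # design matrix for this chunk only
    xtx <- xtx + crossprod(X)
    xty <- xty + drop(crossprod(X, chunk$y))
    if (nrow(chunk) < chunk_rows) break
  }
  solve(xtx, xty)  # regression coefficients, computed without loading all rows
}

coefficients <- fit_out_of_core("bigdata.csv")

Because each chunk contributes only a 3-by-3 matrix and a length-3 vector, memory use stays constant no matter how many rows the file holds, and the chunks could just as easily be processed in parallel across cores or nodes.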
Once the department could schedule R jobs in an HPC environment, demand increased dramatically. The HPC cluster now schedules more than four times as many jobs as in previous semesters, growing from roughly 200 jobs a year ago to over 800 this past semester. Jobs that took over three months to complete with open source R finished in a few days with Revolution R, and computational jobs are now run multiple times, with significantly higher accuracy than ever before.
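For readers unfamiliar with how R jobs typically land on a cluster, one common pattern is a small non-interactive R script that picks up an array-task index from the scheduler's environment and runs a single replicate. The sketch below assumes a SLURM-style environment variable and placeholder file paths; it is not taken from the department's actual setup.

# replicate.R -- run one simulation replicate per scheduler array task.
# Invoked non-interactively by the scheduler, e.g.:  Rscript replicate.R
# SLURM_ARRAY_TASK_ID is SLURM-specific; other schedulers expose a similar variable.
task_id <- as.integer(Sys.getenv("SLURM_ARRAY_TASK_ID", unset = "1"))

set.seed(task_id)  # reproducible, distinct random stream per task
dir.create("results", showWarnings = FALSE)

result <- replicate(1000, mean(rnorm(1e4)))  # placeholder for the real analysis

saveRDS(result, sprintf("results/replicate_%03d.rds", task_id))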
Open source projects often produce great software, but that software generally lacks key features needed in an enterprise environment. With commercial backing and expertise, these projects can be developed further and expanded to meet large-scale needs. IT departments can then provide their users with enhanced solutions that adapt to the expanding world of cloud and High Performance Computing, all while minimizing the impact on a shrinking budget.