
New programming language promises a 4x speed boost on Big Data

Image: IDGNS

15 September 2016

Memory management can be challenging enough on traditional data sets, but when Big Data enters the picture, things can slow way, way down. A new programming language announced by MIT this week aims to remedy that problem, and so far it has been found to deliver fourfold speed boosts on common algorithms.

The principle of locality is what governs memory management in most computer chips today, meaning that if a program needs a chunk of data stored at some memory location, it’s generally assumed to need the neighboring chunks as well. In Big Data, however, that’s not always the case. Instead, programs often must act on just a few data items scattered across huge data sets.
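
To make that pattern concrete, the short C sketch below (an illustration written for this article, not code from the MIT project) updates one counter per entry in a large list of indices. Because the indices point all over the counts array, consecutive iterations rarely touch the same cache line, so most accesses end up going to main memory.

    #include <stdlib.h>

    /* Illustrative only: a loop whose access pattern defeats locality.
       The entries of `indices` are scattered across a large `counts`
       array, so consecutive iterations touch unrelated cache lines. */
    void scattered_update(const size_t *indices, size_t n_indices,
                          long *counts)
    {
        for (size_t i = 0; i < n_indices; i++)
            counts[indices[i]] += 1;   /* near-random access: likely cache miss */
    }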

Fetching data from main memory is the major performance bottleneck in today’s chips, so having to fetch it more frequently can slow execution considerably.

“It’s as if, every time you want a spoonful of cereal, you open the fridge, open the milk carton, pour a spoonful of milk, close the carton, and put it back in the fridge,” explained Vladimir Kiriansky, a doctoral student in electrical engineering and computer science at MIT.

With that challenge in mind, Kiriansky and other researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have created Milk, a new language that lets application developers manage memory more efficiently in programs that deal with scattered data points in large data sets.

Essentially, Milk adds a few commands to OpenMP, an API for languages such as C and Fortran that makes it easier to write code for multicore processors. Using it, the programmer inserts a few additional lines of code around any instruction that iterates through a large data collection looking for a comparatively small number of items. Milk’s compiler then figures out how to manage memory accordingly.
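
The article does not spell out Milk's actual syntax, so the sketch below sticks to plain OpenMP in C and only marks, in a comment, the kind of indirect access a Milk programmer would reportedly annotate. Any Milk-specific clause names are deliberately omitted because they are not given in the article.

    #include <omp.h>
    #include <stddef.h>

    /* Plain OpenMP version of a scattered-update loop. According to the
       researchers' description, a Milk program would wrap an access like
       counts[indices[i]] in a few extra lines so the compiler can defer
       and batch it; the exact annotations are not shown here because the
       article does not specify them. */
    void scattered_update_omp(const size_t *indices, size_t n_indices,
                              long *counts)
    {
        #pragma omp parallel for
        for (size_t i = 0; i < n_indices; i++) {
            /* This indirect update is the sort of instruction the
               programmer would mark for Milk to manage. */
            #pragma omp atomic
            counts[indices[i]] += 1;
        }
    }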

With a program written in Milk, when a core discovers that it needs a piece of data, it doesn’t request it – and the attendant adjacent data – from main memory. Instead, it adds the data item’s address to a list of locally stored addresses. When the list gets long enough, all the chip’s cores pool their lists, group together those addresses that are near each other, and redistribute them to the cores. That way, each core requests only data items that it knows it needs and that can be retrieved efficiently.
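
A rough single-core analogue of that batching step is sketched below. It is a deliberate simplification, with no multicore pooling or redistribution and an ordinary sort standing in for Milk's grouping of nearby addresses, but it shows why clustering deferred accesses helps: once the targets are visited in memory order, each cache line is fetched once rather than repeatedly at random.

    #include <stdlib.h>

    static int cmp_size_t(const void *a, const void *b)
    {
        size_t x = *(const size_t *)a, y = *(const size_t *)b;
        return (x > y) - (x < y);
    }

    /* Simplified illustration of the defer-and-cluster idea: record the
       target indices first, sort them so nearby addresses become adjacent,
       then apply the updates in memory order. */
    void batched_update(const size_t *indices, size_t n_indices,
                        long *counts)
    {
        size_t *batch = malloc(n_indices * sizeof *batch);
        if (batch == NULL)
            return;
        for (size_t i = 0; i < n_indices; i++)
            batch[i] = indices[i];                /* defer: note the address */
        qsort(batch, n_indices, sizeof *batch, cmp_size_t);
        for (size_t i = 0; i < n_indices; i++)
            counts[batch[i]] += 1;                /* apply in address order */
        free(batch);
    }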

In tests on several common algorithms, programs written in the new language were four times as fast as those written in existing languages, MIT says. That figure could improve as the researchers continue to refine the technology.

IDG News Service

