I was at pgconf.eu and gave a lightning talk about map/reduce with PostgreSQL. Up front, I was asked "Why do you want to do that anyway?", and my initial response was something like "Because I can." :-)
But that got me thinking about the real use case behind the idea. What is the heart of map/reduce? Citing from the original paper:
"MapReduce is a programming model and an associated implementation for processing and generating large data sets." Note the word "sets"?
"Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key.", or, expressed in a more formal way
map (k1,v1) -> list(k2,v2)
reduce (k2,list(v2)) -> list(v2)
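Instantiated for word counting, the canonical example I'll get to below, that reads roughly:
map ("document-1", "to be or not to be") -> [("to",1), ("be",1), ("or",1), ("not",1), ("to",1), ("be",1)]
reduce ("to", [1,1]) -> [2]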
Please note that this formal definition is a bit more precise than the initial one. The paper explains:
"I.e., the input keys and values are drawn from a different domain than the output keys and values. Furthermore, the intermediate keys and values are from the same domain as the output keys and values."
This is important, because the authors are introducing a domain transformation of the input data here. That is, in my opinion, already the heart of map/reduce.
Going back to the initial definition, this is basically what all RDBMS already do when processing parallel queries, be it by built-in ability or bolted on like with PL/Proxy + PostgreSQL: in the first step the input set is broken down into partitions, then the query runs on those partitions in parallel and produces intermediate result sets, and finally those intermediate result sets are aggregated into the final result set. But the formal definition above adds a little twist: the domain transformation.
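Just to picture the split/aggregate part before getting to that twist, here is a minimal sketch in plain SQL, assuming a hypothetical table split into two partitions t_p1 and t_p2 with a key column k; the inner queries are the per-partition intermediate result sets, the outer GROUP BY is the final aggregation:
-- Hypothetical partitions t_p1 and t_p2 of one logical table t(k, ...)
SELECT k, sum(cnt)
FROM (
    SELECT k, count(1) AS cnt FROM t_p1 GROUP BY k   -- intermediate result set, partition 1
    UNION ALL
    SELECT k, count(1) AS cnt FROM t_p2 GROUP BY k   -- intermediate result set, partition 2
) AS partial
GROUP BY k;                                           -- final aggregation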
To clarify the domain transformation, I'll use the canonical example: counting words in a text. The map function converts semi-structured data, a text with lines of arbitrary length, into a well-structured set of key (a word) and value (its count) tuples. This is the difference, and the key to the power of map/reduce: the ability to handle semi-structured data, which the relational model usually does not handle very well. (And I won't say unstructured data. Truly unstructured data is statistical noise.)
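To make that transformation concrete, here is a minimal sketch in PostgreSQL, using a literal string instead of a real table: regexp_split_to_table breaks a text line into one row per word, and a plain GROUP BY produces the (word, count) tuples.
-- Sketch: one semi-structured text line becomes a set of (word, count) tuples.
SELECT word, count(1)
FROM regexp_split_to_table('the quick brown fox jumps over the lazy dog', E'\\W+') AS word
GROUP BY word;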
But modern RDBMS, especially PostgreSQL, often already have functions to transform semi-structured data into relations and/or allow for user-defined functions to extend their capabilities, and that allows for running a map/reduce-type job inside an RDBMS. Still, why would somebody want to do this?
1.) Integration
An awful lot of data is stored in relational models and will stay there. At the same time, especially for analytical workloads, which are becoming more and more important, the need to integrate relational and semi-structured data grows. Why handle them in different systems when one will do?
This decision of course heavily depends on the real-world requirements. But rest assured that the datacenter guys who have to run the show would rather operate one database than 2..n.
2.) Sets
Remember the word "sets" from the initial definition? Now compare it with the definition of a "relation" in an RDBMS:
"R is a relation on these n domains if it is a set of elements of the form (d1, d2, ..., dn) where dj ∈ Dj for each j=1,2,...,n." (E. F. Codd (Oct 1972). "Further normalization of the database relational model". "Data Base Systems". Courant Institute: Prentice-Hall. ISBN 013196741X.)
If a relation is a set of tuples with values from some domain D, and map/reduce does domain transformations on key/value pairs (a.k.a. tuples), what does that call for? Right, a very efficient set processor. Since relational DBMS are very efficient set processors by nature, they allow for writing compact map/reduce functions that are also less error-prone due to the declarative nature of SQL.
To clarify what I mean, take a look at the following map and reduce functions for word count, written for MongoDB in JavaScript, taken from here:
var map = function() {
    var summary = this.summary;
    if (summary) {
        // quick lowercase to normalize per your requirements
        summary = summary.toLowerCase().split(" ");
        for (var i = summary.length - 1; i >= 0; i--) {
            // might want to remove punctuation, etc. here
            if (summary[i]) { // make sure there's something
                emit(summary[i], 1); // store a 1 for each word
            }
        }
    }
};

var reduce = function(key, values) {
    var count = 0;
    values.forEach(function(v) {
        count += v;
    });
    return count;
};

db.so.mapReduce(map, reduce, {out: "word_count"})
PostgreSQL:
For the code, see my previous post.
Well, while it seems to require more code than MongoDB, there is a subtle difference: most of the PostgreSQL code is standard boilerplate to write a set-returning function and to make PL/Proxy work. Once you get that right, you usually never have to look back. The actual work is done in two lines of SQL:
SELECT TRIM(both from word),count(1) FROM (SELECT regexp_split_to_table(line, E'\\W+') as word FROM kjb) w GROUP BY word
and
SELECT word,sum(count) FROM map_kjb() AS (word text, count bigint) WHERE word != '' GROUP BY word
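For completeness, here is a rough sketch of what that boilerplate looks like. This is not the exact code from the previous post, and the cluster name 'wordcount' is made up for illustration:
-- On every partition node: the map step as an ordinary set-returning function
-- wrapping the first SQL line above (simplified sketch).
CREATE FUNCTION map_kjb()
RETURNS SETOF record AS
$$
    SELECT TRIM(both from word), count(1)
    FROM (SELECT regexp_split_to_table(line, E'\\W+') AS word FROM kjb) w
    GROUP BY word;
$$ LANGUAGE sql;

-- On the proxy node: a PL/Proxy function of the same name that fans the call
-- out to all partitions; its combined output feeds the second SQL line above.
CREATE FUNCTION map_kjb()
RETURNS SETOF record AS
$$
    CLUSTER 'wordcount';
    RUN ON ALL;
$$ LANGUAGE plproxy;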
IMHO, the ability to express this with extensively tried and tested functions, instead of having to implement them yourself, combined with strong typing, is worth so much that one should give it a try before looking somewhere else. Granted, raw performance may become so paramount that an alternative technology is called for, but if you already use PostgreSQL, this is another reason to like it and not to leave it.
And this is it, the long answer I didn't have ready at the conference.
P.S. It also allows you to move the computation instead of the data around.