
Running Hadoop MapReduce With Cassandra NoSQL

If you are looking for a good NoSQL read on HBase vs. Cassandra, check out http://ria101.wordpress.com/2010/02/24/hbase-vs-cassandra-why-we-moved/.  In short, HBase is optimized for reads and Cassandra for writes.  Cassandra does a great job on reads too, so please do not think I am shooting either one down in any way.  I am just saying that both HBase and Cassandra have great value and a useful purpose in their own right, and use cases exist to run both.  HBase was recently promoted to a top-level Apache project, coming up and out of Hadoop.

Having worked with Cassandra a bit, I often see/hear folks asking about running MapReduce jobs against the data stored in Cassandra instances.  Well, Hadoopers & Hadooperettes, the Cassandra folks provide a way to do this very nicely in the 0.6 release.  It is VERY straightforward and well thought through.  If you want to see the evolution, check out the JIRA issue https://issues.apache.org/jira/browse/CASSANDRA-342

So how do you do it?  Very simple: Cassandra provides an implementation of Hadoop's InputFormat.  In case you are new to Hadoop, the InputFormat is (basically) what loads your data into the mapper.  Cassandra's subclass connects your mapper to pull the data in from Cassandra.  What is also great here is that the Cassandra folks have spent the time implementing the integration in the classic “Word Count” example.
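To make the "key plus columns in, word counts out" picture concrete, here is a standalone sketch of what the mapper logic does with one Cassandra row. The real 0.6 API hands map() a byte[] row key and a SortedMap of IColumn objects; plain Strings are used here so the sketch runs without the Cassandra and Hadoop jars, and the class and method names are mine, not from the contrib code.

```java
import java.util.Map;
import java.util.SortedMap;
import java.util.StringTokenizer;
import java.util.TreeMap;

public class CassandraRowWordCount {

    // Mimics the word_count mapper: pull one named column out of the row's
    // column map, tokenize its value, and count the words.
    static Map<String, Integer> map(String rowKey,
                                    SortedMap<String, String> columns,
                                    String columnName) {
        Map<String, Integer> counts = new TreeMap<>();
        String value = columns.get(columnName);
        if (value == null) {
            return counts; // this row fragment did not include the column
        }
        StringTokenizer tok = new StringTokenizer(value);
        while (tok.hasMoreTokens()) {
            counts.merge(tok.nextToken(), 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        // One "row": a key plus a sorted map of column name -> column value.
        SortedMap<String, String> row = new TreeMap<>();
        row.put("text", "hadoop loves cassandra loves hadoop");
        System.out.println(map("row1", row, "text")); // {cassandra=1, hadoop=2, loves=2}
    }
}
```

In the real job, the mapper would emit each (word, 1) pair to the Hadoop context instead of returning a map, and a reducer would do the summing.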

See https://svn.apache.org/repos/asf/cassandra/trunk/contrib/word_count/ for this example.  Cassandra rows or row fragments (that is, pairs of a key plus a SortedMap of columns) are the input to your job’s Map tasks, as specified by a SlicePredicate that describes which columns to fetch from each row. Here’s how this looks in the word_count example, which selects just one configurable columnName from each row:

ConfigHelper.setColumnFamily(job.getConfiguration(), KEYSPACE, COLUMN_FAMILY);
SlicePredicate predicate = new SlicePredicate().setColumn_names(Arrays.asList(columnName.getBytes()));
ConfigHelper.setSlicePredicate(job.getConfiguration(), predicate);
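For context, those ConfigHelper calls sit inside ordinary Hadoop job setup. Here is a rough sketch of the surrounding wiring; ColumnFamilyInputFormat and ConfigHelper come from Cassandra's hadoop package (not from Hadoop itself), and the mapper/reducer class names here are placeholders, not the exact contrib code.

```java
// Illustrative job setup, modeled loosely on the 0.6 word_count contrib example.
Job job = new Job(getConf(), "wordcount");
job.setJarByClass(WordCount.class);               // placeholder driver class
job.setMapperClass(TokenizerMapper.class);        // your mapper over Cassandra rows
job.setReducerClass(IntSumReducer.class);         // standard word-count style reducer

// This is the Cassandra-specific part: swap the usual file-based input
// for Cassandra's InputFormat, then tell it which data to scan.
job.setInputFormatClass(ColumnFamilyInputFormat.class);
ConfigHelper.setColumnFamily(job.getConfiguration(), KEYSPACE, COLUMN_FAMILY);
SlicePredicate predicate = new SlicePredicate()
        .setColumn_names(Arrays.asList(columnName.getBytes()));
ConfigHelper.setSlicePredicate(job.getConfiguration(), predicate);
```

The nice part is how little changes from a vanilla Hadoop job: the InputFormat swap plus the ConfigHelper calls are essentially the whole integration.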

Cassandra also provides a Pig LoadFunc for running jobs in the Pig DSL instead of writing Java code by hand. This lives in https://svn.apache.org/repos/asf/cassandra/trunk/contrib/pig/.


Joe Stein

  1. Grant
    April 26, 2010 at 1:42 pm

    Correct me if I’m wrong, but it seems a limitation of the Cassandra InputFormat is that it only works with one column family at a time. I believe HBase can use multiple column families in M/R jobs.

    • June 9, 2010 at 12:58 pm

      I’m writing a Cassandra-Hadoop integration currently and that’s correct: you can only scan one column family at a time. Since I have more than one, I’m having to spawn one job per column family and then join the outputs in a second job. Technically it’s one slice predicate per job, so if you have a vast number of columns (more than the mapper can process at one time without blowing the heap) you may need multiple jobs per column family. I’m currently avoiding that issue by using a large heap and a large slice predicate, but there’s definitely a ceiling with that approach.

