Cloudera, Yahoo and the Apache Hadoop Community Security Branch Release Update
In the wake of Yahoo! announcing that they would discontinue their Hadoop distribution and focus their efforts on Apache Hadoop (http://yhoo.it/i9Ww8W), the landscape has become tumultuous.
Yahoo! engineers have spent their time and effort contributing back to the Apache Hadoop security branch (a branch of 0.20) and have proposed release candidates.
Currently being voted on and discussed is “Release candidate 0.20.203.0-rc1”. If you are following the VOTE and the DISCUSSION, then maybe you are like me: it just cannot be done without a bowl of popcorn before opening the emails. It is getting heated, in a good and constructive kind of way. In the archive http://mail-archives.apache.org/mod_mbox/hadoop-general/201105.mbox/thread there are already more emails in the first five days of May than there were in all of April. Woot!
My take? Has it become Cloudera vs. Yahoo!, and will Apache Hadoop releases become fragmented because of it? Well, it is kind of like that already. 0.21 is the latest release, and can anyone who is not a committer quickly know or find out the difference between it and the other release branches? It is esoteric 😦 0.22, a release from trunk, is right around the corner too.
Let’s take HBase as an example (a Hadoop project). Do you know which HDFS releases can support HBase in production without losing data? If you do, then maybe you don’t realize that many people still don’t even know about the branch. Now that CDH3 is out you can use that (thanks Cloudera!); otherwise it is highly recommended not to run HBase in production unless you use the append branch http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.20-append/ of 0.20, which makes you miss out on other changes in trunk releases…
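For context, even on an append-capable HDFS of that era, sync/append support had to be switched on in configuration; a minimal sketch of the relevant property (assuming the branch-0.20-append / CDH3-era setup described above):

```xml
<!-- hdfs-site.xml: enable append/sync support so HBase's write-ahead log
     can be recovered after a RegionServer crash. This flag is specific to
     the branch-0.20-append era; later releases removed the need for it. -->
<property>
  <name>dfs.support.append</name>
  <value>true</value>
</property>
```

Without an HDFS build that actually honors this flag, HBase could silently lose the tail of its write-ahead log on failure, which is exactly the data-loss risk mentioned above.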
__ eyes crossing inwards and sideways with what branch does what and when the trunk release has everything __
Hadoop is becoming à la carte: which features and fixes can I live without for all of what I really need to deploy… or it requires companies to hire a committer… or a bunch of folks that do nothing but Hadoop day in and day out (sounds like Oracle, ahhhhhh)… or going with the Cloudera Distribution (which is what I do, and I don’t look back). The barrier to entry feels like it has increased over the last year. Stepping back from that, however, the system overall has had a lot of improvements! A lot of great work by a lot of dedicated folks putting in their time and effort towards making Hadoop (in whatever form the elephant stampedes through its data) a reality.
Big shops that have teams of “Hadoop Engineers” (Yahoo, Facebook, eBay, LinkedIn, etc.) with contributors and/or committers on those teams should not feel much impact, because ultimately they are able to roll their own releases for whatever they need/want in production and just support it themselves. Not all are so endowed.
Now, all of that having been said, I write this because the discussion is REALLY good and has a lot of folks (including those from Yahoo! and Cloudera) bringing up pain points and proposing some great solutions that hopefully will contribute to the continued growth and success of the Apache Hadoop Community http://hadoop.apache.org/. Still, if you want to run it in your company (and don’t have a committer on staff), then go download CDH3 http://www.cloudera.com — it will get you going with the latest and greatest of all the releases, branches, etc., etc., etc. Great documentation too!
/*
Joe Stein
http://www.linkedin.com/in/charmalloc
*/
FWIW, LinkedIn runs base Apache 0.20.2 with 3-4 patches. These patches are fixes to the capacity scheduler and non-Linux portability (Mac OS X and Solaris). The “huge team” that builds and supports our production code base is me, either writing new code or grabbing patches from JIRA with some occasional help from the authors of those patches. Now that Jakob is an employee, our next internal release might have two people that support it. Two is not exactly huge. 😀 (Most of the other work that LinkedIn does with Hadoop is on the periphery–Azkaban, Pig, etc.)
Anyway, it was important to me that LI run what Apache runs, for a variety of reasons, the big one being to keep our options open. In retrospect, this was a great decision given the incompatibilities that are now flooding into the ecosystem between these forks and trunk. It will be interesting to see how the various vendors force upgrades on users to deal with them.
(I’m working on dropping the phrase “Hadoop distribution” from my vocabulary. They are forks, no matter how much marketing wants to say otherwise.)
The other thing to keep in mind is that if one has been paying attention, the tension is not new. These battle lines were drawn a long, long time ago.