Hadoop takes a blow

Google has dealt a blow to Hadoop and its ecosystem today by announcing it no longer uses Map/Reduce. It has moved to analytics systems that support both batch and stream processing, something Hadoop falls down on.

Big Data enthusiasts were always quick to point out that Hadoop is just one implementation of the paradigm and might not be the lasting leader. From experience I can say that Hadoop did bring clustered computing to the masses, albeit in a slow and complex fashion.

The fact that the almighty Google has already turned its back does not bode well for Cloudera and co. I'd imagine they're all scrambling to create Map/Reduce counter-arguments and are brimming with graphs and stats, but the paradigm's main users have just put it to bed.

Hadoop was always far more complicated than it needed to be. When the recommended installation route is a pre-built VM, it's a worrying sign. The various ZooKeepers and tools built to try and make it manageable also required serious time to learn. Other Big Data tools have emphasized simplicity and speed. MongoDB's map reduce function is a doddle and written in JavaScript. Apache Storm is also easy to get running and processing. Hadoop is an elephantine headache of configuration and properties.
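To show why that simplicity claim rings true, here's a minimal sketch of the Map/Reduce paradigm in plain JavaScript, in the spirit of the map and reduce functions MongoDB accepts (the driver and document shape are my own illustration, not MongoDB's actual API):

```javascript
// map: emit (key, value) pairs from each input document
function map(doc, emit) {
  for (const word of doc.text.split(/\s+/)) {
    emit(word, 1);
  }
}

// reduce: combine all values emitted for a single key
function reduce(key, values) {
  return values.reduce((a, b) => a + b, 0);
}

// a tiny driver that shuffles emitted pairs by key, then reduces each group
function mapReduce(docs) {
  const groups = new Map();
  const emit = (key, value) => {
    if (!groups.has(key)) groups.set(key, []);
    groups.get(key).push(value);
  };
  docs.forEach(doc => map(doc, emit));
  const out = {};
  for (const [key, values] of groups) out[key] = reduce(key, values);
  return out;
}

const result = mapReduce([
  { text: "big data big ideas" },
  { text: "big elephants" },
]);
console.log(result); // { big: 3, data: 1, ideas: 1, elephants: 1 }
```

The whole paradigm fits in a few dozen lines of a scripting language, which is exactly the contrast with Hadoop's wall of XML configuration.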

Cloudera et al. are sure to improve Hadoop as the years go by, but I can't help but feel they've backed the wrong horse (or elephant?).