Processing massive amounts of data with MapReduce using Apache Hadoop

Processing massive amounts of data yields valuable insights for business analysis. Many core algorithms run over such data and produce information that can be applied to business benefit and scientific research. Extracting and processing large amounts of data has become a primary concern in terms of time, processing power, and cost. The MapReduce programming model promises to address these concerns: it makes computation over large data sets considerably easier and more flexible, and it scales well across many computing nodes. This session will introduce the MapReduce model, cover a few of its variations, and walk through a hands-on MapReduce example using Apache Hadoop.
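For readers new to the model, the canonical illustration is the word-count job: the map phase emits a (word, 1) pair for every word in its input split, and the reduce phase sums the counts for each distinct word. The sketch below uses Hadoop's Java MapReduce API (org.apache.hadoop.mapreduce); it is a generic minimal example, not necessarily the one the session will present, and the class name and input/output paths are illustrative.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit (word, 1) for every word in the input split.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: sum the counts emitted for each distinct word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // optional local pre-aggregation
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Packaged into a jar, such a job would typically be launched with something like `hadoop jar wordcount.jar WordCount /input /output`, where the framework handles splitting the input, distributing map tasks across nodes, and shuffling intermediate pairs to the reducers.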

Speaker:

Allahbaksh Asadullah is a Product Technology Lead at Infosys Labs, Bangalore. He has over 5 years of experience in the software industry across various technologies. He has worked extensively on GWT, Eclipse plugin development, Lucene, Solr, NoSQL databases, etc. He speaks at developer events such as ACM Compute, IndicThreads, and Dev Camps.

Allahbaksh Asadullah will be presenting on “Map Reduce – Apache Hadoop” at the 2nd Annual IndicThreads.com Conference On Cloud Computing, to be held in Pune, India on 3-4 June 2011. Click here for a list of other Speakers & Sessions @ The Conference
