“IT systems today are being used to manage, monitor, and analyze cloud-scale infrastructures. This involves large-scale collection and analysis of data for hundreds of performance measures (such as CPU utilization, memory utilization, and job queue size) across hundreds of thousands of servers and applications in a cloud-scale data center, at fairly short sampling intervals, often on the order of seconds. This scale yields millions of concurrent time series observations and an extremely large volume of data (terabytes).
This data is used in the enterprise for real-time monitoring, predictive analytics, capacity planning, application/virtual machine placement, root cause analysis of events, and more. The sheer volume and rate of the time series data stream make it quite challenging to store this massive amount of data and to support prompt analytics using traditional approaches such as data warehousing.
With the advent and rising popularity of distributed technologies such as Hadoop, HBase, and Hive, large-scale analytics on big data is gaining traction in the enterprise as well. These technologies are used by large social web sites such as Facebook to perform analytics on extremely large data sets. Hadoop is the underlying platform that provides the HDFS distributed file system and the framework for executing MapReduce programs. HBase is a distributed NoSQL column store built on HDFS, and Hive provides a layer on top of Hadoop/HBase that supports querying large-scale data in a developer-friendly, SQL-like language.
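To illustrate the MapReduce programming model mentioned above, the following is a minimal, single-process Python sketch of a map and reduce phase that aggregates per-server averages from raw metric observations. The (server, metric, value) record format is a hypothetical example chosen for illustration, not a schema from the session; a real Hadoop job would run these phases in parallel across a cluster.

```python
from collections import defaultdict

# Hypothetical raw observations: (server, metric, value).
# In a real deployment these would be millions of rows read from HDFS/HBase.
records = [
    ("host-01", "cpu", 40.0),
    ("host-01", "cpu", 60.0),
    ("host-02", "cpu", 10.0),
]

def map_phase(records):
    """Map step: emit one (key, value) pair per observation."""
    for server, metric, value in records:
        yield (server, metric), value

def reduce_phase(pairs):
    """Reduce step: group values by key and compute the average per key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {key: sum(vals) / len(vals) for key, vals in groups.items()}

averages = reduce_phase(map_phase(records))
print(averages[("host-01", "cpu")])  # → 50.0
```

In Hive, the same aggregation would typically be expressed declaratively as a single `SELECT ... GROUP BY` query, with the MapReduce execution handled by the framework.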
In this session we introduce these technologies and explore how these non-traditional approaches can solve the problems of big data storage and analytics in the enterprise.”