Big data and Hadoop facts

Big data, as the name suggests, means very large amounts of data. Experts say big data fits one or more of the four Vs of big data, namely volume, velocity, veracity, and variety. We are living in the age of big data, and the facts below illustrate this to some extent.

Over 90% of all the data in the world was created in the past two years. It is expected that by the year 2020 the amount of digital information in existence will have grown from 3.2 zettabytes to 40 zettabytes. The total amount of data captured and stored by industry doubles roughly every 1.2 years, and we now create as much information in two days as we did from the beginning of time until 2003.

All of these trends gave rise to the need for a system that can store big data and analyze it at a fast rate. This is how Hadoop came into existence, although many other systems and frameworks were, and still are, used for handling big data.

Big data itself has been around for a long time; in fact, high volumes of data can be handled with massively parallel processing (MPP) databases, such as those offered by Greenplum, Aster Data, and Vertica, and those vendors are now incorporating Hadoop into their platforms.

At the heart of Hadoop is HDFS, the Hadoop Distributed File System, which provides clustered, distributed storage and can run on commodity servers. HDFS is fast, secure, and fault tolerant.
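As a minimal sketch, the snippet below uses the standard Hadoop FileSystem Java API to copy a local file into HDFS; the directory and file names are made up for illustration.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsCopy {
    public static void main(String[] args) throws Exception {
        // Picks up fs.defaultFS from core-site.xml on the classpath.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Create a directory in HDFS and copy a local file into it.
        fs.mkdirs(new Path("/data/raw"));                      // hypothetical HDFS path
        fs.copyFromLocalFile(new Path("/tmp/events.log"),      // hypothetical local file
                             new Path("/data/raw/events.log"));
        fs.close();
    }
}
```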

MapReduce is the core of Hadoop: it pushes computation out to the data nodes so that each one processes its share of the data locally, which makes it fast and very powerful.
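To make "processing the data locally" concrete, here is the classic word-count job: each mapper runs on the node that holds its block of input, and the reducers aggregate the per-word counts.

```java
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    // Runs locally on each data node, emitting (word, 1) pairs.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Sums the counts for each word across all mappers.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) sum += val.get();
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);  // combine locally before the shuffle
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```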

Hadoop is not actually an analytics platform; it can be used with a traditional analytics platform, and a common approach is to write MapReduce jobs in the R programming language to analyze the data.

Hadoop can also be used for archiving and for ETL, which stands for extract, transform, and load, and likewise for filtering. The platform provides many opportunities for extracting, transforming, and processing data.

Scaling data is a major concern in the data world, and in the Hadoop ecosystem Apache Accumulo addresses it. Accumulo is inspired by Google's Bigtable design and is built on top of Hadoop. It adds a few improvements over Bigtable, for example cell-based access control and server-side programming via iterators, and it allows key-value pairs to be modified at various points in the data-management process.
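As a minimal sketch of cell-based access control, the following writes one Accumulo entry with a column visibility label; the instance name, ZooKeeper address, credentials, and table are placeholders, and the API shown is the classic 1.x Java client.

```java
import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.BatchWriterConfig;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.core.data.Mutation;
import org.apache.accumulo.core.security.ColumnVisibility;

public class VisibilityWrite {
    public static void main(String[] args) throws Exception {
        // Placeholder instance name, ZooKeeper quorum, and credentials.
        Connector conn = new ZooKeeperInstance("accumulo", "zk1:2181")
                .getConnector("writer", new PasswordToken("secret"));

        BatchWriter writer = conn.createBatchWriter("records", new BatchWriterConfig());
        Mutation m = new Mutation("row1");
        // Only scanners holding the "admin" or "analyst" authorization see this cell.
        m.put("cf", "salary", new ColumnVisibility("admin|analyst"), "75000");
        writer.addMutation(m);
        writer.close();
    }
}
```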

Components of Hadoop

Hive: Apache Hive is a data warehouse application that provides a high-level language for expressing data analysis programs. It offers a SQL-like environment (HiveQL).
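For instance, a Java client can run HiveQL through the standard Hive JDBC driver; the connection URL, user, and the weblogs table below are illustrative.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveQuery {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        // HiveServer2 on its default port; database, user, and table are examples.
        Connection con = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/default", "hive", "");
        Statement stmt = con.createStatement();
        // HiveQL looks like SQL but compiles down to Hadoop jobs.
        ResultSet rs = stmt.executeQuery(
                "SELECT page, COUNT(*) AS hits FROM weblogs GROUP BY page");
        while (rs.next()) {
            System.out.println(rs.getString("page") + "\t" + rs.getLong("hits"));
        }
        con.close();
    }
}
```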

Pig: Apache Pig provides a high-level language for expressing analyses of large datasets. Its textual language is called Pig Latin.
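A small Pig Latin script can also be driven from Java via the PigServer class; the input path and field layout here are assumptions.

```java
import org.apache.pig.ExecType;
import org.apache.pig.PigServer;

public class PigExample {
    public static void main(String[] args) throws Exception {
        // MAPREDUCE mode runs the script on the Hadoop cluster.
        PigServer pig = new PigServer(ExecType.MAPREDUCE);
        // Pig Latin statements: load, filter, and store a dataset.
        pig.registerQuery("logs = LOAD '/data/weblogs' AS (page:chararray, ms:int);");
        pig.registerQuery("slow = FILTER logs BY ms > 1000;");
        pig.store("slow", "/data/slow_pages");
    }
}
```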


Apache HBase: Growing Popularity in Industry

Apache HBase is an open-source NoSQL database built on top of the Hadoop Distributed File System (HDFS). It is a column-oriented database that provides storage and quick access to large quantities of data. It is modeled after Google's Bigtable and handles huge volumes of tabular data. It also allows users to perform insert, update, and delete operations.

HBase, which began as a sub-project of the Apache Hadoop project, is now used to provide real-time read and write access to big data.
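A quick sketch of those read and write operations with the HBase Java client, matching the 0.98 release line listed below; it assumes a table named "users" with a column family "info" already exists.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class UsersCrud {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "users");  // assumed existing table

        // Insert (an update is simply a Put to the same row and column).
        Put put = new Put(Bytes.toBytes("user42"));
        put.add(Bytes.toBytes("info"), Bytes.toBytes("email"), Bytes.toBytes("a@b.com"));
        table.put(put);

        // Read the value back.
        Result result = table.get(new Get(Bytes.toBytes("user42")));
        System.out.println(Bytes.toString(
                result.getValue(Bytes.toBytes("info"), Bytes.toBytes("email"))));

        // Delete the row.
        table.delete(new Delete(Bytes.toBytes("user42")));
        table.close();
    }
}
```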

Important Features of Apache HBase:

  • Data: can deal with any type of data, whether structured, semi-structured, or unstructured.
  • Tables: tables are sparsely populated, and empty cells consume no storage.
  • Scalability: horizontal scalability; capacity grows simply by adding servers.
  • SQL access: data can be queried interactively through SQL layers such as Apache Phoenix or the Hive integration.
  • Schemas: flexible schemas that let users add columns on the fly (see the sketch after this list).
  • High availability: multiple master nodes ensure continuous access to data.
  • Full consistency: guards against node failures and simultaneous writes to the same record.
  • Automatic sharding: tables are transparently and efficiently split into regions and spread across the machines in the cluster.
  • Security: table- and column-family-level access can be secured via Kerberos.
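As referenced in the Schemas bullet above, a column family can be added to an existing table at runtime. This sketch uses the 0.98-era HBaseAdmin API with a hypothetical "users" table; disabling the table first is the conservative path for that release.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class AddFamily {
    public static void main(String[] args) throws Exception {
        HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());
        // Take the table offline, add a new column family, bring it back online.
        admin.disableTable("users");
        admin.addColumn("users", new HColumnDescriptor("preferences"));
        admin.enableTable("users");
        admin.close();
    }
}
```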

Stable release – 0.98.4 (21 July 2014)

Website – hbase.apache.org

Written in – Java

License – Apache License 2.0

Working with HBase:

HBase stores and queries data using Log-Structured Merge trees (LSM trees). It supports compression, in-memory caching, Bloom filters, and very fast scans. HBase tables can also serve as both the input and the output of MapReduce jobs, as the sketch below shows.
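A hedged example of an HBase table as MapReduce input: this job scans a hypothetical "events" table and counts its rows with a Hadoop counter, writing no output files.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class HBaseRowCount {
    // Each map() call receives one HBase row: its key and its cells.
    static class CountMapper extends TableMapper<NullWritable, NullWritable> {
        @Override
        protected void map(ImmutableBytesWritable rowKey, Result columns, Context context)
                throws IOException, InterruptedException {
            context.getCounter("stats", "rows").increment(1);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(HBaseConfiguration.create(), "hbase-row-count");
        job.setJarByClass(HBaseRowCount.class);

        Scan scan = new Scan();
        scan.setCaching(500);        // batch rows per RPC for scan throughput
        scan.setCacheBlocks(false);  // MapReduce scans should not churn the block cache

        // Wire the "events" table (hypothetical) in as the job's input.
        TableMapReduceUtil.initTableMapperJob(
                "events", scan, CountMapper.class,
                NullWritable.class, NullWritable.class, job);
        job.setNumReduceTasks(0);
        job.setOutputFormatClass(NullOutputFormat.class);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```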

Top users of Apache HBase:

  • Adobe currently runs about 30 nodes of HDFS, Hadoop, and HBase in clusters ranging from 5 to 14 nodes, across both production and development.
  • Facebook uses HBase to power its Messages infrastructure.
  • Twitter runs HBase across its entire Hadoop cluster.
  • Yahoo uses HBase to store document fingerprints for detecting near-duplicates, on a cluster of a few nodes running HDFS and MapReduce.
  • StumbleUpon uses HBase as a real-time data storage and analytics platform.
  • Filmweb has just started a small cluster of 3 HBase nodes, mainly to handle its web-cache persistency layer.
  • OpenLogic stores all of the world's open-source packages, files, and lines of code in HBase, for both analytical and near-real-time access.

And many more – http://wiki.apache.org/hadoop/Hbase/PoweredBy

When not to use HBase?

  • When you’re dealing with only a few thousand rows.
  • When your cluster has fewer than 5 DataNodes.
  • When you need cross-record transactions or joins.

Column families in HBase:

  • A table schema defines only its column families (see the sketch after this list).
  • Columns in Apache HBase are grouped into column families.
  • All column members of a column family share the same prefix.
  • Physically, all column-family members are stored together on the filesystem.
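To see that a schema names only its column families, here is a sketch that creates a table with two families; the table and family names are illustrative, and the API matches the 0.98 client.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CreateUsersTable {
    public static void main(String[] args) throws Exception {
        HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());
        // The schema names column families only; columns appear when data is written.
        HTableDescriptor users = new HTableDescriptor(TableName.valueOf("users"));
        users.addFamily(new HColumnDescriptor("info"));      // e.g. info:email, info:name
        users.addFamily(new HColumnDescriptor("activity"));  // e.g. activity:lastLogin
        admin.createTable(users);
        admin.close();
    }
}
```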

Three major components of HBase:

  • HBase master (HMaster): stores the cluster metadata, such as how each table is split into regions.
  • HRegionServer: the regions of a split table are stored and served by region servers.
  • HBase client: connects to the master server and the region servers.

Data model in HBase:

  • HBase is a key-value store.
  • Values are stored in a multi-dimensional format: each value is addressed by row key, column family, column qualifier, and timestamp.
  • This multi-dimensional column model is what allows cells to be versioned.
[Diagram: how the data model in HBase works]
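The sketch below makes the multi-dimensional key concrete by reading up to three timestamped versions of each cell in one row; the table and row names are placeholders, and the API matches the 0.98 client.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class VersionedRead {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "users");  // placeholder table

        Get get = new Get(Bytes.toBytes("user42"));
        get.setMaxVersions(3);  // fetch up to three timestamped versions per cell
        Result result = table.get(get);

        // Each cell is addressed by (row, family, qualifier, timestamp) -> value.
        for (Cell cell : result.rawCells()) {
            System.out.printf("%s/%s:%s@%d = %s%n",
                    Bytes.toString(CellUtil.cloneRow(cell)),
                    Bytes.toString(CellUtil.cloneFamily(cell)),
                    Bytes.toString(CellUtil.cloneQualifier(cell)),
                    cell.getTimestamp(),
                    Bytes.toString(CellUtil.cloneValue(cell)));
        }
        table.close();
    }
}
```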

Conclusion: The goal of this blog is to introduce you to Apache HBase, its uses, and its structure. In my upcoming blog we'll move ahead with the implementation of an HBase table in Hadoop.

Click here to know more about Big Data Hadoop Training Course