Tuesday, September 30, 2014

Hadoop (HDFS) Command line basics

Once you have Hadoop installed, you can open a terminal (aka the command line). There is a program called hadoop, and we pass it different switches and arguments to make it do what we want. Most of the hadoop fs (file shell) commands behave like the corresponding UNIX commands. Below are some of the commands you may find useful.

As a general rule, all hadoop filesystem commands start with hadoop fs.

Referencing HDFS Paths
When accessing the HDFS filesystem you need to use the hostname and port associated with the name node. In the examples below, the host is bivm and the name node is listening on port 9000. You can also make a relative reference by leaving off the hdfs://bivm:9000/ prefix, in which case the path is resolved against your HDFS home directory (/user/biadmin here).

For example, to copy a file from the local file system to the HDFS file system we could specify the full path with:

hadoop fs -put file:///home/biadmin/test.txt hdfs://bivm:9000/user/biadmin/test.txt

However, we can also do

hadoop fs -put file:///home/biadmin/test.txt test.txt

This convention applies to all hadoop fs commands.
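For instance, assuming test.txt has already been uploaded as above, these two commands read the same file; the second resolves the relative path against the user's HDFS home directory (/user/biadmin here):

hadoop fs -cat hdfs://bivm:9000/user/biadmin/test.txt
hadoop fs -cat test.txt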

help - To get help and see all the commands for hadoop fs
hadoop fs -help

help - Get help on a fs command
hadoop fs -help du

ls - Show the files in the current user's directory

hadoop fs -ls
or
hadoop fs -ls /user/biadmin (assuming the user name is biadmin)


ls - Show the contents of the /user directory (the home directories of all users)

hadoop fs -ls /user

cp - Copy a file; the source and destination can be on different filesystems, so this also copies from the local file system to HDFS
hadoop fs -cp file:///home/biadmin/test.txt hdfs://bivm:9000/user/biadmin/test.txt

put or copyFromLocal - Copy files from the local filesystem to HDFS (the opposite of copyToLocal)
hadoop fs -put file:///home/biadmin/test.txt hdfs://bivm:9000/user/biadmin/test.txt
hadoop fs -copyFromLocal file:///home/biadmin/test.txt hdfs://bivm:9000/user/biadmin/test.txt


get or copyToLocal - Copy files from HDFS to the local filesystem (the opposite of copyFromLocal)
hadoop fs -copyToLocal hdfs://bivm:9000/user/biadmin/test.txt file:///home/biadmin/test.txt
hadoop fs -get hdfs://bivm:9000/user/biadmin/test.txt file:///home/biadmin/test.txt


tail - View the last part (the last kilobyte) of a file
hadoop fs -tail hdfs://bivm:9000/user/biadmin/test.txt

cat - View the entire contents of a file
hadoop fs -cat hdfs://bivm:9000/user/biadmin/test.txt

rm - remove a file
hadoop fs -rm hdfs://bivm:9000/user/biadmin/test.txt

du - find the size of a file
hadoop fs -du hdfs://bivm:9000/user/biadmin/test.txt

du - get the size of all files in a directory
hadoop fs -du hdfs://bivm:9000/user/biadmin

du - get the total size of all files in a directory
hadoop fs -du -s hdfs://bivm:9000/user/biadmin

mkdir - make a new directory
hadoop fs -mkdir hdfs://bivm:9000/user/biadmin/test

Other Unix-like HDFS Commands

chgrp
chmod
chown
mkdir
mv
rm
stat
setrep - Sets the replication factor of a file, or of an entire directory tree (example below)
getmerge - Gets all the files in the directories that match the source pattern and merges them (in sorted order) into a single file on the local filesystem (example below)
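As a quick sketch of those last two (the replication factor of 3, the logs directory, and merged.txt are just example values, not files created earlier in this post):

hadoop fs -setrep -w 3 hdfs://bivm:9000/user/biadmin/test.txt
hadoop fs -getmerge hdfs://bivm:9000/user/biadmin/logs /home/biadmin/merged.txt

The -w flag tells setrep to wait until the new replication factor has been applied before returning.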

Piping
You can pipe the output of these commands to UNIX commands. For example, we can grep the result of the ls command.

hadoop fs -ls | grep test

The result would be something like:
-rw-r--r--   1 biadmin biadmin          5 2014-09-24 00:48 test.txt

Saturday, September 20, 2014

Big Data Concepts

Why Big Data

Big Data as a concept has been around ever since we have had data. The core question is how to store and process data on hardware that is smaller than the data itself. Big Data, as the name implies, is needed when dealing with very large amounts of data. Big is relative, but generally we are dealing with terabytes, petabytes, exabytes, etc. However, there is no real threshold for using Big Data. Consider that a person's DNA sequence is only about 800MB, but it contains roughly 3 billion base pairs and has lots of patterns in it. Processing it with conventional databases is slow, so it is still a good candidate for Big Data because of the complexity of the data and the processing power needed to analyze it. Big Data is great for unstructured data, but can be used with structured data as well.

In short, the amount of data being generated is growing exponentially, and most of that data is unstructured or semi-structured. To process it we generally need more power and storage than a single database or server can handle.

The 3 V's of Big Data


  • Velocity - how fast is data being produced
  • Volume - how much data is being produced
  • Variety - how different is the data

What is Hadoop

Apache, the creators of Hadoop, say:
"The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high-availability, the library itself is designed to detect and handle failures at the application layer, so delivering a highly-available service on top of a cluster of computers, each of which may be prone to failures."

I would add some of my own generalizations:

  • You can kind of think of it as software RAID in that it spreads data across different disks. Instead of a hardware controller, there is a dedicated name node that coordinates between computers (instead of between disks on one machine, as with RAID). The big advantage is that we are no longer bound by how much we can fit into one server, the processing power of one server, or the IO of a single hard disk, because requests are handled in parallel.
  • It is more than a place to store files, though one part of it is HDFS, the distributed file system. It also includes an ever-growing collection of tools to process the data.
  • It is a self-healing technology: if one computer or a rack of computers goes down, it detects the failure and uses the copies stored on other computers. Assuming there is space available somewhere, it re-replicates the data to reduce the risk if another server goes down.
  • The great thing is that when we run out of space (and with big data you will, by the nature of the domain), we can add more computers to the cluster and tell Hadoop to re-balance, and it will move data around to make use of the new space. (A sketch of the relevant commands follows this list.)
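As a rough sketch of what that looks like from the command line (on newer distributions these are usually run as hdfs fsck and hdfs balancer; the path is the same example VM as above):

hadoop fsck /user/biadmin/test.txt -files -blocks -locations
hadoop balancer

fsck reports the blocks that make up the file, where their replicas live, and whether any are under-replicated; the balancer moves blocks around so the data is spread evenly after new nodes are added.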


The core of many Big Data systems


  • Open source project by The Apache Software Foundation
  • Written in Java
  • Great Performance
  • Reliability provided by replication of data between computers


Optimized to handle


  • Massive amounts of data through parallelism
  • A variety of data (unstructured, semi-structured, and structured)
  • Inexpensive commodity hardware

Projects Associated with Hadoop


  • Eclipse is a popular IDE donated by IBM to the open source community. 
  • Lucene is a text search engine library written in Java. 
  • Hbase is the Hadoop database. 
  • Hive provides data warehousing tools to extract, transform and load data, and then query this data stored in Hadoop files. 
  • Pig is a high level language that generates MapReduce code to analyze large data sets. 
  • Jaql is a query language for JavaScript Object Notation (JSON). 
  • ZooKeeper is a centralized configuration service and naming registry for large distributed systems. 
  • Avro is a data serialization system. 
  • UIMA is the architecture for the development, discovery, composition and deployment for the analysis of unstructured data. 

What it is NOT good for


  • Not designed for OLTP or OLAP; it is not a replacement for an RDBMS
  • Not designed for random access, as an RDBMS is
  • Not good for processing lots of little files, though vendors are working to make this work better
  • Not good for low latency data access
  • Not good for work that must be sequential or cannot be parallelized
  • Not good for complex calculations on small amounts of data

Typical Sources for Big Data



  • RFID Readers
  • Shopping / Transactions
  • Mobile Devices
  • Internet users
  • Twitter
  • Sensor data