How to get started with Big Data Analysis

I've been a long-time user of R and have recently started working with Python. Having used conventional RDBMS systems for data warehousing and R/Python for number-crunching, I now feel the need to get my hands dirty with Big Data analysis.

I'd like to know how to get started with Big Data crunching:

  • How to start simple with Map/Reduce and the use of Hadoop
  • How can I leverage my skills in R and Python to get started with Big Data analysis? Using the Python Disco project, for example.
  • Using the RHIPE package and finding toy datasets and problem areas.
  • Finding the right information to help me decide whether I need to move from an RDBMS to a NoSQL database

All in all, I'd like to know how to start small and gradually build up my skills and know-how in Big Data Analysis.

Thank you for your suggestions and recommendations. I apologize for the generic nature of this query, but I'm looking to gain more perspective regarding this topic.

  • Harsh

Answers


Using the Python Disco project for example.

Good. Play with that.
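Before (or alongside) installing Disco, it can help to see the Map/Reduce pattern itself in plain Python. The sketch below is not Disco's API; it's a minimal, single-process word count showing the three phases (map, shuffle/group, reduce) that frameworks like Disco and Hadoop distribute across machines:

```python
from collections import defaultdict
from itertools import chain

def mapper(line):
    # Map phase: emit (key, value) pairs -- here, (word, 1) for each word.
    for word in line.split():
        yield (word.lower(), 1)

def reducer(key, values):
    # Reduce phase: combine all values that share a key.
    return (key, sum(values))

def map_reduce(lines):
    # Shuffle phase: group the intermediate pairs by key before reducing.
    groups = defaultdict(list)
    for key, value in chain.from_iterable(mapper(line) for line in lines):
        groups[key].append(value)
    return dict(reducer(k, vs) for k, vs in groups.items())

print(map_reduce(["big data is big", "data is data"]))
# {'big': 2, 'data': 3, 'is': 2}
```

Once this pattern is clear, a framework's job is easy to see: it runs the mapper on many machines, does the shuffle over the network, and runs the reducer in parallel per key.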

Using the RHIPE package and finding toy datasets and problem areas.

Fine. Play with that, too.

Don't sweat finding "big" datasets. Even small datasets present very interesting problems. Indeed, any dataset is a starting point.

I once built a small star-schema to analyze the $60M budget of an organization. The source data was in spreadsheets, and essentially incomprehensible. So I unloaded it into a star schema and wrote several analytical programs in Python to create simplified reports of the relevant numbers.
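A star schema like that one is just a fact table of numbers keyed to small dimension tables. As a minimal sketch (the department names, years, and amounts here are made up for illustration, not from the actual project), using SQLite so it runs anywhere:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension table: one row per department.
cur.execute("CREATE TABLE dim_department (dept_id INTEGER PRIMARY KEY, name TEXT)")
# Fact table: one row per budget line item, keyed to the dimension.
cur.execute("CREATE TABLE fact_budget (dept_id INTEGER, fiscal_year INTEGER, amount REAL)")

cur.executemany("INSERT INTO dim_department VALUES (?, ?)",
                [(1, "Engineering"), (2, "Marketing")])
cur.executemany("INSERT INTO fact_budget VALUES (?, ?, ?)",
                [(1, 2011, 40_000_000), (2, 2011, 15_000_000), (1, 2012, 5_000_000)])

# A simplified report: total budget by department.
cur.execute("""
    SELECT d.name, SUM(f.amount)
    FROM fact_budget f JOIN dim_department d USING (dept_id)
    GROUP BY d.name ORDER BY d.name
""")
for name, total in cur.fetchall():
    print(name, total)
```

The point is that the analytical programs become trivial once the incomprehensible source data is reshaped into facts and dimensions.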

Finding the right information to allow me to decide if I need to move to NoSQL from RDBMS type databases

This is easy.

First, get a book on data warehousing (Ralph Kimball's The Data Warehouse Toolkit, for example).

Second, study the "Star Schema" carefully -- particularly all the variants and special cases that Kimball explains in depth.

Third, realize the following: SQL is for Updates and Transactions.

When doing "analytical" processing (big or small) there's almost no updating of any kind. SQL (and the normalization that goes with it) doesn't really matter much anymore.

Kimball's point (and others, too) is that most of your data warehouse is not in SQL, it's in simple Flat Files. A data mart (for ad-hoc, slice-and-dice analysis) may be in a relational database to permit easy, flexible processing with SQL.
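That workflow -- official flat files, loaded into a database only when you want SQL's slice-and-dice -- takes a few lines of Python. The CSV below is a made-up stand-in for a real extract file:

```python
import csv
import io
import sqlite3

# The "official" flat file -- an in-memory CSV standing in for a real extract.
flat_file = io.StringIO(
    "region,quarter,sales\n"
    "North,Q1,120\n"
    "North,Q2,90\n"
    "South,Q1,200\n"
)

# Load the flat file into a throwaway database only when SQL is wanted.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, quarter TEXT, sales INTEGER)")
rows = [(r["region"], r["quarter"], int(r["sales"])) for r in csv.DictReader(flat_file)]
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)

# Slice and dice with SQL once the flat file is loaded.
for region, total in conn.execute(
        "SELECT region, SUM(sales) FROM sales GROUP BY region ORDER BY region"):
    print(region, total)
# North 210
# South 200
```

The flat files stay the system of record; the database is disposable and rebuilt from them as needed.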

So the "decision" is trivial. If it's transactional ("OLTP") it must be in a Relational or OO DB. If it's analytical ("OLAP") it doesn't require SQL except for slice-and-dice analytics; and even then the DB is loaded from the official files as needed.

