Hadoop Developer - Gurgaon (2-4 Years of Experience)

3 Lakhs to 10 Lakhs | 2.0 years | Gurugram

Flume, Hadoop, Hive, MapReduce, Pig, R, Spark, Sqoop


Job Description and Responsibilities

Position Summary:

We are looking for candidates with hands-on experience in Big Data technologies, to be based out of our Gurgaon office.

Key Responsibilities:

  • Build the Big Data infrastructure to store and process terabytes of data
  • Understand the business need (what kind of data, how much data, the types of algorithms to be run, the load on the system, the budget, etc.) and recommend optimal solutions
  • Build and implement the solution; this requires hands-on work to produce quick prototypes, proofs of concept, and data-processing benchmarks
  • Work with the operations team to build the systems, processes, and team required to run and maintain the systems securely, reliably, and at scale
  • Work with the analytics team to understand what data landscaping would be required

Qualifications and Skills:

  • Must have 2-4 years of experience with Big Data technologies such as Hadoop and the related ecosystem
  • Practical experience with, and an in-depth understanding of, MapReduce
  • Hands-on experience with Spark/Hive/Pig/Flume/Sqoop
  • Strong programming background, with expertise in Java
  • Familiarity with the data infrastructure tools landscape, e.g. cloud service providers, virtualization software, system monitoring tools, and development environments
  • Ability to program and to guide junior team members on technical aspects
  • Ability to craft documents that explain complex ideas in simple terms in order to build consensus or educate
  • Knowledge of R or another statistical programming language is a plus
  • Degree: graduate or postgraduate in CSE or a related field

College Preference

Does Not Matter

Minimum Qualification

Undergraduate

Compensation

3 Lakhs to 10 Lakhs

Skills hiring for

Flume, Hadoop, Hive, MapReduce, Pig, R, Spark, Sqoop

Locations hiring at

Gurugram
