About Employer – Confidential
The "Big Data" team will be responsible for building the backend data-processing and analytics infrastructure that forms the backbone of the Data Management Platform. As a Big Data Engineer, you will be working with other members of the team (including the CTO and the VP of Applications and Data).
- Build data-processing pipelines that process terabytes of data to generate valuable insights for our 60+ clients across the world.
- Build predictive and machine-learning models for attribution, segmentation, and online-ad optimization.
Qualifications and Skills Required
- B.S. or higher in Computer Science, Statistics, Mathematics, or related field
- Practical, hands-on experience with modern Agile development methodologies (XP, Scrum, TDD)
- 4 years' programming experience with R/Python
- 2 years' programming experience with RPig/RHive
- Professional Hadoop ecosystem experience, including analyzing large data sets, running queries, and generating insights from the data. Ability to define and implement solutions in big-data tools such as RHive and RPig
- 3 years' experience with machine-learning techniques
- Experience with NoSQL technologies. We use Cassandra and Redis.
- AWS experience a HUGE plus. Our entire tech stack is built on top of Amazon, and experience with DynamoDB and/or Redshift would be extremely handy.
- Experience with Mahout.
Personal Attributes and Behavior:
- Takes initiative; self-motivated and self-directed
- Performs well in ambiguous environments
- Team player with good interpersonal skills
- Ability to interact with Senior Management
Interested candidates can apply for this job by sending their updated CV to [email protected] with the subject "Data Engineer – R/PIG – Gurgaon" and the following details:
- Total Experience
- Current CTC
- Expected CTC
- Notice Period