Designation – Hadoop Architect
Location – Hyderabad
About employer – Inside View
Job description:
Responsibilities
- Design, model, and implement new data architectures and analytics solutions
- Be a hands-on technical leader; spend 60–80% of your time on hands-on design and coding of data architecture
- Drive standards; define and implement data processing architecture and enforce best practices to scale
- Lead the team in building a large-scale, highly available, fault-tolerant data platform on the Hadoop ecosystem and NoSQL platforms
- Own the data architecture vision, validation and benchmarking, and technology evaluation, and define key metrics
- Work closely with cross-functional teams in an Agile environment
Qualification and Skills Required
- Bachelor’s or Master’s degree in Computer Science from a top engineering institution.
- Adept in computer science fundamentals and distributed processing; passionate about algorithms, problem solving, and data science.
- 7–10 years of software engineering experience at product companies.
- Experience with Big Data solutions such as Hadoop, MapReduce, Spark, Hive, HBase, NoSQL databases, Oozie, Flume, Mahout, ZooKeeper, and Elasticsearch.
- Excellent knowledge of Java, Python, SQL, and/or R.
- Strong understanding of architecture patterns; able to articulate a data platform design philosophy and evolve the architecture to meet business requirements.
- Ability to benchmark systems, analyze system bottlenecks, and propose robust solutions to eliminate them.
- Good to have: experience designing and implementing end-to-end ETL or ELT data pipelines using open-source platforms.
- Good to have: experience building predictive products using information retrieval, machine learning, and statistics.
Interested candidates can apply by mailing their CV to [email protected] with the subject line Data Visualization SME – Capgemini – Bangalore