Salary: 1000k to 1500k | Experience: 2.0 years | Location: Bengaluru
Relevant Experience (in years):
1. Minimum 2 years of strong experience with Core Java, the Hadoop ecosystem, and any NoSQL database.
2. Minimum 1.5 to 2 years of strong experience with Spark/Storm/Cassandra/Kafka/Scala.
Technical/Functional Skills:
1. Core Java, multithreading, OOP concepts, writing parsers
2. Cloud computing (AWS/Azure, etc.)
Roles & Responsibilities:
1. Strong in Core Java, multithreading, OOP concepts, and writing parsers in Core Java.
2. Should have strong knowledge of the Hadoop ecosystem, including Hive/Pig/MapReduce.
3. Strong in SQL, NoSQL, RDBMS, and data warehousing concepts.
4. Writing complex MapReduce programs.
5. Should have strong experience building data pipelines with Spark, Storm, Cassandra, or Scala.
6. Designing efficient and robust ETL workflows.
7. Gathering and processing raw data at scale (including writing scripts, web scraping, calling APIs, and writing SQL queries).
8. Tuning Hadoop solutions to improve performance and the end-user experience.
9. Processing unstructured data into a form suitable for analysis, and then performing the analysis.
10. Creating Big Data reference architecture deliverables.
11. Performance optimization in a Big Data environment.
Generic Leadership Skills:
1. Should have prior customer-facing experience.
2. Ability to lead all requirement-gathering sessions with the customer.
3. Strong coordination and interpersonal skills for handling complex projects.