Designation: Data Crawling Engineer
Location – Pune / Kolkata
Experience – 4–6 years
About Employer – Confidential
Responsibilities
- Deep knowledge of relevant Internet protocols such as HTTP and DNS; familiarity with web authoring standards such as HTML and CSS, as well as de facto data encoding and markup representations such as JSON and XML.
- Extensive knowledge of and experience with crawling web pages and building systems to do so.
- Extensive knowledge of and experience with working with Web APIs, in particular, REST services and JSON-encoded responses.
- Experience building systems at Internet scale: terabytes or more of data, millions of API calls, and thousands of requests per second.
- Experience with data extraction, scraping, parsing, and processing of semi-structured data, and with ETL tools and techniques.
- Experience building robust, fault-tolerant, scalable, and cost-effective distributed systems.
- Experience with Amazon Web Services or other Cloud platforms.
Skillsets Required
- Relevant experience in a related role that provided the required skills and abilities.
- Demonstrable proficiency, knowledge, and experience in the following areas:
- Machine learning
- Text mining
- Java
- Python
- Natural language processing techniques
- Computational linguistics for artificial intelligence
- Lexical analysis design and implementation
Interested candidates can apply by sending their updated CV to [email protected] with the subject line "Data Crawling – Pune/Kolkata" and the following details:
- Total Experience
- Current CTC
- Expected CTC
- Notice Period