We are seeking experienced Hadoop developers to join our growing Data Engineering practice, delivering transformative data solutions on the Hadoop platform. Freelancers, contractors, and other immediate joiners are welcome.

Experience: 5+ years of relevant experience
Location: Chennai / Bangalore / Hyderabad / Noida
Mode: Hybrid

JOB DESCRIPTION: Develop and deliver code for the assigned work in accordance with time, quality, and cost standards.

Job responsibilities:
- Interact with business stakeholders and designers to understand business requirements.
- Perform impact assessments on a large data store to ensure existing data pipelines are not broken and to uncover insights.
- Translate complex functional and technical requirements into detailed designs.
- Design, build, install, configure, and support solutions in a Hadoop Distributed File System (HDFS) environment; project development and implementation experience required.
- Ingest complex data sets into the Hadoop environment through various techniques (Spark, Hive, or Sqoop).
- Transform data using Spark with Scala (see the sketch after this list).
- Manage and deploy Hive objects.
- Perform unit and system testing to ensure code quality; Teradata knowledge is a plus.
- Must have working experience with IntelliJ IDEA and Autosys job scheduling, and work seamlessly with WinSCP, PuTTY, and Unix.
- Must have working knowledge of GitHub and CI/CD pipelines such as TeamCity or Jenkins for productionizing code.
- Maintain security and data privacy.
- Create scalable, high-performance web services for data tracking with high-speed querying capability.
- Test prototypes and oversee handover to operational teams.
- Propose best practices and standards.
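As context for the "ingest into Hadoop" and "transform data using Spark with Scala" responsibilities above, here is a minimal, illustrative sketch of a typical ingest-and-transform job that publishes a Hive table. It is not part of this role's actual codebase; the landing path, database, table, and column names (customers landing directory, analytics.customers_cleaned, customer_id) are hypothetical placeholders.

```scala
// Minimal Spark-with-Scala sketch: ingest a raw CSV file landed on HDFS,
// apply a simple transformation, and publish the result as a Hive table.
// All paths and names below are illustrative assumptions.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object CustomerIngestJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("customer-ingest")
      .enableHiveSupport()          // enables reading/writing Hive-managed tables
      .getOrCreate()

    // Ingest: read a raw delimited file already landed on HDFS
    val raw = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("hdfs:///data/landing/customers/")   // hypothetical landing path

    // Transform: basic cleansing and derived columns with Spark SQL functions
    val cleaned = raw
      .filter(col("customer_id").isNotNull)
      .withColumn("load_date", current_date())
      .dropDuplicates("customer_id")

    // Publish: write a partitioned Hive table for downstream consumers
    cleaned.write
      .mode("overwrite")
      .partitionBy("load_date")
      .saveAsTable("analytics.customers_cleaned")  // hypothetical database.table

    spark.stop()
  }
}
```

In practice a job like this would be built in IntelliJ IDEA, versioned in GitHub, promoted through a TeamCity or Jenkins pipeline, and scheduled via Autosys, matching the tooling listed in the responsibilities.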
Employment Category:
Employment Type: Full time
Industry: IT Services & Consulting
Role Category: Not Specified
Functional Area: Not Specified
Role/Responsibilities: Hadoop AWS Data Engineers