Job Summary:
Experience: 5 - 8 years
Location: Bangalore
Contribute to building state-of-the-art data platforms on AWS, leveraging Python and Spark. Join a dynamic team building data solutions in a supportive, hybrid work environment. This role is ideal for an experienced data engineer looking to step into a leadership position while remaining hands-on with cutting-edge technologies. You will design, implement, and optimize ETL workflows using Python and Spark, contributing to our robust data Lakehouse architecture on AWS. Success in this role requires technical expertise, strong problem-solving skills, and the ability to collaborate effectively within an agile team.
Must Have Tech Skills:
Demonstrable experience as a senior data engineer.
Expert in Python and Spark, with a deep focus on ETL data processing and data engineering practices.
Experience implementing data pipelines using AWS services such as EMR, AWS Glue, AWS Lambda, AWS Step Functions, API Gateway, and Athena.
Experience with data services in a Lakehouse architecture.
Nice To Have Tech Skills:
A master's degree or relevant certifications (e.g., AWS Certified Solutions Architect, Certified Data Analytics) are advantageous.
Key Skills:
Deep technical knowledge of data engineering solutions and practices. Experience implementing data pipelines using AWS data services and Lakehouse capabilities.
Keyskills: Python, data services, data management, API Gateway, software testing, metadata management, data processing, EMR, data engineering, AWS Lambda, master data management, AWS Glue, data quality concepts, data modeling, Spark, quality assurance, data governance, Athena, ETL, AWS