Job Description
Job Summary:
Experience: 4-8 years
Location: Bangalore
The Data Engineer will help build state-of-the-art data Lakehouse platforms on AWS using Python and Spark. As part of a dynamic team in a supportive, hybrid work environment, you will design, implement, and optimize data workflows that contribute to our robust Lakehouse architecture. Success in this role requires prior experience building data products with AWS services, proficiency in Python and Spark, strong problem-solving skills, and the ability to collaborate effectively within an agile team.
Must Have Tech Skills:
- Demonstrable previous experience as a data engineer.
- Technical knowledge of data engineering solutions and practices, including implementation of data pipelines using tools such as EMR, AWS Glue, AWS Lambda, AWS Step Functions, API Gateway, and Athena.
- Proficient in Python and Spark, with a focus on ETL data processing and data engineering practices.
Nice To Have Tech Skills:
- Familiar with data services in a Lakehouse architecture.
- Familiar with technical design practices that enable scalable, reliable data products meeting both technical and business requirements.
- A master's degree or relevant certifications (e.g., AWS Certified Solutions Architect, AWS Certified Data Analytics) is advantageous.
Key Accountabilities:
- Writes high-quality code, ensuring solutions meet business requirements and technical standards.
- Works with architects, Product Owners, and development leads to decompose solutions into Epics and stories, assisting in the design and planning of these components.
- Creates clear, comprehensive technical documentation that supports knowledge sharing and compliance.
- Actively contributes to technical discussions, supporting a culture of continuous learning and innovation.
Key Skills:
- Proficient in Python and familiar with a variety of development technologies.
- Previous experience implementing data pipelines, including the use of ETL tools to streamline data ingestion, transformation, and loading.
- Solid understanding of AWS services and cloud solutions, particularly as they pertain to data engineering practices. Familiar with AWS solutions including IAM, Step Functions, Glue, Lambda, RDS, SQS, API Gateway, Athena.
- Proficient in quality assurance practices, including code reviews, automated testing, and best practices for data validation.
- Experienced in Agile development, including sprint planning, reviews, and retrospectives.
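As an illustration of the extract-transform-load pattern the skills above refer to, here is a minimal, hypothetical sketch in plain Python using only the standard library. The field names, sample data, and validation rule are invented for illustration; in this role a real pipeline would typically run as Spark jobs on EMR or AWS Glue rather than in-process Python.

```python
# Hypothetical ETL sketch: extract raw CSV, validate/transform rows, and
# "load" by aggregating. Illustrative only; not the team's actual pipeline.
import csv
import io

RAW_CSV = """trade_id,asset_class,notional
T1,Equity,1000000
T2,Fixed Income,250000
T3,Equity,
"""

def extract(raw: str) -> list[dict]:
    """Read raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[dict]:
    """Data validation step: drop rows missing a notional, cast it to int."""
    return [
        {**row, "notional": int(row["notional"])}
        for row in rows
        if row["notional"]
    ]

def load(rows: list[dict]) -> dict[str, int]:
    """Aggregate notional by asset class (stand-in for writing to a
    Lakehouse table, e.g. via Athena or Glue in a real deployment)."""
    totals: dict[str, int] = {}
    for row in rows:
        totals[row["asset_class"]] = totals.get(row["asset_class"], 0) + row["notional"]
    return totals

totals = load(transform(extract(RAW_CSV)))
```

The same three-stage shape maps directly onto the AWS services listed above: extraction from S3, transformation in Glue or Spark on EMR, and loading into queryable Lakehouse tables.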
Educational Background:
- Bachelor's degree in Computer Science, Software Engineering, or a related field is essential.
Bonus Skills:
- Financial Services expertise preferred, including experience with Equity and Fixed Income asset classes and a working knowledge of Indices.
- Familiar with implementing and optimizing CI/CD pipelines. Understands the processes that enable rapid, reliable releases, minimizing manual effort and supporting agile development cycles.
Job Classification
Industry: IT Services & Consulting
Functional Area / Department: Engineering - Software & QA
Role Category: Software Development
Role: Data Engineer
Employment Type: Full time
Contact Details:
Company: Cognizant
Location(s): Bengaluru
Keyskills:
continuous integration
aws iam
api gateway
glue
ci/cd
emr
data pipeline
financial services
cloud
iam
spark
cloud applications
etl
python
aws certified
amazon rds
aws lambda
data engineering
aws glue
amazon sqs
quality assurance
athena
agile
aws