
AWS PySpark Data Engineer @ Data Economy


Job Description

We are seeking a highly skilled and experienced Senior Data Engineer to lead the end-to-end development of complex models for compliance and supervision. The ideal candidate will have deep expertise in cloud-based infrastructure, ETL pipeline development, and financial domains, with a strong focus on creating robust, scalable, and efficient solutions.

Key Responsibilities:
- Model Development: Lead the development of advanced models using AWS services such as EMR, Glue, and Glue Notebooks.
- Cloud Infrastructure: Design, build, and optimize scalable cloud infrastructure solutions.
- ETL Pipeline Development: Create, manage, and optimize ETL pipelines using PySpark for large-scale data processing.
- CI/CD Implementation: Build and maintain CI/CD pipelines for deploying and maintaining cloud-based applications.
- Data Analysis: Perform detailed data analysis and deliver actionable insights to stakeholders.
- Collaboration: Work closely with cross-functional teams to understand requirements, present solutions, and ensure alignment with business goals.
- Agile Methodology: Operate effectively in agile or hybrid agile environments, delivering high-quality results within tight deadlines.
- Framework Development: Enhance and expand existing frameworks and capabilities to support evolving business needs.
- Documentation and Communication: Create clear documentation and present technical solutions to both technical and non-technical audiences.


Requirements
Required Qualifications:
- 5+ years of experience with Python programming.
- 5+ years of experience in cloud infrastructure, particularly AWS.
- 3+ years of experience with PySpark, including usage with EMR or Glue Notebooks.
- 3+ years of experience with Apache Airflow for workflow orchestration.
- Solid experience with data analysis in fast-paced environments.
- Strong understanding of capital markets and financial systems, or prior experience in the financial domain, is a must.
- Proficiency with cloud-native technologies and frameworks.
- Familiarity with CI/CD practices and tools such as Jenkins, GitLab CI/CD, or AWS CodePipeline.
- Experience with notebooks (e.g., Jupyter, Glue Notebooks) for interactive development.
- Excellent problem-solving skills and the ability to handle complex technical challenges.
- Strong communication and interpersonal skills for collaborating across teams and presenting solutions to diverse audiences.
- Ability to thrive in a fast-paced, dynamic environment.



Benefits

Standard Company Benefits

Job Classification

Industry: IT Services & Consulting
Functional Area / Department: Engineering - Software & QA
Role Category: Software Development
Role: Data Engineer
Employment Type: Full time

Contact Details:

Company: Data Economy
Location(s): Pune



Keyskills: Data Analysis, Cloud, Agile, Infrastructure, Workflow Orchestration, Data Processing, Apache Airflow, AWS, Python


Salary: Not Disclosed

