AWS PySpark Data Engineer @ Data Economy

Job Description

We are seeking a highly skilled and experienced Senior Data Engineer to lead the end-to-end development of complex models for compliance and supervision. The ideal candidate will have deep expertise in cloud-based infrastructure, ETL pipeline development, and financial domains, with a strong focus on creating robust, scalable, and efficient solutions.

Key Responsibilities:
- Model Development: Lead the development of advanced models using AWS services such as EMR, Glue, and Glue Notebooks.
- Cloud Infrastructure: Design, build, and optimize scalable cloud infrastructure solutions, with a minimum of 5 years of experience.
- ETL Pipeline Development: Create, manage, and optimize ETL pipelines using PySpark for large-scale data processing.
- CI/CD Implementation: Build and maintain CI/CD pipelines for deploying and maintaining cloud-based applications.
- Data Analysis: Perform detailed data analysis and deliver actionable insights to stakeholders.
- Collaboration: Work closely with cross-functional teams to understand requirements, present solutions, and ensure alignment with business goals.
- Agile Methodology: Operate effectively in agile or hybrid agile environments, delivering high-quality results within tight deadlines.
- Framework Development: Enhance and expand existing frameworks and capabilities to support evolving business needs.
- Documentation and Communication: Create clear documentation and present technical solutions to both technical and non-technical audiences.
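To give a concrete (and purely illustrative) flavor of the ETL responsibilities above, here is a minimal sketch of a cleanse-and-aggregate transform written in plain Python. In practice this logic would be expressed as PySpark DataFrame operations running on EMR or Glue; the record shape, column names, and `summarize_trades` function are all invented for illustration, not part of the posting:

```python
from collections import defaultdict

def summarize_trades(records):
    """Filter out invalid rows and aggregate notional value per symbol.

    `records` is an iterable of dicts with hypothetical keys
    'symbol', 'quantity', and 'price'. Rows with non-positive
    quantity are dropped, mirroring a typical ETL cleansing step
    before aggregation.
    """
    totals = defaultdict(float)
    for row in records:
        if row.get("quantity", 0) <= 0:
            continue  # cleansing: skip cancelled/invalid trades
        totals[row["symbol"]] += row["quantity"] * row["price"]
    return dict(totals)

trades = [
    {"symbol": "AAPL", "quantity": 10, "price": 150.0},
    {"symbol": "AAPL", "quantity": -5, "price": 151.0},  # cancelled trade
    {"symbol": "MSFT", "quantity": 4, "price": 300.0},
]
print(summarize_trades(trades))
```

In a PySpark pipeline the same step would typically be a `filter` followed by a `groupBy(...).agg(...)` over a DataFrame, letting Spark distribute the work across the cluster.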

Requirements
Required Qualifications:
- 5+ years of experience with Python programming.
- 5+ years of experience in cloud infrastructure, particularly AWS.
- 3+ years of experience with PySpark, including usage with EMR or Glue Notebooks.
- 3+ years of experience with Apache Airflow for workflow orchestration.
- Solid experience with data analysis in fast-paced environments.
- Strong understanding of capital markets, financial systems, or prior experience in the financial domain is a must.
- Proficiency with cloud-native technologies and frameworks.
- Familiarity with CI/CD practices and tools like Jenkins, GitLab CI/CD, or AWS CodePipeline.
- Experience with notebooks (e.g., Jupyter, Glue Notebooks) for interactive development.
- Excellent problem-solving skills and the ability to handle complex technical challenges.
- Strong communication and interpersonal skills for collaborating across teams and presenting solutions to diverse audiences.
- Ability to thrive in a fast-paced, dynamic environment.


Benefits
Standard Company Benefits
Job Classification

Industry: IT Services & Consulting
Functional Area / Department: Engineering - Software & QA
Role Category: Software Development
Role: Data Engineer
Employment Type: Full time

Contact Details:

Company: Data Economy
Location(s): Hyderabad


Key skills: Data analysis, Cloud, Agile, Data processing, Apache, AWS, Python

Salary: ₹ Not Disclosed
