Job Title: Data Engineer
Location: Pune, India (On-site)
Experience: 3-5 years
Employment Type: Full-time
Job Summary
We are looking for a hands-on Data Engineer who can design and build modern Lakehouse solutions on Microsoft Azure. You will own data ingestion from source-system APIs through Azure Data Factory into OneLake, curate bronze/silver/gold layers on Delta Lake, and deliver dimensional models that power analytics at scale.
Key Responsibilities
Build secure, scalable Azure Data Factory pipelines that ingest data from APIs, files, and databases into OneLake.
Curate raw data into Delta Lake tables on ADLS Gen2 using the Medallion (bronze/silver/gold) architecture, ensuring ACID compliance and optimal performance.
Develop and optimize SQL/Spark SQL transformations in Microsoft Fabric Warehouse / Lakehouse environments.
Apply dimensional-modelling best practices (star/snowflake, surrogate keys, SCDs) to create analytics-ready datasets.
Implement monitoring, alerting, lineage, and CI/CD (Git/Azure DevOps) for all pipelines and artifacts.
Document data flows, data dictionaries, and operational runbooks.
Must-Have Technical Skills
Microsoft Fabric & Lakehouse experience
Microsoft Fabric Warehouse / Azure Synapse experience
Azure Data Factory: building, parameterizing, and orchestrating API-driven ingestion pipelines
ADLS Gen2 + Delta Lake
Strong SQL: advanced querying, tuning, and procedural extensions (T-SQL / Spark SQL)
Data-warehousing & Dimensional Modelling concepts
Good-to-Have Skills
Python (PySpark, automation, data-quality checks)
Unix/Linux shell scripting
DevOps (Git, Azure DevOps)
Education & Certifications
BE / B.Tech in Computer Science, Information Systems, or a related field
Preferred: Microsoft DP-203 (Azure Data Engineer Associate)
Soft Skills
Analytical, detail-oriented, and proactive problem solver
Clear written and verbal communication; ability to simplify complex topics
Collaborative and adaptable within agile, cross-functional teams
Key Skills: Azure Data Factory, Azure Synapse, ADLS Gen2, Data Lake, Fabric