Job Description
To build end-to-end data pipelines covering data acquisition from multiple sources, loading the data into cloud data warehouses, and performing a wide range of transformations following industry-standard data modeling techniques such as Star Schema, Snowflake, and Data Vault.
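For illustration only (table names, paths, and columns below are hypothetical and not part of this posting), the kind of PySpark transformation work involved might look like this minimal star-schema load sketch:

```python
from pyspark.sql import SparkSession, functions as F

# Illustrative sketch only: all paths and column names are assumed.
spark = SparkSession.builder.appName("orders_star_schema_load").getOrCreate()

# Acquire raw order events from a landing zone in cloud object storage.
orders_raw = spark.read.json("s3://landing-zone/orders/")  # hypothetical path

# Conform a customer dimension: one row per customer.
dim_customer = (
    orders_raw
    .select("customer_id", "customer_name", "country")
    .dropDuplicates(["customer_id"])
)

# Build the fact table keyed on the dimension's natural key.
fact_orders = (
    orders_raw
    .withColumn("order_date", F.to_date("order_ts"))
    .select("order_id", "customer_id", "order_date", "amount")
)

# Load into the warehouse layer (Parquet here; a Snowflake or BigQuery
# connector write would take its place on a cloud data warehouse).
dim_customer.write.mode("overwrite").parquet("s3://warehouse/dim_customer/")
fact_orders.write.mode("overwrite").parquet("s3://warehouse/fact_orders/")
```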
Incumbents should have 3-8+ years of relevant technical experience and be highly proficient in the latest cloud-enabled Big Data technologies, with extensive working knowledge of Spark, Hadoop, SQL, NoSQL, and at least one programming language (e.g., Java, Scala, or Python).
Hands-on experience in implementing ETL/ELT pipelines on at least one cloud platform (AWS, Google Cloud, or Azure).
Experience with Snowflake and Google BigQuery will be a distinct advantage.
Thorough understanding of data modeling, data architecture, and data warehousing techniques, both on-premises and in the cloud.
A good understanding of Data Quality, Data Security, and Data Governance practices is desirable.
Candidates should be familiar with industry best practices for unit testing and regression testing for all ETL/ELT code.
Capital Markets/Banking domain experience would be highly desirable.
Employment Category:
Employment Type: Full time
Industry: KPO
Functional Area: IT
Role Category: Software Developer
Role/Responsibilities: Hadoop/PySpark/Python Architect
Contact Details:
Company: Change Leaders
Location(s): Delhi, NCR