Roles & Responsibilities:
Designing, developing, and maintaining large-scale big data platforms using technologies such as Hadoop, Spark, and Kafka
Creating and managing data warehouses, data lakes and data marts
Implementing and optimizing ETL processes and data pipelines
Developing and maintaining security and access controls
Troubleshooting and resolving big data platform issues
Collaborating with other teams to ensure data consistency and integrity
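The ETL responsibilities listed above follow a common three-stage shape. The sketch below is purely illustrative and uses plain Python with hypothetical names (extract, transform, load, RAW_EVENTS); a production pipeline would typically read from Kafka or HDFS and write to a warehouse via Spark, but the pattern is the same.

```python
# Minimal, hypothetical sketch of the extract-transform-load pattern.
# A real pipeline would use Spark/Kafka; the three stages are the point.

RAW_EVENTS = [
    {"user": "a", "amount": "10.5", "valid": "true"},
    {"user": "b", "amount": "oops", "valid": "true"},   # malformed amount
    {"user": "c", "amount": "3.0", "valid": "false"},   # flagged invalid
]

def extract(source):
    """Pull raw records from the source system."""
    return list(source)

def transform(records):
    """Clean and type-cast records, dropping malformed or invalid rows."""
    out = []
    for r in records:
        if r.get("valid") != "true":
            continue
        try:
            out.append({"user": r["user"], "amount": float(r["amount"])})
        except ValueError:
            continue  # skip rows whose amount cannot be parsed
    return out

def load(records, sink):
    """Write cleaned records into the target store (here, a plain dict)."""
    for r in records:
        sink[r["user"]] = r["amount"]
    return sink

warehouse = {}
load(transform(extract(RAW_EVENTS)), warehouse)
print(warehouse)  # -> {'a': 10.5}
```

Keeping the stages as separate functions makes each one independently testable, which is what makes optimizing and troubleshooting such pipelines tractable at scale.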
Skills Requirements:
Qualifications:
Keyskills: Hive, Apache Pig, Impala, Spark, Hadoop, Azure Data Lake, data processing, big data technologies, Microsoft Azure, Apache Flink, data engineering, data mart, Apache MapReduce, data science, Kafka, AWS, ETL, data integration, data lake, ETL process