Responsibilities:
- Be part of the DevOps Platform team that owns the Data Reservoir platform code, tools, and processes.
- Participate in the design and architecture of big data solutions.
- Design, develop, and maintain data patterns.
- Design, develop, and maintain automation testing frameworks.
- Apply expertise in Git and CI/CD.
- Optimize and tune Spark code to ensure high performance and scalability (a minimal tuning sketch follows this list).
- Automate processes and react quickly to continuously improve them.
- Write clear and concise documentation for Python/Spark code.
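As an illustration of the Spark tuning work described above, the following is a minimal PySpark sketch rather than a prescribed implementation; the dataset paths, column names, and configuration values are hypothetical stand-ins for the real Data Reservoir data.

# Minimal PySpark tuning sketch (illustrative only; paths, columns, and
# configuration values are hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("reservoir-tuning-sketch")
    # Example tuning knobs; real values depend on cluster size and data volume.
    .config("spark.sql.shuffle.partitions", "200")
    .config("spark.sql.adaptive.enabled", "true")
    .getOrCreate()
)

# Hypothetical inputs: a large fact table and a small dimension table.
orders = spark.read.parquet("/data/reservoir/orders")
stores = spark.read.parquet("/data/reservoir/stores")

# Broadcasting the small table avoids a shuffle-heavy join.
enriched = orders.join(F.broadcast(stores), on="store_id", how="left")

# Aggregate and write output partitioned by date for efficient downstream reads.
daily = (
    enriched.groupBy("store_id", "order_date")
    .agg(F.sum("amount").alias("revenue"))
)
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "/data/reservoir/daily_revenue"
)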
Requirements:
- Experience working in a DevOps setup.
- Experience working with Git repositories.
- Experience with Spark SQL and Spark Streaming.
- Experience with batch and streaming data processing using Spark (see the streaming sketch after this list).
- Experience in building RESTful APIs is a plus.
- Experience with databases such as DB2, Oracle, Hadoop, Hive, and Postgres.
- High levels of ownership and accountability for undertaken tasks.
- Strong problem-solving and analytical skills.
- Excellent written and verbal communication skills.
- Ability to work independently as well as part of a team.
- Strong attention to detail and accuracy.
- Strong knowledge of Agile methodologies.
- Exposure to the Cloudera and Azure platforms (Microsoft Fabric/Databricks/Data Factory/Synapse) is an advantage.

Experience: 5+ years
Qualification: BTech/BE
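As an illustration of the batch and streaming processing skills listed above, the following is a minimal Spark Structured Streaming sketch; the Kafka broker, topic name, event schema, and checkpoint path are assumptions made for the example, not details of this role.

# Minimal Spark Structured Streaming sketch (illustrative only; broker, topic,
# schema, and checkpoint path are hypothetical). Requires the spark-sql-kafka
# connector on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("reservoir-streaming-sketch").getOrCreate()

event_schema = StructType([
    StructField("store_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read a stream of JSON events from a hypothetical Kafka topic.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "sales-events")
    .load()
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Windowed aggregation with a watermark to bound the state Spark keeps.
revenue = (
    events.withWatermark("event_time", "10 minutes")
    .groupBy(F.window("event_time", "5 minutes"), "store_id")
    .agg(F.sum("amount").alias("revenue"))
)

# Write incremental updates; the checkpoint makes the query restartable.
query = (
    revenue.writeStream
    .outputMode("update")
    .format("console")
    .option("checkpointLocation", "/tmp/checkpoints/sales-events")
    .start()
)
query.awaitTermination()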
Keyskills: Data Engineering, PySpark, Hive, DevOps, Git, DB2, Agile methodologies, Hadoop, CI/CD, Postgres, Databricks, Oracle
Colruyt Group is a leading retailer of food and non-food products in Belgium, France, and Luxembourg.