Key Responsibilities:
Design and Build Data Pipelines: Develop and maintain scalable data pipelines for extracting, transforming, and loading (ETL) data from various sources.
Data Integration: Combine raw data from different sources to create consistent and machine-readable formats.
Data Quality and Reliability: Implement methods to improve data reliability and quality, ensuring data is accurate and accessible.
Collaboration: Work closely with data scientists, analysts, and other stakeholders to support their data infrastructure needs.
Optimization: Identify and implement process improvements, such as optimizing data delivery and automating manual processes.
Data Analysis: Conduct complex data analysis and report on results to provide actionable insights.
Skills and Qualifications:
Technical Expertise: Proficiency in programming languages like Python, Java, and SQL. Experience with big data tools and frameworks such as Hadoop, Spark, and Kafka.
Data Modeling: Strong understanding of data modeling, data warehousing, and database design.
Cloud Technologies: Experience with cloud platforms like AWS, Azure, or Google Cloud Platform.
Problem-Solving: Excellent problem-solving skills and the ability to work with large, complex datasets.
Communication: Strong communication skills to collaborate with various teams and stakeholders.
Typical Requirements:
Education: A degree in Computer Science, IT, or a related field.
Experience: Previous experience as a data engineer or in a similar role, often requiring 2-3 years of hands-on experience.
Shift: Willingness to work an overlap shift.
Availability: Candidates who can commence work immediately will be given priority.
Key Skills: Python, SQL, Data Engineering, Snowflake, Microsoft Azure, ETL
Company Profile:
Porteck India Infoservices Pvt. Ltd.: Porteck is uniquely qualified to meet our customers' needs, offering solutions to many of the challenges faced in today's competitive landscape through a combination of workflow process and technological expertise.