Note: Immediate joiners will be given preference.
Type: Contractual
Mode: Work from office for the first month, then remote
Job description:
Data Engineer
Responsibilities:
Design, develop, and maintain robust and scalable data pipelines to support data integration, transformation, and analytics
Build and maintain data architecture for both structured and unstructured data
Develop and optimize ETL processes using tools such as Apache Spark, Kafka, or Airflow
Work with data analysts, data scientists, and business stakeholders to understand data requirements and deliver solutions
Ensure high data quality and integrity across the pipeline
Implement data security and governance standards
Monitor and troubleshoot performance issues in data pipelines
Requirements:
Proficiency in SQL and experience with relational and NoSQL databases
Hands-on experience with data pipeline tools
Good understanding of data warehousing concepts and tools
Strong communication and collaboration skills
Analytical mindset and attention to detail
Familiarity with SSRS (SQL Server Reporting Services) for creating reports
Expertise in SQL query optimization, stored procedures, and database tuning
Hands-on experience with data migration and replication techniques, including Change Data Capture (CDC) and SQL Server Integration Services (SSIS)
Strong understanding of dimensional modelling and data warehousing concepts, including star and snowflake schemas
Familiarity with Agile methodologies and DevOps practices for continuous integration and deployment (CI/CD) pipelines
Familiarity with data cubes
Soft skills would include:
Good problem-solving and decision-making skills
Dynamic self-starter; independent, but confident enough to ask clarifying questions; adaptable and able to handle pressure
Good communication skills and a team player
Attention to detail
Curiosity and a continuous-learning mindset
Time management skills
Accountability and ownership
Key skills: SQL, Airflow, SSRS, Kafka, CI/CD pipelines, ETL processes, SSIS