Role: Senior Data Engineer
Location: Bangalore - Hybrid
Experience: 10+ Years
Job Requirements:
ETL & Data Pipelines:
Experience building and maintaining ETL pipelines over large data sets using AWS Glue, EMR, Kinesis, Kafka, and CloudWatch
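For illustration, a representative Glue ETL job for this kind of work might look like the minimal sketch below (the catalog database, table, and S3 bucket names are hypothetical placeholders):

import sys
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (database/table names are placeholders)
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders")

# Drop rows with null keys, then write partitioned Parquet back to S3
cleaned = source.toDF().dropna(subset=["order_id"])
(cleaned.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/curated/orders/"))

job.commit()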
Programming & Data Processing:
Strong Python development experience with proficiency in Apache Spark, particularly PySpark
Experience consuming and integrating APIs
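As a sketch of the API-plus-Spark work implied above (the endpoint URL and response shape are assumptions):

import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("api_ingest").getOrCreate()

# Pull one page of records from a REST endpoint (URL is hypothetical)
resp = requests.get("https://api.example.com/v1/orders", timeout=30)
resp.raise_for_status()
records = resp.json()  # assumed: a list of flat JSON objects

# Load into a Spark DataFrame and run a simple aggregation
df = spark.createDataFrame(records)
df.groupBy("status").count().show()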
Database Management:
Strong skills in writing SQL queries and performance tuning in AWS Redshift
Proficient with other industry-leading RDBMS such as MS SQL Server and PostgreSQL
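A common Redshift tuning workflow is to inspect the query plan with EXPLAIN; the minimal sketch below uses the redshift_connector driver (connection details and table names are placeholders):

import redshift_connector

conn = redshift_connector.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    database="analytics",
    user="etl_user",
    password="***",  # placeholder; use Secrets Manager in practice
)
cur = conn.cursor()

# EXPLAIN surfaces the plan; DS_DIST_* steps flag costly data redistribution
cur.execute("""
    EXPLAIN
    SELECT o.customer_id, SUM(o.amount) AS total
    FROM orders o
    JOIN customers c ON c.customer_id = o.customer_id
    GROUP BY o.customer_id;
""")
for (line,) in cur.fetchall():
    print(line)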
AWS Services:
Proficient in working with AWS services including AWS Lambda, EventBridge, Step Functions, SNS, SQS, S3, and ML models
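To give a flavour of the serverless side, a minimal EventBridge-triggered Lambda handler that fans a pipeline event out to SNS might look like this (the topic ARN is a placeholder):

import json
import boto3

sns = boto3.client("sns")  # created once per container for connection reuse

def lambda_handler(event, context):
    # EventBridge delivers the payload under the "detail" key
    detail = event.get("detail", {})
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:pipeline-alerts",  # placeholder
        Message=json.dumps(detail),
        Subject="Pipeline event",
    )
    return {"statusCode": 200}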
Interested candidates can share their resumes at Ne****1@da*******p.com
Keyskills: ETL Pipelines, Data Pipeline, Data Engineer, PySpark, PostgreSQL, Kafka, SQL Server, AWS Glue, SQL, RDBMS, API, Spark, AWS, Python
Damco is a fast-growing IT solutions and services provider. Founded in 1996, Damco has clients around the globe and offices in the UK, USA, Luxembourg, and Australia, apart from its world-class facilities in New Delhi NCR, Chandigarh, and Hyderabad, India. Damco provides Onsite Consulting se...