Tech-savvy engineer who is willing and able to learn new skills and track industry trends
5+ years of solid data engineering experience, especially in open-source, data-intensive, distributed environments, with Big Data technologies such as Spark, Hive, HBase, and Scala
Programming background in Scala or Python preferred.
Experience in Scala, Spark, and PySpark; Java is good to have.
Experience migrating data to AWS or another cloud.
Experience in SQL and NoSQL databases.
Optional: experience modeling data migrated from Teradata to the cloud.
Experience building ETL pipelines.
Experience building data pipelines on AWS (S3, EC2, EMR, Athena, Redshift) or another cloud.
Self-starter with a resourceful personality and the ability to work under pressure
Exposure to Scrum and Agile development best practices
Experience working with geographically distributed teams
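The ETL-pipeline skills listed above follow a common extract-transform-load pattern. A minimal sketch in plain Python (the sample data, field names, and aggregation are hypothetical; a production pipeline would typically run Spark on EMR, reading from S3 and writing to Redshift or Parquet):

```python
import csv
import io

# Extract: read raw CSV records. In practice these would come from S3
# (via Spark or boto3); an in-memory sample stands in for the source here.
RAW = """user_id,country,amount
1,US,10.50
2,IN,3.25
3,US,7.00
"""

def extract(text):
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Transform: cast the amount to float and aggregate per country.
    totals = {}
    for row in rows:
        totals[row["country"]] = totals.get(row["country"], 0.0) + float(row["amount"])
    return totals

def load(totals):
    # Load: a real pipeline would write Parquet to S3 or rows to Redshift;
    # here we simply return the sorted result.
    return dict(sorted(totals.items()))

result = load(transform(extract(RAW)))
print(result)  # {'IN': 3.25, 'US': 17.5}
```

The same three stages map directly onto a Spark job: `spark.read.csv` for extract, DataFrame transformations for transform, and `df.write.parquet` for load.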
Role & Responsibilities:
Build data and ETL pipelines in AWS.
Support migration of data to the cloud using Big Data technologies such as Spark, Hive, Talend, and Python.
Interact with customers daily to ensure smooth engagement.
Responsible for timely, high-quality deliveries.
Fulfill organizational responsibilities: share knowledge and experience with other groups in the organization and conduct technical sessions and training.
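The cloud-migration responsibility above usually means reading from a legacy schema, remapping it, and loading into a target warehouse. A hedged sketch of that pattern, with stdlib sqlite3 standing in for both ends (table names, columns, and the cents-to-dollars conversion are hypothetical; a real job would use Spark or Talend against Teradata and Redshift):

```python
import sqlite3

# Source stand-in: a legacy table (in a real migration, Teradata).
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE legacy_orders (id INTEGER, amt_cents INTEGER)")
src.executemany("INSERT INTO legacy_orders VALUES (?, ?)",
                [(1, 1050), (2, 325)])

# Target stand-in: the cloud-side table (e.g. Redshift or an Athena-backed S3 table).
dst = sqlite3.connect(":memory:")
dst.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")

# Migrate with a schema remap: rename columns and convert cents to dollars.
rows = src.execute("SELECT id, amt_cents FROM legacy_orders").fetchall()
dst.executemany("INSERT INTO orders VALUES (?, ?)",
                [(oid, cents / 100.0) for oid, cents in rows])
dst.commit()

migrated = dst.execute(
    "SELECT order_id, amount FROM orders ORDER BY order_id").fetchall()
print(migrated)  # [(1, 10.5), (2, 3.25)]
```

At scale the read/remap/write steps would be distributed (Spark DataFrames) rather than row-by-row, but the shape of the job is the same.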
Job Classification
Industry: IT Services & Consulting
Functional Area / Department: Engineering - Software & QA
Role Category: Software Development
Role: Data Engineer
Employment Type: Full time