Role and Responsibilities
Must have experience in Python
Must have experience in Big Data technologies: Spark, Hadoop, Hive, HBase, and Presto
Must have experience in data warehousing
Must have experience in building reliable and scalable ETL pipelines (a minimal sketch follows this list)
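For a flavor of the ETL work involved, the sketch below shows a minimal PySpark batch pipeline: extract raw JSON from S3, transform it into a daily aggregate, and load it into a Hive table. This is illustrative only; the bucket, table, and column names are hypothetical and not part of this posting.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical job name; Hive support enabled since Hive is in the stack above.
spark = (
    SparkSession.builder
    .appName("orders_etl")
    .enableHiveSupport()
    .getOrCreate()
)

# Extract: read raw events from S3 (hypothetical bucket and prefix).
raw = spark.read.json("s3a://example-bucket/raw/orders/")

# Transform: basic cleansing plus a daily revenue aggregate.
daily = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("order_date", F.to_date("created_at"))
       .groupBy("order_date")
       .agg(F.count("*").alias("orders"),
            F.sum("amount").alias("revenue"))
)

# Load: write a partitioned Hive table (hypothetical table name).
(daily.write
      .mode("overwrite")
      .partitionBy("order_date")
      .saveAsTable("analytics.daily_orders"))
```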
Requirements
2-4 years of professional experience in a data engineering role
BS or MS in Computer Science or a similar engineering stream
Hands-on experience with data warehousing tools
Knowledge of distributed systems such as Hadoop, Hive, Spark, and Kafka
Experience with AWS services (EC2, RDS, S3, Athena, Data Pipeline/Glue, Lambda, DynamoDB, etc.)
Key skills: PySpark, Big Data, data warehousing, AWS, Python
InnovationM is an end-to-end technology solution provider. We provide specialized design & development services in the technology space, focusing on end-to-end solution development (e.g. product development & custom application development) on mobile, web, middleware & server back-...