Streaming Data: Technical Skills Requirements
Experience: 5+ years of solid hands-on and solution-architecting experience in Big Data technologies (AWS preferred)
- Hands-on experience with AWS DynamoDB, EKS, Kafka, Kinesis, Glue, and EMR
- Hands-on experience with a programming language such as Scala, used with Spark
- Good command of and working experience with Hadoop MapReduce, HDFS, Hive, HBase, and/or NoSQL databases
- Hands-on working experience with any of the data engineering/analytics platforms (Hortonworks, Cloudera, MapR, AWS); AWS preferred
- Hands-on experience with data ingestion tools: Apache NiFi, Apache Airflow, Sqoop, and Oozie
- Hands-on working experience with data processing at scale using event-driven systems and message queues (Kafka, Flink, Spark Streaming)
- Hands-on working experience with AWS services such as EMR, Kinesis, S3, CloudFormation, Glue, API Gateway, and Lake Formation
- Hands-on working experience with AWS Athena
- Experience building data pipelines for structured/unstructured, real-time/batch, and synchronous/asynchronous event data using MQ, Kafka, and stream processing
Mandatory Skills:
Spark, Scala, AWS, Hadoop
Share your resume at Aa***********a@Co****e.com if you have all the mandatory skills and are an early joiner.
Keyskills: AWS, Big Data, Scala, Hadoop, Spark, Spark Programming, Scala Programming, Apache Flink, Spark Streaming