Desired Candidate Profile
- Linux: 5 or more years of Unix systems engineering experience with Red Hat Linux, CentOS, or Ubuntu.
- AWS: Working experience and good understanding of the AWS environment, including VPC, EC2, EBS, S3, RDS, SQS, CloudFormation, Lambda, and Redshift.
- DevOps: Experience with DevOps automation - orchestration/configuration management and CI/CD tools (Ansible, Chef, Puppet, Jenkins, etc.); Puppet and Ansible experience is a strong plus.
- Programming: Experience programming with Python, Bash, REST APIs, and JSON encoding.
- Version Control: Working experience with one or more version control platforms (Git, TFS); Git experience preferred.
- Hadoop: 1 year operational experience with the Hadoop stack (MapReduce, Spark, Sqoop, Pig, Hive, Impala, Sentry, HDFS).
- AWS EMR: Experience in Amazon EMR cluster configuration.
- ETL: Experience with job schedulers such as Oozie or Airflow; Airflow experience preferred.
- Testing: Strong familiarity with CI/CD and good scripting skills (Python, Unix shell, etc.).
- Security: Experience implementing role-based security, including AD integration, security policies, and auditing in a Linux/Hadoop/AWS environment.
- Monitoring: Hands-on experience with monitoring tools such as AWS CloudWatch, Nagios, or Splunk.
- Backup/Recovery: Experience with the design and implementation of big data backup/recovery solutions.
- Networking: Working knowledge of TCP/IP networking, SMTP, HTTP, load-balancers (ELB, HAProxy) and high availability architecture.
- Ability to keep systems running at peak performance and to apply operating system upgrades, patches, and version upgrades as required.
Education:
UG: Any Graduate - Any Specialization
PG: Any Postgraduate - Any Specialization
Keyskills:
Hive
Oozie
Sqoop
Impala
MapReduce
Hadoop
HDFS
Spark
Pig
Big Data
Cloud
DevOps
Red Hat
Linux
CentOS
Ubuntu
Python
AWS