
Hadoop and Kafka @ Same Page Hiring


Job Description

We are seeking a talented and experienced Hadoop and Kafka Developer to join our team. The ideal candidate will be responsible for designing, developing, and maintaining large-scale data processing solutions using Hadoop ecosystem tools and Kafka. You will work closely with data engineers, architects, and stakeholders to ensure the reliability, scalability, and efficiency of our data pipelines and streaming applications.

Responsibilities:

1. Design, develop, and implement data processing solutions leveraging Hadoop ecosystem tools such as HDFS, MapReduce, Spark, Hive, and HBase.
2. Build real-time data streaming applications using Kafka for message brokering and event-driven architectures.
3. Develop and maintain scalable, fault-tolerant data pipelines to ingest, process, and analyze large volumes of data.
4. Collaborate with cross-functional teams to gather requirements, understand business objectives, and translate them into technical solutions.
5. Optimize performance and troubleshoot issues in data processing and streaming applications.
6. Ensure data quality, integrity, and security throughout the entire data lifecycle.
7. Stay current with the latest technologies and best practices in big data processing and streaming.

Requirements:

1. Bachelor's degree in Computer Science, Information Technology, or a related field (Master's degree preferred).
2. Proven experience in designing, developing, and implementing big data solutions using the Hadoop ecosystem.
3. Strong proficiency in programming languages such as Java, Scala, or Python.
4. Hands-on experience with Apache Kafka, including topics, partitions, producers, consumers, and Kafka Connect (a minimal producer sketch follows this description).
5. Solid understanding of distributed computing principles and large-scale data processing frameworks.
6. Experience with SQL and NoSQL databases for data storage and retrieval.
7. Familiarity with containerization technologies like Docker and orchestration tools like Kubernetes.
8. Excellent problem-solving skills and the ability to troubleshoot complex issues in distributed environments.
9. Strong communication and interpersonal skills, with the ability to collaborate effectively in a team environment.
10. Experience with Agile development methodologies is a plus.

Mandatory Skills: Hadoop, Kafka, Java, SQL

About Company / Benefits:
  • 100% Work From Home
  • Flexible working hours

Role: Hadoop and Kafka / 100% Remote
Work Location: India
Work Exp.: 3-15 Years
Job Type: Full Time
Salary: INR 0.5-1.5 LPA
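
The Kafka items above (topics, partitions, producers, consumers, Kafka Connect) are the core of the role's streaming work. As a minimal, non-authoritative sketch of what that looks like in Java, the snippet below sends one keyed record to a topic. The broker address (localhost:9092), topic name ("events"), key, and payload are illustrative assumptions, not details taken from this posting.

    // Minimal Kafka producer sketch (illustrative; broker, topic, and payload are assumptions)
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class EventProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());
            props.put("acks", "all"); // wait for replica acknowledgment for durability

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Records sharing a key hash to the same partition,
                // which preserves per-key ordering for downstream consumers.
                producer.send(new ProducerRecord<>("events", "user-42", "{\"action\":\"login\"}"));
            }
        }
    }

A matching consumer would subscribe to the same topic with KafkaConsumer and poll in a loop, while Kafka Connect covers the ingest/egress pipelines the responsibilities also mention.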

Employment Category:

Employment Type: Full time
Industry: IT Services & Consulting
Role Category: Not Specified
Functional Area: Not Specified
Role/Responsibilities: Hadoop and Kafka

Contact Details:

Company: Travitons Technologies
Location(s): All India



Key Skills: Hadoop, Kafka, Java, SQL



Similar positions

Department Head - Production Planning

  • Aditya Birla Sun Life
  • 12 to 16 Yrs
  • All India
  • 21 hours ago
₹ Not Disclosed

Director of QA and Automation

  • Aditya Birla Sun Life
  • 12 to 16 Yrs
  • Noida, Gurugram
  • 22 hours ago
₹ Not Disclosed

Hadoop with Scala Developer

  • Same Page Hiring
  • 8 to 12 Yrs
  • All India
  • 1 day ago
₹ Not Disclosed

DevOps Engineer (Jenkins and Kubernetes)

  • Vega Intellisoft
  • 3 to 7 Yrs
  • 1 day ago
₹ Not Disclosed

Same Page Hiring

Same Page Hiring for Aditya Birla Life Insurance