Big Data Engineer @ Smartavya Analytica

Job Description

Job Title: Big Data Developer

Location: Mumbai, India

Experience: 3 - 5 Years

Department: Big Data and Cloud

Job Summary: Smartavya Analytica Private Limited is seeking a skilled Hadoop Developer to join our team and contribute to the development and maintenance of large-scale Big Data solutions. The ideal candidate will have extensive experience in Hadoop ecosystem technologies and a solid understanding of distributed computing, data processing, and data management.

Requirements:

  • 3+ years of overall experience in developing, testing, and implementing Big Data projects using Hadoop, Spark, and Hive.
  • Hands-on experience playing a lead role in Big Data projects: implementing one or more tracks within a project, identifying and assigning tasks within the team, and providing technical guidance to team members.
  • Experience setting up Hadoop services, implementing ETL/ELT pipelines, and ingesting and processing terabytes of data from varied source systems.
  • Experience working in an onshore/offshore model, leading technical discussions with customers, mentoring and guiding teams on technology, and preparing HDD (high-level design) and LDD (low-level design) documents.

Skills:

  • Spark with Scala or PySpark; the Hadoop ecosystem, including Hive, Sqoop, Impala, Oozie, Hue, and Flume; Java, Python, SQL, and Bash (shell scripting)
  • Apache Kafka and Storm; distributed systems; a good understanding of networking, platform and data security concepts, Kerberos, and Kubernetes
  • Understanding of data governance concepts and experience implementing metadata capture, lineage capture, and a business glossary
  • Experience implementing CI/CD pipelines and working with SCM tools such as Git, Bitbucket, etc.
  • Ability to assign and manage tasks for team members, provide technical guidance, and work with architects on HDDs, LDDs, and POCs
  • Hands-on experience writing data ingestion and data processing pipelines using Spark and SQL, and implementing SCD type 1 and type 2, auditing, and exception-handling mechanisms (see the sketch after this list)
  • Experience implementing data warehousing projects with a Java- or Scala-based Hadoop programming background
  • Proficiency with development methodologies such as waterfall and agile/Scrum
  • Exceptional communication, organization, and time management skills
  • Collaborative approach to decision-making and strong analytical skills
  • Good to have: certifications in any of GCP, AWS, Azure, or Cloudera
  • Ability to work on multiple projects simultaneously, prioritizing appropriately
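
For context on the SCD requirement above, here is a minimal PySpark sketch of the SCD type 2 pattern: expire the open version of a changed key and append a new open version. The table contents and the column names (id, city, start_date, end_date, is_current) are illustrative assumptions, not details from this posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2-sketch").getOrCreate()

# Existing dimension: one closed historical row and one open row for id=1.
dim = spark.createDataFrame(
    [(1, "Alice", "Delhi",  "2022-01-01", "2023-01-01", False),
     (1, "Alice", "Mumbai", "2023-01-01", None,         True)],
    ["id", "name", "city", "start_date", "end_date", "is_current"],
)

# Incoming batch: id=1 moved to Pune, id=2 is a brand-new key.
updates = spark.createDataFrame(
    [(1, "Alice", "Pune"), (2, "Bob", "Kochi")],
    ["id", "name", "city"],
).alias("u")

today = F.current_date().cast("string")
current = dim.filter("is_current").alias("d")

# Keys whose tracked attribute changed in this batch.
changed = (current.join(updates, "id")
                  .where(F.col("d.city") != F.col("u.city"))
                  .select("id"))

# 1) Close out the open versions of changed keys.
expired = (current.join(changed, "id", "left_semi")
                  .withColumn("end_date", today)
                  .withColumn("is_current", F.lit(False)))

# 2) Carry over history rows and open rows of unchanged keys as-is.
unchanged = (dim.filter(~F.col("is_current"))
                .unionByName(current.join(changed, "id", "left_anti")))

# 3) Open new versions for changed keys and for keys never seen before.
new_rows = (updates.join(current.join(changed, "id", "left_anti"), "id", "left_anti")
                   .withColumn("start_date", today)
                   .withColumn("end_date", F.lit(None).cast("string"))
                   .withColumn("is_current", F.lit(True)))

result = unchanged.unionByName(expired).unionByName(new_rows)
result.orderBy("id", "start_date").show()
```

When the target table format supports it (for example Hive ACID or Delta tables), the same flow is often expressed as a single MERGE statement instead of explicit joins and unions.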

Job Classification

Industry: IT Services & Consulting
Functional Area / Department: Engineering - Software & QA
Role Category: Software Development
Role: Big Data Engineer
Employment Type: Full time

Contact Details:

Company: Smartavya Analytica
Location(s): Mumbai


Key skills: Big Data, Spark, PySpark, Hive, Hadoop, Scala, Kafka, ETL, Python

Salary: Not Disclosed

Similar positions

Developer III - Software Engineering - Java Developer

  • UST
  • 3 - 5 years
  • Kochi
  • 1 day ago
₹ Not Disclosed

Dir. Software Engineering

  • UKG
  • 20 - 25 years
  • Pune
  • 2 days ago
₹ Not Disclosed

Sr Analyst II Data Science

  • DXC Technology
  • 3 - 6 years
  • Bengaluru
  • 2 days ago
₹ Not Disclosed

Company Details: Smartavya Analytica