
Big Data Engineer (Scala, Hadoop, Kafka, Elastic) @ Fidelis Technology


Job Description

    Coders Brain is a global leader in services, digital and business solutions, partnering with its clients to simplify, strengthen and transform their businesses. We ensure the highest levels of certainty and satisfaction through a deep-set commitment to our clients, comprehensive industry expertise and a global network of innovation and delivery centers. We achieved our success because of how successfully we integrate with our clients.

Position Name: Big Data Engineer
Experience Required: 6+ Years
Notice Period: Immediate joiner
Location: Bangalore

Responsibilities:

  • Designing and implementing fine-tuned, production-ready data/ML pipelines on the Hadoop platform.
  • Driving optimization, testing and tooling to improve quality.
  • Reviewing and approving high-level and detailed designs to ensure that the solution delivers on the business needs and aligns with the data & analytics architecture principles and roadmap.
  • Understanding business requirements and solution design to develop and implement solutions that adhere to big data architectural guidelines and address business requirements.
  • Following proper SDLC (code review, sprint process).
  • Identifying, designing, and implementing internal process improvements: automating manual processes, optimizing data delivery, etc.
  • Building robust and scalable data infrastructure (both batch processing and real-time) to support the needs of internal and external users.
  • Understanding various data security standards and using secure data security tools to apply and adhere to the required data controls for user access on the Hadoop platform.
  • Supporting and contributing to development guidelines and standards for data ingestion.
  • Working with data scientists and the business analytics team to assist with data ingestion and data-related technical issues.
  • Designing and documenting the development & deployment flow.
Requirements:

  • Experience in developing REST API services using one of the Scala frameworks.
  • Ability to troubleshoot and optimize complex queries on the Spark platform.
  • Expertise in building and optimizing big data data/ML pipelines, architectures and data sets.
  • Knowledge of modelling unstructured data into structured data designs.
  • Experience in big data access and storage techniques.
  • Experience in cost estimation based on the design and development.
  • Excellent debugging skills for the technical stack mentioned above, including analyzing server logs and application logs.
  • Highly organized, self-motivated, proactive, and able to propose the best design solutions.
  • Good time management and multitasking skills; able to work to deadlines both independently and as part of a team.
  • Ability to analyse and understand complex problems.
  • Ability to explain technical information in business terms.
  • Ability to communicate clearly and effectively, both verbally and in writing.
  • Strong in user requirements gathering, maintenance and support.
  • Excellent understanding of Agile methodology.
  • Good experience in data architecture, data modelling and data security.

Experience (Must Have):

  • Scala: minimum 2 years of experience
  • Spark: minimum 2 years of experience
  • Hadoop: minimum 2 years of experience (security, Spark on YARN, architectural knowledge)
  • HBase: minimum 2 years of experience
  • Hive: minimum 2 years of experience
  • RDBMS (MySQL / Postgres / MariaDB): minimum 2 years of experience
  • CI/CD: minimum 1 year of experience

Experience (Good To Have):

  • Kafka
  • Spark Streaming
  • Apache Phoenix
  • Caching layer (Memcached / Redis)
  • Spark ML
  • FP (Scala cats / scalaz)

Qualifications: Bachelor's degree in IT, Computer Science, Software Engineering, Business Analytics or equivalent, with at least 2 years of experience in big data systems such as Hadoop as well as cloud-based solutions.
Job Perks:

  • Attractive variable compensation package
  • Flexible working hours - everything is results-oriented
  • Opportunity to work with an award-winning organization in the hottest space in tech: artificial intelligence and advanced machine learning

If you are interested, please share the below-mentioned details:

Current CTC:
Expected CTC:
Current Company:
Notice Period:
Current Location:
Preferred Location:
Total experience:
Relevant experience:
Highest qualification:
DOJ (if offer in hand from another company):
Offer in hand:

If you are interested, send your resume to (Mail id) or connect with me on (Mobile no.).

Kritika Georaikar
HR Executive
Coders Brain Technology Pvt. Ltd.
M: hidden_mobile
E: hidden_email
W: www.codersbrain.com

Skills: hadoop, elastic search, rest api, data architecture, spark, data modelling, kafka, scala, elasticsearch, data security, big data

Employment Category:

Employment Type: Full time
Industry: IT Services & Consulting
Role Category: Not Specified
Functional Area: Not Specified
Role/Responsibilities: Big Data Engineer (Scala, Hadoop, Kafka, Elastic)

Contact Details:

Company: CodersBrain
Location(s): Karnataka





₹ Not Disclosed

Similar positions

Data Analytics Lead (Consulting) | PAN India

  • Envecon Global
  • 10 to 14 Yrs
  • 3 days ago
₹ Not Disclosed

Talend Data Steward | 8Y+ | Pune | Bangalore |

  • Hitex Healthcare
  • 8 to 12 Yrs
  • All India
  • 3 days ago
₹ Not Disclosed

Senior Business Analyst, Data Science

  • Sl Right Hr Consultant
  • 3 to 7 Yrs
  • Karnataka
  • 4 days ago
₹ Not Disclosed

SAP Data Quality Lead

  • Tech Mahindra Ltd.
  • 5 to 9 Yrs
  • Karnataka
  • 4 days ago
₹ Not Disclosed

Fidelis Technology

Fidelis Technologies is an ISO-certified IT managed services and technology firm. Incorporated in 2010 in Bangalore, it operates in managed IT services, IT consulting, staffing services, and skilling. It has offices in India, Singapore, the Middle East, and the USA, and has more than 250 serviceable location...