
Data Engineering Lead - Spark @ Brillio

Job Description

A day in the life of a Data Specialist at Brillio:

  • You will partner closely with the clients to deliver state-of-the-art outcomes by engineering solutions.

  • You will continue to expand your expertise in Data Science, Big Data, Analytics and Visualization tools/techniques daily to enable incremental innovation in ongoing projects

  • At all times, you will clearly articulate and communicate with a diverse set of internal and external customers with varying degrees of technical proficiency and deliver critical business and process-related information to mitigate any risks or failures

  • You will persistently look for opportunities to address customer needs by being a thought partner in every moment of engagement

For you to be successful, you must:

  • Build partnerships within and outside the team regardless of formal authority

  • Share personal knowledge and empower others by mentoring and fostering an environment of growth

  • Create value by anticipating and meeting the needs of internal and external customers, delivering high-quality results, and taking accountability for outcomes

  • Be open and flexible to accommodate and implement new ideas, understand business complexities, nurture innovation and challenge the status quo persistently

  • Grasp the available data, pay attention to detail, and strive to be a subject-matter expert in your chosen area of specialty through continuous learning and improvement

You'll bring this to the table:

  • Ability to work in ambiguous situations with unstructured problems and anticipate potential issues/risks

  • Demonstrated experience in building data pipelines in data analytics implementations such as Data Lake and Data Warehouse

  • At least 2 instances of end-to-end implementation of a data processing pipeline

  • Experience configuring or developing custom code components for data ingestion, data processing and data provisioning, using Big data & distributed computing platforms such as Hadoop/Spark, and Cloud platforms such as AWS or Azure.

  • Hands-on experience developing enterprise solutions, including designing and building frameworks, enterprise patterns, and database design and development, in 2 or more of the following areas:

    • End-to-end implementation of a Cloud data engineering solution:
      • AWS (EC2, S3, EMR, Spectrum, DynamoDB, RDS, Redshift, Glue, Kinesis), or
      • Azure (Azure SQL DW, Azure Data Factory, HDInsight, Cosmos DB, PostgreSQL, SQL on Azure)
    • End-to-end implementation of a Big data solution on the Cloudera/Hortonworks/MapR ecosystem
      • Real-time solutions using Spark Streaming and Kafka/Apache Pulsar/Kinesis
      • Distributed compute solutions (Spark/Storm/Hive/Impala)
      • Distributed storage and NoSQL storage (Cassandra, MongoDB, DataStax)
    • Batch solutions and distributed computing using ETL/ELT (SSIS/Informatica/Talend/Spark SQL/Spark DataFrame/AWS Glue/ADF)
    • DW-BI (MSBI, Oracle, Teradata), Data modeling, performance tuning, memory optimization/DB partitioning
    • Frameworks, reusable components, accelerators, CI/CD automation
    • Languages (Python, Scala)
  • Proficiency in data modelling, for both structured and unstructured data, for various layers of storage

  • Ability to collaborate closely with business analysts, architects and client stakeholders to create technical specifications

  • Ensure quality of code components delivered by employing unit testing and test automation techniques including CI in DevOps environments.

  • Ability to profile data, assess data quality in the context of business rules, and incorporate validation and certification mechanisms to ensure data quality

  • Ability to review technical deliverables, and to mentor and drive technical teams to deliver quality work.

  • Understand system architecture and provide component-level design specifications, covering both high-level and low-level design
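The pipeline-building and data-quality skills above can be illustrated with a minimal sketch in plain Python. In a real engagement this would typically be written with Spark DataFrames; the field names and business rules here are purely hypothetical examples:

```python
# Minimal batch ETL sketch: extract -> validate -> transform -> load.
# Plain-Python stand-in for what would normally be Spark DataFrame code;
# all field names and business rules below are hypothetical.

def validate(record):
    """Apply simple business rules; return a list of violations."""
    errors = []
    if not record.get("order_id"):
        errors.append("missing order_id")
    if float(record.get("amount", 0)) < 0:
        errors.append("negative amount")
    return errors

def transform(record):
    """Normalize a valid record for the target storage layer."""
    return {
        "order_id": record["order_id"].strip(),
        "amount": round(float(record["amount"]), 2),
    }

def run_pipeline(source_records):
    """Route records to a clean target or a quarantine area for review."""
    clean, quarantined = [], []
    for rec in source_records:
        errs = validate(rec)
        if errs:
            quarantined.append({"record": rec, "errors": errs})
        else:
            clean.append(transform(rec))
    return clean, quarantined

raw = [
    {"order_id": " A-1 ", "amount": "19.991"},
    {"order_id": "", "amount": "5.00"},
    {"order_id": "A-2", "amount": -3},
]
clean, bad = run_pipeline(raw)
```

Quarantining invalid rows instead of failing the whole batch mirrors the validation-and-certification approach the role calls for: bad records stay auditable while clean data flows to the target.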

It would be exceptional if you also have this:

  • Experience in building ground-up Data lake solutions

  • Experience providing support in building RFPs

  • Data governance using Apache Atlas, Falcon, Ranger, Erwin, Metadata Manager

  • Understanding of design patterns (Lambda architecture/Data lake/microservices)
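Of the patterns just listed, the Lambda architecture can be captured in a toy sketch: a batch layer holds a periodically recomputed view, a speed layer holds recent real-time increments, and a serving layer merges the two at query time. The metrics and numbers below are invented for illustration only:

```python
# Toy illustration of the Lambda architecture pattern:
# batch layer = periodically recomputed view; speed layer = recent
# real-time increments; serving layer = merge of both at query time.
# All metric names and counts are hypothetical.

batch_view = {"clicks": 1000, "signups": 40}   # recomputed nightly (batch layer)
speed_view = {"clicks": 27, "signups": 2}      # streaming increments (speed layer)

def query(metric):
    """Serving layer: combine batch and speed views for an up-to-date answer."""
    return batch_view.get(metric, 0) + speed_view.get(metric, 0)
```

In production the batch view might be built with Spark on a Data Lake and the speed view with Spark Streaming over Kafka, matching the stack named earlier in this posting.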

Job Classification

Industry: IT-Software, Software Services
Functional Area: IT Software - Application Programming, Maintenance,
Role Category: Programming & Design
Role: Programming & Design
Employment Type: Full time

Education

Under Graduation: B.Tech/B.E. in Computers
Post Graduation: Post Graduation Not Required

Contact Details:

Company: Brillio Technologies
Location(s): Bengaluru

Keyskills: System architecture, Data modeling, Database design, Informatica, Oracle, Teradata, Cosmos, Apache, SQL, Python


Salary: ₹ Not Disclosed

Brillio
