Want to join KANINI?
We are looking for a Data Engineer who can build a robust database and its architecture. In this role, you will assess a wide range of requirements and apply relevant database techniques to create a sustainable data architecture before you begin the implementation process and develop the database from scratch.
You are all set to:
Develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows for Source-to-Target Data Mapping, among others.
You are someone who can:
Be involved in the design of data solutions using Hadoop-based technologies, along with Azure HDInsight for a Cloudera-based Data Lake, using Scala programming.
Liaise with and be part of our extensive GCP community, contributing to the platform's knowledge-exchange learning programme.
Showcase your GCP data engineering experience when communicating with business teams about their requirements, turning these into technical data solutions.
Build and deliver data solutions using GCP products and offerings.
Bring hands-on, deep experience working with Google data products (e.g. BigQuery, Dataflow, Dataproc, AI Building Blocks, Looker, Cloud Data Fusion, Dataprep, etc.).
Bring experience in Spark / Scala / Python / Java / Kafka.
Ingest data from files, streams, and databases, and process it with Hadoop, Scala, SQL databases, Spark, ML, and IoT technologies.
Develop programs in Scala and Python as part of data cleaning and processing.
Design and develop distributed, high-volume, high-velocity, multi-threaded event processing systems.
Develop efficient software code leveraging Python and Big Data technologies for the various use cases built on the platform.
Provide high operational excellence, guaranteeing high availability and platform stability.
Implement scalable solutions to meet ever-increasing data volumes, using big data and cloud technologies such as PySpark, Kafka, and any cloud computing platform.
You bring in:
A minimum of 4 years of experience in Big Data technologies
Good to have: experience in cloud data engineering (GCP, AWS, or Azure) with a background in Spark/Python/Scala/Java
Proficiency in any of these programming languages: Python, Scala, or Java
Mandatory: mid- to expert-level programming experience in a large-scale enterprise
In-depth experience with modern data platform components such as Hadoop, Hive, Pig, Spark, Python, Scala, etc.
Experience with distributed version control environments such as Git
Familiarity with development tools: experience with IntelliJ, Eclipse, or VS Code IDEs and the Maven build tool
Demonstrated experience in modern API platform design, including how modern UIs are built by consuming services/APIs
Experience on the Azure cloud, including Data Factory, Databricks, and Data Lake Storage, is highly preferred
Solid experience in all phases of the Software Development Lifecycle: plan, design, develop, test, release, maintain and support, and decommission
Your qualification is:
B.E/B.Tech/M.C.A/M.Sc (preferably in Computer Science/IT)