Primarily looking for a data engineer with expertise in building and processing data pipelines using Spark and Databricks.

Must have: Databricks, Spark
Good to have: AWS S3, Snowflake, Talend

Requirements:
Candidate must be experienced working on projects involving the technologies above. Other ideal qualifications include experience in:
- Large-scale data operations using Spark on Databricks; overall very comfortable with Python
- Familiarity with AWS compute, storage, and IAM concepts
- Experience working with an S3 data lake as the storage tier
- Any ETL background (Talend, AWS Glue, etc.) is a plus but not required
- Cloud data warehouse experience (Snowflake, etc.) is a huge plus
- Carefully evaluates alternative risks and solutions before taking action
- Optimizes the use of all available resources
- Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit

Skills:
- Hands-on experience with Databricks and Spark
- Experience with shell scripting
- Exceptionally strong analytical and problem-solving skills
- Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses
- Strong experience with relational databases and data access methods, especially SQL
- Excellent collaboration and cross-functional leadership skills
- Excellent communication skills, both written and verbal
- Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment
- Ability to leverage data assets to respond to complex questions that require timely answers
- Working knowledge of migrating relational and dimensional databases to the AWS Cloud platform
Employment Category:
Employment Type: Full time
Industry: KPO
Functional Area: Analytics
Role Category: Other Analytics
Role/Responsibilities: Big Data Developer