Key Responsibilities:
Work on client projects to deliver AWS-, PySpark-, and Databricks-based data engineering and analytics solutions.
Build and operate large-scale data warehouses and data lakes.
Design, code, tune, and optimize ETL and big data processes using Apache Spark.
Build data pipelines and applications to stream and process datasets at low latency (see the sketch after this list).
Handle data efficiently: track data lineage, ensure data quality, and improve data discoverability.
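To illustrate the kind of low-latency pipeline work described above, here is a minimal PySpark Structured Streaming sketch that ingests JSON events from S3 into a Delta table. The bucket paths, event schema, and trigger interval are hypothetical placeholders, not details from this posting.

# A minimal sketch, assuming hypothetical S3 paths and a JSON event schema;
# not this employer's actual pipeline.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("events-ingest").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("event_time", TimestampType()),
])

# Stream raw JSON events from S3 (hypothetical bucket/prefix).
events = (
    spark.readStream
    .schema(schema)
    .json("s3://example-bucket/raw/events/")
)

# Light cleanup: drop malformed rows, add an ingest date for partitioning.
cleaned = (
    events
    .dropna(subset=["event_id", "event_time"])
    .withColumn("ingest_date", F.to_date("event_time"))
)

# Write to a Delta table; the checkpoint gives exactly-once micro-batches.
query = (
    cleaned.writeStream
    .format("delta")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/events/")
    .partitionBy("ingest_date")
    .trigger(processingTime="1 minute")
    .start("s3://example-bucket/delta/events/")
)

In a real deployment the trigger interval, schema handling, and checkpoint location would be tuned to the client's latency and recovery requirements.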
Technical Experience:
Minimum of 5 years of experience delivering Databricks engineering solutions on the AWS Cloud platform using PySpark and Databricks SQL, including data pipelines built on Delta Lake (a minimal upsert sketch follows below).
Minimum of 5 years of experience in ETL, Big Data/Hadoop, and data warehouse architecture and delivery.
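As a concrete illustration of Delta Lake pipeline delivery, the following sketch upserts a staging extract into a Delta dimension table using the Delta Lake Python API's MERGE. The table paths and the customer_id join key are hypothetical assumptions, not taken from this posting.

# A minimal upsert sketch using the Delta Lake Python API; paths and
# column names are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dim-upsert").getOrCreate()

# Staged changes extracted by an upstream ETL step (hypothetical path).
updates = spark.read.parquet("s3://example-bucket/staging/customers/")

# Existing Delta dimension table (hypothetical path).
target = DeltaTable.forPath(spark, "s3://example-bucket/delta/dim_customers/")

# MERGE keeps the dimension current without full rewrites, a common
# pattern when delivering warehouse loads on Delta Lake.
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)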
Email: ma*a@mo********t.com
Keyskills: Databricks, PySpark, Databricks Unified Data Analytics Platform, Oracle, ADF, Azure Databricks, Azure Blob Storage, Azure Logic Apps, Azure Data Factory, Azure Synapse, Data Factory, Azure Functions, Django, Microsoft Azure, Azure Data Lake, Data Lake, Python, Azure Storage
Systems Domain Pvt Ltd (ISO 9001:2008 certified): The current education system does not focus on training people in the skills that make them employable. Systems Domain bridges this gap, providing employable skills to young minds.