Job Description:
Design, develop, and maintain data pipelines and ETL processes using Databricks.
Manage and optimize data solutions on cloud platforms such as Azure and AWS.
Implement big data processing workflows using PySpark.
Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver effective solutions.
Ensure data quality and integrity through rigorous testing and validation.
Optimize and tune big data solutions for performance and scalability.
Stay updated with the latest industry trends and technologies in big data and cloud computing.
Qualifications:
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
Proven experience as a Big Data Engineer or in a similar role.
Strong proficiency in Databricks and cloud platforms (Azure/AWS).
Expertise in PySpark and big data processing.
Experience with data modeling, ETL processes, and data warehousing.
Familiarity with cloud services and infrastructure.
Excellent problem-solving skills and attention to detail.
Strong communication and teamwork abilities.
Preferred Qualifications:
Experience with other big data technologies and frameworks.
Knowledge of machine learning frameworks and libraries.
Certification in cloud platforms or big data technologies.
Job Classification
Industry: Banking
Functional Area / Department: Engineering - Software & QA
Role Category: Software Development
Role: Data Engineer
Employment Type: Full time
Contact Details:
Company: Virtusa
Location(s): Hyderabad
Keyskills:
data processing
pyspark
cloud platforms
databricks
big data
azure databricks
hive
cloud services
python
scala
big data technologies
microsoft azure
data warehousing
machine learning
java
data modeling
spark
hadoop
sqoop
aws
etl
cloud computing