Pyspark Developer / Data Engineer
Strong programming experience is required; PySpark is preferred.
Experience designing and implementing CI/CD pipelines, build management, and development strategy.
Experience with SQL and SQL analytical functions.
Experience participating in key business, architectural, and technical decisions.
Scope to be trained in AWS cloud technologies.
Proficient in leveraging Spark for distributed data processing and transformation.
Skilled in optimizing data pipelines for efficiency and scalability.
Experience with real-time data processing and integration.
Familiarity with Apache Hadoop ecosystem components.
Strong problem-solving abilities in handling large-scale datasets.
Ability to collaborate with cross-functional teams and communicate effectively with stakeholders.
Primary Skills:
PySpark
SQL
Secondary Skill:
Key skills: Azure, AWS, data engineering
Capgemini Technology Services India Limited
Capgemini in India is over 85,000 people strong across nine cities (Mumbai, Bangalore, Gurgaon, Noida, Gandhinagar, Hyderabad, Pune, Kolkata and Chennai - Trichy and Salem). A pioneer in the IT industry, Capgemini has over 45 years of global expertise ...