Who We Are
For 20 years, we have worked with organizations large and small to solve business challenges through technology. We bring a unique combination of engineering and strategy to Make Data Work for our clients.
Our clients span travel and leisure, publishing, retail, and banking. The common thread among them is a commitment to making data work, reflected in their sustained investment in those efforts.
In our quest to solve data challenges for our clients, we work with large enterprise, cloud-based, and marketing technology suites. Our deep understanding of these solutions helps clients make the most of their investment and run a data-driven business efficiently.
Softcrylic now joins forces with Hexaware to Make Data Work in bigger ways!
Why Work at Softcrylic?
Softcrylic provides an engaging, team-focused, and rewarding work environment where people are excited about the work they do and passionate about delivering creative solutions to our clients.
Work Timing: 12:30 PM to 9:30 PM (flexible)
Job Description:
Key Responsibilities:
1. Data Pipeline Development: Design, develop, and maintain large-scale data pipelines using Databricks, Apache Spark, and AWS.
2. Data Integration: Integrate data from various sources into a unified data platform using dbt and Apache Spark.
3. Graph Database Management: Design, implement, and manage graph databases to support complex data relationships and queries.
4. Data Processing: Develop and optimize data processing workflows using Python, Apache Spark, and Databricks.
5. Data Quality: Ensure data quality, integrity, and security across all data pipelines and systems.
6. Team Management: Lead and manage a team of data engineers, providing guidance, mentorship, and support.
7. Agile Scrum: Work with product owners, product managers, and stakeholders to create product roadmaps, schedule and estimate tasks in sprints, and ensure successful project delivery.
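As a rough illustration of the pipeline development and data quality responsibilities above, here is a minimal extract-transform-validate sketch in plain Python. The actual role uses Databricks, Spark, and AWS at scale; the function names and sample records here are hypothetical, but the staged pattern with a quality gate is the same.

```python
# Illustrative extract-transform-validate pipeline stage (pure Python).
# In the real stack, extract() would read from S3/Glue and transform()
# would be a Spark job; these names and records are hypothetical.

def extract(rows):
    """Simulate a source read (e.g. S3 via Glue in the real stack)."""
    return list(rows)

def transform(rows):
    """Normalize records: trim and lowercase names, cast amounts to float."""
    return [
        {"name": r["name"].strip().lower(), "amount": float(r["amount"])}
        for r in rows
    ]

def validate(rows):
    """Basic data-quality gate: reject empty names and negative amounts."""
    bad = [r for r in rows if not r["name"] or r["amount"] < 0]
    if bad:
        raise ValueError(f"{len(bad)} records failed quality checks")
    return rows

source = [{"name": " Alice ", "amount": "10.5"}, {"name": "Bob", "amount": "3"}]
clean = validate(transform(extract(source)))
```

Keeping each stage a separate function makes it straightforward to test stages in isolation and to fail fast when a quality check does not pass.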
Mandatory Skills:
1. Databricks: Experience with Databricks platform, including data processing, analytics, and machine learning.
2. AWS: Experience with AWS services, including S3, Glue, and other relevant services.
3. Python: Proficiency with Python programming language for data processing, analysis, and automation.
4. Graph Database: Experience with graph databases, such as Neo4j, Amazon Neptune, or similar.
5. Apache Spark: Experience with Apache Spark for large-scale data processing and analytics.
6. dbt (Data Build Tool): Experience with dbt for data transformation, modeling, and analytics.
7. Agile Scrum: Experience with Agile Scrum methodologies, including sprint planning, task estimation, and backlog management.
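To illustrate the kind of "complex data relationships and queries" a graph database handles, here is a plain-Python breadth-first traversal over an adjacency list. This is only a sketch with a made-up toy graph; engines like Neo4j or Amazon Neptune provide this natively at scale via query languages such as Cypher or Gremlin.

```python
from collections import deque

# Hypothetical toy relationship graph; a graph database stores
# nodes and edges like these natively and indexes them for queries.
graph = {
    "alice": ["bob", "carol"],
    "bob": ["dave"],
    "carol": ["dave"],
    "dave": [],
}

def reachable(graph, start):
    """Return the set of all nodes reachable from start via BFS."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen
```

A traversal like `reachable(graph, "alice")` answers a multi-hop relationship question that would require recursive joins in a relational database.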
Optional Skills:
1. dlt (Data Load Tool): Experience with data load tools for efficiently loading data into target systems.
2. Kubernetes: Experience with Kubernetes for container orchestration and management.
3. Bash Scripting: Proficiency with Bash scripting for automation and task management.
4. Linux: Experience with Linux operating system, including command-line interface and system administration.
Key Skills: AWS, Databricks, Python, S3, Airflow, Glue, dbt (Data Build Tool), Spark
Hexaware BPS is a unit of Hexaware Technologies Ltd. We are currently staffed at 2000+ people across Navi Mumbai (Mahape), Chennai, Nagpur, and the US. Ranked 15th in the NASSCOM Top 20 IT Software & Services Exporters from India, we also rank among the Top 20 Best IT employers in India by DQ-IDC fo...