Demand 1:
Mandatory Skills: 3.5-7 years (Big Data - Adobe, Scala, Python, Linux)
Demand 2:
Mandatory Skills: 3.5-7 years (Big Data - Snowflake (Snowpark), Scala, Python, Linux)
Specialist Software Engineer - Big Data
Missions
We are seeking an experienced Big Data Senior Developer to lead our data engineering efforts. In this role, you will design, develop, and maintain large-scale data processing systems. You will work with cutting-edge technologies to deliver high-quality solutions for data ingestion, storage, processing, and analytics. Your expertise will be critical in driving our data strategy and ensuring the reliability and scalability of our big data infrastructure.
Profile
3 to 8 years of experience in application development with Spark/Scala
Good hands-on experience working with the Hadoop ecosystem (HDFS, Hive, Spark)
Good understanding of Hadoop file formats (e.g., Parquet, ORC, Avro)
Good expertise in Hive/HDFS, PySpark, Spark, Jupyter Notebook, Talend (ELT), Control-M, Unix/shell scripting, Python, CI/CD, Git/Jira, Hadoop, TOM, Oozie, and Snowflake
Expertise in implementing data quality controls
Ability to interpret the Spark UI, identify bottlenecks in Spark jobs, and propose optimal solutions
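The data quality controls mentioned in the profile can be illustrated with a minimal plain-Python sketch; the record fields (`id`, `amount`) and check functions below are hypothetical examples, and in this role the same checks would typically run at scale on Spark DataFrames rather than Python lists.

```python
def null_rate(records, field):
    """Fraction of records where `field` is missing or None."""
    if not records:
        return 0.0
    missing = sum(1 for r in records if r.get(field) is None)
    return missing / len(records)

def duplicate_keys(records, key):
    """Return the set of key values that appear more than once."""
    seen, dupes = set(), set()
    for r in records:
        k = r.get(key)
        if k in seen:
            dupes.add(k)
        seen.add(k)
    return dupes

# Tiny illustrative dataset: one null amount, one duplicated id.
rows = [
    {"id": 1, "amount": 10.0},
    {"id": 2, "amount": None},
    {"id": 2, "amount": 7.5},
]
```

A typical control would fail the pipeline (or quarantine records) when `null_rate` exceeds an agreed threshold or `duplicate_keys` is non-empty.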
Tools
Ability to learn and work with tools such as IntelliJ, Git, Control-M, and SonarQube, and to onboard new frameworks into the project
Able to handle projects independently
Agile
Good to have exposure to CI/CD processes
Exposure to Agile methodology and processes
Others
Ability to understand complex business rules and translate them into technical specifications and designs
Writes highly efficient, optimized code that scales easily
Adherence to coding, quality, and security standards
Effective verbal and written communication skills for working closely with all stakeholders
Able to convince stakeholders of the proposed solutions
Key skills: Python, Snowpark, Big Data, Adobe, Scala, Linux, Snowflake