Role Overview:
As an MLOps Engineer, the resource will work collaboratively with Data Scientists and Data Engineers to deploy and operate systems. The resource will help automate and streamline our operations and processes, build and maintain tools for deployment, monitoring, and operations, and troubleshoot and resolve issues in development, testing, and production environments.
Roles and Responsibilities:
Operate and maintain systems supporting the provisioning of new clients, applications, and features.
Day-to-day monitoring of the Production service delivery environment to ensure all services and applications are operating optimally and SLAs are met.
Software deployment and configuration management in both QA and Production environments.
Collaborate with Data Scientists and Data Engineers on feature development teams to containerize new modules and build out their deployment pipelines.
Design, build, and optimize application containerization and orchestration with Docker and Kubernetes on AWS or Azure.
Automate application and infrastructure deployments.
Produce build and deployment automation scripts to integrate services.
Be a subject matter expert on DevOps practices, CI/CD, and configuration management for the assigned engineering team.
Experience with at least one cloud computing platform (Google Cloud, Amazon Web Services, or Azure) and with Kubernetes.
Experience with MLflow, Kubeflow, ML tracking, and ML experiments.
Experience with big data technologies preferred: Hadoop, Hive, Spark, Kafka.
Knowledge of machine learning frameworks: TensorFlow, Caffe/Caffe2, PyTorch, Keras, MXNet, scikit-learn.
Skills:
At least 3 years' experience working with cloud-based services and DevOps concepts, tools, and practices
Extensive experience with Unix/AIX/Linux environments
Experience with Kubernetes or Docker Swarm
Experience working in cross-functional Agile engineering teams
Familiarity with standard concepts and technologies used in CI/CD build and deployment pipelines
Experience with scripting and coding using Python and Shell
Experience with configuration management using tools such as Chef and Ansible
For more details, send your resume to Bh********e@ep****************y.com
Keyskills: TensorFlow, coding using Python, Hadoop, Kubeflow, Big Data, Kafka, big data technologies, Unix/AIX/Linux environments, ML tracking, scikit-learn, PyTorch, cloud, scripting, Hive, Shell, ML experiments, DevOps concepts, MLflow, Keras, Caffe, Spark
We at Epicenter are one of India's leading customer contact centres, providing voice and non-voice services in the areas of Collections, Sales, and Customer Service.
Company URL: www.epicentertechnology.com
Job Location: Bhayander West