Job Description
AI/ML Engineer (Specializing in NLP/ML, Large Data Processing, and Generative AI)
Job Summary
Synechron seeks a highly skilled AI/ML Engineer specializing in Natural Language Processing (NLP), Large Language Models (LLMs), Foundation Models (FMs), and Generative AI (GenAI). The successful candidate will design, develop, and deploy advanced AI solutions, contributing to innovative projects that transform monolithic systems into scalable microservices integrated with leading cloud platforms such as Azure, Amazon Bedrock, and Google Gemini. This role plays a critical part in advancing Synechron's capabilities in cutting-edge AI technologies, enabling impactful business insights and product innovations.
Software Requirements
Required Proficiency:
- Python (core libraries: TensorFlow, PyTorch, Hugging Face transformers, etc.)
- Cloud platforms: Azure, AWS, Google Cloud (familiarity with AI/ML services)
- Containerization: Docker, Kubernetes
- Version control: Git
- Data management tools: SQL, NoSQL databases (e.g., MongoDB)
- Model deployment and MLOps tools: MLflow, CI/CD pipelines, monitoring tools
Preferred Skills:
- Experience with cloud-native AI frameworks and SDKs
- Familiarity with AutoML tools
- Additional programming languages (e.g., Java, Scala)
Overall Responsibilities
- Design, develop, and optimize NLP models, including advanced LLMs and Foundation Models, for diverse business use cases.
- Lead the development of large data pipelines for training, fine-tuning, and deploying models on big data platforms.
- Architect, implement, and maintain scalable AI solutions in line with MLOps best practices.
- Transition legacy monolithic AI systems into modular, microservices-based architectures for scalability and maintainability.
- Build end-to-end AI applications from scratch, including data ingestion, model training, deployment, and integration.
- Implement retrieval-augmented generation techniques for enhanced context understanding and response accuracy.
- Conduct thorough testing, validation, and debugging of AI/ML models and pipelines.
- Collaborate with cross-functional teams to embed AI capabilities into customer-facing and enterprise products.
- Support ongoing maintenance, monitoring, and scaling of deployed AI systems.
- Document system designs, workflows, and deployment procedures for compliance and knowledge sharing.
Performance Outcomes:
- Production-ready AI solutions delivering high accuracy and efficiency.
- Robust data pipelines supporting training and inference at scale.
- Seamless integration of AI models with cloud infrastructure.
- Effective collaboration leading to innovative AI product deployment.
Technical Skills (By Category)
Programming Languages:
- Essential: Python (TensorFlow, PyTorch, Hugging Face, etc.)
- Preferred: Java, Scala
Databases/Data Management:
- SQL (PostgreSQL, MySQL), NoSQL (MongoDB, DynamoDB)
Cloud Technologies:
- Azure AI, AWS SageMaker, Bedrock, Google Cloud Vertex AI, Gemini
Frameworks and Libraries:
- Hugging Face Transformers, Keras, scikit-learn, XGBoost
Development Tools & Methodologies:
- Docker, Kubernetes, Git, CI/CD pipelines (Jenkins, Azure DevOps)
Security & Compliance:
- Knowledge of data security standards and privacy policies (GDPR, HIPAA as applicable)
Experience Requirements
- 8 to 10 years of hands-on experience in AI/ML development, especially NLP and Generative AI.
- Demonstrated expertise in designing, fine-tuning, and deploying LLMs, FMs, and GenAI solutions.
- Proven ability to develop end-to-end AI applications within cloud environments.
- Experience transforming monolithic architectures into scalable microservices.
- Strong background with big data processing pipelines.
- Prior experience working with cloud-native AI tools and frameworks.
- Industry experience in finance, healthcare, or technology sectors is advantageous.
Alternative Experience:
Candidates with extensive research or academic experience in AI/ML, especially in NLP and large-scale data processing, are eligible if they have practical deployment experience.
Day-to-Day Activities
- Develop and optimize sophisticated NLP/GenAI models fulfilling business requirements.
- Lead data pipeline construction for training and inference workflows.
- Collaborate with data engineers, architects, and product teams to ensure scalable deployment.
- Conduct model testing, validation, and performance tuning.
- Implement and monitor model deployment pipelines, troubleshoot issues, and improve system robustness.
- Document models, pipelines, and deployment procedures for audit and knowledge sharing.
- Stay updated with emerging AI/ML trends, integrating best practices into projects.
- Present findings, progress updates, and technical guidance to stakeholders.
Qualifications
- Bachelor's degree in Computer Science, Data Science, or a related field; Master's or PhD preferred.
- Certifications in AI/ML, Cloud (e.g., AWS, Azure, Google Cloud), or Data Engineering are a plus.
- Proven professional experience with advanced NLP and Generative AI solutions.
- Commitment to continuous learning to keep pace with rapidly evolving AI technologies.
Professional Competencies
- Strong analytical and problem-solving capabilities.
- Excellent communication skills, capable of translating complex technical concepts.
- Collaborative team player with experience working across global teams.
- Adaptability to rapidly changing project scopes and emerging AI trends.
- Innovation-driven mindset with a focus on delivering impactful solutions.
- Time management skills to prioritize and manage multiple projects effectively.
Job Classification
Industry: IT Services & Consulting
Functional Area / Department: Data Science & Analytics
Role Category: Data Science & Machine Learning
Role: Machine Learning Engineer
Employment Type: Full time
Contact Details:
Company: Synechron
Location(s): Pune
Keyskills:
NLP
continuous integration
kubernetes
python
natural language processing
dynamodb
data processing
ci/cd
microsoft azure
fms
artificial intelligence
docker
sql
microservices
nosql
pipeline
tensorflow
java
gcp
pytorch
jenkins
keras
big data
aws