Vacancy
AI Engineer (Agentic AI & LLM Systems)
We are looking for an experienced AI Engineer to join our growing AI team. You’ll play a key role in developing intelligent, agentic AI systems using cutting-edge large language models (LLMs), multi-agent orchestration, and retrieval-augmented generation (RAG). This is a hands-on role that combines software engineering and ML/NLP expertise with a passion for building next-generation autonomous agents.
You’ll collaborate closely with AI leads, backend engineers, data engineers, and product managers to bring scalable and intelligent systems to life—integrated into real-world procurement and business applications.
Key Responsibilities:
- Design and implement agentic AI pipelines using LangGraph, LangChain, CrewAI, or custom frameworks.
- Build robust RAG systems backed by vector databases (e.g., FAISS, Pinecone, OpenSearch).
- Fine-tune, evaluate, and deploy LLMs for task-specific applications.
- Integrate external tools and APIs into multi-agent workflows via dynamic tool/function calling (e.g., OpenAI function calling with JSON Schema).
- Develop memory modules such as short-term context, episodic memory, and long-term vector stores.
- Build scalable, cloud-native services using Python, Docker, and Terraform.
- Monitor and evaluate agent performance using tailored metrics (e.g., success rate, hallucination rate).
- Ensure secure, reliable, and maintainable deployment of AI systems in production environments.
Your profile:
- 7+ years of professional experience in machine learning, NLP, or software engineering.
- Strong proficiency in Python and experience with ML libraries such as PyTorch, TensorFlow, scikit-learn, and XGBoost.
- Hands-on experience with LLMs (e.g., GPT, Claude, LLaMA, Mistral) and NLP tooling such as LangChain and Hugging Face Transformers.
- Experience designing and implementing RAG pipelines with chunking, semantic search, and reranking.
- Familiarity with agent frameworks and orchestration techniques (e.g., planning, memory, role assignment).
- Deep understanding of prompt engineering, embeddings, and LLM architecture basics.
- Experience designing systems with role-based communication, coordination loops, and hierarchical planning, and optimizing agent collaboration strategies for real-world tasks.
- Solid foundation in microservice architectures, CI/CD, and infrastructure-as-code (e.g., Terraform).
- Experience integrating REST/GraphQL APIs into ML workflows.
- Strong collaboration and communication skills, with a builder’s mindset and willingness to explore new approaches.
Bonus Qualifications:
- Experience with RLHF, LoRA, or other parameter-efficient LLM fine-tuning techniques.
- Familiarity with CrewAI, AutoGen, Swarm, or other multi-agent libraries.
- Exposure to cognitive architecture concepts such as task trees, state machines, or episodic memory.
- Experience with prompt debugging and LLM evaluation practices.
- Awareness of AI security risks (e.g., prompt injection, data exposure).
Location: India - Remote
Posted: 24th September 2024