Vacancy

AI Engineer (Agentic AI & LLM Systems)

We are looking for an experienced AI Engineer to join our growing AI team. You’ll play a key role in developing intelligent, agentic AI systems using cutting-edge large language models (LLMs), multi-agent orchestration, and retrieval-augmented generation (RAG). This is a hands-on role combining software engineering, ML/NLP expertise, and a passion for building next-gen autonomous agents.

You’ll collaborate closely with AI leads, backend engineers, data engineers, and product managers to bring scalable and intelligent systems to life—integrated into real-world procurement and business applications.

Key Responsibilities:

  1. Design and implement agentic AI pipelines using LangGraph, LangChain, CrewAI, or custom frameworks.
  2. Build robust retrieval-augmented generation (RAG) systems with vector databases (e.g., FAISS, Pinecone, OpenSearch); a retrieval sketch follows this list.
  3. Fine-tune, evaluate, and deploy LLMs for task-specific applications.
  4. Integrate external tools and APIs into multi-agent workflows using dynamic tool/function calling (e.g., OpenAI JSON schema); a function-calling sketch follows this list.
  5. Develop memory modules such as short-term context, episodic memory, and long-term vector stores.
  6. Build scalable, cloud-native services using Python, Docker, and Terraform.
  7. Monitor and evaluate agent performance using tailored metrics (e.g., success rate, hallucination rate).
  8. Ensure secure, reliable, and maintainable deployment of AI systems in production environments.
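
The two sketches below are minimal illustrations of the kind of work items 2 and 4 involve, not production code; every library, model name, and tool name in them is an assumption chosen for the example.

First, a minimal dense-retrieval step for a RAG system, assuming FAISS for the index and sentence-transformers for embeddings (Pinecone or OpenSearch would slot in similarly):

    # Minimal RAG retrieval sketch: embed a small corpus, index it, retrieve top-k chunks.
    # Assumes: pip install faiss-cpu sentence-transformers
    import faiss
    from sentence_transformers import SentenceTransformer

    documents = [
        "Purchase orders above 10,000 EUR require two approvals.",
        "Suppliers must complete onboarding before a contract is signed.",
        "Invoices are matched against the purchase order and the goods receipt.",
    ]

    # Illustrative embedding model; any sentence-embedding model works the same way.
    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    doc_vectors = encoder.encode(documents, normalize_embeddings=True).astype("float32")

    # Inner product over normalized vectors is cosine similarity.
    index = faiss.IndexFlatIP(doc_vectors.shape[1])
    index.add(doc_vectors)

    query = "Who has to approve a large purchase order?"
    query_vector = encoder.encode([query], normalize_embeddings=True).astype("float32")
    scores, ids = index.search(query_vector, 2)

    # In a full pipeline, the retrieved chunks are inserted into the LLM prompt as context.
    for score, doc_id in zip(scores[0], ids[0]):
        print(f"{score:.3f}  {documents[doc_id]}")

Second, a sketch of dynamic tool/function calling with an OpenAI-style JSON schema, as in item 4; the lookup_supplier tool and the model name are hypothetical, used only to show the wiring:

    # Tool-calling sketch with the official openai Python SDK (v1+); needs OPENAI_API_KEY set.
    import json
    from openai import OpenAI

    client = OpenAI()

    # JSON-schema description of one tool the model may call.
    tools = [{
        "type": "function",
        "function": {
            "name": "lookup_supplier",  # hypothetical tool, for illustration only
            "description": "Return basic master data for a supplier by name.",
            "parameters": {
                "type": "object",
                "properties": {"name": {"type": "string"}},
                "required": ["name"],
            },
        },
    }]

    def lookup_supplier(name: str) -> dict:
        # Placeholder; a real agent would call an internal procurement API here.
        return {"name": name, "status": "approved", "rating": "A"}

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": "Is ACME GmbH an approved supplier?"}],
        tools=tools,
    )

    # Dispatch any tool calls the model requested and inspect the results.
    message = response.choices[0].message
    for call in message.tool_calls or []:
        arguments = json.loads(call.function.arguments)
        print(call.function.name, "->", lookup_supplier(**arguments))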

Your profile:

  1. 7+ years of professional experience in machine learning, NLP, or software engineering.
  2. Strong proficiency in Python and experience with ML libraries such as PyTorch, TensorFlow, scikit-learn, and XGBoost.
  3. Hands-on experience with LLMs (e.g., GPT, Claude, LLaMA, Mistral) and NLP tooling such as LangChain and Hugging Face Transformers.
  4. Experience designing and implementing RAG pipelines with chunking, semantic search, and reranking.
  5. Familiarity with agent frameworks and orchestration techniques (e.g., planning, memory, role assignment).
  6. Deep understanding of prompt engineering, embeddings, and LLM architecture basics.
  7. Experience designing systems with role-based communication, coordination loops, and hierarchical planning, and optimizing agent collaboration strategies for real-world tasks.
  8. Solid foundation in microservice architectures, CI/CD, and infrastructure-as-code (e.g., Terraform).
  9. Experience integrating REST/GraphQL APIs into ML workflows.
  10. Strong collaboration and communication skills, with a builder’s mindset and willingness to explore new approaches.

Bonus Qualifications:

  1. Experience with RLHF, LoRA, or parameter-efficient LLM fine-tuning (a minimal LoRA sketch follows this list).
  2. Familiarity with CrewAI, AutoGen, Swarm, or other multi-agent libraries.
  3. Exposure to cognitive architectures like task trees, state machines, or episodic memory.
  4. Experience with prompt debugging and LLM evaluation practices.
  5. Awareness of AI security risks (e.g., prompt injection, data exposure).
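
For bonus item 1, a minimal parameter-efficient fine-tuning sketch using Hugging Face PEFT with a LoRA adapter; the base model and hyperparameters are illustrative assumptions, not a prescribed setup:

    # LoRA sketch: wrap a causal LM with low-rank adapters so only a small
    # fraction of parameters is trained. Assumes: pip install transformers peft
    from peft import LoraConfig, get_peft_model
    from transformers import AutoModelForCausalLM

    base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

    lora_config = LoraConfig(
        r=8,                                  # adapter rank
        lora_alpha=16,                        # scaling factor
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],  # attention projections to adapt
        task_type="CAUSAL_LM",
    )

    model = get_peft_model(base_model, lora_config)
    model.print_trainable_parameters()  # typically well under 1% of the base model
    # Training then proceeds with a standard Trainer or custom loop on the adapted model.
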
Apply for this job

Upload one file only (PDF, DOC, or DOCX; 5 MB limit).

Location

India - Remote

Posted

24th September 2024