AI Engineer (Generative AI & MLOps) – Job Opportunity

Role Summary: We are seeking a talented AI Engineer with a strong focus on Generative AI and MLOps. In this role, you will build cutting-edge machine learning solutions for global clients, helping to solve real-world problems through innovative technology.

What We’re Looking For

We are looking for a candidate with a strong foundation in the following areas. This is not a checklist of hard requirements, but a guide to the ideal candidate profile.

  • 3-5 years of professional experience
  • 1-2 years of hands-on experience in AI/ML
  • Proficiency in Python
  • Experience with PyTorch or TensorFlow
  • Familiarity with frameworks such as Hugging Face and LangChain
  • Hands-on experience with LLMs and Generative AI (a short sketch follows this list)
  • Knowledge of Cloud MLOps on platforms like AWS, GCP, or Azure
  • Experience with containerization using Docker & Kubernetes
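
To give a flavour of the LLM and Generative AI experience mentioned above, here is a minimal sketch of text generation with the Hugging Face transformers library. The model name ("gpt2") and the prompt are illustrative placeholders only, not tools or data tied to any specific project.

```python
from transformers import pipeline

# Illustrative placeholder: any causal language model on the Hugging Face Hub
# could be used here; "gpt2" is chosen only because it is small and public.
generator = pipeline("text-generation", model="gpt2")

# Generate a short continuation of a prompt
result = generator("Generative AI helps teams", max_new_tokens=20)
print(result[0]["generated_text"])
```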

What You’ll Do: A Day in the Life

As an AI Engineer, you will be responsible for a wide range of tasks that span the full machine learning lifecycle.

  • Design & Develop Models: Create robust AI/ML models and algorithms with a focus on modern deep learning architectures.
  • Build MLOps Pipelines: Develop and maintain scalable pipelines for model training, versioning, and deployment using tools like MLflow (a minimal tracking sketch follows this list).
  • Leverage Generative AI: Work on exciting projects involving Generative AI, computer vision, and advanced NLP techniques like RAG.
  • Collaborate & Integrate: Partner with product and engineering teams to integrate AI-driven APIs and microservices into our products.
  • Monitor & Improve: Keep models reliable in production by maintaining CI/CD pipelines and monitoring for model and data drift.
  • Stay Ahead of the Curve: Continuously research and implement the latest AI advancements and industry best practices.
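
As a flavour of the MLOps tooling referenced in the list above, the following is a minimal sketch of experiment tracking with MLflow. The experiment name, parameters, and metric values are hypothetical placeholders, not details of our actual pipelines.

```python
import mlflow

# Hypothetical experiment name and values, for illustration only.
mlflow.set_experiment("demo-classifier")

with mlflow.start_run():
    # Record hyperparameters for this training run
    mlflow.log_param("learning_rate", 1e-3)
    mlflow.log_param("epochs", 10)

    # ... model training would happen here ...

    # Record evaluation metrics so runs can be compared in the MLflow UI
    mlflow.log_metric("val_accuracy", 0.91)
```

In practice, tracking calls like these sit inside the training pipeline alongside model versioning and deployment steps.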

Our Tech Stack

Our team uses a modern tech stack focused on building and deploying AI at scale. The following represents the key areas of expertise we are looking for:

  • Python
  • Deep Learning (PyTorch/TF)
  • LLM Frameworks (Hugging Face, LangChain)
  • MLOps (MLflow)
  • Cloud Platforms (AWS, GCP, Azure) and specific services like AWS SageMaker or GCP Vertex AI
  • Containerization (Docker, Kubernetes)
  • Version Control (Git)
  • API Development (FastAPI, Flask); a minimal serving sketch follows this list
  • Data Handling and Databases (SQL, Spark)
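
To illustrate the API Development item in the stack above, here is a minimal sketch of a model-serving endpoint built with FastAPI. The route, request schema, and placeholder scoring logic are assumptions for illustration; a real service would load and call a trained model.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PredictRequest(BaseModel):
    text: str

@app.post("/predict")
def predict(request: PredictRequest):
    # Placeholder scoring: a real endpoint would run inference
    # with a model loaded at startup.
    score = min(1.0, len(request.text) / 100)
    return {"score": score}
```

Such a service can be run locally with "uvicorn main:app --reload", assuming the file is saved as main.py.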

What We Offer

  • Flexible Remote Work: Enjoy the freedom and flexibility of a remote-first work environment.
  • Dynamic Team: Collaborate with an experienced, passionate, and supportive team.
  • Career Growth: Access opportunities for continuous learning and professional development.

Ready to make an impact with AI?

Experience: 3-5 years
Salary: Competitive
Location: Remote
Type: Full-time
Skills: AI/ML, AWS, Docker, Generative AI, Hugging Face, Kubernetes, LangChain, LLMs, Python, PyTorch, TensorFlow

Apply for this position

Allowed file types: .pdf, .doc, .docx