
Senior MLOps Engineer (Training & Inference Optimization)

Multiverse Computing · Training & Inference Optimization · Today
Tags: Rust · Cloud & Infrastructure · Python · Kubernetes · Prometheus · Grafana

Job Description

Multiverse Computing

Multiverse is a well-funded, fast-growing deep-tech company founded in 2019. We are the largest quantum software company in the EU and have been recognized by CB Insights (2023 and 2025) as one of the 100 most promising AI companies in the world.
With 180+ employees and growing, our team is fully multicultural and international. We deliver hyper-efficient software for companies seeking a competitive edge through quantum computing and artificial intelligence.
Our flagship products, CompactifAI and Singularity, address critical needs across various industries:

  • CompactifAI is a groundbreaking compression tool for foundation AI models, built on Tensor Networks. It compresses large AI systems, such as language models, making them significantly more efficient and portable.

  • Singularity is a quantum- and quantum-inspired optimization platform used by blue-chip companies to solve complex problems in finance, energy, manufacturing, and beyond. It integrates seamlessly with existing systems and delivers immediate performance gains on classical and quantum hardware.

You’ll be working alongside world-leading experts to develop solutions that tackle real-world challenges. We’re looking for passionate individuals eager to grow in an ethics-driven environment that values sustainability and diversity.
We’re committed to building a truly inclusive culture—come and join us.

About the Role

We are seeking a Senior MLOps Engineer to steer the technical vision of our Training and Inference Optimization team. In this high-impact role, you will architect the infrastructure that powers our next-generation AI models. You will bridge the gap between systems programming and machine learning, optimizing large-scale LLM training via NVIDIA NeMo and building ultra-high-throughput serving systems using vLLM, TensorRT-LLM, and SGLang.

Your mission is to ensure our models are not only state-of-the-art but also production-hardened, cost-efficient, and performant at scale.

Key Responsibilities

  • Training Infrastructure: Architect and maintain scalable distributed training pipelines using NVIDIA NeMo/Nemotron/Megatron-Bridge. You will optimize GPU utilization, manage complex checkpointing strategies, and implement automated fault tolerance for long-running jobs.

  • Inference Orchestration: Lead the deployment of LLMs using vLLM, TensorRT-LLM, or SGLang. You will implement and tune cutting-edge techniques, including PagedAttention, continuous batching, and advanced quantization (AWQ/FP8), to maximize throughput and minimize TPOT (Time Per Output Token).

  • Workload Orchestration: Utilize SLURM/Flyte/Ray/SkyPilot to manage and scale ML workloads across diverse cloud providers and on-prem clusters, ensuring seamless resource shifting and cost-effective execution.

  • Lifecycle Management: Standardize model tracking, versioning, and transition workflows using MLflow (or a similar tool), ensuring reproducible training runs and a clear path from research to production.

  • Performance Engineering: Conduct deep-dive profiling and bottleneck analysis across the full stack, from CUDA kernels and NCCL collective communications to Python-level orchestration.

  • Efficiency & Cost Governance: Monitor and optimize cloud and on-prem GPU expenditures through intelligent scaling policies and high-density resource packing.

  • Technical Leadership: Set the bar for engineering excellence. You will drive the roadmap, perform rigorous code reviews, and mentor junior and mid-level engineers.
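As a side note for readers new to the serving techniques named above: the payoff of continuous batching over static batching can be shown with a toy scheduler. This sketch is purely illustrative (the functions and numbers are not from the listing or any real serving stack such as vLLM); it models each request only by the number of decode steps it needs.

```python
import heapq

def static_batch_steps(lengths, batch_size):
    """Total decode steps with static batching: each fixed batch
    occupies the GPU until its longest request finishes."""
    steps = 0
    for i in range(0, len(lengths), batch_size):
        steps += max(lengths[i:i + batch_size])
    return steps

def continuous_batch_steps(lengths, batch_size):
    """Total decode steps with continuous batching: as soon as any
    request in the batch finishes, a waiting request takes its slot."""
    pending = list(lengths)
    # Active slots are tracked by their finish time in a min-heap.
    active = [pending.pop(0) for _ in range(min(batch_size, len(pending)))]
    heapq.heapify(active)
    now = 0
    while active:
        now = heapq.heappop(active)          # earliest finishing request frees a slot
        if pending:
            heapq.heappush(active, now + pending.pop(0))
    return now

# One long request mixed with short ones: static batching stalls the
# whole batch, continuous batching backfills the freed slot.
lengths = [10, 2, 2, 2]
print(static_batch_steps(lengths, batch_size=2))      # 12 steps
print(continuous_batch_steps(lengths, batch_size=2))  # 10 steps
```

Real serving engines add PagedAttention-style KV-cache management on top of this scheduling idea so that admitting new requests mid-batch does not fragment GPU memory.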

Required Qualifications

  • Experience: 5+ years in MLOps, DevOps, or Software Engineering, with a minimum of 2 years dedicated to LLM infrastructure.

  • Deep Learning Ecosystem: Expert-level proficiency with PyTorch and the NVIDIA stack (CUDA, NCCL, Triton).

  • Specialized Tooling: Hands-on experience with NVIDIA NeMo (or Megatron-Bridge) for distributed training and at least two of the following for serving: vLLM, TensorRT-LLM, or SGLang.

  • Orchestration & Lifecycle: Proven experience with SLURM/Flyte/Ray/SkyPilot for cluster management and MLflow (or a similar tool) for experiment and model management.

  • Infrastructure: Deep expertise in Kubernetes and K8s operators (e.g., KubeRay, MPI Operator, or Run:ai).

  • Systems Programming: Mastery of Python and a functional understanding of C++ or Rust for performance-critical components.

  • Next-Gen Hardware: Familiarity with high-performance networking (InfiniBand/RoCE) and NVIDIA H200 (Hopper) and B200 (Blackwell) architectures.

Preferred Skills

  • Active contributions to relevant open-source projects (vLLM, SGLang, SkyPilot, or NeMo).

  • Proven track record with model compression (Sparsity, Distillation, or Quantization).

  • Experience writing or optimizing custom Triton kernels.

  • Expertise in ML observability stacks (Prometheus, Grafana, Jaeger).

Source: TeamTailor
The Rusty Bucket
Weekly curated Rust jobs delivered to your inbox.