
Member of Technical Staff - Compute Infrastructure

xAI · Compute Infrastructure · Today
Rust · Cloud & Infrastructure · Kubernetes · Linux Kernel
$180,000 - $440,000

Job Description

About xAI

xAI’s mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. Our team is small, highly motivated, and focused on engineering excellence. This organization is for individuals who enjoy challenging themselves and thrive on curiosity. We operate with a flat organizational structure: all employees are expected to be hands-on and to contribute directly to the company’s mission. Leadership is given to those who show initiative and consistently deliver excellence. Strong work ethic and prioritization skills are important, and all employees are expected to communicate well, sharing knowledge with their teammates concisely and accurately.

ABOUT THE ROLE:

We are building one of the world’s largest AI supercomputers from the ground up. As part of the Compute Infrastructure team, you will own both the raw GPU supercomputer and the platform layer that runs on top of it. You will work across the full stack — from low-level GPU kernel optimizations and Linux kernel internals to massive-scale orchestration and virtualization — to make training and inference at xAI as fast, reliable, and scalable as possible.

This is a broad, high-impact role that combines hardcore supercomputing work with platform-level compute infrastructure. Your contributions will directly accelerate Grok’s training speed and overall AI progress.

RESPONSIBILITIES:

  • Design, build, and optimize massive GPU clusters for extreme-scale training and inference workloads
  • Develop and tune low-level CUDA kernels (GEMM, attention, etc.), using CUTLASS, Tensor Cores, and Nsight for maximum performance
  • Work on Linux kernel internals, scheduling, memory management, and resource isolation at cluster scale
  • Build custom container orchestration, virtualization layers (KVM, Firecracker, etc.), and distributed systems that go beyond standard Kubernetes
  • Profile, debug, and eliminate bottlenecks across GPU memory hierarchy, networking fabric, filesystems, and multi-GPU operations
  • Create and maintain infrastructure-as-code, automation, and tools that keep the entire supercomputer reliable and efficient
  • Collaborate closely with AI research teams to deliver production-grade performance and scalability

PREFERRED SKILLS AND EXPERIENCE:

  • Deep low-level systems programming (C/C++ or Rust)
  • Experience building and operating high-performance, exabyte-scale storage systems
  • Strong experience with large-scale GPU clusters or distributed compute infrastructure at production scale
  • Hands-on work with GPU kernel optimization (CUTLASS, custom kernels, Nsight profiling)
  • Experience with Linux kernel internals, scheduling, virtualization, or large-scale orchestration
  • Track record of building or running high-performance infrastructure for AI workloads (training or inference platforms)
  • Ability to reason from first principles and optimize for both memory-bound and compute-bound scenarios
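As a toy illustration of the memory-bound vs compute-bound reasoning the last bullet asks for, a roofline-style back-of-the-envelope check compares a kernel's arithmetic intensity (FLOPs per byte moved) to the machine's balance point. This is a minimal sketch in Rust; the hardware numbers are illustrative assumptions, not the specs of any real cluster:

```rust
// Toy roofline-style check: is a square fp16 GEMM memory-bound or
// compute-bound on a hypothetical accelerator? All hardware numbers
// below are illustrative assumptions.
fn main() {
    let peak_flops = 300e12_f64; // assumed peak compute: 300 TFLOP/s
    let mem_bw = 2e12_f64;       // assumed memory bandwidth: 2 TB/s
    // Machine balance: arithmetic intensity needed to saturate compute.
    let balance = peak_flops / mem_bw; // 150 FLOP/byte

    for n in [128u64, 1024, 8192] {
        let n = n as f64;
        // C = A * B with n x n matrices: 2n^3 FLOPs,
        // and (ignoring cache reuse) 3n^2 fp16 elements of traffic.
        let flops = 2.0 * n.powi(3);
        let bytes = 3.0 * n.powi(2) * 2.0; // 2 bytes per fp16 element
        let intensity = flops / bytes;
        let bound = if intensity < balance { "memory-bound" } else { "compute-bound" };
        println!("n = {:5}: {:7.1} FLOP/byte -> {}", n, intensity, bound);
    }
}
```

Under these assumptions, small GEMMs (n = 128, about 43 FLOP/byte) fall below the 150 FLOP/byte balance point and are bandwidth-limited, while large ones land far above it, which is why tiling and operand reuse matter so much in kernel work.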

COMPENSATION AND BENEFITS:

$180,000 - $440,000 USD

Base salary is just one part of our total rewards package at xAI, which also includes equity; comprehensive medical, vision, and dental coverage; access to a 401(k) retirement plan; short- and long-term disability insurance; life insurance; and various other discounts and perks.

xAI is an equal opportunity employer. For details on data processing, view our Recruitment Privacy Notice.

Source: Greenhouse