
Member of Technical Staff, Inference

Runway

IT
Remote
USD 240k-290k / year
Posted on Jan 16, 2026

We are building AI to simulate the world through merging art and science.

We believe that world models are at the frontier of progress in artificial intelligence. Language models alone won't solve the world's hardest problems: robotics, disease, scientific discovery. Real progress requires models that experience the world and learn from their mistakes, the same way that humans do. And this kind of trial and error can be massively accelerated when done in simulation, rather than in the real world.

World models offer the clearest path to general-purpose simulation, changing how stories are told, how scientific progress is made, and how the next frontiers of humanity are reached.

Our team consists of creative, open-minded, caring, and ambitious people who are determined to change the world. We aspire to continuously build impossible things, and our ability to do so relies on building an incredible team. If you are driven to do the same, we'd love to hear from you.

About the role

We're looking for an ML infrastructure engineer to bridge the gap between research and production at Runway. You'll work directly with our research teams to productionize cutting-edge generative models—taking checkpoints from training to staging to production, ensuring reliability at scale, and building the infrastructure that enables fast iteration.

You'll be embedded within research teams, providing platform support throughout the entire model development lifecycle. Your work will directly impact how quickly we can ship new models and features to millions of users.

A peek at our technical stack

Our API endpoints for real-time collaboration and media asset management are written in TypeScript and run in ECS containers on AWS Fargate. We leverage multiple AWS-native components, such as S3, CloudFront, Lambda, Kinesis, and SQS, as building blocks of our infrastructure.

Our inference backend is written in Python (PyTorch, TorchScript) and is deployed across multiple clusters and cloud providers. We use Kubernetes for container orchestration, along with k8s-native components such as Flyte, Kueue, and Kyverno for efficient job orchestration. We invest in Prometheus and Grafana for monitoring, and Terraform to manage our infrastructure.
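For flavor only, a minimal, hypothetical sketch (not Runway's actual code) of the micro-batching pattern commonly used in GPU inference servers like those described above: requests from many handler threads are grouped into small batches so the model can run one forward pass per batch rather than per request.

```python
import queue
import time


class MicroBatcher:
    """Collects incoming requests into small batches so a (hypothetical)
    GPU worker loop can run one forward pass per batch."""

    def __init__(self, max_batch_size=8, max_wait_ms=5):
        self.max_batch_size = max_batch_size
        self.max_wait = max_wait_ms / 1000.0  # wait budget in seconds
        self._queue = queue.Queue()

    def submit(self, item):
        # Called from request-handler threads.
        self._queue.put(item)

    def next_batch(self):
        """Block for the first item, then gather more until the batch
        is full or the wait budget is spent."""
        batch = [self._queue.get()]
        deadline = time.monotonic() + self.max_wait
        while len(batch) < self.max_batch_size:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                batch.append(self._queue.get(timeout=remaining))
            except queue.Empty:
                break
        return batch


# Usage: handler threads submit requests; a GPU loop consumes batches.
batcher = MicroBatcher(max_batch_size=4)
for i in range(10):
    batcher.submit(i)
first = batcher.next_batch()  # gathers up to 4 queued items
```

In a real serving system the batch would be collated into a tensor and handed to the model; the names and parameters here are illustrative assumptions, not part of any specific framework.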

What you’ll do

  • Productionize model checkpoints end-to-end: from research completion to internal testing to production deployment to post-release support
  • Build and optimize inference systems for large-scale generative models running on multi-GPU environments
  • Design and implement model serving infrastructure specialized for diffusion models and real-time diffusion workflows
  • Add monitoring and observability for new model releases—track errors, throughput, GPU utilization, and latency
  • Embed with research teams to gather training data, run preprocessing scripts, and support the model development process
  • Explore and integrate with GPU inference providers (Modal, E2E, Baseten, etc.)

What you’ll need

  • 4+ years of experience running ML model inference at scale in production environments
  • Strong experience with PyTorch and multi-GPU inference for large models
  • Experience with Kubernetes for ML workloads—deploying, scaling, and debugging GPU-based services
  • Comfortable working across multiple cloud providers and managing GPU driver compatibility
  • Experience with monitoring and observability for ML systems (errors, throughput, GPU utilization)
  • Self-starter who can work embedded with research teams and move fast
  • Strong systems thinking and pragmatic approach to production reliability
  • Humility and open mindedness; at Runway we love to learn from one another

Nice to Have

  • Experience building custom inference frameworks or serving systems
  • Deep understanding of distributed training and inference patterns (FSDP, data parallelism, tensor parallelism)
  • Ability to debug low-level issues: NCCL networking problems, CUDA errors, memory leaks, performance bottlenecks
  • Experience with diffusion models or video generation systems
  • Knowledge of real-time or latency-sensitive ML applications

Runway strives to recruit and retain exceptional talent from diverse backgrounds while ensuring pay equity for our team. Our salary ranges are based on competitive market rates for our size, stage and industry, and salary is just one part of the overall compensation package we provide.

There are many factors that go into salary determinations, including relevant experience, skill level and qualifications assessed during the interview process, and maintaining internal equity with peers on the team. The range shared below is a general expectation for the function as posted, but we are also open to considering candidates who may be more or less experienced than outlined in the job description. In this case, we will communicate any updates in the expected salary range.

Lastly, the provided range is the expected salary for candidates in the U.S. Outside of that region, the range may differ, which will again be communicated to candidates.

Salary range: $240,000-290,000

Working at Runway

Great things come from great teams. We’d love to hear from you.

We’re committed to creating a space where our employees can bring their full selves to work and have equal opportunity to succeed. So regardless of race, gender identity or expression, sexual orientation, religion, origin, ability, age, or veteran status, if joining this mission speaks to you, we encourage you to apply.

More about Runway

We're excited to be recognized as a best place to work by Crain's, InHerSight, BuiltIn NYC, and Inc.