ML Data Engineer

Recraft

Software Engineering, Data Science
London, UK
Posted on Nov 19, 2025

Location

London, UK

Employment Type

Full time

Location Type

On-site

Department

Engineering

About Us

Founded in the US in 2022 and now based in London, UK, Recraft is an AI tool for professional designers, illustrators, and marketers, setting a new standard for excellence in image generation.

We designed a tool that lets creators quickly generate and iterate on original images, vector art, illustrations, icons, and 3D graphics with AI. Over 3 million users across 200 countries have produced hundreds of millions of images using Recraft, and we’re just getting started.

Join a universe of professional opportunities, develop and support large-scale projects, and shape the future of creativity. We are committed to making Recraft an essential, daily tool for every designer and setting the industry standard. Our mission is to ensure that creators can fully control their creative process with AI, providing them with innovative tools to turn ideas into reality.

If you’re passionate about pushing the boundaries of AI, we want you on board!

Job Description

At Recraft, we’re building the next generation of generative models across images and text. We’re looking for an ML Data Engineer to scale our data pipelines for unstructured data (primarily images) and keep our training flows fast, reliable, and repeatable. You’ll design and operate high-throughput ingestion and preprocessing on Kubernetes, evolve our internal data-pipeline framework, and work hand-in-hand with ML engineers to ship datasets that move model quality forward.

Key Responsibilities

  • Build robust crawlers/scrapers to collect large-scale image (and occasional text/HTML) datasets from diverse sources.

  • Own the end-to-end flow: raw data → quality/beauty/relevance filtering → dedup/validation → ready-to-train artifacts.

  • Operate and improve our Kubernetes-based data-pipeline framework (distributed jobs, retries, monitoring, automation).

  • Work with S3-style object storage: efficient layouts, lifecycle, throughput, and cost awareness.

  • Add tooling around pipelines (progress/health visualization, metrics, alerts) for observability and faster iteration.

  • Collaborate closely with ML engineers to align datasets with training needs and accelerate experimentation.

Requirements

Must-have

  • Strong Python fundamentals; you write clean, maintainable, production-ready code.

  • Solid hands-on Kubernetes experience (containers, jobs, batch/distributed processing).

  • Proven track record with unstructured data, especially images (loading, filtering, transforming at scale).

  • Experience building web crawlers/parsers and handling real-world failure modes gracefully.

  • Comfort with S3/object storage and moving lots of data efficiently and safely.

  • Pragmatic, detail-oriented, ownership mindset; you enjoy making systems reliable and fast.

Nice-to-have

  • Familiarity with ML workflows (PyTorch) and downstream training considerations.

  • Experience with image quality scoring, captioning, or image-to-text pipelines.

  • Experience with DAG/workflow visualization or pipeline UX tooling.

  • DevOps fluency: Docker, CI/CD, infra automation.

What We Offer

  • Competitive salary and equity.

  • Skilled Worker visa sponsorship in the UK for qualified candidates.

  • Real impact on model quality: your pipelines directly power training runs and product improvements.

  • Ownership with support: autonomy to design and improve systems, alongside experienced ML peers.

  • Modern stack: Python, Kubernetes, S3, internal pipeline framework built for scale.

  • Growth: a fast-moving environment where shipping well-engineered systems is the norm.