Senior Software Engineer II, Cloud Data Pipeline
Nautilus Biotechnology
Software Engineering
Seattle, WA, USA
Employment Type: Full time
Location Type: Hybrid
Department: Software & Bioinformatics
Compensation: $164K – $221K • Offers Equity
The hiring pay range for this position is based on skills, education, and experience relevant to the role. New hires are typically hired into the low to midpoint of the range, enabling employee growth in the range over time. Actual placement in range is based on job-related skills and experience, as evaluated throughout the interview process. Pay ranges are adjusted based on cost of labor in each respective geographical market. Your recruiter can share more about the specific pay range for your location during the hiring process.
Benefits include medical, vision, and dental insurance, group and supplemental life insurance, 401k retirement plan, responsible paid time off, parental leave, and more. Other components of total compensation include a competitive options grant at the time of hire (with potential for additional grants).
At Nautilus, we have a big and important mission: improve the health of millions by unleashing the potential of the proteome to accelerate drug development and enable a new world of precision and personalized medicine. We are developing a single-molecule protein analysis platform of unprecedented sensitivity, scale, and ease of use that we believe will democratize access to the proteome -- one of the most dynamic and valuable sources of biological insight. To accomplish this, we are pursuing hard scientific problems with an entrepreneurial mindset and creating a world-class team of builders, innovators, and dreamers across a wide range of disciplines.
We are hiring a Senior Software Engineer II to join our Cloud Data Pipeline team. This team is responsible for the data infrastructure that transforms raw protein measurement data into the scientific outputs that our customers and internal researchers depend on. You'll own ETL pipelines, cloud infrastructure, and the APIs and databases that connect our data platform to the rest of the company. As Nautilus moves into commercial deployment, this role sits at the intersection of data engineering rigor and the practical demands of a production scientific platform. Your work will directly shape what our customers can learn from their experiments and how reliably they can trust the results.
This position will report to the Manager, Data Engineering and Cloud Pipelines and is located in Seattle, WA. The position is hybrid and requires a minimum of three days onsite.
Responsibilities
Design and implement data pipelines and ETLs that process protein measurement data at scale, turning instrument outputs into reliable, queryable scientific results.
Improve the architecture of existing cloud systems: identify structural weaknesses, propose better approaches, and drive implementation alongside the technical lead.
Maintain and evolve the APIs and database schemas that serve internal teams including bioinformatics, science, and product development, adapting as their needs grow.
Contribute to the team's DevOps practice: optimize AWS costs, manage cloud deployments, improve system security, and drive performance improvements through infrastructure changes.
Work cross-functionally with scientific and software teams to define data quality metrics, understand how downstream consumers use pipeline outputs, and ensure the platform meets their needs.
Surface and advocate for changes to project priorities and architecture across the cloud pipeline and adjacent projects.
Requirements
7+ years of relevant experience in a software engineering organization, with a strong track record of delivering production-quality systems.
Bachelor's degree in Computer Science or a related field, or equivalent practical experience.
Fluency in a variety of programming languages. We are currently invested in Python for our data pipelines.
Solid experience with cloud infrastructure on AWS including cost management and deployment practices.
Experience with CI/CD pipelines and infrastructure-as-code (e.g., Terraform, CDK).
Experience with relational and non-relational database design.
Demonstrated experience building and maintaining data pipelines or ETL systems at production scale.
Skilled in multiple technology domains with the ability to independently pick up new ones as needed.
Strong communication skills and comfort working across engineering, science, and product stakeholders.
Ability to recognize when a change in direction is necessary and to adapt effectively when it happens.
Familiarity with AI-driven development tools and methodologies.
Nice to Haves
Experience with Docker and container orchestration tools (Kubernetes, ECS).
Experience with workflow orchestration tools (e.g., Nextflow, Step Functions, Airflow, Prefect).
Experience with data observability, pipeline monitoring, or data quality frameworks.
Background in biotech, life sciences, or scientific data processing.
Familiarity with NoSQL data stores and when to use them alongside relational databases.