Company Name: Zettabyte Inc
Job Details: Be an Early Applicant · Hiring Remotely in United States · Remote · Senior level
Job Url: https://builtin.com/job/senior-staff-backend-engineer-distributed-system/7486134

Job Description:

About Us
At Zettabyte, we're on a mission to make AI compute ubiquitous, seamless, and limitless. We're building a cloud where AI just works, anywhere, anytime. "AI Power. Everywhere." Be part of the team designing the infrastructure for the AI-first world.

Why this role exists
We need a Backend Engineer to build the systems that orchestrate GPU clusters for AI workloads. You'll create APIs that handle GPU allocation, memory management, compute scheduling, and multi-tenant isolation: challenges unique to AI infrastructure that go far beyond typical backend engineering.

As part of our backend team, you'll solve problems like:
- How do we efficiently share expensive GPU resources across users?
- How do we handle GPU memory constraints for large AI models?
- How do we ensure quality of service when workloads compete for compute?
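The memory-constraint question above starts with simple arithmetic: weight memory is parameter count times bytes per parameter, plus working overhead. A minimal back-of-envelope sketch, where the overhead fraction and helper names are illustrative assumptions rather than Zettabyte's actual sizing logic:

```python
# Hedged sketch: back-of-envelope GPU memory sizing for model serving.
# The overhead fraction and byte counts are illustrative assumptions,
# not Zettabyte's actual sizing logic.

BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1}

def estimate_serving_gib(num_params: int, dtype: str = "fp16",
                         overhead_frac: float = 0.3) -> float:
    """Weight memory plus a flat overhead fraction for activations/KV cache."""
    weight_bytes = num_params * BYTES_PER_PARAM[dtype]
    return weight_bytes * (1 + overhead_frac) / 2**30

def fits_on_gpu(num_params: int, gpu_gib: float, dtype: str = "fp16") -> bool:
    """True if the estimated footprint fits in a single GPU's memory."""
    return estimate_serving_gib(num_params, dtype) <= gpu_gib

# Example: a 7B-parameter model in fp16 needs roughly 13 GiB for weights
# alone and about 17 GiB with overhead, so under this estimate it fits a
# 24 GiB GPU but not a 16 GiB one.
```

Real sizing also depends on batch size, sequence length, and the serving stack, which is exactly why this is treated as an infrastructure problem rather than a constant.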
This is an opportunity to build infrastructure where every API call could allocate thousands of dollars worth of compute per hour, and where your optimizations directly impact whether AI startups can afford to train their models.

What you'll do
- Design APIs that abstract complex GPU operations into simple developer experiences
- Build scheduling algorithms that maximize GPU utilization while ensuring SLA compliance
- Develop resource management systems for the GPU lifecycle: provisioning, allocation, scheduling, and release
- Create usage tracking and billing systems for GPU-hours, memory usage, and compute utilization
- Implement monitoring for GPU-specific metrics, health checks, and automatic failure recovery
- Build multi-tenancy systems with resource isolation, quota management, and fair scheduling
- Optimize cold starts for model serving and implement efficient model-loading strategies
- Collaborate with frontend engineers to expose complex infrastructure through intuitive interfaces
- Leverage AI-assisted coding tools (GitHub Copilot, Claude Code, Cursor IDE, etc.) to boost productivity and code quality

You'll thrive here if you have
- 5+ years of backend engineering experience with distributed systems
- Strong proficiency in Go, Python, or similar backend languages
- Experience with resource scheduling, orchestration, and API design (REST, GraphQL, gRPC)
- An understanding of hardware constraints and system optimization
- Linux systems knowledge and containerization experience (Docker, Kubernetes)
- Comfort working with expensive resources where efficiency directly impacts costs
- Excitement about solving novel problems in AI infrastructure (not just another CRUD app)
- A startup mindset: comfortable with ambiguity and rapid iteration

Bonus qualifications
- GPU or HPC cluster management experience
- Understanding of ML/AI workload patterns and requirements
- Experience with high-value resource allocation systems
- Background in performance optimization for compute-intensive workloads
- Familiarity with GPU virtualization and sharing technologies
- Experience building billing or metering systems

Details
We provide a competitive salary and meaningful equity, based on your experience and skill set.
This is a hybrid role: 3 days in office, 2 days WFH. You must be located in Palo Alto and able to commute to the local office.
Please note that this position is open to U.S. citizens and permanent residents only; visa sponsorship is not available.
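The multi-tenancy responsibilities listed under "What you'll do" (resource isolation, quota management) can be pictured with a minimal admission-check sketch. `GpuPool`, the per-tenant quota shape, and the two checks are illustrative assumptions, not Zettabyte's actual API:

```python
# Hedged sketch: quota-aware GPU allocation for a multi-tenant pool.
# GpuPool and its fields are illustrative, not a real Zettabyte interface.

from dataclasses import dataclass, field

@dataclass
class GpuPool:
    total_gpus: int
    quotas: dict[str, int]                        # per-tenant max concurrent GPUs
    in_use: dict[str, int] = field(default_factory=dict)

    def allocate(self, tenant: str, count: int) -> bool:
        """Grant only if both the tenant's quota and pool capacity allow it."""
        tenant_used = self.in_use.get(tenant, 0)
        pool_used = sum(self.in_use.values())
        if tenant_used + count > self.quotas.get(tenant, 0):
            return False                          # would exceed tenant quota
        if pool_used + count > self.total_gpus:
            return False                          # pool is exhausted
        self.in_use[tenant] = tenant_used + count
        return True

    def release(self, tenant: str, count: int) -> None:
        """Return GPUs to the pool, never going below zero for a tenant."""
        self.in_use[tenant] = max(0, self.in_use.get(tenant, 0) - count)
```

A production scheduler would layer fair-share ordering, preemption, and SLA-aware placement on top of this admission check; the sketch only shows the isolation invariant that no tenant exceeds its quota and no request oversubscribes the pool.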