Job Title: Machine Learning Engineer - Distributed ML Systems
Company Name: Pluralis Research
Job Details: Remote, Full-Time
Job Url: https://hiring.cafe/viewjob/ofgi0k687ddyfi6j

Job Description:

Machine Learning Engineer - Distributed ML Systems @ Pluralis Research
San Francisco or Melbourne | Remote | Full Time

Responsibilities: design training systems, optimize parallelism, build monitoring
Requirements Summary: 5+ years in distributed systems and large-scale ML training; senior/staff engineer level.
Technical Tools Mentioned: Python, DeepSpeed, Megatron, FSDP, gRPC

Overview

Pluralis Research carries out foundational research on Protocol Learning: multi-participant training of foundation models in which no single participant has, or can ever obtain, a full copy of the model. The purpose of Protocol Learning is to enable community-trained and community-owned frontier models with self-sustaining economics.
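To make the Overview concrete: the sketch below shows one way such sharding can look in plain PyTorch, with each participant instantiating only its own pipeline stage, so that only activations, never weights, cross the network. It is an illustrative toy under stated assumptions, not Pluralis's actual substrate; the names (Participant, STAGE_DIMS) are hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical three-stage split of a model; each tuple is (in_dim, out_dim).
STAGE_DIMS = [(512, 1024), (1024, 1024), (1024, 512)]

class Participant:
    """One node in the run; it holds a single stage and can never
    reconstruct the full model from what it sees locally."""

    def __init__(self, stage_id: int):
        in_dim, out_dim = STAGE_DIMS[stage_id]
        self.stage_id = stage_id
        self.stage = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())

    def forward(self, activations: torch.Tensor) -> torch.Tensor:
        # Only activations are exchanged between peers; the stage's
        # weights stay on this participant for the lifetime of the run.
        return self.stage(activations)

# Simulate one forward pass across three independent participants.
participants = [Participant(i) for i in range(len(STAGE_DIMS))]
x = torch.randn(8, 512)   # a microbatch entering stage 0
for p in participants:
    x = p.forward(x)      # each hop here stands in for a network transfer
print(x.shape)            # torch.Size([8, 512])
```

In the real system each of those hops would traverse the public internet (e.g., over gRPC), which is what makes the low-bandwidth, high-latency engineering described below hard.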
We're looking for Senior/Staff engineers with 5+ years of experience in distributed systems and large-scale ML training. You'll be implementing a novel substrate for training distributed ML models that works over consumer-grade internet connections.

Responsibilities

Distributed Training Architecture & Optimization
- Design and implement large-scale distributed training systems optimized for heterogeneous hardware operating under low-bandwidth, high-latency conditions.
- Develop and optimize model-parallel training strategies (data, tensor, pipeline parallelism) with custom sharding techniques that minimize communication overhead.
- Optimize GPU utilization, memory efficiency, and compute performance across distributed nodes.
- Implement robust checkpointing, state synchronization, and recovery mechanisms for long-running, fault-prone training jobs.
- Build monitoring and metrics systems to track training progress, model quality, and system bottlenecks.

Decentralized Networking & Resilience
- Architect resilient training systems where nodes can fail, networks can partition, and participants can dynamically join or leave.
- Design and optimize peer-to-peer topologies for decentralized coordination across non-co-located nodes.
- Implement NAT traversal, peer discovery, dynamic routing, and connection lifecycle management.
- Profile and optimize communication patterns to reduce latency and bandwidth overhead in multi-participant environments.

What You’ll Bring
- Strong experience building and operating distributed systems in production.
- Hands-on expertise with distributed training frameworks (FSDP, DeepSpeed, Megatron, or similar).
- Deep understanding of model parallelism (data, tensor, pipeline parallelism).
- Expert-level Python with production experience (concurrency, error handling, retry logic, clean architecture).
- Strong networking fundamentals: P2P systems, gRPC, routing, NAT traversal, distributed coordination.
- Experience optimizing GPU workloads, memory management, and large-scale compute efficiency.

What We Offer
- Equity-heavy compensation with meaningful ownership in a mission-driven company
- Competitive base salary for senior engineering roles in Australia
- Visa sponsorship available for exceptional candidates
- Remote-first with optional access to our Melbourne hub
- World-class team: teammates were previously at Google, Amazon, Microsoft, and leading startups

Backed by Union Square Ventures and other tier-1 investors, we're a world-class, deeply technical team of ML researchers and engineers. Pluralis is unapologetically ideological. We view the world as a better place if we succeed at what we are attempting, and we see Protocol Learning as the only plausible approach to preventing a handful of massive corporations from monopolising model development, access, and release, and from achieving massive economic capture. If this resonates, please apply.