Job Title: Machine Learning Lead
Company Name: Autolane
Job Details: Remote, Full Time
Job Url: https://hiring.cafe/viewjob/at2ivadkpzwaazbl

Job Description:
Posted 1w ago
Machine Learning Lead @ Autolane
Portland or San Francisco or Austin | Remote | Full Time

Responsibilities: architect models, coordinate agents, design training

Requirements Summary: 5+ years ML engineering; expert PyTorch; Graph Neural Networks; Transformers; multi-agent RL; production ML experience; cloud/MLOps; edge and robotics exposure

Technical Tools Mentioned: PyTorch, PyTorch Geometric, DGL, Transformers, GCP, Vertex AI, Cloud Run, Pub/Sub, ROS2, ONNX Runtime, TensorRT

Description

Machine Learning Lead
Location: Remote US (Bay Area, Austin preferred)

About Autolane
Autolane is on a mission to revolutionize last-mile logistics by empowering autonomous vehicle owners to unlock the value of their vehicles. Our flagship product is the industry's first orchestration layer for autonomous deliveries, coordinating heterogeneous autonomous systems (AVs, humanoid robots, delivery bots) to achieve zero-wait handoffs and maximum fleet utilization. We integrate directly with retailers, commercial real-estate operators, and AV fleets, building the AI infrastructure that enables autonomy at scale.

The Role
As Machine Learning Lead at Autolane, you'll architect and build the AI brain that orchestrates autonomous last-mile logistics.
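The requirements summary above centers on Graph Neural Networks for spatial reasoning. As a rough, illustrative sketch of the mechanism behind the Graph Attention Networks the posting names, here is a single-head graph-attention update in plain Python. This is hypothetical toy code, not Autolane's implementation; in practice this is what PyTorch Geometric's GATConv layer provides.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def gat_layer(h, adj, W, a):
    """One single-head graph-attention update (toy sketch).

    h:   list of node feature vectors
    adj: adjacency list per node (include the node itself for self-loops)
    W:   shared linear projection, given as rows (out_dim x in_dim)
    a:   attention parameter vector of length 2 * out_dim
    """
    def matvec(M, v):
        return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

    z = [matvec(W, hv) for hv in h]  # project every node's features
    out = []
    for i, nbrs in enumerate(adj):
        # attention logit e_ij = LeakyReLU(a . [z_i || z_j])
        logits = []
        for j in nbrs:
            e = sum(ak * vk for ak, vk in zip(a, z[i] + z[j]))
            logits.append(e if e > 0 else 0.2 * e)  # LeakyReLU, slope 0.2
        alpha = softmax(logits)  # normalize over the neighborhood
        # new node state: attention-weighted sum of neighbor projections
        agg = [sum(alpha[k] * z[j][d] for k, j in enumerate(nbrs))
               for d in range(len(z[i]))]
        out.append(agg)
    return out
```

Each node re-weights its neighbors' projected features by learned attention scores; stacking several such layers (the posting mentions six) lets information propagate across an agent-location-resource graph.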
You'll design and deploy the core learning systems, Graph Neural Networks for spatial reasoning, Transformers for temporal prediction, and Multi-Agent Reinforcement Learning for heterogeneous agent coordination, that enable our platform to optimize deliveries across AVs, humanoid robots, and delivery bots in real time.

You'll work directly with our CTO to build AI systems that scale from pilot deployments to thousands of coordinated deliveries per day, establishing the intelligence layer that makes autonomous logistics commercially viable.

Core Responsibilities

AI/ML Architecture & Model Development
- Graph Neural Networks: Design and implement 6-layer Graph Attention Networks for modeling spatial relationships between agents, locations, and resources using PyTorch Geometric
- Temporal Prediction: Build Transformer-based architectures for multi-horizon arrival-time prediction, task-duration forecasting, and optimal scheduling sequences
- Multi-Agent RL: Architect QMIX-based coordination systems with Conservative Q-Learning for safe exploration across heterogeneous agent types (Teslas, Unitree G1 humanoids, PUDU bots)
- Ensemble Systems: Design robust decision-making through model diversity, weighted voting mechanisms, and uncertainty quantification with confidence-based fallbacks
- Real-time Inference: Optimize models for <100 ms inference latency in production environments

Heterogeneous Agent Coordination
- Agent Abstraction: Design unified state representations across vehicle types with distinct capability profiles
- Cooperative Policy Learning: Train agents to optimize joint actions, including vehicle routing, robot task assignment, and handoff timing
- Reward Engineering: Develop composite reward structures balancing efficiency, wait-time reduction, success rates, and safety constraints
- Cross-Agent Communication: Implement learned communication protocols for decentralized coordination

Simulation & Training Infrastructure
- Environment Design: Build high-fidelity simulation environments with physics engines for safe policy exploration
- Offline Training: Architect pipelines for learning from historical ridehail coordination data and synthetic scenarios
- Transfer Learning: Leverage logistics datasets and pre-trained models to accelerate domain adaptation
- Online Learning: Design shadow-mode deployment, A/B testing infrastructure, and continuous learning with replay buffers

Production ML Systems
- MLOps Pipeline: Build end-to-end training, validation, and deployment infrastructure on GCP
- Model Monitoring: Implement drift detection, performance tracking, and automated retraining triggers
- Feature Engineering: Design spatial graph construction, temporal sequence encoding, and agent state representation pipelines
- Safety Validation: Ensure policy safety through Conservative Q-Learning, human-in-the-loop validation, and confidence thresholds

Edge AI Integration
- Model Optimization: Quantize and optimize models for edge deployment alongside embedded systems
- Sensor Fusion: Integrate ML predictions with edge sensor data (cameras, LiDAR, ultrasonic) for ground-truth validation
- Hybrid Architecture: Design cloud-edge inference strategies balancing latency and computational requirements

Required Qualifications

Technical Foundation
- 5+ years of machine learning engineering with production deployment experience
- Expert proficiency in PyTorch and deep learning frameworks
- Deep expertise with Graph Neural Networks (PyTorch Geometric, DGL) for relational reasoning
- Strong foundation in Transformer architectures and attention mechanisms
- Hands-on experience with Reinforcement Learning (single-agent and multi-agent systems)
- Proven ability to take models from research to production at scale

Core ML Competencies
- Proven experience with temporal sequence modeling and time-series prediction
- Working knowledge of model ensemble techniques and uncertainty quantification
- Strong foundation in optimization algorithms, hyperparameter tuning, and neural architecture search
- Ability to design and debug complex training pipelines with distributed computing

Production & Infrastructure Skills
- Experience building MLOps pipelines (training, validation, deployment, monitoring)
- Strong understanding of cloud ML infrastructure (GCP Vertex AI, Cloud Run, Pub/Sub preferred)
- Knowledge of model serving, latency optimization, and real-time inference
- Proven ability to build observable, debuggable ML systems in production environments

AI Development Fluency
- Active daily use of AI coding assistants (Claude Code, Cursor, GitHub Copilot) for ML development
- Demonstrated ability to leverage LLMs for rapid prototyping, debugging, and code generation
- Experience using AI tools for experiment tracking, documentation, and analysis

Preferred Qualifications

Advanced ML Experience
- Multi-Agent Reinforcement Learning algorithms (QMIX, MAPPO, COMA, VDN)
- Conservative Q-Learning or offline RL for safe policy learning
- Graph Attention Networks for dynamic graph reasoning
- Imitation learning and learning from demonstrations
- Hierarchical RL for multi-timescale decision making
- Sim-to-real transfer for robotics applications

Domain Experience
- Autonomous vehicles or robotics ML systems
- Fleet optimization or logistics scheduling
- Real-time coordination systems at scale
- Spatial-temporal prediction for transportation
- Multi-robot coordination or swarm intelligence

Robotics & Edge ML
- ROS2 integration for ML inference and sensor fusion
- ONNX Runtime or TensorRT for embedded deployment
- Model quantization and pruning for edge inference
- Sensor fusion with heterogeneous data sources
- Isaac Sim or Gazebo for robotics simulation

Research & Innovation
- Publications in top ML/robotics venues (NeurIPS, ICML, ICRA, CoRL)
- Experience translating research into production systems
- Open-source contributions to ML frameworks or RL libraries
- Familiarity with the latest advances in foundation models for robotics

Our AI Innovation Culture
At Autolane, we're building the intelligence layer for autonomous logistics, combining cutting-edge ML with real-world robotics to create systems that learn and adapt:
- Rapid Iteration: Move from Jupyter exploration to production deployment in days, not quarters
- AI-Augmented Development: Use LLMs to accelerate research, prototyping, and production code
- Real-World Impact: Your models will coordinate actual autonomous vehicles and robots in production
- Cross-Functional Innovation: Collaborate with embedded engineers, roboticists, and operations teams
- Research-to-Production: Bridge the gap between academic ML and deployed systems

Why Join Our AI/ML Team?
- Cutting-Edge Stack: Work with GNNs, Transformers, and MARL at the intersection of ML and robotics
- Direct Impact: Your algorithms will orchestrate millions of autonomous deliveries
- Technical Leadership: Work directly with the CTO and Head of R&D on architectural decisions
- Growth Trajectory: Build the AI foundation as we scale from pilots to nationwide deployment
- Innovation Freedom: Experiment with novel architectures, reward structures, and training paradigms
- Mission-Critical Work: Build the intelligence that makes autonomous logistics safe, efficient, and commercially viable

Working Environment & Requirements
- Location: Remote US, with Portland, Bay Area, or Austin preferred for occasional hardware collaboration
- Compute Resources: Access to GCP GPU clusters, TPUs, and simulation infrastructure
- Hardware Integration: Collaboration opportunities with Unitree G1 humanoids, Tesla vehicles, and delivery bots
- Collaboration: Direct partnership with the CTO and Head of R&D on architecture decisions
- Pace: Fast-moving startup environment where shipping working models matters

Interview Process Note
Be prepared to:
- Walk through ML systems you've designed and deployed to production
- Demonstrate your AI-augmented development workflow for research and prototyping
- Discuss trade-offs in model architecture selection (when to use a GNN vs. a Transformer vs. RL)
- Show examples of designing reward functions and training multi-agent systems
- Explain how you'd approach coordinating heterogeneous autonomous agents in real time
- Describe MLOps decisions you've made for reliable production deployment

Bonus points for:
- Showing working MARL systems or multi-agent coordination demos
- Metrics from deployed ML systems (latency, accuracy, business impact)
- Experience with robotics simulation (Isaac Sim, Gazebo) or real robots
- Creative solutions to sim-to-real transfer, sample efficiency, or safety constraints
- Publications or open-source contributions in relevant areas
- Real-world deployments involving autonomous vehicles or fleet optimization
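To ground the reward-engineering topic raised in the interview notes, here is a minimal sketch of a composite per-delivery reward of the kind the posting describes, balancing success rate, wait time, safety, and efficiency. All signal names and weights are hypothetical, chosen purely for illustration, not Autolane's actual reward design.

```python
def delivery_reward(completed, wait_s, safety_violation, energy_kwh,
                    w_complete=10.0, w_wait=0.05, w_safety=50.0, w_energy=0.1):
    """Composite delivery reward sketch (all weights hypothetical).

    completed:        whether the handoff/delivery succeeded this episode
    wait_s:           seconds of customer or robot idle time (efficiency)
    safety_violation: whether any safety constraint was breached
    energy_kwh:       energy spent, a proxy for fleet-utilization cost
    """
    r = 0.0
    if completed:
        r += w_complete           # reward successful zero-wait handoffs
    r -= w_wait * wait_s          # penalize wait time linearly
    if safety_violation:
        r -= w_safety             # safety penalty dominates all other terms
    r -= w_energy * energy_kwh    # small cost on energy / utilization
    return r
```

A common design choice, reflected here, is making the safety weight large enough that no combination of efficiency gains can offset a violation, which keeps policies learned against this reward conservative.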