Job URL: https://www.remoterocketship.com/company/fortytwo-network/jobs/senior-mlops-engineer-united-states-remote

Fortytwo

Fortytwo is a decentralized AI network that allows anyone to contribute to planetary-scale intelligence by running AI models on everyday hardware. The platform enables users to participate in a community-driven AI ecosystem where multiple consumer devices collaborate to enhance the speed and accuracy of AI inference. By collectively processing user requests, Fortytwo's nodes deliver smarter, faster, and more efficient AI solutions, operating without centralized control to ensure openness and accessibility.

1–10 employees · Founded 2024 · 🤖 Artificial Intelligence · 🤝 B2B · ☁️ SaaS

Senior MLOps Engineer
6 days ago · 🇺🇸 United States – Remote · ⏰ Full Time · 🟠 Senior · 🤖 Machine Learning Engineer

Skills: Airflow · AWS · Azure · Cloud · ElasticSearch · Google Cloud Platform · Grafana · Kubernetes · Node.js · Prometheus · Python · Rust · Go

📋 Description
• Deploy scalable, production-ready ML services with optimized infrastructure and auto-scaling Kubernetes clusters.
• Optimize GPU resources using MIG (Multi-Instance GPU) and NOS (Node Offloading System).
• Manage cloud storage (e.g., S3) to ensure high availability and performance.
• Integrate state-of-the-art ML techniques, such as LoRA and model merging, into workflows:
  • Deploy and manage large language models (LLMs), small language models (SLMs), and large multimodal models (LMMs).
  • Serve ML models using technologies like Triton Inference Server.
  • Optimize models with ONNX and TensorRT for efficient deployment.
• Develop Retrieval-Augmented Generation (RAG) systems integrating spreadsheet, math, and compiler processors.
• Set up monitoring and logging solutions using Grafana, Prometheus, Loki, Elasticsearch, and OpenSearch.
• Write and maintain CI/CD pipelines using GitHub Actions for seamless deployment processes.
• Create Helm templates for rapid Kubernetes node deployment.
• Automate workflows using cron jobs and Airflow DAGs.

🎯 Requirements
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
• Proficiency in Kubernetes, Helm, and containerization technologies.
• Experience with GPU optimization (MIG, NOS) and cloud platforms (AWS, GCP, Azure).
• Strong knowledge of monitoring tools (Grafana, Prometheus) and scripting languages (Python, Bash).
• Hands-on experience with CI/CD tools and workflow management systems.
• Familiarity with Triton Inference Server, ONNX, and TensorRT for model serving and optimization.
• 5+ years of experience in MLOps or ML engineering roles.
• Experience with advanced ML techniques, such as multi-sampling and dynamic temperatures.
• Knowledge of distributed training and large-model fine-tuning.
• Proficiency in Go or Rust.
• Experience designing and implementing highly secure MLOps pipelines, including secure model deployment and data encryption.

🏖️ Benefits
• Engage in meaningful AI research – Work on decentralized inference, multi-agent systems, and efficient model deployment with a team that values rigorous, first-principles thinking.
• Build scalable and sustainable AI – Design AI systems that reduce reliance on massive compute clusters, making advanced models more efficient, accessible, and cost-effective.
• Collaborate with a highly technical team – Join engineers and researchers who are deeply experienced, intellectually curious, and motivated by solving hard problems.
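The Retrieval-Augmented Generation responsibility above can be illustrated with the core retrieval-then-augment step. This is a minimal, self-contained sketch using plain term-frequency cosine similarity — not Fortytwo's actual system, and real deployments would use embedding models and a vector store; all function names here are illustrative:

```python
import math
from collections import Counter

def tf_vector(text):
    """Term-frequency vector over lowercased whitespace tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse Counter vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Rank documents by similarity to the query and return the top k."""
    qv = tf_vector(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, tf_vector(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    """Prepend the retrieved context to the query -- the 'augmentation' step
    before the prompt is sent to the language model."""
    context = "\n".join(retrieve(query, docs, k=1))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

In a production pipeline the same two-stage shape holds: retrieval selects relevant context, and augmentation injects it into the model's prompt; only the similarity function and document store change.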