Job Url: https://www.linkedin.com/jobs/search/?currentJobId=4348306962&f_AL=true&f_TPR=r86400&f_WT=2&keywords=software%20engineer&origin=JOB_SEARCH_PAGE_JOB_FILTER&start=200

AI/ML Software Engineer
Glocomms · United States (On-site) · Full-time
$160K/yr - $180K/yr
Posted 20 hours ago · Over 100 applicants

Meet the hiring team
Charles Tulio, Recruitment Consultant - Emerging Tech & Data Analytics (Job poster)

About the Role
As a Senior Software Engineer on the AI Platform team, you'll design and build infrastructure that supports model training, validation, deployment, and serving at scale. You will work with modern AWS-native technologies, focusing on low-latency microservices, automated pipelines, and robust deployment workflows to enable safe and efficient delivery of machine learning models into production. This role is ideal for someone who enjoys building platforms and tools that simplify complexity for ML and data science teams, and who thrives in fast-paced environments where engineering excellence and reliability are paramount.

What You'll Do
- Build and maintain scalable systems and infrastructure for deploying and serving ML models.
- Design low-latency, fault-tolerant model inference systems using Amazon SageMaker.
- Implement safe deployment strategies such as blue/green deployments and rollbacks.
- Create and manage CI/CD pipelines for ML workflows.
- Monitor model performance and system health using AWS observability tools.
- Develop internal tools and APIs to help ML teams deploy and monitor models easily.
- Collaborate with ML engineers, data scientists, and DevOps to productionize new models.
- Participate in code reviews, system design, and platform roadmap discussions.
- Continuously improve the deployment reliability, speed, and usability of the ML platform.

What You Bring
- 4+ years of software engineering experience, with at least 2 years focused on low-latency, highly available backend systems.
- Bachelor's or Master's degree in Computer Science, Data Science, AI, Machine Learning, or a related field.
- Strong fundamentals in data structures, algorithms, and distributed computing.
- Proficiency in Python; familiarity with Go or Rust is a plus.
- Hands-on experience with model serving systems, model registries, and pipeline orchestration (preferably SageMaker).
- Solid understanding of MLOps best practices, including versioning, testing, deployment, and reproducibility.
- Experience building and maintaining CI/CD pipelines for ML workflows.
- Familiarity with ML frameworks such as TensorFlow, PyTorch, or Scikit-learn.
- Experience with SQL/NoSQL databases or data warehouses such as Snowflake or Redshift.

Preferred Qualifications
- Experience building internal ML platform services or self-service tooling for model deployment and monitoring.
- Knowledge of model optimization techniques (TorchScript, ONNX, quantization, batching).
- Experience with feature stores, real-time feature serving, or caching systems for ML workloads.
- Background in deploying ML models into high-availability, mission-critical environments.
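To give a concrete sense of the blue/green deployment and rollback work the role describes, here is a minimal sketch of building a SageMaker DeploymentConfig for a canary-style blue/green rollout with alarm-driven automatic rollback. The endpoint, endpoint-config, and alarm names are hypothetical placeholders; this is an illustration of the boto3/SageMaker API shape, not the team's actual setup.

```python
def blue_green_deployment_config(alarm_names, canary_percent=10, bake_seconds=300):
    """Build a SageMaker DeploymentConfig for a canary-style blue/green
    rollout with CloudWatch-alarm-driven automatic rollback."""
    return {
        "BlueGreenUpdatePolicy": {
            "TrafficRoutingConfiguration": {
                # Shift a small slice of traffic to the new (green) fleet first
                "Type": "CANARY",
                "CanarySize": {"Type": "CAPACITY_PERCENT", "Value": canary_percent},
                # Bake time before shifting the remaining traffic
                "WaitIntervalInSeconds": bake_seconds,
            },
            # Keep the old (blue) fleet briefly in case a rollback is needed
            "TerminationWaitInSeconds": 120,
        },
        "AutoRollbackConfiguration": {
            # If any of these CloudWatch alarms fire, traffic reverts to blue
            "Alarms": [{"AlarmName": name} for name in alarm_names],
        },
    }

config = blue_green_deployment_config(["ml-endpoint-5xx-errors"])  # hypothetical alarm

# With AWS credentials configured, the rollout would be triggered like so:
# import boto3
# sm = boto3.client("sagemaker")
# sm.update_endpoint(
#     EndpointName="my-model-endpoint",         # hypothetical endpoint name
#     EndpointConfigName="my-model-config-v2",  # new (green) endpoint config
#     DeploymentConfig=config,
# )
```

The canary step plus `AutoRollbackConfiguration` is what makes the deployment "safe": a latency or error-rate alarm on the canary slice rolls traffic back to the old fleet before most users ever hit the new model version.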