Job Title: MLOps Engineer
Company Name: Knowmadics, Inc
Location: Wichita, KS
Job Type: Full-time
Job Url: https://www.simplyhired.com/job/doH4hmQPo7mMK-D1UAN_zur7MBUacadhLi7JO-Hzm3HRu0x2rd4q7w

Full Job Description

Job Purpose/Summary
The MLOps Engineer designs, builds, and operates scalable machine learning systems that transform spatial-temporal and sensor-derived data into reliable ML workflows. The role spans the full ML lifecycle, from ingest, normalization, and feature-engineering pipelines through distributed training and evaluation to low-latency inference and operational integration. Working across data infrastructure and deployment environments, the engineer operationalizes experimental models into reproducible, observable, and scalable systems, ensuring that ML pipelines, containerized workloads, and CI/CD processes are robust, automated, and designed for real-world operational demands. In close collaboration with data scientists, geophysicists, and cross-functional engineering teams, this role translates research-grade algorithms into resilient services. As part of a fast-moving, government-funded technology business, the MLOps Engineer operates with high ownership in a low-ceremony, applied research environment, bringing structure, repeatability, and best practices to mission-driven sensor analytics systems.
Duties and Responsibilities
- Design, build, and operate scalable ML and data pipelines for spatial-temporal and sensor-driven datasets.
- Operationalize data science algorithms into reliable, distributed ML workflows covering feature extraction, training, evaluation, inference, and model lifecycle management.
- Implement and maintain containerized ML workloads in cloud-native environments.
- Integrate model outputs into downstream serving systems and analytical platforms to support web-based applications and operational decision-making.
- Develop and maintain CI/CD pipelines for ML and data services.
- Collaborate closely with data scientists to operationalize experimental models into reproducible, observable, and scalable production systems.
- Take ownership of MLOps practices within an applied research team, bringing structure, repeatability, and best practices to evolving environments.

Qualifications

Minimum:
- 3+ years of experience in MLOps, ML Engineering, Data Engineering, or closely related roles building and running ML/data pipelines.
- Strong Python data and ML stack experience, including tools such as Polars/Pandas, PyArrow, PySpark, and NumPy/SciPy.
- Experience integrating models built with frameworks such as PyTorch, TensorFlow, or Keras into scalable pipelines.
- Demonstrated experience working with temporal data, ideally including sensor-derived signals.
- Practical CI/CD experience for ML/data services using Git-based workflows.
- Experience working in AWS or similar cloud environments.
- Experience running containerized ML or data workloads in Kubernetes.
- Experience collaborating closely with data scientists to integrate algorithms.
- Eligible to obtain a U.S. Security Clearance; U.S. Citizenship required.

Preferred:
- Direct hands-on experience with sensor datasets such as seismographic data, cellular sensor modalities, RF survey data, or GPS devices.
- Experience deploying and scaling ML workloads in Kubernetes using KEDA or alternative event-driven autoscaling approaches.
- Experience building event-driven or streaming pipelines (e.g., Kafka, Spark, Flink, or Sedona) feeding lakehouse-style open table formats (e.g., Iceberg or Delta).
- Experience with SQL query engines (e.g., Trino, DuckDB, or Athena).
- Experience selecting and operating orchestration frameworks such as Airflow, Dask, Ray, or Spark for scalable ML workloads.
- Strong PostgreSQL experience, ideally with TimescaleDB and/or PostGIS, integrating ML outputs into operational databases.
- DevOps experience with Helm and GitOps tooling.
- Background in defense, cybersecurity, space, or other mission-driven sensor analytics environments.

Working conditions
Employees may be called upon to participate in in-person meetings, trainings, or company functions at Knowmadics offices or other designated locations. Travel in support of business operations may also be required, and employees are expected to fulfill these obligations as part of their position.

Physical requirements
May include sitting or standing for extended periods, working with computers and technical equipment, and occasionally lifting or moving materials or tools.

Direct reports
None