Job Title: Member of Technical Staff - Edge Inference Engineer
Company Name: Liquid AI
Job Details: Remote, Full Time
Job Url: https://hiring.cafe/viewjob/dt7xvtapmgbv9nyk
Location: San Francisco or Cambridge or United States

Responsibilities: optimizing stacks, deploying models, analyzing hardware

Requirements Summary: Extensive experience in inference on embedded hardware; expertise in CPU/NPU/GPU architectures; building edge inference stacks; proficiency in Python and PyTorch; strong hardware understanding; coding in Python, C++, or Rust; optimizing low-level primitives; self-guided with ownership.

Technical Tools Mentioned: Python, PyTorch, C++, Rust, TensorRT, llama.cpp, Executorch, CUDA

Job Description:

Work With Us
At Liquid, we’re not just building AI models—we’re redefining the architecture of intelligence itself. Spun out of MIT, our mission is to build efficient AI systems at every scale. Our Liquid Foundation Models (LFMs) operate where others can’t: on-device, at the edge, under real-time constraints. We’re not iterating on old ideas—we’re architecting what comes next.

We believe great talent powers great technology. The Liquid team is a community of world-class engineers, researchers, and builders creating the next generation of AI.
Whether you're helping shape model architectures, scaling our dev platforms, or enabling enterprise deployments—your work will directly shape the frontier of intelligent systems. While San Francisco and Boston are preferred, we are open to other locations.

This Role Is For You If:
- You are a highly skilled engineer with extensive experience in inference on embedded hardware and a deep understanding of CPU, NPU, and GPU architectures
- Proficiency in building and enhancing edge inference stacks is essential
- Strong ML Experience: Proficiency in Python and PyTorch to effectively interface with the ML team at a deeply technical level
- Hardware Awareness: Must understand modern hardware architecture, including cache hierarchies and memory access patterns, and their impact on performance
- Proficient in Coding: Expertise in Python, C++, or Rust for AI-driven real-time embedded systems
- Optimization of Low-Level Primitives: Responsible for optimizing core primitives to ensure efficient model execution
- Self-Guidance and Ownership: Ability to independently take a PyTorch model and inference requirements and deliver a fully optimized edge inference stack with minimal guidance

Desired Experience:
- Experience with mobile development and cache-aware algorithms will be highly valued

What You'll Actually Do:
- Optimize inference stacks tailored to each platform as we prepare to deploy our models across various edge device types, including CPUs, embedded GPUs, and NPUs
- Take our models, dive deep into the task, and return with a highly optimized inference stack, leveraging existing frameworks like llama.cpp, Executorch, and TensorRT to deliver exceptional throughput and low latency

What You'll Gain:
- Hands-on experience with state-of-the-art technology at a leading AI company
- A collaborative, fast-paced environment where your work directly shapes our products and the next generation of LFMs

About Liquid AI
Spun out of MIT CSAIL, we’re a foundation model company headquartered in Boston.
Our mission is to build capable and efficient general-purpose AI systems at every scale—from phones and vehicles to enterprise servers and embedded chips. Our models are designed to run where others stall: on CPUs, with low latency, minimal memory, and maximum reliability. We’re already partnering with global enterprises across consumer electronics, automotive, life sciences, and financial services. And we’re just getting started.