Job Url: https://www.linkedin.com/jobs/search/?currentJobId=4344025744&distance=25&f_AL=true&f_TPR=r86400&f_WT=2&geoId=103644278&keywords=software%20engineer&origin=JOB_SEARCH_PAGE_JOB_FILTER&refresh=true&start=350

Job Description:

Backend Engineer - AI Runtime
AskSLM · Austin, TX (Remote) · Full-time
Job poster: Anastasiia Bychek, Senior Recruitment Specialist

About Us
We are a stealth-mode startup building a new AI runtime. Our mission is to make advanced language models deployable, customizable, and secure across diverse environments.

Role
We are seeking a Backend Engineer (Node.js/NestJS) to extend our platform on top of our existing codebase. You'll build the proxy backend that interacts with our custom inference runtime, and extend our dashboards. This role requires strong backend engineering skills, the ability to integrate with existing systems, and comfort working closely with C++ engineers who are building low-level runtime features using CUDA.

Responsibilities

Proxy Backend for Inference Runtime
- Build and maintain a Node.js-based proxy backend that:
  - Accepts inference requests from the frontend.
  - Schedules and serializes prompts.
  - Manages QKV cache load/unload (via API hooks from the C++ runtime).
  - Provides APIs to manage LoRA adapters.
- Integrate with authentication, RBAC, and logging already provided by the existing stack.
- Expose metrics and logs for monitoring inference usage and performance.

Dashboards
- Extend the existing dashboard: dataset upload, training job view, model management, inference usage, request history, and adapter selection.
- Reuse existing auth, billing, and user management code (Auth0, Stripe).
- Add backend endpoints as needed to support new UI flows.

Core Stack & Infrastructure
- Develop using NestJS as the main backend framework.
- Work with PostgreSQL, Redis, MongoDB, and HashiCorp Vault for persistence, caching, and secrets.
- Use Socket.IO for real-time updates (job status, inference progress).
- Ensure secure integration with Stripe (billing) and Auth0 (identity).
- Collaborate with DevOps on deployment pipelines.

Requirements
- Deep knowledge of JavaScript and TypeScript.
- Strong experience with Node.js and the NestJS framework.
- Proficiency with PostgreSQL and Redis for persistence and caching.
- Hands-on experience with Socket.IO or other WebSocket libraries.
- Experience with secure configuration and secrets management (HashiCorp Vault preferred).
- Experience with JWKS.
- Comfort working with microservices and integrating with existing codebases.
- Strong debugging and systems thinking; able to reason about scheduling, state management, and concurrency.

Nice to Have
- Experience integrating with AI runtimes (gRPC/REST backends for inference).
- Experience with RAG and MCP.
- Experience with authentication/authorization frameworks (Auth0, JWT, RBAC).
- Familiarity with the Stripe API or similar billing systems.
- Contributions to backend open-source projects.
- Experience with WebRTC.

Why Join
- Extend a proven SaaS foundation into a new AI runtime platform.
- Work directly with a C++ systems team building custom inference features.
- Build real products (dashboards and runtime APIs) used by vendors and customers.
- Competitive compensation, equity potential.
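To make the "schedules and serializes prompts" responsibility concrete, here is a minimal TypeScript sketch of a scheduler that runs inference requests strictly one at a time. The `runInference` callback and the `InferenceRequest` shape are hypothetical stand-ins for the real C++ runtime client, which is not described in the posting.

```typescript
// Hypothetical request shape; the real proxy would carry more fields
// (model id, adapter selection, user identity, etc.).
type InferenceRequest = { prompt: string };

class PromptScheduler {
  // Tail of the promise chain; each new request runs after the previous settles.
  private tail: Promise<unknown> = Promise.resolve();

  schedule<T>(
    req: InferenceRequest,
    runInference: (r: InferenceRequest) => Promise<T>,
  ): Promise<T> {
    const result = this.tail.then(() => runInference(req));
    // Swallow failures here so one bad request does not wedge the queue;
    // callers still see the rejection on `result`.
    this.tail = result.catch(() => undefined);
    return result;
  }
}

// Demo: two requests submitted concurrently still execute in order.
const scheduler = new PromptScheduler();
const run = async (r: InferenceRequest) => {
  console.log("running", r.prompt);
  return r.prompt;
};
void scheduler.schedule({ prompt: "first" }, run);
void scheduler.schedule({ prompt: "second" }, run);
```

A real implementation would likely add priorities, batching, and backpressure, but chaining on a single promise tail is the simplest way to guarantee serialization in Node's single-threaded model.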
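The requirements list JWKS experience. As an illustration of what that involves, here is a sketch of one step: picking the signing key out of a JWKS document by the token header's `kid`. In production a library such as `jose` handles fetching, caching, and signature verification; this shows only the lookup logic, and the interfaces below cover just the JWK fields needed for it.

```typescript
// Subset of RFC 7517 JWK fields relevant to key selection.
interface Jwk {
  kid: string;
  kty: string;
  use?: string; // "sig" for signing keys, "enc" for encryption keys
  n?: string;
  e?: string;
}

interface Jwks {
  keys: Jwk[];
}

// Return the signing key matching `kid`, rejecting keys marked for encryption.
function findSigningKey(jwks: Jwks, kid: string): Jwk {
  const key = jwks.keys.find(
    k => k.kid === kid && (k.use === undefined || k.use === "sig"),
  );
  if (!key) throw new Error(`no signing key with kid=${kid}`);
  return key;
}
```

The `use === undefined` case matters in practice: many providers omit the `use` member, and RFC 7517 makes it optional, so treating absence as "usable for signing" avoids rejecting valid keys.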
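The stack uses Socket.IO for real-time job status and inference progress. The event flow behind that can be sketched with Node's built-in EventEmitter standing in for the Socket.IO server's `emit` call; the `JobStatusBus` class, event name, and status shape here are illustrative assumptions, not the posting's actual API.

```typescript
import { EventEmitter } from "node:events";

// Hypothetical status payload for a training or inference job.
type JobStatus = { jobId: string; state: "queued" | "running" | "done" };

// In the real stack this would be a Socket.IO gateway pushing to connected
// clients; EventEmitter keeps the sketch dependency-free.
class JobStatusBus extends EventEmitter {
  publish(status: JobStatus): void {
    this.emit("job:status", status);
  }
}

// Demo: a subscriber (e.g. the dashboard gateway) sees transitions in order.
const bus = new JobStatusBus();
bus.on("job:status", (s: JobStatus) => console.log(`${s.jobId}: ${s.state}`));
bus.publish({ jobId: "train-42", state: "queued" });
bus.publish({ jobId: "train-42", state: "running" });
bus.publish({ jobId: "train-42", state: "done" });
```

Swapping the emitter for a Socket.IO server is mostly mechanical (`io.to(room).emit("job:status", status)`), which is why decoupling the publishing side from the transport is a common design choice.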