Job Title: Member of Technical Staff – Explainability Engineer (Alignment + SHAP)
Company Name: Elloe AI
Job Details: Remote | Full Time, Part Time
Job URL: https://hiring.cafe/viewjob/vr3dqae6dibzilyf

Type: Full-time | Location: Remote (United States) | Function: Alignment & Explainability | Reports to: CTO/CEO

About Elloe AI
Elloe is building the immune system for AI. We help organizations in healthcare, finance, and policy deploy GenAI systems that are explainable, auditable, and safe. Our stack fuses SHAP explainability, secure Vault sync, and red-team simulation defense, all built by engineers from MIT and Harvard.

About the Role
You'll own the design of SHAP-driven explainability and model alignment systems for GenAI in high-stakes industries. From human-in-the-loop correction to production-grade dashboards, you'll architect and ship the critical loops that make AI traceable, safe, and compliant for real-world GenAI. This means SHAP-layer visualizations, alignment experiments with AutoHeal, and deeply integrating human feedback into every model correction. You'll own the core loops that power how Elloe learns to heal itself.

What You'll Own
- Design and implement SHAP-based explanation layers for GenAI outputs
- Architect AutoHeal interventions for misaligned model behavior
- Build alignment experiments grounded in real-world compliance datasets
- Own deployment of explainability dashboards for internal and customer use

Who You Are
- MIT/Harvard background (CS, EECS, HST) with strong research-to-production instincts
- Familiar with SHAP, LIME, or other explainability libraries
- Able to design alignment experiments that are both empirical and operational
- Thrive in ambiguity and ship aligned features end to end

Why Now
GenAI is entering regulated sectors faster than governance can keep up. AutoHeal v4.2 is Elloe's leap toward traceable, compliant AI, arriving just as hospitals, banks, and policy orgs realize they need it. This is the right mission at the right moment.

You'll Leave This Role With
- Founder-level ownership of compliance AI infrastructure
- A deep technical legacy in one of GenAI's most urgent domains
- Equity in a company built to define real-world alignment
- A public portfolio of traceable, explainable AI systems

Logistics & Application
- Compensation: Salary + equity
- Commitment: Full-time or intense part-time (15+ hrs/week)
- Start: Rolling (ideally within 2–4 weeks)
- To Apply: Tell us how you'd extend SHAP to make multi-modal outputs traceable (a minimal single-modality starting point is sketched below)
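For applicants new to SHAP's text tooling, here is a minimal sketch of the kind of explanation layer the role describes, covering a single text modality only. It uses the open-source shap and transformers libraries; the sentiment-analysis pipeline and the example input are illustrative assumptions, not Elloe's actual stack, and a multi-modal extension would swap in the appropriate models and maskers per modality.

```python
# Minimal sketch: token-level SHAP attributions for a text model's output.
# Assumes the open-source `shap` and `transformers` libraries are installed;
# the sentiment pipeline stands in for whatever GenAI output head is wrapped.
import shap
import transformers

# Any text-classification pipeline can serve as the model under explanation.
model = transformers.pipeline(
    "sentiment-analysis",
    return_all_scores=True,  # expose per-class scores so SHAP can attribute each class
)

# shap.Explainer auto-selects a text masker for transformers pipelines,
# perturbing token spans to measure each span's contribution to the output.
explainer = shap.Explainer(model)

# Explain a single output; shap_values holds per-token, per-class attributions.
shap_values = explainer(["The claim was denied despite complete documentation."])

# In a notebook, render the interactive token-attribution view; the same
# shap_values object could instead feed an explainability dashboard.
shap.plots.text(shap_values)
```

The attributions in shap_values are the raw material for the dashboards the role mentions: persisting them alongside each generation is one plausible way to make an output traceable after the fact.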