Job Title: Data Infrastructure Engineer (Real-Time Systems)
Company Name: Deeter Analytics
Job Url: https://jobs.ashbyhq.com/deeter-analytics/348aff39-42ef-4e59-be45-567ac680853b
Location: International
Employment Type: Full time
Department: Algo

Overview

At Deeter Analytics, we're building something that doesn't get built twice in a generation. Our goal is to create a fundamental trading model as capable as today's most advanced AI systems, but applied to global markets. Not incremental signals or isolated strategies, but a system that can continuously interpret, learn from, and act on the evolving state of the world.

We train on large-scale, real-time social data, capturing how narratives form, how sentiment propagates, and how collective behavior drives markets. This requires operating at the frontier of data infrastructure, model design, and compute, all tightly integrated into a single system.

You'll work alongside a small group of elite traders, engineers, and AI researchers, in an environment defined by speed and ownership. We run experiments continuously. Ideas move from concept to production in hours. And the feedback loop is immediate, measured directly in live performance.

About the role

You will build and operate the data systems that determine what the model sees and how it learns. This includes ingesting, structuring, and preparing large volumes of real-time data for training: reliably, efficiently, and at scale. We prefer systems that improve with scale over systems that rely on manual intervention.
What you'll work on

● Designing and operating high-throughput data ingestion pipelines from external providers
● Managing the flow of data into S3-based storage systems with a focus on reliability and correctness
● Building systems to structure and prepare raw data for model training
● Monitoring pipeline health and debugging failures quickly
● Improving data quality and consistency under real-world conditions
● Optimizing systems for throughput, latency, and cost efficiency
● Working closely with the modeling and infrastructure teams to support rapid experimentation

What we're looking for

We're looking for people who can take ownership of systems that matter, and make them work under real constraints.

Strong signals:

● You have built or operated systems that process large volumes of data reliably
● You have a strong understanding of AWS primitives (storage, compute, networking) and how they behave under load
● You design systems that are robust to failure and easy to evolve
● You are comfortable working with imperfect, high-volume data
● You have debugged systems where failures were visible and mattered
● You move quickly, identify problems, and fix them without being asked

Bonus signals:

● Experience building systems where data quality directly impacted downstream performance
● Experience working with real-time or streaming data systems
● Experience operating systems under scale, latency, or cost constraints
● You use AI tools to accelerate debugging, development, and iteration
● You care about turning raw data into something that is actually usable
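The ingestion bullets under "What you'll work on" describe a batched, retry-safe write path into object storage. As an illustrative sketch only (the posting does not specify a stack; the batch size, backoff values, and the `sink` callable below are hypothetical stand-ins for something like an S3 `put_object` call), a minimal version of that reliability pattern might look like:

```python
import time
from typing import Callable


class BatchedWriter:
    """Buffers records and flushes them to a sink in batches,
    retrying failed flushes with capped exponential backoff."""

    def __init__(self, sink: Callable[[list[bytes]], None],
                 batch_size: int = 1000, max_retries: int = 5):
        self.sink = sink              # hypothetical callable, e.g. a wrapper around an S3 put
        self.batch_size = batch_size
        self.max_retries = max_retries
        self.buffer: list[bytes] = []

    def write(self, record: bytes) -> None:
        """Append one record; flush automatically when the batch is full."""
        self.buffer.append(record)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        """Push the current batch to the sink, retrying transient failures."""
        if not self.buffer:
            return
        for attempt in range(self.max_retries):
            try:
                self.sink(self.buffer)
                self.buffer = []      # only clear after a confirmed success
                return
            except Exception:
                time.sleep(min(2 ** attempt * 0.1, 5.0))  # capped backoff
        raise RuntimeError("flush failed after retries; records kept in buffer")
```

The key correctness choice is that the buffer is cleared only after the sink call succeeds, so transient failures never silently drop records; the trade-off is holding unflushed data in memory.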