Company Name: Satsuma.ai
Job Details: Hiring Remotely in Austin, TX (In-Office or Remote), Mid level
Job Url: https://builtin.com/job/data-engineer/7341512
Job Description: At Satsuma, we are pioneering the future of retail and food services with our innovative AI-ready platform leveraging the Model Context Protocol (MCP). Our goal is to empower merchants by providing them with the tools to manage real-time inventory and engage with AI agents like ChatGPT for seamless customer experiences. We are looking for a talented Data Engineer to join our team and help us build a robust data infrastructure that supports our evolving needs. In this role, you will design, build, and maintain data pipelines and data storage solutions that facilitate data access and analytics across the organization. Your work will have a direct impact on our ability to deliver insights and enhance our platform functionality. If you are passionate about data and want to be a part of an innovative team, we invite you to apply!

Responsibilities
- Design, build, and maintain robust ETL pipelines on AWS to process and transform large-scale data from merchant data feeds and integrated APIs.
- Develop and optimize data models to support real-time inventory, ordering, and analytics use cases.
- Ensure high data quality, reliability, and scalability across distributed AWS services.
- Work closely with backend engineers and API teams to integrate data pipelines into production environments.
- Implement monitoring, alerting, and automated recovery using AWS-native tools (e.g., CloudWatch, CloudFormation).
- Collaborate with product and analytics teams to support new data-driven features and reporting requirements.
- Document data architecture, workflows, and best practices to enable cross-team collaboration.

Requirements
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field (or equivalent experience).
- 3+ years of experience building and maintaining ETL pipelines on AWS (e.g., using Glue, Lambda, Step Functions, EMR).
- Strong proficiency in Python or Scala for data processing.
- Solid knowledge of SQL and familiarity with data warehouses (e.g., Redshift, Snowflake on AWS).
- Experience with event-driven architectures and streaming data (e.g., Kinesis, Kafka on AWS).
- Strong problem-solving skills and the ability to work in a fast-paced, collaborative environment.
- Excellent communication and documentation skills.

Benefits
- Retirement Plan (401k, IRA)
- Paid Time Off (Vacation, Sick & Public Holidays)
- Work From Home
- Free Food & Snacks
- Wellness Resources
- Stock Option Plan