Job Title: Senior Data Engineer
Company Name: PeakData
Job Url: https://peakdata.bamboohr.com/careers/117?jr_id=69b879b63b74eb1e2c864ec1

Job Description:

The Team

The engineering team in Poland is structured around three areas: Data Science (2 people), Backend/Data Engineering (4 people), and Frontend (2 people). You'd be joining the backend/data engineering team. You'll work closely with the data scientists on handoffs from experimentation to production, with the Product Manager on scope and priorities, and with the broader backend team on platform services where pipeline logic intersects with the product.

Role Overview

We're looking for a Senior Data Engineer to own data pipeline work end-to-end: building and maintaining production pipelines, integrating new data sources, and contributing to the backend services that power the platform. You'll be working on developing new pipeline components as we revamp and upgrade our data flows, cleaning up and sunsetting legacy services, maintaining and improving infrastructure, and eventually integrating open data sources. When data science work produces a new model or data package, you're the person who makes it production-ready. This is a hands-on engineering role. Not research, not analytics.
Technical Environment

• Python — core language for everything
• SQL — daily use
• Docker — containerized services throughout
• AWS — Lambda, ECS, S3, Step Functions, CDK (experience a plus, not required from day one)
• Data Stores: PostgreSQL, DynamoDB, BigQuery
• IaC: Terraform / CDK (nice to have)
• Orchestration: Argo (or similar; willingness to learn matters more than prior experience)

What You'll Own

• Build new pipeline components as we upgrade and refactor data flows
• Maintain and improve existing data pipelines, including sunset work on legacy services
• Integrate new data package sources after the DS team defines the spec — you make it production-grade
• Collaborate with data scientists on handoffs from experimentation to production
• Keep the infrastructure running: monitoring, alerting, reliability
• Contribute to backend services where data and platform logic intersect with the product

What We're Looking For

Must-have:

• 4+ years in data engineering or backend engineering
• Strong Python — production-grade, not just scripts or notebooks
• SQL — comfortable with complex queries, schema design, and performance considerations
• Docker — you've shipped containerized services, not just run them locally

Nice to have:

• pandas / data wrangling experience
• AWS (any depth — Lambda, S3, ECS)
• Terraform or CDK
• Experience with orchestration tools (Argo, Airflow, Prefect, or similar)
• Prior work in regulated or data-sensitive environments (life sciences, healthcare, finance)