Job URL: https://workiva.wd1.myworkdayjobs.com/en-US/careers/job/USA---Remote/Senior-Software-Engineer_R10788

Job Description:

What You’ll Do

Design & Development
- Architect, implement, and optimize batch and streaming data pipelines to move, transform, and process structured and unstructured data at scale.
- Apply software engineering best practices (code reviews, testing, CI/CD, version control) to data pipeline development.
- Ensure pipelines are modular, reusable, and extensible.

Data Operations & Reliability
- Build monitoring, logging, and alerting frameworks to ensure pipeline reliability and data quality.
- Implement data validation, schema evolution handling, and error recovery mechanisms.
- Troubleshoot production issues and perform root cause analysis.

Collaboration & Leadership
- Partner with delivery teams to understand requirements and deliver end-to-end solutions.
- Mentor junior engineers, set coding standards, and advocate for best practices in pipeline development.
- Contribute to technical design reviews, architectural decisions, and long-term data platform strategy.

Scalability & Performance
- Optimize pipelines for high-volume, low-latency data processing.
- Evaluate and implement modern frameworks and cloud-native solutions (e.g., Databricks, Kafka, Airflow).
- Ensure systems can handle future growth and evolving data use cases.

What You’ll Need

Minimum Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
- 2+ years of software engineering experience with at least

Preferred Qualifications
- 3+ years of experience working on data pipelines, data integration, or ETL/ELT systems.
- Strong skills in SQL, with the ability to design and optimize complex queries.
- Hands-on experience with data pipeline frameworks (e.g., Apache Airflow, dbt, Dagster).
- Expertise in streaming technologies (e.g., Kafka, Kinesis).
- Deep understanding of databases and data warehouses (e.g., PostgreSQL, Snowflake, Aurora MySQL).
- Strong knowledge of cloud platforms (AWS or Azure), including storage, compute, and serverless data services.
- Proven ability to design for scalability, performance, and reliability in production-grade pipelines.
- Experience with infrastructure-as-code (Terraform, CloudFormation) and containerization (Docker, Kubernetes).
- Familiarity with data governance, lineage, and cataloging tools.
- Understanding of sustainability reporting, disclosure practices, or compliance-related data workflows.
- Contributions to open-source data frameworks.
- Strong communication skills and ability to influence cross-functional teams.
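The "data validation, schema evolution handling, and error recovery" responsibility above can be pictured with a minimal sketch: a batch step that checks each record against an expected schema and routes failures to a dead-letter list for later replay. All names and the schema here are illustrative assumptions, not Workiva's actual stack.

```python
# Minimal sketch of a pipeline step with schema validation and
# dead-letter error recovery. Schema and field names are hypothetical.

EXPECTED_SCHEMA = {"id": int, "amount": float, "region": str}

def validate(record):
    """Return True if the record has exactly the expected fields and types."""
    return (
        set(record) == set(EXPECTED_SCHEMA)
        and all(isinstance(record[k], t) for k, t in EXPECTED_SCHEMA.items())
    )

def process_batch(records):
    """Split a batch into valid rows and a dead-letter list for replay."""
    valid, dead_letter = [], []
    for record in records:
        (valid if validate(record) else dead_letter).append(record)
    return valid, dead_letter

batch = [
    {"id": 1, "amount": 9.5, "region": "US"},
    {"id": "2", "amount": 3.0, "region": "EU"},  # invalid: id is a string
]
good, bad = process_batch(batch)
```

A production version would typically persist the dead-letter records (e.g., to a queue or table) and emit metrics on the rejection rate, which is where the monitoring and alerting duties above come in.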