Job Title: Senior ETL / Python Developer
Company Name: Datamaxis
Job Details: Remote, Contract
Job Url: https://hiring.cafe/viewjob/y1hckqnqc8h5zxhs
Location: Springfield, Illinois, United States
Posted: 4 days ago

Job Description:

Responsibilities: Design pipelines, build processing, support analytics

Requirements Summary: 5+ years of data engineering with enterprise warehousing; ETL with Informatica/ADF; Python; Spark; SQL with Snowflake/Oracle/SQL Server; Azure DevOps/GitHub; Bachelor’s or higher in CS/Engineering/Analytics.

Technical Tools Mentioned: Azure Data Factory, Informatica PowerCenter, Python, Spark, Databricks, Snowflake, Oracle, SQL Server, Azure DevOps, GitHub, CI/CD, PowerShell, Bash, REST API

Our client is seeking a hands-on Senior Data Engineer (ETL / Python Developer) to support the Enterprise Data Warehouse (EDW) and Analytics Program. This role plays a critical part in designing, developing, and maintaining scalable data ingestion and transformation pipelines that support Medicaid analytics, federal reporting, and enterprise decision support.

The ideal candidate brings strong ETL and Python engineering expertise, experience working with large healthcare datasets, and the ability to operate effectively in a regulated, audit-sensitive environment. This is a delivery-focused role requiring close collaboration with architects, analysts, QA, PMs, SMEs, developers, and BI reporting teams across both legacy and cloud-based platforms.
Primary Responsibilities

- Design, develop, and maintain enterprise ETL pipelines using Azure Data Factory (ADF), Informatica PowerCenter, and Python-based frameworks
- Build and optimize scalable data processing solutions using Python, Spark, and Databricks
- Support Medicaid analytics and federal reporting initiatives (e.g., T-MSIS, PERM, MARS, Quality of Care)
- Develop robust data validation, reconciliation, and audit-traceable data pipelines
- Write and optimize SQL and stored procedures across relational platforms such as Snowflake, Oracle, and SQL Server
- Participate in cloud migration and modernization initiatives within Azure-based architectures
- Collaborate with analysts, QA, and reporting teams to ensure data quality, accuracy, and timeliness
- Follow data engineering best practices for performance, reliability, reusability, and security
- Support production operations, incident resolution, and root-cause analysis
- Participate in code reviews, source control, and CI/CD processes using Azure DevOps and GitHub

Required Qualifications

- 5+ years of data engineering experience with a focus on enterprise data warehousing
- 5+ years of hands-on ETL development using Informatica PowerCenter, Azure Data Factory, or similar tools
- 5+ years of Python development for data engineering and automation
- 3+ years of experience with Spark-based processing frameworks (Databricks or equivalent)
- Strong SQL expertise and experience with relational databases (such as Teradata, Snowflake, Oracle, SQL Server)
- Experience with source control and DevOps practices (Azure DevOps, GitHub, CI/CD)
- Bachelor’s degree or higher in Computer Science, Engineering, Analytics, or a related field
- Strong analytical, problem-solving, and troubleshooting skills

Preferred Qualifications

- Experience supporting State Medicaid EDW or MMIS analytics environments
- Healthcare or public-sector analytics experience (Medicaid / Medicare preferred)
- Data modeling experience in enterprise data warehouse environments
- Scripting experience (PowerShell, Bash) for automation and orchestration
- Experience designing or consuming APIs (REST) within data platforms
- Familiarity with data quality frameworks, reconciliation, and audit support
- Azure certifications related to data engineering or analytics