AWS Data Engineer @ qode.world
Posted: 4 days ago
Location: United States (Remote, Full Time)
Job URL: https://hiring.cafe/viewjob/5fvbh9u6g2iqi6z5

Job Brief
As an AWS Data Engineer, your role will be to design, develop, and maintain scalable data pipelines on AWS. You will work closely with technical analysts, client stakeholders, data scientists, and other team members to ensure data quality and integrity while optimizing data storage solutions for performance and cost efficiency. This role requires leveraging AWS-native technologies and Databricks for data transformations and scalable data processing.

Responsibilities
•  Lead and support the delivery of data platform modernization projects.
•  Design and develop robust, scalable data pipelines leveraging AWS-native services.
•  Optimize ETL processes to ensure efficient data transformation.
•  Migrate workflows from on-premises systems to the AWS cloud, ensuring data quality and consistency.
•  Design automations and integrations to resolve data inconsistencies and quality issues.
•  Perform system testing and validation to ensure successful integration and functionality.
•  Implement security and compliance controls in the cloud environment.
•  Ensure data quality pre- and post-migration through validation checks, addressing issues of completeness, consistency, and accuracy across data sets.
•  Collaborate with data architects and lead developers to identify and document manual data movement workflows and design automation strategies.

Skills and Requirements
•  10+ years' experience with a core data engineering skill set leveraging AWS-native technologies (AWS Glue, Python, Snowflake, S3, Redshift).
•  Experience designing and developing robust, scalable data pipelines leveraging AWS-native services.
•  Proficiency in leveraging Snowflake for data transformations, ETL pipeline optimization, and scalable data processing.
•  Experience with streaming and batch data pipeline/engineering architectures.
•  Familiarity with DataOps concepts and tooling for source control and for setting up CI/CD pipelines on AWS.
•  Hands-on experience with Databricks and a willingness to grow capabilities.
•  Experience with data engineering and storage solutions (AWS Glue, EMR, Lambda, Redshift, S3).
•  Strong problem-solving and analytical skills.
•  Knowledge of Dataiku is required.
•  Graduate or post-graduate degree in Computer Science or a related field.
•  AWS S3 (data storage, export, recall)
•  Athena (querying data lakes)
•  Data pipelines (batch and near-real-time)
•  Integration with external systems (FHIR)
•  Secure data handling (KMS, Macie)
•  Cloud-native analytics
•  Multi-account, multi-region data architecture
•  BI integrations: Power BI, Tableau, QuickSight