Job Title: Data Engineer
Company Name: Granville Vance Public Health
Location: Oxford, North Carolina, United States
Job Details: $60-$90/hr, Remote, Full Time
Posted: 4 days ago
Job Url: https://hiring.cafe/viewjob/eui94e2ro1mqht84

Responsibilities (summary): design pipelines, build architectures, implement ETL
Requirements Summary: Proficient in SQL and Python/Java/Scala; 1+ year of data engineering experience; cloud platforms; ETL and data modeling.
Technical Tools Mentioned: SQL, PostgreSQL, MySQL, SQL Server, Python, Java, Scala, AWS, Azure, Google Cloud, Airflow, dbt, Talend, Spark, Hadoop, Kafka

Job Description:
We are seeking a highly skilled Data Engineer to design, build, and maintain scalable data pipelines and infrastructure. The ideal candidate will have a strong background in data modeling, ETL processes, and cloud-based data solutions. You will work closely with data scientists, analysts, and software engineers to ensure efficient data processing and accessibility for business insights. This is a 100% remote position, so the ability to work independently while collaborating with a distributed team is essential.

Key Responsibilities:
- Design, develop, and maintain data pipelines to support analytics and business intelligence needs.
- Build and optimize data architectures, databases, and data warehouses.
- Implement and manage ETL processes to ingest and transform data from multiple sources.
- Work with structured and unstructured data to ensure data integrity and quality.
- Collaborate with stakeholders to understand data requirements and provide scalable solutions.
- Ensure security, compliance, and governance of data platforms.
- Optimize query performance and data storage solutions.
- Troubleshoot and resolve data-related issues in production environments.

Requirements:
Education: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
Experience: 1+ year of experience in data engineering, database management, or related fields.

Technical Skills:
- Proficiency in SQL and experience with relational databases (PostgreSQL, MySQL, or SQL Server).
- Strong knowledge of Python, Java, or Scala for data processing.
- Experience with cloud platforms such as AWS (Redshift, S3, Glue), Azure, or Google Cloud.
- Expertise in ETL tools (Apache Airflow, dbt, Talend, or similar).
- Familiarity with big data technologies (Spark, Hadoop, Kafka).
- Experience working with APIs and data integration.
- Knowledge of data warehousing and data modeling best practices.

Soft Skills:
- Strong problem-solving and analytical skills.
- Excellent communication and teamwork abilities.
- Ability to work independently in a remote setting and manage multiple priorities.

Preferred Qualifications:
- Experience with NoSQL databases (MongoDB, DynamoDB, Cassandra).
- Knowledge of CI/CD pipelines for data workflows.
- Exposure to machine learning pipelines and MLOps is a plus.
- Certifications in cloud data platforms (AWS Certified Data Analytics, Google Cloud Professional Data Engineer, etc.).

Benefits:
- Competitive salary based on experience.
- Fully remote work with flexible scheduling.
- Health, dental, and vision insurance.
- Paid time off and company holidays.
- Learning and development opportunities.
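For candidates unfamiliar with the term, the ETL work this role centers on (ingesting data from multiple sources, transforming it, and loading it into a warehouse table for analytics) can be sketched in a few lines. This is a rough illustration only, not code from the employer: the CSV payload, table, and column names are invented, and SQLite stands in for a real warehouse such as the Redshift mentioned above.

```python
import csv
import io
import sqlite3

# Hypothetical raw feed standing in for one of the "multiple sources"
# a pipeline would ingest (a file drop, an API response, a DB export).
RAW_CSV = """record_id,visit_date,clinic
101,2024-01-15,oxford
102,2024-01-16,henderson
103,2024-02-03,oxford
"""

def extract(text):
    """Extract: parse one raw source (here, an in-memory CSV) into dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: cast IDs to integers and normalize clinic names."""
    return [(int(r["record_id"]), r["visit_date"], r["clinic"].title())
            for r in rows]

def load(rows, conn):
    """Load: write the cleaned rows into a warehouse-style table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS visits "
        "(record_id INTEGER PRIMARY KEY, visit_date TEXT, clinic TEXT)")
    conn.executemany("INSERT INTO visits VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)

# A downstream consumer (analyst, BI tool) queries the loaded table.
per_clinic = conn.execute(
    "SELECT clinic, COUNT(*) FROM visits GROUP BY clinic ORDER BY clinic"
).fetchall()
print(per_clinic)  # [('Henderson', 1), ('Oxford', 2)]
```

In production the same three steps would typically be scheduled by an orchestrator such as the Apache Airflow or dbt named in the requirements, with the load targeting a cloud warehouse rather than a local SQLite file.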