Job URL: https://www.remoterocketship.com/company/looiconsulting/jobs/senior-data-engineer-united-states-remote

Looi Consulting LLC

Looi Consulting LLC specializes in building high-performance, affordable technical teams through global talent sourcing. The company streamlines recruiting for businesses, providing comprehensive staffing and technical-strategy services while helping job seekers navigate their career paths. Its extensive network and expertise help organizations scale teams quickly and efficiently, adapting to changing demands while staying focused on their core competencies.

Software Development • Business Strategy • Innovation
11–50 employees · Founded 2010
🤝 B2B · 🎯 Recruitment · ☁️ SaaS

Senior Data Engineer
June 19 · 🇺🇸 United States – Remote · ⏰ Full Time · 🟠 Senior · 🚰 Data Engineer

Skills: Airflow, Amazon Redshift, Apache, AWS, Cloud, Docker, ETL, MySQL, Oracle, Postgres, Python, Spark, Terraform, VoIP

📋 Description

We are looking for a Senior Data Engineer to join a team working on data pipeline development. You will work closely with small, highly collaborative Operations and Development teams to architect and deploy high-impact security solutions. You must be able to work US time zones (UTC-8 to UTC-5 standard time / UTC-7 to UTC-4 daylight time).

Responsibilities:
• Build data management pipelines from heterogeneous data sources into data marts and other repositories for subsequent analysis and data mining
• Code, test, and support Python-based services, implementing ETL/ELT, data cleansing, etc.
• Design and use RESTful APIs; develop shippable code, documentation, and unit tests for new features of cloud-based products
• Collaborate with Quality, Product, and other Engineering teams
• Provide code reviews, design feedback, demos, technical requirements, and documentation
• Scope projects and provide accurate estimates for reliable delivery
• Work closely with the DevOps team on the Infrastructure-as-Code (IaC) setup for BI

🎯 Requirements

• 5+ years of professional development experience using Python, including an understanding of functional programming
• 3+ years of cloud infrastructure experience with AWS and Docker
• 3+ years of web-scale data engineering experience using enterprise relational databases
• 3+ years of experience with Apache Spark, especially Spark Core and Spark SQL
• 2+ years of CI/CD and DevSecOps experience
• 1+ years of Infrastructure-as-Code (Terraform) experience
• Experience using Apache Airflow to orchestrate Spark jobs
• Strong proficiency in Python for data science applications
• Proficiency with relational databases such as Oracle, MySQL, and PostgreSQL
• Experience with AWS services such as RDS, Redshift, S3, and EMR
• Experience designing and building data marts and ETL/ELT processes
• Proficiency with Jira, Confluence, and Git (used for version control); IDEs such as PyCharm or Visual Studio Code
• Experience with CI/CD solutions, message queue systems, and background task management
• Hands-on experience with Agile/Scrum
• Excellent written and verbal communication skills
• Ability to coordinate between teams
• Bachelor's degree or higher in Computer Science or a related field, or equivalent experience

Pluses:
• Design, implementation, and/or maintenance of CI/CD pipelines and Bash scripts
• Experience with AWS EMR Serverless
• AWS Certified Data Analytics – Specialty
• Databricks Certified Associate Developer for Apache Spark
• Experience with serverless technology such as AWS Lambda or Google Compute Engine
• Experience with TeamCity and Octopus
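The core of the role described above is extract–transform–load work: pulling records from heterogeneous sources, cleansing them, and loading them into a data mart for analysis. As a rough illustration of that pattern (not part of the posting), here is a minimal Python sketch using only the standard library, with SQLite standing in for the target data mart and hypothetical sample records standing in for a source system:

```python
import sqlite3


def extract(rows):
    """Stand-in for extraction from a heterogeneous source.

    In a real pipeline this would read from an API, file, or database;
    here it just passes through hypothetical raw dicts with inconsistent
    casing, whitespace, and missing fields.
    """
    return rows


def transform(rows):
    """Cleanse records: normalize emails, coerce amounts, drop incomplete rows."""
    cleaned = []
    for row in rows:
        if row.get("email") is None:
            continue  # required key missing -> drop the record
        cleaned.append({
            "email": row["email"].strip().lower(),
            "amount": float(row.get("amount", 0)),
        })
    return cleaned


def load(conn, rows):
    """Load cleansed rows into a simple data-mart-style table."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales (email TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO sales (email, amount) VALUES (:email, :amount)", rows
    )
    conn.commit()


if __name__ == "__main__":
    raw = [
        {"email": " Alice@Example.COM ", "amount": "10.5"},
        {"email": None, "amount": "3"},   # dropped during cleansing
        {"email": "bob@example.com"},      # amount defaults to 0
    ]
    conn = sqlite3.connect(":memory:")
    load(conn, transform(extract(raw)))
    print(conn.execute("SELECT COUNT(*), SUM(amount) FROM sales").fetchone())
```

In production, a job like this would typically run as a task in an Airflow DAG (or as a Spark job for web-scale volumes), with the load targeting Redshift or RDS rather than SQLite, per the stack listed in the requirements.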