Job Url: https://www.remoterocketship.com/company/hso/jobs/azure-data-engineer-united-states-remote/

HSO
ICT • Trade • Logistics • Finance • Service

HSO is a global business transformation partner with deep industry expertise and a strong focus on Microsoft Cloud technologies. It provides tailored business solutions and consultancy services to organizations in sectors such as manufacturing, financial services, professional services, retail, the public sector, and non-profit. HSO offers comprehensive cloud services, including strategy, integration, infrastructure, and AI solutions, with a particular emphasis on enhancing business performance through digital transformation. HSO holds recognized status as a leading partner in Microsoft's global network and has been awarded for its work with Microsoft Dynamics 365 and supply chain management solutions.

1001 - 5000 employees • Founded 1987 • ☁️ SaaS • 🤝 B2B • 🏢 Enterprise

Azure Data Engineer
Posted 3 days ago
🇺🇸 United States – Remote • ⏰ Full Time • 🟡 Mid-level • 🟠 Senior • 🚰 Data Engineer • 🦅 H1B Visa Sponsor
Skills: Apache • Azure • ERP • ETL • NoSQL • Oracle • PySpark • Python • Spark • SQL

📋 Description
• Design, build, and maintain data engineering solutions on Microsoft Azure and Fabric (Lakehouse, Warehouse, Data Factory, Synapse Analytics, Dataflows, Eventstreams).
• Coordinate sprints, daily stand-ups, and retrospective meetings as part of Agile delivery.
• Collaborate with business and technical teams to analyze, document, and validate requirements.
• Perform source system analysis and identify appropriate data ingestion and integration strategies for both batch and streaming pipelines.
• Develop and optimize ETL/ELT solutions using Azure Data Factory pipelines, Synapse pipelines, or Fabric Data Pipelines.
• Leverage services such as Azure Data Lake Storage (ADLS), Synapse Analytics, SQL Database, Azure Functions, Event Hub, Synapse Spark, and Microsoft Fabric (Lakehouse, Warehouse, Notebooks, Pipelines) for data integration and transformation.
• Write and optimize SQL, Python, and PySpark code for large-scale data processing.
• Implement data ingestion frameworks, incremental/delta loads, and medallion architecture patterns (Bronze/Silver/Gold).
• Connect and integrate diverse data sources (SQL, Oracle, ODBC, ERP, CRM, APIs, flat files, streaming events) into Fabric and Azure ecosystems.
• Ensure data quality, security, and governance in alignment with enterprise standards.
• Continuously research and adopt efficient methods for scalable data movement, transformation, and storage.
• Work independently and as part of a cross-functional team to deliver high-quality solutions.

🎯 Requirements
• 3+ years of consulting experience in Data Engineering/Data Warehousing.
• Strong hands-on expertise with Azure Data Factory, Synapse Analytics, Fabric Lakehouse, and ADLS.
• Proficiency in SQL, Python, and Apache Spark (PySpark/Synapse/Fabric Notebooks).
• Experience integrating batch and real-time data sources into Azure/Fabric.
• Solid understanding of relational databases, analytical databases, and NoSQL systems.
• Strong problem-solving ability and a self-starter mindset with the ability to learn new technologies quickly.
• Experience with PowerShell or Bash scripting for automation.
• Familiarity with CI/CD pipelines (Azure DevOps, GitHub Actions) for data engineering solutions.
• Knowledge of data governance and cataloging (Microsoft Purview).
• Experience with Synapse Link/Fabric Link.

🏖️ Benefits
• We offer competitive pay with a performance-based bonus.
• Unlimited paid time off.
• Flexible and affordable benefits program, including medical, dental & vision coverage, flexible spending accounts, a health reimbursement account, and a 401(k) plan with a company match.
• Work alongside enthusiastic and energetic teammates in a dynamic and thriving environment.
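The incremental/delta-load pattern mentioned in the description (loading only rows changed since a watermark into a Bronze layer) can be sketched in plain Python. This is a minimal, hypothetical illustration using in-memory lists in place of Fabric/ADLS tables; the function and field names are illustrative, not part of any Azure or Fabric API:

```python
from datetime import datetime, timezone

# Sketch of a watermark-based incremental ("delta") load into a Bronze layer.
# In a real pipeline the source would be a database query and the Bronze table
# a Lakehouse/ADLS Delta table; here both are plain Python lists.

def incremental_load(source_rows, bronze_table, watermark):
    """Append only rows modified after the watermark, then advance it."""
    new_rows = [r for r in source_rows if r["modified"] > watermark]
    bronze_table.extend(new_rows)
    # Advance the watermark to the latest change seen; keep it if nothing new.
    return max((r["modified"] for r in new_rows), default=watermark)

# Example: two runs; the second picks up only the newly modified row.
bronze = []
rows_v1 = [
    {"id": 1, "modified": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "modified": datetime(2024, 1, 2, tzinfo=timezone.utc)},
]
wm = incremental_load(rows_v1, bronze, datetime(2023, 12, 31, tzinfo=timezone.utc))

rows_v2 = rows_v1 + [{"id": 3, "modified": datetime(2024, 1, 3, tzinfo=timezone.utc)}]
wm = incremental_load(rows_v2, bronze, wm)
```

In a medallion (Bronze/Silver/Gold) layout, this raw append-only Bronze data would then be cleaned and conformed into Silver, and aggregated for reporting in Gold.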