Job Title: Data Engineer
Company Name: Business Management Associates
Location: Frederick, Maryland, United States
Job Details: Remote, Full Time
Job Url: https://hiring.cafe/viewjob/k7yuxmd6c78z4r0t

Responsibilities: designing pipelines, managing the lakehouse, establishing governance
Requirements Summary: 7-9 years of experience in Azure data engineering, ETL/ELT, Databricks, ADLS Gen2, data governance, and cloud cost optimization.
Technical Tools Mentioned: Azure Data Factory, Azure Databricks, SparkSQL, PySpark, Azure Data Lake Storage (ADLS Gen2), Azure Monitor, GitHub Actions, Azure DevOps, Terraform, Azure Service Bus

Job Summary

We are seeking an experienced Azure Data Engineer to design, build, and optimize scalable data pipelines and lakehouse architectures within the Azure ecosystem. This role is responsible for developing end-to-end data ingestion, transformation, and governance frameworks using Azure Data Factory, Databricks, and ADLS Gen2 to support enterprise analytics and reporting needs. The ideal candidate will have hands-on experience managing batch and near real-time data ingestion from diverse structured and unstructured sources, implementing Spark-based transformation processes across the Bronze, Silver, and Gold data layers, and establishing robust data quality, security, and monitoring frameworks. The position also requires expertise in data governance, automation, and cost optimization to ensure high-performing, secure, and reliable data platforms aligned with organizational SLAs.
Key Responsibilities

Azure Data Engineering & Lakehouse Architecture
· Design and manage scalable data ingestion pipelines using Azure Data Factory and Azure Service Bus for batch and near real-time processing
· Implement Bronze, Silver, and Gold data layer transformations using Databricks (PySpark, SparkSQL) within a lakehouse architecture
· Integrate structured, semi-structured, and unstructured data sources, in formats such as Parquet, Avro, and JSON, into ADLS Gen2

Data Governance, Security & Quality Management
· Establish data quality frameworks and validation processes using Databricks and Azure Data Factory
· Configure RBAC and ACL-based access controls to secure sensitive datasets and ensure compliance
· Manage credential security with Azure Key Vault and enforce governance standards across data platforms

Automation, Monitoring & Cloud Optimization
· Automate pipeline orchestration using Databricks Workflows and CI/CD tools such as Azure DevOps, GitHub Actions, and Terraform
· Monitor pipeline health and performance using Azure Monitor, logging, and alerting to meet SLA requirements
· Optimize ADLS storage performance and manage cloud costs using Azure Cost Management best practices
· All other duties as assigned

Required Skills:
· Azure Data Factory for ETL/ELT operations
· Build and support self-service BI solutions using Azure Databricks and the lakehouse architecture
· Develop data pipelines and transformations using PySpark and Spark SQL
· Enable analytics using Databricks AI/BI features and Genie for business users
· Apply ML techniques for data insights, predictions, and feature engineering
· Collaborate with analytics and business teams to deliver scalable, governed BI and ML solutions
· Databricks (PySpark, SparkSQL) for data transformations
· Azure Data Lake Storage (ADLS Gen2)
· SQL Server, PostgreSQL, Cosmos DB, CRM, and ERP systems
· Data governance, RBAC, and ACLs for managing permissions
· CI/CD tools: Azure DevOps, GitHub Actions, Terraform
· Azure monitoring, alerting, and logging tools

Qualifications

Education: Bachelor's degree in Computer Science, Information Systems, Data Science, or a related field (Master's preferred).
Experience: Minimum of 7-9 years. Substantial professional experience in a related field may be considered in lieu of a formal degree.
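For candidates weighing the Bronze/Silver/Gold responsibilities above, the Bronze-to-Silver cleaning step can be sketched as follows. In a real Databricks pipeline this would be PySpark reading from ADLS Gen2; to stay self-contained, this sketch uses only the Python standard library, and the record fields (order_id, amount, ts) are hypothetical examples, not part of this role's actual schema.

```python
from datetime import datetime

# Hypothetical Bronze-layer records: raw, possibly duplicated or malformed.
# A real pipeline would load these with PySpark from ADLS Gen2; this
# stdlib-only sketch just illustrates the Silver-layer cleaning logic.
bronze = [
    {"order_id": "A1", "amount": "19.99", "ts": "2024-05-01T10:00:00"},
    {"order_id": "A1", "amount": "19.99", "ts": "2024-05-01T10:00:00"},  # duplicate
    {"order_id": "A2", "amount": "not-a-number", "ts": "2024-05-01T11:30:00"},  # invalid
    {"order_id": "A3", "amount": "5.00", "ts": "2024-05-02T09:15:00"},
]

def to_silver(records):
    """Deduplicate on order_id, cast types, and drop rows that fail validation."""
    seen, silver = set(), []
    for rec in records:
        if rec["order_id"] in seen:
            continue  # keep the first occurrence only
        try:
            silver.append({
                "order_id": rec["order_id"],
                "amount": float(rec["amount"]),           # enforce numeric type
                "ts": datetime.fromisoformat(rec["ts"]),  # enforce timestamp type
            })
            seen.add(rec["order_id"])
        except ValueError:
            pass  # a real pipeline would quarantine and log; dropped here for brevity
    return silver

silver = to_silver(bronze)
print([r["order_id"] for r in silver])  # → ['A1', 'A3']
```

The same dedupe/cast/validate pattern maps directly onto PySpark operations such as dropDuplicates and typed column casts, with rejected rows routed to a quarantine table instead of being discarded.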