Job Title: Data Engineer III
Company Name: OU Health
Job Details: Remote, Full Time
Location: United States
Job Url: https://hiring.cafe/viewjob/z294uosms1l7r5n1

Responsibilities: Design pipelines, enforce governance, mentor team
Requirements Summary: 5+ years in analytics/data engineering; bachelor's degree; Epic certifications required within 6 months of hire (or within 3 months of class completion); strong SQL and data tooling experience.
Technical Tools Mentioned: dbt, Snowflake, Spark, Kafka, SQL, Python, Java, C++, Scala, R, SAS, SPSS, Tableau, Power BI, Fivetran, Matillion, SSIS, SnowSQL, IBM Streams, Flume, Storm, Spark Streaming

Position Title: Data Engineer III
Department: ETS Analytics and AI Data

Job Description:
The Data Engineer III leads and serves as the subject matter expert in the design, development, and delivery of data pipelines and value-added data assets across the OU Health data ecosystem, leveraging a variety of data warehousing methodologies and disciplines. The role primarily designs, builds, and maintains these data assets, emphasizing agility, partnership, and cross-functional teamwork throughout the product lifecycle. The Data Engineer III works closely with Data Scientists, Business Intelligence Developers, and other colleagues to build or enhance robust data systems; initiatives will often be of significant complexity and risk. The Data Engineer III also serves as a technical guide and mentor to the ETS department.

Essential Responsibilities
Responsibilities listed in this section are core to the position. 
Inability to perform these responsibilities with or without an accommodation may result in disqualification from the position.

- Leverage subject matter expertise with a variety of data engineering, DataOps, and data warehousing methodologies, techniques, tools, and platforms to transform large quantities of data from multiple sources.
- Design, create, test, deploy, and maintain data pipelines that deliver curated, value-added data assets such as data marts and other purpose-built data stores. Ensure data pipelines are optimized, highly reliable, and contain low technical debt.
- Design, build, and maintain the tools and infrastructure needed to handle large datasets.
- Data governance: enforce data governance policies, including data quality, validation, lineage, metadata management, and adherence to healthcare regulations.
- Quality assurance: develop and implement comprehensive data quality frameworks, addressing issues such as data accuracy, completeness, and consistency.
- Work closely with different application and operational teams to understand business needs and align data engineering initiatives accordingly.
- Guide, mentor, quality-review, and train the Data Engineering team and ETS department on technical skills and best practices.

General Responsibilities
Performs other duties as assigned.

Minimum Qualifications
Education Requirements: Bachelor's degree required.
Experience Requirements: 5 or more years in analytics (Business Intelligence, Data Engineering, Data Science, etc.) 
required.

License/Certification/Registration Requirements: Epic certification/accreditation required within 6 months of hire or within 3 months of class completion, including:
- Cogito Fundamentals
- Clarity Data Model
- Caboodle Data Model
- Access Data Model
- Revenue Data Model
- Clinical Data Model
- Caboodle Development
- Additional Epic classes as requested to support evolving business needs

Knowledge/Skills/Abilities Required:
- Expert-level analytic skills related to working with structured and unstructured datasets.
- Guide, mentor, and train the Data Engineering team, Data Scientists, and Business Intelligence Developers on technical skills and best practices.
- Critical thinking and creative problem-solving skills, along with the ability to communicate well with stakeholders throughout the organization.
- Effective communication, project management, and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- Working knowledge of stream processing and highly scalable data stores.
- Previous experience manipulating, processing, and extracting value from large, disconnected datasets.
- Expert-level SQL and data manipulation skills.
- Exposure to big data tools: dbt, Snowpark, Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases, including Snowflake, MS SQL Server, and Postgres.
- Experience with integration tools: Fivetran, Matillion, SSIS, dbt, SnowSQL.
- Exposure to stream-processing systems: IBM Streams, Flume, Storm, Spark Streaming, etc.
- Exposure to consuming and building APIs.
- Exposure to object-oriented/object-function programming languages: Python, Java, C++, Scala, etc.
- Experience with statistical data analysis tools: R, SAS, SPSS, etc.
- Experience with visual analytics tools: QlikView, Tableau, Power BI, etc.
- Familiarity with Agile development methodology.
- Familiarity with electronic health records and financial systems (e.g., Epic Systems, Workday, Strata).
- Ability to work independently and within teams.
- Ability to develop and advise on data asset use to provide solutions to organizational needs.

OU Health is an equal opportunity employer. We offer a comprehensive benefits package, including PTO, 401(k), medical and dental plans, and many more. We know that a total benefits and compensation package, designed to meet your specific needs both inside and outside of the work environment, creates peace of mind for you and your family.
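To give candidates a concrete sense of the data quality work the role describes (checks for accuracy, completeness, and consistency), here is a minimal sketch in Python with SQL. The table and column names are invented for illustration only; in practice such checks would run against Snowflake or be expressed as dbt tests rather than an in-memory SQLite database.

```python
import sqlite3

# Hypothetical encounters table; names are illustrative, not OU Health's schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE encounters (
    encounter_id   INTEGER PRIMARY KEY,
    patient_id     INTEGER,
    admit_date     TEXT,
    discharge_date TEXT
);
INSERT INTO encounters VALUES
    (1, 101,  '2024-01-03', '2024-01-05'),
    (2, 102,  '2024-01-04', NULL),           -- incomplete: missing discharge
    (3, NULL, '2024-01-06', '2024-01-07'),   -- incomplete: missing patient
    (4, 104,  '2024-01-09', '2024-01-02');   -- inconsistent: discharge before admit
""")

# Each check counts rows violating one data quality rule.
checks = {
    "missing_patient_id": "SELECT COUNT(*) FROM encounters WHERE patient_id IS NULL",
    "missing_discharge": "SELECT COUNT(*) FROM encounters WHERE discharge_date IS NULL",
    "discharge_before_admit": "SELECT COUNT(*) FROM encounters "
                              "WHERE discharge_date < admit_date",
}

failures = {name: conn.execute(sql).fetchone()[0] for name, sql in checks.items()}
print(failures)
# {'missing_patient_id': 1, 'missing_discharge': 1, 'discharge_before_admit': 1}
```

In a production pipeline, a nonzero count for any rule would typically fail the pipeline run or route the offending rows to a quarantine table for review.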