Job Title: Staff Data Engineer
Company Name: Asurion
Job Details: Remote, Full Time
Job Url: https://hiring.cafe/viewjob/x5e6romou1zrwidy

Responsibilities: Lead pipelines, mentor engineers, collaborate with teams
Requirements Summary: 8+ years of data engineering experience; expert in Databricks, Spark, and Delta Lake; SQL and Python; AWS; data modeling; ETL/ELT pipelines; AI-assisted tools
Technical Tools Mentioned: Databricks, Apache Spark, Delta Lake, SQL, Python, AWS

Job Description

The Staff Data Engineer leads the design and delivery of scalable data solutions that power analytics, reporting, and machine learning. This role sets technical direction for data platforms, implements robust pipelines, and establishes engineering standards to improve reliability and speed. The position partners with product, analytics, and security teams to translate business needs into resilient data architectures. Responsibilities include optimizing cloud data infrastructure, governing data quality, mentoring engineers, and driving continuous improvement. The Staff Data Engineer balances strategy with hands-on execution to ensure trusted, timely, and secure data is available across the enterprise.

Title: Staff Data Engineer
Business Function: Product Development
Department: Enterprise Data Services
Location: Nashville, TN or Sterling, VA (Hybrid)

Position Overview

As a Staff Data Engineer, you will lead the design and delivery of scalable, high-quality data pipelines that power analytics and reporting across the enterprise. This role combines deep hands-on engineering with technical leadership, driving best practices for data ingestion, transformation, and data modeling.

You will play a critical role in building and optimizing data solutions using Databricks, Delta Lake, and cloud-native AWS technologies, ensuring reliable and efficient movement of data from source systems to curated data assets. This position focuses on delivering well-structured, trusted data through robust pipeline development and modern data engineering practices. You will also leverage AI-assisted development tools to improve coding efficiency, validation, and documentation.

Essential Duties and Responsibilities

- Lead the design and development of scalable data pipelines and data products using Databricks, Spark, and Delta Lake
- Develop and optimize data transformations and ELT workflows using SQL and Python
- Design and implement data models and curated datasets to support analytics and reporting use cases
- Ensure data quality, consistency, and reliability through validation, monitoring, and testing practices
- Optimize pipeline performance, scalability, and cost efficiency within AWS environments
- Apply best practices for data partitioning, storage optimization, and query performance tuning
- Collaborate with product, analytics, and business teams to translate requirements into efficient data solutions
- Provide technical leadership and mentorship to engineers, including code reviews and design guidance
- Leverage AI tools for coding, validation, and documentation assistance to enhance productivity and code quality
- Troubleshoot and resolve data pipeline failures, latency issues, and data inconsistencies
- Continuously evaluate and improve data engineering workflows and tooling

Here's What You'll Bring to the Team

- 8+ years of experience in data engineering or data pipeline development
- Strong hands-on experience with Databricks, Apache Spark, and Delta Lake
- Advanced proficiency in SQL and Python for building and optimizing data pipelines
- Experience developing robust ETL/ELT pipelines and handling complex data transformations
- Hands-on experience with AWS cloud services (e.g., S3, EMR, Lambda, Glue, Redshift, Kinesis)
- Strong understanding of data modeling and data warehousing concepts
- Experience working with large-scale (terabyte or greater) datasets in distributed environments
- Knowledge of data quality frameworks, validation techniques, and monitoring practices
- Familiarity with CI/CD pipelines and modern development workflows
- Experience using AI-assisted development tools for code generation, validation, or documentation
- Strong problem-solving skills with the ability to debug complex data issues

Leadership & Impact

- Acts as a technical leader and subject matter expert in data pipeline development
- Drives best practices for pipeline design, data transformation, and reliability
- Mentors engineers and elevates team capabilities through hands-on guidance and reviews
- Leads complex, cross-functional data initiatives with measurable business impact
- Balances hands-on execution with technical leadership

Required Education & Experience

- Bachelor's degree in Computer Science, Engineering, or a related field
- 8+ years of relevant experience in data engineering
- Proven experience leading technical initiatives or large-scale data pipeline projects

Preferred Qualifications

- Master's degree in a technical field
- Experience in large-scale, enterprise data environments
- Cloud certifications (AWS, Databricks)

About Asurion

Asurion helps people protect, connect, and enjoy the latest tech, making life a little easier. Every day, our team supports nearly 300 million customers worldwide, solving everyday tech challenges through innovative solutions and world-class service.