Job Title: Data Engineer III
Company Name: Preferred Travel Group
Location: United States
Job Details: $120k-$150k/yr, Remote, Full Time
Job Url: https://hiring.cafe/viewjob/tvmlj04eyu6uzc7c

Job Description:

Responsibilities: Design pipelines, write queries, automate workflows
Requirements Summary: Senior-level data engineer with SQL, Python, dbt, and data pipeline experience; Azure data tools; API integration; ETL optimization.
Technical Tools Mentioned: SQL, dbt, Python, C#, Azure Data Factory, Spark

About Us

At Preferred Travel Group, we care deeply about our people, nurture independence, and celebrate individuality. Family values inspire us, and we believe that change creates opportunity. We are committed listeners and deliberate storytellers in hospitality. We engineer potential, foster trust, and co-create brighter futures. Our culture values collaboration, adaptability, and precision, qualities essential to every role. We are forever curious, guided by the Pineapple as our global symbol of hospitality. We believe the business of hospitality is borderless, and we proudly embrace that spirit every day. We believe that every team member brings unique strengths to the table, and we're committed to creating an environment where those strengths can thrive.

________________________________________

Position Summary

We are seeking an analytical, experienced, and solution-oriented Data Engineer III. The Data Engineer III is responsible for developing and optimizing the company's data pipelines, integrations, and reporting solutions to ensure efficient and reliable data operations.
This role requires independent problem-solving, proactive improvement of processes, and collaboration across teams to deliver impactful data solutions.

Organizational Relationship

Under the general supervision of the department director, the Data Engineer III works closely with other engineers on the Data and QA teams and frequently interacts with end users and other stakeholders.

________________________________________

Key Responsibilities

Data Engineering
- Design, build, and maintain scalable data pipelines and ETL workflows.
- Write advanced SQL queries and implement optimization techniques for performance.
- Leverage Microsoft Azure and Fabric, Spark, and Python to automate complex workflows.
- Lead engineering efforts on machine learning and artificial intelligence projects.

Integration Development
- Develop and maintain robust web APIs (SOAP, REST) to support seamless data integration across internal and external systems.
- Collaborate with external vendors to ensure the integrity and functionality of integrations.

Operational Support
- Monitor and troubleshoot data processes, ensuring high availability and minimal downtime.
- Proactively identify and resolve bottlenecks or inefficiencies in data pipelines and integrations.

Collaboration
- Manage and prioritize work using the ticketing system while maintaining regular communication in stand-ups and stakeholder meetings.
- Conduct code reviews to ensure adherence to best practices and high-quality deliverables.
- Contribute to technical documentation for processes, tools, and workflows.

End User Support
- Provide tier 3 support to end users of integrated systems such as reporting and accounting.
- Partner with business owners to identify areas for improvement and gather requirements.

Leadership & Mentorship
- Mentor other team members by sharing knowledge, conducting training sessions, and providing guidance on best practices.
- Take ownership of complex projects, ensuring timely delivery and alignment with business objectives.

________________________________________

Required Experience/Qualifications

This role is classified as a senior-level (Tier III) position and requires substantial prior experience in data engineering or a related field, in addition to:
- Bachelor's degree in Computer Science, Information Systems, or another relevant field, or equivalent professional experience.
- Expert knowledge of relevant languages, such as SQL, dbt, Python, and/or C#.
- Expert knowledge of at least one data pipeline orchestration tool, such as Azure Data Factory.
- Expert understanding of data modeling and ETL concepts.
- Experience with version control systems (e.g., Git) and best practices.
- Strong problem-solving skills and the ability to work independently on complex tasks.

________________________________________

Typical Behaviors & Working Style

The ideal candidate will demonstrate the following behavioral traits:
- Versatile and adaptable, flexing to meet the needs of the situation.
- Maintains a people orientation, even if reserved in nature. Must be helpful and service-oriented, with a strong focus on repeatable, high-quality results.
- Makes decisions collaboratively but meticulously, weighing facts, established procedures, and proven processes. Must rely on existing knowledge or training to help make decisions.
- Communicates based on the task or technical needs at hand, defining clear team roles. Minimal collaboration is required, although they must prioritize specific tasks or problems.
- Leads according to specialty or expertise. Will act with conviction to ensure quality standards, rarely delegating.
When delegation is required, follow-up will be close.

________________________________________

Preferred Working Environment & Job Characteristics

This role is best suited to someone who thrives in:
- A complex, senior-level data engineering environment with high expectations for independence and ownership.
- A high-standards, reliability-focused setting where availability, performance, and data integrity are critical.
- A fast-paced, multi-priority workload balancing delivery, operational support, and cross-team collaboration.

The ideal candidate will find great satisfaction in:
- Solving complex technical problems and continuously improving how data systems perform and scale.
- Building automation and efficiencies using modern data platforms and tools.
- Leading by expertise, mentoring others, and setting strong engineering standards.

________________________________________

What success in this role looks like
- Data pipelines and integrations operate reliably at scale, supporting business-critical systems and analytics.
- Operational issues are resolved proactively, with minimal downtime and continuous improvement.
- The data engineering function grows stronger over time through high-quality delivery, documentation, and ownership of complex initiatives.

________________________________________

Working Conditions

This is a work-from-home position. All required technology will be provided.

________________________________________

Training
- Orientation via a mix of live remote and pre-recorded video sessions
- IT security training
- Internal development processes and procedures
- Company-approved AI technology

________________________________________

Disclaimer

The above information is designed to indicate the general nature and level of work performed.
It is not intended to contain or be interpreted as a comprehensive inventory of all duties, responsibilities, and qualifications required of employees assigned to this job.

________________________________________

Salary

$120,000 - $150,000. Actual compensation within this range will be determined by multiple factors, including candidate experience and expertise.