Job Url: https://wellfound.com/jobs/3642629-senior-software-engineer-data-pipelines-reporting-analytics

Fintary (recently funded)
Fintary is an AI-powered insurance platform managing payments, receivables, and operations.

Senior Software Engineer: Data Pipelines - Reporting & Analytics
$180k – $210k • 0.01% – 0.1% equity | Remote (Everywhere) • San Francisco Bay Area | 7 years of exp | Full Time
Posted: yesterday

Job Location: San Francisco Bay Area
Remote Work Policy: Onsite or remote
Hires Remotely In: Everywhere
Visa Sponsorship: Not Available
Preferred Timezones: Pacific Time
Collaboration Hours: 9:00 AM - 3:00 PM Pacific Time
Relocation: Not Allowed
Skills: Python, Node.js, Google BigQuery, GCP

About the Job

Summary

At Fintary, we’re building an AI-enabled platform to automate financial operations for enterprise customers. We process and generate financial data for our customers, and we’re looking for an engineer to build out our reporting and analytics system that provides customers with actionable, valuable insights into their data. The ideal candidate will have experience building reporting and analytics solutions for enterprise applications.

Company

We’re a dynamic, agile startup committed to delivering high-quality, innovative solutions to our customers. Since our inception two years ago, we've experienced remarkable growth. Our founding team is made up of seasoned entrepreneurs and industry experts, supported by leading venture capital firms.

Role & Responsibilities

This role sits at the intersection of software engineering, data engineering, and data science, and involves building data solutions to power business intelligence, forecasting, and other general data reporting needs.
The candidate should be a self-starter, able to apply their extensive experience in data management while exploring new technologies and solutions for building a scalable reporting and analytics module for our platform. The candidate must have expertise in data engineering disciplines, including data warehousing, database management, and ETL processes.

Key Responsibilities

- Design, build, and maintain scalable data pipelines and systems supporting near real-time, customer-facing analytics and reporting.
- Plan and implement strategies for data reliability, observability, and quality across the full data stack.
- Architect and optimize data storage and transformation workflows (ETL/ELT) for both operational and analytical workloads.
- Collaborate with Product and Engineering to deliver high-impact projects that integrate data insights into user-facing experiences.
- Evaluate and incorporate emerging AI-driven analytics tools and data management technologies to enhance automation and decision intelligence.
- Document and operationalize data architecture, onboarding materials, and “how-to” guides to support cross-team adoption and transparency.
- Establish best practices for data governance, access control, and performance optimization.

Required Qualifications

- 6–8 years of experience designing and implementing scalable data transformation, analytical, and reporting systems.
- Strong proficiency in TypeScript, SQL/PostgreSQL, and Python.
- Hands-on experience building and operating data products on GCP infrastructure.
- Hands-on experience with data orchestration/workflow frameworks (e.g., dbt, Airflow).
- Deep understanding of data warehousing and cloud data platforms (e.g., GCP BigQuery, Snowflake, Databricks).
- Proven experience managing end-to-end data lifecycles, from ingestion and transformation to modeling and serving.
- Excellent system design and problem-solving skills, with an ability to balance tradeoffs between scalability, latency, and maintainability.
- Proactive, self-sufficient, and ownership-driven, with a strong bias for action and learning.
- Strong opinions, loosely held: able to debate tradeoffs productively and adjust quickly based on evidence.

Nice to Haves

- Experience at high-growth startups, building or scaling production-grade data systems.
- Background in fintech, finance, accounting, insurance, or enterprise SaaS platforms.
- Familiarity with CRM/ERP integrations (e.g., Salesforce, QuickBooks, NetSuite) or banking APIs.
- Exposure to AI/ML data pipelines, feature stores, or large-scale analytics serving layers.
- Experience implementing data observability frameworks.