Job Url: https://www.remoterocketship.com/company/abnormal-security/jobs/software-engineer-ii-data-platform-united-states-remote

Abnormal Security
Abnormal provides total protection against the widest range of attacks, including phishing, malware, ransomware, social engineering, executive impersonation, supply chain compromise, internal account compromise, spam, and graymail.
Email Security • Business Email Security • Cloud Email Security • Phishing Detection • Business Email Compromise
501 - 1000 employees

Software Engineer II, Data Platform
🇺🇸 United States – Remote
💵 $148.8k - $175.1k / year
⏰ Full Time
🟡 Mid-level 🟠 Senior
🧑‍💻 Full-stack Engineer
🦅 H1B Visa Sponsor
Skills: Airflow, AWS, Cloud, Distributed Systems, DynamoDB, ElasticSearch, Kafka, Postgres, Python, Redis, Spark, Go

📋 Description
• Enterprises of all sizes trust Abnormal Security's cloud products to stop cybercrime.
• These AI-native, data-intensive SaaS applications depend on fast, reliable, and secure access to massive datasets.
• The Data Platform team designs and operates scalable storage systems (PostgreSQL, OpenSearch, Redis, RocksDB, DynamoDB), batch and stream processing (Kafka, Spark), orchestration (Airflow, DBT), and more.
• We build and maintain the core infrastructure that powers Abnormal's most data-heavy workloads, providing scalable, reliable, and efficient data platforms and services to all of engineering and data science. We also create tools that make this infrastructure simple to operate and integrate, enabling teams to deliver faster with confidence.
• We're looking for a Software Engineer II to help us define and build the next generation of our data platform. In this role, you'll partner closely with experienced engineers, tackle ambitious technical projects, and play a key part in scaling the data platform that supports Abnormal's rapid growth.
• This is an opportunity to grow your skills, take ownership of important technical areas at Abnormal, and set yourself up for advancement into senior or technical leadership roles.

What you will do:
• Design, build, and operate core components of Abnormal's data platform.
• Develop self-serve tools and services that empower internal teams to adopt and scale data systems with minimal operational overhead.
• Automate infrastructure and operations to ensure reliability, performance, and scalability with minimal human intervention.
• Apply cutting-edge GenAI techniques to build smarter developer and operator experiences.
• Collaborate across engineering teams to identify and close scalability or reliability gaps in their systems, leveraging our platform capabilities.
• Own the end-to-end delivery of complex features or systems that bring clear value to internal stakeholders.

🎯 Requirements
• 3+ years of professional software engineering experience, with a focus on backend systems, distributed systems, or infrastructure.
• A strong foundation in software development: clean, maintainable, and testable code.
• Strong software engineering fundamentals, with an emphasis on solving engineering problems for long-term scale, reliability, and ease of use.
• Strong experience with Python or Go, and hands-on expertise with AWS and/or Databricks.
• Experience with one or more technologies in our stack: relational databases, RocksDB, OpenSearch/ElasticSearch, Redis, Kafka, Spark, Airflow, DBT, etc.
• Excellent communication skills, with a proven ability to work effectively with remote and cross-functional teams.
• A track record of owning and delivering technical projects with ambiguous requirements.
• A growth mindset and a strong sense of ownership over your career development.

🏖️ Benefits
• Bonus
• Restricted stock units (RSUs)
• Benefits
• Individual compensation packages are based on factors unique to each candidate, including their skills, experience, qualifications, and other job-related reasons. We know that benefits are also an important piece of your total compensation package. Learn more about our Compensation and Equity Philosophy on our Benefits & Perks page.