Job URL: https://wellfound.com/jobs/3628645-staff-sr-staff-engineer-dem

Staff / Sr. Staff Engineer, DEM
Remote (India) | Full Time

Job Locations: San Francisco, London, Melbourne, Santa Clara, Tokyo, Taipei, Clayton, Bangalore Urban, Madrid, St. Louis
Visa Sponsorship: Not Available
Relocation: Allowed

About the role

Please note: this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based on their skills and experience.

The Digital Experience Management (DEM) Engineering team is responsible for building data ingestion, analytics, APIs, and AI/ML on time-series network, user, and application telemetry data generated from real user monitoring (RUM), synthetic monitoring, and endpoint monitoring on the Netskope SASE platform. We work closely with engineers and the product team to build solutions that solve real-world problems for network operators and IT admins.

What's in it for you

As part of the Digital Experience Management team, you will work on state-of-the-art, cloud-scale distributed systems at the intersection of networking, cloud security, and big data. You will help design and build systems that provide critical infrastructure for global Fortune 100 companies.

What you will be doing

Architecting and Building Distributed Data Systems
- Design and implement large-scale distributed platforms, microservices, and frameworks.
- Build data ingestion pipelines that can handle millions of telemetry events daily, both streaming and batch.
- Ensure systems are fault-tolerant, highly available, and cost-efficient at scale.

Translating Complex Business Needs into Software
- Partner closely with the product team to understand complex operational and analytical requirements.
- Convert these into usable, performant, and maintainable technical solutions.
Technical Leadership
- Serve as a technical mentor and architectural guide for senior developers.
- Lead architecture reviews, design discussions, and code reviews.
- Influence engineering practices and promote best-in-class observability, reliability, and security.

Innovation in DEM and SASE
- Build solutions that enhance user experience monitoring, correlating data across network, endpoint, and cloud layers.
- Integrate AI/ML models for root cause analysis, anomaly detection, and forecasting on time-series telemetry data.
- Continuously optimize data reliability, latency, and insight accuracy.

Required skills and experience

Core Technical Expertise
- 8+ years building scalable distributed systems in cloud-native environments.
- Expert-level ability to design and deliver complex technical solutions from architecture to production.
- Hands-on experience with data pipelines that handle massive throughput, both streaming (Kafka, Flink, Spark) and batch (ETL frameworks).
- Big data architecture expertise: data modeling, ingestion, transformation, and storage optimization (especially with systems like ClickHouse, Redis, Kafka).
- Experience with REST / OpenAPI.

Programming and Systems Design
- Strong in Go, Python, or Java, with advanced system design and algorithmic problem-solving skills.
- Deep understanding of networking and security protocols: TCP/IP, TLS, IPsec, GRE, PKI, DNS, BGP, routing.
- Strong grasp of web performance and telemetry concepts (latency, page load, route optimization).

Cloud, Containerization, and SRE
- Proven experience designing and deploying on AWS or other cloud providers.
- Expertise in Docker and Kubernetes orchestration.
- Deep understanding of SRE principles: monitoring, alerting, SLIs/SLOs, and incident management.
- History of driving performance improvements, cost optimization, and reliability.

Leadership and Communication
- Ability to mentor, influence, and set technical direction across teams.
- Ownership of a major product area.
- Excellent communication and documentation skills for diverse audiences.
- Proven track record of cross-functional collaboration with product, operations, and data science teams.

Good to have
- Hands-on experience building APM, NPM, or DEM products.
- Prior work with AI/ML for time-series analytics (root cause, anomaly detection, forecasting).
- Open source contributions related to big data, observability, or distributed systems.
- Advanced degree (MSCS or equivalent).

What Makes This Role Unique

This is not just another backend or data role: it sits at the intersection of cloud, network, and data intelligence. You'll be shaping the core observability and performance layer for some of the world's largest enterprise networks. It's deeply technical, but also strategic and influential. It blends big data, cloud-native distributed systems, and AI/ML insights, all critical to Netskope's SASE vision.

Education

BSCS or equivalent required; MSCS or equivalent strongly preferred.