Job Title: Data Engineer - Azure (Scala/Kafka)
Company Name: Tiger Analytics
Job Details: Remote, Full Time (United States)
Job Url: https://hiring.cafe/viewjob/yznwdlfraown908v

Job Description:

Responsibilities: build pipelines, data integration, stream data

Requirements Summary: Bachelor's degree; 8+ years in Data Engineering and analytics; strong Scala, Kafka, Azure, PySpark, Python, SQL; experience with Azure Data Factory, Databricks, Delta Lake; Unix experience; real-time streaming and API integration.

Technical Tools Mentioned: Scala, Kafka, Azure Data Factory, Azure Synapse, Python, SQL, PySpark, Apache Spark, Delta Lake, Azure Databricks, Unix, APIs

Description
Tiger Analytics is pioneering what AI and analytics can do to solve some of the toughest problems faced by organizations globally. We develop bespoke solutions powered by data and technology for several Fortune 100 companies. We have offices in multiple cities across the US, UK, India, and Singapore, and a substantial remote global workforce. We are expanding our Data Engineering practice and looking for Sr. Azure Data Engineers to join our growing team of analytics experts. The right candidate will have strong analytical skills, will be able to combine data from different sources, and will strive for efficiency by aligning data systems with business goals. This is a remote role for applicants based in the USA.

Requirements
- Bachelor's degree in Computer Science or a similar field
- 8+ years of experience in Data Engineering, plus several years in the Analytics space
- Strong proficiency in Scala; coding experience a must
- Strong proficiency in Kafka and Azure Data Factory for data pipelines; migration experience (Azure Synapse) a must
- Experience with real-time streaming, Kafka, and API integration
- Experience in PySpark
- Strong proficiency in Python programming
- Strong proficiency in SQL queries
- Experience building data pipelines using the Azure stack
- Experience using Apache Spark
- Good working experience with Delta Lake and ETL processing
- Prior experience working in a Unix environment
- Experience harmonizing raw data into a consumer-friendly format using Azure Databricks
- Experience extracting, querying, and joining large data sets at scale
- Experience building data ingestion pipelines using Azure Data Factory to ingest structured and unstructured data
- Experience in data wrangling and advanced analytic modeling is preferred
- Strong communication and organizational skills

Benefits
This position offers an excellent opportunity for significant career development in a fast-growing and challenging entrepreneurial environment with a high degree of individual responsibility.
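
Illustrative sketch of the kind of real-time pipeline the requirements describe: a Spark Structured Streaming job in Scala that reads events from Kafka and appends them to a Delta Lake table, as it might run on Azure Databricks. The broker address, topic name, and output and checkpoint paths are hypothetical placeholders, not details from the posting.

```scala
import org.apache.spark.sql.SparkSession

object KafkaToDeltaSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-to-delta-sketch")
      .getOrCreate()

    // Read a raw stream from Kafka; key and value arrive as bytes.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // hypothetical broker
      .option("subscribe", "events")                     // hypothetical topic
      .load()

    // Cast the payload to strings before downstream parsing/harmonization.
    val events = raw.selectExpr(
      "CAST(key AS STRING) AS key",
      "CAST(value AS STRING) AS value",
      "timestamp"
    )

    // Append records to a Delta table, with a checkpoint so the stream
    // can resume from its last committed Kafka offsets after a restart.
    val query = events.writeStream
      .format("delta")
      .outputMode("append")
      .option("checkpointLocation", "/tmp/checkpoints/events") // hypothetical path
      .start("/tmp/delta/events")                              // hypothetical path

    query.awaitTermination()
  }
}
```

The checkpoint location is what allows the job to restart cleanly and avoid reprocessing or dropping records when the stream is interrupted.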