Company Name: AlphaPoint
Job Details: Remote, Full Time
Job Location: Austin, Texas, United States
Job Url: https://hiring.cafe/viewjob/8za77f7ik291lucb
Job Description: Cloud Data Engineer @ AlphaPoint

Responsibilities: Build scalable infrastructure, architect backend data solutions, develop ETL pipelines.

Requirements Summary: 7-10 years of software development; proficient in Python, Node.js, and/or Java; familiarity with distributed systems and data modeling; ETL pipelines; cloud experience.

Technical Tools Mentioned: Python, Node.js, Java, Apache Airflow, Spark, Databricks, MongoDB, Cassandra, DynamoDB, CosmosDB, Kafka, AWS Kinesis, GCP Dataflow, Redis, Elasticsearch, Solr, RabbitMQ, AWS SQS, GCP Cloud Tasks, Delta Lake, Parquet, AWS, GCP, Azure, TDD, Git

About Us
AlphaPoint's AI Labs team of engineers and AI scientists solves complex business problems by bridging the gap between transformative breakthroughs in AI technology and increasingly competitive markets. Our team develops and applies the latest generative AI, data, and knowledge modeling technologies to large-scale problems, right at the edge of what is possible. AlphaPoint is a financial technology company powering digital asset exchanges and brokerages worldwide.
The Role
- Build a scalable and highly performant infrastructure to process batch and real-time workloads
- Work with the AI engineering team and external engineering teams to monitor and extract data from a vast array of data sources
- Implement ETL data pipelines
- Architect backend data solutions to support various microservices
- Develop third-party integrations with large-scale legacy systems

You
- Bachelor's degree in computer science or a similar discipline
- 7-10 years of experience in software development
- Proficient in Python, Node.js, and/or Java
- Familiarity with the basic principles of distributed computing and data modeling
- Experience building ETL pipelines using Apache Airflow and Spark, Databricks, or other pipeline orchestration tools
- Experience with NoSQL databases such as MongoDB, Cassandra, DynamoDB, or CosmosDB
- Experience with real-time stream processing systems such as Kafka, AWS Kinesis, or GCP Dataflow
- Experience with Redis, Elasticsearch, and Solr
- Experience with messaging systems such as RabbitMQ, AWS SQS, and GCP Cloud Tasks
- Ability to find creative ways to harvest data in unstructured formats by scraping, modeling, and ingesting data into semantic databases and graphs
- Familiarity with Delta Lake and Parquet files
- Familiarity with one or more cloud providers: AWS, GCP, or Azure
- Proficiency with Test-Driven Development (TDD)
- Proficiency with Git, using services such as GitHub or Bitbucket

Additional Preferred Qualifications
- Experience in a production environment with large-scale knowledge systems
- Great written and verbal communication skills
- Team player hungry to learn from and teach fellow team members

Benefits
- 100% remote work environment
- Competitive compensation
- Equity or stock options (if applicable)
- A culture of autonomy, experimentation, and learning
- Opportunity to make a real impact on the company's trajectory