Job Title: Software Engineer
Company Name: Global KTech
Job Url: https://www.simplyhired.com/job/r7w9nHCmFDTmFi85hkltcIGUatxzDbPch1vDEyqLv5_oEOR0_pNliA

Job Details
Contract, $50 - $55 an hour, Remote

Qualifications
- Data model design
- Performance tuning
- Internal controls
- Continuous Delivery (CD) implementation
- Cloud data warehouses
- Web development
- Spark
- NoSQL
- Mid-level
- Scalable systems
- Snowflake
- Java
- SQL
- Data quality management
- Spark implementation
- Machine learning
- Mentoring
- Scalability
- Web applications
- Agile software development
- JSON
- Redshift
- AI
- Batch data processing
- Python
- Dimensional modeling
- Database software proficiency

Full Job Description
We are seeking a dynamic and innovative Software Engineer to join our technology team. In this role, you will design, develop, and maintain software solutions that drive our business forward. Your expertise will help shape our digital products, ensuring they are scalable, reliable, and aligned with industry best practices. This position offers the opportunity to work with a diverse tech stack, including web development, cloud services, and application programming interfaces (APIs), in a fast-paced environment that values creativity and technical excellence.

Job Description
- Extensive experience in AWS and in the design, implementation, and maintenance of data pipelines using Java-Spark and PySpark.
- Proficient in SQL; able to write and execute complex queries to perform curation and build the views (single- and multi-dimensional) required by end users.
- Proven experience in performance tuning, ensuring jobs run at optimal levels without performance bottlenecks.
- Advanced proficiency in cloud data warehouses: Snowflake and AWS Redshift.
- Demonstrated knowledge of software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile, etc.).
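To illustrate the "curation and build views" requirement above, here is a minimal sketch using Python's built-in SQLite module; the table, column, and view names are hypothetical stand-ins, not anything specific to this role:

```python
import sqlite3

# In-memory database with a hypothetical raw table (all names are illustrative).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE raw_orders (order_id INTEGER, region TEXT, amount REAL, status TEXT);
INSERT INTO raw_orders VALUES
  (1, 'east', 120.0, 'complete'),
  (2, 'east', 80.0,  'canceled'),
  (3, 'west', 200.0, 'complete');

-- A curated view for end users: completed revenue aggregated by region.
CREATE VIEW curated_revenue AS
SELECT region, COUNT(*) AS orders, SUM(amount) AS revenue
FROM raw_orders
WHERE status = 'complete'
GROUP BY region;
""")

rows = conn.execute(
    "SELECT region, orders, revenue FROM curated_revenue ORDER BY region"
).fetchall()
print(rows)  # [('east', 1, 120.0), ('west', 1, 200.0)]
```

In a warehouse such as Snowflake or Redshift the same pattern applies at scale: raw tables are curated into views that expose clean, aggregated dimensions to end users.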
- Solid understanding of agile methodologies and of CI/CD, application resiliency, and security.
- Proficiency in data structures; data serialization formats such as JSON, Avro, Protobuf, or similar; big-data storage formats such as Parquet, Iceberg, or similar; and data processing methodologies such as batch, micro-batching, or streaming.
- One or more data modeling techniques such as Dimensional, Data Vault, Kimball, or Inmon.

Role Responsibilities
- Supports review of controls to ensure sufficient protection of enterprise data.
- Advises on and makes custom configuration changes in one to two tools to generate a product at the business's or customer's request.
- Updates logical or physical data models based on new use cases.
- Frequently uses SQL and understands NoSQL databases and their niche in the marketplace.
- Adds to a team culture of diversity, opportunity, inclusion, and respect.
- Develops enterprise data models; designs, develops, and maintains large-scale data processing pipelines and infrastructure; leads code reviews and provides mentoring through the process; drives data quality; ensures data accessibility to analysts and data scientists; ensures compliance with data governance requirements; and ensures data engineering practices align with business goals.

Job Title: Software Engineer
Primary Skills: Java, PySpark, AWS
Experience: 2 years
Pay: $50.00 - $55.00 per hour
Experience:
- Python: 3 years (Required)
- Java: 4 years (Required)
- PySpark: 2 years (Required)
Work Location: Remote
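The serialization and batch-processing skills listed above can be sketched in a few lines of plain Python. This is a pure-Python stand-in for the map/aggregate shape a Java-Spark or PySpark batch job applies at scale; the field names and the newline-delimited JSON input are assumptions for illustration:

```python
import json

# Hypothetical newline-delimited JSON batch (in practice read from S3/HDFS).
batch = [
    '{"user": "a", "bytes": 100}',
    '{"user": "b", "bytes": 250}',
    '{"user": "a", "bytes": 50}',
]

# Deserialize each record, then aggregate per user -- the same shape a
# Spark batch job would express with groupBy/agg over a distributed dataset.
totals = {}
for line in batch:
    record = json.loads(line)
    totals[record["user"]] = totals.get(record["user"], 0) + record["bytes"]

print(totals)  # {'a': 150, 'b': 250}
```

Formats such as Avro or Parquet replace the JSON step with schema-aware, columnar serialization, but the batch aggregation pattern stays the same.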