Job Title: Big Data Engineer
Company Name: Chuwa America
Job Url: https://www.simplyhired.com/job/Ew7EfDQ58DUsv9JkD_2hXL73ckQxhBxYGr9cA4fYCzWIN0HF9_S0ig

Job Description:

Lead Big Data Engineer - Only USC or GC
Chuwa America - Remote

Job Details
- Contract
- $55 - $60 an hour
- Posted 5 days ago

Qualifications
Performance tuning, Data modeling, Data visualization software proficiency, Data transformation pipeline development, DevOps, Oozie, Spark, R, UNIX, Google Cloud Platform, Apache Hive, Scalable systems, Tableau, Java, SQL, AWS, Spark implementation, Scala, Continuous integration, Apache Pig, Software documentation, APIs, Scalability, Kafka, Distributed computing, Data analytics tools, Real-time data processing implementation, Senior level, Cross-functional collaboration, Python, Cross-functional communication, Hadoop, 10 years, Database software proficiency

Full Job Description

Job Title: Big Data Engineer
Experience: 10+ Years
Location: Remote

Job Summary
We are seeking an experienced Big Data Engineer with strong expertise in the Hadoop ecosystem, Apache Spark, Python, and Scala programming. The ideal candidate will be responsible for designing, developing, and maintaining scalable data pipelines and big data solutions. This role requires strong hands-on experience with distributed data processing frameworks, cloud platforms, and data engineering tools.

Key Responsibilities
- Design, develop, and maintain scalable big data pipelines and data processing systems.
- Work with large-scale data using Hadoop ecosystem tools such as Hive, Pig, and Oozie.
- Develop distributed data processing applications using Apache Spark and Scala.
- Build and optimize ETL pipelines for structured and unstructured data.
- Work with RDD APIs in Spark for large-scale data transformations and analytics.
- Implement data streaming solutions using Kafka for real-time data processing.
- Collaborate with cross-functional teams, including data analysts, architects, and business stakeholders.
- Deploy and manage big data solutions on AWS or GCP cloud platforms.
- Ensure performance optimization, scalability, and reliability of data systems.
- Maintain documentation, follow best practices, and ensure high-quality code standards.

Required Skills
- 10+ years of experience in Big Data and Data Engineering.
- Strong hands-on experience with the Hadoop ecosystem (Hive, Pig, Oozie).
- Expertise in Apache Spark and Scala programming.
- Hands-on programming experience in Python and Core Java.
- Strong understanding of Spark RDD APIs and distributed computing.
- Experience with Kafka or other streaming frameworks.
- Solid understanding of data warehousing concepts and data modeling techniques.
- Strong proficiency in SQL and working with large datasets.
- Experience working in Linux/Unix environments.
- Experience with AWS or GCP cloud platforms.
- Knowledge of ETL design and development.

Good to Have
- Experience with data visualization and analytics tools such as Tableau or R.
- Experience with real-time data processing architectures.
- Exposure to CI/CD pipelines and DevOps practices.
- Experience with performance tuning and optimization in Spark/Hadoop environments.

Pay: $55.00 - $60.00 per hour

Application Question(s): Are you comfortable working on our W2?

Work Location: Remote