About the Role
We are looking for a skilled Big Data Engineer (Hadoop/Spark) to design, develop, and maintain scalable big data solutions. The ideal candidate will work closely with data architects, analysts, and business stakeholders to build high-performance data pipelines and ensure efficient data processing across distributed systems.
Key Responsibilities
- Design, develop, and maintain big data pipelines using Hadoop and Spark.
- Build scalable data processing frameworks for batch and real-time workloads.
- Develop and optimize Spark jobs using Python or Scala.
- Work with HDFS, Hive, and related big data components.
- Ingest data from a variety of structured and unstructured sources.
- Monitor, troubleshoot, and tune the performance of big data clusters.
- Collaborate with cross-functional teams to translate business requirements into technical solutions.
- Ensure data quality, reliability, and system stability.
Ready to Apply?
Submit your application today and take the next step in your career journey with AMK Technology Sdn Bhd.