
Data Engineer (SQL, Python, Spark, Hive, Hadoop)



Job description

Unison Group · Kuala Lumpur, Federal Territory of Kuala Lumpur, Malaysia

We are looking for a skilled Data Engineer with 5+ years of hands-on experience in designing, developing, and optimizing big data pipelines and solutions.

The ideal candidate will have strong expertise in SQL, Python, Apache Spark, Hive, and Hadoop ecosystems and will be responsible for building scalable data platforms to support business intelligence, analytics, and machine learning use cases.

Key Responsibilities

  • Design, develop, and maintain scalable ETL pipelines using Spark, Hive, and Hadoop (an illustrative sketch follows this list).

  • Write efficient SQL queries for data extraction, transformation, and analysis.

  • Develop automation scripts and data processing workflows using Python.

  • Optimize data pipelines for performance, reliability, and scalability.

  • Work with structured and unstructured data from multiple sources.

  • Ensure data quality, governance, and security throughout the data lifecycle.

  • Collaborate with cross-functional teams (Data Scientists, Analysts, and Business stakeholders) to deliver data-driven solutions.

  • Monitor and troubleshoot production data pipelines.
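
To give a flavor of the day-to-day work, below is a minimal, illustrative PySpark sketch of the kind of ETL step described above. The table names (raw.events, analytics.daily_event_counts) and columns (event_ts, event_type) are hypothetical assumptions for illustration only, not details of this role's actual stack.

    # Illustrative only: a minimal PySpark ETL step reading from Hive and
    # writing an aggregated result back. Table names and columns are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (
        SparkSession.builder
        .appName("daily-event-counts")
        .enableHiveSupport()          # enables reading/writing Hive tables
        .getOrCreate()
    )

    # Extract: read a (hypothetical) raw events table managed by Hive.
    events = spark.table("raw.events")

    # Transform: keep valid rows and aggregate events per day and type.
    daily_counts = (
        events
        .filter(F.col("event_type").isNotNull())
        .withColumn("event_date", F.to_date("event_ts"))
        .groupBy("event_date", "event_type")
        .agg(F.count("*").alias("event_count"))
    )

    # Load: write back to Hive, partitioned by date for efficient querying.
    (
        daily_counts.write
        .mode("overwrite")
        .partitionBy("event_date")
        .saveAsTable("analytics.daily_event_counts")
    )

    spark.stop()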

Requirements

Required Skills & Qualifications

  • 5+ years of experience in Data Engineering / Big Data development.

  • Strong expertise in SQL (query optimization, performance tuning, stored procedures).

  • Proficiency in Python for data manipulation, scripting, and automation.

  • Hands-on experience with Apache Spark (PySpark/Scala) for large-scale data processing.

  • Solid knowledge of Hive for querying and managing data in Hadoop environments.

  • Strong working knowledge of Hadoop ecosystem (HDFS, YARN, MapReduce, etc.).

  • Experience with data pipeline orchestration tools (Airflow, Oozie, or similar) is a plus; a small illustrative Airflow sketch follows this list.

  • Familiarity with cloud platforms (AWS, Azure, or GCP) is preferred.

  • Excellent problem-solving, debugging, and communication skills.
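
To illustrate the orchestration point above, here is a small, hypothetical Apache Airflow sketch that schedules a Spark job once a day via spark-submit. The DAG id, schedule, and script path are illustrative assumptions; Oozie or a cloud-native scheduler would serve the same purpose.

    # Illustrative only: a minimal Airflow DAG that runs a PySpark ETL script daily.
    # The DAG id, schedule, and script path are hypothetical.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="daily_event_counts",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",   # run once per day
        catchup=False,
    ) as dag:
        # Submit the PySpark ETL script to the cluster via spark-submit.
        run_etl = BashOperator(
            task_id="run_spark_etl",
            bash_command=(
                "spark-submit --master yarn --deploy-mode cluster "
                "/opt/jobs/daily_event_counts.py"
            ),
        )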

Seniority level

Mid-Senior level

Employment type

Full-time

Job function

Other

Industries

IT Services and IT Consulting



Required Skill Profession

IT & Technology


