Azure Databricks

Job Summary

We are looking for a Big Data Platform Engineer with strong expertise in Azure Databricks to design, build, and manage scalable data platforms and pipelines. The role focuses on data processing, quality, governance, and cloud-based big data architectures.

Key Responsibilities

  • Build and optimize big data platforms using Databricks (preferred) or Snowflake/Cloudera/Palantir.
  • Design and manage Spark-based parallel compute environments, cluster configurations, and performance tuning.
  • Develop and maintain batch and streaming data pipelines using Databricks, Azure Data Factory, or dbt.
  • Implement data quality frameworks using Great Expectations or Apache Deequ.
  • Manage data storage and modeling, including RDBMS and cloud storage (ADLS preferred).
  • Ensure data governance, security (ACLs, PII), lineage, and schema evolution.
  • Support cloud administration activities including cost management, monitoring, backups, and automation via CLI/APIs.

Required Skills

  • Strong experience with Azure Databricks, Spark, and distributed data processing.
  • Hands-on experience with data lakes/lakehouses, data warehouses, and star/snowflake schemas.
  • Experience with Azure Cloud (preferred) or AWS/GCP.
  • Knowledge of data governance, streaming & batch processing, and schema management.

Ref. code:  455910
Posted on:  21 Apr 2026
Experience Level:  Executives
Contract Type:  Permanent
Location:  Kuala Lumpur, MY

Brand:  Capgemini
Professional Community:  Data & AI