Skills


Job description

● Design and architect data storage solutions, including databases, data lakes, and
warehouses, using Azure services with Databricks' Delta Lake, including
metadata management and data cataloguing.
● Create, manage, and optimise data pipelines for ingesting, processing, and
transforming data using Azure services such as Azure Functions and
Databricks for advanced data processing, and Informatica IDMC for data
integration and quality.
● Integrate data from internal and external sources into Azure and Databricks
environments, ensuring data consistency and quality across integration,
transformation, and governance.
● Develop ETL (Extract, Transform, Load) processes to cleanse, transform, and
enrich data for analytical use, leveraging Databricks' Spark capabilities.
● Monitor and optimise data processing and query performance in both Azure
and Databricks environments, making adjustments as needed to meet
performance and scalability requirements.
● Implement security best practices and data encryption methods to protect
sensitive data in both Azure and Databricks, while ensuring compliance with
data privacy regulations.
● Automate routine tasks, such as data ingestion, transformation, and
monitoring, using services like Azure Functions and Databricks Jobs.
● Maintain clear and comprehensive documentation of data infrastructure,
pipelines, and configurations in both Azure and Databricks environments.
● Collaborate with cross-functional teams, including data scientists, analysts, and
software engineers, to understand data requirements and deliver appropriate
solutions across Azure and Databricks.
● Identify and resolve data-related issues and provide support to ensure data
availability and integrity across Azure, Databricks, and Informatica IDMC
environments.
● Optimise Azure and Databricks resource usage to control costs while meeting
performance and scalability requirements.
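The ETL duties above can be sketched in miniature. The following is a plain-Python stand-in for a cleanse-and-enrich step; in the actual role this would run as PySpark on Databricks writing to Delta tables, and all field, threshold, and function names here are illustrative:

```python
# Minimal ETL sketch: extract -> transform (cleanse + enrich) -> load.
# Plain dicts stand in for Spark DataFrame rows; names are illustrative.

def extract(raw_rows):
    """Ingest raw records (e.g. from a data-lake landing zone)."""
    return list(raw_rows)

def transform(rows):
    """Cleanse and enrich: drop rows missing a key, normalise casing,
    and derive a simple analytical flag."""
    cleaned = []
    for row in rows:
        if not row.get("customer_id"):
            continue  # cleanse: discard records without a primary key
        cleaned.append({
            "customer_id": row["customer_id"],
            "country": (row.get("country") or "unknown").strip().lower(),
            # enrich: derived field for downstream analytics
            "is_high_value": row.get("order_total", 0) > 1000,
        })
    return cleaned

def load(rows, sink):
    """Append transformed rows to a sink (stand-in for a Delta table write)."""
    sink.extend(rows)
    return len(rows)

if __name__ == "__main__":
    raw = [
        {"customer_id": "C1", "country": " VN ", "order_total": 1500},
        {"customer_id": None, "country": "US", "order_total": 200},
    ]
    table = []
    load(transform(extract(raw)), table)
    print(table)  # only the cleansed, enriched row survives
```

On Databricks the same shape typically appears as DataFrame operations (`filter`, `withColumn`, `write.format("delta")`) rather than Python loops.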

Job requirements

● 5-7 years of experience in data engineering, with expertise in Azure cloud
services, Databricks, and Spark
● Hands-on data migration experience (e.g. code migration from on-premises
Informatica PowerCenter to the cloud version, data migration from
Teradata/Oracle/PostgreSQL to Databricks)
● Proficiency in programming languages such as Python or Java for building data
pipelines on Azure
● Ability to evaluate potential technical solutions and make recommendations to
resolve data issues, especially performance assessment of complex data
transformations and long-running data processes
● Strong knowledge of SQL and NoSQL databases
● Familiarity with data modelling and schema design
● Azure and Databricks certifications are a plus
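To illustrate the SQL and schema-design skills listed above, here is a hedged sketch of a tiny star schema and an analytical query; `sqlite3` stands in for the warehouse engine, and all table and column names are invented:

```python
import sqlite3

# Tiny star-schema sketch: one fact table keyed to one dimension table.
# sqlite3 stands in for a warehouse engine; all names are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_customer (
        customer_id TEXT PRIMARY KEY,
        country     TEXT NOT NULL
    );
    CREATE TABLE fact_orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id TEXT NOT NULL REFERENCES dim_customer(customer_id),
        amount      REAL NOT NULL
    );
""")
conn.executemany("INSERT INTO dim_customer VALUES (?, ?)",
                 [("C1", "vn"), ("C2", "us")])
conn.executemany("INSERT INTO fact_orders VALUES (?, ?, ?)",
                 [(1, "C1", 120.0), (2, "C1", 80.0), (3, "C2", 50.0)])

# Typical analytical query: revenue by country.
rows = conn.execute("""
    SELECT d.country, SUM(f.amount)
    FROM fact_orders f
    JOIN dim_customer d USING (customer_id)
    GROUP BY d.country
    ORDER BY d.country
""").fetchall()
print(rows)  # [('us', 50.0), ('vn', 200.0)]
```

The same fact/dimension split carries over directly to Delta Lake tables on Databricks, where the join and aggregation would be expressed in Spark SQL.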

Working hours

Weekdays: Monday to Friday


Candidate benefits

No probation period, receive 100% of the salary from day one
Opportunity to work in teams with top domestic and international IT experts
Chance to participate in ambitious projects across various countries, gain exposure to the latest technologies, and learn from talented colleagues
Work in a young, dynamic, modern, and multicultural environment; regular internal communications and events on holidays
Career advancement based on performance, with corresponding promotions and salary increases
Access to soft skills training (logical thinking, creative thinking, communication skills, project management, negotiation skills, etc.) and Japanese language classes
And many other attractive benefits...

Work location

100% remote


Salary

30-40 million VND

Details

  • Experience: 5 years
  • Education: not required
  • Level: Senior
  • Work type: Remote
  • Application deadline: 2025-08-10
  • Openings: 1
  • Interview: 1 round
  • Language: conversational English