
Senior Data Engineer (AWS, DevOps, Big Data) - Finance - London

Posted 19 days 3 hours ago by Salt Search

£500 - £650 Daily
Contract
London, United Kingdom
Job Description

Data Engineer (AWS, DevOps, Big Data) - Finance - London

Day rate: £550 - £650 inside IR35

Duration: 6 months

Start: ASAP

Hybrid: 2-3 days in the office

My new client is looking for a highly skilled Data Engineer with expertise in cloud DevOps, big data administration, and data engineering on AWS. The ideal candidate will have a deep understanding of AWS Lake Formation, data product concepts, and Spark job management, including auditing, monitoring, and performance tuning. This role involves creating and managing services and tools to support a multi-tenant environment, ensuring optimal performance and scalability.

Key Responsibilities:

  • Cloud DevOps & Big Data Administration:
    • Manage and optimize big data environments on AWS, with a focus on efficient administration and maintenance.
    • Leverage AWS Lake Formation to design and implement data lakehouse architectures (as distinct from data fabric approaches), ensuring data integrity and accessibility.
  • Data Engineering & Spark Management:
    • Develop and maintain Spark jobs, with a focus on auditing, monitoring, and instrumentation to ensure reliability and performance.
    • Perform Spark performance tuning, including understanding and applying Spark 3+ features, such as Adaptive Query Execution (AQE) and job-level resource management.
    • Create services and tools to manage a multi-tenant environment, ensuring seamless data operations across tenants.
  • Infrastructure as Code (IaC):
    • Utilize Terraform for infrastructure provisioning and management, ensuring scalable and secure environments.
    • Integrate with AWS Glue and Hive for data processing and management, optimizing workflows for large-scale data operations.
  • Data Storage & Management:
    • Work with data storage formats like Parquet and Hudi to optimize data storage and retrieval.
    • Implement and manage IAM policies for secure data access and management across AWS services.
  • Collaboration & Continuous Improvement:
    • Collaborate with cross-functional teams, including data scientists, analysts, and other engineers, to develop and deploy data solutions.
    • Continuously improve data engineering practices, leveraging new tools and techniques to enhance performance and efficiency.
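To give a flavour of the Spark tuning work described above, here is a minimal sketch of the Spark 3+ settings such a pass typically touches: the Adaptive Query Execution (AQE) switches and job-level resource controls. The property keys are real Spark configuration names; the values are illustrative placeholders, not the client's actual settings.

```python
# Illustrative Spark 3+ tuning configuration. Keys are genuine Spark
# config properties; the values are example placeholders only.
SPARK_TUNING = {
    # Adaptive Query Execution (Spark 3+): re-optimizes plans at runtime
    "spark.sql.adaptive.enabled": "true",
    "spark.sql.adaptive.coalescePartitions.enabled": "true",
    "spark.sql.adaptive.skewJoin.enabled": "true",
    # Job-level resource management
    "spark.executor.memory": "8g",
    "spark.executor.cores": "4",
    "spark.dynamicAllocation.enabled": "true",
    "spark.dynamicAllocation.maxExecutors": "20",
}

def to_submit_flags(conf: dict) -> list:
    """Render a config dict as spark-submit --conf flags."""
    return [f"--conf {k}={v}" for k, v in sorted(conf.items())]
```

In practice these flags would be passed to `spark-submit` (or set per-job in code), with the AQE skew-join and partition-coalescing switches doing much of the heavy lifting for uneven data.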

Required Skills:

  • Cloud DevOps & Big Data:
    • Extensive experience in cloud DevOps and big data administration on AWS.
    • Proficiency in AWS Lake Formation, with a strong understanding of data lakehouse versus data fabric concepts.
  • Programming & Data Engineering:
    • Expertise in Python for data processing and automation.
    • Deep knowledge of Spark internals (Spark 3+ features), including architecture, events, system metrics, AQE, and job-level resource management.
  • Tech Stack:
    • Strong hands-on experience with Terraform for infrastructure management.
    • Experience with data formats such as Hudi and Parquet.
    • Familiarity with AWS Glue, Hive, and IAM for data management and security.
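As an illustration of the IAM/Lake Formation side of the skill set, here is a sketch of a least-privilege read-only policy of the kind used for Lake Formation-governed data access. The IAM actions are real (`lakeformation:GetDataAccess` is how engines obtain temporary credentials to registered lake locations); the bucket name and exact statement layout are hypothetical examples, not this role's actual policy.

```python
import json

# Illustrative least-privilege policy: Glue Catalog reads, Lake Formation
# credential vending, and read access to one example lake bucket.
def read_only_lake_policy(bucket: str) -> str:
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "LakeFormationDataAccess",
                "Effect": "Allow",
                "Action": ["lakeformation:GetDataAccess"],
                "Resource": "*",
            },
            {
                "Sid": "GlueCatalogRead",
                "Effect": "Allow",
                "Action": ["glue:GetDatabase", "glue:GetTable", "glue:GetPartitions"],
                "Resource": "*",
            },
            {
                "Sid": "ReadLakeBucket",
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
            },
        ],
    }
    return json.dumps(policy, indent=2)
```

When Lake Formation permissions are in force, the fine-grained table/column grants live in Lake Formation itself and the IAM policy stays coarse, which is what keeps multi-tenant access manageable.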

Good to Have:

  • Additional Tools & Technologies:
    • Knowledge of Iceberg and its application in data storage and management.
    • Experience with Airflow for workflow automation.
    • Familiarity with Terragrunt for managing Terraform configurations.
    • Understanding of DynamoDB and its integration within data environments.