Lead AWS Data Architect

  • Temporary
  • London
  • Negotiable GBP / Year

Our client, a leading global supplier of IT services, requires a Lead AWS Data Architect to be based at their client’s office in London, UK.

This is a hybrid role – you can work remotely in the UK and attend the London office two days per week.

This is a 6+ month temporary contract to start ASAP.

Day rate: competitive market rate

As a hands-on Lead AWS Data Architect, you will design, build, and continuously enhance our Payments Data Platform by ingesting ISO 20022 events into an AWS-based lakehouse. You will ensure best-in-class data governance, observability, and cost optimisation, while owning the data product end-to-end across the CX, Payments, and CPO domains. This includes managing the product backlog, SLAs, and data contracts, as well as delivering operational runbooks, defining SLOs, and producing quarterly cost and quality reports.

Key Responsibilities

  • Data products (To‑Be): Channel Ops Warehouse (~30‑day high‑perf layer) and Channel Analytics Lake (7+ yrs). Expose status and statements APIs with clear SLAs.
  • Platform architecture: S3/Glue/Athena/Iceberg Lakehouse, Redshift for BI/ops. QuickSight for PO/ops dashboards. Lambda/Step Functions for stream processing orchestration.
  • Streaming & ingest: Kafka (K4/K5/Confluent) and AWS MSK/Kinesis; connectors/CDC to DW/Lake. Partitioning, retention, replay, idempotency. EventBridge for AWS-native event routing.
  • Event contracts: Avro/Protobuf, Schema Registry, compatibility rules, versioning strategy.
  • As‑Is → To‑Be: Inventory APIs/File/SWIFT feeds and stores (Aurora Postgres, Kafka). Define migration waves, cutover runbooks.
  • Governance & quality: Data-as-a-product ownership, lineage, access controls, quality rules, retention.
  • Observability & FinOps: Grafana/Prometheus/CloudWatch for TPS, success rate, lag, spend per 1M events. Runbooks + actionable alerts.
  • Scale & resilience: Tens of millions of payments/day, multi-AZ/region patterns, pragmatic RPO/RTO.
  • Security: Data classification, KMS encryption, tokenization where needed, least‑privilege IAM, immutable audit.
  • Hands-on build: Python/Scala/SQL; Spark/Glue; Step Functions/Lambda; IaC (Terraform); CI/CD (GitLab/Jenkins); automated tests.
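To illustrate the streaming-ingest responsibilities above (partitioning, replay, idempotency under at-least-once delivery), here is a minimal, self-contained Python sketch. The event shape and class names are illustrative assumptions, not the client's actual schema or stack:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class PaymentEvent:
    # Hypothetical event shape for illustration only.
    event_id: str     # unique per event; the basis for idempotent ingest
    account: str      # partition key, preserving per-account ordering
    amount_pence: int


class IdempotentIngest:
    """Sketch of idempotent, replayable ingestion: duplicate deliveries
    (expected under at-least-once semantics) are detected by event_id,
    so replaying a stream never double-counts a payment."""

    def __init__(self) -> None:
        self.seen: set[str] = set()
        self.balances: dict[str, int] = {}

    def ingest(self, event: PaymentEvent) -> bool:
        if event.event_id in self.seen:
            return False  # duplicate delivery: skip, state unchanged
        self.seen.add(event.event_id)
        self.balances[event.account] = (
            self.balances.get(event.account, 0) + event.amount_pence
        )
        return True
```

In a production pipeline the dedupe set would live in a keyed store (e.g. a compacted topic or a database table) rather than in memory, but the principle is the same.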

Key Requirements

Essential Skills:

  • Streaming & EDA: Kafka (Confluent) and AWS MSK/Kinesis; Kinesis Firehose; ordering, replay, exactly-once and at-least-once semantics; EventBridge for event routing and filtering.
  • Schema management: Avro/Protobuf + Schema Registry (compatibility, subject strategy, evolution).
  • AWS data stack: S3/Glue/Athena, Redshift, Step Functions, Lambda; Kinesis & S3→Glue streaming pipelines; Glue Streaming; DLQ patterns.
  • Payments & ISO 20022: PAIN/PACS/CAMT, lifecycle modelling, reconstruction, SWIFT/file channel knowledge.
  • Governance: Data-mesh mindset, ownership, quality SLAs, access, retention, lineage.
  • Observability & FinOps: Build dashboards, alerts, cost KPIs; troubleshoot low throughput at scale.
  • Delivery: Production code, performance profiling, code reviews, automated tests, secure-by‑design.
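One of the patterns named above is the dead-letter queue (DLQ) for streaming pipelines. A minimal Python sketch of the idea, with a hypothetical handler and in-memory queue standing in for real broker infrastructure:

```python
def process_with_dlq(events, handler, max_retries=2):
    """DLQ pattern sketch: each event is retried up to max_retries
    times; events that still fail are diverted to the dead-letter
    queue instead of blocking the stream or being silently dropped."""
    dlq = []
    for event in events:
        for attempt in range(max_retries + 1):
            try:
                handler(event)
                break  # processed successfully, move on
            except Exception:
                if attempt == max_retries:
                    dlq.append(event)  # exhausted retries: dead-letter it
    return dlq
```

In an AWS deployment the DLQ would typically be an SQS queue or a dedicated Kafka topic, with alerting on its depth so poison messages are investigated rather than lost.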

Data Architecture Fundamentals (Must‑Have):

  • Logical data modelling: Entity‑relationship diagrams, normalization (1NF through Boyce‑Codd/BCNF), denormalization trade‑offs; functional dependencies & anomalies.
  • Physical data modelling: Table design, partitioning strategies, indexes; storage patterns for OLTP vs analytics.
  • Normalization & design: Normalize to 3NF/BCNF for OLTP; understand when to denormalize for queries; Data Vault, star schemas.
  • CQRS: Read/write segregation; event sourcing; state reconstruction; when CQRS is justified vs. when it is overkill.
  • Event‑Driven Architecture (EDA): Event-first design; aggregate boundaries; pub/sub patterns; orchestration; idempotency; at-least-once delivery.
  • Bounded contexts & domain modelling: Anti‑corruption layers, shared kernel, published language, ubiquitous language.
  • Entities, value objects & repositories: Domain entity identity; immutability; repository abstraction over persistence; temporal/versioned records.
  • Domain events & contracts: Schema versioning (Avro/Protobuf); backward/forward compatibility; event replay; mapping domain events to Kafka topics and Aurora tables.
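The event-sourcing and lifecycle-modelling fundamentals above can be sketched as a pure fold over an ordered event stream: current payment status is derived entirely from replaying events. The event and state names here are illustrative, not ISO 20022 message types:

```python
from typing import Iterable


def reconstruct_status(events: Iterable[str]) -> str:
    """Event-sourcing sketch: the payment's current status is a pure
    function of its ordered event history, so replaying the stream
    reconstructs state exactly. Unknown events are ignored."""
    transitions = {
        "initiated": {"accept": "accepted", "reject": "rejected"},
        "accepted": {"settle": "settled", "return": "returned"},
    }
    state = "initiated"
    for ev in events:
        # Invalid transitions leave the state unchanged.
        state = transitions.get(state, {}).get(ev, state)
    return state
```

Because the fold is deterministic, the same event log can feed both the ~30-day operational layer and the multi-year analytics lake without the two diverging.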

Desirable Skills:

  • QuickSight/Tableau, Redshift tuning; ksqlDB/Flink; Aurora Postgres internals.
  • Edge/API constraints (Apigee/API‑GW), mTLS/webhook patterns.

Due to the volume of applications received, unfortunately we cannot respond to everyone.

If you do not hear back from us within 7 days of sending your application, please assume that you have not been successful on this occasion.

Please do keep an eye on our website https://projectrecruit.com/jobs/ for future roles.
