Specialist Platform Engineer
2026-03-18T06:37:00+00:00
Absa Bank Limited
https://cdn.greatkenyanjobs.com/jsjobsdata/data/employer/comp_5295/logo/Absa%20Bank%20Limted.png
https://www.greatkenyanjobs.com/employers/company-detail/company-Absa-Bank-Limited-5295/nav-42
FULL_TIME
Nairobi
Nairobi
00100
Kenya
Banking
Computer & IT, Science & Engineering
2026-03-25T17:00:00+00:00
8
Background
Absa Bank Limited (Absa) is a wholly owned subsidiary of Barclays Africa Group Limited. Absa offers personal and business banking, credit cards, corporate and investment banking, wealth and investment management as well as bancassurance.
Job Summary
We are looking for an engineer to join our team to help build and maintain Kafka-based streaming applications and to support the Kafka platform across on-prem and Confluent Cloud environments. The role blends development, platform engineering, and observability work, providing a unique opportunity to work on distributed systems at scale.
Core Responsibilities:
- Develop, maintain, and optimize Kafka-based applications and event streaming pipelines using Java (Spring / Spring Boot), Python, or .NET.
- Work with distributed systems concepts: partitions, replication, fault-tolerance, scaling, and event-driven architectures.
- Contribute to provisioning, managing, and securing Kafka clusters both on-prem and in Confluent Cloud.
- Implement and maintain security and authorization mechanisms, including ACLs, Kerberos, SSL, and OAuth for Confluent Cloud.
- Automate infrastructure deployment and configuration using Terraform, Ansible, CloudFormation, Docker, or Kubernetes.
- Configure, monitor, and maintain observability for Kafka clusters, including metrics, alerts, and dashboards (e.g., Prometheus, Grafana, Confluent Control Center, Elasticsearch).
- Assist in troubleshooting production issues and perform root cause analysis.
- Collaborate closely with developers, DevOps/SRE teams, and other stakeholders to ensure reliable and performant streaming systems.
- Contribute to best practices for connector configuration, high availability, disaster recovery, and performance tuning, including streaming applications and pipelines built with Kafka Streams, ksqlDB, Apache Flink, and TableFlow.
Required Skills:
- Strong programming experience in Java (Spring / Spring Boot), Python, or .NET. Ability to write clean, maintainable, and performant code.
- Solid understanding of distributed systems principles and event-driven architectures.
- Hands-on experience with Kafka in production or strong ability to learn quickly.
- Knowledge of Kafka ecosystem components (Connect, Schema Registry, KSQL, MirrorMaker, Control Center, Kafka Streams, Apache Flink, TableFlow) is a plus.
- Familiarity with security best practices for Kafka, including ACLs, Kerberos, SSL, and OAuth.
- Experience with infrastructure as code and containerized environments.
- Experience with monitoring and observability tools for distributed systems.
Desirable Skills / Bonus Points:
- Experience with Confluent Cloud or other managed Kafka platforms.
- Experience with AWS.
- Experience building streaming pipelines across multiple systems and environments.
- Familiarity with CI/CD pipelines and automated deployments.
Behavioural / Soft Skills:
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.
- Ability to work independently and prioritize across multiple BAU and project tasks.
- Product-minded approach, focusing on delivering value and scalable solutions.
Education:
- BA/BSc/HND
JOB-69ba480cde72a
Vacancy title:
Specialist Platform Engineer
[Type: FULL_TIME, Industry: Banking, Category: Computer & IT, Science & Engineering]
Jobs at:
Absa Bank Limited
Deadline of this Job:
Wednesday, March 25 2026
Duty Station:
Nairobi
Summary
Date Posted: Wednesday, March 18 2026, Base Salary: Not Disclosed
Work Hours: 8
Experience in Months: 12
Level of Education: Bachelor's degree
Job application procedure
Application Link:
Click Here to Apply Now