Specialist Data Engineer job at Absa Bank Kenya

Job Summary

Work embedded as a member of a squad, or across multiple squads, to produce, test, document and review algorithms & data-specific source code that supports the deployment & optimisation of data retrieval, processing, storage and distribution for a business area.

Job Description

Data Architecture & Data Engineering

Understand the technical landscape and bank-wide architecture that is connected to or dependent on the supported business area in order to effectively design & deliver data solutions (architecture, pipelines etc.)

Translate / interpret the data architecture direction and associated business requirements, and leverage expertise in analytical & creative problem solving to synthesise data solution designs (building a solution from its components) beyond the analysis of the problem

Participate in design thinking processes to successfully deliver data solution blueprints

Leverage state-of-the-art relational and NoSQL databases, as well as integration and streaming platforms, to deliver sustainable, business-specific data solutions.

Design data retrieval, storage & distribution solutions (and/or components thereof), including contributing to all phases of the development lifecycle, e.g. the design process

Develop high-quality data processing, retrieval, storage & distribution designs in a test-driven & domain-driven / cross-domain environment

Build analytics tools that utilize the data pipeline by quickly producing well-organised, optimized, and documented source code & algorithms to deliver technical data solutions

Create & maintain sophisticated CI/CD pipelines (authoring & supporting CI/CD pipelines in Jenkins or similar tools and deploying to multi-site environments, supporting and managing your applications all the way to production)

Automate tasks through appropriate tools and scripting technologies e.g. Ansible, Chef

Debug existing source code and polish feature sets.

Assemble large, complex data sets that meet business requirements & manage the data pipeline

Build infrastructure to automate extremely high volumes of data delivery

Create data tools for analytics and data science teams that assist them in building and optimizing data sets for the benefit of the business

Ensure designs & solutions support the technical organisation principles of self-service, repeatability, testability, scalability & resilience

Apply general design patterns and paradigms to deliver technical solutions

Inform & support the infrastructure build required for optimal extraction, transformation, and loading of data from a wide variety of data sources
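
The extract-transform-load pattern behind this responsibility can be sketched in plain Python. This is a minimal illustration using only the standard library; the table, column names and the KES banding rule are hypothetical examples, not Absa systems.

```python
import csv
import io
import sqlite3

# Illustrative ETL sketch: extract CSV records, transform them,
# and load them into a relational store. All names are hypothetical.

RAW_CSV = """account_id,balance_kes
A001,1500.50
A002,275.00
A003,9820.25
"""

def extract(raw):
    """Extract: parse raw CSV text into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows):
    """Transform: cast types and derive a simple banding column."""
    out = []
    for row in rows:
        balance = float(row["balance_kes"])
        band = "high" if balance >= 1000 else "low"
        out.append((row["account_id"], balance, band))
    return out

def load(records, conn):
    """Load: write transformed records into a SQLite table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS balances (account_id TEXT, balance REAL, band TEXT)"
    )
    conn.executemany("INSERT INTO balances VALUES (?, ?, ?)", records)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
high = conn.execute("SELECT COUNT(*) FROM balances WHERE band = 'high'").fetchone()[0]
print(high)  # 2 accounts at or above the 1000 KES band
```

In a production pipeline each phase would typically be a separate, independently testable and schedulable step; keeping them as pure functions, as above, is what makes that separation possible.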

Support the continuous optimisation, improvement & automation of data processing, retrieval, storage & distribution processes

Ensure the quality assurance and testing of all data solutions aligned to the QA Engineering & broader architectural guidelines and standards of the organisation

Implement & align to the Group Security standards and practices to ensure the indisputable separation, security & quality of the organisation’s data

Meaningfully contribute to & ensure solutions align to the design & direction of the Group Architecture, in particular data standards, principles, preferences & practices. Short-term deployments must align to strategic long-term delivery.

Meaningfully contribute to & ensure solutions align to the design and direction of the Group Infrastructure standards and practices, e.g. OLAs, IaaS, PaaS, SaaS, containerisation etc.

Monitor the performance of data solution designs & ensure ongoing optimization of data solutions

Stay ahead of the curve on data processing, retrieval, storage & distribution technologies & processes (global best practices & trends) to ensure best practice

People

Coach & mentor other engineers

Conduct peer reviews, testing and problem solving within and across the broader team

Build data science team capability in the use of data solutions

Risk & Governance

Identify technical risks and mitigate these (pre, during & post deployment)

Update / design all application documentation aligned to the organisation’s technical standards and risk / governance frameworks

Create business cases & solution specifications for various governance processes (e.g. CTO approvals)

Participate in incident management & DR activity – applying critical thinking, problem solving & technical expertise to get to the bottom of major incidents

Deliver on time & on budget (always)

Education & Experience

  • Proficiency in Hadoop required
  • Spark and/or AWS knowledge a distinct advantage
  • Experienced in lake formation
  • Ability to adapt to in-house built ETL tools
  • Bachelor’s Degree: Information Technology
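
Hadoop proficiency, the one hard requirement above, centres on the map/shuffle/reduce paradigm. A toy sketch of that pattern in plain Python (running locally on in-memory data, with no Hadoop APIs or cluster assumed):

```python
from collections import defaultdict
from itertools import chain

# Toy sketch of the MapReduce pattern that Hadoop implements at scale.
# The documents are illustrative; a real job reads from HDFS.

documents = [
    "data engineering at scale",
    "data pipelines move data",
]

def map_phase(doc):
    """Map: emit a (key, 1) pair for every word in a document."""
    return [(word, 1) for word in doc.split()]

def shuffle_phase(pairs):
    """Shuffle: group emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate the grouped values per key."""
    return {key: sum(values) for key, values in groups.items()}

pairs = chain.from_iterable(map_phase(d) for d in documents)
counts = reduce_phase(shuffle_phase(pairs))
print(counts["data"])  # 3
```

Spark generalises the same idea (its `map`, `groupByKey` and `reduceByKey` transformations mirror the three phases above) while keeping intermediate data in memory rather than on disk.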

Vacancy title:
Specialist Data Engineer

[Type: FULL_TIME, Industry: Information Technology, Category: Computer & IT, Science & Engineering]

Jobs at:
Absa Bank Kenya

Deadline of this Job:
Wednesday, March 11 2026

Duty Station:
Kenya | Nairobi

Summary
Date Posted: Tuesday, March 3 2026, Base Salary: Not Disclosed


Work Hours: 8

Experience in Months: 36

Level of Education: bachelor degree

Job application procedure

Application Link: Click Here to Apply Now

