Data Engineer job at Burn

Vacancy title:
Data Engineer

[Type: FULL_TIME, Industry: Manufacturing, Category: Computer & IT, Science & Engineering]

Jobs at:
Burn

Deadline of this Job:
Saturday, December 20 2025

Duty Station:
Nairobi, Kenya

Summary
Date Posted: Thursday, December 11 2025, Base Salary: Not Disclosed


JOB DETAILS:

About the role

We are seeking a Data Engineer to own the continued maintenance, optimisation, and governance of our cloud data infrastructure. This role ensures the reliability and performance of our data warehouse, ETL pipelines, and self-service platforms while enabling new data integrations and supporting our growing AI and analytics initiatives.

The ideal candidate excels in operational excellence, cost optimisation, and improving data models and processes as our data ecosystem scales.

Duties and Responsibilities

Data Platform Maintenance & Monitoring

  • Monitor daily ETL workflows, data pipelines, and scheduled jobs to ensure high availability and timely delivery.
  • Troubleshoot pipeline failures, performance bottlenecks, and data quality issues.
  • Ensure adherence to SLAs for data freshness and system availability (an illustrative pipeline sketch follows this list).
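
For illustration only, a minimal sketch of the kind of SLA-aware, scheduled ETL job this role would monitor, assuming Apache Airflow 2.x (Airflow is named in the requirements below). The DAG id, schedule, task, and SLA values are hypothetical placeholders, not Burn's actual pipelines.

```python
# Hypothetical sketch: a daily ETL job with retries and an SLA, assuming Airflow 2.x.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def load_daily_sales(**context):
    # Placeholder for an extract-transform-load step against the warehouse.
    print(f"Loading sales data for {context['ds']}")


with DAG(
    dag_id="daily_sales_etl",              # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="0 2 * * *",                  # run daily at 02:00
    catchup=False,
    default_args={
        "retries": 2,                      # retry transient failures automatically
        "retry_delay": timedelta(minutes=10),
        "sla": timedelta(hours=3),         # flag breaches of the data-freshness SLA
    },
):
    PythonOperator(task_id="load_daily_sales", python_callable=load_daily_sales)
```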

Data Warehouse Modelling & Optimisation

  • Review, optimise, and update the existing data warehouse schema as business requirements evolve.
  • Improve table partitioning, clustering, and indexing to boost performance (a short partitioning sketch follows this list).
  • Ensure the warehouse follows best practices in dimensional modelling, data vault, or hybrid approaches.
  • Refactor legacy tables or models to improve clarity, performance, and usability.
  • Collaborate with analysts to design semantic models that improve analytics and self-service adoption.
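
As a purely illustrative sketch of the partitioning work referenced above, assuming a PostgreSQL-compatible warehouse (RDS appears in the stack below); the table, columns, and connection string are invented for the example.

```python
# Hypothetical sketch: range-partitioned fact table with an index on a hot filter column.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS fact_sales (
    sale_id    BIGINT,
    sale_date  DATE NOT NULL,
    store_id   INT,
    amount     NUMERIC(12, 2)
) PARTITION BY RANGE (sale_date);

CREATE TABLE IF NOT EXISTS fact_sales_2025_12
    PARTITION OF fact_sales
    FOR VALUES FROM ('2025-12-01') TO ('2026-01-01');

-- Index each partition on the most common filter/join column.
CREATE INDEX IF NOT EXISTS idx_fact_sales_2025_12_store
    ON fact_sales_2025_12 (store_id);
"""

with psycopg2.connect("dbname=warehouse") as conn:  # connection string is a placeholder
    with conn.cursor() as cur:
        cur.execute(DDL)
```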

Performance Optimisation & Cost Management

  • Continuously optimise queries, transformations, storage, and processing schedules for efficiency.
  • Monitor AWS (or other cloud) costs and implement cost-saving strategies (e.g., lifecycle rules, storage tiering, compute optimisation, query tuning); one such lifecycle rule is sketched after this list.
  • Conduct periodic performance audits of the warehouse and ETL layers.
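
A minimal sketch of one cost-saving lever mentioned above (S3 lifecycle rules), using boto3; the bucket name, prefix, and retention periods are assumptions for illustration only.

```python
# Hypothetical cost-saving measure: tier raw landing data to cheaper storage, then expire it.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-data-lake",            # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-and-expire-raw-landing",
                "Filter": {"Prefix": "raw/landing/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```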

Data Quality, Governance & Documentation

  • Implement and maintain validation rules, automated quality checks, and alerting (a minimal example follows this list).
  • Work with the data governance team to improve data lineage, metadata management, access controls, and dataset documentation.
  • Maintain structured documentation for pipelines, data flows, and schemas.
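
For illustration, a minimal sketch of an automated validation rule of the kind described above, written with pandas; the file path, column names, and thresholds are placeholders.

```python
# Hypothetical data-quality check: uniqueness and completeness rules on a daily extract.
import pandas as pd


def validate_daily_extract(path: str) -> list[str]:
    df = pd.read_csv(path)
    issues = []
    if df["order_id"].duplicated().any():    # uniqueness rule (placeholder column)
        issues.append("duplicate order_id values")
    if df["amount"].isna().mean() > 0.01:    # completeness rule: at most 1% nulls allowed
        issues.append("amount column exceeds the null threshold")
    return issues


problems = validate_daily_extract("daily_orders.csv")  # placeholder path
if problems:
    # In practice this would trigger an alert (e.g., a CloudWatch alarm or Slack message).
    raise ValueError(f"Data quality checks failed: {problems}")
```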

ETL Enhancements & New Data Integrations

  • Integrate new data sources into the existing pipelines.
  • Refactor and modernise ETLs as business requirements evolve.
  • Develop reusable, modular components following best practices.

Self-Service Platform (Metabase) Administration

  • Manage Metabase backend performance: resource usage, query tuning, cache configuration, and scaling.
  • Ensure stable connections to the warehouse and optimise slow dashboards.
  • Support analysts and business users in building efficient reports.

External Donor System Integrations

  • Maintain and optimize pipelines that push data to external donor systems as part of grant reporting obligations.
  • Ensure outgoing data meets required formats, completeness, and SLA timelines (an illustrative submission sketch follows this list).
  • Troubleshoot sync issues and collaborate with external technical teams.
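
Purely as an illustration of the outgoing-data checks described above: a sketch that validates an extract against a donor's required columns before submitting it. The endpoint, token, column names, and file format are invented assumptions, not a real donor API.

```python
# Hypothetical grant-reporting push: check required columns, then submit the extract.
import pandas as pd
import requests

REQUIRED_COLUMNS = ["stove_serial", "sale_date", "region"]  # assumed donor schema


def push_donor_report(csv_path: str, endpoint: str, token: str) -> None:
    df = pd.read_csv(csv_path)
    missing = [col for col in REQUIRED_COLUMNS if col not in df.columns]
    if missing:
        raise ValueError(f"Extract is missing required columns: {missing}")
    response = requests.post(
        endpoint,                                            # placeholder donor endpoint
        headers={"Authorization": f"Bearer {token}"},
        files={"report": ("report.csv", df.to_csv(index=False), "text/csv")},
        timeout=60,
    )
    response.raise_for_status()                              # surface sync failures early
```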

AI & Advanced Analytics Support

  • Prepare and maintain training datasets, feature pipelines, and model-serving data flows.
  • Collaborate with Data Scientists to transform raw and semi-structured data into AI-ready formats.
  • Maintain data pipelines that support experimental and production AI workloads (a small feature-preparation sketch follows this list).
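
As a small, hypothetical illustration of the feature-pipeline work described above: aggregating raw transactions into a model-ready table with pandas. Column names and paths are assumptions, not Burn's actual datasets.

```python
# Hypothetical feature-preparation step: roll raw transactions up to customer-month features.
import pandas as pd

raw = pd.read_parquet("raw/transactions.parquet")            # placeholder source

features = (
    raw.assign(sale_month=pd.to_datetime(raw["sale_date"]).dt.strftime("%Y-%m"))
       .groupby(["customer_id", "sale_month"], as_index=False)
       .agg(total_amount=("amount", "sum"), purchase_count=("amount", "count"))
)
features.to_parquet("features/customer_monthly.parquet", index=False)  # placeholder sink
```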

Stakeholder Collaboration

  • Work closely with analysts, scientists, product teams, and business units to understand data needs.
  • Guide data access, modelling, and best practices.

Success Measures:

  • Pipeline Reliability: 95%+ reduction in avoidable pipeline failures within the first 3–6 months.
  • Performance Gains: ≥20% improvement in warehouse and Metabase query performance.
  • Cost Efficiency: Achieve measurable cloud cost reduction through storage, compute, and ETL optimisation.
  • Data Quality: Automated validation in place for all critical datasets, with a significant reduction in recurring data-quality issues.
  • Donor Integrations: 100% on-time and error-free data submissions to external donor systems.
  • Documentation: Complete and updated documentation for major ETLs, data models, and integrations.
  • Model Readiness: AI/ML pipelines and datasets consistently meet quality and performance standards for analytics initiatives.

Skills and Experience

Technical Skills

  • Bachelor’s degree in Computer Science, Information Systems, Engineering, or a related field.
  • 4–6 years of hands-on data engineering experience (maintenance + optimisation heavy roles preferred).
  • Strong SQL skills and experience with warehouse modelling (star schema, dimensions/facts, normalisation/denormalisation).
  • Experience optimising warehouse performance via partitioning, indexing, clustering, and storage tuning.
  • Proficiency with ETL/ELT tools (Airflow, AWS Glue).
  • Deep understanding of AWS cloud services (S3, Glue, Lambda, IAM, CloudWatch, RDS).
  • Experience maintaining data integrations: APIs, batch exports, and external partner systems.
  • Experience administering or maintaining BI/self-service platforms (Metabase is a strong plus).
  • Familiarity with data governance: cataloguing, lineage, access control, quality frameworks.
  • Familiarity with monitoring tools (Grafana, Datadog, Prometheus).
  • Experience supporting AI/ML data operations (feature stores, data prep, inference datasets).

Soft Skills

  • Strong troubleshooting and root-cause-analysis ability.
  • Effective communicator who collaborates well with technical and non-technical teams.
  • Detail-oriented with a strong documentation mindset.
  • Proactive and highly accountable.

 

Work Hours: 8

Experience in Months: 48

Level of Education: Bachelor's degree

Job application procedure

Interested and qualified? Click here to apply

 


Job Info
Job Category: Engineering jobs in Kenya
Job Type: Full-time
Deadline of this Job: Saturday, December 20 2025
Duty Station: Nairobi, Kenya
Posted: 11-12-2025
No of Jobs: 1