Senior Data Engineer job at Roamtech Solutions

Vacancy title:
Senior Data Engineer

[Type: FULL_TIME, Industry: Information Technology, Category: Science & Engineering, Computer & IT]

Jobs at:
Roamtech Solutions

Deadline of this Job:
Saturday, January 31 2026

Duty Station:
Nairobi Area | Nairobi

Summary
Date Posted: Tuesday, January 20 2026, Base Salary: Not Disclosed


JOB DETAILS:

About the Role:

We are seeking a skilled and experienced Data Engineer to join our team. In this role, you will be responsible for designing, building, and maintaining robust data pipelines and infrastructure to support our business intelligence and data science initiatives. You will work closely with data scientists, analysts, and business stakeholders to understand their data needs and deliver reliable, scalable, and efficient data solutions.

Key Measures of Performance:

The performance of the role holder will be assessed based on the following achievements:

  • Critical data pipelines are robust, observable, and cost-optimized.
  • Stakeholders trust the data and can make decisions quickly with minimal manual intervention.
  • Junior engineers are mentored effectively, raising the technical bar of the whole team.
  • The data platform evolves to handle growing scale and complexity without sacrificing performance.

Key Responsibilities:

  • Architect & Lead: Design and implement scalable data architectures, including data lakes, warehouses, and streaming solutions.
  • End-to-End Ownership: Own pipelines from ingestion to serving layers, ensuring reliability, observability, and cost-efficiency.
  • Data Modeling: Define and maintain robust data models to support analytics, machine learning, and operational reporting.
  • Mentorship & Leadership: Guide junior and mid-level engineers through code reviews, design discussions, and best practices.
  • Performance Optimization: Identify bottlenecks, tune pipelines, and improve query performance at scale.
  • Governance & Security: Implement and enforce data quality checks, access control, and compliance with privacy regulations (GDPR, CCPA).
  • Collaboration: Work closely with product, engineering, and business stakeholders to translate requirements into scalable data solutions.
  • Innovation: Evaluate and introduce new technologies and tools to improve the data platform’s reliability and developer experience.
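To make the "Governance & Security" responsibility concrete, here is a minimal, hypothetical sketch of a pipeline-level data quality gate of the kind this role would own. The field names, rules, and thresholds are illustrative assumptions, not taken from any specific Roamtech system.

```python
# Hypothetical data quality gate: run named rules over a batch of rows
# and collect per-rule failure counts before data is loaded downstream.
from dataclasses import dataclass, field

@dataclass
class QualityReport:
    total: int = 0
    failures: dict = field(default_factory=dict)  # rule name -> failing row count

    @property
    def passed(self) -> bool:
        # The batch passes only if no rule recorded a failure.
        return not self.failures

def check_batch(rows, rules):
    """Run each named rule (a predicate on one row) over every row."""
    report = QualityReport(total=len(rows))
    for name, predicate in rules.items():
        bad = sum(1 for row in rows if not predicate(row))
        if bad:
            report.failures[name] = bad
    return report

# Illustrative rules: a non-null key and a sane numeric range.
rules = {
    "user_id_present": lambda r: r.get("user_id") is not None,
    "amount_non_negative": lambda r: r.get("amount", 0) >= 0,
}

batch = [
    {"user_id": 1, "amount": 10.0},
    {"user_id": None, "amount": 5.0},
    {"user_id": 2, "amount": -3.0},
]
report = check_batch(batch, rules)
# report.failures == {"user_id_present": 1, "amount_non_negative": 1}
```

In a real pipeline such a gate would typically run as a task in the orchestrator (for example Airflow or Dagster, as listed in the requirements), failing the run or quarantining the batch when `report.passed` is false.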

Requirements

  • Experience: 5+ years in data engineering or backend engineering, with a track record of leading large-scale data projects.
  • Programming Expertise: Advanced skills in Python (or Scala/Java) and strong proficiency in SQL.
  • Big Data Frameworks: Deep experience with Spark, Flink, Beam, or similar distributed data processing engines.
  • ETL/ELT Orchestration: Expertise with Airflow, dbt, Dagster, or similar tools.
  • Cloud Platforms: Strong experience with at least one major cloud provider (AWS, GCP, Azure) — including data services like Snowflake, BigQuery, Redshift, Databricks.
  • Data Modeling: Skilled at designing data schemas (OLTP, OLAP, dimensional models).
  • Streaming Data: Experience with Kafka, Kinesis, or Pub/Sub for real-time pipelines.
  • DevOps Mindset: Familiarity with CI/CD, infrastructure-as-code (Terraform, CloudFormation), and monitoring/alerting.
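As a small illustration of the "Data Modeling" requirement, the sketch below builds a toy star schema (one dimension table plus one fact table) and runs the kind of rollup query an OLAP workload would issue. It uses Python's built-in sqlite3 module so it is self-contained; all table, column, and product names are hypothetical examples.

```python
# Toy star schema: dim_product (descriptive attributes, surrogate key)
# joined to fact_sales (numeric measures, foreign keys).
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension table: one row per product, keyed by a surrogate key.
cur.execute("""
    CREATE TABLE dim_product (
        product_key INTEGER PRIMARY KEY,
        name TEXT,
        category TEXT
    )
""")
# Fact table: one row per sale, with measures and a dimension key.
cur.execute("""
    CREATE TABLE fact_sales (
        sale_id INTEGER PRIMARY KEY,
        product_key INTEGER REFERENCES dim_product(product_key),
        quantity INTEGER,
        revenue REAL
    )
""")
cur.executemany("INSERT INTO dim_product VALUES (?, ?, ?)", [
    (1, "SIM bundle", "telco"),
    (2, "Data top-up", "telco"),
    (3, "USB modem", "hardware"),
])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)", [
    (101, 1, 2, 20.0),
    (102, 2, 1, 5.0),
    (103, 3, 1, 35.0),
    (104, 1, 3, 30.0),
])

# Typical OLAP rollup: revenue by category, fact joined to dimension.
rows = cur.execute("""
    SELECT d.category, SUM(f.revenue)
    FROM fact_sales f
    JOIN dim_product d USING (product_key)
    GROUP BY d.category
    ORDER BY d.category
""").fetchall()
# rows == [("hardware", 35.0), ("telco", 55.0)]
```

The same fact/dimension split is what a candidate would design at scale in a warehouse such as BigQuery, Snowflake, or Redshift; the surrogate-key dimension keeps descriptive attributes out of the (much larger) fact table.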

 

Work Hours: 8

Experience in Months: 60

Level of Education: Bachelor's degree

Job application procedure

Application Link: Click Here to Apply Now

 


Job Info
Job Category: Computer/ IT jobs in Kenya
Job Type: Full-time
Deadline of this Job: Saturday, January 31 2026
Duty Station: Nairobi Area | Nairobi
Posted: 20-01-2026
No of Jobs: 1
Start Publishing: 20-01-2026