Senior Data Engineer job at Strathmore University

Vacancy title:
Senior Data Engineer

[Type: FULL_TIME, Industry: Education and Training, Category: Computer & IT, Science & Engineering, Education]

Jobs at:
Strathmore University

Deadline of this Job:
Friday, February 27, 2026

Duty Station:
Nairobi | Nairobi

Summary
Date Posted: Thursday, February 19, 2026. Base Salary: Not Disclosed


JOB DETAILS:

Strathmore University is a Chartered University located in Nairobi, Kenya. It was the first multiracial and multi-religious educational institution in English-speaking Eastern Africa and, in 2004, became the first institute of higher learning in East and Central Africa to be ISO certified. Our mission is to provide all-round quality education in an atmo...

Responsibilities or duties

Data Pipeline Design and Implementation

  • Design, implement, and maintain robust data ingestion and processing pipelines for heterogeneous data sources, including soil, weather, agronomic, geospatial, and related contextual datasets.
  • Develop scalable ETL/ELT workflows to transform raw data into structured, validated, and analytics-ready formats.
  • Ensure pipelines support both batch and, where required, near-real-time data processing.
  • Implement data versioning and lineage tracking to support reproducibility and auditability.
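The ingestion-and-validation work described above can be sketched in Python. This is a minimal illustration only: the record schema, field names, and sample values are assumptions for the example, not details from the posting.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical raw sensor rows as a pipeline might receive them; the field
# names (station, ts, soil_ph) are illustrative, not from the posting.
RAW_ROWS = [
    {"station": "KE-001", "ts": "2026-02-19T06:00:00+00:00", "soil_ph": "6.4"},
    {"station": "KE-002", "ts": "2026-02-19T06:00:00+00:00", "soil_ph": ""},    # missing value
    {"station": "KE-003", "ts": "not-a-date", "soil_ph": "5.9"},                # bad timestamp
]

@dataclass
class SoilReading:
    """A typed, analytics-ready record produced by the transform step."""
    station: str
    ts: datetime
    soil_ph: float

def transform(rows):
    """Validate raw rows and emit typed records.

    Rejected rows are returned separately so they can be logged and audited
    rather than silently dropped.
    """
    clean, rejected = [], []
    for row in rows:
        try:
            reading = SoilReading(
                station=row["station"],
                ts=datetime.fromisoformat(row["ts"]),
                soil_ph=float(row["soil_ph"]),
            )
        except (KeyError, ValueError):
            rejected.append(row)
            continue
        clean.append(reading)
    return clean, rejected

clean, rejected = transform(RAW_ROWS)
```

Keeping rejected rows alongside the clean output, rather than discarding them, is one simple way a pipeline can support the auditability requirement mentioned above.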

Cloud-Based Data Infrastructure

  • Design and manage cloud-native data architectures, including data lakes, data warehouses, and analytical storage solutions.
  • Optimize data storage and processing for performance, cost efficiency, and scalability.
  • Support deployment of data pipelines across development, testing, and pilot environments.
  • Collaborate with platform teams to ensure infrastructure aligns with DPI principles and interoperability standards.

Data Quality, Governance, and Reliability

  • Implement automated data quality checks, validation rules, and monitoring to ensure accuracy, completeness, and consistency.
  • Support enforcement of data governance requirements, including access controls, permissions, and audit logging.
  • Work with policy and governance partners to ensure technical implementations align with data protection and consent frameworks.
  • Proactively identify and remediate data reliability risks or bottlenecks.
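Automated quality checks of the kind listed above can be as simple as a set of rule functions that each return a list of issues. The field names and thresholds below are illustrative assumptions, not requirements from the posting.

```python
# Minimal rule-based data quality checks: each rule scans the rows and
# returns human-readable issue strings; a run passes only if no rule fires.

def check_completeness(rows, required=("station", "rainfall_mm")):
    """Flag rows where a required field is missing or empty."""
    return [
        f"row {i}: missing {field}"
        for i, row in enumerate(rows)
        for field in required
        if row.get(field) in (None, "")
    ]

def check_range(rows, field="rainfall_mm", lo=0.0, hi=500.0):
    """Flag rows where a numeric field falls outside a plausible range."""
    return [
        f"row {i}: {field}={row[field]} outside [{lo}, {hi}]"
        for i, row in enumerate(rows)
        if row.get(field) not in (None, "") and not (lo <= float(row[field]) <= hi)
    ]

def run_checks(rows):
    issues = check_completeness(rows) + check_range(rows)
    return {"passed": not issues, "issues": issues}

report = run_checks([
    {"station": "KE-001", "rainfall_mm": 12.5},
    {"station": "KE-002", "rainfall_mm": -3.0},   # out of range
    {"station": "", "rainfall_mm": 7.1},          # missing station
])
```

In practice such checks would run automatically after each pipeline stage, with the issue list feeding the monitoring and audit logging described above.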

Enablement of AI and LLM-Based Systems

  • Prepare and serve data in formats optimized for AI and LLM-based advisory systems, including retrieval-augmented generation (RAG) pipelines and structured knowledge services.
  • Support model evaluation, benchmarking, and experimentation workflows.
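One common data-preparation step for a RAG pipeline is splitting source documents into overlapping chunks with retrieval metadata attached. The sketch below assumes a simple word-window strategy; the chunk sizes, document ID, and sample text are illustrative choices, not details from the posting.

```python
# Split advisory text into overlapping word-window chunks, tagging each
# chunk with its source document so retrieved passages stay traceable.

def chunk_document(doc_id, text, chunk_words=40, overlap=10):
    words = text.split()
    step = chunk_words - overlap
    chunks = []
    for start in range(0, max(len(words) - overlap, 1), step):
        chunks.append({
            "doc_id": doc_id,
            "chunk_id": f"{doc_id}-{len(chunks)}",
            "text": " ".join(words[start:start + chunk_words]),
        })
    return chunks

# A 100-word stand-in for an agronomy advisory document.
sample = " ".join(f"word{i}" for i in range(100))
chunks = chunk_document("agronomy-guide-01", sample, chunk_words=40, overlap=10)
```

The overlap ensures that a sentence falling on a chunk boundary still appears whole in at least one chunk, which tends to help retrieval quality.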

MLOps Support and Operational Readiness

  • Contribute to MLOps workflows by supporting data versioning, pipeline automation, and integration with model deployment and evaluation processes.
  • Implement monitoring and logging for data pipelines to support observability and issue diagnosis.
  • Support reproducible experimentation through consistent data environments and pipeline automation.
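A lightweight way to support the data-versioning and reproducibility goals above is to record a deterministic content hash of each dataset a model run consumes. The record shape below is an illustrative assumption.

```python
import hashlib
import json

def dataset_version(rows):
    """Return a short, stable hash of a dataset, independent of row order.

    Serializing each row with sorted keys and sorting the serialized rows
    makes the hash deterministic, so two runs over the same data can be
    proven to have used identical inputs.
    """
    canonical = sorted(json.dumps(r, sort_keys=True) for r in rows)
    digest = hashlib.sha256("\n".join(canonical).encode("utf-8"))
    return digest.hexdigest()[:12]

a = [{"station": "KE-001", "soil_ph": 6.4}, {"station": "KE-002", "soil_ph": 5.9}]
b = list(reversed(a))                             # same data, different order
c = a + [{"station": "KE-003", "soil_ph": 7.0}]   # genuinely different data
```

Logging this version string alongside each experiment is one simple way to tie model evaluation results back to the exact data they were produced from.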

Documentation, Collaboration, and Delivery

  • Produce clear technical documentation covering data architectures, pipeline logic, and operational procedures.

Qualifications or requirements (e.g., education, skills)

Minimum Academic Qualifications:

  • Bachelor’s degree in Computer Science, Software Engineering, Information Systems, or a closely related technical field

Experience needed

Experience:

  • Applicants should possess at least 5 years of professional experience in data engineering, with demonstrated responsibility for designing and operating complex data pipelines and data platforms.
  • Strong experience designing and implementing data ingestion, transformation, and processing pipelines (ETL/ELT) for large and heterogeneous datasets.
  • Proficiency in Python and SQL, and experience with data processing frameworks and tools commonly used in modern data engineering environments.

Work Hours: 8

Experience in Months: 60

Level of Education: bachelor degree

Job application procedure
Interested in applying for this job? Click here to submit your application now.

Are you qualified for this position and interested in working with us? We would like to hear from you. Kindly send us a copy of your updated resume and letter of application (ONLY), quoting “Senior Data Engineer” in the subject line, by 27th February 2026.


Job Info
Job Category: Computer/ IT jobs in Kenya
Job Type: Full-time
Deadline of this Job: Friday, February 27, 2026
Duty Station: Nairobi | Nairobi
Posted: 19-02-2026
No of Jobs: 1
Start Publishing: 19-02-2026