Data Engineer job at International Rescue Committee

Vacancy title:
Data Engineer

[Type: FULL_TIME, Industry: Nonprofit and NGO, Category: Computer & IT, Science & Engineering, Social Services & Nonprofit]

Jobs at:
International Rescue Committee

Deadline of this Job:
Thursday, May 21, 2026

Duty Station:
This Job is Remote

Summary
Date Posted: Thursday, May 7, 2026; Base Salary: Not Disclosed

JOB DETAILS:

The International Rescue Committee is a global humanitarian aid, relief and development nongovernmental organization.

Job Overview / Summary

The Data Engineer will support the implementation, configuration, and maintenance of data pipelines and transformation workflows across IRC's data environment. This role plays an active part in building and operating ELT processes, dbt models, and cloud-based data platforms including Azure Databricks and the IRC Lakehouse.

The successful candidate has foundational to growing expertise in dbt, SQL, and Databricks, and is eager to take on increasing ownership of data engineering deliverables. Working closely with senior engineers and the Data Architect, this role contributes to data modeling, pipeline development, and platform reliability in a collaborative and fast-paced environment.

Major Responsibilities

Pipeline Engineering & Orchestration

  • Build and maintain ELT data pipelines using Databricks Workflows and Azure Data Factory for batch and scheduled processing from internal and external sources (a minimal ingestion sketch follows this list).
  • Support the ingestion of data from key systems (e.g., D365 CRM, ServiceNow) into the Lakehouse.
  • Monitor pipeline execution, identify failures, and troubleshoot issues in collaboration with senior engineers.
  • Contribute to pipeline documentation and help maintain runbooks and process standards.
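
To make the first bullet concrete, a minimal batch Bronze-load notebook is sketched below. This is an illustrative sketch only: the JSON source format, landing path, and table name are assumptions, not IRC's actual configuration.

```python
# Hypothetical Bronze-layer batch ingestion for a ServiceNow extract.
# The landing path and target table name are invented for illustration.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # pre-created in Databricks notebooks

raw = (
    spark.read.format("json")
    .option("multiLine", "true")
    .load("/Volumes/landing/servicenow/incidents/")  # hypothetical landing path
)

# Stamp lineage columns, then append to Bronze; deduplication happens downstream.
bronze = (
    raw.withColumn("_ingested_at", F.current_timestamp())
       .withColumn("_source_system", F.lit("servicenow"))
)
bronze.write.format("delta").mode("append").saveAsTable("bronze.servicenow_incidents")
```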

dbt Development

  • Develop and maintain dbt models across staging, intermediate, and mart layers under the guidance of senior team members (a minimal model is sketched after this list).
  • Write dbt tests and contribute to source freshness checks to support data quality.
  • Learn and apply dbt best practices including modular design, ref dependencies, and incremental model patterns.
  • Work with analysts and business teams to translate data requirements into dbt models.
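
A minimal dbt model is sketched below for orientation. dbt models are most often plain SQL; to keep the examples here in one language, this sketch uses dbt's Python-model support (available in the dbt-databricks adapter), and every model and column name is invented.

```python
# models/marts/fct_grant_payments.py -- a hypothetical dbt Python model.
def model(dbt, session):
    dbt.config(materialized="table")

    # dbt.ref() resolves an upstream model; on Databricks it returns a PySpark DataFrame.
    payments = dbt.ref("stg_payments")

    # Mart-layer shaping: one row per grant with its total paid amount.
    return (
        payments.groupBy("grant_id")
        .agg({"payment_amount": "sum"})
        .withColumnRenamed("sum(payment_amount)", "total_paid")
    )
```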

SQL & Data Transformation

  • Write intermediate to advanced SQL for data extraction, transformation, and validation tasks.
  • Apply SQL techniques including joins, CTEs, window functions, and aggregations to support reporting and analytics needs (see the example query after this list).
  • Assist in query optimization and performance troubleshooting within Databricks SQL environments.
  • Support data model maintenance and help accommodate new source fields or schema changes.
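
The flavor of SQL in view is illustrated below: a CTE feeding a window function, run from a notebook. The silver.case_events table and its columns are hypothetical.

```python
# Deduplicate to the latest event per case using ROW_NUMBER() over a window.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

latest_status = spark.sql("""
    WITH ranked AS (
        SELECT
            case_id,
            status,
            updated_at,
            ROW_NUMBER() OVER (PARTITION BY case_id ORDER BY updated_at DESC) AS rn
        FROM silver.case_events
    )
    SELECT case_id, status, updated_at
    FROM ranked
    WHERE rn = 1  -- keep only the most recent event per case
""")
latest_status.show(5)
```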

Databricks & Cloud Platform

  • Develop and maintain Databricks notebooks and jobs for data transformation workloads.
  • Gain hands-on experience with Delta Lake concepts and PySpark for data processing.
  • Follow Lakehouse design patterns (Bronze/Silver/Gold) as defined by the Data Architect (a Bronze-to-Silver promotion is sketched after this list).
  • Support cloud resource management including basic cluster configuration and job scheduling.
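
A Bronze-to-Silver promotion under the medallion pattern might look like the sketch below; the table names and merge key are assumptions. A MERGE keeps the Silver table idempotent, so re-running the job after a failure does not create duplicates.

```python
# Promote the latest Bronze batch into Silver with a Delta MERGE (upsert).
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

# Deduplicate on the business key before merging.
updates = spark.table("bronze.servicenow_incidents").dropDuplicates(["incident_id"])

silver = DeltaTable.forName(spark, "silver.servicenow_incidents")
(
    silver.alias("t")
    .merge(updates.alias("s"), "t.incident_id = s.incident_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```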

Collaboration & Learning

  • Actively collaborate with the Data Team on pipeline design, troubleshooting, and delivery.
  • Participate in code reviews and incorporate feedback to improve code quality.
  • Support documentation of processes, standards, and data flows.
  • Engage with Finance, FP&A, and other business teams to understand data needs and assist in solution delivery.

Key Working Relationships

  • Data Team (Data Architect, Lead Data Engineer, Data Analysts)
  • Business / Departmental Priority Setters (Finance, FP&A, Operations)
  • Enterprise Systems Owners (D365 CRM, Integra)

Travel Requirements

Special projects may require travel for on-the-ground learning in support of data strategy deliverables.

Minimum Requirements

Experience

  • 3–6 years of hands-on experience in data engineering, analytics engineering, or a related technical role.
  • Demonstrated experience building or maintaining data pipelines in a professional setting.
  • Exposure to cloud-based data platforms, preferably Azure (Databricks, Data Factory, or Synapse).

Technical Skills — Required

dbt:

  • Working knowledge of dbt model development including staging and mart layers.
  • Familiarity with dbt tests, documentation, and source configurations.
  • Eagerness to deepen dbt skills, including incremental models and CI/CD integration (one CI approach is sketched after this list).
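
One possible shape of that CI/CD integration is sketched below, assuming dbt-core 1.5+ (which exposes a programmatic runner) and a state-comparison selector; it is an illustration, not IRC's actual pipeline.

```python
# A CI step that builds and tests only models changed relative to production.
# Assumes dbt-core 1.5+ and that prod-artifacts/ (hypothetical path) holds the
# production manifest needed by the state:modified+ selector.
from dbt.cli.main import dbtRunner

runner = dbtRunner()
result = runner.invoke(
    ["build", "--select", "state:modified+", "--state", "prod-artifacts/"]
)

if not result.success:
    raise SystemExit("dbt build failed; blocking deployment")
```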

Databricks:

  • Hands-on experience with Databricks notebooks and basic job/workflow setup (a sketch follows this list).
  • Familiarity with Delta Lake concepts and Databricks SQL.
  • Exposure to PySpark for data transformation tasks.
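
A hypothetical example of basic job/workflow setup through the Databricks SDK for Python follows; the job name, notebook paths, and cluster id are invented.

```python
# Define a two-task job: "transform" runs only after "ingest" succeeds.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

w = WorkspaceClient()  # picks up authentication from the environment

created = w.jobs.create(
    name="nightly-elt",  # hypothetical job name
    tasks=[
        jobs.Task(
            task_key="ingest",
            notebook_task=jobs.NotebookTask(notebook_path="/Repos/data/ingest"),
            existing_cluster_id="1234-567890-abcde123",  # hypothetical cluster
        ),
        jobs.Task(
            task_key="transform",
            depends_on=[jobs.TaskDependency(task_key="ingest")],
            notebook_task=jobs.NotebookTask(notebook_path="/Repos/data/transform"),
            existing_cluster_id="1234-567890-abcde123",
        ),
    ],
)
print(f"Created job {created.job_id}")
```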

SQL:

  • Solid SQL skills: joins, CTEs, window functions, aggregations, and basic performance awareness.
  • Experience writing SQL for data transformation and validation in a cloud data warehouse.

Pipeline Engineering:

  • Experience building or supporting ELT pipelines with monitoring and basic data validation (a minimal check is sketched after this list).
  • Familiarity with pipeline orchestration tools such as Azure Data Factory or Databricks Workflows.
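
A minimal example of the "basic data validation" meant here might look like the following; the table and key column are hypothetical.

```python
# Post-load check: fail the run if the batch is empty or the business key has nulls.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.table("bronze.d365_accounts")  # hypothetical Bronze table

row_count = df.count()
null_keys = df.filter(F.col("account_id").isNull()).count()

if row_count == 0 or null_keys > 0:
    raise ValueError(f"Validation failed: rows={row_count}, null keys={null_keys}")
```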

Python:

  • Basic to intermediate Python skills for data processing, scripting, and automation.
  • Familiarity with PySpark is a plus.

Data Modeling:

  • Understanding of star/snowflake schemas and fact & dimension table concepts (see the DDL sketch after this list).
  • Exposure to Lakehouse or medallion architecture (Bronze/Silver/Gold) is a plus.
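
For reference, a star-schema sketch expressed as Databricks SQL DDL appears below; both tables are purely illustrative, and a real design would be agreed with the Data Architect.

```python
# One dimension and one fact table with a surrogate key, created from a notebook.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("""
    CREATE TABLE IF NOT EXISTS gold.dim_donor (
        donor_key  BIGINT GENERATED ALWAYS AS IDENTITY,  -- surrogate key
        donor_id   STRING,                               -- natural/business key
        donor_name STRING,
        country    STRING
    ) USING DELTA
""")

spark.sql("""
    CREATE TABLE IF NOT EXISTS gold.fct_donation (
        donor_key  BIGINT,           -- references gold.dim_donor.donor_key
        date_key   INT,              -- references a date dimension
        amount_usd DECIMAL(18, 2)
    ) USING DELTA
""")
```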

Soft Skills

  • Curious and eager to learn with a proactive approach to problem-solving.
  • Good communication skills — able to collaborate across technical and non-technical teams.
  • Attention to detail and a strong sense of data quality.
  • Comfortable working in a collaborative, fast-paced, and remote team environment.

Preferred Additional Requirements

  • Experience with Databricks or Azure Synapse Analytics.
  • Familiarity with D365 CRM or similar data structures.
  • Exposure to Git-based workflows and CI/CD practices for data pipeline deployments.
  • Experience in a humanitarian, nonprofit, or international development context.

Work Hours: 8

Experience in Months: 36

Level of Education: Bachelor's degree

Job application procedure

Application Link: Click Here to Apply Now

Job Info
Job Category: Computer/IT jobs in Kenya
Job Type: Full-time
Deadline of this Job: Thursday, May 21, 2026
Duty Station: This Job is Remote
Posted: 07-05-2026
No of Jobs: 1
Start Publishing: 07-05-2026
