Big Data Engineer job at Safaricom Kenya

Vacancy title:
Big Data Engineer

[Type: FULL_TIME, Industry: Telecommunications, Category: Computer & IT]

Jobs at:
Safaricom Kenya

Deadline of this Job:
Friday, June 27 2025

Duty Station:
Nairobi, Kenya

Summary
Date Posted: Friday, June 20 2025, Base Salary: Not Disclosed


JOB DETAILS:

Key Responsibilities

  • Data Pipeline Development: Design, implement, and maintain robust data pipelines for ingesting, processing, and transforming large volumes of structured and unstructured data. Develop ETL (Extract, Transform, Load) processes to cleanse, enrich, and aggregate data for analysis. 
  • Data Storage Solutions: Architect and optimize data storage solutions, including distributed file systems, NoSQL databases, and data warehouses. Implement data partitioning, indexing, and compression techniques to maximize storage efficiency and performance. 
  • Big Data Technologies: Utilize and optimize big data technologies and frameworks such as Apache Hadoop, Apache Spark, Apache Flink, and Apache Kafka. Develop and maintain data processing jobs, queries, and analytics workflows using distributed computing frameworks and query languages. 
  • Scalability and Performance: Optimize data processing workflows for scalability, performance, and reliability. Implement parallel processing, distributed computing, and caching mechanisms to handle large-scale data processing workloads. 
  • Monitoring and Optimization: Develop monitoring and alerting solutions to track the health, performance, and availability of big data systems. Implement automated scaling, load balancing, and resource management mechanisms to optimize system utilization and performance. 
  • Data Quality and Governance: Ensure data quality and integrity throughout the data lifecycle. Implement data validation, cleansing, and enrichment processes to maintain high-quality data. Ensure compliance with data governance policies and regulatory standards. 
  • Collaboration and Documentation: Collaborate with cross-functional teams to understand data requirements and business objectives. Document data pipelines, system architecture, and best practices. Provide training and support to stakeholders on data engineering tools and technologies.
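The first and sixth responsibilities above — ETL pipelines with cleansing, enrichment, and data-quality validation — can be sketched in miniature. This is an illustrative example in plain Python with made-up field names (`msisdn`, `region`, `mb_used`); a production pipeline at this scale would run the same logic on Spark, Flink, or a similar distributed framework.

```python
# Minimal ETL sketch: extract raw records, cleanse/validate, aggregate.
# Data and field names are hypothetical, chosen only for illustration.
from collections import defaultdict

raw_records = [  # "extract": pretend these arrived in a landing zone
    {"msisdn": "254700000001", "region": "Nairobi", "mb_used": "120.5"},
    {"msisdn": "254700000002", "region": "nairobi", "mb_used": "80"},
    {"msisdn": "", "region": "Mombasa", "mb_used": "55"},             # invalid: no id
    {"msisdn": "254700000003", "region": "Mombasa", "mb_used": "bad"},  # invalid usage
]

def transform(records):
    """Cleanse: drop invalid rows, normalise region names, cast usage to float."""
    clean = []
    for r in records:
        if not r["msisdn"]:
            continue  # data-quality rule: subscriber id is mandatory
        try:
            mb = float(r["mb_used"])
        except ValueError:
            continue  # data-quality rule: usage must be numeric
        clean.append({"msisdn": r["msisdn"],
                      "region": r["region"].title(),
                      "mb_used": mb})
    return clean

def load(records):
    """Aggregate usage per region (the 'load' step writes a summary)."""
    totals = defaultdict(float)
    for r in records:
        totals[r["region"]] += r["mb_used"]
    return dict(totals)

summary = load(transform(raw_records))
print(summary)  # {'Nairobi': 200.5}
```

The two invalid rows are rejected by explicit validation rules rather than silently coerced, which is the same principle the governance bullet describes, just at toy scale.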

Qualifications

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 
  • Proven professional SQL capabilities.
  • Solid understanding of big data technologies, distributed systems, and database management principles.
  • Proficiency in programming languages such as Python, Java, or Scala.
  • Experience with big data frameworks such as Apache Hadoop, Apache Spark, or Apache Flink.
  • Knowledge of database systems such as SQL databases, NoSQL databases, and distributed file systems.
  • Familiarity with cloud platforms such as AWS, GCP, or Azure.
  • Strong problem-solving skills and attention to detail.
  • Excellent communication and collaboration skills.
  • Ability to work independently and manage multiple priorities in a fast-paced environment.
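The SQL proficiency asked for above is easy to self-check with Python's built-in `sqlite3` module. The table and column names here are illustrative only, not Safaricom's schema; the query shows the aggregate-filter-sort pattern that interview SQL questions commonly probe.

```python
# Small self-contained SQL exercise using an in-memory SQLite database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE usage (msisdn TEXT, region TEXT, mb_used REAL)")
conn.executemany(
    "INSERT INTO usage VALUES (?, ?, ?)",
    [
        ("254700000001", "Nairobi", 120.5),
        ("254700000002", "Nairobi", 80.0),
        ("254700000003", "Mombasa", 55.0),
    ],
)

# Aggregate usage per region and keep only regions above a threshold.
rows = conn.execute(
    """
    SELECT region, SUM(mb_used) AS total_mb
    FROM usage
    GROUP BY region
    HAVING total_mb > 100
    ORDER BY total_mb DESC
    """
).fetchall()
print(rows)  # [('Nairobi', 200.5)]
```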

 

Work Hours: 8

Experience: No Requirements

Level of Education: Bachelor's degree

Job application procedure

Interested and qualified? Click here to apply

 


Job Info
Job Category: Computer/IT jobs in Kenya
Job Type: Full-time
Deadline of this Job: Friday, June 27 2025
Duty Station: Nairobi, Kenya
Posted: 20-06-2025
No of Jobs: 1
Start Publishing: 20-06-2025
