You and Your Career:
If you have strong numerical and analytical skills, a solid foundation in data management, programming, and data processing, and the communication and collaboration skills to work effectively with cross-functional teams, we are interested in hearing from you.
We are a learning organization and provide growth opportunities from the start. We pride ourselves on giving you the freedom, resources, and guidance to chart a fulfilling career!
Reporting and Supervision:
This position will report to the Technical Advisor, Lead Developer.
Primary Duties and Responsibilities:
Technical Expertise:
Implement and maintain data storage solutions, such as data lakes, data warehouses, or NoSQL databases
Design, implement, and continuously expand DWH data pipelines that perform extraction, transformation, and loading (ETL) activities
Tune and optimize the performance of DWH databases, queries, and views
Implement data integration and data synchronization processes between different systems and platforms
Support data modelling and data mart development
Synthesize requirements, identify and analyze data sources for mapping, and perform data quality tests to support source-system mapping exercises
Define and arbitrate transformation rules for each source system, and develop ETLs to populate the respective data marts
Perform data profiling and interrogation queries against source systems, and verify data marts after ETL processes
Work closely with the Senior Data Analyst to develop data products for broader stakeholder groups
Work with QA to resolve data quality, inconsistency, and integrity issues
Work with the DevOps team to implement and maintain data infrastructure, including cloud-based services, servers, and storage systems
Business Development:
Contribute to active proposals by helping shape strategy and preparing technical approach and capability statements
Key Competencies Required:
Advanced SQL skills and experience with relational databases and database design
Experience working with cloud data warehouse solutions (e.g., Snowflake, Redshift, BigQuery, Azure Synapse)
Experience working with data ingestion tools such as Apache Kafka and Logstash
Working knowledge of Cloud-based solutions (e.g., AWS, Azure, GCP)
Strong proficiency in object-oriented languages such as Python, Java, C++, or Scala
Strong proficiency in scripting languages such as Bash
Experience in data modelling, data governance, and data security principles
Excellent problem-solving, communication, and organizational skills
Experience working in Agile teams
Professional Expertise/Competencies Preferred:
Proven experience as a Data Engineer or similar role, with a strong understanding of data management and data engineering concepts
Strong proficiency in data pipeline and workflow management tools (e.g., Talend, Azure Data Factory, Pentaho, Airflow, Azkaban)
Experience building and deploying machine learning models in production
Technical expertise with data models, data mining, and segmentation techniques
Experience with data manipulation and transformation libraries and frameworks (e.g., Pandas, Spark)
Experience with data storage solutions like relational databases (e.g., MySQL, PostgreSQL), data warehouses (e.g., Redshift, BigQuery), and NoSQL databases (e.g., MongoDB, Cassandra)
A data engineering certification (e.g., IBM Certified Data Engineer) is a plus
Method of Application:
Use the link(s) below to apply on the company website.