About the Role:
BURN is seeking a skilled and experienced Data Engineer to design and deploy an end-to-end data pipeline system that centralizes data from various sources and enables data professionals to query it easily. The system should also allow users to pull up all relevant information for a product or customer through a consumer-friendly user interface.
Duties and Responsibilities:
Develop, test and maintain data pipelines using Python on AWS Cloud, working with both SQL and NoSQL databases.
Implement and manage workflow orchestration and data integration tools such as Prefect, Airflow and Airbyte.
Design and optimize data models to support efficient data storage and retrieval.
Collaborate with cross-functional teams to understand data requirements and deliver ETL processes.
Integrate and manage APIs and databases, including PostgreSQL, MySQL and Microsoft SQL Server.
Ensure data quality and integrity through rigorous testing and validation.
Monitor and troubleshoot data pipeline issues to ensure smooth operation.
Contribute to the continuous improvement of data engineering practices and processes.
Conduct process improvements and automation for mundane and repetitive tasks.
Skills and Experience:
Bachelor’s degree in Computer Science, Information Systems, or a related field.
At least 5 years of experience in designing and deploying end-to-end data pipelines.
Strong knowledge of SQL, ETL tools, and data warehousing concepts.
Experience working with APIs, batch exports, and SQL queries to extract data from various sources.
Experience with cloud computing platforms such as AWS, Azure, or Google Cloud Platform.
Strong data analysis and problem-solving skills.
Experience working with Microsoft Dynamics, open-source data systems like KOBO, and call center platforms would be an added advantage.
Excellent communication skills and ability to work in a team environment.
Apply via:
burnmanufacturing.applytojob.com