Major Responsibilities:
Develop applications and data flows in Databricks using Python, SQL, and PySpark.
Build and maintain data pipelines using DBT and Databricks, ensuring efficient and scalable data processes.
Design and implement real-time and batch data processing systems to support analytics, reporting, and business needs.
Monitor and analyze data pipelines for performance, reliability, cost, and efficiency.
Proactively address any issues or bottlenecks to ensure smooth operations.
Identify opportunities for process improvement, including redesigning pipelines and optimizing data delivery mechanisms.
Manage and maintain Azure cloud services and monitor alerts to ensure system availability and performance.
Minimum Requirements:
Demonstrated ability to write SQL scripts (required).
Some experience with Python (strong plus).
Exposure to Databricks (significant advantage).
Experience working in a cloud environment (strong plus).
Experience with DBT Core or DBT Cloud (major plus).
Ability to quickly learn existing and new data structures.
Excellent interpersonal and communication skills (both written and verbal).
Ability to work independently and collaboratively within a team.
Apply via:
careers.rescue.org