Role Duties
Software Design and Development
Perform technical aspects of big data development for assigned applications, including design, prototype development, and coding assignments.
Build analytics software, following consistent development practices, to deliver data to end users for exploration, advanced analytics, and visualizations for day-to-day business reporting.
Plan and deliver highly scalable distributed big data systems using open-source technologies including, but not limited to, Apache NiFi, Kafka, HDFS, HBase, Cassandra, Hive, and Postgres.
Code, test, and document scripts for managing data pipelines and the big data cluster.
Testing, Troubleshooting, and Third-Line Support
Receive escalated, technically complex, mission-critical issues and maintain ownership of each issue until it is fully resolved.
Troubleshoot incidents hands-on: formulate theories, test hypotheses, and narrow down possibilities to find the root cause.
Develop tools and scripts to automate troubleshooting activities.
Drive further improvements in the big data platform, tooling, and processes.
Upgrade products and services and apply patches as necessary.
Maintain and restore backups of the ETL and report repositories, as well as other system binaries and source code.
Research and Development
Build tools for yourself and others to increase efficiency and make difficult or repetitive tasks quick and easy.
Develop machine learning algorithms and libraries for problem-solving and AI operations.
Research and provide input on design approaches, performance, and base-functionality improvements for various software applications.
Requirements
Degree in Computer Science or a related subject.
Highly proficient in more than one modern programming language, e.g. Java, Python, or Scala.
Experience with relational data stores as well as one or more NoSQL data stores.
Demonstrated proficiency with distributed computing algorithms and ETL systems.
Experience with stream-processing platforms such as Kafka.
Good knowledge of and experience with big data technologies such as HDFS, Hive, and Spark Streaming.
Working knowledge of and experience with SQL scripting.
Good to Have
Experience in deploying and managing Machine Learning models at scale.
Hands-on implementation and delivery of Apache Spark workloads in an Agile working environment.
If you feel you are up to the challenge and possess the necessary qualifications and experience, please send your resume, including your cell phone contact, your experience, and why you are the most suitable candidate for the role, clearly quoting the job title and job reference, to the address below.
info@techsavanna.technology
Apply via: info@techsavanna.tech