DevOps Engineer, Leeds
As a DevOps Engineer you will work closely with the other teams within Data and Infrastructure. You will be part of the team responsible for making step changes to the Data team's data platforms, which support our real-time and offline reporting tools.
You'll work as part of the Data Operations team to manage and build our products and services, such as our build, repository, job scheduling and monitoring platforms, and create tooling that allows development teams to run their own environments.
Availability, stability and performance are key requirements of our systems, so you'll need to be able to troubleshoot issues at the application, cluster and operating system levels.
How you'll do it:
You'll have strong Linux administration skills and experience with Docker, AWS and CI tools such as Jenkins. We expect you to have been involved in non-functional aspects such as backup, monitoring, performance, capacity and troubleshooting.
You'll have coding skills that can be applied to our Chef and Python codebases, working within a formal SDLC backed by Git version control.
Experience with Hadoop and data platforms is desirable but not essential; we're happy to train you in administering the Hadoop toolset (e.g. HDFS, Hive, HBase, YARN) and data streaming technologies such as Kafka.
How we work:
We're an autonomous, agile team delivering products and services from a wide roadmap. We use a combination of Scrum and Kanban to deliver and support our products.
You'll be part of the team and have input into our planning, prioritisation and retrospectives. You won't just be allocated tickets from a queue; you'll be at our daily stand-ups, helping the team work through our backlog.