A Data Engineer is required to work on some of the most advanced IoT data problems and join a highly innovative and profitable niche data and IoT company. This fantastic technology company is recruiting a Data Engineer - AWS, Redshift, Spark to help build a distributed, highly scalable solution.
You'll develop a large, high-throughput distributed data platform and engineer data pipelines and data storage solutions using AWS tooling, including S3, Redshift, and EC2, and big data databases like Cassandra. You'll need to have worked on messaging (MQTT, Kinesis, Kafka, etc.) and data streaming (Kinesis, Spark, Storm, etc.), as well as alongside data scientists coding algorithms into production.
You'll flourish in this company if you:
- Have a data engineering background
- Have knowledge of AWS (S3, Redshift, EC2) and big data databases like Cassandra and DynamoDB
- Can build scalable distributed computing platforms
- Understand Lambda architecture
- Have experience with data pipelines and ETL/ELT
- Have built messaging/streaming solutions with Kinesis, Kafka, Spark, Storm, MQTT, etc.
- Understand database technologies and their use cases - relational (RDBMS, e.g. PostgreSQL), graph (Neo4j), and NoSQL like MongoDB
You'll need to communicate with different audiences, both technical and non-technical, as well as work in an Agile data engineering and software environment. This company has built a market-leading machine learning product, and you'll have the freedom to leverage the latest big data technologies that are right for each individual client's data needs.
This company has selected Agile Recruitment to help them find the very best data engineering talent. Please apply online now and I will get in touch to discuss the role of Data Engineer - AWS, Redshift, Spark.