
Data Engineer

Putney

Salary: Competitive + bonus • Ref: 6662



We are looking for a Data Engineer to analyse data, provide insights and champion the shift to a more data-driven Nando’s. This will enable us to solve some of today’s complex hospitality problems by creating innovative solutions using an open technology toolset whilst driving best practices.

You’ll be based in South West London (we also have a remote working model), working in our Data Engineering team to help us develop our data lake and data warehouse. The work is varied, from creating new products and features to improving existing functionality. We strive to build a better customer experience and a better codebase. This is real growth at pace.

You will be using AWS, GCP, Apache Airflow, Python and SQL to interpret data from a wide range of sources and deliver it to all parts of the business, such as finance, operations and marketing.

We currently have 100 billion rows of data, and that number is growing.


Responsibilities:

Our Data Engineers have the opportunity to work across a wide range of domains, and to embed a data-driven attitude for solving challenges.

They ensure the "voice of our data" is heard and understood as part of our decision-making process.


As a Nando’s Data Engineer you will...

  • Be responsible for Nando’s big data ecosystem
  • Design, monitor and maintain the data lake and data warehouse infrastructure
  • Monitor the quality of the data (completeness, validity, accuracy, consistency, integrity, timeliness) so that users can rely on this information
  • Ingest raw data
  • Transform raw data into purified data sets
  • Facilitate any data requests made
  • Monitor data acquisition

Whilst...

  • Writing great code - We understand that code is read more than it’s written, is better off tested, and that maintainability is a must
  • Helping shape what we build - You’ll be working closely with product owners, designers and other engineers to design and refine our work. We work as a team and your input is key
  • Owning delivery - We’re obsessed with shipping value; you’ll own work beyond just a pull request. You’ll care about bugs, scalability, uptime and other non-functional requirements
  • Growing together - You’ll review others’ work and happily seek feedback on yours to ensure we build a better codebase and sharpen each other’s skills


Skills required:

You will be able to demonstrate the following:

  • A skilled Data Engineer - You are experienced in building complex ETL / ELT pipelines using Apache Airflow and Python (see the sketch after this list)
  • Solid SQL experience - You can not only interact with a database but also design new schemas
  • Experience of the AWS stack - You have worked with the AWS toolset and have some experience with other cloud providers
  • Big-data experience - You have knowledge of big-data-related tools and techniques (data lakes, distributed processing platforms, streaming technologies). Experience with Presto, HDFS, C++ and Intel SIMD optimisations is a plus
  • Happy in the Clouds - Our Cloud Native platform is hosted in AWS, Google Cloud and Azure. You’ll be comfortable working with a system that supports users from around the world, at scale
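
For a flavour of this kind of pipeline work, here is a minimal sketch of an Airflow DAG that moves raw data into a purified data set. Every name, schedule and callable below is an illustrative assumption, not one of our actual pipelines, and it assumes Airflow 2.x and Python 3:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Illustrative placeholder callables: a real pipeline would read from a
    # source system, validate and clean the data, then write to the warehouse.
    def extract():
        print("pull raw data from a source system into the data lake")

    def transform():
        print("clean and validate the raw data into a purified data set")

    def load():
        print("publish the purified data set to the data warehouse")

    with DAG(
        dag_id="raw_to_purified_example",  # hypothetical DAG name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",        # Airflow 2.x scheduling argument
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_load = PythonOperator(task_id="load", python_callable=load)

        # Tasks run in sequence: extract, then transform, then load
        t_extract >> t_transform >> t_load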


You will also display:

  • Analytical problem-solving approach, coupled with effective communication skills - Before jumping into action you make sure you understand the issue at hand
  • Bias for action, and a sense of ownership - You see a problem, you fix the problem. You get buy-in for your solutions and keep tickets moving. We’re always looking for ways to ship at pace
  • Well-principled - Especially in the craft of software engineering: you will deeply understand modularity, testability, extensibility, scalability and so on
  • Growth mindset and a great teamwork attitude - A willingness to use your skills and experience to mentor less-experienced engineers. A desire to learn from others and make yourself better every day
  • Intellectual curiosity, industry knowledge and agile outlook - You need to be excited about working in a fast-changing environment. Products, tools, frameworks and processes change; we evolve and take the best bits with us. The teams drive the evolution
  • Distinct interest in machine learning - All of the data will lead us to create more models that affect how we operate; we are happy to explore this together

Knowledge, experience & qualifications:

  • Python (NumPy, pandas)
  • Apache Airflow
  • SQL
  • Spark
  • Hadoop
  • Hive
  • Presto
  • AWS
  • EMR