Principal Data Engineer

Where: Brighton, UK / Hybrid
Key area: Data Team
Salary: £80k/yr - £100k/yr
Attendance: Full-Time

About the role

We are looking for an experienced Principal Data Engineer with a passion for designing and building high-quality data pipelines to power cutting-edge genAI products.

The successful candidate will join our multi-skilled product teams and participate in our agile process.

You will also be supported by our dynamic data team and benefit from talking data with our other data engineers, data scientists, and ML and analytics engineers.

The person we’re looking for

The ideal candidate will have a strong background in real-time data engineering and cloud technologies, and be able to apply that expertise to business problems to generate value.
We currently work in an AWS, Snowflake, dbt, Looker, Python, Kinesis and Airflow stack and are building out our real-time data streaming capabilities. You should be very comfortable with these tools (or similar).

As an individual contributor, you will be confident taking a project brief and drawing out requirements, designing architectural plans (including tests) and building consensus around them, and comfortable exercising organisation-wide influence to lead execution and bring others along with you.

You’ll be responsible for

– Acting as our primary data engineering subject matter expert.

– Shaping our data engineering roadmap.

– Designing and building consensus around data architecture plans.

– Collaborating with cross-functional teams to develop and implement robust, scalable solutions.

– Translating use cases, pain points and success criteria into technical requirements.

– Designing, building, maintaining and upgrading data pipelines and self-service tooling to provide clean, efficient results.

– Writing automated tests to validate requirements.

– Promoting data governance through documentation, observability and controls.

– Using version control and performing code reviews.

– Promoting the adoption of tools and best practices across the team.

Skills & Experience

Essential skills:

– Significant commercial experience in a senior Data Engineering role.

– Great Python skills.

– Strong experience with tracking technologies such as Snowplow/Rudderstack/Segment.

– Previous experience with real-time data streaming platforms such as Kafka/Confluent/Google Cloud Pub/Sub.

– Experience handling and validating real-time data.

– Experience with stream processing frameworks such as Faust/Flink/Kafka Streams or similar.

– Proficient with ELT pipelines and the full data lifecycle including managing data pipelines over time and evolving them to meet new business requirements.

– Experience building organisation-wide influence to make a positive impact at scale.

– A track record of earning recognised authority within an organisation, and perhaps externally as well.

– Experience mentoring team members and colleagues.

Diversity is incredibly important to us. Research shows that people from marginalised groups are less likely to apply for a job unless they meet every requirement. However, the requirements above are a guide and, if you feel this role could be for you but you don't meet every criterion, please do apply. We'd love to hear from you.

Benefits we offer

– Employee Assistance Programme (confidential counselling)

– Medicash healthcare scheme (reclaim costs for dental, physiotherapy, osteopathy and optical care)

– easitBrighton travel scheme (discounted public transport options)

– Cycle to work scheme

– Life Insurance scheme

– 25 days annual leave + bank holidays + your birthday off (rising to 28 after 3 consecutive years with the business & 30 after 5 years)

– Contributory pension scheme

Application deadline: 31/10/2024