As a Cloud Data Platform Engineer you are a specialist in building services that bring a variety of internal and external data sources together. You build the tools and services that engineers, analysts, and data scientists need to ingest large volumes of data, perform reliable ETL on them, analyse them reproducibly, and run predictions in production, whatever their nature, in both batch and streaming. Others rely on your infrastructure and processes 24 hours a day, 7 days a week to meet exacting business demands. As a Cloud Data Platform Engineer you ensure the reliability and stability the rest of the organisation needs to meet and exceed broad business objectives. You understand how to balance the costs of running cloud processes against the benefits of reliable speed of delivery. We expect a mindset of continually improving production systems. You understand what makes a good Service Level Indicator and how to set and measure appropriate Service Level Objectives. You understand how to alert people to real problems without fatiguing them, and how to react appropriately depending on the criticality of an event. We need creative development solutions to hard operational problems. Much of our focus is on building infrastructure and eliminating toil. We live by our post-mortems and iteratively improve the lives of the engineers we serve.
How will you make a difference?
- Co-develop and co-operate the cloud-based data platform, from inception and design through deployment, operation, and refinement
- Support internal customers (engineers) in designing cost-efficient data flows, help them improve their monitoring capabilities, and make sure best practices are followed
- Maintain infrastructure through code and ensure overall system health
- Oversee the automatic scaling of evolving systems
- Push for changes to our communities of practice that improve reliability and velocity of business insight and response
- Practice best-effort incident response and blameless post-mortems
Your key strengths
Your must-have knowledge and experience:
- Two or more years implementing highly available, scalable, and self-healing systems on big data platforms (Cloudera, Hortonworks, MapR, AWS, Google Cloud)
- Experience with at least one cloud provider, with a preference for AWS
- Experience developing in at least one of the following in the context of data engineering: Scala, Python, Go, Java, Shell scripting
- Understanding of modern development and operations processes and methodologies
- DevOps experience (setting up CI/CD pipelines, provisioning systems, ...)
- You either hold an AWS certification or are willing to achieve one within 6 months (minimum AWS Certified Associate)
Nice-to-have knowledge:
- Experience building highly automated infrastructures
- Experience implementing and managing continuous delivery systems and methodologies
- Expertise in designing, analyzing, and troubleshooting large-scale distributed systems
- Experience defining and deploying monitoring, metrics, and logging systems, and automated security controls
- Ability to debug and optimize code and automate routine tasks
- Systematic problem-solving approach, coupled with strong communication skills
- Driven to deliver value and provide excellent customer service
You will be working at a leading media company bustling with fun colleagues.
Like you, they are passionate about digital and offline media and are continuously learning new things from each other and from the best in the trade. You are setting out on a journey where every week will be different from the last, and where you are stimulated daily to take things to the next level. As a cherry on top, we offer you an attractive salary package with corresponding benefits (company car, group and health insurance, 32 days of paid leave, a renowned company restaurant, ...) or a freelance agreement with a long-term commitment.