Position Overview
We are looking for a talented Senior Software Engineer to join our Analytics Data Pipeline team. The Data Pipeline team is responsible for developing DV’s Analytic Big Data platform, which processes billions of events per day and delivers the data to DV’s users. The ideal candidate will have experience designing data processing systems, gathering business requirements from Product Managers, and translating them into technical requirements and designs. Using advanced software development methodologies, this role is responsible for designing and implementing logical and physical data models and for ensuring quality through manual and programmatic testing.
What you'll do
Design and develop systems that process billions of records a day with high performance
Deliver insightful data to clients, partners, and internal users by implementing robust, scalable data delivery applications and APIs
Design and develop streaming data transformation pipelines with Apache Beam and Kafka
Design and develop pipelines for data transformation, materialization, and validation with Snowflake
Build end-to-end testing infrastructure to automatically validate data points added by product development teams
Design and build REST API applications to interface with external applications
Analyze data to test the correctness and effectiveness of Big Data processes
Continuously optimize system performance to decrease the time it takes to deliver data to clients and internal stakeholders
Who you are
A passionate self-starter and top performer, eager to deliver a state-of-the-art Big Data platform
A minimum of 5 years of hands-on experience designing, planning, implementing, and operating data-intensive processes
A minimum of 2 years of hands-on development experience in Java, Python, or other object-oriented languages
Advanced SQL query writing and data analysis skills, including experience designing calculated metrics across multiple facts and dimensions
Extensive experience with cloud API services and development
Experience working with Kubernetes and public cloud platforms such as AWS and GCP
Experience in one or more of the following: Beam, Spark, Kafka Streams, or similar technologies
Experience in one or more of the following database engines: Oracle, SQL Server, PostgreSQL
Experience with Redshift, Snowflake, BigQuery, Druid, Vertica, or other columnar data stores
A team player with excellent communication skills
The successful candidate’s starting salary will be determined based on a number of non-discriminatory factors, including qualifications for the role, level, skills, experience, location, and balancing internal equity relative to peers at DV. The estimated salary range for this role, based on the qualifications set forth in the job description, is between $69,000 and $137,000. This role will also be eligible for bonus/commission (as applicable), equity, and benefits. The range above reflects the expectations laid out in the job description; however, we are often open to a wide variety of profiles and recognize that the person we hire may be more or less experienced than this job description as posted.
Not-so-fun fact: Research shows that while men apply to jobs when they meet an average of 60% of job criteria, women and other marginalized groups tend to only apply when they check every box. So if you think you have what it takes but you’re not sure that you check every box, apply anyway!