Lead Data

2 weeks ago


Americas, Perú · Brilliant · Full-time

**About Brilliant**

Brilliant is a tight-knit team of scientists, educators, engineers, designers, storytellers, and illustrators who are redesigning education at scale.

We believe that math and science are fascinating and beautiful, but that the tools widely used to teach them are dry and ineffective. Brilliant makes learning STEM fun, through problem solving and interactive explorations — from foundational math and science to cutting-edge computer science and professional topics.

Brilliant helps over 12 million students, professionals, and lifelong learners around the world cultivate problem solving skills, build intuition, and master concepts rather than memorize them.

**Application Note**

We're always excited to welcome and encourage anyone from non-traditional backgrounds to apply, so please don't sweat the requirements list too much. Please note, however, that a cover letter detailing your interest in Brilliant and why you feel you'd be a great fit for this position is required to be considered.

**The Role**

In this high-autonomy position, you'll direct the development and maintenance of our data infrastructure and data developer experiences for the benefit of the Data and Engineering teams. You’ll collaborate closely with a team of 4 data scientists and 5 engineering managers across an engineering team of 40. Your work will be among the most highly leveraged in the company.

You’ll build and extend modern data infrastructure built around dbt (including dbt Cloud) and Snowflake, with supporting tools like Fivetran, Census, and Amplitude.

**Responsibilities**:

- Design efficient and scalable data pipeline architecture for collecting data across a variety of sources, enabling different functions to leverage transformed data for analytics and operations.
- Improve existing data modeling and deployment practices, fostering best practices to make the team more efficient and improve data quality.
- Collaborate with engineers, product managers, and data scientists to understand data needs, overseeing end-to-end event instrumentation for new features, including naming conventions and properties.
- Drive data "operationalization" — ensuring that we're sending the right data to the right tools and services, on time and under budget (such as by managing tools like Census).
- Ensure consistent pipeline performance in terms of latency and error handling.
- Optimize the entire data stack — from data storage to transformation to analytical tooling — from a performance, cost, and scalability standpoint.
- Lead us into a future of convenient data governance by selecting an ideal CDP and supporting tools.

**Who are you?**:

- **Experienced**: You bring at least 5 years of software engineering experience, including at least 2 years of working directly with some part of the "modern" data stack (dbt Core & Cloud, Fivetran, Snowflake, or equivalents).
- **Empathy for both worlds**: You've worked closely enough with software engineering teams to understand their concerns, and have also walked in the shoes of a data scientist.
- **Technically proficient**: You possess advanced SQL skills and solid Python skills, and you've directly built or managed live systems that relied on third-party tools.
- **A builder**: You're enthusiastic about establishing the foundations of a data team and its tools from scratch.

**What might you tackle in the first 90 days?**:

- Audit our data infrastructure from top to bottom to proactively identify performance, scale, and complexity considerations.
- Audit our data stack (e.g. Snowflake, Fivetran, dbt, Census, Amplitude, Avo) to ensure conformity to best practices.
- Audit the existing ELT process for business-critical data models and recommend ways to improve data quality, integrity, and reliability.
- Review and extend data observability, monitoring, and alerting — with a deep empathy for how data issues could adversely affect the end user experience.
- Determine priority (and vendor/OSS selection) for data governance tooling.
- Determine priority and general implementation approach for supporting managed business metrics throughout dbt and related tools.

$180,000 - $220,000 a year
We use a systematic compensation framework. Salary scales are set each year for each job vertical, based on market data and company budget. Independently, managers level folks on their team, and those levels are mapped formulaically to our compensation scales.

Additionally, we always make First and Best Offers — there is no negotiation (for new hires or our existing teammates). This ensures that people are paid based on their expected contribution, rather than their negotiation skills.

**Our Engineering Team**

Our engineers are extraordinary programmers without big egos. We love to share knowledge and support each other. We work together as an interdependent team to accomplish a common goal, and we know how to get things done. We main