
Data Engineer

Available
Full-time
Remote
United States


Overview:

The National Society of Leadership and Success (NSLS) is the largest accredited leadership honor society in the United States, with over 800 chapters and more than 2 million members. The Data Engineer will join a team of 100+ purpose-driven staff members in a friendly, focused, fast-paced entrepreneurial environment.


The Data Engineer will lead the transformation of our analytics infrastructure. You'll play a critical role in migrating from a legacy Apache Hop and Redshift system to a modern Snowflake and dbt Cloud stack, establishing a single source of truth for our business stakeholders, and building maintainable pipelines that will serve as the foundation for NSLS's data platform for years to come.


This is a rare opportunity to join at an inflection point where you can make an outsized impact. You'll replace unnecessarily complex "spaghetti code" with clean, modular systems and implement best practices that will set the standard for future engineers. If you enjoy working in dbt, value readable and maintainable code, and want to build something new while leveraging modern tools, this role is for you.


Responsibilities: 

In Your First 6 Months

  • Contribute to the Snowflake migration: Work with the broader data team to deprecate our legacy Redshift and Apache Hop infrastructure by completing the migration to Snowflake and dbt Cloud
  • Build production-grade dbt pipelines: Develop and maintain SQL transformations in dbt that power analytics for business stakeholders across the organization
  • Establish data architecture: Refine and maintain our medallion architecture (bronze, silver, and gold layers) to create clear separation of concerns and a single source of truth
  • Collaborate with the data team: Partner with our AWS Engineer, Analytics Engineer, and Business Analyst in a sprint-based workflow with code reviews and regular standups
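For candidates less familiar with the pattern, the medallion layering mentioned above can be sketched in miniature. This is an illustrative example only, not our actual schema; the table and column names are hypothetical, and SQLite stands in for the warehouse:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Bronze: raw data landed as-is from a source system (note the string amounts).
con.execute("CREATE TABLE bronze_orders (id INTEGER, member TEXT, amount TEXT)")
con.executemany(
    "INSERT INTO bronze_orders VALUES (?, ?, ?)",
    [(1, "ada", "10.50"), (2, "ada", "4.25"), (3, "grace", "7.00")],
)

# Silver: cleaned and typed, one row per order.
con.execute("""
    CREATE VIEW silver_orders AS
    SELECT id, member, CAST(amount AS REAL) AS amount
    FROM bronze_orders
""")

# Gold: business-facing summary, the "single source of truth" for reporting.
con.execute("""
    CREATE VIEW gold_member_revenue AS
    SELECT member, SUM(amount) AS total_revenue
    FROM silver_orders
    GROUP BY member
""")

print(con.execute("SELECT * FROM gold_member_revenue ORDER BY member").fetchall())
```

In dbt terms, each layer would be its own model, with the gold views being what stakeholders actually query.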

Ongoing Responsibilities

  • Maintain dbt transformations (60-70% of role): Own approximately 15-20 core models and summary views that drive business reporting, ensuring they're performant, well-documented, and easy to maintain
  • Build new data pipelines: Ingest data from APIs and new sources as business needs evolve
  • Enable reverse ETL: Develop and manage batch processes to send transformed data to downstream services like HubSpot using tools like Hightouch
  • Monitor data quality: Implement testing and alerting to catch issues before they impact stakeholders
  • Contribute to technical standards: Help establish and maintain best practices for code quality, documentation, and data modeling
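As a rough illustration of the data-quality checks described above, here is a minimal sketch in plain Python against SQLite; in practice these would be dbt schema tests (`not_null`, `unique`) wired to alerting, and the table is hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE members (id INTEGER, email TEXT)")
con.executemany("INSERT INTO members VALUES (?, ?)",
                [(1, "a@nsls.org"), (2, "b@nsls.org"), (3, None)])

# not_null check: count rows where the column is missing.
null_emails = con.execute(
    "SELECT COUNT(*) FROM members WHERE email IS NULL").fetchone()[0]

# unique check: count ids that appear more than once.
dup_ids = con.execute(
    "SELECT COUNT(*) FROM (SELECT id FROM members GROUP BY id HAVING COUNT(*) > 1)"
).fetchone()[0]

# A nonzero count would raise an alert before stakeholders see bad numbers.
print(f"null emails: {null_emails}, duplicate ids: {dup_ids}")
```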

Qualifications:

  • 2-5 years of experience as a Data Engineer, Analytics Engineer, or similar role
  • Expert SQL skills: You write advanced, readable SQL (CTEs, window functions) and apply query optimization techniques
  • dbt proficiency: You've built and maintained production dbt projects and understand modeling best practices (this is critical for the role)
  • Python fundamentals: Comfortable working with dataframes, querying APIs, and writing scripts for data processing
  • Modern data stack familiarity: Experience with Snowflake, AWS, and orchestration tools like dbt Cloud or Airflow
  • Code craftsmanship: You write clean, modular, well-documented code that others can easily understand and maintain
  • AI-assisted development: Proficient with AI coding tools (Claude Code, GitHub Copilot, Cursor, or similar) to accelerate development, debug efficiently, and learn new technologies quickly
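The SQL style the qualifications above describe (CTEs combined with window functions) looks roughly like this. The example runs against SQLite purely for illustration, and the table and column names are hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE enrollments (chapter TEXT, member TEXT, joined INTEGER)")
con.executemany("INSERT INTO enrollments VALUES (?, ?, ?)", [
    ("alpha", "ada", 2021), ("alpha", "grace", 2022),
    ("beta", "alan", 2021), ("beta", "edsger", 2023), ("beta", "barbara", 2022),
])

# A CTE stages per-chapter counts; a window function then ranks chapters by size.
query = """
WITH chapter_counts AS (
    SELECT chapter, COUNT(*) AS members
    FROM enrollments
    GROUP BY chapter
)
SELECT chapter,
       members,
       RANK() OVER (ORDER BY members DESC) AS size_rank
FROM chapter_counts
ORDER BY size_rank
"""
print(con.execute(query).fetchall())
```

The same shape (staging CTEs feeding a final windowed select) is what keeps large dbt models readable.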

Nice to Have

  • Experience with reverse ETL tools (Hightouch, Census, etc.)
  • Familiarity with Fivetran or similar ingestion platforms
  • Background in batch processing and API integrations
  • Experience with Hex, PostHog, or similar analytics/CDP tools

Who You Are

  • Quality-focused: You care deeply about writing maintainable code and establishing good technical hygiene
  • Collaborative: You thrive in code reviews, give and receive feedback well, and enjoy working closely with a small team
  • Resourceful: You're comfortable leveraging AI and other tools to learn quickly and solve problems independently
  • AI-native developer: You understand how to effectively use AI assistants as force multipliers (whether generating boilerplate code, exploring documentation, debugging issues, or rapidly prototyping solutions) while maintaining code quality and understanding
  • Business-minded: You understand (or want to learn) how your technical work supports business objectives and can communicate with non-technical stakeholders when needed
  • Self-directed: You work well with autonomy in a sprint-based environment where you commit to deliverables and own your work

How We Work

  • Sprint-based workflow: Two-week sprints with sprint planning, standups twice per week, and regular retrospectives
  • Weekly 1:1s: Regular check-ins with the Head of Data for feedback, support, and career growth
  • Collaborative code review: Your work will be reviewed by senior engineers on the team to ensure quality and knowledge sharing
  • Work-life balance: Standard business hours (9-5 or similar), no on-call or off-hours expectations
  • AI-assisted development: We actively encourage the use of AI coding assistants like Claude Code to write better code faster, accelerate onboarding to new codebases, and increase overall engineering velocity

Tech Stack

  • Data Warehouse: Snowflake (migrating from Redshift)
  • Transformation: dbt Cloud
  • Orchestration: dbt Cloud (replacing Apache Hop)
  • Visualization: Hex
  • Customer Data Platform: PostHog
  • Infrastructure: AWS
  • Source Systems: HubSpot, Drupal, Symfony, Shopify (100+ tables, <1TB total)


The National Society of Leadership and Success is an equal opportunity employer committed to diversity, equality, and inclusion.


Visit nsls.org to learn more about our organization.