Data Engineer
ProducePay
At ProducePay, we believe in driving business strategies and decisions with evidence-based data rather than intuition, opinion, or assumption. We are looking for a Senior Data Engineer to join our fast-paced engineering team as a senior member who will design, implement, and optimize ProducePay's data platform (ELT/ETL) and modern data warehouse architecture, ensuring scalability, reliability, and security for mission-critical analytics. This role owns the entire data lifecycle, working closely with other engineers and functional teams to define requirements, integrate data from a variety of sources, and deploy high-quality data pipelines that support ProducePay's analytics needs. As a thought leader and mentor on the engineering team, you will establish best practices for data quality, pipeline development, and platform optimization while actively guiding junior team members.
Responsibilities:
- Design and implement robust data governance, quality, and observability frameworks to ensure accuracy across all data products.
- Own the architecture, modeling, and optimization of the data warehouse (e.g., dbt transformations, dimensional modeling) to support business intelligence and data science initiatives.
- Build, deploy, and monitor high-volume, reliable data pipelines using appropriate workflow orchestration tools (e.g., Apache Airflow, Dagster); see the sketch after this list.
- Review and analyze data sets to identify patterns or trends with business implications.
- Provide feedback on new and existing data models and databases.
- Consult with teams across the company on data management and integrity issues.
- Recommend changes to existing databases to improve performance or resolve issues.
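To give a concrete sense of the orchestration work described above, here is a minimal sketch of a daily ELT DAG, assuming Apache Airflow 2.4+ and its TaskFlow API. The DAG name, endpoint, and load logic are hypothetical placeholders for illustration, not ProducePay's actual pipelines.

```python
# Minimal daily ELT DAG sketch (assumes Apache Airflow 2.4+, TaskFlow API).
# The endpoint, DAG name, and load step are illustrative placeholders.
from datetime import datetime

import requests
from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False, tags=["elt"])
def daily_shipments_elt():
    @task
    def extract() -> list[dict]:
        # Hypothetical REST source; in practice this would be a partner or internal API.
        resp = requests.get("https://api.example.com/v1/shipments", timeout=30)
        resp.raise_for_status()
        return resp.json()

    @task
    def load(records: list[dict]) -> int:
        # In a real pipeline this would land raw records in a staging schema
        # (e.g., Snowflake or PostgreSQL) for downstream dbt transformations.
        print(f"Loaded {len(records)} raw records")
        return len(records)

    load(extract())


daily_shipments_elt()
```

Keeping extract and load as separate tasks mirrors the ELT pattern referenced in the warehouse responsibilities: raw data lands first, and modeling happens downstream in SQL/dbt.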
Requirements:
- 5+ years of professional experience in Data Engineering.
- At least a Bachelor’s Degree in Computer Science, Engineering, or a related quantitative field.
- Experience developing ELT pipelines using Python, REST, and SQL in a Linux environment (see the first sketch after this list).
- Passionate about data and the power of data.
- Willing to step outside your comfort zone and immediate responsibilities to drive results and desired outcomes.
- Committed to making data a key focus for overall company strategy, growth, and product development.
- Deep expertise in advanced SQL and data modeling concepts (e.g., Dimensional Modeling, 3NF).
- Hands-on production experience with Snowflake and PostgreSQL.
- Proficiency in Python or Scala, including experience with data processing libraries (e.g., Pandas, Spark).
- Production experience building and maintaining data services on AWS (e.g., S3, EC2, Lambda, Kinesis/MSK).
- Excellent communication and presentation skills.
- Highly organized with good time management skills.
- Strong attention to detail.
- Strong understanding of Infrastructure as Code (IaC) principles; experience with Pulumi or Terraform is highly preferred (see the IaC sketch after this list).
- Exceptional problem-solving ability.
- Ability to succeed in a collaborative work environment and work cross-functionally.
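The requirements above call out ELT development with Python, REST, and SQL. The following is a minimal sketch of that pattern, assuming a PostgreSQL target via psycopg2; the API endpoint, connection string, schema, and columns are hypothetical.

```python
# Sketch of a Python/REST/SQL ELT step (assumes psycopg2 and an existing
# "staging" schema in PostgreSQL). Endpoint, DSN, and columns are placeholders.
import requests
import psycopg2
from psycopg2.extras import execute_values

API_URL = "https://api.example.com/v1/prices"      # placeholder REST source
DSN = "dbname=analytics user=etl host=localhost"   # placeholder connection


def extract() -> list[dict]:
    resp = requests.get(API_URL, timeout=30)
    resp.raise_for_status()
    return resp.json()


def load(rows: list[dict]) -> None:
    # Land raw records in a staging table; transformations happen later in SQL/dbt.
    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        cur.execute(
            """
            CREATE TABLE IF NOT EXISTS staging.produce_prices (
                commodity   TEXT,
                price_usd   NUMERIC,
                observed_at TIMESTAMPTZ
            )
            """
        )
        execute_values(
            cur,
            "INSERT INTO staging.produce_prices (commodity, price_usd, observed_at) VALUES %s",
            [(r["commodity"], r["price_usd"], r["observed_at"]) for r in rows],
        )


if __name__ == "__main__":
    load(extract())
```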
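For the IaC requirement, here is a brief sketch using Pulumi's Python SDK with the AWS provider; the bucket name and tags are illustrative only, not an actual ProducePay resource.

```python
# IaC sketch (assumes Pulumi with the pulumi_aws provider installed).
# Resource names and tags are hypothetical.
import pulumi
import pulumi_aws as aws

# Raw landing zone for pipeline extracts.
raw_bucket = aws.s3.Bucket(
    "raw-landing-zone",
    tags={"team": "data-engineering", "layer": "raw"},
)

pulumi.export("raw_bucket_name", raw_bucket.id)
```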
Benefits:
- Health Care Plan (Medical, Dental & Vision)
- Retirement Plan (401k)
- Life Insurance (Basic, Voluntary & AD&D)
- Paid Time Off (Vacation, Sick & Public Holidays)
- Family Leave (Maternity, Paternity)
- Short Term & Long Term Disability
- Training & Development
- Work From Home
- Wellness Resources