Senior Data Engineer
Location: Draper, Utah (Fully In-Office)
Schedule: Monday–Friday, 9 AM – 5 PM
At this time, we are only considering local applicants who do not require visa sponsorship and can work directly as full-time employees.
Are you passionate about building scalable data solutions that drive business intelligence and decision-making? We're looking for a Senior Data Engineer to design, optimize, and maintain robust data pipelines and warehouse solutions. This role is ideal for a highly skilled data professional with a strong background in cloud-based data engineering, ETL/ELT development, and cross-functional collaboration.
What You'll Do
- Develop & Optimize Data Pipelines: Design and implement seamless data integration from Microsoft Azure, Salesforce, and external sources to ensure high availability and performance.
- Architect & Maintain Data Infrastructure: Design and manage scalable ETL/ELT processes, ensuring structured, normalized, and easily accessible data.
- Collaborate & Solve Challenges: Work closely with cross-functional teams to troubleshoot issues, refine data models, and enhance overall system performance.
- Mentor & Lead: Share best practices, mentor junior developers, and contribute to the continuous improvement of our data ecosystem.
What We’re Looking For
Required Qualifications:
- Bachelor’s degree in Computer Science, Data Engineering, or a related field (or equivalent experience).
- 5+ years of experience in data integration, development, or engineering, with a focus on data warehouse solutions and data transport platforms.
- Strong expertise in cloud-based data warehouse design (preferably in Microsoft Azure).
- Advanced knowledge of data modeling, normalization, and relational databases.
- Proficiency in Python for scripting and data processing.
- Extensive experience with ETL/ELT frameworks and tools.
- Hands-on experience with SQL and NoSQL databases (PostgreSQL, MySQL, SQL Server, MongoDB).
- Familiarity with data warehousing solutions (e.g., Snowflake, Redshift).
- Experience with big data frameworks (Hadoop, Apache Spark).
- Strong understanding of containerization and orchestration (Kubernetes).
- Proven ability to work in an Agile development environment.
- Knowledge of data governance and security frameworks (e.g., NIST CSF 2.0).
- Experience with CI/CD pipelines (GitHub Actions or similar).
- Strong problem-solving skills, attention to detail, and ability to work in a fast-paced environment.
Preferred Qualifications:
- Experience integrating Salesforce or CRM systems into data pipelines.
Additional Details
🚀 This is a Fully In-Office Position
This role requires you to work on-site at our Draper, Utah office. Remote work is not available. Applicants must currently reside in the U.S. and be legally authorized to work without sponsorship.
📢 Important: If you are not within commuting distance of Draper, UT, please do not apply.