Job Description
Job Summary: The Data Engineer will be responsible for expanding and optimizing the data architecture, including data pipelines, data flow, and data collection, for cross-functional teams. You will handle the analysis, design, development, testing, and delivery of data architectures, data applications, ETL processes, and reporting services. The ideal candidate will have a strong technical background, with experience in data engineering technologies such as Hadoop, Spark, Python, SQL, and Java. This role requires an individual who thrives in a collaborative environment, has strong problem-solving skills, and is driven to deliver high-quality data solutions.
Key Responsibilities: Data Architecture & Development: Design and develop enterprise-wide data architecture solutions using Hadoop, Spark, Scala, Python, SQL, and other relevant data technologies.
ETL/ELT Processes: Handle the end-to-end development of ETL/ELT processes, from discovery and design to coding and production deployment.
Collaboration: Work closely with business, product, and technical stakeholders to gather requirements and design solutions that address them.
Issue Resolution: Address assigned tickets and deliver code in a timely manner. Participate in on-call rotation to research and resolve production issues.
Code Quality & Best Practices: Review code, approve pull requests, and drive the implementation of coding standards and best practices.
Problem Solving: Demonstrate strong analytical skills, identifying data patterns and troubleshooting complex issues.
Risk & Dependency Management: Identify dependencies and raise risks early to ensure timely project delivery.
Continuous Improvement: Contribute to the continuous improvement of data engineering processes and solutions.
Required Qualifications: Education: Undergraduate degree required; Master's degree or other advanced degree preferred.
Experience: 3+ years of experience in data engineering or as a member of an IT team.
3+ years of experience architecting enterprise-wide solutions, including the use of technologies such as Hadoop, Spring Boot/Spring Cloud, REST APIs, and SQL.
3+ years of professional experience working with Python or Core Java, AWS, Microservices, and data engineering technologies.
Technical Skills: Strong experience in data modeling, working with complex data structures, ensuring data quality, and managing the data lifecycle.
Advanced proficiency with SQL, including query authoring and working with relational databases.
Experience in building, optimizing, and managing big data pipelines and data architectures.
Communication Skills: Excellent written and verbal communication skills with the ability to collaborate effectively with cross-functional teams.
Preferred Qualifications: Additional Skills: Familiarity with Spring Boot and Spring Cloud for building scalable applications.
Experience with Microservices architectures and cloud technologies (AWS).
Understanding of data processing frameworks such as Spark and Hadoop, and languages such as Scala, for big data engineering solutions.
A passion for continuous improvement and a business-owner mentality, with a drive to transform business processes through technology.
Certifications: Certifications in AWS, big data technologies, or related areas are a plus but not required.
Key Competencies: Strong problem-solving and analytical capabilities, with the ability to identify data patterns and troubleshoot complex issues.
Ability to work in a fast-paced, dynamic environment, managing multiple priorities and tasks.
Collaborative team player with a proactive approach to communication and stakeholder engagement.
Detail-oriented with a strong focus on delivering high-quality, error-free solutions.