What are the responsibilities and job description for the Data Integration Engineer position at Moore?
As the Data Integration Engineer, you will play a pivotal role in building and optimizing global data integration processes. You will work closely with our data ingestion platform to write and maintain Scala code to feed ETL (Extract, Transform, Load) pipelines, ensure data consistency, and develop scalable solutions for integrating diverse datasets from multiple sources. Your work will contribute to ensuring high data quality, integrity, and accessibility for the entire enterprise across multiple use cases.
Moore is a data-driven constituent experience management (CXM) company achieving accelerated growth for clients through integrated supporter experiences across all platforms, channels and devices. We are an innovation-led company that is the largest marketing, data and fundraising company in North America serving the purpose-driven industry with clients across education, association, political and commercial sectors.
Check out www.WeAreMoore.com for more information.
Your Impact:
- Design and implement data integration solutions by writing and optimizing Scala code to support scalable data ingestion and transformation pipelines.
- Develop automated processes that verify the consistency of incoming data across multiple sources and ensure alignment with universal data standards.
- Implement automated scripts and processes in Scala to cleanse and standardize incoming data to maintain consistency across datasets.
- Continuously enhance the performance and scalability of data ingestion processes to handle growing data volume and new data types.
- Ensure accurate data transformation from raw input to structured formats suitable for downstream applications.
- Collaborate with cross-functional team members such as data architects, data scientists, data analysts, and operations teams to integrate new data sources and build efficient data conversion processes.
- Create comprehensive documentation of data integration solutions, best practices, and coding standards for both technical and non-technical audiences.
- Participate in Agile workflows involving large development efforts.
- Design and develop automated checks to ensure the completeness, accuracy, and validity of ingested data, identifying anomalies or inconsistencies as early as possible.
- Establish key performance indicators (KPIs) for data quality and implement processes to track and report these metrics regularly.
- Create robust validation frameworks that automatically detect and flag issues such as missing, duplicated, or incorrect data before it enters downstream systems, and implement fallback error-handling mechanisms to maintain data integrity.
- Maintain a log of data quality issues, how they were addressed, and lessons learned to help improve future integration processes and ensure transparency across the organization.
- Conduct thorough testing and quality checks on new data sources before they are integrated into the data pipeline, ensuring alignment with internal data quality requirements.
- Collaborate with other teams to investigate and resolve data discrepancies between source systems and integrated data, ensuring any issues are quickly resolved.
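The validation responsibilities above (detecting missing or duplicated records before they enter downstream systems) might be sketched in Scala roughly as follows. This is an illustrative sketch only; the `DonorRecord` fields and `RecordValidator` name are hypothetical and do not reflect Moore's actual schema or codebase:

```scala
// Hypothetical example of a pre-ingestion validation check.
case class DonorRecord(id: String, email: String, amount: Double)

object RecordValidator {
  // A record with a blank id or email is treated as incomplete.
  def isComplete(r: DonorRecord): Boolean =
    r.id.nonEmpty && r.email.nonEmpty

  // Split a batch into (valid, flagged). A record is flagged if it is
  // incomplete or shares an id with an earlier record in the batch.
  def validate(batch: Seq[DonorRecord]): (Seq[DonorRecord], Seq[DonorRecord]) = {
    val seen = scala.collection.mutable.Set.empty[String]
    batch.partition { r =>
      val ok = isComplete(r) && !seen.contains(r.id)
      if (ok) seen += r.id
      ok
    }
  }
}
```

In practice a check like this would run as one stage of the ingestion pipeline, with flagged records routed to a quarantine table and logged for the data-quality issue log described above.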
Your Experience:
- Bachelor’s degree in Computer Science, Data Engineering, or a related field (or equivalent work experience).
- 3 years of experience in data integration, ETL development, or related roles with a focus on large-scale data systems.
- At least 2 years of experience with a Cloud database such as Snowflake or Azure SQL.
- 1 or more years of experience writing and optimizing production-grade Scala code, particularly for data processing and transformation.
- At least 1 year of experience using CRM software.
- In-depth understanding of donor management and data services, data processing, and analytics.
- Experience working in a data co-op or data-driven environment where multiple stakeholders contribute to and use shared data.
- Ability to troubleshoot and optimize data flows and resolve performance bottlenecks in data processes.
- Expertise in developing efficient code and writing advanced queries to handle large datasets and complex data transformations.
- Strong attention to detail with a focus on data quality, consistency, and integrity.
- Strong problem-solving skills and capable of handling complex data integration challenges in a fast-paced environment.
- Excellent written and verbal communication skills, with the ability to explain technical concepts to both technical and non-technical audiences.
- Experience with JIRA and GitHub a plus.
Why Join Moore:
- Join the largest marketing and fundraising company in North America serving the nonprofit industry, where we prioritize innovation and professional growth.
- Collaborate with industry subject matter experts across an enterprise of over 5,000 employees.
- To help you stay energized, engaged, and inspired, we offer a wide range of benefits, including comprehensive healthcare, paid holidays, and generous paid time off, so you have the time and space to recharge, pursue your other passions, and be with the people you care about.
- Moore is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.
Salary: $72,500