What are the responsibilities and job description for the Data Engineer II position at Propio Language Services?
Propio Language Services is transforming communication by developing tools and technologies that make it easier and more efficient for clients to engage with the Limited English Proficiency population, improving access to healthcare and essential services in social services, education, legal, and many other fields.
The Data Engineer II will play a key role in designing, developing, and maintaining our data infrastructure. This position requires a blend of technical skills, analytical thinking, and the ability to collaborate with cross-functional teams to support data-driven decision-making processes.
Key Responsibilities:
- Design, implement, and optimize data pipelines for efficient data processing and analysis.
- Develop and maintain ETL (Extract, Transform, Load) processes to ensure data accuracy and availability.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
- Ensure the scalability, reliability, and performance of data systems.
- Implement and maintain data security and privacy measures.
- Troubleshoot and resolve data-related issues and anomalies.
- Continuously improve and document data engineering processes and best practices.
- Act as a subject matter expert and strategic advisor on data-related initiatives, ensuring alignment with organizational objectives.
- Perform ad hoc data analytics and reporting to meet emergent business requirements.
- Develop and optimize complex SQL queries and stored procedures for data extraction and modeling.
- Build custom automated paginated reports using SSRS (SQL Server Reporting Services).
Qualifications:
- Bachelor’s Degree in Computer Science, Data Science, Mathematics, Statistics, or a related field; or equivalent work experience.
- Minimum of 3 years of experience in a comparable data engineering role.
- Proven experience with designing and building scalable data pipelines and ETL processes.
- Experience with cloud platforms (AWS, Azure, Google Cloud) and their data services.
- Proficiency in SQL and experience with relational databases (e.g., MySQL, PostgreSQL).
- Strong programming skills in languages such as Python or Scala.
- Experience with big data technologies (e.g., Hadoop, Spark).
- Familiarity with data warehousing solutions (e.g., Databricks).
- Knowledge of data modeling, data warehousing, and data lake concepts.