What are the responsibilities and job description for the Python/Pyspark Big Data ETL Engineer position at Case Interactive?
Description & Requirements
The company runs on data. As the Data Management & Analytics team within Engineering, we support our organization's needs around managing data efficiently and enable everyone across the company to make informed decisions by providing insights into that data.
We are responsible for ingesting and preparing massive amounts of data for reporting, dashboards, self-service and advanced analytics.
A key objective of this role is to help build and support enterprise-level data analytics programs leveraging traditional warehouse technologies, PySpark, MPP databases, and Hadoop.
In order to be successful:
- You should have a working knowledge of industry-standard data infrastructure tools (e.g., warehouse, BI, analytics, big data) with the goal of providing end users with analytics at the speed of thought.
- You should be proficient at developing, architecting, standardizing, and supporting technology platforms using industry-leading ETL solutions.
- You should thrive in building scalable, high-throughput systems.
- You should have experience with agile BI and ETL practices to assist with interim data preparation for data discovery and self-service needs.
- You must have strong communication, presentation, problem-solving, and troubleshooting skills.
- You should be highly motivated to drive innovation company-wide.
You'll need to have:
- 5 years of experience designing and developing ETL pipelines leveraging PySpark/Python (see the sketch after this list).
- Strong understanding of data warehousing methodologies, ETL processing and dimensional data modeling.
- Advanced SQL capabilities are required. Knowledge of database design techniques and experience working with extremely large data volumes is a plus.
- Demonstrated experience and ability to work with business users to gather requirements and manage scope.
- Experience with workflow tools such as Airflow or Tidal
- Experience working in a big data environment with technologies such as Greenplum, Hadoop, and Hive
- BA, BS, MS, or PhD in Computer Science, Engineering, or a related technology field
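As a rough illustration of the kind of pipeline the role describes, here is a minimal PySpark ETL sketch. All paths, column names, and the target table are hypothetical placeholders, not details from the posting.

```python
from pyspark.sql import SparkSession, functions as F

# Minimal ETL sketch: read raw order events, derive a daily aggregate,
# and load it into a warehouse-style table. Every name here (paths,
# columns, table) is an illustrative assumption.
spark = SparkSession.builder.appName("daily_orders_etl").getOrCreate()

# Extract: raw order events landed as Parquet
orders = spark.read.parquet("/data/raw/orders/")

# Transform: keep completed orders and aggregate into a simple fact-style table
daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "region")
    .agg(
        F.count("*").alias("order_count"),
        F.sum("amount").alias("total_revenue"),
    )
)

# Load: write date-partitioned output to the warehouse table
(
    daily_revenue.write
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("analytics.fact_daily_revenue")
)
```

In practice, a job like this would typically be packaged as a spark-submit task and scheduled by a workflow tool such as Airflow or Tidal, as called out in the requirements above.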
We'd love to see:
- Experience with large database and data warehouse implementations (20 TB)
- Understanding of VLDB performance aspects such as table partitioning, sharding, table distribution, and optimization techniques (see the sketch after this list)
- Knowledge of reporting tools such as Qlik Sense, Tableau, or Cognos
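For the VLDB points above, partitioning and distribution are largely storage-layer decisions; the sketch below shows the PySpark side of the idea, repartitioning on a join key and writing date-partitioned output so queries can prune partitions. The column names and paths are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partitioning_sketch").getOrCreate()

events = spark.read.parquet("/data/raw/events/")

# Repartition on a high-cardinality key so downstream joins on customer_id
# co-locate matching rows instead of forcing a full shuffle each time.
events_by_customer = events.repartition(200, "customer_id")

# Persist with date-based partitions so queries filtering on event_date
# can prune partitions rather than scan the whole table.
(
    events_by_customer.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("/data/curated/events/")
)
```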