What are the responsibilities and job description for the Sr. Data Engineer - W2 ONLY position at Tricon Solutions?
********* W2 ONLY ROLE - NO C2C - Only qualified candidates located near the O'Fallon, MO area to be considered due to the position requiring an onsite presence *********
Job Title: Senior Data Engineer
Location: O'Fallon, MO
Type: 9-month contract on W2
Role Overview:
Seeking an enthusiastic and adaptable Senior Data Engineer responsible for developing and managing data pipelines and supporting a variety of data-driven requests across the company. The position calls for a wide range of technical skills, from developing high-throughput Spark jobs to tuning the performance of Big Data solutions. The ideal candidate will have experience working with large-scale data and building automated ETL processes. Data warehousing experience is a plus.
Qualifications:
Excellent problem-solving skills with a solid understanding of data engineering concepts.
Proficient in Apache Spark with Python and related technologies.
Strong knowledge of SQL and performance tuning.
Experience in Big Data technologies like Hadoop and Oracle Exadata.
Solid knowledge of Linux environments and proficiency with bash scripting.
Effective verbal and written communication skills.
Nice to Have:
Knowledge of or prior experience with Apache Kafka and Apache Spark with Scala.
Orchestration with Apache NiFi or Apache Airflow.
Java development and microservices architecture.
Build tools like Jenkins.
Log analysis and monitoring using Splunk.
Experience with Databricks, AWS.
Experience working with large data sets (terabytes of data).
Key Responsibilities:
Build and maintain big data technologies, environments, and applications, seeking opportunities for improvements and efficiencies.
Perform ETL (Extract, Transform, Load) processes based on business requirements using Apache Spark and data ingestion from Apache Kafka.
Work with various data platforms including Apache Hadoop, Apache Ozone, AWS S3, Delta Lake, Apache Iceberg.
Utilize orchestration tools like Apache NiFi for managing and scheduling data flows efficiently.
Write performant SQL statements to analyze data with Hive/Impala/Oracle.
Participate in the full software development lifecycle (SDLC), from design to deployment.
Work with multiple stakeholders across teams to fulfill ad-hoc investigations, including large-scale data extraction, transformation, and analyses.
NOTES FROM HIRING MANAGER
- Looking for a Data Engineer!
- Do you have experience in Spark?
- Do you have experience in Python?
- Do you have experience in SQL?
- Are you familiar with Hadoop?