What are the responsibilities and job description for the Data Engineer (Databricks, Python, SQL, PySpark) - Contract / Full Time - Jersey City, NJ (Hybrid) position at iPivot?
Hi,
I am Suresh from iPivot. Please find the job description below for your reference. If you are interested, please reply with an updated resume.
Job Title: Data Engineer (Databricks, Python, SQL, PySpark)
Location: Jersey City, NJ (Hybrid)
Job Type: Contract / Full Time
Note: Only independent W2 consultants or visa transfer candidates will be considered.
Job Summary
We are seeking a highly skilled Data Engineer with hands-on experience in Databricks, Python, SQL, and PySpark to join our growing data engineering team.
In this role, you'll build scalable data pipelines, work on big data processing, and collaborate across teams to deliver reliable, analytics-ready data.
The ideal candidate has strong experience with cloud data platforms and is passionate about driving data solutions using modern tools and technologies.
Required Skills
3 years of experience in data engineering or a related role.
Strong hands-on experience with Databricks and Apache Spark (PySpark).
Proficiency in Python and SQL for data manipulation and scripting.
Experience working with large datasets and building scalable data processing workflows.
Familiarity with cloud platforms (AWS, Azure, or GCP), especially cloud-native data solutions.
Understanding of data modeling, warehousing concepts, and performance tuning.
Experience with version control (Git) and CI/CD for data pipelines.
Preferred Qualifications
Experience with Delta Lake and the Lakehouse architecture.
Exposure to orchestration and transformation tools such as Airflow, dbt, or Azure Data Factory.
Experience working in Agile/Scrum environments.
Knowledge of real-time data processing and streaming (e.g., Kafka, Structured Streaming) is a plus.
Certification in Databricks or relevant cloud technologies.
Key Responsibilities
Design, build, and maintain large-scale data pipelines on Databricks using PySpark and SQL.
Develop efficient, reliable, and scalable ETL/ELT workflows to ingest and transform structured and unstructured data.
Collaborate with data scientists, analysts, and product teams to understand data needs and deliver actionable datasets.
Optimize data performance and resource usage within Databricks clusters.
Automate data validation and monitoring to ensure pipeline reliability and data quality.
Write clean, modular, and testable code in Python.
Implement best practices for data security, governance, and compliance.
Document data workflows, architecture, and technical decisions.
Thanks and Regards,
Suresh Durgam
Senior Recruiter
M: (732) 813-4401
E: durgams@ipivot.io