What are the responsibilities and job description for the Big Data Engineer position at Zentek Infosoft Inc?
Job Details
Hi,
Greetings from Zentek Infosoft Inc.!
We have an immediate opening with one of our clients. Please go through the job description below, and if you are interested, kindly share your updated resume as soon as possible.
A little bit about Zentek Infosoft: For the last 6 years, Zentek Infosoft Inc. has been providing Information Technology solutions, enterprise staffing, and professional services for organizations of every size and in every industry, in both the public and private sectors, from start-ups to Fortune 500 firms. Our customers include banks and financial services firms, manufacturers, retail chains, healthcare organizations, internet service and telecommunications providers, educational institutions, IT consulting giants, and public-sector agencies.
Job Description:
Required Skills:
1. Proficiency in data engineering programming languages (preferably Python; alternatively Scala or Java)
2. Proficiency in at least one cluster computing framework (preferably Spark; alternatively Flink or Storm)
3. Proficiency in at least one cloud data lakehouse platform (preferably AWS data lake services or Databricks; alternatively Hadoop), at least one relational data store (Postgres, Oracle, or similar), and at least one NoSQL data store (Cassandra, DynamoDB, MongoDB, or similar)
4. Proficiency in at least one scheduling/orchestration tool (preferably Airflow; alternatively AWS Step Functions or similar)
5. Proficiency with data structures, data serialization formats (JSON, Avro, Protobuf, or similar), big-data storage formats (Parquet, Iceberg, or similar), data processing methodologies (batch, micro-batch, and stream), one or more data modeling techniques (Dimensional, Data Vault, Kimball, Inmon, etc.), Agile methodology (developing PI plans and roadmaps), TDD (or BDD), and CI/CD tools (Jenkins, Git)
6. Strong organizational, problem-solving, and critical-thinking skills; strong documentation skills
Preferred skills:
Experience using AWS Bedrock APIs
Knowledge of Generative AI concepts (such as RAG, vector embeddings, model fine-tuning, and agentic AI)
Experience with IaC (preferably Terraform; alternatively AWS CloudFormation)
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.
Salary: $55 - $58