What are the responsibilities and job description for the Senior TigerGraph Database Developer position at Artmac Soft LLC?
Who we are
Artmac Soft is a technology consulting and service-oriented IT company dedicated to providing innovative technology solutions and services to customers.
Job Description :
Job Title : Senior TigerGraph Database Developer
Job Type : C2C
Experience : 8-10 Years
Location : Austin, Texas
Qualifications :
- 5 years of professional experience in data engineering.
- 3-5 years of hands-on experience with Graph Databases (Neo4j, TigerGraph, or ArangoDB).
- 3-5 years of hands-on experience with Spark.
- 2-5 years of experience with NoSQL databases.
- 1 year of experience with PySpark.
- 1 year of experience with Data Science concepts and practices.
- Strong understanding of data engineering principles and best practices.
- Proficiency in at least one of the following query languages: Cypher (Neo4j), GSQL (TigerGraph), or AQL (ArangoDB).
- Experience with data modeling and schema design, especially for graph databases.
- Experience with ETL processes and tools.
- Solid understanding of database performance tuning and optimization.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
Responsibilities :
- Design and implement data pipelines to ingest, transform, and load data from various sources into our data lake and graph database.
- Develop and maintain ETL processes using Spark and PySpark.
- Design and implement the schema and data model for our graph database (Neo4j, TigerGraph, or ArangoDB).
- Write efficient and performant queries for data retrieval, analysis, and visualization within the graph database.
- Work with NoSQL databases to support various data storage and retrieval needs.
- Collaborate with data scientists and other stakeholders to understand their data requirements and translate them into effective data solutions.
- Monitor and optimize the performance and scalability of the data infrastructure, including the graph database and Spark clusters.
- Implement best practices for data governance, security, and quality.
- Work with staged environments (development, integration / testing, and production) to ensure smooth deployments.
- Contribute to the development of internal tools and libraries for working with data.
- Document data pipelines, data models, and other technical aspects of the data infrastructure.
- Stay up-to-date with the latest advancements in data engineering, big data technologies, and graph databases.
- Optional (if the BOM domain is crucial) : Apply knowledge of Bill of Materials (BOM) structures and data to design and optimize data pipelines and graph database models.