What are the responsibilities and job description for the Data Engineer position at VLink Inc?
Job Details
Job Title: Data Engineer (Google Cloud Platform, BigQuery, Snowflake, Azure, Delta)
Location: Seattle, WA - hybrid (local only)
Employment Type: Contract
Duration: 6 Months
About VLink: Started in 2006 and headquartered in Connecticut, VLink is one of the fastest-growing digital technology services and consulting companies. Since its inception, our innovative team members have been solving the most complex business and IT challenges of our global clients.
Job Description:
***MUST HAVE INDUSTRY EXPERIENCE in ECOMMERCE/RETAIL***
**Need Snowflake experience - to migrate data into Snowflake**
**Experience with massive amounts of data - this company has over 13 million users**
**Google Cloud Platform, Google Analytics, Azure, Pipelines, Delta**
As an Engineer II, you will bring a high level of technical knowledge, but also an ability to spread knowledge to your co-workers. You will help form the core of our engineering practice at the company by contributing to all areas of development and operations (pre-production to production). You will be an example of what good engineering looks like and help others around you refine their skills. You will be part of a day-to-day production release team and may perform on-call support functions as needed. Having a DevOps mindset is the key to success in this role, as Engineers are commonly part of full DevOps teams that "own all parts of software development, release pipelines, production monitoring, security, and support."
Data Engineering Projects
Data pipeline creation and maintenance. Stack: Google Cloud Platform (GCP), Azure Cloud, Azure Databricks, Snowflake
Includes engineering documentation, knowledge transfer to other engineers, and future enhancements and maintenance
Create secure data views and publish them to the Enterprise Data Exchange via Snowflake for other teams to consume
Data pipeline modernization and migration via Databricks Delta Live Tables (DLT) and Unity Catalog
Leverage the existing CI/CD process for pipeline deployment
Adhere to PII encryption and masking standards
Data Engineering Tools/Techniques
Orchestration tools: ADF, Airflow, Fivetran
Languages: SQL, Python
Data modeling: Star and Snowflake schemas
Streaming: Kafka, Event Hubs, Spark, Snowflake Streaming
DevOps Support
Support improvements to the current CI/CD process
Production monitoring and failure support
Provide an escalation point and participate in on-call support rotations
Participate in discussions on how to improve DevOps
Be aware of product releases and how they impact our business
Take part in Agile ceremonies
Perform engineering assignments using existing procedures and best practices
Conduct research to aid in product troubleshooting and optimization efforts
Participate in and contribute to our Engineering Community of Practice
Qualifications:
Completed Bachelor's degree or diploma (or equivalent experience) in Computer Science, Software Engineering or Software Architecture preferred; candidates with substantial and relevant industry experience are also eligible
5 years of relevant engineering experience
Google Professional Data Engineer Certification is preferred
Experience in Bigtable, clickstream data migration, and semi-structured and unstructured data management
Experience with Google Cloud Platform and BigQuery
Experience with developing complex SQL queries
Experience with CI/CD principles and best practices
Experience with Azure Data Factory, Azure Databricks, Snowflake, and Storage Accounts
Experience working with a data engineering team and understanding of data engineering practices
Ability to learn, understand, and work quickly with new emerging technologies, methodologies, and solutions in the Cloud/IT technology space
Experience with bug tracking and task management software such as JIRA, etc.
Experience managing outages, customer escalations, crisis management, and other similar circumstances