Requirement # : 1
Job Title : Data Engineer - Implementation Consultant
Location : Irving, on-site 3x per week
Top Skills for this role :
- Candidates need to be hands-on
- Strong experience with Python, iPaaS, and data engineering
Candidates must be US Citizens
Expected Start Date : ASAP
End Date (if applicable) : 12/31/24, extendable into 2025
Interview Process : After a couple of initial rounds, the final interview must be onsite (this is a must)
Summary : The client is currently onboarding North America based OPCOs (operating companies) to their new digital platform (Spark), which is built on Kubernetes running in Azure and powers the customer and associate experience (CX / AX) in EMEA. They are planning to migrate North American operating companies (OPCOs) to the platform beginning in 2024.
As part of this integration work, the engineer will be responsible for designing, configuring, and implementing data validation tools that validate the data transferred between the two systems. You will ensure best practices are followed throughout the implementation lifecycle and help construct data sets for use by the team and other business stakeholders to power both reporting and validation.
Job responsibilities
We are looking for a highly skilled and motivated Data Engineer. As a Data Engineer, you will be responsible for validating the data between two systems, ensuring accuracy and consistency.
- Design and develop data entity validation scripts using Python.
- Validate and reconcile data between the two systems, ensuring accuracy and consistency.
- Collaborate with cross-functional teams to identify and resolve data discrepancies and issues.
- Develop and implement data quality checks and monitoring processes to ensure data integrity.
- Design and maintain data pipelines and ETL processes to extract, transform, and load data from various sources.
- Optimize data storage and retrieval processes to improve performance and efficiency.
- Perform data analysis and profiling to identify data quality issues and recommend solutions.
- Develop and maintain documentation for data validation processes, data mappings, and data lineage.
- Stay up to date with industry trends and best practices in data engineering and data validation.
- Support the migration of North American OPCOs, enabling data validation on the new platform.
- Gather business requirements from key client stakeholders and translate them into technical tracking specifications based on standards and industry best practice.
Required Skills
- Strong knowledge of iPaaS and data validation for migration of applications.
- Demonstrable experience in delivering end-to-end data validation.
- Confident understanding of web development and its languages, and Python proficiency.
- Proactive and highly organised, with strong time management and planning skills; able to change direction and work on multiple projects across a wide range of topics.
- Desired : Knowledge of Python / Google BigQuery architecture.
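For illustration only, the sketch below shows the kind of data entity validation script described above: it reconciles a source extract against a target extract on a shared key and reports row-level and field-level mismatches. The file names, key column, and pandas-based approach are assumptions for the example, not the client's actual schemas or tooling.

    # Illustrative sketch only. File names, the key column, and the pandas-based
    # approach are placeholders, not the client's actual schemas or tooling.
    import pandas as pd

    def reconcile(source_path: str, target_path: str, key: str) -> pd.DataFrame:
        source = pd.read_csv(source_path)
        target = pd.read_csv(target_path)

        # Row-level completeness: records present in one system but not the other.
        merged = source.merge(target, on=key, how="outer",
                              suffixes=("_src", "_tgt"), indicator=True)
        print(f"Rows only in source: {(merged['_merge'] == 'left_only').sum()}")
        print(f"Rows only in target: {(merged['_merge'] == 'right_only').sum()}")

        # Field-level consistency: compare every shared column for matched rows.
        # Note: NaN compares as unequal here; real rules would handle nulls explicitly.
        matched = merged[merged["_merge"] == "both"]
        shared = [c for c in source.columns if c != key and c in target.columns]
        mismatches = []
        for col in shared:
            diff = matched[matched[f"{col}_src"] != matched[f"{col}_tgt"]]
            if not diff.empty:
                mismatches.append(
                    diff[[key, f"{col}_src", f"{col}_tgt"]]
                    .rename(columns={f"{col}_src": "source_value",
                                     f"{col}_tgt": "target_value"})
                    .assign(field=col)
                )
        return (pd.concat(mismatches, ignore_index=True) if mismatches
                else pd.DataFrame(columns=[key, "source_value", "target_value", "field"]))

    if __name__ == "__main__":
        print(reconcile("legacy_orders.csv", "spark_orders.csv", key="order_id").head())

In practice, checks like this would be wired into the pipelines and monitoring described in the responsibilities, with null handling and tolerance rules agreed with stakeholders.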
Requirement # : 2
Job Title : Senior Associate DevOps (Senior Associate SRE / DevOps)
Location : On-site Irving, TX. Expected to be in office Monday - Friday, 8am - 5pm. Every alternate week we will have Tuesday and Thursday WFH.
Project Overview : The client is currently onboarding North America based OPCOs (operating companies) to their new digital platform, which is built on the cloud-native technology Kubernetes running in Azure. They are planning to migrate North American operating companies (OPCOs) to the platform beginning in 2024, and we are looking for engineers experienced in handling L2 support and addressing all technical issues raised on the platform.
Industry Experience :
Retail industry experience is preferred, and B2B experience is a nice to have.
Expected Start Date : ASAP
End Date : 31-Dec-2024 - potential extension OR conversion to FTE (depending on performance)
Top Skills Required :
- Azure Cloud and Kubernetes experience.
- SRE experience is a must have; the candidate cannot be just a DevOps person.
- Good analytical and technical skills in addressing L2 incidents, which requires solid L2 incident management experience.
- Familiar with Agile and ITSM.
Extra Notes :
Q : Is this role solely focused on level II support for the Kubernetes migration, or are they also implementing / automating Kubernetes in Azure? If both, what % of the role is focused on support vs. implementation?
A : The role would be primarily level II, and the engineer would focus on implementing tools on Kubernetes (e.g., kubecost setup for cost reduction). The split would be 50% implementation and 50% operational support.
Q : The description asks for programming experience in Java, JavaScript, or Python: are they using those languages to debug / troubleshoot issues, or are they responsible for developing applications?
A : The intent is to have the team debug / troubleshoot issues.
Q : Same for Spring, React, NextJs, GraphQL: are they using those to debug / troubleshoot issues, or are they developing applications / APIs with them?
A : The intent is to have the team debug / troubleshoot issues.
Q : Does this person need experience with Dynatrace AND Splunk, or can they have one vs. the other?
A : Ideally both; we use Dynatrace for APM and Splunk for log monitoring. If we get a candidate with either one, we can groom them on the other.
Detailed Job Description :
Mission statement
- Setting up monitoring and alerting tools (observability stack) and maintaining them as the product evolves.
- Responding to incidents : contribute to communication, analysis, and fixing during an incident.
- Internal customer (OpCo) technical support (L3) to investigate and qualify software bugs or integration issues, according to the rotation plan.
- Contribution to system design and operations management : solutions analysis, screening, experimentation, and integration.
- Contribute to building the knowledge base (internal playbooks and customer materials) and influence technical choices to improve resiliency and operability.
Required skills and qualifications
- Computer science or scientific discipline background
- DevOps or SRE experience
- Proactive approach, analytical mindset, and willingness to own and drive topics in full autonomy
- Ability to program with one or more high-level programming languages : Java, JavaScript, or Python
- Experience with public cloud infrastructure (Azure or GCP) and microservices architecture
- Experience in two or more components of the following stack : Spring (Java), React, NextJs, GraphQL, Azure Pipelines
- Proficient level in English : speaking and writing
- Familiar with Agile organizations (Scrum and SAFe) and ITSM practices
Preferred skills
- Experience in Dynatrace and Splunk
- Coding experience beyond basic scripting in one of the mentioned languages : able to debug and push hotfixes if needed.
- Hands-on experience in networking and Kubernetes will be a plus.
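As a purely illustrative sketch of the debug / troubleshoot scripting expected in this role (not a prescribed tool or process), the snippet below uses the official Kubernetes Python client to surface pods that are not healthy in a given namespace. The namespace name and restart threshold are hypothetical values chosen for the example.

    # Illustrative troubleshooting sketch only. Assumes kubectl access to the AKS
    # cluster is already configured; the namespace and restart threshold are
    # hypothetical, not values from the client's environment.
    from kubernetes import client, config

    def unhealthy_pods(namespace: str, max_restarts: int = 5):
        config.load_kube_config()  # use config.load_incluster_config() when run in-cluster
        v1 = client.CoreV1Api()
        report = []
        for pod in v1.list_namespaced_pod(namespace).items:
            phase = pod.status.phase
            restarts = sum(cs.restart_count for cs in (pod.status.container_statuses or []))
            if phase not in ("Running", "Succeeded") or restarts > max_restarts:
                report.append((pod.metadata.name, phase, restarts))
        return report

    if __name__ == "__main__":
        for name, phase, restarts in unhealthy_pods("spark-platform"):
            print(f"{name}: phase={phase}, restarts={restarts}")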