What are the responsibilities and job description for the Python Data Architect (Banking) position at Open Systems Technologies?
Our Role:
We are looking for an astute, determined professional like you to fulfil a Data Engineering role within
our Technology Solutions Group. You will showcase your success in a fast-paced environment through
collaboration, ownership, and innovation. Your expertise in emerging trends and practices will spark
stimulating discussions around optimization and change to help keep our competitive edge. This
rewarding opportunity will enable you to make a big impact in our organization, so if this sounds
exciting, then PGIM might be the place.
Your Impact:
• Build and maintain new and existing applications in preparation for a large-scale
architectural migration within an Agile function.
• Align with the Product Owner and Scrum Master in assessing business needs and
transforming them into scalable applications.
• Build and maintain code to manage data received from heterogeneous sources, including
web-based sources, internal/external databases, flat files, and varied data formats
(binary, ASCII).
• Help build a new enterprise data warehouse and maintain the existing one.
• Design and support effective storage and retrieval of very large internal and external data
sets, and think forward about the convergence strategy with our AWS cloud migration.
• Assess the impact of scaling up and scaling out and ensure sustained data management and
data delivery performance.
• Build interfaces for supporting evolving and new applications and accommodating new data
sources and types of data.
Your Required Skills:
• 5 years of experience in building out data pipelines in Python or Java
• 2 years of experience working with Azure cloud services
• Experience with data processing platforms, such as Azure Data Factory
• Experience with data lakes, data marts, and data warehouses
• Experience with transactional database engines, such as SQL Server
• Fluent with SQL for data analysis
• Excellent analytical and problem-solving skills with the ability to think quickly and offer
alternatives both independently and within teams.
• Experience working in an Agile environment with a Scrum Master/Product Owner, and the
ability to deliver
• Ability to communicate status and challenges to the team
• Demonstrated ability to learn new skills and work as part of a team
Your Desired Skills:
• Experience with Spark
• Experience working in Hadoop or other Big data platforms
• Exposure to deploying code through a pipeline
• Good exposure to container technologies such as ECS or Docker
• Working experience in a Linux-based environment
• Direct experience supporting multiple business units in foundational data work, and a sound
understanding of capital markets within Fixed Income
• Knowledge of Jira, Confluence, SAFe development methodology & DevOps
• Proven ability to work quickly in a dynamic environment.
• Bachelor's degree in Computer Science or a related field.