Job Posting for Senior Executive, Data & Digital at Keppel Ltd.
Job Description
Develop, maintain, and optimize scalable data pipelines using Python and AWS services (e.g., S3, Lambda, ECS, EKS, RDS, SNS/SQS, Vector DB), and build out new integrations to support continuing growth in data volume and complexity (a minimal sketch follows this list).
Ensure seamless integration with AI platforms such as Bedrock and Google.
Collaborate with analytics and business teams to create and refine data models for business intelligence tools, enhancing data accessibility and driving data-driven decision making.
Take end-to-end ownership of data quality in our core datasets and data pipelines.
Participate in code reviews and contribute to DevOps / DataOps / MLOps.
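To illustrate the kind of pipeline work described above, here is a minimal sketch of one S3-triggered Lambda step in Python with boto3. The bucket name, topic ARN, and transformation logic are hypothetical placeholders, not details of the role.

```python
"""Minimal sketch of an S3-triggered Lambda pipeline step.

Assumptions: DEST_BUCKET, TOPIC_ARN, and the cleaning logic are
hypothetical illustrations, not details from the posting.
"""
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")
sns = boto3.client("sns")

DEST_BUCKET = "curated-data-bucket"  # hypothetical
TOPIC_ARN = "arn:aws:sns:ap-southeast-1:123456789012:pipeline-events"  # hypothetical


def handler(event, context):
    """On each S3 PutObject event: clean a raw JSON record, write it
    to the curated bucket, and notify downstream consumers via SNS."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Read the raw object and parse it as JSON.
        raw = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        row = json.loads(raw)

        # Hypothetical transformation: drop null fields, normalise key case.
        cleaned = {k.lower(): v for k, v in row.items() if v is not None}

        # Write the curated record, then publish a pipeline event.
        s3.put_object(
            Bucket=DEST_BUCKET,
            Key=key,
            Body=json.dumps(cleaned).encode("utf-8"),
        )
        sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps({"processed": key}))
```

In a real deployment this handler would be one small step in a larger pipeline, with the S3-to-Lambda trigger and the SNS/SQS fan-out configured in infrastructure code rather than hard-wired constants.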
Job Requirements
Bachelor's degree in Computer Science, Engineering, or a related field.
2-3 years of experience in data engineering or a similar role.
Strong programming skills in Python and SQL, with hands-on experience in AWS and the related tech stack.
Experience building scalable data pipelines with technologies such as Glue, Airflow, Kafka, and Spark.
Experience with Snowflake, dbt, and Bedrock is a plus.
Good understanding of basic machine learning concepts and tooling such as SageMaker.