What are the responsibilities and job description for the AWS Software Engineer III - Java/Python position at JPMorgan Chase?
Job Description
We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible.
As an AWS Software Engineer III - Java/Python at JPMorgan Chase within the Commercial & Investment Bank – Market Data Lake team, you are an integral part of an agile team that works to enhance, build, and deliver trusted market-leading technology products in a secure, stable, and scalable way. As a core technical contributor, you are responsible for conducting critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives.
Job responsibilities
- Executes creative software solutions, design, development, and technical troubleshooting with ability to think beyond routine or conventional approaches to build solutions or break down technical problems
- Develops secure high-quality production code, and maintains algorithms that run synchronously with appropriate systems
- Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development
- Contributes to software engineering communities of practice and events that explore new and emerging technologies
- Adds to team culture of diversity, equity, inclusion, and respect
Required qualifications, capabilities, and skills
- Formal training or certification on Software Engineering concepts and 3 years of applied experience
- Previous development experience in Java and/or Python
- Hands-on working experience with AWS and services such as KMS, IAM, Lambda, S3, Glue, EKS, and SNS
- Experience working with tools such as Terraform to provision AWS cloud services
- Test-Driven Development experience using JUnit, Mockito, or similar
- Working experience with query engines such as AWS Athena and Redshift
- Working experience with message bus technologies such as Kafka or AMPS
- Extensive knowledge of and work experience with distributed systems and massively parallel processing
- Experience developing open-source libraries or internal libraries that are integrated into applications by other internal teams
- Working knowledge of SQL
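
To illustrate the Test-Driven Development style the role calls for, here is a minimal sketch in Python using the standard library's `unittest` and `unittest.mock` (the analogue of JUnit/Mockito on the Java side). The `archive_record` function and all names in it are hypothetical, not part of the posting; the mock stands in for an AWS client such as an S3 client, so no AWS access is needed to run the test:

```python
import unittest
from unittest.mock import MagicMock

# Hypothetical service function: stores a payload in an object store
# (e.g. S3) under the given bucket and key, returning True on success.
def archive_record(client, bucket, key, payload):
    client.put_object(Bucket=bucket, Key=key, Body=payload)
    return True

class ArchiveRecordTest(unittest.TestCase):
    def test_uploads_payload_to_expected_bucket(self):
        # The mock client records calls so the test can verify them,
        # without touching any real AWS service.
        client = MagicMock()
        result = archive_record(client, "market-data", "trades/1.json", b"{}")
        client.put_object.assert_called_once_with(
            Bucket="market-data", Key="trades/1.json", Body=b"{}"
        )
        self.assertTrue(result)
```

Run with `python -m unittest` from the file's directory. The same shape carries over to JUnit and Mockito: write the failing test first, verify the interaction against a mock collaborator, then implement the production code.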
Preferred qualifications, capabilities, and skills
- Prior experience with big data technologies
- Experience working with Databricks and Iceberg
- Hands-on experience with Docker, Kubernetes, or related container platforms
- Experience with AWS Lake Formation