What are the responsibilities and job description for the DevOps Engineer position at EDF Renewables?
The Global Solar Optimization Platform team builds end-to-end solutions for our solar sites around the world and transforms their data into meaningful metrics for our customers.
Our team ingests massive volumes of data and generates a series of KPIs that help us and our customers understand how our sites are performing, where and why they are under-performing, what the most common problems are, etc.
The number of sites and the volume of data are growing fast. We'll need your help building smart, reliable, and performant solutions to make the most of that data.
Our current stack includes Redshift, RDS, Glue, Lambdas, S3, the AWS developer tools, and other AWS technologies. If you have good ideas for alternative solutions, we want to hear about them!
Near-future opportunities (as in later this year!) include helping design and build our state-of-the-art energy storage solutions.
Responsibilities:
15% - Build, support, test, and maintain efficient CI/CD pipelines that allow the dev team to deploy world-class software as quickly and efficiently as possible
15% - Monitor and report system metrics to improve overall efficiency and to increase scalability across the platform
15% - Maintain critical services and create a vigilant notification strategy for internal teams
15% - Guide decisions around reliability and resiliency including auto-scaling, self-healing, and circuit-breakers
10% - Find opportunities to improve or optimize processes through automation
10% - Work with the development team to prototype new ideas and generate detailed reports and metrics to guide decisions
10% - Help mentor and educate your teammates (and vice versa) to expand everyone's skillsets
5% - Work closely with data engineers to create, improve, and maintain highly reliable, automated data pipelines
5% - Other duties as assigned
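To give a flavor of the reliability patterns mentioned above (auto-scaling, self-healing, circuit-breakers), here is a minimal circuit-breaker sketch in Python. It is illustrative only — the class and parameter names are not part of EDF's stack, just one common way the pattern is implemented:

```python
import time


class CircuitBreaker:
    """Minimal circuit breaker: opens after max_failures consecutive
    failures and rejects calls until reset_timeout seconds have passed,
    protecting callers from hammering an unhealthy dependency."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                # Circuit is open: fail fast instead of calling fn.
                raise RuntimeError("circuit open: call rejected")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # success resets the failure count
        return result
```

In practice a wrapper like this would sit around calls to a flaky downstream service (an API, a database), with the open/closed state also feeding the monitoring and notification strategy described above.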
Qualifications: (Degree/Certifications/License/Experience/Specialized Knowledge/Skills)
Education/Experience –
BS in Computer Science or relevant experience
4 years of experience working in software engineering
2 years of hands-on experience building and managing AWS infrastructure
2 years of hands-on experience programming in Python, Ruby, Go, Swift, Java, .NET, C, or a similar language
Experience with automating cloud native technologies, deploying applications, and provisioning infrastructure
Hands-on experience with Infrastructure as Code, using CloudFormation, CDK, or Sceptre
Experience in an automated CI/CD environment, with a detailed understanding of automating every element of a deployment pipeline: source control, CI, deployment, and QA
Experience with *nix file systems and bash scripting
Bonus points for previous experience with a Big Data team and/or automation tools (Chef, Puppet, Ansible, Salt)
Skills/Knowledge/Abilities –
Proactive communicator who can translate between technical and non-technical stakeholders
Team player who is interested in sharing knowledge and mentoring others
Someone who stays up to date with high-potential new technologies and can evaluate them and present findings to the team
Physical Requirements:
Working Conditions:
95% of time is spent in the office environment, utilizing computers (frequent use of various Microsoft software/programs), phones, and general office equipment. 5% of time is spent outside of the office visiting vendors’ and/or internal customers’ sites, in addition to attending various conferences and meetings.