What are the responsibilities and job description for the AWS Data Engineer with strong Python (32274) position at Myticas Consulting?
Myticas Consulting's direct client, based in Phoenix, AZ, is currently seeking an AWS Data Engineer with strong Python experience for a 100% remote contract position.
Job Description:
We are looking for a versatile, experienced professional to fill the role of Data Engineer, with strong expertise in AWS tools and services. The ideal candidate will be adept at both data analysis and data engineering, with a deep understanding of AWS technologies to drive data-driven insights and solutions.
Responsibilities:
- Data Exploration and Analysis: Leverage AWS tools such as Amazon Redshift, Amazon Athena, and Amazon QuickSight to extract, transform, and analyze data, providing actionable insights to support business decisions (see the Athena sketch after this list).
- Machine Learning and Predictive Analytics: Develop and deploy machine learning models using Amazon SageMaker, applying algorithms that generate predictive and prescriptive insights from large datasets.
- Data Pipeline Development: Design, build, and maintain end-to-end data pipelines using services such as AWS Glue, ensuring efficient data movement and transformation from various sources to target destinations.
- Real-time Data Processing: Implement real-time data processing using Amazon Kinesis, AWS Lambda, and other relevant tools, enabling near-instantaneous data insights and actions (see the Lambda sketch after this list).
- Big Data Technologies: Work with Amazon EMR and related tools to process and analyze large-scale datasets using technologies like Apache Spark and Hadoop (see the PySpark sketch after this list).
- Data Warehousing: Design and optimize data warehousing solutions using Amazon Redshift, ensuring high-performance querying and reporting capabilities.
- Data Quality and Governance: Implement data validation, cleansing, and quality checks using AWS services to ensure the accuracy and reliability of data.
- Dashboard Creation: Develop interactive and visually appealing dashboards using Amazon QuickSight or other relevant tools, enabling stakeholders to easily consume and interpret data.
- Collaboration: Collaborate with cross-functional teams to understand business requirements, provide data-driven insights, and develop solutions that align with organizational goals.
- Cloud Cost Optimization: Monitor and optimize AWS resource utilization to manage costs effectively while maintaining high-performance data processing and analysis capabilities.
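To give a concrete flavor of the data-exploration work, here is a minimal sketch of running an ad-hoc Amazon Athena query with boto3. The region, database, table, and S3 output bucket are hypothetical placeholders, not details of the client's environment.

```python
# Minimal sketch: running an ad-hoc Athena query with boto3.
# The region, database, table, and S3 bucket below are hypothetical.
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

def run_query(sql: str, database: str, output_s3: str):
    """Start an Athena query, poll until it finishes, and return the rows."""
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_s3},
    )["QueryExecutionId"]

    # Poll the execution status until Athena reports a terminal state.
    while True:
        status = athena.get_query_execution(QueryExecutionId=qid)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")

    return athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]

rows = run_query(
    "SELECT region, COUNT(*) AS orders FROM orders GROUP BY region",
    database="analytics",                     # hypothetical database
    output_s3="s3://example-athena-results/", # hypothetical bucket
)
```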
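For the real-time processing responsibility, the following is a minimal sketch of an AWS Lambda handler wired to an Amazon Kinesis event source mapping. The JSON payload shape and the high-value-order rule are illustrative assumptions.

```python
# Minimal sketch: a Lambda handler consuming an Amazon Kinesis stream.
# The payload fields and threshold are hypothetical, for illustration only.
import base64
import json

def handler(event, context):
    """Triggered by a Kinesis event source mapping; processes each record."""
    for record in event["Records"]:
        # Kinesis payloads arrive base64-encoded inside the event.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # Act on the decoded event, e.g. flag large orders in near real time.
        if payload.get("order_total", 0) > 10_000:
            print(f"High-value order detected: {payload.get('order_id')}")
    return {"records_processed": len(event["Records"])}
```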
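And for the big-data work on Amazon EMR, here is a minimal PySpark sketch of the kind of batch aggregation such a role involves. The S3 paths and column names are hypothetical.

```python
# Minimal sketch: a PySpark rollup of the kind typically run on Amazon EMR.
# The S3 paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-order-rollup").getOrCreate()

# Read raw order events from the data lake (hypothetical path).
orders = spark.read.parquet("s3://example-data-lake/orders/")

# Aggregate order counts and revenue per day and region.
daily = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "region")
    .agg(
        F.count("*").alias("order_count"),
        F.sum("order_total").alias("revenue"),
    )
)

# Write the rollup back to the lake, partitioned for cheap downstream queries.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-data-lake/rollups/daily_orders/"
)
```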
Qualifications:
- Bachelor's or advanced degree in Computer Science, Data Science, Engineering, or a related field.
- Strong proficiency in AWS services, including but not limited to Redshift, Athena, Glue, SageMaker, Kinesis, Lambda, EMR, and QuickSight.
- Demonstrated experience in both data analysis and engineering, with a portfolio showcasing impactful data-driven projects.
- Proficiency in programming languages such as Python, R, or Java for data manipulation, analysis, and model development, along with experience deploying components through Terraform.
- Experience with machine learning frameworks and libraries (e.g., TensorFlow, Scikit-learn) and their integration with AWS tools.
- Solid understanding of data warehousing concepts and data modeling, along with SQL proficiency.
- Strong problem-solving skills and the ability to work in a fast-paced, collaborative environment.
- Excellent communication skills to translate technical insights into actionable business recommendations.