10 Cloud Software & Data Engineer Jobs in Houston, TX

  • JPMorgan Chase | Houston, TX | Full Time | $124k-149k (estimate) | 1 Day Ago
  • PsychPlus | Houston, TX | Full Time | $114k-138k (estimate) | 3 Days Ago
  • JPMorgan Chase | Houston, TX | Full Time | $124k-149k (estimate) | 1 Week Ago
  • Confidential | Houston, TX | Full Time | $114k-139k (estimate) | 3 Weeks Ago
  • SLB | Houston, TX | Full Time | $114k-139k (estimate) | 5 Months Ago
  • Sapient Corporation | Houston, TX | Full Time | $104k-130k (estimate) | 5 Months Ago
  • CANONICAL | Houston, TX | Full Time | $123k-147k (estimate) | 2 Months Ago
  • Sysco | Houston, TX | Full Time | $110k-139k (estimate) | 2 Weeks Ago
  • CACI | Houston, TX | Full Time | $114k-136k (estimate) | 4 Days Ago
  • Jade Biz Services | Houston, TX | Full Time | $115k-141k (estimate) | 2 Months Ago

Cloud Software & Data Engineer
Confidential | Houston, TX | Full Time | $114k-139k (estimate) | 3 Weeks Ago

Sorry! This job is no longer available. Please explore the similar jobs listed above.

Confidential is Hiring a Cloud Software & Data Engineer Near Houston, TX

Job Details

A Cloud Software & Data Engineer develops data engineering applications using third-party and in-house frameworks, drawing on a broad set of development skills that cover data engineering and data accessibility. The role owns the complete software lifecycle: analysis, design, development, testing, implementation and support, as well as troubleshooting issues, deploying and upgrading services and their associated data, performance tuning and other maintenance work. This position focuses in particular on data engineering (large-scale data transformation and manipulation, ETL, etc.) and on fine-tuning infrastructure for optimization. The position reports to the software project manager.
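To make the kind of large-scale ETL work described above concrete, here is a minimal PySpark sketch. The paths, column names and aggregation are hypothetical placeholders and are not part of the posting.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical source/target paths and column names, for illustration only.
spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw CSV events.
raw = spark.read.option("header", True).csv("/data/raw/events.csv")

# Transform: parse timestamps, drop incomplete rows, de-duplicate.
cleaned = (
    raw
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .dropna(subset=["account_id"])
    .dropDuplicates(["event_id"])
)

# Aggregate to one row per account per day.
daily = (
    cleaned
    .groupBy(F.to_date("event_ts").alias("event_date"), "account_id")
    .agg(F.count("*").alias("event_count"))
)

# Load: write partitioned Parquet for downstream consumers.
daily.write.mode("overwrite").partitionBy("event_date").parquet("/data/curated/daily_events")

spark.stop()
```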
Responsibilities
  • Work with subject matter experts to clarify requirements and use cases.
  • Turn requirements and user stories into functionality: design, build and maintain efficient, reusable, reliable code for high-quality software and services, with documentation and traceability.
  • Develop server-side services that are elastically scalable and secure by design to support high-volume, high-velocity data processing. Services should be backward and forward compatible to ease deployment.
  • Ensure the solution is deployable, operable, and secure.
  • Write and maintain provisioning, deployment, CI/CD and maintenance scripts for the services they develop.
  • Write unit tests, automation tests and data simulations (a minimal test sketch follows this list).
  • Support, maintain, troubleshoot and fine-tune working cloud environments and the software running within them.
  • Build prototypes, products and systems that meet project quality standards and requirements.
  • Act as an individual contributor, providing technical leadership and documentation to developers and stakeholders.
  • Provide timely corrective actions on all assigned defects and issues.
  • Contribute to the development plan by providing task estimates.
  • Fulfil organizational responsibilities, such as sharing knowledge and experience with other teams and groups.
  • Conduct technical trainings/sessions and write whitepapers, case studies, blogs, etc.
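As a hedged illustration of the unit-testing responsibility above, the following pytest sketch tests a small PySpark transformation on a local Spark session. The function name and sample data are hypothetical, not taken from the posting.

```python
import pytest
from pyspark.sql import SparkSession

def dedupe_events(df):
    """Hypothetical transformation under test: keep one row per event_id."""
    return df.dropDuplicates(["event_id"])

@pytest.fixture(scope="session")
def spark():
    # Small local session so the test suite runs without a cluster.
    session = SparkSession.builder.master("local[1]").appName("unit-tests").getOrCreate()
    yield session
    session.stop()

def test_dedupe_events_removes_duplicate_ids(spark):
    df = spark.createDataFrame(
        [(1, "a"), (1, "a"), (2, "b")],
        ["event_id", "payload"],
    )
    assert dedupe_events(df).count() == 2
```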
Requirements
  • Bachelor's degree or higher in Computer Science or a related field, with a minimum of 5 years of working experience.
  • 5 years of software development experience with Big Data technologies (Spark, databases and data lakes).
  • Experience with SQL, NoSQL, JSON, CSV and Parquet data formats.
  • Most importantly: hands-on experience building scalable data pipelines using Python and PySpark.
  • Advanced knowledge of large-scale parallel computing engines (Spark): provisioning, deployment, development of computing pipelines, operation and support, and performance tuning (3 years).
  • Good experience building and tuning Spark pipelines in Python.
  • Good programming experience with core Python.
  • Ability to design, build and maintain data processing pipelines in Apache NiFi and Spark jobs.
  • Extensive knowledge of data structures, patterns and algorithms (5 years).
  • Expertise with several back-end development languages and their associated frameworks, such as Python (3 years).
  • In-depth knowledge of application and cloud networking and security, as well as related development best practices and patterns (3 years).
  • Advanced knowledge of containerization and virtualization (Kubernetes), as well as scaling clusters and debugging issues on high-volume/high-velocity data jobs, and related best practices (3 years).
  • Good experience with Spark and Databricks on Kubernetes.
  • Cloud platform knowledge: Azure public cloud expertise (3 years).
  • Advanced knowledge of DevOps, CI/CD and cloud deployment practices (5 years).
  • Advanced skills in setting up and operating databases (relational and non-relational) (3 years).
  • Experience in application profiling, bottleneck analysis and performance tuning (a brief tuning sketch follows this list).
  • Effective communication and cross-functional skills.
  • Problem-solving skills; a team player who is adaptable and works quickly.
  • Prior experience working on highly Agile projects.
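As a rough sketch of the profiling and performance-tuning work referenced above, the snippet below sets a few common Spark knobs (shuffle partitions, adaptive execution, executor memory) and then repartitions and caches a DataFrame that is reused by several aggregations. The specific values, path and column names are assumptions for illustration; real settings depend on cluster size and data volume.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("tuning-example")
    # Illustrative values only; tune to the actual cluster and workload.
    .config("spark.sql.shuffle.partitions", "400")
    .config("spark.sql.adaptive.enabled", "true")
    .config("spark.executor.memory", "8g")
    .getOrCreate()
)

# Hypothetical dataset reused by several downstream aggregations.
events = spark.read.parquet("/data/curated/daily_events")

# Repartition on the grouping key to reduce shuffle skew, then cache the
# result so repeated aggregations do not re-read and re-shuffle the data.
events = events.repartition("account_id").cache()
events.count()  # materialize the cache before reuse

by_account = events.groupBy("account_id").sum("event_count")
by_day = events.groupBy("event_date").sum("event_count")
```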
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.

Job Summary

JOB TYPE: Full Time
SALARY: $114k-139k (estimate)
POST DATE: 07/13/2024
EXPIRATION DATE: 07/14/2024
WEBSITE: michaelmabraham.com
SIZE: <25

