Data Engineer

Delineate LLC
Indianapolis, IN | Full Time
POSTED ON 12/4/2024
AVAILABLE BEFORE 2/3/2025
Grow With Us: Join Our Team of Innovators

At Delineate, we're all about curiosity, collaboration, and making a real impact. We value diverse perspectives and foster an environment where everyone's ideas contribute to success. Our team works together with energy and purpose, tackling complex challenges to deliver practical, lasting solutions. From data engineering to program and policy evaluation, we tailor our services to fit each client's unique needs, helping them achieve meaningful progress. At Delineate, you'll be part of a fast-moving, supportive team that's passionate about driving real change.

Job overview

Job Title: Data Engineer II
Date Listed: 11/27/2024
Job Location: Indianapolis, IN

Description: Our data engineers skillfully navigate the intersection of technology, business dynamics, and data variability. The role demands a balance between technical expertise and strategic vision, ensuring that data solutions align with industry standards while fitting each client's goals and organizational analytics maturity.

The Data Engineer is a key technical contributor responsible for designing, building, and maintaining robust data infrastructure and pipelines that enable seamless data integration, transformation, and analysis. This role involves optimizing cloud resource usage, ensuring data quality and governance, and implementing scalable, efficient data solutions aligned with business objectives. As a member of the data services team, the Data Engineer collaboratively designs database schemas, develops ETL workflows, and ensures compliance with data privacy and regulatory standards. They proactively diagnose and resolve complex technical issues, optimize queries, and contribute to process improvements.

Status: Part-time or full-time considered. Hybrid schedule with remote consideration.

Salary: Competitive; determined based on the candidate's experience, expertise, and qualifications.

Benefits for Full-Time Employees
  • Paid Holidays
  • Paid Time Off
  • Employer Retirement Contributions
  • Health Savings Account Contributions
  • Health, Vision, and Dental Insurance Coverage
  • Profit sharing and/or annual bonuses dependent on company performance
  • Flexible work arrangements including remote work
  • Professional development
  • Access to life insurance benefits, short- and long-term disability insurance, and an employee assistance program
Education

Bachelor's degree in Data Science, Computer Science, Software Engineering, Information Systems, Mathematics, or a related technical field is required. Equivalent practical experience in data engineering or related technical roles may be considered in lieu of formal education.

While certifications are not required for this role, they can demonstrate a strong foundation in key technologies and platforms, showcasing your commitment to professional growth and technical expertise. Example certifications include the following:

  • Microsoft DP-900: Azure Data Fundamentals
  • Databricks Certified Data Engineer Associate or Professional
  • Microsoft Azure Data Engineer Associate
  • AWS Certified Data Engineer Associate
  • Snowflake SnowPro Core Certification
  • Google Professional Data Engineer
Technical skills and knowledge you bring to the role

The ideal candidate is highly skilled in Python, SQL, and cloud-based data storage technologies, with a strong focus on automation and continuous learning. They take ownership of tasks within cross-team initiatives, mentor more junior team members, and recommend innovative tools and solutions to enhance performance. This role is integral to driving the scalability, reliability, and efficiency of our clients' data systems.

  • Programming Languages: Proficiency in Python and SQL for data processing and query optimization. Experience with PySpark and one or more additional languages like Scala, Java, or Bash for managing data workflows.

  • Data Storage and Databases: Strong knowledge of relational databases (e.g., PostgreSQL, MySQL, Oracle). Experience with modern data warehouses such as Snowflake, Amazon Redshift, or Google BigQuery. Familiarity with data lakes (e.g., Amazon S3, Azure Data Lake) and lakehouse solutions (e.g., Delta Lake, Apache Iceberg).

  • Big Data Frameworks: Hands-on experience with Apache Spark for distributed data processing, including via Databricks (a brief PySpark sketch follows this list). Knowledge of Apache Kafka or similar tools for real-time data streaming.

  • Cloud Platforms: Experience with cloud technologies such as AWS (S3, Glue, Redshift), Microsoft Azure (Data Factory, Synapse), or Google Cloud Platform (BigQuery, Dataflow).

  • Data Governance and Security: Understanding of data governance frameworks, compliance requirements (e.g., GDPR, HIPAA), and tools such as Unity Catalog, Apache Atlas, and Great Expectations.

  • Pipeline Monitoring and Optimization: Experience with workflow orchestration and monitoring tools, such as Apache Airflow, for tracking pipeline health and performance. Ability to optimize and troubleshoot data pipelines for scalability and efficiency.
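
For a concrete flavor of the Spark and lakehouse skills above, here is a minimal PySpark sketch: it reads raw files from a cloud data lake, applies basic cleansing, and writes a partitioned Delta table. The bucket paths and column names are hypothetical, and the snippet assumes a Spark environment with Delta Lake available.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

# Read raw CSV files from a cloud data lake (hypothetical path)
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

# Basic cleansing: cast types, drop duplicates, filter out null keys
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])
       .filter(F.col("order_id").isNotNull())
)

# Write a lakehouse table in Delta format, partitioned for query performance
(
    orders.withColumn("order_date", F.to_date("order_ts"))
          .write.format("delta")
          .mode("overwrite")
          .partitionBy("order_date")
          .save("s3://example-bucket/curated/orders/")
)
```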
Key responsibilities

In this role, you will work alongside the Delineate team to:

  • Develop database schemas for moderately complex data models, optimizing for query performance. Design and implement data models utilizing concepts like dimensional modeling (Kimball) and normalized data structures (Inmon) to store data for analytical reporting in alignment with business requirements.

  • Design and implement automated data validation and quality checks to ensure data accuracy, consistency, and anomaly detection. Collaborate with cross-functional teams to maintain data integrity across systems and pipelines.

  • Implement and enforce data lifecycle management practices, including data retention, archiving, and deletion. Ensure policies are applied consistently across platforms.

  • Contribute to data integration strategies by designing and optimizing ETL workflows to integrate diverse data sources. Implement data transformation processes to improve data usability and streamline integration.

  • Develop and automate scalable data pipelines, ensuring continuous and reliable data flow. Optimize pipeline monitoring processes to quickly detect and address failures or delays (see the orchestration sketch following this list).

  • Enhance ETL workflow performance through optimization techniques and independently refine code to improve efficiency and resource utilization.

  • Conduct thorough root cause analyses for moderately complex issues, identifying underlying problems and proposing effective solutions to prevent recurrence.

  • Lead the resolution of moderately complex incidents, ensuring swift recovery and minimal disruption to operations.

  • Write and optimize efficient SQL queries to improve performance on moderately complex datasets and ensure data processing efficiency.

  • Apply analytical skills to independently tackle and resolve moderately complex technical challenges, delivering innovative and practical solutions.

  • Write modular, performant data processing scripts in Python, ensuring efficient, repeatable, and traceable data transformations and retrieval.

  • Maintain version control practices (e.g., Git) while harnessing DevOps principles to automate build, test, and deployment processes, ensuring continuous integration across development and production environments.

  • Conduct thorough code reviews to uphold standards and best practices. Develop scripts to automate workflows, enhancing efficiency and reducing manual errors.

  • Optimize storage usage in cloud environments for cost-efficiency and performance. Maintain and improve data warehousing systems, ensuring query performance and operational reliability.

  • Ensure the efficient operation of data lakes and integrate them with data processing tools. Optimize storage and processing within a data lakehouse architecture for advanced analytics.

  • Evaluate and recommend new tools and technologies to improve team efficiency and workflow. Suggest process improvements and embrace changes in methodologies and tools.

  • Incorporate data governance best practices within workflows, maintaining high standards of data privacy and regulatory compliance across all tasks and projects.

  • Ensure security measures are consistently applied across all data systems, proactively identifying and addressing areas where additional measures are required. Implement improvements to enhance data protection.
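
To make the pipeline-development and monitoring responsibilities above more tangible, here is a minimal orchestration sketch, assuming Apache Airflow 2.4+. The DAG id, task names, and stub callables are hypothetical stand-ins for real extract/transform/load logic.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    """Pull incremental records from a source system (stub for illustration)."""


def transform():
    """Apply cleansing and business rules (stub for illustration)."""


def load():
    """Write curated results to the warehouse (stub for illustration)."""


default_args = {
    "retries": 2,                         # automatically re-run transient failures
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="orders_daily_etl",            # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                    # `schedule` kwarg assumes Airflow 2.4+
    catchup=False,
    default_args=default_args,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency chain; Airflow's retries and task-level logs provide
    # the failure detection and recovery described above.
    t_extract >> t_transform >> t_load
```

Alerting on failures or delays could be layered on with Airflow's on_failure_callback or task-level SLA settings.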
Delineate differentiators

Problem solving is at the heart of who we are: the ability to break down complex challenges, apply evidence-based practices, and develop practical solutions is central to every project. All Delineate employees demonstrate and continuously grow in the following areas:

  • Adaptive Problem Solver: Demonstrates resilience and flexibility in tackling complex, ambiguous issues. Embraces change as an integral part of the problem-solving process.

  • Evidence-Based Solutions Steward: Approaches problem-solving with a foundation in systematic research, rigorous data analysis, and industry best practices. Builds strong partnerships between business and technical teams to ensure that solutions triangulate evidence, experience, and organizational objectives.

  • Methodical Problem Solver: Assimilates data, facts, and logical reasoning to make rational, informed decisions, even under tight timelines with complex challenges. Approaches each situation with a structured, analytical mindset and critical, inquisitive thinking, testing hypotheses and assumptions to reach sound conclusions.

  • Data Translator: Translates insights into practical and accessible solutions that embed into client processes, driving measured impact and value.

  • Client-Focused Professional: Builds and maintains strong client relationships, demonstrating professionalism, empathy, and altruism while delivering high-quality solutions that exceed client expectations. Places client interests first, focusing on early and continuous value.

  • Mentoring and Development: Actively seeks opportunities for growth and contributes to others' development through feedback and mentorship. Provides mentorship to engineers by offering guidance on technical challenges, conducting code reviews, and delivering constructive feedback to enhance the quality of their work.

  • Strategic Thinker with Attention to Detail: Uses data, logical reasoning, and interpersonal skills to influence others. Builds a persuasive case for ideas or recommendations, grounded in analysis and aligned with business goals.

  • Goal-Oriented Planner: Takes ownership of responsibilities, demonstrating creative adaptability and the ability to plan and execute detailed project strategies. Defines clear goals and outcomes, delivering measurable results to clients and stakeholders.

Salary: $109,400 - $131,400
