What are the responsibilities and job description for the Agentforce Data Engineer position at Apolis?
Agentforce Data Engineer
Work location: San Francisco, Seattle, Dallas, Indianapolis, or New York (other Salesforce office cities possible). Hybrid: 3 days per week in office (US).
JOB DESCRIPTION:
Must have: dbt (data build tool) experience
Key Skills:
• Develop dbt ETL pipelines for data ingestion and transformation (a minimal example follows this list).
• Maintain, deploy, and version-control the ETL process.
• Use Git-based CI/CD for DevOps.
• Actively develop, enhance, and maintain data pipelines and workflows for marketing data and metrics.
• Design and develop simple, repeatable, and reusable data automation frameworks.
• Work and collaborate with global teams across North America, EMEA, and APAC.
• Help build proof-of-concept (POC) solutions for new marketing metrics that drive effective decision making.
• Own end-to-end data management activities, including but not limited to identifying fields, mapping data lineage and integration, performing data quality checks, analyzing data, and presenting findings.
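As a rough sketch of the first bullet, a dbt staging model might look like the following. The source name (salesforce_raw), table, and column names are hypothetical, not taken from this posting.

-- models/staging/stg_chat_transcripts.sql
-- Minimal dbt staging model: select from a declared raw source and
-- rename/cast columns so downstream models see a clean interface.
-- All source, table, and column names here are illustrative.

with source as (

    select * from {{ source('salesforce_raw', 'chat_transcript') }}

),

renamed as (

    select
        id                            as transcript_id,
        live_chat_visitor_id          as visitor_id,
        body                          as transcript_body,
        cast(start_time as timestamp) as started_at,
        cast(end_time as timestamp)   as ended_at
    from source

)

select * from renamed

Keeping staging models to renames and casts, and doing real transformation downstream, is a common dbt convention that keeps each model small and testable.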
Key Responsibilities & Scope:
• Build metrics from unstructured data (chat transcripts, AI Agent product logs, agent interaction texts) to power conversational analytics.
• Build metrics from structured data (Salesforce Data Cloud, Snowflake tables, Salesforce Sales Cloud objects); see the metrics sketch after this list.
• Design, develop, and maintain data transformation pipelines using dbt to turn raw data into structured, usable datasets for analytics and reporting.
• Collaborate with cross-functional teams, including data analysts, software engineers, and product managers, to translate business requirements into data-driven solutions that provide actionable insights and enhance customer experiences.
• Optimize data storage, retrieval, and processing to support high-performance analytics, ensuring the system scales effectively with increasing data volumes.
• Ensure data quality and consistency by implementing automated monitoring, alerting, and validation processes (see the test sketch after this list).
• Automate and streamline data operations to reduce time-to-insight and enhance data reliability.
• Define and uphold data engineering best practices, including code quality, version control, and testing standards.
• Support and strengthen data security, privacy, and compliance across all data engineering initiatives to meet industry standards.
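To illustrate metric-building from structured data, a minimal dbt mart model could aggregate the staged transcripts into daily conversation metrics. The model name, metric definitions, and the Snowflake date functions below are assumptions for illustration.

-- models/marts/agent_conversation_metrics.sql
-- Illustrative mart model: roll staged chat transcripts up into
-- daily conversation metrics. Uses Snowflake's date_trunc/datediff.

select
    date_trunc('day', started_at)                 as conversation_date,
    count(*)                                      as total_conversations,
    avg(datediff('second', started_at, ended_at)) as avg_handle_time_seconds
from {{ ref('stg_chat_transcripts') }}
group by 1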
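For the data quality bullet, dbt supports generic tests (such as not_null and unique, declared in a schema.yml) as well as singular tests: plain SQL files under tests/ that fail when they return any rows. A minimal singular test against the hypothetical staging model above:

-- tests/assert_transcripts_have_ids.sql
-- Singular dbt test: dbt fails this test if the query returns rows,
-- so we select the violations (transcripts missing an id).

select *
from {{ ref('stg_chat_transcripts') }}
where transcript_id is null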
Salary: $117,200 - $140,700