What are the responsibilities and job description for the Business Intelligence Developer position at Saxon Global?
Business Intelligence Developer
Saxon Global is hiring a skilled Business Intelligence Developer to join our team. As a Business Intelligence Developer, you will be responsible for designing and implementing data storage solutions using Apache Hadoop and other big data technologies. You will also play a key role in developing business intelligence solutions that help our customers make informed business decisions.
Your primary responsibility will be to design and implement data storage and analytics solutions using data lake patterns. To achieve this, you will draw on your expertise in data analysis and technology, as well as your passion for helping people leverage technology to transform their business processes and analytics. You will work closely with cross-functional teams to analyze, design, and develop data storage and analytics solutions, ensuring that our solutions meet the highest standards of quality and performance.
You will be responsible for defining and executing ETL processes using Apache Spark on Hadoop and other relevant tools: determining the appropriate translations and validations between source data and target databases, implementing business logic to cleanse and transform data, and designing and implementing error-handling procedures. You will also work closely with data architects to develop project documentation and storage standards; monitor performance, troubleshoot, and tune ETL processes using tools such as those in the AWS ecosystem; create and automate ETL mappings that move loan-level data from source applications to target applications; and execute the end-to-end implementation of the underlying data ingestion workflows.
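For illustration only, here is a minimal PySpark sketch of the kind of ETL pipeline this paragraph describes: extract raw loan-level records, cleanse and validate them, quarantine records that fail validation, and load the result into a data lake target. The bucket paths, column names, and validation rules are hypothetical placeholders, not details of the actual position or its systems.

```python
# Minimal PySpark ETL sketch. All paths, column names, and validation
# rules below are hypothetical and shown only for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("loan-etl-sketch").getOrCreate()

# Extract: read raw source data (hypothetical S3 location).
raw = spark.read.option("header", "true").csv("s3://example-bucket/raw/loans/")

# Transform: apply business logic to cleanse and type the data.
cleansed = (
    raw
    .withColumn("loan_amount", F.col("loan_amount").cast("double"))
    .withColumn("origination_date", F.to_date("origination_date", "yyyy-MM-dd"))
)

# Error handling: route records that fail validation to a quarantine
# path instead of failing the whole job.
valid = cleansed.filter(F.col("loan_amount").isNotNull() & (F.col("loan_amount") > 0))
invalid = cleansed.subtract(valid)
invalid.write.mode("append").parquet("s3://example-bucket/quarantine/loans/")

# Load: write validated data to the curated zone of the data lake,
# partitioned for downstream analytics.
valid.write.mode("overwrite").partitionBy("origination_date").parquet(
    "s3://example-bucket/curated/loans/"
)

spark.stop()
```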
To succeed in this role, you must have at least 5 years of experience developing in Java and Python, as well as a Bachelor's degree in statistics, data science, or a related field. You should also have experience working with a variety of databases and an understanding of data concepts, including data warehousing and data lake patterns. Furthermore, you must have 3 years of experience in data storage/Hadoop platform implementation, including hands-on experience implementing and performance-tuning Hadoop/Spark workloads, specifically on Amazon Elastic MapReduce (EMR), and deploying AWS services in a variety of distributed computing environments.
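As a rough sketch of the EMR experience described above, the following example launches a transient EMR cluster and submits a Spark job with boto3. The cluster sizing, release label, IAM role names, and S3 paths are assumptions for illustration, not configuration used by Saxon Global.

```python
# Hypothetical sketch: running a Spark ETL job on a transient EMR cluster.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="loan-etl-cluster",                      # placeholder name
    ReleaseLabel="emr-6.9.0",                     # assumed EMR release
    Applications=[{"Name": "Spark"}, {"Name": "Hadoop"}],
    Instances={
        "InstanceGroups": [
            {"Name": "Master", "InstanceRole": "MASTER",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "Core", "InstanceRole": "CORE",
             "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        # Terminate the cluster automatically once the step finishes.
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    Steps=[
        {
            "Name": "loan-etl",
            "ActionOnFailure": "TERMINATE_CLUSTER",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": ["spark-submit", "s3://example-bucket/jobs/loan_etl.py"],
            },
        }
    ],
    JobFlowRole="EMR_EC2_DefaultRole",            # assumed default roles
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])
```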