What are the responsibilities and job description for the Senior Platform Data Architect position at iBusiness Funding?
About iBusiness Funding
iBusiness Funding is a leader in providing innovative Software as a Service (SaaS) solutions for banks and lenders, with a specialization in SBA lending. We build scalable lending platforms that streamline the business lending process, allowing lenders to efficiently deliver capital to small and medium-sized businesses.
To date, we’ve processed over $7 billion in SBA loans and handle more than 1,000 business loan applications daily. Our team is driven by our core values of innovation, integrity, enjoyment, and family.
Position Description:
Do you have deep expertise in leading the end-to-end development of large datasets across a variety of platforms? Are you great at designing advanced data systems, driving strategic initiatives, and redefining best practices with a cloud-based approach to scalability and automation? Join iBusiness Funding as a Platform Data Architect, where you will take a leadership role in delivering data-driven solutions for our internal and external clients.
Your expertise will drive innovation, mentor team members, and shape the data strategy for impactful business outcomes. Partnering with product and business/operations teams, you will take a strategic approach to solving business challenges by driving scalable and innovative solutions. As a senior team member, you will lead efforts to align data strategies with organizational goals, bringing your expertise in data to play a pivotal role in advancing our company’s data-driven initiatives.
Key Job Responsibilities:
In this role, you will have the opportunity to display and develop your skills in the following areas:
• Lead the evolution of our data framework and architecture to support our platform’s growth and our customers’ reporting needs
• Lead a team of data engineers and database administrators to build out and maintain performant, fault-tolerant database systems and infrastructure, using best practices such as sharding/partitioning, replicas, compression, and parallelization
• Advise and collaborate with our software engineers to build and maintain optimal data models that align with our data vision while supporting product and business requirements
• Develop, maintain, and support ETL pipelines with robust monitoring, alarming, and fault tolerance at scale using cloud-native technologies such as AWS Glue, AWS Database Migration Service, and Lambda
• Design data structures that support heavy distributed querying while incorporating data security best practices such as column/row level security
• Identify and drive opportunities to continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for stakeholders.
• Provide thought leadership and collaborate cross-functionally with teams such as risk, analytics, and product to design and develop innovative, data-driven products and tools.
What You Will Need:
• Bachelor's or Master’s degree in Computer Science, Engineering, Mathematics, or a related technical discipline
• 5 years of data engineering experience at a large publicly traded company
• 5 years of experience working with large-scale data repositories, including data lakes and dimensional data warehouses.
• 5 years of experience working with multi-tenant data architectures
• 5 years of experience working with AWS database and data lake technologies such as Athena, Aurora (RDS), Redshift, and Neptune
• Experience building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets.
• Strong experience with data security and privacy best practices, including encryption, access controls, and compliance requirements.
• Experience leading and mentoring teams and ensuring adherence to coding standards and best practices.
• Experience leading the end-to-end lifecycle of highly available and distributed data systems in a production environment.
• Candidates must be authorized to work in the U.S.
What Would Be Nice To Have:
• Ability to write high quality, maintainable, and robust code, often in SQL and Python
• Experience with AWS Lake Formation and big data technologies like EMR
• Experience with data streaming technologies such as Firehose and Kinesis
• Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)
• AWS Database Specialty Certification
• Hands-on experience in any scripting language (Bash, C#, Java, Python, TypeScript)
• Hands-on experience using ETL tools (SSIS, Alteryx, Talend)
• Knowledge of software engineering best practices across the development lifecycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations
Conclusion:
This job description is intended to convey information essential to understanding the scope of the job and the general nature and level of work performed by job holders within this job. This job description is not intended to be an exhaustive list of qualifications, skills, efforts, duties, responsibilities, or working conditions associated with the position.
The company is an equal opportunity employer and will consider all applications without regard to race, sex, age, color, religion, national origin, veteran status, disability, genetic information, or any other characteristic protected by law.