HatchPros Inc is Hiring a GEN AI Data Platform Engineer Near Hartford, CT
Visa status: USC/GC/GC-EAD candidates only. Local candidates required: within 1 hour of Hartford, CT. The client needs a hands-on engineer, not an architect; note that claims of LLM modeling experience dating back to 2019 will be viewed skeptically, as the technology has only been in wide use for 2-3 years. Candidates should be located in Hartford, CT (preferred) or Charlotte, NC. The second interview will be in person.

This role requires versatility and expertise across a wide range of skills. Someone with a diverse background and an engineer at heart will fit into this role seamlessly. The Generative AI team comprises multiple cross-functional groups that work in unison to ensure a sound transition from research activities to scalable solutions. You will collaborate closely with our cloud, security, infrastructure, enterprise architecture, and data science teams to conceive and execute essential functionality.

Responsibilities
Design and build fault-tolerant infrastructure to support the Generative AI reference architecture (RAG, summarization, agents, etc.).
Ensure code is delivered without vulnerabilities by enforcing engineering practices, code scanning, etc.
Partner with our shared-service teams (Architecture, Cloud, Security, etc.) to design and implement platform solutions.
Collaborate with the data science team to develop a self-service internal developer platform for Generative AI.
Design and build the data ingestion pipeline for fine-tuning LLMs.
Create templates (Architecture as Code) implementing the reference architecture's application topology.
Build a human-in-the-loop (HITL) feedback system for supervised fine-tuning.
Qualifications
Bachelor's degree in Computer Science, Computer Engineering, or a related technical field.
4 years of experience with AWS cloud and 4 years developing using Python.
At least 8 years of experience designing and building data-intensive solutions using distributed computing.
8 years of experience building platform and/or infrastructure solutions for enterprises.
Experience with CI/CD pipelines, Automated Testing, Automated Deployments, Agile methodologies, Unit Testing and Integration Testing tools.
Experience building scalable serverless applications (real-time/batch) on the AWS stack (Lambda, Step Functions).
Knowledge of distributed NoSQL database systems.
Experience with data engineering, HPCs, vector embeddings, and hybrid/semantic search technologies.
Experience with AWS OpenSearch, Step Functions/Lambda, API Gateway, and ECS/Docker is a plus.
Proficiency in customization techniques across various stages of the RAG pipeline, including model fine-tuning, retrieval re-ranking, and hierarchical navigable small world (HNSW) graph indexing, is a plus.