Principal Software Engineer - IE06GE
We're determined to make a difference and are proud to be an insurance company that goes well beyond coverages and policies. Working here means having every opportunity to achieve your goals - and to help others accomplish theirs, too. Join our team as we help shape the future.
At The Hartford, we are seeking a Principal GenAI Engineer who will be responsible for building our GenAIOps internal developer platform to accelerate the design, development, and deployment of GenAI use cases and to drive innovation at scale.
When you choose to be part of our team, you open the door to opportunities for personal and professional growth, as well as the chance to empower others in reaching their aspirations. You will help bring the transformative power of Generative AI to reimagine the 'art of the possible,' serve our internal customers, and transform our businesses.
This team is dedicated to Generative AI platform engineering. We are looking for an experienced Principal Platform Engineer to help us build the foundation of our Generative AI capability. You will work on a wide range of initiatives, whether that's building Responsible AI guardrails, building an LLM observability stack, tuning a RAG pipeline, working with the DevSecOps team to build the CI/CD pipeline, designing a Generative AI infrastructure that conforms to our strict security standards, or working with the data science team to improve the accuracy of LLM models.
This role requires versatility and expertise across a wide range of skills. Someone with a diverse background and experience who is an engineer at heart will fit into this role seamlessly.
This role will have a hybrid work arrangement, with the expectation of working in an office location (Charlotte, NC; Chicago, IL; Hartford, CT; Columbus, OH; Frisco, TX) three days a week (Tuesday through Thursday). Candidates must be authorized to work in the US without company sponsorship. The company will not support the STEM OPT I-983 Training Plan endorsement for this position.
Responsibilities:
- Design and build fault-tolerant solutions to support the Generative AI reference architectures (RAG, summarization, agents, etc.).
- Ensure code is delivered without vulnerabilities by enforcing engineering practices, code scanning, etc.
- Build and maintain IaC scripts (Terraform/CloudFormation) and CI/CD pipelines (Jenkins, CodePipeline, uDeploy, and GitHub Actions).
- Partner with our shared service teams, such as Architecture, Cloud, and Security, to design and implement platform solutions.
- Collaborate with the data science team to develop a self-service internal developer platform for Generative AI.
- Design and set up guardrails both to ensure responsible use of this technology and to prevent adversarial attacks and jailbreaks.
- Build the observability stack for monitoring, logging, experiment tracking, and replays.
- Design and build LLM agentic workflows integrating function calling, planning, and self-reflection to build our next-generation cognitive architecture.
- Create templates (architecture as code) that implement the reference architecture application topologies.
- Build a human-in-the-loop (HITL) feedback system for supervised fine-tuning.
Qualifications:
- Bachelor's degree in Computer Science, Computer Engineering, or a technical field.
- 10+ years of experience with AWS cloud.
- Extensive programming experience with Python and TypeScript.
- At least 8 years of experience designing and building data-intensive solutions using distributed computing.
- 10+ years building and shipping software and/or platform infrastructure solutions for enterprises.
- Experience with CI/CD pipelines, Automated Testing, Automated Deployments, Agile methodologies, Unit Testing and Integration Testing tools.
- Experience building scalable serverless applications (real-time/batch) on the AWS stack (Lambda and Step Functions).
- Knowledge of distributed NoSQL database systems.
- Experience with data engineering, ETL technology, and conversational UX is a plus.
- Experience with HPCs, vector embeddings, and hybrid/semantic search technologies.
- Experience with AWS OpenSearch, Step Functions/Lambda, SageMaker, API Gateway, and ECS/Docker is a plus.
- Proficiency in customization techniques across various stages of the RAG pipeline, including model fine-tuning, retrieval re-ranking, hybrid search, and multimodal RAG, is a plus.
- Strong proficiency in embeddings, ANN/KNN, vector stores, quantization, database optimization, and performance tuning.
- Experience building agentic systems using frameworks like LangGraph and CrewAI is a plus.
- Experience with LLM orchestration frameworks such as LangChain, LangSmith, LangGraph, and LlamaIndex.
- Basic understanding of Natural Language Processing, vector space models and Deep Learning.
- Excellent problem-solving skills and the ability to work in a collaborative team environment.
- Excellent communication skills.
Compensation
The listed annualized base pay range is primarily based on analysis of similar positions in the external market. Actual base pay could vary and may be above or below the listed range based on factors including but not limited to performance, proficiency and demonstration of competencies required for the role. The base pay is just one component of The Hartford's total compensation package for employees. Other rewards may include short-term or annual bonuses, long-term incentives, and on-the-spot recognition. The annualized base pay range for this role is:
$166,560 - $249,840
Equal Opportunity Employer/Females/Minorities/Veterans/Disability/Sexual Orientation/Gender Identity or Expression/Religion/Age