Dir & Data Engineer - GE06BE
We're determined to make a difference and are proud to be an insurance company that goes well beyond coverages and policies. Working here means having every opportunity to achieve your goals - and to help others accomplish theirs, too. Join our team as we help shape the future.
The Hartford is at the forefront of data-driven innovation. Our Data and AI Engineering teams are dedicated to building robust and scalable data pipelines, AI/ML pipelines, and agentic solutions that power our enterprise. We're excited about the transformative potential of Generative AI and are seeking a dynamic Transformation Lead to champion its adoption and maximize its impact on our engineering practices.
As the Principal Data Engineer - Transformation Lead, you will be a key driver in shaping the future of our data and AI engineering tool bench and workflows. You will lead the strategy for integrating GenAI into our development workflows, fostering a culture of innovation, and empowering our engineers to achieve unprecedented productivity and ease of use. You will collaborate closely with data engineers, platform engineers, data scientists, and architects to define and implement best practices, tools, and processes for leveraging GenAI in data pipeline development, AI/ML pipeline creation, and the building of agentic solutions.
This role will have a hybrid work schedule, with the expectation of working in an office (Columbus, OH; Chicago, IL; Hartford, CT; or Charlotte, NC) 3 days a week. Must be eligible to work in the US without company sponsorship.
Primary Job Responsibilities
- GenAI Strategy & Vision for Engineering: Develop and execute a comprehensive strategy for integrating generative capabilities and advanced automation into our data and AI engineering workflows.
- Developer Advocacy & Community Building: Champion the adoption of GenAI tools and techniques, fostering a vibrant community of practice within the data engineering teams.
- Tool bench Strategy: Collaborate with engineers, scientists, and architects to define and evolve the data and AI engineering tool bench, focusing on GenAI integration.
- Engineering Experience Transformation: Identify and address pain points in the engineering experience, leveraging GenAI to streamline development, testing, integration, and deployment processes.
- Best Practices & Standards: Define and promote best practices for using GenAI in data pipeline development, AI/ML pipeline creation, and agentic solution building.
- Knowledge Sharing & Training: Develop and deliver training programs and resources to educate engineers on GenAI tools and techniques.
- Collaboration & Communication: Serve as a liaison between data engineers, platform engineers, data scientists, and architects, ensuring alignment and effective communication.
- Feedback Loop & Measurement: Establish and manage a feedback loop to continuously improve the adoption and utilization of generative capabilities, along with metrics to measure impact and guide refinement.
- Code and Solution Documentation: Contribute to the development and maintenance of comprehensive documentation for optimized engineering workflows with relevant examples and quick starts.
- Technology Evaluation and Adoption: Partner with Architecture, Data Science, and Engineering Leadership to evaluate and recommend new technologies and trends to enhance our engineering capabilities, drive engineering productivity and ease of use, and facilitate innovation.
- Mentorship and Evangelism: Mentor data and AI engineers and collaborate with other architects and engineers to promote the benefits of embracing standard tooling, leveraging automation and generative augmentation, and continuously pushing to advance our portfolio of data and AI capabilities.
- Stay Current: Continuously evaluate and recommend new data technologies and trends to improve our data capabilities.
Skills
- Expert understanding of cloud platforms (e.g., AWS, GCP, Snowflake).
- Extensive knowledge of Informatica IDMC for data integration and transformation.
- Proficiency in Python and PySpark in a data engineering context, with familiarity with data engineering tools such as Snowflake and Informatica IDMC.
- Experience with CI/CD automation (e.g., GitHub).
- Excellent communication, collaboration, and presentation skills.
- Passion for developer advocacy and community building.
- Ability to translate complex technical concepts into clear and concise language.
- Not afraid to be hands-on: you will write sample code, as well as participate in developer forums and support queues to diagnose common problems and friction points that engineers encounter.
Education, Experience, Certifications and Licenses
- Bachelor's or Master's degree in Computer Science, Information Systems, a related field, or equivalent work experience.
- 10+ years of experience in software development with 5+ in data engineering/pipeline development.
Compensation
The listed annualized base pay range is primarily based on analysis of similar positions in the external market. Actual base pay could vary and may be above or below the listed range based on factors including but not limited to performance, proficiency and demonstration of competencies required for the role. The base pay is just one component of The Hartford's total compensation package for employees. Other rewards may include short-term or annual bonuses, long-term incentives, and on-the-spot recognition. The annualized base pay range for this role is:
$145,440 - $218,160
Equal Opportunity Employer/Females/Minorities/Veterans/Disability/Sexual Orientation/Gender Identity or Expression/Religion/Age
About Us | Culture & Employee Insights | Diversity, Equity and Inclusion | Benefits