Description and Requirements
At Healthfirst, we lead the way in securing generative AI (GenAI) and cloud-based solutions, ensuring our AI-driven applications remain secure and resilient against emerging threats. As a healthcare technology leader, we integrate security as a key enabler of innovation. Our AppSec team is growing, and we are seeking a GenAI Application Security Engineer to define security standards, drive strategic initiatives, and strengthen protections for AI applications. As the GenAI Application Security Engineer, you will secure Healthfirst's AI-driven solutions, working with technologies such as AWS Bedrock. Collaborating with engineering, data science, and product teams across the organization, you will integrate security best practices into the AI development lifecycle. This highly visible leadership role influences AI security at scale and shapes the security landscape for next-generation AI systems.
Duties and Responsibilities:
- Drive the strategic direction of Secure AI Development programs, embedding security into the AI ecosystem.
- Advise senior executives, engineering leaders, and stakeholders on AI/ML security risks and mitigation strategies.
- Lead security assessments, including threat modeling, risk assessments, and security architecture reviews for GenAI platforms and cloud infrastructure, with a focus on AWS Bedrock.
- Develop and implement security frameworks tailored to AI/ML systems, addressing risks like model poisoning, adversarial AI, and data privacy threats.
- Define security best practices for AI model development, deployment, and monitoring to ensure resilience against emerging threats.
- Establish security monitoring and automation for GenAI applications, enabling scalable, proactive threat detection.
- Conduct secure code reviews, penetration testing, and vulnerability assessments to identify and mitigate AI-specific security risks.
- Develop security policies and governance structures aligned with industry regulations (e.g., HIPAA, PCI) and ethical AI standards pertinent to Healthfirst.
- Mentor and develop engineers, fostering a security-first mindset across engineering and product teams.
- Stay ahead of evolving threats, AI-specific security risks, and industry best practices.
- Engage with internal and external stakeholders to ensure regulatory compliance and AI ethics alignment.
- Lead and contribute to discussions, presentations, and whitepapers, establishing Healthfirst as a leader in AI security.
- Support development of incident response plans and mitigation strategies tailored to GenAI applications and environments.
Minimum Qualifications:
- Bachelor's degree in Computer Science or Cybersecurity, or a High School Diploma/GED (accredited) with equivalent work experience.
- 5-8+ years of experience in application security, secure software development, or cybersecurity, with at least 2-3+ years focused on AI/ML security or cloud security.
- Expertise in secure AI/ML development, including model security risks, adversarial attacks, and ethical AI considerations.
- Hands-on experience with cloud platforms, particularly AWS (AWS Bedrock knowledge is a plus).
- Proficiency in secure software development, threat modeling, and vulnerability management across AI/ML systems, web applications, and APIs.
- Experience with security testing methodologies, such as SAST, DAST, and SCA.
- Strong communication and presentation skills, capable of engaging with executive leadership, technical teams, and external stakeholders.
- Proven leadership experience, driving security initiatives, influencing security strategies, and mentoring security teams.
Preferred Qualifications:
- Experience with GenAI platforms such as AWS Bedrock, OpenAI, or similar.
- Expertise with application security tools (e.g., Veracode, Burp Suite, or other code scanning tools).
- Experience in web application and API penetration testing.
- Deep understanding of DevSecOps principles, including container security, IaC security, and cloud-native security best practices.
- Experience in security governance for AI ethics, data privacy, and regulatory compliance frameworks.
- Experience collaborating with regulators, auditors, and compliance teams to ensure AI security governance aligns with industry standards.
- Security certifications (e.g., CISSP, AWS Certified Security, OSCP) are a plus.
Compliance and Regulatory Responsibilities: See Above
- License/Certification: See Above
Hiring Range:
- Greater New York City Area (NY, NJ, CT residents): $131,900 - $190,570
- All Other Locations (within approved locations): $117,400 - $174,675
As a candidate for this position, your salary and related elements of compensation will be contingent upon your work experience, education, licenses and certifications, and any other factors Healthfirst deems pertinent to the hiring decision.
In addition to your salary, Healthfirst offers employees a full range of benefits such as medical, dental, and vision coverage; incentive and recognition programs; life insurance; and 401(k) contributions (all benefits are subject to eligibility requirements). Healthfirst believes in providing a competitive compensation and benefits package wherever its employees work and live.
The hiring range is defined as the lowest and highest salaries that Healthfirst would, in good faith, pay to a new hire, or for a job promotion or transfer into this role.
WE ARE AN EQUAL OPPORTUNITY EMPLOYER. Applicants and employees are considered for positions and are evaluated without regard to race, color, religion, gender, gender identity, sexual orientation, national origin, age, genetic information, military or veteran status, marital status, mental or physical disability, or any other protected Federal, State/Province, or Local status unrelated to the performance of the work involved.