Job Summary
At NetApp, our mission is to help our customers bring AI to their data - wherever and however they want - in a way that is agile, achievable, and secure. Our AI tools help customers to seamlessly deploy AI on data in-place: on-prem, hybrid or cloud. By providing AI-ready infrastructure, NetApp enables confident innovation and effective data management for our customers, while maintaining the highest standards of security and regulatory compliance.
Job Requirements
- Provide technical direction for AI projects, ensuring the application of best practices and cutting-edge technologies.
- Collaborate with cross-functional teams to align AI initiatives with business objectives.
- Lead multiple AI projects concurrently, ensuring timely and high-quality delivery.
- Stay current with the latest advancements in AI/ML and integrate new technologies into the team's work.
- Proven expertise in AI and machine learning, including supervised and unsupervised learning, neural networks, natural language processing, computer vision, or reinforcement learning.
- Experience deploying AI/ML models in production environments at scale.
- Proficiency in programming languages such as Python, Scala, Java, or C++.
- Strong familiarity with AI/ML frameworks and tools such as TensorFlow, PyTorch, Scikit-learn, etc.
- Must have experience working in Linux and with AWS/Azure/GCP and Kubernetes, including control plane, autoscaling, orchestration, and containerization.
- Proficiency with NoSQL document databases (e.g., MongoDB, Cassandra, Cosmos DB, DocumentDB).
- Experience building microservices, REST APIs, and related API frameworks.
- Strong understanding of and hands-on experience with computer architecture, data structures, and programming practices.
- Storage domain experience is a plus.
Education
- 8-10+ years of proficiency in programming languages such as Python, Scala, or Java.
- 6+ years of experience with machine learning libraries and frameworks: PyTorch, TensorFlow, Keras, OpenAI, open-source LLMs, LangChain, etc.
- 6+ years of experience working in Linux and with AWS/Azure/GCP and Kubernetes, including control plane, autoscaling, orchestration, and containerization (required).
- 6+ years of experience with NoSQL document databases (e.g., MongoDB, Cassandra, Cosmos DB, DocumentDB).
- 6+ years of experience building microservices, REST APIs, and related API frameworks.
- 8+ years of experience with big data technologies: understanding of platforms such as Spark, Hadoop, and distributed storage systems for handling large-scale datasets and parallel processing.
Compensation
The base salary range for this position is $190,000 - $250,000 and will be determined by the candidate's location, qualifications, experience, and education. Final compensation packages are competitive and in line with industry standards, reflecting a variety of factors, and include a comprehensive benefits package. This may cover health insurance, life insurance, retirement or pension plans, paid time off (PTO), various leave options, performance-based incentives, an employee stock purchase plan, and/or restricted stock units (RSUs), with all offerings subject to regional variations and governed by local laws, regulations, and company policies. Benefits may vary by country and region, and further details will be provided as part of the recruitment process.
Nearest Major Market: San Jose
Nearest Secondary Market: Palo Alto