About Gap Inc.
Our past is full of iconic moments - but our future is going to spark many more. Our brands - Gap, Banana Republic, Old Navy and Athleta - have dressed people from all walks of life and all kinds of families, all over the world, for every occasion for more than 50 years.
But we're more than the clothes that we make. We know that business can and should be a force for good, and it's why we work hard to make product that makes people feel good, inside and out. It's why we're committed to giving back to the communities where we live and work. If you're one of the super-talented who thrive on change, aren't afraid to take risks and love to make a difference, come grow with us.
About the Role
In this role, you will design highly scalable, high-performing technology solutions in an Agile work environment and produce and deliver code and/or test cases using your knowledge of software development and Agile practices. You will collaborate closely with business support teams, product managers, security, and architecture to help resolve critical production issues and to simplify and improve business processes through the latest in technology and automation. You are a technical expert who will lead through the requirements gathering, design, development, deployment, and support phases of a product, and you are proficient in at least one core programming language or package.
What You'll Do
- Act as a Senior Data Engineer with expertise in designing and implementing scalable data solutions, including robust data pipelines.
- Bring strong proficiency in ETL processes, MLOps practices for efficient model deployment, and technologies such as Databricks, data lakes, Vector DB, and Feature Store.
- Design, optimize, and maintain scalable data pipelines using PySpark (Apache Spark), Python, Databricks, and Delta Lake (see the first sketch after this list).
- Implement MLOps practices for efficient deployment and monitoring of machine learning models.
- Develop strategies and tools for detecting and mitigating data drift (see the second sketch after this list).
- Utilize Vector DB for effective data querying and management.
- Establish and manage a Feature Store to centralize and share feature data for machine learning models.
- Ensure data integrity and quality throughout all stages of the pipeline.
- Collaborate with teams and stakeholders to deliver impactful data solutions.
- Demonstrate proficiency in Python programming, PySpark (Apache Spark), data architecture, ETL processes, and cloud platforms (AWS, Azure, GCP).
- Overall 5+ years of experience with Databricks, Delta Lake, PySpark (Apache Spark), MLOps, data drift detection, Vector DB, and Feature Store.
- Experience designing, optimizing, and maintaining data pipelines.
- Experience implementing MLOps practices for efficient deployment and monitoring of ML models.
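The pipeline bullet above refers to the kind of PySpark and Delta Lake code this role produces day to day. The following is a minimal, illustrative sketch only; it assumes a Databricks/Spark environment with Delta Lake available, and the job name, table paths, and column names are hypothetical rather than taken from this posting.

```python
# Minimal sketch of a batch pipeline: read raw Delta data, cleanse,
# aggregate, and write a curated Delta table. Paths and columns are
# hypothetical examples, not real Gap Inc. assets.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("orders-daily-aggregation")   # hypothetical job name
    .getOrCreate()
)

# Read raw events from a (hypothetical) landing zone.
raw = spark.read.format("delta").load("/mnt/landing/orders")

# Basic cleansing and a simple daily aggregation.
daily = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_total") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
       .groupBy("order_date", "store_id")
       .agg(F.sum("order_total").alias("revenue"),
            F.countDistinct("order_id").alias("orders"))
)

# Write the curated result back as a Delta table, partitioned by date.
(
    daily.write.format("delta")
         .mode("overwrite")
         .partitionBy("order_date")
         .save("/mnt/curated/daily_orders")
)
```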
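Likewise, the data drift bullet refers to checks such as comparing a training baseline against current serving data. The sketch below shows one common approach, the Population Stability Index (PSI), using plain NumPy; the synthetic data and the 0.2 alert threshold are illustrative assumptions, not requirements from this posting.

```python
# Illustrative drift check: Population Stability Index (PSI) between a
# baseline feature sample and current serving data. Larger PSI = more drift.
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Compare two 1-D samples of the same feature using baseline-derived bins."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Avoid log(0) and division by zero for empty buckets.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Example usage with synthetic data standing in for real feature values.
rng = np.random.default_rng(42)
baseline = rng.normal(loc=0.0, scale=1.0, size=10_000)
current = rng.normal(loc=0.3, scale=1.1, size=10_000)   # shifted distribution

psi = population_stability_index(baseline, current)
if psi > 0.2:          # common rule-of-thumb alert threshold (assumption)
    print(f"Drift alert: PSI={psi:.3f}")
```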
Benefits at Gap Inc.
- One of the most competitive paid time off plans in the industry
- Comprehensive health coverage for employees, same-sex partners and their families
- Health and wellness program: free annual health check-ups, fitness center and Employee Assistance Program
- Comprehensive benefits to support the journey of parenthood
- Retirement planning assistance
- See more of the benefits we offer.