Introduction
At IBM, work is more than a job - it's a calling: To build. To design. To code. To consult. To think along with clients and sell. To make markets. To invent. To collaborate. Not just to do something better, but to attempt things you've never thought possible. Are you ready to lead in this new era of technology and solve some of the world's most challenging problems? If so, let's talk.
IBM Consulting is IBM's consulting and global professional services business, with market leading capabilities in business and technology transformation. With deep expertise in many industries, we offer strategy, experience, technology, and operations services to many of the most innovative and valuable companies in the world. Our people are focused on accelerating our clients' businesses through the power of collaboration. We believe in the power of technology responsibly used to help people, partners and the planet.
Within IBM Consulting, Asset Engineering Services is a group that builds software products and repeatable solutions to accompany and support multiple consulting and services engagements across different clients. Our group is looking for a motivated and seasoned Senior Data Scientist to join our global team, deliver outstanding results for both internal and external clients, and build a robust and repeatable product offering. You will be working on one of the top 10 assets in the group, with highly collaborative teams in a dynamic and agile environment.
Our team works on a product called Modern Data Accelerators, which provides a foundation of data and analytics to enable business outcomes through the creation of cloud and hybrid cloud environments. It manages: Ingestion, Curation, Metadata, Operational Controls, Persist & Publish, and Data Security.
Your Role and Responsibilities
- Develop and maintain data pipelines for data ingestion, transformation, and loading.
- Optimize data pipelines for performance and scalability.
- Monitor data quality and troubleshoot issues.
- Collaborate with Data Architect and DevOps for infrastructure management.
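The ingest, transform, and load responsibilities above can be sketched in a few lines. This is a minimal illustration using only the Python standard library (`csv` and `sqlite3` stand in for real ingestion sources and a warehouse); the table and field names are invented for the example, and a production pipeline would use tools such as dbt, DuckDB, or Spark instead.

```python
# Minimal ingest -> transform -> load sketch (standard library only).
import csv
import io
import sqlite3

# Hypothetical raw input; row 2 has a missing amount on purpose.
RAW_CSV = """order_id,amount,currency
1,10.50,USD
2,,USD
3,7.25,EUR
"""

def ingest(raw: str) -> list[dict]:
    """Read raw CSV records into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[tuple]:
    """Cast fields to target types and drop rows with missing
    amounts -- a simple data-quality gate."""
    return [
        (int(r["order_id"]), float(r["amount"]), r["currency"])
        for r in rows
        if r["amount"]  # reject records that fail the quality check
    ]

def load(rows: list[tuple], conn: sqlite3.Connection) -> None:
    """Load the cleaned records into a warehouse-style table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(order_id INTEGER, amount REAL, currency TEXT)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(ingest(RAW_CSV)), conn)
count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # 2 rows survive the quality gate
```

The quality gate in `transform` is where the monitoring and troubleshooting responsibility shows up in practice: rejected records would typically be logged or routed to a quarantine table rather than silently dropped.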
Required Technical and Professional Expertise
- Strong programming skills in Python (or similar language).
- Experience building data pipelines and data transformation processes with dbt, DuckDB, Spark, and other ETL/ELT tools.
- Familiarity with data ingestion tools such as Kafka, where applicable.
- Experience with cloud platforms for data storage and processing.
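The ELT tooling named above (dbt in particular) centers on expressing transformations as named SQL models over data already loaded into a warehouse. A minimal sketch of that pattern, using `sqlite3` in place of a warehouse engine; the table and column names are illustrative, not from any real schema:

```python
import sqlite3

# Stand-in for raw data already landed in the warehouse.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE raw_events (user_id INTEGER, event TEXT);
INSERT INTO raw_events VALUES (1, 'click'), (1, 'click'), (2, 'view');
""")

# A dbt "model" is essentially a named SELECT; materializing it as a
# table mirrors dbt's table materialization strategy.
MODEL_SQL = """
CREATE TABLE clicks_per_user AS
SELECT user_id, COUNT(*) AS clicks
FROM raw_events
WHERE event = 'click'
GROUP BY user_id
"""
conn.execute(MODEL_SQL)

rows = conn.execute(
    "SELECT user_id, clicks FROM clicks_per_user ORDER BY user_id"
).fetchall()
print(rows)  # only user 1 has click events
```

Keeping each transformation as a declarative SELECT, rather than imperative row-by-row code, is what makes these pipelines easy to test, version, and optimize for scale.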
Preferred Technical and Professional Expertise
- Knowledge of ModelOps practices on cloud platforms and with containerization.
- Familiarity with design-led development methodologies for complex data platforms and analytics.
- Ability to work with data from the data platform and communicate insights effectively.