
Software Developer - Data Platform

IBM

Krakow, Poland

Introduction
At IBM, work is more than a job - it's a calling: To build. To design. To code. To consult. To think along with clients and sell. To make markets. To invent. To collaborate. Not just to do something better, but to attempt things you've never thought possible. Are you ready to lead in this new era of technology and solve some of the world's most challenging problems? If so, let's talk.

Your Role and Responsibilities
We are currently seeking a talented software developer to focus on our data platform, which powers transformative AI/ML products reaching tens of millions of customers per day and billions worldwide. The department covers data infrastructure, data pipelines, analysis, and performance optimization.

The ideal candidate has experience architecting, developing, and supporting large-scale data platforms & infrastructure with a focus on resilience, scalability, and performance within a fast-growing, agile environment.

Responsibilities:
• Develop and maintain the petabyte-scale data lake, warehouse, pipelines, and query layers.
• Develop and support a multi-region data ingestion system fed by geographically distributed edge AI systems.
• Develop and support AI research pipelines, training and evaluation pipelines, audio re-encoding and scanning pipelines, and various analysis outputs for business users.
• Use pipelines to manage resilient, idempotent coordination with external databases, APIs, and systems.
• Work with AI Speech and Audio engineers to support and co-develop heterogeneous pipelines over large flows of conversational AI data, accelerating experimentation with new AI models and improvements.
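To give a flavor of the "resilient, idempotent coordination" the responsibilities above describe, here is a minimal sketch of the general pattern, not IBM's actual implementation: each record carries a stable deduplication key, so a pipeline step that writes to an external system can be retried or replayed safely. All names (`dedupe_key`, `IdempotentWriter`) and the in-memory store are hypothetical stand-ins.

```python
# Sketch of an idempotent pipeline write step (illustrative only).
import hashlib
import json


def dedupe_key(record: dict) -> str:
    """Derive a stable key from the record's content (hypothetical fields)."""
    canonical = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()


class IdempotentWriter:
    """Writes to an external store, skipping records already applied."""

    def __init__(self, store: dict):
        self.store = store               # stands in for an external DB/API
        self.applied: set[str] = set()   # keys of records already written

    def write(self, record: dict) -> bool:
        key = dedupe_key(record)
        if key in self.applied:          # replayed message: no-op, safe retry
            return False
        self.store[key] = record
        self.applied.add(key)
        return True
```

Because a replayed batch produces no duplicate writes, upstream retries (common with pub/sub systems like Kafka or SQS) do not corrupt the downstream state.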

Required Technical and Professional Expertise
• 4+ years of professional Python experience.
• 2+ years of pub/sub experience (Kafka, Kinesis, SQS, MQTT, etc.).
• 3+ years working on petabyte-scale data platforms.
• 3+ years working in AWS.
• Experience building schema-based parsers or ETLs using standard tooling in Python.
• Experience developing with Apache Avro, Parquet schemas, SQLAlchemy (or similar ORMs), and pySpark in Python.
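As a rough illustration of the "schema-based parser" skill listed above, here is a tiny pure-Python sketch. The schema format and field names are assumptions for the example; production pipelines would instead use Avro or Parquet schemas with tools like pySpark.

```python
# Illustrative schema-based record parser (hypothetical schema and fields).

SCHEMA = {"event_id": str, "ts_ms": int, "source": str}


def parse_record(raw: dict, schema: dict = SCHEMA) -> dict:
    """Validate one raw record against the schema, coercing field types."""
    parsed = {}
    for field, ftype in schema.items():
        if field not in raw:
            raise ValueError(f"missing field: {field}")
        parsed[field] = ftype(raw[field])  # coerce, e.g. "42" -> 42
    return parsed
```

The same idea (declare the schema once, validate and coerce every record against it) is what Avro readers and Spark's `StructType` schemas provide at scale.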

Preferred Technical and Professional Expertise
• Professional experience with conversational AI (chatbots, virtual assistants, etc.).
• Professional experience developing and supporting large-scale Lakehouses.
• Professional experience architecting and implementing large-scale query engines such as Presto.

What we offer:
• Working for a top-5 IT company according to the Forbes 2022 best-employers ranking
• International and prestigious projects
• Highly skilled teams of experts
• Wide range of IBM trainings and certificates
• Unlimited access to Udemy, Harvard Business Review, Safari O'Reilly, getAbstract, IBM AI Skills Academy
And what is more:
• Contract of employment
• Competitive compensation, depending on your skills and experience
• Private medical care and life insurance
• Employee Assistance Program
• Sport, charity & other networking groups
• Summer / winter camps for children
• Discounts with IBM employee badge
• Referral Bonus Program
• Home office option
• No dress code

Client-provided location(s): Kraków, Poland
Job ID: IBM-20738473
Employment Type: Full Time
