We are seeking a highly skilled and motivated Senior Data Software Engineer to join our innovative team.
In this role, you will leverage cloud-native technologies, including Databricks, Azure DevOps, Delta Lake tables, Spark Structured Streaming, and Lakehouse architecture, within our Microsoft Azure environment. You will play a pivotal role in processing large volumes of financial data, driving innovative solutions that enhance scalability, reduce processing latency, and improve transparency.
The remote option applies only to candidates working from any location within Greece.
Responsibilities
- Engineer and maintain high-performance data solutions leveraging Python, Apache Spark, and Delta Lake
- Participate in the full software development lifecycle, including requirements analysis, solution design, code reviews, and documentation
- Collaborate with business partners, IT teams, and architects to ensure data pipelines meet complex and evolving requirements
- Optimize financial data processing pipelines to improve scalability and reduce latency
- Use automated testing to ensure reliability, stability, and accuracy in data processing
- Apply strong analytical skills to break down complex data problems into achievable solutions
- Support operations teams by troubleshooting and resolving complex pipeline issues
- Identify opportunities for improvement and proactively address challenges
- Design and implement robust relational data models and SQL-based solutions
- Partner with the product owner and team members to deliver seamless data workflows
- Build systems and pipelines that align with modern cloud-native architectural principles
Requirements
- 3+ years of professional experience in data engineering or software engineering roles
- Strong core Python software engineering experience with proficiency in Apache Spark and Delta Lake
- Expertise in relational data models and SQL for designing and managing large-scale data solutions
- Knowledge of Azure Cloud Ecosystem, Databricks, and Azure DevOps
- Background in financial data processing and handling large-scale analytics workloads
- Familiarity with Spark Structured Streaming and its application to real-time data pipelines
- Capability to develop, debug, and refine end-to-end automated testing processes
- Experience with architectural practices related to Lakehouse platforms and data lake solutions
- Fluency in English (B2 level) to collaborate effectively across remote and in-person teams
Nice to have
- Understanding of modern big data processing techniques beyond Spark Structured Streaming
- Background in optimizing Delta Lake and Lakehouse architectures for complex business needs
- Track record of delivering solutions in finance-specific domains or similar industries
- Familiarity with designing and implementing pipelines in distributed systems with low latency
We offer
- For you:
- Paid annual vacation
- Paid sick leave days
- Private health insurance
- Stable income
- Meal and home office compensation
- For your comfortable work:
- Remote and hybrid work opportunities
- Corporate laptop
- Possibility to work on your own device
- Free licensed software
- Relocation opportunities
- Free wellbeing activities
- For your growth:
- Possibility to create a Personal Development Plan from your first day at the company
- Free training for technical and soft skills
- Free access to LinkedIn Learning platform
- Free access to internal and external e-Libraries
- Certification opportunities
- Language courses
- Internal technical and non-technical communities
- Possibility to contribute to internal, open-source products
EPAM is committed to providing our global team of 52,800+ EPAMers with inspiring careers. EPAMers lead with passion and honesty and think creatively. Our people are the source of our success, and we value collaboration, always try to understand our customers' business, and strive for the highest standards of excellence.