
Data Engineer - Japanese Bilingual

IBM

Manila, Philippines

Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe.

You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat.

Curiosity and a constant quest for knowledge serve as the foundation of success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.

Your Role and Responsibilities

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating Source-to-Target pipelines and workflows and implementing solutions that address the clients' needs.

Your primary responsibilities include:

  • Strategic Data Model Design and ETL Optimization: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
  • Robust Data Infrastructure Management: Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing, data-driven organization.
  • Seamless Data Accessibility and Security Coordination: Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.
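A Source-to-Target mapping of the kind described above can be sketched in plain Python. The column names and schema below are hypothetical, purely for illustration; in practice this logic would typically live inside an AWS Glue or Spark job rather than a standalone script:

```python
# Minimal Source-to-Target transform sketch (hypothetical schema).
from datetime import datetime, timezone

# Hypothetical mapping: source column name -> target column name
SOURCE_TO_TARGET = {
    "cust_id": "customer_id",
    "cust_nm": "customer_name",
    "ord_amt": "order_amount",
}

def transform(record: dict) -> dict:
    """Rename source fields to target names, cast the amount, stamp load time."""
    target = {tgt: record.get(src) for src, tgt in SOURCE_TO_TARGET.items()}
    target["order_amount"] = float(target["order_amount"] or 0.0)
    target["load_ts"] = datetime.now(timezone.utc).isoformat()
    return target

if __name__ == "__main__":
    source_row = {"cust_id": "C001", "cust_nm": "Tanaka", "ord_amt": "1200.50"}
    print(transform(source_row))
```

In a production pipeline the same mapping would be applied per partition across a Spark DataFrame, but the rename-cast-audit pattern is the same.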
PHJP2024

Required Technical and Professional Expertise

  • Proficient with AWS Data Platform components for a Data Lakehouse - Amazon S3, Amazon Redshift, Redshift Spectrum, AWS Glue with Spark, AWS Glue with Python, AWS Lambda functions with Python, AWS Glue Data Catalog, AWS Glue DataBrew, DynamoDB, and Amazon Aurora
  • Proficient with Amazon Kinesis and Amazon Managed Streaming for Apache Kafka (MSK)
  • Proficient with other open-source technologies such as Apache Airflow and dbt, and with Spark/Python or Spark/Scala on the AWS platform
  • Experience in developing batch and real-time data pipelines for data warehouses and data lakes
  • Experience in using Databricks services on the AWS platform
  • Experience in scheduling and managing data services on the AWS platform
  • Amenable to work on a client-based schedule (dayshift, mid-shift, night-shift) and in any IBM location in Quezon City (Eastwood and/or UP Ayala Technohub) or Cebu.
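As one concrete instance of the Lambda-plus-Kinesis proficiency listed above, the handler below decodes records from a Kinesis trigger event. The event shape follows AWS's documented Kinesis-to-Lambda integration (base64-encoded payloads under `Records[].kinesis.data`); the JSON payload schema itself is a hypothetical example:

```python
# Sketch of an AWS Lambda handler for an Amazon Kinesis trigger.
# Kinesis delivers each record's payload base64-encoded; here we
# assume (hypothetically) that the payloads are JSON documents.
import base64
import json

def handler(event, context=None):
    """Decode base64 Kinesis payloads and return the parsed JSON records."""
    out = []
    for rec in event.get("Records", []):
        payload = base64.b64decode(rec["kinesis"]["data"])
        out.append(json.loads(payload))
    return out
```

A real deployment would forward the decoded records onward (to S3, Redshift, or DynamoDB) instead of returning them, and would add error handling for malformed payloads.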

Preferred Technical and Professional Expertise

  • JLPT N1-N3 certification is preferred
  • Experience with Big Data tools such as Python, Hadoop, Hive, or Spark
  • Proven background in SQL, Unix/Linux, and ETL processes

Client-provided location(s): Quezon City, Metro Manila, Philippines; Cebu City, 6000 Cebu, Philippines
Job ID: IBM-21297907
Employment Type: Full Time
