What you'll do:
"Eaton Corporation's Center for Intelligent Power has an opening for a Associate Engineer- Machine Learning Operations. The ideal candidate will be responsible for developing and maintaining the infrastructure and tools required to deploy and maintain machine learning models at scale. This position requires understanding in machine learning and software engineering. The candidate will work closely with other teams to make sure the requested features by the businesses are delivered.
About Eaton:
Eaton is a power management company with 2018 sales of $21.6 billion. We make what matters work. Everywhere you look, from the technology and machinery that surrounds us to the critical services and infrastructure that we depend on every day, you'll find one thing in common: it all relies on power. That's why Eaton is dedicated to improving people's lives and the environment with power management technologies that are more reliable, efficient, safe, and sustainable. Because this is what matters. We are confident we can deliver on this promise because of the attributes that our employees embody. We're ethical, passionate, accountable, efficient, transparent, and committed to learning. These values enable us to tackle some of the toughest challenges on the planet, never losing sight of what matters.
"• Maintain the infrastructure and tools required to deploy machine learning models at scale.
• Develop and maintain data engineering pipelines and continuous integration/continuous deployment (CI/CD) pipelines for machine learning models.
• Develop, train, and validate machine learning models to address business needs (a minimal sketch follows this list).
• Understand the challenges of productionizing machine learning software and collaborate with data scientists to ensure that software best practices, templates, and other MLOps principles are integrated to reduce cycle time.
• Develop and maintain documentation and training materials for machine learning solutions.
• Keep up to date with emerging technologies and trends in machine learning and cloud infrastructure.
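To give a flavor of the train-validate-package step referenced above, here is a minimal, hypothetical sketch in Python using scikit-learn. The dataset, the 0.90 accuracy gate, and the model.joblib artifact name are illustrative assumptions, not Eaton specifics.

```python
# Hypothetical sketch: train, validate, and package a model artifact.
# The dataset, threshold, and artifact name are illustrative assumptions.
import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Validation gate: a CI job could fail the build if accuracy regresses.
accuracy = accuracy_score(y_val, model.predict(X_val))
assert accuracy >= 0.90, f"validation gate failed: accuracy={accuracy:.3f}"

# Serialize the validated artifact for a downstream deployment step.
joblib.dump(model, "model.joblib")
print(f"validation accuracy: {accuracy:.3f}")
```

In a real setting, the dataset and gate threshold would come from the business problem, and the artifact would be pushed to a model registry rather than written to local disk.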
Qualifications:
• Bachelor's degree (minimum) or Master's degree in Computer Science, Software Engineering, or a related discipline.
Skills:
"• Fresher with understanding in machine learning, software engineering, or related field.
• Understanding of machine learning frameworks such as TensorFlow, PyTorch, or Scikit-learn.
• Understanding of writing data engineering pipeline code with coding best practices.
• Understanding of CI/CD pipelines and containerization technologies such as Docker and Kubernetes.
• Understanding of algorithms such as regression, classification, clustering and deep learning.
• Strong analytical and problem-solving skills.
• Excellent communication skills and ability to work collaboratively with other teams."
• Fresher with a foundational understanding of machine learning, software engineering, or a related field.
• Understanding of machine learning frameworks such as TensorFlow, PyTorch, or scikit-learn.
• Understanding of cloud infrastructure such as AWS, Azure, or GCP.
• Ability to write data engineering pipeline code following coding best practices (see the sketch after this list).
• Understanding of CI/CD pipelines and containerization technologies such as Docker and Kubernetes.
• Understanding of algorithms such as regression, classification, clustering, and deep learning.
• Understanding of software engineering best practices, including version control, testing, and deployment.
• Strong analytical and problem-solving skills.
• Excellent communication skills and the ability to work collaboratively with other teams.
• Ability to manage multiple projects and priorities in a fast-paced environment.
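As a concrete illustration of pipeline code written to best practices (typed, pure, and unit-tested), here is a hypothetical sketch; the clean_readings function, the power_kw column, and the test data are invented for illustration.

```python
# Hypothetical sketch of a small, testable data-pipeline step.
from __future__ import annotations

import pandas as pd


def clean_readings(frame: pd.DataFrame) -> pd.DataFrame:
    """Drop rows with missing power readings and normalize column names.

    A pure transform (DataFrame in, new DataFrame out) keeps the step
    easy to unit-test and to reuse across batch jobs and CI runs.
    """
    out = frame.rename(columns=str.lower).dropna(subset=["power_kw"])
    return out.reset_index(drop=True)


# A minimal test alongside the transform, runnable under pytest or directly.
def test_clean_readings() -> None:
    raw = pd.DataFrame({"Power_kW": [1.5, None, 2.0], "Site": ["A", "B", "C"]})
    cleaned = clean_readings(raw)
    assert list(cleaned.columns) == ["power_kw", "site"]
    assert len(cleaned) == 2


if __name__ == "__main__":
    test_clean_readings()
    print("pipeline step test passed")
```

Keeping each step a pure function like this makes it straightforward to cover with the unit tests a CI/CD pipeline runs on every commit.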
Desired Expertise (in one or more of the following areas):
• Professional certification in Machine Learning or related field.
• Understanding of data engineering and data warehousing concepts.
• Understanding of big data technologies such as Hadoop, Spark, or Kafka.
• Familiarity with DevOps practices and tools.
• Familiarity with monitoring and logging tools such as ELK, Grafana, or Prometheus (see the sketch below).
• Familiarity with Agile methodologies.
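As one illustration of the monitoring tooling mentioned above, here is a hypothetical sketch that instruments a stand-in prediction function with the Python prometheus_client library; the metric names, port 8000, and the predict stub are assumptions for the example.

```python
# Hypothetical sketch: exposing model-serving metrics to Prometheus.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

PREDICTIONS = Counter("model_predictions_total", "Total predictions served")
LATENCY = Histogram("model_prediction_latency_seconds", "Prediction latency")


@LATENCY.time()  # records how long each call takes
def predict(features: list[float]) -> float:
    PREDICTIONS.inc()
    # Stand-in for a real model call.
    return sum(features) / len(features)


if __name__ == "__main__":
    start_http_server(8000)  # metrics exposed at http://localhost:8000/metrics
    while True:
        predict([random.random() for _ in range(4)])
        time.sleep(1.0)
```

A Prometheus server would scrape the /metrics endpoint, and Grafana could then chart the prediction counter and latency histogram.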