Company Summary
DISH Network Technologies India Pvt. Ltd is a technology subsidiary of EchoStar Corporation. Since 1980, the organization has been at the forefront of technology, serving as a disruptive force and driving innovation and value for our customers.
Our Product Portfolio: Boost Mobile (consumer wireless), Boost Mobile Network (5G connectivity), DISH TV (Direct Broadcast Satellite), Sling TV (OTT), OnTech (Smart Home Services), Hughes (global satellite connectivity solutions) and Hughesnet (satellite internet).
Our facility in India is one of EchoStar's largest development centers outside the US. As a hub for technological convergence, our engineering talent is a catalyst for innovation in multimedia network and communications development.
Department Summary
Our Technology teams challenge the status quo and reimagine capabilities across industries. Whether through research and development, technology innovation or solution engineering, our people play vital roles in connecting consumers with the products and platforms of tomorrow.
Job Duties and Responsibilities
- Build data pipelines using Python, Scala, and PySpark to extract, transform, and load data from various sources into our data lake/data warehouse (preferably in Databricks).
- Design, build, and maintain efficient data pipelines using AWS services such as S3, Lambda, EMR, CloudWatch, SNS, and SQS.
- Optimize and tune data pipelines for performance and reliability.
- Implement data quality and validation processes to ensure high-quality data.
- Work closely with data scientists and analysts to understand data requirements and provide scalable solutions.
- Monitor and troubleshoot data infrastructure and pipelines in production.
Skills, Experience and Requirements
- 4+ years as a Data Engineer in the wireless and/or telecom space, especially in domains such as Sales, Marketing, Retention, Product, Revenue, and Partner Management.
- Strong proficiency with AWS cloud services for data engineering.
- Experience with Databricks for data processing and analytics.
- Proficiency in Python and Scala for scripting and data manipulation.
- Solid understanding of distributed computing principles and experience with big data frameworks such as PySpark.
- Experience with data modeling, data warehousing, and building ETL pipelines.
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent practical experience).
- Strong problem-solving skills and attention to detail.
- Strong Agile experience, including Scrum.
- Excellent communication and collaboration skills.
Benefits
- Employee Stock Purchase
- Term Insurance
- Accident Insurance
- Health Insurance
- Training Reimbursement
- Gratuity
- Mobile and Internet Reimbursement
- Team Outings