Role 1: C2 - Solution Architect for Modern Data and AI Platforms Engineering
Job Location: Bangalore
Job Description:
A) For Data Engineering Practice Presales and Solutioning (80% of time)
• Lead practice presales and proposals for industry verticals in the areas of Modern Data Platforms, Data Engineering, Analytics, and AI.
• Respond to proposals, both reactive and proactive; lead proposal defense; and anchor proposal development and collaboration.
• Lead the technical architecture and provide technical oversight for the solution roadmap
implementation.
• Participate in customer discovery engagements to lead & blueprint the technical solution.
• Define the solution roadmap based on industry trends and market requirements.
• Develop solution offerings and engage with customers to discuss solution proposals and achieve
business wins.
• Develop solution technical use cases, POVs, etc.
• Develop automation and innovation for own portfolio and practice
• Develop the partner ecosystem for own portfolio and practice
• Lead customer POCs and pilots on an as-needed basis
• Participate in internal and external forums (e.g., standards organizations, industry forums) to contribute and propose ideas, review proposals, or present new proposals for standardization
• Mentor and groom project team members in the core technology areas
B) For Jumpstart Client projects (20% of time)
• Blueprint and plan complex data engineering projects spanning multiple client engagements across operational and analytical data platforms
• Strong stakeholder management and relationship development
• Lead engagements in the best-suited SDLC model or the client-suggested model such as SAFe
• Develop estimates for both effort and commercials
• Manage scope, change, and commercials
• Validate technical outcomes against requirements
• Review solutions and suggest improvements
Roles & Responsibilities:
Skills Required:
• 14+ years of overall work experience.
• Mandatory: minimum 5 years of experience in solution development and presales
• Mandatory: should have conducted presales and solutioning in product engineering service lines or startups
• Exposure to sectors such as Manufacturing, Hi-tech, ISVs, Automotive, ENU, communication & network equipment, and BFSI
• Strong hands-on work experience in architecting, designing, and engineering data platform solutions
including one or more of the following skills:
• Solution planning and estimation (functional, technical, etc.)
• Well versed in embedding frameworks, accelerators, and execution models
• Must have led solution application architecture, data modelling, and data flow architecture
• Experience in data management across data catalogs, MDM, lineage, and data quality
• Experience architecting solutions in hybrid landscapes, working on unstructured data, time series, OT, ET, and enterprise data
• Rich experience designing and engineering a data platform on Azure, AWS, or GCP
• Experience with one or more cloud data processing stacks (an illustrative PySpark sketch follows this skills list):
o Azure Streaming & Batch Data - Azure Event Hubs, Azure IoT Hub, Azure Stream Analytics, Azure Data Factory, and Spark
o AWS Streaming & Batch Data - AWS IoT Core, AWS IoT Events, AWS Kinesis Data Streams, AWS Glue, AWS Step Functions, AWS EMR, etc.
o GCP Data - Google Dataflow, Google Pub/Sub
o Open Source - data extraction and processing tools, APIs, developing custom connectors and tools, Python coding, etc.
• Awareness of AI/ML: Text Processing, NLP, Knowledge Graph Construction/Chatbot integration, MLOps pipeline creation
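For context on the hands-on Spark and streaming skills listed above, a minimal PySpark sketch is shown below. It reads JSON telemetry events from a Kafka-style broker, aggregates them per device, and lands the results in a data lake. The broker address, topic, storage path, and schema are hypothetical placeholders rather than details from this role description, and the job assumes the spark-sql-kafka connector is on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

# Hypothetical endpoints and paths -- placeholders for illustration only.
KAFKA_BOOTSTRAP = "broker:9092"
RAW_TOPIC = "device-telemetry"
LAKE_PATH = "s3://example-bucket/curated/telemetry"

spark = SparkSession.builder.appName("telemetry-ingest").getOrCreate()

# Schema of the incoming JSON telemetry events.
event_schema = StructType([
    StructField("device_id", StringType()),
    StructField("metric", StringType()),
    StructField("value", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Streaming ingest: read events from Kafka and parse the JSON payload.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", KAFKA_BOOTSTRAP)
    .option("subscribe", RAW_TOPIC)
    .load()
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Hourly average per device and metric; the watermark bounds how late data may arrive.
hourly = (
    events
    .withWatermark("event_time", "10 minutes")
    .groupBy(F.window("event_time", "1 hour"), "device_id", "metric")
    .agg(F.avg("value").alias("avg_value"))
)

# Write aggregates to the lake as Parquet; checkpointing makes the job restartable.
query = (
    hourly.writeStream
    .format("parquet")
    .option("path", LAKE_PATH)
    .option("checkpointLocation", LAKE_PATH + "/_checkpoints")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```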
Qualifications:
• Experience with any of the below Big Data stores
o CDW - Azure Synapse Data Warehouse, AWS Redshift, GCP BigQuery
o Cloud Data Lake / Delta Lake - AWS Delta Lake, Databricks Lakehouse, GCP Data Lake (using Dataproc, Cloud Storage, BigQuery)
o Cloud NoSQL - Azure Cosmos DB, AWS DynamoDB, Google Bigtable (columnar storage)
o Open-source NoSQL - Cassandra, HBase
• Experience with any of the below Big Data management and pipeline stacks
o AWS - AWS Elastic MapReduce (managed Hadoop, managed Spark)
o Azure - Azure Data Factory, Azure Data Lake, Azure Databricks
o GCP - GCP Dataproc, GCP Dataflow
o Open Source - Spark using Python/R/Scala
• Experience with any of the below Data Governance tools (an illustrative Glue Data Catalog sketch appears at the end of this section)
o Data Catalog - Azure Purview, AWS Glue Crawler, Google Data Catalog
o Data Lineage - Azure Purview, Google Data Catalog, AWS Glue
o Open-source Data Governance - Apache Atlas, OpenLineage, and other tools
• Experience with any of the below Graph technologies (a minimal Neo4j sketch also appears at the end of this section)
o Graph construction, graph DB, graph API, graph processing, graph visualization, ontology construction
o Azure Cosmos DB, Apache Gremlin, AWS Neptune, Neo4j
o Others - ArangoDB, Aerospike, JanusGraph, WebProtégé
• Experience in Data Ops, Data Movement Platforms, Reverse ETL and various other modern and
emerging stacks is preferred
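As a hedged illustration of the data catalog and governance work referenced above, the sketch below uses AWS Glue (one of the tools named) via boto3 to trigger a crawler and then list the tables it registers. The crawler name, database name, and region are assumptions for illustration, not details from this posting.

```python
import boto3

# Hypothetical names and region -- replace with real values in an actual environment.
CRAWLER_NAME = "telemetry-curated-crawler"
DATABASE_NAME = "telemetry_curated"

glue = boto3.client("glue", region_name="us-east-1")

# Start the crawler that scans the curated zone and registers/updates table metadata.
glue.start_crawler(Name=CRAWLER_NAME)

# After the crawler finishes, list the tables now present in the Glue Data Catalog.
paginator = glue.get_paginator("get_tables")
for page in paginator.paginate(DatabaseName=DATABASE_NAME):
    for table in page["TableList"]:
        columns = [col["Name"] for col in table["StorageDescriptor"]["Columns"]]
        print(table["Name"], columns)
```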
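Similarly, the graph construction skills could be exercised with any of the listed stores; the sketch below uses Neo4j and recent versions of its official Python driver (the execute_read/execute_write API) to upsert a small device/site graph and run a traversal. The connection URI, credentials, and the data model are hypothetical placeholders.

```python
from neo4j import GraphDatabase

# Hypothetical connection details and a toy device/site model -- illustration only.
URI = "bolt://localhost:7687"
AUTH = ("neo4j", "password")

def upsert_device(tx, device_id, site):
    # MERGE keeps the load idempotent: nodes and the relationship are created only if missing.
    tx.run(
        """
        MERGE (d:Device {id: $device_id})
        MERGE (s:Site {name: $site})
        MERGE (d)-[:LOCATED_AT]->(s)
        """,
        device_id=device_id,
        site=site,
    )

def sites_with_devices(tx):
    # Simple traversal: which devices sit at each site?
    query = "MATCH (d:Device)-[:LOCATED_AT]->(s:Site) RETURN s.name AS site, collect(d.id) AS devices"
    return [(record["site"], record["devices"]) for record in tx.run(query)]

with GraphDatabase.driver(URI, auth=AUTH) as driver:
    with driver.session() as session:
        for device_id, site in [("dev-001", "plant-a"), ("dev-002", "plant-b")]:
            session.execute_write(upsert_device, device_id, site)
        for site, devices in session.execute_read(sites_with_devices):
            print(site, devices)
```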