Job Description
The Hadoop Architect (HA) is expected to play a pivotal bridging role, understanding and mapping clients' Big Data business needs to data-centric solutions based on Hadoop and other ecosystem technologies.
At a broad level, the HA will leverage his or her solutioning expertise to translate a customer's business need into a techno-analytic problem and work with technology and analytics specialists to bring large-scale analytic solutions to fruition. The right candidate will have prior experience in scalable, data-centric Big Data architecture design, consulting, and pre-sales.
The role will focus on four areas, in order of importance:
- Consultant - Provide thought leadership and best practices on Big Data architecture, data-centric applications, and technology to Genpact's customers. Identify the specific capabilities, practices, tools, and people the organization needs to develop in this regard. May work directly with clients to understand and map business needs to data-centric solutions based on Hadoop and other ecosystem technologies.
- Solutioning - Conceptualize and develop relevant Big Data solutions and architecture for our customers, and identify opportunity areas with clients by demonstrating relevant, credible Big Data solutions and paradigms. The role may entail a high-level assessment of a customer's Big Data architecture readiness and the associated technology and analytic benchmarking, which may ultimately converge into a client-specific Big Data roadmap.
- Pre-Sales – Lead or collaborate on responses to Big Data RFIs and RFPs
- Operational Excellence – Ensure execution of critical Big Data projects, including driving innovative implementations and insights derived from data
Job Requirements
- MS in Computer Science, Information Systems, Computer Engineering, or Systems Engineering with 7+ years of relevant experience,
- or a PhD in Computer Science, Computational Science, or Engineering with 5+ years of industry experience in Big Data or parallel/distributed computing
- Exposure to Big Data concepts is a must. Full-cycle experience is desirable in at least one Big Data project, from creating a business use case and a Big Data assessment/roadmap through technology and analytic solutioning, implementation, and change management
- Demonstrated expertise in Service Oriented Architecture (SOA) design principles
- Strong hands-on experience with MapReduce (or another parallel programming technique)
- Strong hands-on experience with NoSQL and other Apache Hadoop ecosystem projects such as Pig (and Pig Latin), Hive, Oozie, HBase, Flume, or Sqoop
- Past experience stitching together large architecture solutions and participating in RFP responses
- Reasonably well exposed to concepts in Machine Learning and Statistics, especially their application to learning from large data sources
- Reasonable exposure to deriving insights from large distributed databases
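For candidates unfamiliar with the MapReduce requirement above: it refers to the map-shuffle-reduce pattern that underlies Hadoop. A minimal single-process sketch of the classic word-count example in plain Python (not the Hadoop Java API; names here are illustrative only):

```python
from collections import defaultdict
from itertools import chain

def mapper(line):
    # Map phase: emit a (word, 1) pair for each word in a line.
    return [(word.lower(), 1) for word in line.split()]

def shuffle(pairs):
    # Shuffle phase: group values by key, as Hadoop does between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reducer(key, values):
    # Reduce phase: aggregate all values emitted for one key.
    return key, sum(values)

def word_count(lines):
    mapped = chain.from_iterable(mapper(line) for line in lines)
    return dict(reducer(k, v) for k, v in shuffle(mapped).items())

print(word_count(["big data big insights", "data pipelines"]))
# → {'big': 2, 'data': 2, 'insights': 1, 'pipelines': 1}
```

In a real Hadoop cluster, the map and reduce functions run in parallel across many nodes and the shuffle is handled by the framework; the logical decomposition is the same.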
Country: USA, State: New York, City: New York, Company: Genpact.