Role Responsibilities
As a Data Engineering lead, you will be responsible for building and expanding our data architecture on
Hadoop platforms. You will typically have more than 15 years of experience, with significant experience in
data solutions on Hadoop, Java/Scala programming, and software design and architecture.
Key Responsibilities
∙Create and maintain the bank’s data pipeline and platform architecture
∙Assemble large, complex data sets that meet functional / non-functional business requirements
∙Identify, design, and implement internal process improvements: automating manual processes,
optimizing data delivery, re-designing infrastructure for greater scalability, etc.
∙Build monitoring/alerting tools and frameworks using OSS technologies to track key
business/technical performance metrics.
∙Work with stakeholders including the Executive, Product, Data and Design teams to assist with
data-related technical issues and support their data infrastructure needs.
∙Keep our data separated and secure across national boundaries through multiple data centers.
∙Help ‘commercialize’ the data platform by working closely with the Data Science team to explore
opportunities, whether fighting fraud or developing multi-channel cross-sell/up-sell initiatives.
Requirements
∙Strong analytic skills related to working with unstructured datasets.
∙Ability to quickly learn and implement newer technologies, particularly on the Big Data side,
including Hive, HBase, Kafka, Spark, and NiFi.
∙Exposure to data pipelines, architectures, and data sets.
∙Experience performing root cause analysis on internal and external data and processes to
answer specific business questions and identify opportunities for improvement.
∙Experience building processes supporting data transformation, data structures, metadata,
dependency management, and workload management.
∙Working knowledge of message queuing, stream processing and highly scalable ‘big data’ data
stores.
Education: Bachelor’s degree
Role: Data Engineering
Industry: Banking/Financial Services/Broking