Big Data Platform Engineer / Big Data Platform Administrator

Location: Basel, Switzerland
Industry Sector: International Public Sector
Type of contract: Contract
Salary: Competitive



Office location: Basel

Department: General Secretariat

Unit: Information Management Services

Duration: 3 years

Contract type: Fixed-term

FTE%: 100%

Application Deadline: 30/06/2019

Description

We are looking for a number of experts in big data, data analytics and software engineering to help us build the next generation of our data and analytics systems.

Your contribution to our mission

We compile and analyse an array of data and statistics on global financial systems and micro-economic activity. From this, we generate a variety of insights for central banks, financial regulatory institutions, specialised publications and high-profile academics around the world. Indeed, some of the source data sets we work with, and the ways in which we combine them, are unique to our organisation.

But we know that our full potential in this area is yet to be realised. Our aim is to collect a wider variety of data and harness new technology to generate more powerful insights. This is your chance to be involved from the beginning.

A collaborative role that transforms our use of data

You will join a collaborative team tackling a range of complex software and data challenges. You will help us explore and exploit cutting-edge technologies, including big data platforms and machine learning. You will look outwards and collaborate, not just with the team but with expert economists, technologists, data scientists and statisticians, and with counterparts in other international organisations and central banks.

You will focus on delivering data lake solutions that will power the Bank’s data and analytics transformation. You will have strong, practical experience of building and managing big data platforms using the Hadoop family of technologies, including HBase, Hive, Pig and Spark, together with Scala.
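
To make the data lake focus concrete, below is a minimal, hypothetical sketch of the kind of Spark/Scala batch job such a platform typically runs: it lands raw CSV files as partitioned Parquet and registers the result as a Hive table. The paths, table name and column name are illustrative assumptions, not details taken from this posting.

    // Hypothetical sketch: land raw CSV data in the lake as partitioned
    // Parquet and register it as a Hive table. All names are illustrative.
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.col

    object LakeIngest {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("lake-ingest")
          .enableHiveSupport() // register output in the Hive metastore
          .getOrCreate()

        // Read a raw drop of source files (schema inferred for brevity).
        val raw = spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("hdfs:///lake/raw/market_data/")

        // Light cleanup, then write partitioned Parquet into the curated
        // zone and expose it to downstream query tools via the metastore.
        raw.filter(col("trade_date").isNotNull)
          .write
          .mode("overwrite")
          .partitionBy("trade_date")
          .format("parquet")
          .saveAsTable("curated.market_data")

        spark.stop()
      }
    }

Writing through the metastore is one common design choice: it lets Hive, Impala and Spark SQL all read the same curated tables.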

Your qualifications and experience

You will have a graduate degree in computer science or another IT-related technical field, with at least two years of experience in software development or data engineering. Above all, you will have a passion for data and analytics.

Principal accountabilities

  • Engineer and operate the platform from a technology point of view
  • Work in collaboration with federated data engineering teams as well as the network and system engineering team
  • Maintain and continually improve the platform infrastructure
  • Control and monitor the platform’s performance and capacity, and advise on necessary infrastructure changes
  • Ensure availability of the platform
  • Actively engage in the design of the new big data platform
  • Actively contribute to the selection and integration of any Big Data tools and frameworks required to provide the requested capabilities
  • Improve the data ingestion pipelines, ETL jobs and alerting to maintain data integrity and availability (see the ingestion sketch after this list)
  • Stay up to date with advances in data persistence and big data technologies, and run pilots to design a data architecture that scales with growing data sets
  • Design and develop architectural models for scalable data processing and scalable data storage
  • Document, performance-test and debug business applications as required
  • Represent the team in project meetings
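
Several of these accountabilities centre on ingestion. As a rough illustration only, the sketch below shows a Spark Structured Streaming job (it assumes the spark-sql-kafka connector is on the classpath) that reads a Kafka topic and appends it to the lake with checkpointing; the broker addresses, topic name and paths are hypothetical.

    // Hypothetical sketch: stream a Kafka topic into the lake as Parquet.
    // Brokers, topic and paths are assumptions, not details from the role.
    import org.apache.spark.sql.SparkSession

    object KafkaIngest {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("kafka-ingest")
          .getOrCreate()

        // Subscribe to the raw events topic; Kafka delivers key/value as bytes.
        val events = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
          .option("subscribe", "raw-events")
          .load()
          .selectExpr("CAST(value AS STRING) AS payload", "timestamp")

        // Append each micro-batch to the lake; the checkpoint directory lets
        // the job resume where it left off after a restart.
        val query = events.writeStream
          .format("parquet")
          .option("path", "hdfs:///lake/raw/events/")
          .option("checkpointLocation", "hdfs:///lake/checkpoints/events/")
          .outputMode("append")
          .start()

        query.awaitTermination()
      }
    }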

Qualifications and experience required

  • At least two years of experience in a Hadoop administration and/or support role
  • Good knowledge of Big Data querying tools such as Pig, Hive and Impala (see the query sketch after this list)
  • Experience with various messaging systems, such as Kafka
  • Proficient understanding of distributed computing principles
  • Familiarity with the specific characteristics of the Hortonworks/Cloudera stack, as well as with the latest technology trends in the big data analytics domain
  • Experience with automated deployment tools like Ansible or Puppet
  • Expert understanding of ETL principles and how to apply them within Hadoop is a plus
  • Experience creating and managing big data pipelines using Kafka, Flume and Spark
  • Experience with agile development methodologies
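
For the querying side, the hedged sketch below shows the kind of aggregation one would equally write in Hive or Impala, expressed here through Spark SQL; it reuses the illustrative curated.market_data table from the earlier batch-job sketch.

    // Hypothetical sketch: a daily row-count sanity check over the curated
    // table -- the sort of query used to confirm an ingestion run landed.
    import org.apache.spark.sql.SparkSession

    object LakeQuery {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("lake-query")
          .enableHiveSupport()
          .getOrCreate()

        spark.sql(
          """SELECT trade_date, COUNT(*) AS row_count
            |FROM curated.market_data
            |GROUP BY trade_date
            |ORDER BY trade_date""".stripMargin)
          .show(20, truncate = false)

        spark.stop()
      }
    }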

Skills required

  • Good abstraction and conceptual data management skills combined with a self-reliant, team-minded, communicative, proactive personality
  • Significant knowledge of Big Data technologies and tools, with the ability to share ideas within a collaborative team
  • Strong passion for data analytics and engineering in general

In your application, please specify that you found out about this opportunity on GCFjobs.com.


Bank for International Settlements (BIS)

The Bank for International Settlements (BIS) is an international financial institution owned by central banks which "fosters international monetary and financial cooperation and serves as a bank for central banks". The BIS carries out its work through its meetings, programmes and through the Basel Process – hosting international groups pursuing global financial stability and facilitating their interaction.
