We have been given a mandate to build a Data Engineering team for our client, who is looking for dynamic, self-motivated and technically competent individuals with an interest in data and technology – data architecture, data modelling, data integration, etc. Specializing in the data domain, you will work in a fast-paced data engineering team that delivers and supports our client's data needs.
Reach out to me if you have:
• A good degree in Computer Science, Computer Engineering or equivalent
• Relevant working experience in data modelling and data integration, preferably in an investment banking environment
• Experience working with enterprise databases using database technologies (PL/SQL, SQL, NoSQL) and data integration products (e.g. Informatica)
• Good knowledge of the Linux family of operating systems
Exposure to and knowledge of any of the following technologies is advantageous:
• Big Data
• Hadoop Technologies: HDFS, Zookeeper, Yarn, Spark, Hive, Impala, Sqoop, Solr, ELK, Flume, Kafka
• Hadoop Platforms: Cloudera, Databricks
• NoSQL Databases: Neo4J
• Cloud based Big Data Services: AWS EMR, Azure HDInsight
• Elastic Search
• Programming/scripting languages
• Shell scripting
• RESTful data APIs
• Experience with the Systems Development Life Cycle (SDLC) implementation methodology and/or agile methodologies such as Scrum and Kanban
• Good team player with strong analytical skills who enjoys solving complex problems with innovative ideas
• Strong communication and interpersonal skills, needed to interact with data analysts, business end-users and vendors to design and develop solutions
• Passion for data and technology
• CFA or equivalent certifications would be an added advantage
• Detail-oriented and meticulous in operational work