Job Description

Big Data (Hadoop, Spark, Kafka, Storm, Scala), Informatica, Teradata and Core Java
3 to 10 Years
3
Bengaluru, India
IT
05/03/2019
05/04/2019
- Professional experience in the IT industry with Big Data (Hadoop, Spark, Kafka, Storm, Scala), Informatica, Teradata and Core Java.
- Experience in the storage, querying, processing and analysis of Big Data applications.
- Hands-on experience with the Hadoop ecosystem, including MapReduce, YARN, Hive, Pig, HBase, Sqoop, Oozie, ZooKeeper, Hue, Impala and Flume.
- Experience writing Pig scripts and Hive queries for processing and analyzing large volumes of data.
- Experience writing Sqoop jobs to import and export data between HDFS, Hive, HBase and relational database management systems.
- Experience with and knowledge of NoSQL databases such as MongoDB, HBase and Cassandra.
- Hands-on experience with Big Data tools such as Spark, Kafka and Storm.
- Experience with Hadoop cluster setup, installation, capacity planning and administration of multi-node clusters using the Cloudera (CDH 5.10) and Hortonworks (HDP 2.6) distributions of Apache Hadoop.