Big Data Architect III
Location: Monroe, LA
Duration: 9 Months
As a Big Data (Hadoop) Architect/Developer, you will be responsible for Cloudera Hadoop development, high-speed querying, managing and deploying Flume, Kafka, Hive, and Spark, overseeing handover to operational teams, and proposing best practices and standards. Expertise in designing, building, installing, configuring, and developing within the Hadoop ecosystem is required. Familiarity with Pentaho and NiFi is a bonus skillset.
Principal Duties and Responsibilities (Essential Functions):
- Work with development teams within the data and analytics team to design, develop, and execute solutions to derive business insights and solve clients' operational and strategic problems.
- Support the development of data and analytics solutions and products that improve existing processes and decision-making.
- Build internal capabilities to better serve clients and demonstrate thought leadership in the latest innovations in big data and advanced analytics.
- Contribute to business and market development.
Specific skills and abilities:
- Defining job flows
- Managing and reviewing Hadoop log files
- Managing Hadoop jobs using a scheduler
- Providing cluster coordination services through ZooKeeper
- Supporting MapReduce programs running on the Hadoop cluster
- Ability to write MapReduce jobs
- Experience in writing Spark scripts
- Hands-on experience in HiveQL
- Familiarity with data loading tools such as Flume and Sqoop
- Knowledge of workflow schedulers such as Oozie
- Knowledge of ETL tools like Pentaho
Qualifications & Skills:
- Bachelor's degree in a related technical field preferred
- Expertise with HBase, NoSQL, HDFS, Java MapReduce for Solr indexing, data transformation, back-end programming, Java, JavaScript, Node.js, and OOAD
- 7+ years of experience in IT, with a minimum of 2 years of experience in Hadoop.