Playing with functions in Hive

Apache Hive has built-in functions, which can be listed with

SHOW FUNCTIONS;
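
If you just want to check a couple of functions without opening an interactive session, beeline can run the statement directly with -e (a hedged example; the JDBC URL is the same one used later in this post) and the output can be grepped:

user@computer:$ beeline -u jdbc:hive2://localhost:10000 --silent=true -e "SHOW FUNCTIONS;" | grep concat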

To play with concat, we will run the following script (concat.hql).

-- Use testdb
use testdb;
-- describe concat function
DESC FUNCTION concat;
-- describe table mytable2
DESC mytable2;
-- Perform a SELECT query concatenating fname and lname
SELECT CONCAT(fname,' ',lname) FROM mytable2;

We can execute the script with Beeline or the Hive CLI; here we will use Beeline.

user@computer:$ beeline -u jdbc:hive2://localhost:10000 -f Documents/concat.hql --verbose=false --showWarnings=false
scan complete in 8ms
Connecting to jdbc:hive2://localhost:10000
Connected to: Apache Hive (version 0.13.1-cdh5.2.0)
Driver: Hive JDBC (version 0.13.1-cdh5.2.0)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://localhost:10000> use testdb;
No rows affected (0.104 seconds)
0: jdbc:hive2://localhost:10000> -- Describe concat function
0: jdbc:hive2://localhost:10000> DESC FUNCTION concat;
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--+
| tab_name |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--+
| concat(str1, str2, ... strN) - returns the concatenation of str1, str2, ... strN or concat(bin1, bin2, ... binN) - returns the concatenation of bytes in binary data bin1, bin2, ... binN |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--+
1 row selected (0.162 seconds)
0: jdbc:hive2://localhost:10000>
0: jdbc:hive2://localhost:10000> DESC mytable2;
+-----------+------------+----------+--+
| col_name | data_type | comment |
+-----------+------------+----------+--+
| id | int | |
| lname | string | |
| fname | string | |
+-----------+------------+----------+--+
3 rows selected (0.133 seconds)
0: jdbc:hive2://localhost:10000>
0: jdbc:hive2://localhost:10000> SELECT CONCAT(fname,' ',lname) FROM mytable2;
+--------------------+--+
| _c0 |
+--------------------+--+
| John Doe |
| William Lancaster |
| Burp Gentoo |
+--------------------+--+
3 rows selected (18.848 seconds)
0: jdbc:hive2://localhost:10000>
Closing: 0: jdbc:hive2://localhost:10000
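
A related built-in worth knowing is CONCAT_WS, which takes the separator as its first argument instead of repeating it between fields. A hedged variant of the query above, run non-interactively:

user@computer:$ beeline -u jdbc:hive2://localhost:10000 -e "USE testdb; SELECT CONCAT_WS(' ', fname, lname) FROM mytable2;"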

We can also play with functions from inside the Hive CLI, as shown below with the sqrt function.

user@computer:$ hive
hive> DESC function sqrt;
OK
sqrt(x) - returns the square root of x
Time taken: 0.018 seconds, Fetched: 1 row(s)
hive> SELECT SQRT(64);
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201710131004_0373, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201710131004_0373
Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_201710131004_0373
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2017-12-06 12:33:16,450 Stage-1 map = 0%, reduce = 0%
2017-12-06 12:33:23,476 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.58 sec
2017-12-06 12:33:28,497 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 1.58 sec
MapReduce Total cumulative CPU time: 1 seconds 580 msec
Ended Job = job_201710131004_0373
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Cumulative CPU: 1.58 sec HDFS Read: 273 HDFS Write: 4 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 580 msec
OK
8.0
Time taken: 18.894 seconds, Fetched: 1 row(s)
hive>
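
The same query can also be run without entering the CLI at all; hive -e executes a statement and exits (a minimal sketch, output omitted):

user@computer:$ hive -e "SELECT SQRT(64);"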

More info about Hive functions can be found in the Hive documentation.

Run httpd with Docker

Below is the script:

#!/bin/bash

echo "Running httpd with docker."

docker run --rm -v "$PWD":/usr/local/apache2/htdocs httpd

We use the following options:
-v, --volume list    Bind mount a volume
--rm                 Automatically remove the container when it exits
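
Note that the script above does not publish any ports, so on a Linux host the server is only reachable through the container IP. A hedged variant that also publishes the container's port 80 on localhost:8080 (the 8080 choice is arbitrary):

docker run --rm -p 8080:80 -v "$PWD":/usr/local/apache2/htdocs httpd

With that, curl http://localhost:8080 would work as well.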

Quite simple, right? Execution below.

user@computer:$ bash ~/docker/run_httpd.sh
Running httpd with docker.
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.6. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.6. Set the 'ServerName' directive globally to suppress this message
[Thu Aug 17 22:22:55.249981 2017] [mpm_event:notice] [pid 1:tid 140029488904064] AH00489: Apache/2.4.27 (Unix) configured -- resuming normal operations
[Thu Aug 17 22:22:55.250079 2017] [core:notice] [pid 1:tid 140029488904064] AH00094: Command line: 'httpd -D FOREGROUND'

And we proceed to test.

user@computer:$ curl http://172.17.0.6
<HTML>
<HEAD>
First page
</HEAD>
<BODY>
Testing docker httpd



We get the index.html shown above because we are mapping /tmp/httpd (the current $PWD) to /usr/local/apache2/htdocs. In /tmp/httpd we had created an example index.html, whose contents curl returns.
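
For completeness, that example index.html could have been created along these lines (a sketch; the closing tags are an assumption, since the curl output above is truncated):

mkdir -p /tmp/httpd && cd /tmp/httpd
cat > index.html <<'EOF'
<HTML>
<HEAD>
First page
</HEAD>
<BODY>
Testing docker httpd
</BODY>
</HTML>
EOF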
More info in the Docker documentation.

Run a Python script with Docker

So I started playing with Docker and asked myself whether it would be possible to run a Python script with Docker. Well, the answer is yes. An example script is below.

#!/usr/bin/python

import sys
print "Running script!!"
print sys.version_info

Execution below:

user@computer:$ docker run -it --rm --name pythonscript -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:2 python script.py
Running script!!
sys.version_info(major=2, minor=7, micro=13, releaselevel='final', serial=0)
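
The print statements above are Python 2 syntax, which is why the python:2 image is used. A hedged sketch of the same idea against the python:3 image, using an inline command instead of a script file:

user@computer:$ docker run -it --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:3 python -c 'import sys; print("Running script!!"); print(sys.version_info)'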

References:
Docker documentation

Listing namenodes and datanodes in Hadoop

Ever wondered how to list the Hadoop namenodes? It is quite easy, as seen below.

user@computer:$ hdfs getconf -namenodes
hadoop02.mydomain.com hadoop01.mydomain.com
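
hdfs getconf can also print individual configuration keys, which is handy for scripting (a hedged example):

user@computer:$ hdfs getconf -confKey fs.defaultFS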

Now, if you want to list the datanodes, you can do that with dfsadmin.

user@computer:$ hdfs dfsadmin -printTopology
Rack: /default
192.168.0.15:50010 (hadoop15.mydomain.com)
192.168.0.16:50010 (hadoop16.mydomain.com)
192.168.0.17:50010 (hadoop17.mydomain.com)
192.168.0.18:50010 (hadoop18.mydomain.com)
192.168.0.19:50010 (hadoop19.mydomain.com)
192.168.0.20:50010 (hadoop20.mydomain.com)
192.168.0.21:50010 (hadoop21.mydomain.com)
192.168.0.22:50010 (hadoop22.mydomain.com)
192.168.0.23:50010 (hadoop23.mydomain.com)
192.168.0.24:50010 (hadoop24.mydomain.com)


The above command should be executed as a user with HDFS superuser permissions.
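
For more detail per datanode (capacity, usage, last contact), dfsadmin can also produce a full report; a hedged example, assuming hdfs is the superuser account:

user@computer:$ sudo -u hdfs hdfs dfsadmin -report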

Apache Flume to write web server logs to Hadoop

In this post we will use Flume to dump Apache web server logs into HDFS. We already have a web server running and Flume installed, but we still need to configure a target and a source.

We use the following configuration file for the target (collector) agent.

## TARGET AGENT ##  
## configuration file location:  /etc/flume-ng/conf
## START Agent: flume-ng agent -c conf -f /etc/flume-ng/conf/flume-trg-agent.conf -n collector

#http://flume.apache.org/FlumeUserGuide.html#avro-source
collector.sources = AvroIn  
collector.sources.AvroIn.type = avro  
collector.sources.AvroIn.bind = 0.0.0.0  
collector.sources.AvroIn.port = 4545  
collector.sources.AvroIn.channels = mc1 mc2

## Channels ##
## Source writes to 2 channels, one for each sink
collector.channels = mc1 mc2

#http://flume.apache.org/FlumeUserGuide.html#memory-channel

collector.channels.mc1.type = memory  
collector.channels.mc1.capacity = 100

collector.channels.mc2.type = memory  
collector.channels.mc2.capacity = 100

## Sinks ##
collector.sinks = LocalOut HadoopOut

## Write copy to Local Filesystem 
#http://flume.apache.org/FlumeUserGuide.html#file-roll-sink
collector.sinks.LocalOut.type = file_roll  
collector.sinks.LocalOut.sink.directory = /var/log/flume-ng  
collector.sinks.LocalOut.sink.rollInterval = 0  
collector.sinks.LocalOut.channel = mc1

## Write to HDFS
#http://flume.apache.org/FlumeUserGuide.html#hdfs-sink
collector.sinks.HadoopOut.type = hdfs  
collector.sinks.HadoopOut.channel = mc2  
collector.sinks.HadoopOut.hdfs.path = /user/training/flume/events/%{log_type}/%y%m%d  
collector.sinks.HadoopOut.hdfs.fileType = DataStream  
collector.sinks.HadoopOut.hdfs.writeFormat = Text  
collector.sinks.HadoopOut.hdfs.rollSize = 0  
collector.sinks.HadoopOut.hdfs.rollCount = 10000  
collector.sinks.HadoopOut.hdfs.rollInterval = 600
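
The collector above listens for Avro events on port 4545 and routes them into HDFS using the log_type header. On the web server side, a source agent has to tail the Apache logs, tag them with that header and forward them to the collector. A minimal sketch of such a source agent, written here as a heredoc; the agent name, file locations and collector hostname are assumptions, not taken from this post:

cat > /etc/flume-ng/conf/flume-src-agent.conf <<'EOF'
## SOURCE AGENT (sketch) ##
## START Agent: flume-ng agent -c conf -f /etc/flume-ng/conf/flume-src-agent.conf -n source_agent

source_agent.sources = apache_log
source_agent.sources.apache_log.type = exec
source_agent.sources.apache_log.command = tail -F /var/log/httpd/access_log
source_agent.sources.apache_log.channels = mc1
# Static interceptor sets the log_type header used in the collector's hdfs.path
source_agent.sources.apache_log.interceptors = itype
source_agent.sources.apache_log.interceptors.itype.type = static
source_agent.sources.apache_log.interceptors.itype.key = log_type
source_agent.sources.apache_log.interceptors.itype.value = apache_access

source_agent.channels = mc1
source_agent.channels.mc1.type = memory
source_agent.channels.mc1.capacity = 100

source_agent.sinks = avro_sink
source_agent.sinks.avro_sink.type = avro
source_agent.sinks.avro_sink.channel = mc1
source_agent.sinks.avro_sink.hostname = collector.mydomain.com
source_agent.sinks.avro_sink.port = 4545
EOF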
