ClickHouse/benchmark/hive/log/log_10m/log_hits_10m

start time: Wed Sep 11 18:04:18 MSK 2013
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_27940@mturlrep13_201309111804_333737265.txt
hive> ;
hive> quit;
times: 1
query: SELECT count(*) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_28397@mturlrep13_201309111804_2040079163.txt
hive> SELECT count(*) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0219
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 18:04:42,860 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:04:49,913 Stage-1 map = 4%, reduce = 0%
2013-09-11 18:04:52,925 Stage-1 map = 7%, reduce = 0%
2013-09-11 18:04:55,955 Stage-1 map = 14%, reduce = 0%, Cumulative CPU 27.32 sec
2013-09-11 18:04:56,962 Stage-1 map = 14%, reduce = 0%, Cumulative CPU 27.32 sec
2013-09-11 18:04:57,969 Stage-1 map = 14%, reduce = 0%, Cumulative CPU 27.32 sec
2013-09-11 18:04:58,976 Stage-1 map = 14%, reduce = 0%, Cumulative CPU 27.32 sec
2013-09-11 18:04:59,982 Stage-1 map = 22%, reduce = 0%, Cumulative CPU 27.32 sec
2013-09-11 18:05:00,988 Stage-1 map = 22%, reduce = 0%, Cumulative CPU 27.32 sec
2013-09-11 18:05:01,993 Stage-1 map = 22%, reduce = 0%, Cumulative CPU 27.32 sec
2013-09-11 18:05:02,998 Stage-1 map = 29%, reduce = 0%, Cumulative CPU 27.32 sec
2013-09-11 18:05:04,003 Stage-1 map = 29%, reduce = 0%, Cumulative CPU 27.32 sec
2013-09-11 18:05:05,008 Stage-1 map = 29%, reduce = 0%, Cumulative CPU 27.32 sec
2013-09-11 18:05:06,018 Stage-1 map = 36%, reduce = 0%, Cumulative CPU 27.32 sec
2013-09-11 18:05:07,024 Stage-1 map = 36%, reduce = 0%, Cumulative CPU 27.32 sec
2013-09-11 18:05:08,029 Stage-1 map = 36%, reduce = 0%, Cumulative CPU 27.32 sec
2013-09-11 18:05:09,035 Stage-1 map = 39%, reduce = 0%, Cumulative CPU 27.32 sec
2013-09-11 18:05:10,039 Stage-1 map = 39%, reduce = 0%, Cumulative CPU 27.32 sec
2013-09-11 18:05:11,044 Stage-1 map = 39%, reduce = 0%, Cumulative CPU 27.32 sec
2013-09-11 18:05:12,051 Stage-1 map = 47%, reduce = 0%, Cumulative CPU 51.0 sec
2013-09-11 18:05:13,056 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 78.63 sec
2013-09-11 18:05:14,062 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 78.63 sec
2013-09-11 18:05:15,067 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 78.63 sec
2013-09-11 18:05:16,072 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 78.63 sec
2013-09-11 18:05:17,077 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 78.63 sec
2013-09-11 18:05:18,083 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 78.63 sec
2013-09-11 18:05:19,088 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 78.63 sec
2013-09-11 18:05:20,093 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 78.63 sec
2013-09-11 18:05:21,098 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 78.63 sec
2013-09-11 18:05:22,103 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 78.63 sec
2013-09-11 18:05:23,108 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 78.63 sec
2013-09-11 18:05:24,112 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 78.63 sec
2013-09-11 18:05:25,117 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 78.63 sec
2013-09-11 18:05:26,122 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 78.63 sec
2013-09-11 18:05:27,127 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 78.63 sec
2013-09-11 18:05:28,131 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 78.63 sec
2013-09-11 18:05:29,136 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 78.63 sec
2013-09-11 18:05:30,141 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 78.63 sec
2013-09-11 18:05:31,151 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 78.63 sec
2013-09-11 18:05:32,159 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 78.63 sec
2013-09-11 18:05:33,163 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 78.63 sec
2013-09-11 18:05:34,168 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 78.63 sec
2013-09-11 18:05:35,173 Stage-1 map = 84%, reduce = 17%, Cumulative CPU 78.63 sec
2013-09-11 18:05:36,208 Stage-1 map = 84%, reduce = 17%, Cumulative CPU 78.63 sec
2013-09-11 18:05:37,212 Stage-1 map = 84%, reduce = 17%, Cumulative CPU 78.63 sec
2013-09-11 18:05:38,217 Stage-1 map = 84%, reduce = 17%, Cumulative CPU 78.63 sec
2013-09-11 18:05:39,222 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 78.63 sec
2013-09-11 18:05:40,227 Stage-1 map = 93%, reduce = 17%, Cumulative CPU 116.26 sec
2013-09-11 18:05:41,240 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 152.04 sec
2013-09-11 18:05:42,244 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 152.04 sec
2013-09-11 18:05:43,248 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 152.04 sec
2013-09-11 18:05:44,253 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 152.04 sec
2013-09-11 18:05:45,259 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 154.32 sec
2013-09-11 18:05:46,264 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 154.32 sec
2013-09-11 18:05:47,269 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 154.32 sec
MapReduce Total cumulative CPU time: 2 minutes 34 seconds 320 msec
Ended Job = job_201309101627_0219
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 154.32 sec HDFS Read: 1082943442 HDFS Write: 9 SUCCESS
Total MapReduce CPU Time Spent: 2 minutes 34 seconds 320 msec
OK
10000000
Time taken: 79.195 seconds, Fetched: 1 row(s)
hive> quit;
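The pattern above repeats throughout this log: a status check, then the benchmark query executed non-interactively, three times per query ("times: 1" through "times: 3"), with Hive itself reporting the wall-clock time ("Time taken: ..."). A minimal shell sketch of that loop is shown below as a dry run; it is a hypothetical reconstruction, not the original harness, and the real `hive -e` invocation is left commented out.

```shell
#!/bin/sh
# Hypothetical sketch of the benchmark loop this log records (not the
# original harness). Each query is run 3 times via non-interactive hive.
queries='SELECT count(*) FROM hits_10m;
SELECT count(*) FROM hits_10m WHERE AdvEngineID != 0;'

printf '%s\n' "$queries" | while IFS= read -r q; do
  for i in 1 2 3; do
    echo "times: $i"
    echo "query: $q"
    # hive -e "$q"    # real run; Hive prints its own "Time taken: ..." line
  done
done
```

In the actual log the harness drives an interactive `hive>` prompt via expect (hence the `spawn hive` lines and the doubled trailing semicolon), but the timing source is the same: Hive's own "Time taken" report.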
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_31005@mturlrep13_201309111805_665303300.txt
hive> ;
hive> quit;
times: 2
query: SELECT count(*) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_31457@mturlrep13_201309111805_1086613.txt
hive> SELECT count(*) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0220
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 18:06:01,601 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:06:08,629 Stage-1 map = 7%, reduce = 0%
2013-09-11 18:06:11,641 Stage-1 map = 14%, reduce = 0%
2013-09-11 18:06:18,671 Stage-1 map = 22%, reduce = 0%
2013-09-11 18:06:21,683 Stage-1 map = 29%, reduce = 0%
2013-09-11 18:06:24,695 Stage-1 map = 36%, reduce = 0%
2013-09-11 18:06:27,707 Stage-1 map = 43%, reduce = 0%
2013-09-11 18:06:28,718 Stage-1 map = 46%, reduce = 0%, Cumulative CPU 34.66 sec
2013-09-11 18:06:29,725 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 78.93 sec
2013-09-11 18:06:30,731 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 78.93 sec
2013-09-11 18:06:31,736 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 78.93 sec
2013-09-11 18:06:32,742 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 78.93 sec
2013-09-11 18:06:33,747 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 78.93 sec
2013-09-11 18:06:34,752 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 78.93 sec
2013-09-11 18:06:35,757 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 78.93 sec
2013-09-11 18:06:36,762 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 78.93 sec
2013-09-11 18:06:37,767 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 78.93 sec
2013-09-11 18:06:38,772 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 78.93 sec
2013-09-11 18:06:39,778 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 78.93 sec
2013-09-11 18:06:40,783 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 78.93 sec
2013-09-11 18:06:41,787 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 78.93 sec
2013-09-11 18:06:42,793 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 78.93 sec
2013-09-11 18:06:43,798 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 78.93 sec
2013-09-11 18:06:44,803 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 78.93 sec
2013-09-11 18:06:45,808 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 78.93 sec
2013-09-11 18:06:46,867 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 78.93 sec
2013-09-11 18:06:47,872 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 78.93 sec
2013-09-11 18:06:48,877 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 78.93 sec
2013-09-11 18:06:49,888 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 78.93 sec
2013-09-11 18:06:50,893 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 78.93 sec
2013-09-11 18:06:51,916 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 78.93 sec
2013-09-11 18:06:52,921 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 78.93 sec
2013-09-11 18:06:53,925 Stage-1 map = 93%, reduce = 17%, Cumulative CPU 112.4 sec
2013-09-11 18:06:54,931 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 112.4 sec
2013-09-11 18:06:55,940 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 154.14 sec
2013-09-11 18:06:56,945 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 154.14 sec
2013-09-11 18:06:57,950 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 157.93 sec
2013-09-11 18:06:58,954 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 157.93 sec
2013-09-11 18:06:59,959 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 157.93 sec
2013-09-11 18:07:00,963 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 157.93 sec
2013-09-11 18:07:01,970 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 159.3 sec
2013-09-11 18:07:02,975 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 159.3 sec
2013-09-11 18:07:03,987 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 159.3 sec
MapReduce Total cumulative CPU time: 2 minutes 39 seconds 300 msec
Ended Job = job_201309101627_0220
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 159.3 sec HDFS Read: 1082943442 HDFS Write: 9 SUCCESS
Total MapReduce CPU Time Spent: 2 minutes 39 seconds 300 msec
OK
10000000
Time taken: 71.026 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_836@mturlrep13_201309111807_561551889.txt
hive> ;
hive> quit;
times: 3
query: SELECT count(*) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_1313@mturlrep13_201309111807_1846613812.txt
hive> SELECT count(*) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0221
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 18:07:18,418 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:07:25,501 Stage-1 map = 7%, reduce = 0%
2013-09-11 18:07:31,523 Stage-1 map = 14%, reduce = 0%
2013-09-11 18:07:34,537 Stage-1 map = 22%, reduce = 0%
2013-09-11 18:07:37,547 Stage-1 map = 25%, reduce = 0%
2013-09-11 18:07:38,551 Stage-1 map = 29%, reduce = 0%
2013-09-11 18:07:40,559 Stage-1 map = 32%, reduce = 0%
2013-09-11 18:07:41,564 Stage-1 map = 36%, reduce = 0%
2013-09-11 18:07:43,571 Stage-1 map = 39%, reduce = 0%
2013-09-11 18:07:47,591 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 85.17 sec
2013-09-11 18:07:48,598 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 85.17 sec
2013-09-11 18:07:49,609 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 85.17 sec
2013-09-11 18:07:50,615 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 85.17 sec
2013-09-11 18:07:51,620 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 85.17 sec
2013-09-11 18:07:52,625 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 85.17 sec
2013-09-11 18:07:53,630 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 85.17 sec
2013-09-11 18:07:54,635 Stage-1 map = 54%, reduce = 17%, Cumulative CPU 85.17 sec
2013-09-11 18:07:55,640 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 107.69 sec
2013-09-11 18:07:56,660 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 107.69 sec
2013-09-11 18:07:57,668 Stage-1 map = 61%, reduce = 17%, Cumulative CPU 107.69 sec
2013-09-11 18:07:58,672 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 107.69 sec
2013-09-11 18:07:59,676 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 107.69 sec
2013-09-11 18:08:00,681 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 107.69 sec
2013-09-11 18:08:01,685 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 107.69 sec
2013-09-11 18:08:02,690 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 107.69 sec
2013-09-11 18:08:03,694 Stage-1 map = 69%, reduce = 17%, Cumulative CPU 107.69 sec
2013-09-11 18:08:04,699 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 107.69 sec
2013-09-11 18:08:05,703 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 107.69 sec
2013-09-11 18:08:06,708 Stage-1 map = 76%, reduce = 17%, Cumulative CPU 107.69 sec
2013-09-11 18:08:07,712 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 107.69 sec
2013-09-11 18:08:08,716 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 107.69 sec
2013-09-11 18:08:09,720 Stage-1 map = 84%, reduce = 17%, Cumulative CPU 107.69 sec
2013-09-11 18:08:10,725 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 107.69 sec
2013-09-11 18:08:11,729 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 107.69 sec
2013-09-11 18:08:12,734 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 128.12 sec
2013-09-11 18:08:13,739 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 128.12 sec
2013-09-11 18:08:14,743 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 152.45 sec
2013-09-11 18:08:15,747 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 152.45 sec
2013-09-11 18:08:16,752 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 152.45 sec
2013-09-11 18:08:17,859 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 152.45 sec
2013-09-11 18:08:18,863 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 152.45 sec
2013-09-11 18:08:19,868 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 152.45 sec
2013-09-11 18:08:20,874 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 154.14 sec
2013-09-11 18:08:21,879 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 154.14 sec
MapReduce Total cumulative CPU time: 2 minutes 34 seconds 140 msec
Ended Job = job_201309101627_0221
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 154.14 sec HDFS Read: 1082943442 HDFS Write: 9 SUCCESS
Total MapReduce CPU Time Spent: 2 minutes 34 seconds 140 msec
OK
10000000
Time taken: 71.919 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_3224@mturlrep13_201309111808_410663518.txt
hive> ;
hive> quit;
times: 1
query: SELECT count(*) FROM hits_10m WHERE AdvEngineID != 0;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_3703@mturlrep13_201309111808_1060047951.txt
hive> SELECT count(*) FROM hits_10m WHERE AdvEngineID != 0;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0222
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 18:08:42,497 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:08:47,524 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.16 sec
2013-09-11 18:08:48,531 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.16 sec
2013-09-11 18:08:49,539 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.16 sec
2013-09-11 18:08:50,546 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.16 sec
2013-09-11 18:08:51,551 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.16 sec
2013-09-11 18:08:52,557 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.16 sec
2013-09-11 18:08:53,563 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 23.25 sec
2013-09-11 18:08:54,568 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 23.25 sec
2013-09-11 18:08:55,574 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 23.94 sec
2013-09-11 18:08:56,582 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 25.2 sec
2013-09-11 18:08:57,609 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 25.2 sec
MapReduce Total cumulative CPU time: 25 seconds 200 msec
Ended Job = job_201309101627_0222
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 25.2 sec HDFS Read: 907716 HDFS Write: 7 SUCCESS
Total MapReduce CPU Time Spent: 25 seconds 200 msec
OK
171127
Time taken: 24.984 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_5325@mturlrep13_201309111808_1536943547.txt
hive> ;
hive> quit;
times: 2
query: SELECT count(*) FROM hits_10m WHERE AdvEngineID != 0;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_5808@mturlrep13_201309111809_1398241978.txt
hive> SELECT count(*) FROM hits_10m WHERE AdvEngineID != 0;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0223
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 18:09:11,922 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:09:15,946 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.33 sec
2013-09-11 18:09:16,954 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.33 sec
2013-09-11 18:09:17,965 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.33 sec
2013-09-11 18:09:18,971 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.33 sec
2013-09-11 18:09:19,977 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.33 sec
2013-09-11 18:09:20,983 Stage-1 map = 75%, reduce = 0%, Cumulative CPU 17.02 sec
2013-09-11 18:09:21,989 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 23.14 sec
2013-09-11 18:09:22,995 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 23.14 sec
2013-09-11 18:09:24,000 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 23.14 sec
2013-09-11 18:09:25,007 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 25.1 sec
2013-09-11 18:09:26,014 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 25.1 sec
MapReduce Total cumulative CPU time: 25 seconds 100 msec
Ended Job = job_201309101627_0223
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 25.1 sec HDFS Read: 907716 HDFS Write: 7 SUCCESS
Total MapReduce CPU Time Spent: 25 seconds 100 msec
OK
171127
Time taken: 22.815 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_7174@mturlrep13_201309111809_1436708878.txt
hive> ;
hive> quit;
times: 3
query: SELECT count(*) FROM hits_10m WHERE AdvEngineID != 0;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_7626@mturlrep13_201309111809_1436982971.txt
hive> SELECT count(*) FROM hits_10m WHERE AdvEngineID != 0;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0224
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 18:09:40,309 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:09:44,332 Stage-1 map = 25%, reduce = 0%, Cumulative CPU 6.0 sec
2013-09-11 18:09:45,340 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.79 sec
2013-09-11 18:09:46,348 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.79 sec
2013-09-11 18:09:47,354 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.79 sec
2013-09-11 18:09:48,360 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.79 sec
2013-09-11 18:09:49,367 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 23.73 sec
2013-09-11 18:09:50,372 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 23.73 sec
2013-09-11 18:09:51,378 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 23.73 sec
2013-09-11 18:09:52,384 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 23.73 sec
2013-09-11 18:09:53,392 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 25.56 sec
2013-09-11 18:09:54,397 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 25.56 sec
MapReduce Total cumulative CPU time: 25 seconds 560 msec
Ended Job = job_201309101627_0224
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 25.56 sec HDFS Read: 907716 HDFS Write: 7 SUCCESS
Total MapReduce CPU Time Spent: 25 seconds 560 msec
OK
171127
Time taken: 22.71 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_9005@mturlrep13_201309111810_1951461595.txt
hive> ;
hive> quit;
times: 1
query: SELECT sum(AdvEngineID), count(*), avg(ResolutionWidth) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_9573@mturlrep13_201309111810_1888842341.txt
hive> SELECT sum(AdvEngineID), count(*), avg(ResolutionWidth) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0225
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 18:10:15,477 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:10:23,514 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.01 sec
2013-09-11 18:10:24,522 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.01 sec
2013-09-11 18:10:25,528 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.01 sec
2013-09-11 18:10:26,534 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.01 sec
2013-09-11 18:10:27,540 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.01 sec
2013-09-11 18:10:28,545 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.01 sec
2013-09-11 18:10:29,552 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.01 sec
2013-09-11 18:10:30,558 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 33.01 sec
2013-09-11 18:10:31,563 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 33.01 sec
2013-09-11 18:10:32,568 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 33.01 sec
2013-09-11 18:10:33,573 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 33.01 sec
2013-09-11 18:10:34,578 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 33.01 sec
2013-09-11 18:10:35,583 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 33.01 sec
2013-09-11 18:10:36,590 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 35.1 sec
2013-09-11 18:10:37,595 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 35.1 sec
MapReduce Total cumulative CPU time: 35 seconds 100 msec
Ended Job = job_201309101627_0225
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 35.1 sec HDFS Read: 8109219 HDFS Write: 30 SUCCESS
Total MapReduce CPU Time Spent: 35 seconds 100 msec
OK
Time taken: 32.289 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_11687@mturlrep13_201309111810_1411263650.txt
hive> ;
hive> quit;
times: 2
query: SELECT sum(AdvEngineID), count(*), avg(ResolutionWidth) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_12147@mturlrep13_201309111810_146667107.txt
hive> SELECT sum(AdvEngineID), count(*), avg(ResolutionWidth) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0226
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 18:10:51,464 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:10:58,498 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.22 sec
2013-09-11 18:10:59,506 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.22 sec
2013-09-11 18:11:00,512 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.22 sec
2013-09-11 18:11:01,518 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.22 sec
2013-09-11 18:11:02,523 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.22 sec
2013-09-11 18:11:03,529 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.22 sec
2013-09-11 18:11:04,535 Stage-1 map = 75%, reduce = 0%, Cumulative CPU 23.49 sec
2013-09-11 18:11:05,540 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 31.49 sec
2013-09-11 18:11:06,545 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 31.49 sec
2013-09-11 18:11:07,550 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 31.49 sec
2013-09-11 18:11:08,555 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 31.49 sec
2013-09-11 18:11:09,560 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 31.49 sec
2013-09-11 18:11:10,565 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 31.49 sec
2013-09-11 18:11:11,572 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 33.46 sec
2013-09-11 18:11:12,577 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 33.46 sec
2013-09-11 18:11:13,582 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 33.46 sec
MapReduce Total cumulative CPU time: 33 seconds 460 msec
Ended Job = job_201309101627_0226
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 33.46 sec HDFS Read: 8109219 HDFS Write: 30 SUCCESS
Total MapReduce CPU Time Spent: 33 seconds 460 msec
OK
Time taken: 30.316 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_13663@mturlrep13_201309111811_609536671.txt
hive> ;
hive> quit;
times: 3
query: SELECT sum(AdvEngineID), count(*), avg(ResolutionWidth) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_14110@mturlrep13_201309111811_1164319988.txt
hive> SELECT sum(AdvEngineID), count(*), avg(ResolutionWidth) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0227
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 18:11:27,691 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:11:33,721 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.15 sec
2013-09-11 18:11:34,728 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.15 sec
2013-09-11 18:11:35,735 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.15 sec
2013-09-11 18:11:36,740 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.15 sec
2013-09-11 18:11:37,746 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.15 sec
2013-09-11 18:11:38,751 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.15 sec
2013-09-11 18:11:39,757 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.15 sec
2013-09-11 18:11:40,762 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 32.31 sec
2013-09-11 18:11:41,768 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 32.31 sec
2013-09-11 18:11:42,773 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 32.31 sec
2013-09-11 18:11:43,777 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 32.31 sec
2013-09-11 18:11:44,781 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 32.31 sec
2013-09-11 18:11:45,786 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 32.31 sec
2013-09-11 18:11:46,793 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 34.38 sec
2013-09-11 18:11:47,799 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 34.38 sec
2013-09-11 18:11:48,804 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 34.38 sec
MapReduce Total cumulative CPU time: 34 seconds 380 msec
Ended Job = job_201309101627_0227
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 34.38 sec HDFS Read: 8109219 HDFS Write: 30 SUCCESS
Total MapReduce CPU Time Spent: 34 seconds 380 msec
OK
Time taken: 29.584 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_15640@mturlrep13_201309111811_1787412572.txt
hive> ;
hive> quit;
times: 1
query: SELECT sum(UserID) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_16152@mturlrep13_201309111812_1844919592.txt
hive> SELECT sum(UserID) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0228
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 18:12:10,652 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:12:17,682 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.4 sec
2013-09-11 18:12:18,689 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.4 sec
2013-09-11 18:12:19,695 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.4 sec
2013-09-11 18:12:20,701 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.4 sec
2013-09-11 18:12:21,706 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.4 sec
2013-09-11 18:12:22,711 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.4 sec
2013-09-11 18:12:23,717 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.4 sec
2013-09-11 18:12:24,722 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 30.64 sec
2013-09-11 18:12:25,728 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 30.64 sec
2013-09-11 18:12:26,732 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 30.64 sec
2013-09-11 18:12:27,737 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 30.64 sec
2013-09-11 18:12:28,741 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 30.64 sec
2013-09-11 18:12:29,745 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 30.64 sec
2013-09-11 18:12:30,752 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 32.61 sec
2013-09-11 18:12:31,758 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 32.61 sec
2013-09-11 18:12:32,762 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 32.61 sec
MapReduce Total cumulative CPU time: 32 seconds 610 msec
Ended Job = job_201309101627_0228
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 32.61 sec HDFS Read: 57312623 HDFS Write: 21 SUCCESS
Total MapReduce CPU Time Spent: 32 seconds 610 msec
OK
-4662894107982093709
Time taken: 32.018 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_17658@mturlrep13_201309111812_1917152659.txt
hive> ;
hive> quit;
times: 2
query: SELECT sum(UserID) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_18081@mturlrep13_201309111812_969986372.txt
hive> SELECT sum(UserID) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0229
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 18:12:46,636 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:12:52,673 Stage-1 map = 25%, reduce = 0%, Cumulative CPU 7.62 sec
2013-09-11 18:12:53,681 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.58 sec
2013-09-11 18:12:54,687 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.58 sec
2013-09-11 18:12:55,692 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.58 sec
2013-09-11 18:12:56,698 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.58 sec
2013-09-11 18:12:57,704 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.58 sec
2013-09-11 18:12:58,710 Stage-1 map = 75%, reduce = 0%, Cumulative CPU 22.97 sec
2013-09-11 18:12:59,715 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 30.12 sec
2013-09-11 18:13:00,721 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 30.12 sec
2013-09-11 18:13:01,726 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 30.12 sec
2013-09-11 18:13:02,731 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 30.12 sec
2013-09-11 18:13:03,736 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 30.12 sec
2013-09-11 18:13:04,740 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 30.12 sec
2013-09-11 18:13:05,747 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 32.15 sec
2013-09-11 18:13:06,752 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 32.15 sec
2013-09-11 18:13:07,757 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 32.15 sec
MapReduce Total cumulative CPU time: 32 seconds 150 msec
Ended Job = job_201309101627_0229
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 32.15 sec HDFS Read: 57312623 HDFS Write: 21 SUCCESS
Total MapReduce CPU Time Spent: 32 seconds 150 msec
OK
-4662894107982093709
Time taken: 29.424 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_19726@mturlrep13_201309111813_601476946.txt
hive> ;
hive> quit;
times: 3
query: SELECT sum(UserID) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_20163@mturlrep13_201309111813_458490060.txt
hive> SELECT sum(UserID) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0230
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 18:13:22,398 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:13:28,444 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.87 sec
2013-09-11 18:13:29,452 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.87 sec
2013-09-11 18:13:30,459 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.87 sec
2013-09-11 18:13:31,464 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.87 sec
2013-09-11 18:13:32,470 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.87 sec
2013-09-11 18:13:33,476 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.87 sec
2013-09-11 18:13:34,481 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 29.52 sec
2013-09-11 18:13:35,486 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 29.52 sec
2013-09-11 18:13:36,493 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 31.4 sec
2013-09-11 18:13:37,500 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 31.4 sec
MapReduce Total cumulative CPU time: 31 seconds 400 msec
Ended Job = job_201309101627_0230
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 31.4 sec HDFS Read: 57312623 HDFS Write: 21 SUCCESS
Total MapReduce CPU Time Spent: 31 seconds 400 msec
OK
-4662894107982093709
Time taken: 23.314 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_21535@mturlrep13_201309111813_1532287977.txt
hive> ;
hive> quit;
times: 1
query: SELECT count(DISTINCT UserID) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_22021@mturlrep13_201309111813_1807628976.txt
hive> SELECT count(DISTINCT UserID) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0231
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 18:13:58,394 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:14:05,419 Stage-1 map = 43%, reduce = 0%
2013-09-11 18:14:08,436 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.36 sec
2013-09-11 18:14:09,442 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.36 sec
2013-09-11 18:14:10,449 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.36 sec
2013-09-11 18:14:11,454 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.36 sec
2013-09-11 18:14:12,460 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.36 sec
2013-09-11 18:14:13,466 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.36 sec
2013-09-11 18:14:14,471 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.36 sec
2013-09-11 18:14:15,518 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.36 sec
2013-09-11 18:14:16,523 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 27.36 sec
2013-09-11 18:14:17,529 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 27.36 sec
2013-09-11 18:14:18,535 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 40.6 sec
2013-09-11 18:14:19,551 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.46 sec
2013-09-11 18:14:20,556 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.46 sec
2013-09-11 18:14:21,561 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.46 sec
2013-09-11 18:14:22,566 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.46 sec
2013-09-11 18:14:23,572 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.46 sec
2013-09-11 18:14:24,576 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.46 sec
2013-09-11 18:14:25,581 Stage-1 map = 100%, reduce = 88%, Cumulative CPU 55.46 sec
2013-09-11 18:14:26,589 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 63.41 sec
2013-09-11 18:14:27,646 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 63.41 sec
MapReduce Total cumulative CPU time: 1 minutes 3 seconds 410 msec
Ended Job = job_201309101627_0231
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 63.41 sec HDFS Read: 57312623 HDFS Write: 8 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 3 seconds 410 msec
OK
2037258
Time taken: 39.018 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_23547@mturlrep13_201309111814_32240262.txt
hive> ;
hive> quit;
times: 2
query: SELECT count(DISTINCT UserID) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_24000@mturlrep13_201309111814_899622484.txt
hive> SELECT count(DISTINCT UserID) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0232
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 18:14:41,528 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:14:48,560 Stage-1 map = 43%, reduce = 0%
2013-09-11 18:14:50,573 Stage-1 map = 47%, reduce = 0%, Cumulative CPU 14.3 sec
2013-09-11 18:14:51,581 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.07 sec
2013-09-11 18:14:52,588 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.07 sec
2013-09-11 18:14:53,594 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.07 sec
2013-09-11 18:14:54,599 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.07 sec
2013-09-11 18:14:55,606 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.07 sec
2013-09-11 18:14:56,613 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.07 sec
2013-09-11 18:14:57,619 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.07 sec
2013-09-11 18:14:58,625 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 28.07 sec
2013-09-11 18:14:59,630 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 28.07 sec
2013-09-11 18:15:00,635 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.82 sec
2013-09-11 18:15:01,640 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.82 sec
2013-09-11 18:15:02,645 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.82 sec
2013-09-11 18:15:03,651 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.82 sec
2013-09-11 18:15:04,656 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.82 sec
2013-09-11 18:15:05,661 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.82 sec
2013-09-11 18:15:06,667 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.82 sec
2013-09-11 18:15:07,672 Stage-1 map = 100%, reduce = 88%, Cumulative CPU 55.82 sec
2013-09-11 18:15:08,679 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 63.98 sec
2013-09-11 18:15:09,685 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 63.98 sec
2013-09-11 18:15:10,690 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 63.98 sec
MapReduce Total cumulative CPU time: 1 minutes 3 seconds 980 msec
Ended Job = job_201309101627_0232
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 63.98 sec HDFS Read: 57312623 HDFS Write: 8 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 3 seconds 980 msec
OK
2037258
Time taken: 37.449 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_25822@mturlrep13_201309111815_360131093.txt
hive> ;
hive> quit;
times: 3
query: SELECT count(DISTINCT UserID) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_26668@mturlrep13_201309111815_984788128.txt
hive> SELECT count(DISTINCT UserID) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0233
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 18:15:23,620 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:15:31,655 Stage-1 map = 43%, reduce = 0%
2013-09-11 18:15:33,669 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.26 sec
2013-09-11 18:15:34,676 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.26 sec
2013-09-11 18:15:35,683 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.26 sec
2013-09-11 18:15:36,689 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.26 sec
2013-09-11 18:15:37,695 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.26 sec
2013-09-11 18:15:38,702 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.26 sec
2013-09-11 18:15:39,708 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.26 sec
2013-09-11 18:15:40,713 Stage-1 map = 72%, reduce = 17%, Cumulative CPU 24.26 sec
2013-09-11 18:15:41,718 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 24.26 sec
2013-09-11 18:15:42,723 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 36.34 sec
2013-09-11 18:15:43,727 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 49.93 sec
2013-09-11 18:15:44,732 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 49.93 sec
2013-09-11 18:15:45,737 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 49.93 sec
2013-09-11 18:15:46,741 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 49.93 sec
2013-09-11 18:15:47,746 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 49.93 sec
2013-09-11 18:15:48,751 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 49.93 sec
2013-09-11 18:15:49,755 Stage-1 map = 100%, reduce = 88%, Cumulative CPU 49.93 sec
2013-09-11 18:15:50,762 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 57.65 sec
2013-09-11 18:15:51,767 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 57.65 sec
2013-09-11 18:15:52,773 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 57.65 sec
MapReduce Total cumulative CPU time: 57 seconds 650 msec
Ended Job = job_201309101627_0233
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 57.65 sec HDFS Read: 57312623 HDFS Write: 8 SUCCESS
Total MapReduce CPU Time Spent: 57 seconds 650 msec
OK
2037258
Time taken: 36.433 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_28232@mturlrep13_201309111815_1215288015.txt
hive> ;
hive> quit;
times: 1
query: SELECT count(DISTINCT SearchPhrase) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_28709@mturlrep13_201309111816_1499620624.txt
hive> SELECT count(DISTINCT SearchPhrase) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0234
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 18:16:13,939 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:16:20,965 Stage-1 map = 43%, reduce = 0%
2013-09-11 18:16:22,978 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.31 sec
2013-09-11 18:16:23,985 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.31 sec
2013-09-11 18:16:24,992 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.31 sec
2013-09-11 18:16:25,998 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.31 sec
2013-09-11 18:16:27,004 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.31 sec
2013-09-11 18:16:28,010 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.31 sec
2013-09-11 18:16:29,016 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.31 sec
2013-09-11 18:16:30,021 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 22.31 sec
2013-09-11 18:16:31,026 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 32.71 sec
2013-09-11 18:16:32,031 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.17 sec
2013-09-11 18:16:33,035 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.17 sec
2013-09-11 18:16:34,040 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.17 sec
2013-09-11 18:16:35,045 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.17 sec
2013-09-11 18:16:36,050 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.17 sec
2013-09-11 18:16:37,055 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.17 sec
2013-09-11 18:16:38,060 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.17 sec
2013-09-11 18:16:39,065 Stage-1 map = 100%, reduce = 94%, Cumulative CPU 44.17 sec
2013-09-11 18:16:40,073 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 51.78 sec
2013-09-11 18:16:41,078 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 51.78 sec
2013-09-11 18:16:42,083 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 51.78 sec
MapReduce Total cumulative CPU time: 51 seconds 780 msec
Ended Job = job_201309101627_0234
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 51.78 sec HDFS Read: 27820105 HDFS Write: 8 SUCCESS
Total MapReduce CPU Time Spent: 51 seconds 780 msec
OK
1110413
Time taken: 38.322 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_30252@mturlrep13_201309111816_93736906.txt
hive> ;
hive> quit;
times: 2
query: SELECT count(DISTINCT SearchPhrase) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_30689@mturlrep13_201309111816_96459731.txt
hive> SELECT count(DISTINCT SearchPhrase) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0235
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 18:16:55,164 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:17:03,202 Stage-1 map = 43%, reduce = 0%
2013-09-11 18:17:04,212 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.55 sec
2013-09-11 18:17:05,219 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.55 sec
2013-09-11 18:17:06,226 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.55 sec
2013-09-11 18:17:07,232 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.55 sec
2013-09-11 18:17:08,237 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.55 sec
2013-09-11 18:17:09,243 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.55 sec
2013-09-11 18:17:10,250 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.55 sec
2013-09-11 18:17:11,255 Stage-1 map = 72%, reduce = 17%, Cumulative CPU 22.55 sec
2013-09-11 18:17:12,261 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.9 sec
2013-09-11 18:17:13,266 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.9 sec
2013-09-11 18:17:14,271 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.9 sec
2013-09-11 18:17:15,275 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.9 sec
2013-09-11 18:17:16,280 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.9 sec
2013-09-11 18:17:17,285 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.9 sec
2013-09-11 18:17:18,290 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.9 sec
2013-09-11 18:17:19,296 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.9 sec
2013-09-11 18:17:20,301 Stage-1 map = 100%, reduce = 93%, Cumulative CPU 44.9 sec
2013-09-11 18:17:21,309 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 52.41 sec
2013-09-11 18:17:22,314 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 52.41 sec
MapReduce Total cumulative CPU time: 52 seconds 410 msec
Ended Job = job_201309101627_0235
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 52.41 sec HDFS Read: 27820105 HDFS Write: 8 SUCCESS
Total MapReduce CPU Time Spent: 52 seconds 410 msec
OK
1110413
Time taken: 34.532 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_32218@mturlrep13_201309111817_1630955863.txt
hive> ;
hive> quit;
times: 3
query: SELECT count(DISTINCT SearchPhrase) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_32645@mturlrep13_201309111817_1064457391.txt
hive> SELECT count(DISTINCT SearchPhrase) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0236
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 18:17:35,486 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:17:43,516 Stage-1 map = 43%, reduce = 0%
2013-09-11 18:17:44,526 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.45 sec
2013-09-11 18:17:45,533 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.45 sec
2013-09-11 18:17:46,539 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.45 sec
2013-09-11 18:17:47,545 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.45 sec
2013-09-11 18:17:48,550 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.45 sec
2013-09-11 18:17:49,556 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.45 sec
2013-09-11 18:17:50,563 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.45 sec
2013-09-11 18:17:51,568 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 21.45 sec
2013-09-11 18:17:52,573 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 43.27 sec
2013-09-11 18:17:53,578 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 43.27 sec
2013-09-11 18:17:54,583 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 43.27 sec
2013-09-11 18:17:55,588 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 43.27 sec
2013-09-11 18:17:56,593 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.2 sec
2013-09-11 18:17:57,598 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.2 sec
2013-09-11 18:17:58,604 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.2 sec
2013-09-11 18:17:59,609 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.2 sec
2013-09-11 18:18:00,614 Stage-1 map = 100%, reduce = 92%, Cumulative CPU 44.2 sec
2013-09-11 18:18:01,621 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 50.93 sec
2013-09-11 18:18:02,626 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 50.93 sec
MapReduce Total cumulative CPU time: 50 seconds 930 msec
Ended Job = job_201309101627_0236
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 50.93 sec HDFS Read: 27820105 HDFS Write: 8 SUCCESS
Total MapReduce CPU Time Spent: 50 seconds 930 msec
OK
1110413
Time taken: 34.523 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_1798@mturlrep13_201309111818_1042447198.txt
hive> ;
hive> quit;
times: 1
query: SELECT min(EventDate), max(EventDate) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_2299@mturlrep13_201309111818_765489992.txt
hive> SELECT min(EventDate), max(EventDate) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0237
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 18:18:24,738 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:18:30,765 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.26 sec
2013-09-11 18:18:31,772 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.26 sec
2013-09-11 18:18:32,778 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.26 sec
2013-09-11 18:18:33,784 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.26 sec
2013-09-11 18:18:34,789 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.26 sec
2013-09-11 18:18:35,795 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.26 sec
2013-09-11 18:18:36,801 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 27.81 sec
2013-09-11 18:18:37,806 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 27.81 sec
2013-09-11 18:18:38,811 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 27.81 sec
2013-09-11 18:18:39,816 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 27.81 sec
2013-09-11 18:18:40,821 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 27.81 sec
2013-09-11 18:18:41,826 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 27.81 sec
2013-09-11 18:18:42,832 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 27.81 sec
2013-09-11 18:18:43,839 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 29.81 sec
2013-09-11 18:18:44,844 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 29.81 sec
2013-09-11 18:18:45,849 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 29.81 sec
MapReduce Total cumulative CPU time: 29 seconds 810 msec
Ended Job = job_201309101627_0237
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 29.81 sec HDFS Read: 597016 HDFS Write: 6 SUCCESS
Total MapReduce CPU Time Spent: 29 seconds 810 msec
OK
Time taken: 31.306 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_3746@mturlrep13_201309111818_1331650833.txt
hive> ;
hive> quit;
times: 2
query: SELECT min(EventDate), max(EventDate) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_4171@mturlrep13_201309111818_897231302.txt
hive> SELECT min(EventDate), max(EventDate) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0238
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 18:18:59,980 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:19:05,006 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 13.74 sec
2013-09-11 18:19:06,013 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 13.74 sec
2013-09-11 18:19:07,020 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 13.74 sec
2013-09-11 18:19:08,025 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 13.74 sec
2013-09-11 18:19:09,030 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 13.74 sec
2013-09-11 18:19:10,036 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 13.74 sec
2013-09-11 18:19:11,041 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 26.68 sec
2013-09-11 18:19:12,046 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 26.68 sec
2013-09-11 18:19:13,054 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 26.68 sec
2013-09-11 18:19:14,061 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 28.62 sec
2013-09-11 18:19:15,068 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 28.62 sec
MapReduce Total cumulative CPU time: 28 seconds 620 msec
Ended Job = job_201309101627_0238
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 28.62 sec HDFS Read: 597016 HDFS Write: 6 SUCCESS
Total MapReduce CPU Time Spent: 28 seconds 620 msec
OK
Time taken: 23.519 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_5847@mturlrep13_201309111819_1092177982.txt
hive> ;
hive> quit;
times: 3
query: SELECT min(EventDate), max(EventDate) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_6293@mturlrep13_201309111819_874118653.txt
hive> SELECT min(EventDate), max(EventDate) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0239
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 18:19:28,271 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:19:34,299 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 13.64 sec
2013-09-11 18:19:35,306 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 13.64 sec
2013-09-11 18:19:36,314 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 13.64 sec
2013-09-11 18:19:37,319 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 13.64 sec
2013-09-11 18:19:38,324 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 13.64 sec
2013-09-11 18:19:39,330 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 13.64 sec
2013-09-11 18:19:40,336 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 26.97 sec
2013-09-11 18:19:41,341 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 26.97 sec
2013-09-11 18:19:42,347 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 26.97 sec
2013-09-11 18:19:43,354 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 28.77 sec
2013-09-11 18:19:44,360 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 28.77 sec
MapReduce Total cumulative CPU time: 28 seconds 770 msec
Ended Job = job_201309101627_0239
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 28.77 sec HDFS Read: 597016 HDFS Write: 6 SUCCESS
Total MapReduce CPU Time Spent: 28 seconds 770 msec
OK
Time taken: 23.402 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_7692@mturlrep13_201309111819_276347275.txt
hive> ;
hive> quit;
times: 1
query: SELECT AdvEngineID, count(*) AS c FROM hits_10m WHERE AdvEngineID != 0 GROUP BY AdvEngineID ORDER BY c DESC;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_8135@mturlrep13_201309111819_385716024.txt
hive> SELECT AdvEngineID, count(*) AS c FROM hits_10m WHERE AdvEngineID != 0 GROUP BY AdvEngineID ORDER BY c DESC;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0240
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:20:04,638 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:20:10,670 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.51 sec
2013-09-11 18:20:11,678 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.51 sec
2013-09-11 18:20:12,685 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.51 sec
2013-09-11 18:20:13,692 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.51 sec
2013-09-11 18:20:14,698 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.51 sec
2013-09-11 18:20:15,704 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.51 sec
2013-09-11 18:20:16,710 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 24.33 sec
2013-09-11 18:20:17,716 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 24.33 sec
2013-09-11 18:20:18,724 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 28.27 sec
2013-09-11 18:20:19,731 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 28.27 sec
2013-09-11 18:20:20,737 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 28.27 sec
MapReduce Total cumulative CPU time: 28 seconds 270 msec
Ended Job = job_201309101627_0240
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0241
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 18:20:24,339 Stage-2 map = 0%, reduce = 0%
2013-09-11 18:20:25,345 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-11 18:20:26,351 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-11 18:20:27,356 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-11 18:20:28,361 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-11 18:20:29,366 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-11 18:20:30,371 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-11 18:20:31,376 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-11 18:20:32,391 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-11 18:20:33,396 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.18 sec
2013-09-11 18:20:34,401 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.18 sec
2013-09-11 18:20:35,407 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.18 sec
MapReduce Total cumulative CPU time: 2 seconds 180 msec
Ended Job = job_201309101627_0241
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 28.27 sec HDFS Read: 907716 HDFS Write: 384 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.18 sec HDFS Read: 1153 HDFS Write: 60 SUCCESS
Total MapReduce CPU Time Spent: 30 seconds 450 msec
OK
Time taken: 40.897 seconds, Fetched: 9 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_10933@mturlrep13_201309111820_1327597176.txt
hive> ;
hive> quit;
times: 2
query: SELECT AdvEngineID, count(*) AS c FROM hits_10m WHERE AdvEngineID != 0 GROUP BY AdvEngineID ORDER BY c DESC;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_11362@mturlrep13_201309111820_1775916404.txt
hive> SELECT AdvEngineID, count(*) AS c FROM hits_10m WHERE AdvEngineID != 0 GROUP BY AdvEngineID ORDER BY c DESC;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0242
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:20:50,332 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:20:54,363 Stage-1 map = 25%, reduce = 0%, Cumulative CPU 5.92 sec
2013-09-11 18:20:55,371 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.99 sec
2013-09-11 18:20:56,378 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.99 sec
2013-09-11 18:20:57,385 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.99 sec
2013-09-11 18:20:58,392 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.99 sec
2013-09-11 18:20:59,398 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 23.48 sec
2013-09-11 18:21:00,404 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 23.48 sec
2013-09-11 18:21:01,410 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 23.48 sec
2013-09-11 18:21:02,418 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 25.33 sec
2013-09-11 18:21:03,425 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 27.22 sec
2013-09-11 18:21:04,432 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 27.22 sec
MapReduce Total cumulative CPU time: 27 seconds 220 msec
Ended Job = job_201309101627_0242
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0243
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 18:21:06,978 Stage-2 map = 0%, reduce = 0%
2013-09-11 18:21:08,988 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.77 sec
2013-09-11 18:21:09,993 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.77 sec
2013-09-11 18:21:10,998 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.77 sec
2013-09-11 18:21:12,002 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.77 sec
2013-09-11 18:21:13,019 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.77 sec
2013-09-11 18:21:14,024 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.77 sec
2013-09-11 18:21:15,029 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.77 sec
2013-09-11 18:21:16,035 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 0.77 sec
2013-09-11 18:21:17,040 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.16 sec
2013-09-11 18:21:18,046 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.16 sec
2013-09-11 18:21:19,051 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.16 sec
MapReduce Total cumulative CPU time: 2 seconds 160 msec
Ended Job = job_201309101627_0243
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 27.22 sec HDFS Read: 907716 HDFS Write: 384 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.16 sec HDFS Read: 1153 HDFS Write: 60 SUCCESS
Total MapReduce CPU Time Spent: 29 seconds 380 msec
OK
Time taken: 36.981 seconds, Fetched: 9 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_13448@mturlrep13_201309111821_1010315846.txt
hive> ;
hive> quit;
times: 3
query: SELECT AdvEngineID, count(*) AS c FROM hits_10m WHERE AdvEngineID != 0 GROUP BY AdvEngineID ORDER BY c DESC;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_13877@mturlrep13_201309111821_1055383868.txt
hive> SELECT AdvEngineID, count(*) AS c FROM hits_10m WHERE AdvEngineID != 0 GROUP BY AdvEngineID ORDER BY c DESC;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0244
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:21:32,304 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:21:38,338 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.77 sec
2013-09-11 18:21:39,346 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.77 sec
2013-09-11 18:21:40,353 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.77 sec
2013-09-11 18:21:41,360 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.77 sec
2013-09-11 18:21:42,366 Stage-1 map = 75%, reduce = 0%, Cumulative CPU 17.34 sec
2013-09-11 18:21:43,373 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 23.62 sec
2013-09-11 18:21:44,379 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 23.62 sec
2013-09-11 18:21:45,385 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 23.62 sec
2013-09-11 18:21:46,393 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 27.25 sec
2013-09-11 18:21:47,400 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 27.25 sec
MapReduce Total cumulative CPU time: 27 seconds 250 msec
Ended Job = job_201309101627_0244
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0245
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 18:21:50,923 Stage-2 map = 0%, reduce = 0%
2013-09-11 18:21:52,932 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.77 sec
2013-09-11 18:21:53,938 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.77 sec
2013-09-11 18:21:54,943 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.77 sec
2013-09-11 18:21:55,948 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.77 sec
2013-09-11 18:21:56,954 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.77 sec
2013-09-11 18:21:57,960 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.77 sec
2013-09-11 18:21:58,964 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.77 sec
2013-09-11 18:21:59,970 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.13 sec
2013-09-11 18:22:00,976 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.13 sec
2013-09-11 18:22:01,982 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.13 sec
MapReduce Total cumulative CPU time: 2 seconds 130 msec
Ended Job = job_201309101627_0245
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 27.25 sec HDFS Read: 907716 HDFS Write: 384 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.13 sec HDFS Read: 1153 HDFS Write: 60 SUCCESS
Total MapReduce CPU Time Spent: 29 seconds 380 msec
OK
Time taken: 37.244 seconds, Fetched: 9 row(s)
hive> quit;
-- heavy filtering. Almost nothing is left after the filter, but we still do an aggregation.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_15957@mturlrep13_201309111822_240108876.txt
hive> ;
hive> quit;
times: 1
query: SELECT RegionID, count(DISTINCT UserID) AS u FROM hits_10m GROUP BY RegionID ORDER BY u DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_16471@mturlrep13_201309111822_1455089061.txt
hive> SELECT RegionID, count(DISTINCT UserID) AS u FROM hits_10m GROUP BY RegionID ORDER BY u DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0246
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:22:22,925 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:22:29,950 Stage-1 map = 43%, reduce = 0%
2013-09-11 18:22:32,967 Stage-1 map = 46%, reduce = 0%, Cumulative CPU 14.49 sec
2013-09-11 18:22:33,975 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.57 sec
2013-09-11 18:22:34,984 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.57 sec
2013-09-11 18:22:35,990 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.57 sec
2013-09-11 18:22:36,996 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.57 sec
2013-09-11 18:22:38,003 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.57 sec
2013-09-11 18:22:39,008 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.57 sec
2013-09-11 18:22:40,015 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.57 sec
2013-09-11 18:22:41,021 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 28.57 sec
2013-09-11 18:22:42,026 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 28.57 sec
2013-09-11 18:22:43,032 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 28.57 sec
2013-09-11 18:22:44,038 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 57.83 sec
2013-09-11 18:22:45,043 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 57.83 sec
2013-09-11 18:22:46,049 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 57.83 sec
2013-09-11 18:22:47,055 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 57.83 sec
2013-09-11 18:22:48,060 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 57.83 sec
2013-09-11 18:22:49,067 Stage-1 map = 100%, reduce = 58%, Cumulative CPU 63.29 sec
2013-09-11 18:22:50,073 Stage-1 map = 100%, reduce = 99%, Cumulative CPU 63.29 sec
2013-09-11 18:22:51,079 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 70.54 sec
2013-09-11 18:22:52,084 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 70.54 sec
MapReduce Total cumulative CPU time: 1 minutes 10 seconds 540 msec
Ended Job = job_201309101627_0246
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0247
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 18:22:54,572 Stage-2 map = 0%, reduce = 0%
2013-09-11 18:22:56,582 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.56 sec
2013-09-11 18:22:57,589 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.56 sec
2013-09-11 18:22:58,594 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.56 sec
2013-09-11 18:22:59,600 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.56 sec
2013-09-11 18:23:00,605 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.56 sec
2013-09-11 18:23:01,610 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.56 sec
2013-09-11 18:23:02,688 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.56 sec
2013-09-11 18:23:03,693 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.56 sec
2013-09-11 18:23:04,698 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.16 sec
2013-09-11 18:23:05,704 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.16 sec
2013-09-11 18:23:06,709 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.16 sec
MapReduce Total cumulative CPU time: 3 seconds 160 msec
Ended Job = job_201309101627_0247
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 70.54 sec HDFS Read: 67340015 HDFS Write: 100142 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 3.16 sec HDFS Read: 100911 HDFS Write: 96 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 13 seconds 700 msec
OK
Time taken: 53.83 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_18961@mturlrep13_201309111823_617403759.txt
hive> ;
hive> quit;
times: 2
query: SELECT RegionID, count(DISTINCT UserID) AS u FROM hits_10m GROUP BY RegionID ORDER BY u DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_19391@mturlrep13_201309111823_22985485.txt
hive> SELECT RegionID, count(DISTINCT UserID) AS u FROM hits_10m GROUP BY RegionID ORDER BY u DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0248
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:23:19,790 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:23:27,823 Stage-1 map = 43%, reduce = 0%
2013-09-11 18:23:29,837 Stage-1 map = 47%, reduce = 0%, Cumulative CPU 14.01 sec
2013-09-11 18:23:30,846 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.85 sec
2013-09-11 18:23:31,853 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.85 sec
2013-09-11 18:23:32,859 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.85 sec
2013-09-11 18:23:33,865 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.85 sec
2013-09-11 18:23:34,872 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.85 sec
2013-09-11 18:23:35,878 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.85 sec
2013-09-11 18:23:36,884 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.85 sec
2013-09-11 18:23:37,890 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 28.85 sec
2013-09-11 18:23:38,896 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 28.85 sec
2013-09-11 18:23:39,902 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 43.08 sec
2013-09-11 18:23:40,908 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 57.83 sec
2013-09-11 18:23:41,913 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 57.83 sec
2013-09-11 18:23:42,919 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 57.83 sec
2013-09-11 18:23:43,925 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 57.83 sec
2013-09-11 18:23:44,930 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 57.83 sec
2013-09-11 18:23:45,937 Stage-1 map = 100%, reduce = 58%, Cumulative CPU 63.27 sec
2013-09-11 18:23:46,943 Stage-1 map = 100%, reduce = 98%, Cumulative CPU 63.27 sec
2013-09-11 18:23:47,949 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 70.4 sec
2013-09-11 18:23:48,954 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 70.4 sec
MapReduce Total cumulative CPU time: 1 minutes 10 seconds 400 msec
Ended Job = job_201309101627_0248
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0249
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 18:23:51,398 Stage-2 map = 0%, reduce = 0%
2013-09-11 18:23:54,410 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.63 sec
2013-09-11 18:23:55,416 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.63 sec
2013-09-11 18:23:56,421 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.63 sec
2013-09-11 18:23:57,427 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.63 sec
2013-09-11 18:23:58,432 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.63 sec
2013-09-11 18:23:59,438 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.63 sec
2013-09-11 18:24:00,443 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.63 sec
2013-09-11 18:24:01,464 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 1.63 sec
2013-09-11 18:24:02,470 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.23 sec
2013-09-11 18:24:03,476 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.23 sec
MapReduce Total cumulative CPU time: 3 seconds 230 msec
Ended Job = job_201309101627_0249
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 70.4 sec HDFS Read: 67340015 HDFS Write: 100142 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 3.23 sec HDFS Read: 100911 HDFS Write: 96 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 13 seconds 630 msec
OK
Time taken: 51.073 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_21591@mturlrep13_201309111824_608372135.txt
hive> ;
hive> quit;
times: 3
query: SELECT RegionID, count(DISTINCT UserID) AS u FROM hits_10m GROUP BY RegionID ORDER BY u DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_22042@mturlrep13_201309111824_581555967.txt
hive> SELECT RegionID, count(DISTINCT UserID) AS u FROM hits_10m GROUP BY RegionID ORDER BY u DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0250
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:24:16,471 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:24:24,501 Stage-1 map = 43%, reduce = 0%
2013-09-11 18:24:26,515 Stage-1 map = 46%, reduce = 0%, Cumulative CPU 14.52 sec
2013-09-11 18:24:27,522 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.86 sec
2013-09-11 18:24:28,532 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.86 sec
2013-09-11 18:24:29,538 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.86 sec
2013-09-11 18:24:30,544 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.86 sec
2013-09-11 18:24:31,551 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.86 sec
2013-09-11 18:24:32,557 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.86 sec
2013-09-11 18:24:33,563 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.86 sec
2013-09-11 18:24:34,568 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 28.86 sec
2013-09-11 18:24:35,573 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 28.86 sec
2013-09-11 18:24:36,578 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 28.86 sec
2013-09-11 18:24:37,584 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 58.7 sec
2013-09-11 18:24:38,589 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 58.7 sec
2013-09-11 18:24:39,595 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 58.7 sec
2013-09-11 18:24:40,600 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 58.7 sec
2013-09-11 18:24:41,605 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 58.7 sec
2013-09-11 18:24:42,613 Stage-1 map = 100%, reduce = 58%, Cumulative CPU 64.21 sec
2013-09-11 18:24:43,618 Stage-1 map = 100%, reduce = 99%, Cumulative CPU 64.21 sec
2013-09-11 18:24:44,625 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 71.23 sec
2013-09-11 18:24:45,630 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 71.23 sec
MapReduce Total cumulative CPU time: 1 minutes 11 seconds 230 msec
Ended Job = job_201309101627_0250
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0251
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 18:24:49,129 Stage-2 map = 0%, reduce = 0%
2013-09-11 18:24:51,138 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.5 sec
2013-09-11 18:24:52,144 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.5 sec
2013-09-11 18:24:53,149 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.5 sec
2013-09-11 18:24:54,154 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.5 sec
2013-09-11 18:24:55,160 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.5 sec
2013-09-11 18:24:56,165 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.5 sec
2013-09-11 18:24:57,171 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.5 sec
2013-09-11 18:24:58,176 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.01 sec
2013-09-11 18:24:59,181 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.01 sec
2013-09-11 18:25:00,186 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.01 sec
MapReduce Total cumulative CPU time: 3 seconds 10 msec
Ended Job = job_201309101627_0251
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 71.23 sec HDFS Read: 67340015 HDFS Write: 100142 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 3.01 sec HDFS Read: 100911 HDFS Write: 96 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 14 seconds 240 msec
OK
Time taken: 51.083 seconds, Fetched: 10 row(s)
hive> quit;
-- aggregation, average number of keys.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_24272@mturlrep13_201309111825_819364546.txt
hive> ;
hive> quit;
times: 1
query: SELECT RegionID, sum(AdvEngineID), count(*) AS c, avg(ResolutionWidth), count(DISTINCT UserID) FROM hits_10m GROUP BY RegionID ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_25048@mturlrep13_201309111825_117221208.txt
hive> SELECT RegionID, sum(AdvEngineID), count(*) AS c, avg(ResolutionWidth), count(DISTINCT UserID) FROM hits_10m GROUP BY RegionID ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0252
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:25:21,952 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:25:28,978 Stage-1 map = 29%, reduce = 0%
2013-09-11 18:25:31,990 Stage-1 map = 43%, reduce = 0%
2013-09-11 18:25:35,008 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 35.17 sec
2013-09-11 18:25:36,015 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 35.17 sec
2013-09-11 18:25:37,023 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 35.17 sec
2013-09-11 18:25:38,029 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 35.17 sec
2013-09-11 18:25:39,034 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 35.17 sec
2013-09-11 18:25:40,040 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 35.17 sec
2013-09-11 18:25:41,045 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 35.17 sec
2013-09-11 18:25:42,050 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 35.17 sec
2013-09-11 18:25:43,055 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 35.17 sec
2013-09-11 18:25:44,060 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 35.17 sec
2013-09-11 18:25:45,065 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 35.17 sec
2013-09-11 18:25:46,070 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 35.17 sec
2013-09-11 18:25:47,098 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 35.17 sec
2013-09-11 18:25:48,104 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.29 sec
2013-09-11 18:25:49,109 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.29 sec
2013-09-11 18:25:50,114 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.29 sec
2013-09-11 18:25:51,119 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.29 sec
2013-09-11 18:25:52,124 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.29 sec
2013-09-11 18:25:53,129 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.29 sec
2013-09-11 18:25:54,135 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.29 sec
2013-09-11 18:25:55,140 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.29 sec
2013-09-11 18:25:56,146 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.29 sec
2013-09-11 18:25:57,154 Stage-1 map = 100%, reduce = 58%, Cumulative CPU 75.46 sec
2013-09-11 18:25:58,160 Stage-1 map = 100%, reduce = 96%, Cumulative CPU 83.41 sec
2013-09-11 18:25:59,225 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 84.91 sec
2013-09-11 18:26:00,231 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 84.91 sec
MapReduce Total cumulative CPU time: 1 minutes 24 seconds 910 msec
Ended Job = job_201309101627_0252
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0253
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 18:26:03,726 Stage-2 map = 0%, reduce = 0%
2013-09-11 18:26:05,734 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.54 sec
2013-09-11 18:26:06,738 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.54 sec
2013-09-11 18:26:07,742 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.54 sec
2013-09-11 18:26:08,747 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.54 sec
2013-09-11 18:26:09,786 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.54 sec
2013-09-11 18:26:10,790 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.54 sec
2013-09-11 18:26:11,795 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.54 sec
2013-09-11 18:26:12,800 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 1.54 sec
2013-09-11 18:26:13,805 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.09 sec
2013-09-11 18:26:14,810 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.09 sec
MapReduce Total cumulative CPU time: 3 seconds 90 msec
Ended Job = job_201309101627_0253
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 84.91 sec HDFS Read: 74853201 HDFS Write: 148871 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 3.09 sec HDFS Read: 149640 HDFS Write: 414 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 28 seconds 0 msec
OK
Time taken: 63.196 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_27967@mturlrep13_201309111826_889850944.txt
hive> ;
hive> quit;
times: 2
query: SELECT RegionID, sum(AdvEngineID), count(*) AS c, avg(ResolutionWidth), count(DISTINCT UserID) FROM hits_10m GROUP BY RegionID ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_28412@mturlrep13_201309111826_925180195.txt
hive> SELECT RegionID, sum(AdvEngineID), count(*) AS c, avg(ResolutionWidth), count(DISTINCT UserID) FROM hits_10m GROUP BY RegionID ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0254
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:26:33,670 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:26:40,696 Stage-1 map = 29%, reduce = 0%
2013-09-11 18:26:43,708 Stage-1 map = 43%, reduce = 0%
2013-09-11 18:26:46,726 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.26 sec
2013-09-11 18:26:47,733 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.26 sec
2013-09-11 18:26:48,741 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.26 sec
2013-09-11 18:26:49,748 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.26 sec
2013-09-11 18:26:50,753 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.26 sec
2013-09-11 18:26:51,760 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.26 sec
2013-09-11 18:26:52,765 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.26 sec
2013-09-11 18:26:53,771 Stage-1 map = 80%, reduce = 8%, Cumulative CPU 33.26 sec
2013-09-11 18:26:54,795 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 33.26 sec
2013-09-11 18:26:55,800 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 33.26 sec
2013-09-11 18:26:56,806 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 33.26 sec
2013-09-11 18:26:57,813 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 60.67 sec
2013-09-11 18:26:58,820 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 60.67 sec
2013-09-11 18:26:59,826 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.82 sec
2013-09-11 18:27:00,831 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.82 sec
2013-09-11 18:27:01,837 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.82 sec
2013-09-11 18:27:02,843 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.82 sec
2013-09-11 18:27:03,849 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.82 sec
2013-09-11 18:27:04,854 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.82 sec
2013-09-11 18:27:05,859 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.82 sec
2013-09-11 18:27:06,865 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.82 sec
2013-09-11 18:27:07,870 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.82 sec
2013-09-11 18:27:08,878 Stage-1 map = 100%, reduce = 58%, Cumulative CPU 75.48 sec
2013-09-11 18:27:09,884 Stage-1 map = 100%, reduce = 95%, Cumulative CPU 75.48 sec
2013-09-11 18:27:10,890 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 84.13 sec
2013-09-11 18:27:11,896 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 84.13 sec
MapReduce Total cumulative CPU time: 1 minutes 24 seconds 130 msec
Ended Job = job_201309101627_0254
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0255
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 18:27:15,439 Stage-2 map = 0%, reduce = 0%
2013-09-11 18:27:17,447 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.56 sec
2013-09-11 18:27:18,451 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.56 sec
2013-09-11 18:27:19,457 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.56 sec
2013-09-11 18:27:20,462 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.56 sec
2013-09-11 18:27:21,467 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.56 sec
2013-09-11 18:27:22,472 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.56 sec
2013-09-11 18:27:23,477 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.56 sec
2013-09-11 18:27:24,482 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.15 sec
2013-09-11 18:27:25,487 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.15 sec
2013-09-11 18:27:26,493 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.15 sec
MapReduce Total cumulative CPU time: 3 seconds 150 msec
Ended Job = job_201309101627_0255
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 84.13 sec HDFS Read: 74853201 HDFS Write: 148871 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 3.15 sec HDFS Read: 149640 HDFS Write: 414 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 27 seconds 280 msec
OK
Time taken: 65.921 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_31044@mturlrep13_201309111827_588046604.txt
hive> ;
hive> quit;
times: 3
query: SELECT RegionID, sum(AdvEngineID), count(*) AS c, avg(ResolutionWidth), count(DISTINCT UserID) FROM hits_10m GROUP BY RegionID ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_31497@mturlrep13_201309111827_1421279621.txt
hive> SELECT RegionID, sum(AdvEngineID), count(*) AS c, avg(ResolutionWidth), count(DISTINCT UserID) FROM hits_10m GROUP BY RegionID ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0256
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:27:41,761 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:27:49,791 Stage-1 map = 36%, reduce = 0%
2013-09-11 18:27:52,803 Stage-1 map = 43%, reduce = 0%
2013-09-11 18:27:54,818 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.22 sec
2013-09-11 18:27:55,825 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.22 sec
2013-09-11 18:27:56,833 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.22 sec
2013-09-11 18:27:57,840 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.22 sec
2013-09-11 18:27:58,846 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.22 sec
2013-09-11 18:27:59,852 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.22 sec
2013-09-11 18:28:00,858 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.22 sec
2013-09-11 18:28:01,864 Stage-1 map = 88%, reduce = 8%, Cumulative CPU 34.22 sec
2013-09-11 18:28:02,869 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 34.22 sec
2013-09-11 18:28:03,875 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 34.22 sec
2013-09-11 18:28:04,881 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 34.22 sec
2013-09-11 18:28:05,887 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 34.22 sec
2013-09-11 18:28:06,892 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 50.09 sec
2013-09-11 18:28:07,897 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.82 sec
2013-09-11 18:28:08,903 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.82 sec
2013-09-11 18:28:09,908 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.82 sec
2013-09-11 18:28:10,914 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.82 sec
2013-09-11 18:28:11,920 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.82 sec
2013-09-11 18:28:12,925 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.82 sec
2013-09-11 18:28:13,930 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.82 sec
2013-09-11 18:28:14,936 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.82 sec
2013-09-11 18:28:15,941 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.82 sec
2013-09-11 18:28:16,948 Stage-1 map = 100%, reduce = 58%, Cumulative CPU 76.29 sec
2013-09-11 18:28:17,953 Stage-1 map = 100%, reduce = 94%, Cumulative CPU 76.29 sec
2013-09-11 18:28:19,811 Stage-1 map = 100%, reduce = 94%, Cumulative CPU 76.29 sec
2013-09-11 18:28:20,994 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 86.22 sec
2013-09-11 18:28:22,073 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 86.22 sec
2013-09-11 18:28:23,078 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 86.22 sec
2013-09-11 18:28:24,083 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 86.22 sec
MapReduce Total cumulative CPU time: 1 minutes 26 seconds 220 msec
Ended Job = job_201309101627_0256
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0257
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 18:28:30,225 Stage-2 map = 0%, reduce = 0%
2013-09-11 18:28:35,317 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.67 sec
2013-09-11 18:28:36,331 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.67 sec
2013-09-11 18:28:37,335 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.67 sec
2013-09-11 18:28:38,340 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.67 sec
2013-09-11 18:28:39,345 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.67 sec
2013-09-11 18:28:40,350 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.67 sec
2013-09-11 18:28:41,354 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.67 sec
2013-09-11 18:28:42,359 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.67 sec
2013-09-11 18:28:43,364 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.67 sec
2013-09-11 18:28:44,369 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.67 sec
2013-09-11 18:28:45,374 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.67 sec
2013-09-11 18:28:46,379 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.31 sec
2013-09-11 18:28:47,384 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.31 sec
2013-09-11 18:28:48,389 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.31 sec
2013-09-11 18:28:49,395 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.31 sec
2013-09-11 18:28:50,400 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.31 sec
MapReduce Total cumulative CPU time: 3 seconds 310 msec
Ended Job = job_201309101627_0257
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 86.22 sec HDFS Read: 74853201 HDFS Write: 148871 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 3.31 sec HDFS Read: 149640 HDFS Write: 414 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 29 seconds 530 msec
OK
Time taken: 77.923 seconds, Fetched: 10 row(s)
hive> quit;
-- aggregation, average number of keys, several aggregate functions.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_2540@mturlrep13_201309111828_1228944488.txt
hive> ;
hive> quit;
times: 1
query: SELECT MobilePhoneModel, count(DISTINCT UserID) AS u FROM hits_10m WHERE MobilePhoneModel != '' GROUP BY MobilePhoneModel ORDER BY u DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_3275@mturlrep13_201309111829_1496747935.txt
hive> SELECT MobilePhoneModel, count(DISTINCT UserID) AS u FROM hits_10m WHERE MobilePhoneModel != '' GROUP BY MobilePhoneModel ORDER BY u DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0258
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:29:20,524 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:29:27,560 Stage-1 map = 46%, reduce = 0%, Cumulative CPU 6.02 sec
2013-09-11 18:29:28,569 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.58 sec
2013-09-11 18:29:29,577 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.58 sec
2013-09-11 18:29:30,585 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.58 sec
2013-09-11 18:29:31,592 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.58 sec
2013-09-11 18:29:32,598 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.58 sec
2013-09-11 18:29:33,605 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.58 sec
2013-09-11 18:29:34,611 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.58 sec
2013-09-11 18:29:35,618 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 28.88 sec
2013-09-11 18:29:36,623 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 28.88 sec
2013-09-11 18:29:37,629 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 28.88 sec
2013-09-11 18:29:38,634 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 28.88 sec
2013-09-11 18:29:39,640 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 28.88 sec
2013-09-11 18:29:40,645 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 28.88 sec
2013-09-11 18:29:41,653 Stage-1 map = 100%, reduce = 58%, Cumulative CPU 32.47 sec
2013-09-11 18:29:42,660 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 36.02 sec
2013-09-11 18:29:43,666 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 36.02 sec
MapReduce Total cumulative CPU time: 36 seconds 20 msec
Ended Job = job_201309101627_0258
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0259
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 18:29:49,843 Stage-2 map = 0%, reduce = 0%
2013-09-11 18:29:52,856 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.95 sec
2013-09-11 18:29:53,926 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.95 sec
2013-09-11 18:29:54,932 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.95 sec
2013-09-11 18:29:55,937 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.95 sec
2013-09-11 18:29:56,943 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.95 sec
2013-09-11 18:29:57,948 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.95 sec
2013-09-11 18:29:58,953 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.95 sec
2013-09-11 18:29:59,958 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.95 sec
2013-09-11 18:30:00,963 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.95 sec
2013-09-11 18:30:01,969 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.95 sec
2013-09-11 18:30:02,974 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.95 sec
2013-09-11 18:30:03,980 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.95 sec
2013-09-11 18:30:04,985 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.95 sec
2013-09-11 18:30:05,990 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.43 sec
2013-09-11 18:30:06,995 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.43 sec
2013-09-11 18:30:08,001 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.43 sec
2013-09-11 18:30:09,008 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.43 sec
2013-09-11 18:30:10,015 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.43 sec
MapReduce Total cumulative CPU time: 2 seconds 430 msec
Ended Job = job_201309101627_0259
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 36.02 sec HDFS Read: 58273488 HDFS Write: 21128 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.43 sec HDFS Read: 21897 HDFS Write: 127 SUCCESS
Total MapReduce CPU Time Spent: 38 seconds 450 msec
OK
Time taken: 63.358 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_7226@mturlrep13_201309111830_168478607.txt
hive> ;
hive> quit;
times: 2
query: SELECT MobilePhoneModel, count(DISTINCT UserID) AS u FROM hits_10m WHERE MobilePhoneModel != '' GROUP BY MobilePhoneModel ORDER BY u DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_8250@mturlrep13_201309111830_1240423626.txt
hive> SELECT MobilePhoneModel, count(DISTINCT UserID) AS u FROM hits_10m WHERE MobilePhoneModel != '' GROUP BY MobilePhoneModel ORDER BY u DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0260
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:30:47,475 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:30:55,514 Stage-1 map = 43%, reduce = 0%
2013-09-11 18:30:56,528 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.56 sec
2013-09-11 18:30:57,535 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.56 sec
2013-09-11 18:30:58,544 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.56 sec
2013-09-11 18:30:59,550 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.56 sec
2013-09-11 18:31:00,556 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.56 sec
2013-09-11 18:31:01,563 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.56 sec
2013-09-11 18:31:02,574 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.56 sec
2013-09-11 18:31:03,580 Stage-1 map = 75%, reduce = 0%, Cumulative CPU 25.74 sec
2013-09-11 18:31:04,587 Stage-1 map = 100%, reduce = 8%, Cumulative CPU 33.71 sec
2013-09-11 18:31:05,593 Stage-1 map = 100%, reduce = 8%, Cumulative CPU 33.71 sec
2013-09-11 18:31:06,599 Stage-1 map = 100%, reduce = 8%, Cumulative CPU 33.71 sec
2013-09-11 18:31:07,605 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 33.71 sec
2013-09-11 18:31:08,610 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 33.71 sec
2013-09-11 18:31:09,617 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 33.71 sec
2013-09-11 18:31:10,623 Stage-1 map = 100%, reduce = 42%, Cumulative CPU 33.71 sec
2013-09-11 18:31:11,643 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 40.64 sec
2013-09-11 18:31:12,649 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 40.64 sec
2013-09-11 18:31:13,655 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 40.64 sec
2013-09-11 18:31:14,661 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 40.64 sec
2013-09-11 18:31:15,668 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 40.64 sec
2013-09-11 18:31:16,675 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 40.64 sec
MapReduce Total cumulative CPU time: 40 seconds 640 msec
Ended Job = job_201309101627_0260
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0261
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 18:31:21,465 Stage-2 map = 0%, reduce = 0%
2013-09-11 18:31:23,474 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.97 sec
2013-09-11 18:31:24,479 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.97 sec
2013-09-11 18:31:25,485 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.97 sec
2013-09-11 18:31:26,490 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.97 sec
2013-09-11 18:31:27,495 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.97 sec
2013-09-11 18:31:28,501 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.97 sec
2013-09-11 18:31:29,506 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.97 sec
2013-09-11 18:31:30,511 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.51 sec
2013-09-11 18:31:31,516 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.51 sec
2013-09-11 18:31:32,521 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.51 sec
2013-09-11 18:31:33,527 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.51 sec
MapReduce Total cumulative CPU time: 2 seconds 510 msec
Ended Job = job_201309101627_0261
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 40.64 sec HDFS Read: 58273488 HDFS Write: 21128 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.51 sec HDFS Read: 21897 HDFS Write: 127 SUCCESS
Total MapReduce CPU Time Spent: 43 seconds 150 msec
OK
Time taken: 71.328 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_11493@mturlrep13_201309111831_1720637849.txt
hive> ;
hive> quit;
times: 3
query: SELECT MobilePhoneModel, count(DISTINCT UserID) AS u FROM hits_10m WHERE MobilePhoneModel != '' GROUP BY MobilePhoneModel ORDER BY u DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_12279@mturlrep13_201309111831_901895827.txt
hive> SELECT MobilePhoneModel, count(DISTINCT UserID) AS u FROM hits_10m WHERE MobilePhoneModel != '' GROUP BY MobilePhoneModel ORDER BY u DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0262
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:32:03,855 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:32:10,590 Stage-1 map = 36%, reduce = 0%
2013-09-11 18:32:12,950 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.86 sec
2013-09-11 18:32:13,958 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.86 sec
2013-09-11 18:32:14,965 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.86 sec
2013-09-11 18:32:15,972 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.86 sec
2013-09-11 18:32:16,978 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.86 sec
2013-09-11 18:32:17,984 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.86 sec
2013-09-11 18:32:18,990 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.86 sec
2013-09-11 18:32:19,997 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.86 sec
2013-09-11 18:32:21,002 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.86 sec
2013-09-11 18:32:22,009 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.86 sec
2013-09-11 18:32:23,014 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.86 sec
2013-09-11 18:32:24,021 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.86 sec
2013-09-11 18:32:25,027 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.86 sec
2013-09-11 18:32:26,034 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.86 sec
2013-09-11 18:32:27,040 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.86 sec
2013-09-11 18:32:28,046 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 20.25 sec
2013-09-11 18:32:29,052 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 28.79 sec
2013-09-11 18:32:30,059 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 28.79 sec
2013-09-11 18:32:31,064 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 28.79 sec
2013-09-11 18:32:32,070 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 28.79 sec
2013-09-11 18:32:33,076 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 28.79 sec
2013-09-11 18:32:34,081 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 28.79 sec
2013-09-11 18:32:35,088 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 28.79 sec
2013-09-11 18:32:36,120 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 35.98 sec
2013-09-11 18:32:37,126 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 35.98 sec
2013-09-11 18:32:38,133 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 35.98 sec
2013-09-11 18:32:39,226 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 35.98 sec
MapReduce Total cumulative CPU time: 35 seconds 980 msec
Ended Job = job_201309101627_0262
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0263
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 18:32:45,523 Stage-2 map = 0%, reduce = 0%
2013-09-11 18:32:47,542 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.93 sec
2013-09-11 18:32:48,547 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.93 sec
2013-09-11 18:32:50,032 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.93 sec
2013-09-11 18:32:51,036 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.93 sec
2013-09-11 18:32:52,041 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.93 sec
2013-09-11 18:32:53,046 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.93 sec
2013-09-11 18:32:54,050 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.93 sec
2013-09-11 18:32:55,055 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 0.93 sec
2013-09-11 18:32:56,061 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 0.93 sec
2013-09-11 18:32:57,066 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.44 sec
2013-09-11 18:32:58,071 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.44 sec
2013-09-11 18:32:59,076 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.44 sec
2013-09-11 18:33:00,081 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.44 sec
2013-09-11 18:33:01,087 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.44 sec
2013-09-11 18:33:02,092 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.44 sec
MapReduce Total cumulative CPU time: 2 seconds 440 msec
Ended Job = job_201309101627_0263
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 35.98 sec HDFS Read: 58273488 HDFS Write: 21128 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.44 sec HDFS Read: 21897 HDFS Write: 127 SUCCESS
Total MapReduce CPU Time Spent: 38 seconds 420 msec
OK
Time taken: 75.971 seconds, Fetched: 10 row(s)
hive> quit;
-- heavy filtering on strings, then aggregation by strings.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_16116@mturlrep13_201309111833_605870459.txt
hive> ;
hive> quit;
times: 1
query: SELECT MobilePhone, MobilePhoneModel, count(DISTINCT UserID) AS u FROM hits_10m WHERE MobilePhoneModel != '' GROUP BY MobilePhone, MobilePhoneModel ORDER BY u DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_16886@mturlrep13_201309111833_202130159.txt
hive> SELECT MobilePhone, MobilePhoneModel, count(DISTINCT UserID) AS u FROM hits_10m WHERE MobilePhoneModel != '' GROUP BY MobilePhone, MobilePhoneModel ORDER BY u DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0264
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:33:29,621 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:33:37,775 Stage-1 map = 36%, reduce = 0%
2013-09-11 18:33:38,846 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.67 sec
2013-09-11 18:33:39,854 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.67 sec
2013-09-11 18:33:40,861 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.67 sec
2013-09-11 18:33:41,868 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.67 sec
2013-09-11 18:33:42,875 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.67 sec
2013-09-11 18:33:43,882 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.67 sec
2013-09-11 18:33:44,889 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.67 sec
2013-09-11 18:33:45,895 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.67 sec
2013-09-11 18:33:46,900 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.67 sec
2013-09-11 18:33:47,906 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.67 sec
2013-09-11 18:33:48,913 Stage-1 map = 88%, reduce = 0%, Cumulative CPU 16.67 sec
2013-09-11 18:33:49,918 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 33.02 sec
2013-09-11 18:33:50,924 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 33.02 sec
2013-09-11 18:33:51,930 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 33.02 sec
2013-09-11 18:33:52,936 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 33.02 sec
2013-09-11 18:33:53,941 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 33.02 sec
2013-09-11 18:33:55,094 Stage-1 map = 100%, reduce = 50%, Cumulative CPU 33.02 sec
2013-09-11 18:33:56,113 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 40.54 sec
2013-09-11 18:33:57,119 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 40.54 sec
2013-09-11 18:33:58,125 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 40.54 sec
2013-09-11 18:33:59,132 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 40.54 sec
2013-09-11 18:34:00,138 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 40.54 sec
2013-09-11 18:34:01,144 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 40.54 sec
MapReduce Total cumulative CPU time: 40 seconds 540 msec
Ended Job = job_201309101627_0264
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0265
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 18:34:04,726 Stage-2 map = 0%, reduce = 0%
2013-09-11 18:34:06,735 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.95 sec
2013-09-11 18:34:07,741 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.95 sec
2013-09-11 18:34:08,746 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.95 sec
2013-09-11 18:34:09,751 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.95 sec
2013-09-11 18:34:10,756 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.95 sec
2013-09-11 18:34:11,761 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.95 sec
2013-09-11 18:34:12,766 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.95 sec
2013-09-11 18:34:13,771 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.95 sec
2013-09-11 18:34:14,777 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.45 sec
2013-09-11 18:34:15,783 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.45 sec
2013-09-11 18:34:16,788 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.45 sec
MapReduce Total cumulative CPU time: 2 seconds 450 msec
Ended Job = job_201309101627_0265
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 40.54 sec HDFS Read: 59259422 HDFS Write: 22710 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.45 sec HDFS Read: 23479 HDFS Write: 149 SUCCESS
Total MapReduce CPU Time Spent: 42 seconds 990 msec
OK
Time taken: 60.019 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_20052@mturlrep13_201309111834_786769685.txt
hive> ;
hive> quit;
times: 2
query: SELECT MobilePhone, MobilePhoneModel, count(DISTINCT UserID) AS u FROM hits_10m WHERE MobilePhoneModel != '' GROUP BY MobilePhone, MobilePhoneModel ORDER BY u DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_20585@mturlrep13_201309111834_648955611.txt
hive> SELECT MobilePhone, MobilePhoneModel, count(DISTINCT UserID) AS u FROM hits_10m WHERE MobilePhoneModel != '' GROUP BY MobilePhone, MobilePhoneModel ORDER BY u DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0266
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:34:37,882 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:34:45,163 Stage-1 map = 21%, reduce = 0%
2013-09-11 18:34:46,294 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.03 sec
2013-09-11 18:34:47,302 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.03 sec
2013-09-11 18:34:48,310 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.03 sec
2013-09-11 18:34:49,317 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.03 sec
2013-09-11 18:34:50,323 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.03 sec
2013-09-11 18:34:51,330 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.03 sec
2013-09-11 18:34:52,338 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.03 sec
2013-09-11 18:34:53,345 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.03 sec
2013-09-11 18:34:54,351 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.03 sec
2013-09-11 18:34:55,357 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.03 sec
2013-09-11 18:34:56,364 Stage-1 map = 92%, reduce = 17%, Cumulative CPU 15.03 sec
2013-09-11 18:34:57,370 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 31.28 sec
2013-09-11 18:34:58,377 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 31.28 sec
2013-09-11 18:34:59,383 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 32.94 sec
2013-09-11 18:35:00,389 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 32.94 sec
2013-09-11 18:35:01,395 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 32.94 sec
2013-09-11 18:35:02,401 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 32.94 sec
2013-09-11 18:35:03,410 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 39.58 sec
2013-09-11 18:35:04,417 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 39.58 sec
2013-09-11 18:35:05,447 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 39.58 sec
MapReduce Total cumulative CPU time: 39 seconds 580 msec
Ended Job = job_201309101627_0266
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0267
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 18:35:12,739 Stage-2 map = 0%, reduce = 0%
2013-09-11 18:35:14,998 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.97 sec
2013-09-11 18:35:16,004 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.97 sec
2013-09-11 18:35:17,010 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.97 sec
2013-09-11 18:35:18,015 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.97 sec
2013-09-11 18:35:19,020 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.97 sec
2013-09-11 18:35:20,025 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.97 sec
2013-09-11 18:35:21,031 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.97 sec
2013-09-11 18:35:22,036 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.97 sec
2013-09-11 18:35:23,042 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.97 sec
2013-09-11 18:35:24,048 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.45 sec
2013-09-11 18:35:25,053 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.45 sec
2013-09-11 18:35:26,058 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.45 sec
2013-09-11 18:35:27,064 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.45 sec
MapReduce Total cumulative CPU time: 2 seconds 450 msec
Ended Job = job_201309101627_0267
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 39.58 sec HDFS Read: 59259422 HDFS Write: 22710 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.45 sec HDFS Read: 23479 HDFS Write: 149 SUCCESS
Total MapReduce CPU Time Spent: 42 seconds 30 msec
OK
Time taken: 59.993 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_24080@mturlrep13_201309111835_1895693790.txt
hive> ;
hive> quit;
times: 3
query: SELECT MobilePhone, MobilePhoneModel, count(DISTINCT UserID) AS u FROM hits_10m WHERE MobilePhoneModel != '' GROUP BY MobilePhone, MobilePhoneModel ORDER BY u DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_24734@mturlrep13_201309111835_1260739241.txt
hive> SELECT MobilePhone, MobilePhoneModel, count(DISTINCT UserID) AS u FROM hits_10m WHERE MobilePhoneModel != '' GROUP BY MobilePhone, MobilePhoneModel ORDER BY u DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0268
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:35:59,129 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:36:06,189 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.87 sec
2013-09-11 18:36:07,197 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.87 sec
2013-09-11 18:36:08,207 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.87 sec
2013-09-11 18:36:09,214 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.87 sec
2013-09-11 18:36:10,222 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.87 sec
2013-09-11 18:36:11,228 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.87 sec
2013-09-11 18:36:12,234 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.87 sec
2013-09-11 18:36:13,241 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 30.46 sec
2013-09-11 18:36:14,248 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 30.46 sec
2013-09-11 18:36:15,254 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 30.46 sec
2013-09-11 18:36:16,260 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 30.46 sec
2013-09-11 18:36:17,266 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 30.46 sec
2013-09-11 18:36:18,272 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 30.46 sec
2013-09-11 18:36:19,278 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 30.46 sec
2013-09-11 18:36:20,286 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 37.87 sec
2013-09-11 18:36:21,292 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 37.87 sec
2013-09-11 18:36:22,300 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 37.87 sec
MapReduce Total cumulative CPU time: 37 seconds 870 msec
Ended Job = job_201309101627_0268
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0269
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 18:36:25,859 Stage-2 map = 0%, reduce = 0%
2013-09-11 18:36:27,868 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.86 sec
2013-09-11 18:36:28,873 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.86 sec
2013-09-11 18:36:29,879 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.86 sec
2013-09-11 18:36:30,885 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.86 sec
2013-09-11 18:36:31,890 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.86 sec
2013-09-11 18:36:32,895 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.86 sec
2013-09-11 18:36:33,900 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.86 sec
2013-09-11 18:36:34,906 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.28 sec
2013-09-11 18:36:35,911 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.28 sec
2013-09-11 18:36:36,917 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.28 sec
MapReduce Total cumulative CPU time: 2 seconds 280 msec
Ended Job = job_201309101627_0269
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 37.87 sec HDFS Read: 59259422 HDFS Write: 22710 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.28 sec HDFS Read: 23479 HDFS Write: 149 SUCCESS
Total MapReduce CPU Time Spent: 40 seconds 150 msec
OK
Time taken: 52.093 seconds, Fetched: 10 row(s)
hive> quit;
-- heavy filtering on strings, then aggregation by a (number, string) pair.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_27342@mturlrep13_201309111836_1910306053.txt
hive> ;
hive> quit;
times: 1
query: SELECT SearchPhrase, count(*) AS c FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_27811@mturlrep13_201309111836_1213140572.txt
hive> SELECT SearchPhrase, count(*) AS c FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0270
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:36:57,519 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:37:04,550 Stage-1 map = 43%, reduce = 0%
2013-09-11 18:37:05,563 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.33 sec
2013-09-11 18:37:06,570 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.33 sec
2013-09-11 18:37:07,577 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.33 sec
2013-09-11 18:37:08,583 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.33 sec
2013-09-11 18:37:09,589 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.33 sec
2013-09-11 18:37:10,595 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.33 sec
2013-09-11 18:37:11,602 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.33 sec
2013-09-11 18:37:12,608 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 20.33 sec
2013-09-11 18:37:13,613 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.89 sec
2013-09-11 18:37:14,618 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.89 sec
2013-09-11 18:37:15,623 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.89 sec
2013-09-11 18:37:16,629 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.89 sec
2013-09-11 18:37:17,635 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.89 sec
2013-09-11 18:37:18,640 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.89 sec
2013-09-11 18:37:19,646 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.89 sec
2013-09-11 18:37:20,652 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.89 sec
2013-09-11 18:37:21,659 Stage-1 map = 100%, reduce = 98%, Cumulative CPU 47.0 sec
2013-09-11 18:37:22,665 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 54.11 sec
2013-09-11 18:37:23,671 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 54.11 sec
MapReduce Total cumulative CPU time: 54 seconds 110 msec
Ended Job = job_201309101627_0270
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0271
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 18:37:26,224 Stage-2 map = 0%, reduce = 0%
2013-09-11 18:37:34,251 Stage-2 map = 50%, reduce = 0%
2013-09-11 18:37:36,259 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.77 sec
2013-09-11 18:37:37,265 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.77 sec
2013-09-11 18:37:38,269 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.77 sec
2013-09-11 18:37:39,274 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.77 sec
2013-09-11 18:37:40,279 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.77 sec
2013-09-11 18:37:41,284 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.77 sec
2013-09-11 18:37:42,289 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.77 sec
2013-09-11 18:37:43,294 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.77 sec
2013-09-11 18:37:44,300 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 13.77 sec
2013-09-11 18:37:45,306 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 18.08 sec
2013-09-11 18:37:46,312 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 18.08 sec
2013-09-11 18:37:47,317 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 18.08 sec
MapReduce Total cumulative CPU time: 18 seconds 80 msec
Ended Job = job_201309101627_0271
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 54.11 sec HDFS Read: 27820105 HDFS Write: 79726641 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 18.08 sec HDFS Read: 79727410 HDFS Write: 275 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 12 seconds 190 msec
OK
Time taken: 59.798 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_30130@mturlrep13_201309111837_1070436156.txt
hive> ;
hive> quit;
times: 2
query: SELECT SearchPhrase, count(*) AS c FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_30577@mturlrep13_201309111837_1857751509.txt
hive> SELECT SearchPhrase, count(*) AS c FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0272
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:38:00,966 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:38:09,007 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.49 sec
2013-09-11 18:38:10,015 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.49 sec
2013-09-11 18:38:11,023 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.49 sec
2013-09-11 18:38:12,029 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.49 sec
2013-09-11 18:38:13,036 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.49 sec
2013-09-11 18:38:14,042 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.49 sec
2013-09-11 18:38:15,050 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.49 sec
2013-09-11 18:38:16,057 Stage-1 map = 97%, reduce = 8%, Cumulative CPU 30.84 sec
2013-09-11 18:38:17,063 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 41.01 sec
2013-09-11 18:38:18,069 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 41.01 sec
2013-09-11 18:38:19,075 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 41.01 sec
2013-09-11 18:38:20,081 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 41.01 sec
2013-09-11 18:38:21,087 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 41.01 sec
2013-09-11 18:38:22,093 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 41.01 sec
2013-09-11 18:38:23,100 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 41.01 sec
2013-09-11 18:38:24,106 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 41.01 sec
2013-09-11 18:38:25,113 Stage-1 map = 100%, reduce = 56%, Cumulative CPU 41.01 sec
2013-09-11 18:38:26,122 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 56.19 sec
2013-09-11 18:38:27,128 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 56.19 sec
MapReduce Total cumulative CPU time: 56 seconds 190 msec
Ended Job = job_201309101627_0272
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0273
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 18:38:29,657 Stage-2 map = 0%, reduce = 0%
2013-09-11 18:38:37,684 Stage-2 map = 50%, reduce = 0%
2013-09-11 18:38:39,692 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.16 sec
2013-09-11 18:38:40,697 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.16 sec
2013-09-11 18:38:41,702 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.16 sec
2013-09-11 18:38:42,707 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.16 sec
2013-09-11 18:38:43,712 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.16 sec
2013-09-11 18:38:44,716 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.16 sec
2013-09-11 18:38:45,721 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.16 sec
2013-09-11 18:38:46,726 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 13.16 sec
2013-09-11 18:38:47,730 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 13.16 sec
2013-09-11 18:38:48,736 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 17.7 sec
2013-09-11 18:38:49,742 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 17.7 sec
2013-09-11 18:38:50,747 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 17.7 sec
MapReduce Total cumulative CPU time: 17 seconds 700 msec
Ended Job = job_201309101627_0273
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 56.19 sec HDFS Read: 27820105 HDFS Write: 79726641 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 17.7 sec HDFS Read: 79727410 HDFS Write: 275 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 13 seconds 890 msec
OK
Time taken: 57.143 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_415@mturlrep13_201309111838_1362426037.txt
hive> ;
hive> quit;
times: 3
query: SELECT SearchPhrase, count(*) AS c FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_874@mturlrep13_201309111838_1940288899.txt
hive> SELECT SearchPhrase, count(*) AS c FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0274
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:39:05,176 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:39:12,209 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.99 sec
2013-09-11 18:39:13,217 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.99 sec
2013-09-11 18:39:14,223 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.99 sec
2013-09-11 18:39:15,230 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.99 sec
2013-09-11 18:39:16,235 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.99 sec
2013-09-11 18:39:17,241 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.99 sec
2013-09-11 18:39:18,251 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.99 sec
2013-09-11 18:39:19,258 Stage-1 map = 75%, reduce = 0%, Cumulative CPU 30.52 sec
2013-09-11 18:39:20,264 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 40.59 sec
2013-09-11 18:39:21,270 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 40.59 sec
2013-09-11 18:39:22,275 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 40.59 sec
2013-09-11 18:39:23,281 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 40.59 sec
2013-09-11 18:39:24,287 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 40.59 sec
2013-09-11 18:39:25,293 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 40.59 sec
2013-09-11 18:39:26,298 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 40.59 sec
2013-09-11 18:39:27,304 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 40.59 sec
2013-09-11 18:39:28,310 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 40.59 sec
2013-09-11 18:39:29,319 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 55.31 sec
2013-09-11 18:39:30,325 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 55.31 sec
2013-09-11 18:39:31,332 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 55.31 sec
MapReduce Total cumulative CPU time: 55 seconds 310 msec
Ended Job = job_201309101627_0274
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0275
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 18:39:33,849 Stage-2 map = 0%, reduce = 0%
2013-09-11 18:39:41,877 Stage-2 map = 50%, reduce = 0%
2013-09-11 18:39:43,918 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.1 sec
2013-09-11 18:39:44,924 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.1 sec
2013-09-11 18:39:45,929 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.1 sec
2013-09-11 18:39:46,934 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.1 sec
2013-09-11 18:39:47,939 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.1 sec
2013-09-11 18:39:48,944 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.1 sec
2013-09-11 18:39:49,948 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.1 sec
2013-09-11 18:39:50,954 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.1 sec
2013-09-11 18:39:52,010 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 13.1 sec
2013-09-11 18:39:53,015 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 17.36 sec
2013-09-11 18:39:54,020 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 17.36 sec
MapReduce Total cumulative CPU time: 17 seconds 360 msec
Ended Job = job_201309101627_0275
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 55.31 sec HDFS Read: 27820105 HDFS Write: 79726641 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 17.36 sec HDFS Read: 79727410 HDFS Write: 275 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 12 seconds 670 msec
OK
Time taken: 57.089 seconds, Fetched: 10 row(s)
hive> quit;
-- moderate filtering on strings, then aggregation by string, with a large number of keys.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_3290@mturlrep13_201309111840_695958684.txt
hive> ;
hive> quit;
times: 1
query: SELECT SearchPhrase, count(DISTINCT UserID) AS u FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchPhrase ORDER BY u DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_3872@mturlrep13_201309111840_1093049561.txt
hive> SELECT SearchPhrase, count(DISTINCT UserID) AS u FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchPhrase ORDER BY u DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0276
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:40:17,267 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:40:24,294 Stage-1 map = 36%, reduce = 0%
2013-09-11 18:40:26,312 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.08 sec
2013-09-11 18:40:27,320 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.08 sec
2013-09-11 18:40:28,328 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.08 sec
2013-09-11 18:40:29,334 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.08 sec
2013-09-11 18:40:30,340 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.08 sec
2013-09-11 18:40:31,346 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.08 sec
2013-09-11 18:40:32,354 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.08 sec
2013-09-11 18:40:33,360 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.08 sec
2013-09-11 18:40:34,366 Stage-1 map = 84%, reduce = 17%, Cumulative CPU 23.08 sec
2013-09-11 18:40:35,372 Stage-1 map = 93%, reduce = 17%, Cumulative CPU 35.37 sec
2013-09-11 18:40:36,377 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 47.42 sec
2013-09-11 18:40:37,385 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 47.42 sec
2013-09-11 18:40:38,390 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 47.42 sec
2013-09-11 18:40:39,396 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 47.42 sec
2013-09-11 18:40:40,402 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 47.42 sec
2013-09-11 18:40:41,409 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 47.42 sec
2013-09-11 18:40:42,415 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 47.42 sec
2013-09-11 18:40:43,424 Stage-1 map = 100%, reduce = 95%, Cumulative CPU 55.41 sec
2013-09-11 18:40:44,430 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 63.73 sec
2013-09-11 18:40:45,435 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 63.73 sec
MapReduce Total cumulative CPU time: 1 minutes 3 seconds 730 msec
Ended Job = job_201309101627_0276
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0277
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 18:40:48,962 Stage-2 map = 0%, reduce = 0%
2013-09-11 18:40:55,985 Stage-2 map = 50%, reduce = 0%
2013-09-11 18:40:57,993 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.39 sec
2013-09-11 18:40:58,999 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.39 sec
2013-09-11 18:41:00,004 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.39 sec
2013-09-11 18:41:01,009 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.39 sec
2013-09-11 18:41:02,013 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.39 sec
2013-09-11 18:41:03,018 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.39 sec
2013-09-11 18:41:04,023 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.39 sec
2013-09-11 18:41:05,028 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.39 sec
2013-09-11 18:41:06,034 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 13.39 sec
2013-09-11 18:41:07,040 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 17.97 sec
2013-09-11 18:41:08,046 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 17.97 sec
2013-09-11 18:41:09,051 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 17.97 sec
MapReduce Total cumulative CPU time: 17 seconds 970 msec
Ended Job = job_201309101627_0277
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 63.73 sec HDFS Read: 84536695 HDFS Write: 79726544 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 17.97 sec HDFS Read: 79727313 HDFS Write: 293 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 21 seconds 700 msec
OK
Time taken: 62.137 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_7182@mturlrep13_201309111841_759539300.txt
hive> ;
hive> quit;
times: 2
query: SELECT SearchPhrase, count(DISTINCT UserID) AS u FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchPhrase ORDER BY u DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_7624@mturlrep13_201309111841_70388529.txt
hive> SELECT SearchPhrase, count(DISTINCT UserID) AS u FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchPhrase ORDER BY u DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0278
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:41:23,258 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:41:30,293 Stage-1 map = 43%, reduce = 0%
2013-09-11 18:41:31,306 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.41 sec
2013-09-11 18:41:32,314 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.41 sec
2013-09-11 18:41:33,321 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.41 sec
2013-09-11 18:41:34,328 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.41 sec
2013-09-11 18:41:35,334 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.41 sec
2013-09-11 18:41:36,341 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.41 sec
2013-09-11 18:41:37,348 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.41 sec
2013-09-11 18:41:38,355 Stage-1 map = 88%, reduce = 8%, Cumulative CPU 22.41 sec
2013-09-11 18:41:39,362 Stage-1 map = 93%, reduce = 17%, Cumulative CPU 33.84 sec
2013-09-11 18:41:40,368 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 45.56 sec
2013-09-11 18:41:41,374 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 45.56 sec
2013-09-11 18:41:42,380 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 45.56 sec
2013-09-11 18:41:43,385 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 45.56 sec
2013-09-11 18:41:44,391 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 45.56 sec
2013-09-11 18:41:45,398 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 45.56 sec
2013-09-11 18:41:46,404 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 45.56 sec
2013-09-11 18:41:47,410 Stage-1 map = 100%, reduce = 52%, Cumulative CPU 45.56 sec
2013-09-11 18:41:48,416 Stage-1 map = 100%, reduce = 88%, Cumulative CPU 45.56 sec
2013-09-11 18:41:49,424 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 62.28 sec
2013-09-11 18:41:50,430 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 62.28 sec
MapReduce Total cumulative CPU time: 1 minutes 2 seconds 280 msec
Ended Job = job_201309101627_0278
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0279
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 18:41:53,909 Stage-2 map = 0%, reduce = 0%
2013-09-11 18:42:00,937 Stage-2 map = 50%, reduce = 0%
2013-09-11 18:42:02,947 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.32 sec
2013-09-11 18:42:03,953 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.32 sec
2013-09-11 18:42:04,958 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.32 sec
2013-09-11 18:42:05,962 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.32 sec
2013-09-11 18:42:06,967 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.32 sec
2013-09-11 18:42:07,972 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.32 sec
2013-09-11 18:42:08,976 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.32 sec
2013-09-11 18:42:09,981 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.32 sec
2013-09-11 18:42:10,987 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 13.32 sec
2013-09-11 18:42:11,992 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 17.74 sec
2013-09-11 18:42:12,998 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 17.74 sec
2013-09-11 18:42:14,004 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 17.74 sec
MapReduce Total cumulative CPU time: 17 seconds 740 msec
Ended Job = job_201309101627_0279
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 62.28 sec HDFS Read: 84536695 HDFS Write: 79726544 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 17.74 sec HDFS Read: 79727313 HDFS Write: 293 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 20 seconds 20 msec
OK
Time taken: 59.295 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_9991@mturlrep13_201309111842_1002615077.txt
hive> ;
hive> quit;
times: 3
query: SELECT SearchPhrase, count(DISTINCT UserID) AS u FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchPhrase ORDER BY u DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_10431@mturlrep13_201309111842_612246455.txt
hive> SELECT SearchPhrase, count(DISTINCT UserID) AS u FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchPhrase ORDER BY u DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0280
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:42:27,139 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:42:35,171 Stage-1 map = 43%, reduce = 0%
2013-09-11 18:42:36,186 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.88 sec
2013-09-11 18:42:37,194 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.88 sec
2013-09-11 18:42:38,202 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.88 sec
2013-09-11 18:42:39,209 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.88 sec
2013-09-11 18:42:40,215 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.88 sec
2013-09-11 18:42:41,222 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.88 sec
2013-09-11 18:42:42,229 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.88 sec
2013-09-11 18:42:43,236 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 22.88 sec
2013-09-11 18:42:44,242 Stage-1 map = 93%, reduce = 17%, Cumulative CPU 34.43 sec
2013-09-11 18:42:45,247 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 46.67 sec
2013-09-11 18:42:46,277 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 46.67 sec
2013-09-11 18:42:47,283 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 46.67 sec
2013-09-11 18:42:48,288 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 46.67 sec
2013-09-11 18:42:49,294 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 46.67 sec
2013-09-11 18:42:50,299 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 46.67 sec
2013-09-11 18:42:51,305 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 46.67 sec
2013-09-11 18:42:52,311 Stage-1 map = 100%, reduce = 88%, Cumulative CPU 46.67 sec
2013-09-11 18:42:53,319 Stage-1 map = 100%, reduce = 94%, Cumulative CPU 54.9 sec
2013-09-11 18:42:54,326 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 63.26 sec
2013-09-11 18:42:55,333 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 63.26 sec
MapReduce Total cumulative CPU time: 1 minutes 3 seconds 260 msec
Ended Job = job_201309101627_0280
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0281
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 18:42:57,815 Stage-2 map = 0%, reduce = 0%
2013-09-11 18:43:05,847 Stage-2 map = 50%, reduce = 0%
2013-09-11 18:43:07,856 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.3 sec
2013-09-11 18:43:08,862 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.3 sec
2013-09-11 18:43:09,867 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.3 sec
2013-09-11 18:43:10,872 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.3 sec
2013-09-11 18:43:11,877 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.3 sec
2013-09-11 18:43:12,883 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.3 sec
2013-09-11 18:43:13,888 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.3 sec
2013-09-11 18:43:14,893 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.3 sec
2013-09-11 18:43:15,898 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 13.3 sec
2013-09-11 18:43:16,904 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 17.84 sec
2013-09-11 18:43:17,909 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 17.84 sec
2013-09-11 18:43:18,915 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 17.84 sec
MapReduce Total cumulative CPU time: 17 seconds 840 msec
Ended Job = job_201309101627_0281
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 63.26 sec HDFS Read: 84536695 HDFS Write: 79726544 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 17.84 sec HDFS Read: 79727313 HDFS Write: 293 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 21 seconds 100 msec
OK
Time taken: 59.238 seconds, Fetched: 10 row(s)
hive> quit;
-- slightly more complex aggregation.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_12833@mturlrep13_201309111843_203223094.txt
hive> ;
hive> quit;
times: 1
query: SELECT SearchEngineID, SearchPhrase, count(*) AS c FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchEngineID, SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_13310@mturlrep13_201309111843_869968579.txt
hive> SELECT SearchEngineID, SearchPhrase, count(*) AS c FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchEngineID, SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0282
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:43:40,513 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:43:47,543 Stage-1 map = 43%, reduce = 0%
2013-09-11 18:43:48,556 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.59 sec
2013-09-11 18:43:49,563 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.59 sec
2013-09-11 18:43:50,572 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.59 sec
2013-09-11 18:43:51,578 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.59 sec
2013-09-11 18:43:52,585 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.59 sec
2013-09-11 18:43:53,592 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.59 sec
2013-09-11 18:43:54,598 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.59 sec
2013-09-11 18:43:55,605 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.59 sec
2013-09-11 18:43:56,611 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 20.59 sec
2013-09-11 18:43:57,617 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.09 sec
2013-09-11 18:43:58,623 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.09 sec
2013-09-11 18:43:59,629 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.09 sec
2013-09-11 18:44:00,636 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 43.79 sec
2013-09-11 18:44:01,642 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 43.79 sec
2013-09-11 18:44:02,648 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 43.79 sec
2013-09-11 18:44:03,655 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 43.79 sec
2013-09-11 18:44:04,661 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 43.79 sec
2013-09-11 18:44:05,670 Stage-1 map = 100%, reduce = 97%, Cumulative CPU 50.58 sec
2013-09-11 18:44:06,676 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 57.28 sec
2013-09-11 18:44:07,682 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 57.28 sec
MapReduce Total cumulative CPU time: 57 seconds 280 msec
Ended Job = job_201309101627_0282
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0283
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 18:44:11,283 Stage-2 map = 0%, reduce = 0%
2013-09-11 18:44:18,307 Stage-2 map = 50%, reduce = 0%
2013-09-11 18:44:21,319 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 15.71 sec
2013-09-11 18:44:22,324 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 15.71 sec
2013-09-11 18:44:23,329 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 15.71 sec
2013-09-11 18:44:24,333 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 15.71 sec
2013-09-11 18:44:25,338 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 15.71 sec
2013-09-11 18:44:26,343 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 15.71 sec
2013-09-11 18:44:27,347 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 15.71 sec
2013-09-11 18:44:28,352 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 15.71 sec
2013-09-11 18:44:29,357 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 15.71 sec
2013-09-11 18:44:30,362 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 20.74 sec
2013-09-11 18:44:31,368 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 20.74 sec
2013-09-11 18:44:32,374 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 20.74 sec
MapReduce Total cumulative CPU time: 20 seconds 740 msec
Ended Job = job_201309101627_0283
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 57.28 sec HDFS Read: 30310112 HDFS Write: 84160093 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 20.74 sec HDFS Read: 84160862 HDFS Write: 297 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 18 seconds 20 msec
OK
Time taken: 61.858 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_15620@mturlrep13_201309111844_1876164990.txt
hive> ;
hive> quit;
times: 2
query: SELECT SearchEngineID, SearchPhrase, count(*) AS c FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchEngineID, SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_16081@mturlrep13_201309111844_1689787397.txt
hive> SELECT SearchEngineID, SearchPhrase, count(*) AS c FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchEngineID, SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0284
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:44:45,391 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:44:53,428 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.0 sec
2013-09-11 18:44:54,436 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.0 sec
2013-09-11 18:44:55,445 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.0 sec
2013-09-11 18:44:56,452 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.0 sec
2013-09-11 18:44:57,458 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.0 sec
2013-09-11 18:44:58,465 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.0 sec
2013-09-11 18:44:59,471 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.0 sec
2013-09-11 18:45:00,479 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.0 sec
2013-09-11 18:45:01,485 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 33.06 sec
2013-09-11 18:45:02,492 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 43.76 sec
2013-09-11 18:45:03,498 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 43.76 sec
2013-09-11 18:45:04,504 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 43.76 sec
2013-09-11 18:45:05,510 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 43.76 sec
2013-09-11 18:45:06,515 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 43.76 sec
2013-09-11 18:45:07,521 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 43.76 sec
2013-09-11 18:45:08,527 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 43.76 sec
2013-09-11 18:45:09,533 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 43.76 sec
2013-09-11 18:45:10,539 Stage-1 map = 100%, reduce = 92%, Cumulative CPU 43.76 sec
2013-09-11 18:45:11,546 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 59.3 sec
2013-09-11 18:45:12,553 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 59.3 sec
2013-09-11 18:45:13,559 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 59.3 sec
MapReduce Total cumulative CPU time: 59 seconds 300 msec
Ended Job = job_201309101627_0284
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0285
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 18:45:16,070 Stage-2 map = 0%, reduce = 0%
2013-09-11 18:45:24,097 Stage-2 map = 50%, reduce = 0%
2013-09-11 18:45:27,108 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.71 sec
2013-09-11 18:45:28,114 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.71 sec
2013-09-11 18:45:29,119 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.71 sec
2013-09-11 18:45:30,123 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.71 sec
2013-09-11 18:45:31,128 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.71 sec
2013-09-11 18:45:32,133 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.71 sec
2013-09-11 18:45:33,139 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.71 sec
2013-09-11 18:45:34,144 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 13.71 sec
2013-09-11 18:45:35,149 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 13.71 sec
2013-09-11 18:45:36,155 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 16.95 sec
2013-09-11 18:45:37,160 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 16.95 sec
MapReduce Total cumulative CPU time: 16 seconds 950 msec
Ended Job = job_201309101627_0285
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 59.3 sec HDFS Read: 30310112 HDFS Write: 84160093 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 16.95 sec HDFS Read: 84160862 HDFS Write: 297 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 16 seconds 250 msec
OK
Time taken: 59.204 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_19315@mturlrep13_201309111845_275707985.txt
hive> ;
hive> quit;
times: 3
query: SELECT SearchEngineID, SearchPhrase, count(*) AS c FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchEngineID, SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_19758@mturlrep13_201309111845_1428482437.txt
hive> SELECT SearchEngineID, SearchPhrase, count(*) AS c FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchEngineID, SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0286
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:45:50,307 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:45:58,346 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.37 sec
2013-09-11 18:45:59,353 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.37 sec
2013-09-11 18:46:00,362 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.37 sec
2013-09-11 18:46:01,368 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.37 sec
2013-09-11 18:46:02,375 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.37 sec
2013-09-11 18:46:03,381 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.37 sec
2013-09-11 18:46:04,387 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.37 sec
2013-09-11 18:46:05,394 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.37 sec
2013-09-11 18:46:06,401 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 31.78 sec
2013-09-11 18:46:07,410 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.63 sec
2013-09-11 18:46:08,416 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.63 sec
2013-09-11 18:46:09,422 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.63 sec
2013-09-11 18:46:10,428 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.63 sec
2013-09-11 18:46:11,434 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.63 sec
2013-09-11 18:46:12,440 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.63 sec
2013-09-11 18:46:13,447 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.63 sec
2013-09-11 18:46:14,453 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.63 sec
2013-09-11 18:46:15,462 Stage-1 map = 100%, reduce = 92%, Cumulative CPU 50.32 sec
2013-09-11 18:46:16,468 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 58.04 sec
2013-09-11 18:46:17,474 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 58.04 sec
MapReduce Total cumulative CPU time: 58 seconds 40 msec
Ended Job = job_201309101627_0286
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0287
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 18:46:20,954 Stage-2 map = 0%, reduce = 0%
2013-09-11 18:46:27,976 Stage-2 map = 50%, reduce = 0%
2013-09-11 18:46:30,987 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.59 sec
2013-09-11 18:46:31,992 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.59 sec
2013-09-11 18:46:32,998 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.59 sec
2013-09-11 18:46:34,003 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.59 sec
2013-09-11 18:46:35,008 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.59 sec
2013-09-11 18:46:36,013 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.59 sec
2013-09-11 18:46:37,017 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.59 sec
2013-09-11 18:46:38,023 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.59 sec
2013-09-11 18:46:39,028 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 13.59 sec
2013-09-11 18:46:40,033 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 18.25 sec
2013-09-11 18:46:41,039 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 18.25 sec
2013-09-11 18:46:42,044 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 18.25 sec
MapReduce Total cumulative CPU time: 18 seconds 250 msec
Ended Job = job_201309101627_0287
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 58.04 sec HDFS Read: 30310112 HDFS Write: 84160093 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 18.25 sec HDFS Read: 84160862 HDFS Write: 297 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 16 seconds 290 msec
OK
Time taken: 59.141 seconds, Fetched: 10 row(s)
hive> quit;
-- aggregation by a number and a string, large number of keys.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_22085@mturlrep13_201309111846_197414974.txt
hive> ;
hive> quit;
times: 1
query: SELECT UserID, count(*) AS c FROM hits_10m GROUP BY UserID ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_22551@mturlrep13_201309111846_356966815.txt
hive> SELECT UserID, count(*) AS c FROM hits_10m GROUP BY UserID ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0288
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:47:04,259 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:47:11,285 Stage-1 map = 43%, reduce = 0%
2013-09-11 18:47:14,304 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.67 sec
2013-09-11 18:47:15,311 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.67 sec
2013-09-11 18:47:16,320 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.67 sec
2013-09-11 18:47:17,326 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.67 sec
2013-09-11 18:47:18,332 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.67 sec
2013-09-11 18:47:19,339 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.67 sec
2013-09-11 18:47:20,345 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.67 sec
2013-09-11 18:47:21,351 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.67 sec
2013-09-11 18:47:22,356 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 27.67 sec
2013-09-11 18:47:23,387 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 27.67 sec
2013-09-11 18:47:24,392 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 40.98 sec
2013-09-11 18:47:25,398 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.58 sec
2013-09-11 18:47:26,403 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.58 sec
2013-09-11 18:47:27,408 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.58 sec
2013-09-11 18:47:28,414 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.58 sec
2013-09-11 18:47:29,419 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.58 sec
2013-09-11 18:47:30,424 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.58 sec
2013-09-11 18:47:31,432 Stage-1 map = 100%, reduce = 96%, Cumulative CPU 62.71 sec
2013-09-11 18:47:32,438 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 69.85 sec
2013-09-11 18:47:33,443 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 69.85 sec
MapReduce Total cumulative CPU time: 1 minutes 9 seconds 850 msec
Ended Job = job_201309101627_0288
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0289
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 18:47:37,065 Stage-2 map = 0%, reduce = 0%
2013-09-11 18:47:47,092 Stage-2 map = 50%, reduce = 0%
2013-09-11 18:47:51,104 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 17.97 sec
2013-09-11 18:47:52,109 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 17.97 sec
2013-09-11 18:47:53,113 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 17.97 sec
2013-09-11 18:47:54,117 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 17.97 sec
2013-09-11 18:47:55,122 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 17.97 sec
2013-09-11 18:47:56,127 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 17.97 sec
2013-09-11 18:47:57,131 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 17.97 sec
2013-09-11 18:47:58,136 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 17.97 sec
2013-09-11 18:47:59,141 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 17.97 sec
2013-09-11 18:48:00,146 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 22.32 sec
2013-09-11 18:48:01,150 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 22.32 sec
MapReduce Total cumulative CPU time: 22 seconds 320 msec
Ended Job = job_201309101627_0289
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 69.85 sec HDFS Read: 57312623 HDFS Write: 55475412 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 22.32 sec HDFS Read: 55476177 HDFS Write: 246 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 32 seconds 170 msec
OK
Time taken: 66.72 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_24895@mturlrep13_201309111848_816582111.txt
hive> ;
hive> quit;
times: 2
query: SELECT UserID, count(*) AS c FROM hits_10m GROUP BY UserID ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_25356@mturlrep13_201309111848_354267688.txt
hive> SELECT UserID, count(*) AS c FROM hits_10m GROUP BY UserID ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0290
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:48:14,183 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:48:22,214 Stage-1 map = 43%, reduce = 0%
2013-09-11 18:48:24,229 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 26.34 sec
2013-09-11 18:48:25,236 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 26.34 sec
2013-09-11 18:48:26,243 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 26.34 sec
2013-09-11 18:48:27,249 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 26.34 sec
2013-09-11 18:48:28,256 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 26.34 sec
2013-09-11 18:48:29,262 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 26.34 sec
2013-09-11 18:48:30,267 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 26.34 sec
2013-09-11 18:48:31,273 Stage-1 map = 96%, reduce = 8%, Cumulative CPU 26.34 sec
2013-09-11 18:48:32,277 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 26.34 sec
2013-09-11 18:48:33,282 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 39.02 sec
2013-09-11 18:48:34,288 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 54.45 sec
2013-09-11 18:48:35,293 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 54.45 sec
2013-09-11 18:48:36,298 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 54.45 sec
2013-09-11 18:48:37,304 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 54.45 sec
2013-09-11 18:48:38,309 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 54.45 sec
2013-09-11 18:48:39,315 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 54.45 sec
2013-09-11 18:48:40,320 Stage-1 map = 100%, reduce = 55%, Cumulative CPU 54.45 sec
2013-09-11 18:48:41,328 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 68.84 sec
2013-09-11 18:48:42,334 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 68.84 sec
2013-09-11 18:48:43,340 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 68.84 sec
MapReduce Total cumulative CPU time: 1 minutes 8 seconds 840 msec
Ended Job = job_201309101627_0290
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0291
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 18:48:45,869 Stage-2 map = 0%, reduce = 0%
2013-09-11 18:48:56,900 Stage-2 map = 50%, reduce = 0%
2013-09-11 18:49:00,912 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.7 sec
2013-09-11 18:49:01,917 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.7 sec
2013-09-11 18:49:02,922 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.7 sec
2013-09-11 18:49:03,927 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.7 sec
2013-09-11 18:49:04,931 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.7 sec
2013-09-11 18:49:05,936 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.7 sec
2013-09-11 18:49:06,941 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.7 sec
2013-09-11 18:49:07,946 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 18.7 sec
2013-09-11 18:49:08,951 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 18.7 sec
2013-09-11 18:49:10,270 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 23.14 sec
2013-09-11 18:49:11,275 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 23.14 sec
MapReduce Total cumulative CPU time: 23 seconds 140 msec
Ended Job = job_201309101627_0291
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 68.84 sec HDFS Read: 57312623 HDFS Write: 55475412 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 23.14 sec HDFS Read: 55476181 HDFS Write: 246 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 31 seconds 980 msec
OK
Time taken: 64.324 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_27717@mturlrep13_201309111849_53528863.txt
hive> ;
hive> quit;
times: 3
query: SELECT UserID, count(*) AS c FROM hits_10m GROUP BY UserID ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_28180@mturlrep13_201309111849_1908133462.txt
hive> SELECT UserID, count(*) AS c FROM hits_10m GROUP BY UserID ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0292
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:49:25,328 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:49:32,358 Stage-1 map = 43%, reduce = 0%
2013-09-11 18:49:34,373 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.15 sec
2013-09-11 18:49:35,381 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.15 sec
2013-09-11 18:49:36,388 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.15 sec
2013-09-11 18:49:37,394 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.15 sec
2013-09-11 18:49:38,399 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.15 sec
2013-09-11 18:49:39,406 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.15 sec
2013-09-11 18:49:40,413 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.15 sec
2013-09-11 18:49:41,430 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.15 sec
2013-09-11 18:49:42,441 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 27.15 sec
2013-09-11 18:49:43,447 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 27.15 sec
2013-09-11 18:49:44,453 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 39.96 sec
2013-09-11 18:49:45,458 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 56.29 sec
2013-09-11 18:49:46,464 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 56.29 sec
2013-09-11 18:49:47,469 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 56.29 sec
2013-09-11 18:49:48,475 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 56.29 sec
2013-09-11 18:49:49,480 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 56.29 sec
2013-09-11 18:49:50,486 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 56.29 sec
2013-09-11 18:49:51,494 Stage-1 map = 100%, reduce = 96%, Cumulative CPU 63.39 sec
2013-09-11 18:49:52,500 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 70.79 sec
2013-09-11 18:49:53,506 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 70.79 sec
MapReduce Total cumulative CPU time: 1 minutes 10 seconds 790 msec
Ended Job = job_201309101627_0292
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0293
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 18:49:57,008 Stage-2 map = 0%, reduce = 0%
2013-09-11 18:50:07,044 Stage-2 map = 50%, reduce = 0%
2013-09-11 18:50:11,059 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.15 sec
2013-09-11 18:50:12,064 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.15 sec
2013-09-11 18:50:13,069 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.15 sec
2013-09-11 18:50:14,074 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.15 sec
2013-09-11 18:50:15,078 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.15 sec
2013-09-11 18:50:16,083 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.15 sec
2013-09-11 18:50:17,088 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.15 sec
2013-09-11 18:50:18,093 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 18.15 sec
2013-09-11 18:50:19,100 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 18.15 sec
2013-09-11 18:50:20,106 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 22.72 sec
2013-09-11 18:50:21,111 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 22.72 sec
2013-09-11 18:50:22,117 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 22.72 sec
MapReduce Total cumulative CPU time: 22 seconds 720 msec
Ended Job = job_201309101627_0293
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 70.79 sec HDFS Read: 57312623 HDFS Write: 55475412 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 22.72 sec HDFS Read: 55476181 HDFS Write: 246 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 33 seconds 510 msec
OK
Time taken: 65.232 seconds, Fetched: 10 row(s)
hive> quit;
-- aggregation over a very large number of keys; may run out of memory.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_31241@mturlrep13_201309111850_2087772278.txt
hive> ;
hive> quit;
times: 1
query: SELECT UserID, SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_31725@mturlrep13_201309111850_1907040905.txt
hive> SELECT UserID, SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0294
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:50:43,595 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:50:50,623 Stage-1 map = 36%, reduce = 0%
2013-09-11 18:50:53,635 Stage-1 map = 43%, reduce = 0%
2013-09-11 18:50:56,653 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.96 sec
2013-09-11 18:50:57,661 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.96 sec
2013-09-11 18:50:58,669 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.96 sec
2013-09-11 18:50:59,675 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.96 sec
2013-09-11 18:51:00,682 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.96 sec
2013-09-11 18:51:01,689 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.96 sec
2013-09-11 18:51:02,695 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.96 sec
2013-09-11 18:51:03,727 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.96 sec
2013-09-11 18:51:04,733 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 34.96 sec
2013-09-11 18:51:05,739 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 34.96 sec
2013-09-11 18:51:06,745 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 34.96 sec
2013-09-11 18:51:07,751 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 34.96 sec
2013-09-11 18:51:08,756 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 34.96 sec
2013-09-11 18:51:09,762 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 69.89 sec
2013-09-11 18:51:10,767 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 69.89 sec
2013-09-11 18:51:11,772 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 69.89 sec
2013-09-11 18:51:12,778 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 69.89 sec
2013-09-11 18:51:13,783 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 69.89 sec
2013-09-11 18:51:14,788 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 69.89 sec
2013-09-11 18:51:15,794 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 69.89 sec
2013-09-11 18:51:16,800 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 69.89 sec
2013-09-11 18:51:17,805 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 69.89 sec
2013-09-11 18:51:18,811 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 69.89 sec
2013-09-11 18:51:19,817 Stage-1 map = 100%, reduce = 86%, Cumulative CPU 69.89 sec
2013-09-11 18:51:20,822 Stage-1 map = 100%, reduce = 86%, Cumulative CPU 69.89 sec
2013-09-11 18:51:21,830 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 91.37 sec
2013-09-11 18:51:22,836 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 91.37 sec
MapReduce Total cumulative CPU time: 1 minutes 31 seconds 370 msec
Ended Job = job_201309101627_0294
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0295
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 18:51:25,350 Stage-2 map = 0%, reduce = 0%
2013-09-11 18:51:36,383 Stage-2 map = 46%, reduce = 0%
2013-09-11 18:51:39,394 Stage-2 map = 50%, reduce = 0%
2013-09-11 18:51:42,404 Stage-2 map = 96%, reduce = 0%
2013-09-11 18:51:44,411 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 23.3 sec
2013-09-11 18:51:45,416 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 23.3 sec
2013-09-11 18:51:46,421 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 23.3 sec
2013-09-11 18:51:47,425 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 23.3 sec
2013-09-11 18:51:48,430 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 23.3 sec
2013-09-11 18:51:49,435 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 23.3 sec
2013-09-11 18:51:50,439 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 23.3 sec
2013-09-11 18:51:51,444 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 23.3 sec
2013-09-11 18:51:52,449 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 23.3 sec
2013-09-11 18:51:53,454 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 23.3 sec
2013-09-11 18:51:54,460 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 27.86 sec
2013-09-11 18:51:55,465 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 27.86 sec
2013-09-11 18:51:56,470 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 27.86 sec
MapReduce Total cumulative CPU time: 27 seconds 860 msec
Ended Job = job_201309101627_0295
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 91.37 sec HDFS Read: 84536695 HDFS Write: 146202868 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 27.86 sec HDFS Read: 146210119 HDFS Write: 256 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 59 seconds 230 msec
OK
Time taken: 82.786 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_2016@mturlrep13_201309111851_1953281129.txt
hive> ;
hive> quit;
times: 2
query: SELECT UserID, SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_2464@mturlrep13_201309111852_1098803176.txt
hive> SELECT UserID, SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0296
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:52:09,495 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:52:17,540 Stage-1 map = 43%, reduce = 0%
2013-09-11 18:52:21,561 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.76 sec
2013-09-11 18:52:22,568 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.76 sec
2013-09-11 18:52:23,576 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.76 sec
2013-09-11 18:52:24,583 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.76 sec
2013-09-11 18:52:25,589 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.76 sec
2013-09-11 18:52:26,595 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.76 sec
2013-09-11 18:52:27,600 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.76 sec
2013-09-11 18:52:28,606 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.76 sec
2013-09-11 18:52:29,611 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 33.76 sec
2013-09-11 18:52:30,616 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 33.76 sec
2013-09-11 18:52:31,622 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 33.76 sec
2013-09-11 18:52:32,627 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 33.76 sec
2013-09-11 18:52:33,633 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.13 sec
2013-09-11 18:52:34,638 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.13 sec
2013-09-11 18:52:35,643 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.13 sec
2013-09-11 18:52:36,648 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.13 sec
2013-09-11 18:52:37,654 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.13 sec
2013-09-11 18:52:38,659 Stage-1 map = 100%, reduce = 21%, Cumulative CPU 68.13 sec
2013-09-11 18:52:39,665 Stage-1 map = 100%, reduce = 21%, Cumulative CPU 68.13 sec
2013-09-11 18:52:40,671 Stage-1 map = 100%, reduce = 21%, Cumulative CPU 68.13 sec
2013-09-11 18:52:41,677 Stage-1 map = 100%, reduce = 21%, Cumulative CPU 68.13 sec
2013-09-11 18:52:42,683 Stage-1 map = 100%, reduce = 21%, Cumulative CPU 68.13 sec
2013-09-11 18:52:43,688 Stage-1 map = 100%, reduce = 21%, Cumulative CPU 68.13 sec
2013-09-11 18:52:44,694 Stage-1 map = 100%, reduce = 85%, Cumulative CPU 68.13 sec
2013-09-11 18:52:45,700 Stage-1 map = 100%, reduce = 85%, Cumulative CPU 68.13 sec
2013-09-11 18:52:46,708 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 89.71 sec
2013-09-11 18:52:47,714 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 89.71 sec
2013-09-11 18:52:48,720 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 89.71 sec
MapReduce Total cumulative CPU time: 1 minutes 29 seconds 710 msec
Ended Job = job_201309101627_0296
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0297
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 18:52:52,227 Stage-2 map = 0%, reduce = 0%
2013-09-11 18:53:02,258 Stage-2 map = 46%, reduce = 0%, Cumulative CPU 10.28 sec
2013-09-11 18:53:03,263 Stage-2 map = 46%, reduce = 0%, Cumulative CPU 10.28 sec
2013-09-11 18:53:04,267 Stage-2 map = 46%, reduce = 0%, Cumulative CPU 10.28 sec
2013-09-11 18:53:05,272 Stage-2 map = 50%, reduce = 0%, Cumulative CPU 10.28 sec
2013-09-11 18:53:06,276 Stage-2 map = 50%, reduce = 0%, Cumulative CPU 10.28 sec
2013-09-11 18:53:07,281 Stage-2 map = 50%, reduce = 0%, Cumulative CPU 10.28 sec
2013-09-11 18:53:08,286 Stage-2 map = 96%, reduce = 0%, Cumulative CPU 10.28 sec
2013-09-11 18:53:09,291 Stage-2 map = 96%, reduce = 0%, Cumulative CPU 10.28 sec
2013-09-11 18:53:10,296 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 23.23 sec
2013-09-11 18:53:11,300 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 23.23 sec
2013-09-11 18:53:12,305 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 23.23 sec
2013-09-11 18:53:13,309 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 23.23 sec
2013-09-11 18:53:14,313 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 23.23 sec
2013-09-11 18:53:15,318 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 23.23 sec
2013-09-11 18:53:16,322 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 23.23 sec
2013-09-11 18:53:17,327 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 23.23 sec
2013-09-11 18:53:18,331 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 23.23 sec
2013-09-11 18:53:19,336 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 23.23 sec
2013-09-11 18:53:20,341 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 27.69 sec
2013-09-11 18:53:21,347 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 27.69 sec
2013-09-11 18:53:22,352 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 27.69 sec
MapReduce Total cumulative CPU time: 27 seconds 690 msec
Ended Job = job_201309101627_0297
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 89.71 sec HDFS Read: 84536695 HDFS Write: 146202868 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 27.69 sec HDFS Read: 146210123 HDFS Write: 256 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 57 seconds 400 msec
OK
Time taken: 80.259 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_5290@mturlrep13_201309111853_777112224.txt
hive> ;
hive> quit;
times: 3
query: SELECT UserID, SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_5789@mturlrep13_201309111853_1022585979.txt
hive> SELECT UserID, SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0298
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:53:36,406 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:53:43,433 Stage-1 map = 43%, reduce = 0%
2013-09-11 18:53:47,454 Stage-1 map = 47%, reduce = 0%, Cumulative CPU 16.37 sec
2013-09-11 18:53:48,462 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.32 sec
2013-09-11 18:53:49,469 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.32 sec
2013-09-11 18:53:50,476 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.32 sec
2013-09-11 18:53:51,485 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.32 sec
2013-09-11 18:53:52,495 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.32 sec
2013-09-11 18:53:53,500 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.32 sec
2013-09-11 18:53:54,506 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.32 sec
2013-09-11 18:53:55,512 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 34.32 sec
2013-09-11 18:53:56,519 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 34.32 sec
2013-09-11 18:53:57,525 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 34.32 sec
2013-09-11 18:53:58,531 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 34.32 sec
2013-09-11 18:53:59,537 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.03 sec
2013-09-11 18:54:00,543 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 69.66 sec
2013-09-11 18:54:01,548 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 69.66 sec
2013-09-11 18:54:02,554 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 69.66 sec
2013-09-11 18:54:03,560 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 69.66 sec
2013-09-11 18:54:04,566 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 69.66 sec
2013-09-11 18:54:05,572 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 69.66 sec
2013-09-11 18:54:06,578 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 69.66 sec
2013-09-11 18:54:07,584 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 69.66 sec
2013-09-11 18:54:08,590 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 69.66 sec
2013-09-11 18:54:09,596 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 69.66 sec
2013-09-11 18:54:10,602 Stage-1 map = 100%, reduce = 85%, Cumulative CPU 69.66 sec
2013-09-11 18:54:11,608 Stage-1 map = 100%, reduce = 85%, Cumulative CPU 69.66 sec
2013-09-11 18:54:12,616 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 89.68 sec
2013-09-11 18:54:13,622 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 89.68 sec
2013-09-11 18:54:14,628 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 89.68 sec
MapReduce Total cumulative CPU time: 1 minutes 29 seconds 680 msec
Ended Job = job_201309101627_0298
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0299
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 18:54:17,084 Stage-2 map = 0%, reduce = 0%
2013-09-11 18:54:28,119 Stage-2 map = 46%, reduce = 0%
2013-09-11 18:54:31,128 Stage-2 map = 50%, reduce = 0%
2013-09-11 18:54:34,137 Stage-2 map = 96%, reduce = 0%
2013-09-11 18:54:36,145 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 24.19 sec
2013-09-11 18:54:37,151 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 24.19 sec
2013-09-11 18:54:38,156 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 24.19 sec
2013-09-11 18:54:39,161 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 24.19 sec
2013-09-11 18:54:40,166 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 24.19 sec
2013-09-11 18:54:41,171 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 24.19 sec
2013-09-11 18:54:42,176 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 24.19 sec
2013-09-11 18:54:43,181 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 24.19 sec
2013-09-11 18:54:44,187 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 24.19 sec
2013-09-11 18:54:45,192 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 24.19 sec
2013-09-11 18:54:46,198 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 24.19 sec
2013-09-11 18:54:47,204 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 28.71 sec
2013-09-11 18:54:48,210 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 28.71 sec
MapReduce Total cumulative CPU time: 28 seconds 710 msec
Ended Job = job_201309101627_0299
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 89.68 sec HDFS Read: 84536695 HDFS Write: 146202868 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 28.71 sec HDFS Read: 146210123 HDFS Write: 256 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 58 seconds 390 msec
OK
Time taken: 80.119 seconds, Fetched: 10 row(s)
hive> quit;
-- an even more complex aggregation.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_8402@mturlrep13_201309111854_283727002.txt
hive> ;
hive> quit;
times: 1
query: SELECT UserID, SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, SearchPhrase LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_8865@mturlrep13_201309111855_86754766.txt
hive> SELECT UserID, SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, SearchPhrase LIMIT 10;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0300
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:55:11,854 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:55:18,883 Stage-1 map = 36%, reduce = 0%
2013-09-11 18:55:21,896 Stage-1 map = 43%, reduce = 0%
2013-09-11 18:55:24,917 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.54 sec
2013-09-11 18:55:25,924 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.54 sec
2013-09-11 18:55:26,932 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.54 sec
2013-09-11 18:55:27,938 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.54 sec
2013-09-11 18:55:28,944 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.54 sec
2013-09-11 18:55:29,950 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.54 sec
2013-09-11 18:55:30,955 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.54 sec
2013-09-11 18:55:31,961 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 33.54 sec
2013-09-11 18:55:32,967 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 33.54 sec
2013-09-11 18:55:33,973 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 33.54 sec
2013-09-11 18:55:34,979 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 33.54 sec
2013-09-11 18:55:35,985 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 33.54 sec
2013-09-11 18:55:36,991 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 49.64 sec
2013-09-11 18:55:37,996 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 67.87 sec
2013-09-11 18:55:39,002 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 67.87 sec
2013-09-11 18:55:40,008 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 67.87 sec
2013-09-11 18:55:41,013 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 67.87 sec
2013-09-11 18:55:42,019 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 67.87 sec
2013-09-11 18:55:43,025 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 67.87 sec
2013-09-11 18:55:44,032 Stage-1 map = 100%, reduce = 58%, Cumulative CPU 72.63 sec
2013-09-11 18:55:45,037 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 77.38 sec
2013-09-11 18:55:46,042 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 77.38 sec
MapReduce Total cumulative CPU time: 1 minutes 17 seconds 380 msec
Ended Job = job_201309101627_0300
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 77.38 sec HDFS Read: 84536695 HDFS Write: 889 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 17 seconds 380 msec
OK
Time taken: 45.248 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_11369@mturlrep13_201309111855_856551231.txt
hive> ;
hive> quit;
times: 2
query: SELECT UserID, SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, SearchPhrase LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_11797@mturlrep13_201309111855_1646605999.txt
hive> SELECT UserID, SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, SearchPhrase LIMIT 10;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0301
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:56:00,548 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:56:07,575 Stage-1 map = 39%, reduce = 0%
2013-09-11 18:56:10,588 Stage-1 map = 43%, reduce = 0%
2013-09-11 18:56:12,602 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 31.19 sec
2013-09-11 18:56:13,609 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 31.19 sec
2013-09-11 18:56:14,617 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 31.19 sec
2013-09-11 18:56:15,624 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 31.19 sec
2013-09-11 18:56:16,630 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 31.19 sec
2013-09-11 18:56:17,636 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 31.19 sec
2013-09-11 18:56:18,641 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 31.19 sec
2013-09-11 18:56:19,647 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 31.19 sec
2013-09-11 18:56:20,653 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 31.19 sec
2013-09-11 18:56:21,659 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 31.19 sec
2013-09-11 18:56:22,665 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 31.19 sec
2013-09-11 18:56:23,671 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 46.73 sec
2013-09-11 18:56:24,676 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 64.93 sec
2013-09-11 18:56:25,682 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 64.93 sec
2013-09-11 18:56:26,687 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 64.93 sec
2013-09-11 18:56:27,693 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 64.93 sec
2013-09-11 18:56:28,698 Stage-1 map = 100%, reduce = 21%, Cumulative CPU 64.93 sec
2013-09-11 18:56:29,703 Stage-1 map = 100%, reduce = 21%, Cumulative CPU 64.93 sec
2013-09-11 18:56:30,709 Stage-1 map = 100%, reduce = 21%, Cumulative CPU 64.93 sec
2013-09-11 18:56:31,717 Stage-1 map = 100%, reduce = 63%, Cumulative CPU 69.62 sec
2013-09-11 18:56:32,722 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 74.09 sec
2013-09-11 18:56:33,728 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 74.09 sec
MapReduce Total cumulative CPU time: 1 minutes 14 seconds 90 msec
Ended Job = job_201309101627_0301
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 74.09 sec HDFS Read: 84536695 HDFS Write: 889 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 14 seconds 90 msec
OK
Time taken: 41.584 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_13564@mturlrep13_201309111856_1947594684.txt
hive> ;
hive> quit;
times: 3
query: SELECT UserID, SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, SearchPhrase LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_14004@mturlrep13_201309111856_1046197498.txt
hive> SELECT UserID, SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, SearchPhrase LIMIT 10;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0302
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:56:47,624 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:56:54,649 Stage-1 map = 43%, reduce = 0%
2013-09-11 18:56:58,669 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 32.32 sec
2013-09-11 18:56:59,676 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 32.32 sec
2013-09-11 18:57:00,687 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 32.32 sec
2013-09-11 18:57:01,693 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 32.32 sec
2013-09-11 18:57:02,701 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 32.32 sec
2013-09-11 18:57:03,707 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 32.32 sec
2013-09-11 18:57:04,713 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 32.32 sec
2013-09-11 18:57:05,718 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 32.32 sec
2013-09-11 18:57:06,723 Stage-1 map = 92%, reduce = 17%, Cumulative CPU 32.32 sec
2013-09-11 18:57:07,729 Stage-1 map = 92%, reduce = 17%, Cumulative CPU 32.32 sec
2013-09-11 18:57:08,734 Stage-1 map = 92%, reduce = 17%, Cumulative CPU 32.32 sec
2013-09-11 18:57:09,740 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 32.32 sec
2013-09-11 18:57:10,745 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 63.85 sec
2013-09-11 18:57:11,749 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 63.85 sec
2013-09-11 18:57:12,754 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 63.85 sec
2013-09-11 18:57:13,758 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 63.85 sec
2013-09-11 18:57:14,763 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 63.85 sec
2013-09-11 18:57:15,769 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 63.85 sec
2013-09-11 18:57:16,774 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 63.85 sec
2013-09-11 18:57:17,780 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 63.85 sec
2013-09-11 18:57:18,787 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 73.17 sec
2013-09-11 18:57:19,793 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 73.17 sec
2013-09-11 18:57:20,799 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 73.17 sec
MapReduce Total cumulative CPU time: 1 minutes 13 seconds 170 msec
Ended Job = job_201309101627_0302
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 73.17 sec HDFS Read: 84536695 HDFS Write: 889 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 13 seconds 170 msec
OK
Time taken: 41.413 seconds, Fetched: 10 row(s)
hive> quit;
-- the same, but without sorting.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_15789@mturlrep13_201309111857_1616448565.txt
hive> ;
hive> quit;
times: 1
query: SELECT UserID, minute(EventTime), SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, minute(EventTime), SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_16290@mturlrep13_201309111857_668645670.txt
hive> SELECT UserID, minute(EventTime), SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, minute(EventTime), SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0303
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:57:42,641 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:57:49,670 Stage-1 map = 7%, reduce = 0%
2013-09-11 18:57:52,683 Stage-1 map = 22%, reduce = 0%
2013-09-11 18:57:55,696 Stage-1 map = 29%, reduce = 0%
2013-09-11 18:57:58,710 Stage-1 map = 36%, reduce = 0%
2013-09-11 18:58:01,726 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 43.37 sec
2013-09-11 18:58:02,733 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 53.76 sec
2013-09-11 18:58:03,741 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 53.76 sec
2013-09-11 18:58:04,747 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 53.76 sec
2013-09-11 18:58:05,753 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 53.76 sec
2013-09-11 18:58:06,758 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 53.76 sec
2013-09-11 18:58:07,763 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 53.76 sec
2013-09-11 18:58:08,769 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 53.76 sec
2013-09-11 18:58:09,775 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 53.76 sec
2013-09-11 18:58:10,781 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 53.76 sec
2013-09-11 18:58:11,788 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 53.76 sec
2013-09-11 18:58:12,793 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 53.76 sec
2013-09-11 18:58:13,799 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 53.76 sec
2013-09-11 18:58:14,805 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 53.76 sec
2013-09-11 18:58:15,811 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 53.76 sec
2013-09-11 18:58:16,816 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 53.76 sec
2013-09-11 18:58:17,822 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 53.76 sec
2013-09-11 18:58:18,827 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 53.76 sec
2013-09-11 18:58:19,833 Stage-1 map = 92%, reduce = 17%, Cumulative CPU 53.76 sec
2013-09-11 18:58:20,839 Stage-1 map = 92%, reduce = 17%, Cumulative CPU 53.76 sec
2013-09-11 18:58:21,845 Stage-1 map = 92%, reduce = 17%, Cumulative CPU 53.76 sec
2013-09-11 18:58:22,850 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 78.8 sec
2013-09-11 18:58:23,855 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 105.63 sec
2013-09-11 18:58:24,860 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 105.63 sec
2013-09-11 18:58:25,866 Stage-1 map = 100%, reduce = 29%, Cumulative CPU 105.63 sec
2013-09-11 18:58:26,871 Stage-1 map = 100%, reduce = 29%, Cumulative CPU 105.63 sec
2013-09-11 18:58:27,877 Stage-1 map = 100%, reduce = 29%, Cumulative CPU 105.63 sec
2013-09-11 18:58:28,882 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 105.63 sec
2013-09-11 18:58:29,888 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 105.63 sec
2013-09-11 18:58:30,893 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 105.63 sec
2013-09-11 18:58:31,899 Stage-1 map = 100%, reduce = 73%, Cumulative CPU 105.63 sec
2013-09-11 18:58:32,905 Stage-1 map = 100%, reduce = 73%, Cumulative CPU 105.63 sec
2013-09-11 18:58:34,657 Stage-1 map = 100%, reduce = 73%, Cumulative CPU 105.63 sec
2013-09-11 18:58:35,663 Stage-1 map = 100%, reduce = 78%, Cumulative CPU 105.63 sec
2013-09-11 18:58:36,668 Stage-1 map = 100%, reduce = 78%, Cumulative CPU 105.63 sec
2013-09-11 18:58:37,673 Stage-1 map = 100%, reduce = 78%, Cumulative CPU 105.63 sec
2013-09-11 18:58:38,679 Stage-1 map = 100%, reduce = 86%, Cumulative CPU 105.63 sec
2013-09-11 18:58:39,684 Stage-1 map = 100%, reduce = 86%, Cumulative CPU 105.63 sec
2013-09-11 18:58:40,690 Stage-1 map = 100%, reduce = 86%, Cumulative CPU 105.63 sec
2013-09-11 18:58:41,695 Stage-1 map = 100%, reduce = 95%, Cumulative CPU 105.63 sec
2013-09-11 18:58:42,702 Stage-1 map = 100%, reduce = 97%, Cumulative CPU 127.04 sec
2013-09-11 18:58:43,707 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 148.11 sec
2013-09-11 18:58:44,713 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 148.11 sec
MapReduce Total cumulative CPU time: 2 minutes 28 seconds 110 msec
Ended Job = job_201309101627_0303
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0304
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 18:58:47,302 Stage-2 map = 0%, reduce = 0%
2013-09-11 18:59:01,348 Stage-2 map = 28%, reduce = 0%, Cumulative CPU 17.05 sec
2013-09-11 18:59:02,353 Stage-2 map = 28%, reduce = 0%, Cumulative CPU 17.05 sec
2013-09-11 18:59:03,358 Stage-2 map = 28%, reduce = 0%, Cumulative CPU 17.05 sec
2013-09-11 18:59:04,362 Stage-2 map = 28%, reduce = 0%, Cumulative CPU 17.05 sec
2013-09-11 18:59:05,370 Stage-2 map = 28%, reduce = 0%, Cumulative CPU 17.05 sec
2013-09-11 18:59:06,375 Stage-2 map = 28%, reduce = 0%, Cumulative CPU 17.05 sec
2013-09-11 18:59:07,381 Stage-2 map = 50%, reduce = 0%, Cumulative CPU 17.05 sec
2013-09-11 18:59:08,385 Stage-2 map = 50%, reduce = 0%, Cumulative CPU 17.05 sec
2013-09-11 18:59:09,390 Stage-2 map = 50%, reduce = 0%, Cumulative CPU 17.05 sec
2013-09-11 18:59:10,395 Stage-2 map = 50%, reduce = 0%, Cumulative CPU 17.05 sec
2013-09-11 18:59:11,400 Stage-2 map = 50%, reduce = 0%, Cumulative CPU 17.05 sec
2013-09-11 18:59:12,405 Stage-2 map = 50%, reduce = 0%, Cumulative CPU 17.05 sec
2013-09-11 18:59:13,410 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 17.05 sec
2013-09-11 18:59:14,415 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 17.05 sec
2013-09-11 18:59:15,419 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 17.05 sec
2013-09-11 18:59:16,424 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 17.05 sec
2013-09-11 18:59:17,429 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 17.05 sec
2013-09-11 18:59:18,434 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 17.05 sec
2013-09-11 18:59:19,438 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 17.05 sec
2013-09-11 18:59:20,452 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 17.05 sec
2013-09-11 18:59:21,457 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 17.05 sec
2013-09-11 18:59:22,461 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 17.05 sec
2013-09-11 18:59:23,466 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 17.05 sec
2013-09-11 18:59:24,471 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 42.88 sec
2013-09-11 18:59:25,476 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 42.88 sec
2013-09-11 18:59:26,480 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 42.88 sec
2013-09-11 18:59:27,485 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 42.88 sec
2013-09-11 18:59:28,489 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 42.88 sec
2013-09-11 18:59:29,494 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 42.88 sec
2013-09-11 18:59:30,498 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 42.88 sec
2013-09-11 18:59:31,503 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 42.88 sec
2013-09-11 18:59:32,507 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 42.88 sec
2013-09-11 18:59:33,511 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 42.88 sec
2013-09-11 18:59:34,515 Stage-2 map = 100%, reduce = 69%, Cumulative CPU 42.88 sec
2013-09-11 18:59:35,520 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 49.19 sec
2013-09-11 18:59:36,525 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 49.19 sec
2013-09-11 18:59:37,530 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 49.19 sec
MapReduce Total cumulative CPU time: 49 seconds 190 msec
Ended Job = job_201309101627_0304
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 148.11 sec HDFS Read: 84944733 HDFS Write: 241346048 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 49.19 sec HDFS Read: 241349358 HDFS Write: 268 SUCCESS
Total MapReduce CPU Time Spent: 3 minutes 17 seconds 300 msec
OK
Time taken: 124.833 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_19630@mturlrep13_201309111859_2048292740.txt
hive> ;
hive> quit;
times: 2
query: SELECT UserID, minute(EventTime), SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, minute(EventTime), SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_20052@mturlrep13_201309111859_1861199290.txt
hive> SELECT UserID, minute(EventTime), SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, minute(EventTime), SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0305
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 18:59:51,492 Stage-1 map = 0%, reduce = 0%
2013-09-11 18:59:58,519 Stage-1 map = 7%, reduce = 0%
2013-09-11 19:00:01,534 Stage-1 map = 22%, reduce = 0%, Cumulative CPU 20.11 sec
2013-09-11 19:00:02,541 Stage-1 map = 22%, reduce = 0%, Cumulative CPU 20.11 sec
2013-09-11 19:00:03,550 Stage-1 map = 22%, reduce = 0%, Cumulative CPU 20.11 sec
2013-09-11 19:00:04,556 Stage-1 map = 29%, reduce = 0%, Cumulative CPU 20.11 sec
2013-09-11 19:00:05,562 Stage-1 map = 29%, reduce = 0%, Cumulative CPU 20.11 sec
2013-09-11 19:00:06,568 Stage-1 map = 29%, reduce = 0%, Cumulative CPU 20.11 sec
2013-09-11 19:00:07,574 Stage-1 map = 39%, reduce = 0%, Cumulative CPU 20.11 sec
2013-09-11 19:00:08,580 Stage-1 map = 39%, reduce = 0%, Cumulative CPU 20.11 sec
2013-09-11 19:00:09,586 Stage-1 map = 39%, reduce = 0%, Cumulative CPU 20.11 sec
2013-09-11 19:00:10,591 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 20.11 sec
2013-09-11 19:00:11,599 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 54.39 sec
2013-09-11 19:00:12,604 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 54.39 sec
2013-09-11 19:00:13,610 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 54.39 sec
2013-09-11 19:00:14,616 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 54.39 sec
2013-09-11 19:00:15,621 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 54.39 sec
2013-09-11 19:00:16,627 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 54.39 sec
2013-09-11 19:00:17,633 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 54.39 sec
2013-09-11 19:00:18,638 Stage-1 map = 54%, reduce = 8%, Cumulative CPU 54.39 sec
2013-09-11 19:00:19,644 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 54.39 sec
2013-09-11 19:00:20,649 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 54.39 sec
2013-09-11 19:00:21,655 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 54.39 sec
2013-09-11 19:00:22,660 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 54.39 sec
2013-09-11 19:00:23,666 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 54.39 sec
2013-09-11 19:00:24,671 Stage-1 map = 76%, reduce = 17%, Cumulative CPU 54.39 sec
2013-09-11 19:00:25,676 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 54.39 sec
2013-09-11 19:00:26,682 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 54.39 sec
2013-09-11 19:00:27,687 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 54.39 sec
2013-09-11 19:00:28,692 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 54.39 sec
2013-09-11 19:00:29,698 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 54.39 sec
2013-09-11 19:00:30,703 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 78.63 sec
2013-09-11 19:00:31,708 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 106.04 sec
2013-09-11 19:00:32,713 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 106.04 sec
2013-09-11 19:00:33,717 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 106.04 sec
2013-09-11 19:00:34,722 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 106.04 sec
2013-09-11 19:00:35,727 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 106.04 sec
2013-09-11 19:00:36,732 Stage-1 map = 100%, reduce = 50%, Cumulative CPU 106.04 sec
2013-09-11 19:00:37,736 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 106.04 sec
2013-09-11 19:00:38,742 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 106.04 sec
2013-09-11 19:00:39,746 Stage-1 map = 100%, reduce = 70%, Cumulative CPU 106.04 sec
2013-09-11 19:00:40,751 Stage-1 map = 100%, reduce = 74%, Cumulative CPU 106.04 sec
2013-09-11 19:00:41,756 Stage-1 map = 100%, reduce = 74%, Cumulative CPU 106.04 sec
2013-09-11 19:00:42,762 Stage-1 map = 100%, reduce = 78%, Cumulative CPU 106.04 sec
2013-09-11 19:00:43,767 Stage-1 map = 100%, reduce = 82%, Cumulative CPU 106.04 sec
2013-09-11 19:00:45,242 Stage-1 map = 100%, reduce = 82%, Cumulative CPU 106.04 sec
2013-09-11 19:00:46,247 Stage-1 map = 100%, reduce = 87%, Cumulative CPU 106.04 sec
2013-09-11 19:00:47,253 Stage-1 map = 100%, reduce = 87%, Cumulative CPU 106.04 sec
2013-09-11 19:00:48,259 Stage-1 map = 100%, reduce = 87%, Cumulative CPU 106.04 sec
2013-09-11 19:00:49,264 Stage-1 map = 100%, reduce = 96%, Cumulative CPU 106.04 sec
2013-09-11 19:00:50,275 Stage-1 map = 100%, reduce = 96%, Cumulative CPU 106.04 sec
2013-09-11 19:00:51,283 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 147.57 sec
2013-09-11 19:00:52,288 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 147.57 sec
MapReduce Total cumulative CPU time: 2 minutes 27 seconds 570 msec
Ended Job = job_201309101627_0305
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0306
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 19:00:54,856 Stage-2 map = 0%, reduce = 0%
2013-09-11 19:01:08,898 Stage-2 map = 28%, reduce = 0%
2013-09-11 19:01:15,016 Stage-2 map = 50%, reduce = 0%
2013-09-11 19:01:24,046 Stage-2 map = 78%, reduce = 0%
2013-09-11 19:01:32,074 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.92 sec
2013-09-11 19:01:33,080 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.92 sec
2013-09-11 19:01:34,085 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.92 sec
2013-09-11 19:01:35,090 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.92 sec
2013-09-11 19:01:36,095 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.92 sec
2013-09-11 19:01:37,100 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.92 sec
2013-09-11 19:01:38,105 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.92 sec
2013-09-11 19:01:39,110 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.92 sec
2013-09-11 19:01:40,116 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.92 sec
2013-09-11 19:01:41,121 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.92 sec
2013-09-11 19:01:42,126 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.92 sec
2013-09-11 19:01:43,131 Stage-2 map = 100%, reduce = 69%, Cumulative CPU 43.92 sec
2013-09-11 19:01:44,142 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 50.31 sec
2013-09-11 19:01:45,148 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 50.31 sec
MapReduce Total cumulative CPU time: 50 seconds 310 msec
Ended Job = job_201309101627_0306
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 147.57 sec HDFS Read: 84944733 HDFS Write: 241346048 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 50.31 sec HDFS Read: 241349354 HDFS Write: 268 SUCCESS
Total MapReduce CPU Time Spent: 3 minutes 17 seconds 880 msec
OK
Time taken: 122.023 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_23810@mturlrep13_201309111901_515641948.txt
hive> ;
hive> quit;
times: 3
query: SELECT UserID, minute(EventTime), SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, minute(EventTime), SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_24270@mturlrep13_201309111901_54212452.txt
hive> SELECT UserID, minute(EventTime), SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, minute(EventTime), SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0307
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 19:01:57,968 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:02:06,001 Stage-1 map = 7%, reduce = 0%
2013-09-11 19:02:09,013 Stage-1 map = 22%, reduce = 0%
2013-09-11 19:02:12,025 Stage-1 map = 29%, reduce = 0%
2013-09-11 19:02:15,039 Stage-1 map = 39%, reduce = 0%
2013-09-11 19:02:18,050 Stage-1 map = 43%, reduce = 0%
2013-09-11 19:02:19,060 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 55.29 sec
2013-09-11 19:02:20,067 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 55.29 sec
2013-09-11 19:02:21,073 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 55.29 sec
2013-09-11 19:02:22,078 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 55.29 sec
2013-09-11 19:02:23,083 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 55.29 sec
2013-09-11 19:02:24,089 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 55.29 sec
2013-09-11 19:02:25,094 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 55.29 sec
2013-09-11 19:02:26,099 Stage-1 map = 54%, reduce = 17%, Cumulative CPU 55.29 sec
2013-09-11 19:02:27,104 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 55.29 sec
2013-09-11 19:02:28,109 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 55.29 sec
2013-09-11 19:02:29,114 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 55.29 sec
2013-09-11 19:02:30,119 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 55.29 sec
2013-09-11 19:02:31,124 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 55.29 sec
2013-09-11 19:02:32,130 Stage-1 map = 76%, reduce = 17%, Cumulative CPU 55.29 sec
2013-09-11 19:02:33,135 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 55.29 sec
2013-09-11 19:02:34,140 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 55.29 sec
2013-09-11 19:02:35,146 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 55.29 sec
2013-09-11 19:02:36,151 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 55.29 sec
2013-09-11 19:02:37,156 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 78.76 sec
2013-09-11 19:02:38,160 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 78.76 sec
2013-09-11 19:02:39,165 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 105.48 sec
2013-09-11 19:02:40,169 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 105.48 sec
2013-09-11 19:02:41,174 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 105.48 sec
2013-09-11 19:02:42,178 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 105.48 sec
2013-09-11 19:02:43,183 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 105.48 sec
2013-09-11 19:02:44,188 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 105.48 sec
2013-09-11 19:02:45,193 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 105.48 sec
2013-09-11 19:02:46,198 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 105.48 sec
2013-09-11 19:02:47,202 Stage-1 map = 100%, reduce = 73%, Cumulative CPU 105.48 sec
2013-09-11 19:02:48,207 Stage-1 map = 100%, reduce = 73%, Cumulative CPU 105.48 sec
2013-09-11 19:02:49,212 Stage-1 map = 100%, reduce = 73%, Cumulative CPU 105.48 sec
2013-09-11 19:02:50,845 Stage-1 map = 100%, reduce = 77%, Cumulative CPU 105.48 sec
2013-09-11 19:02:51,850 Stage-1 map = 100%, reduce = 81%, Cumulative CPU 105.48 sec
2013-09-11 19:02:52,855 Stage-1 map = 100%, reduce = 84%, Cumulative CPU 105.48 sec
2013-09-11 19:02:53,861 Stage-1 map = 100%, reduce = 84%, Cumulative CPU 105.48 sec
2013-09-11 19:02:54,866 Stage-1 map = 100%, reduce = 88%, Cumulative CPU 105.48 sec
2013-09-11 19:02:55,871 Stage-1 map = 100%, reduce = 92%, Cumulative CPU 105.48 sec
2013-09-11 19:02:56,875 Stage-1 map = 100%, reduce = 92%, Cumulative CPU 105.48 sec
2013-09-11 19:02:57,883 Stage-1 map = 100%, reduce = 96%, Cumulative CPU 105.48 sec
2013-09-11 19:02:58,890 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 147.84 sec
2013-09-11 19:02:59,895 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 147.84 sec
MapReduce Total cumulative CPU time: 2 minutes 27 seconds 840 msec
Ended Job = job_201309101627_0307
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0308
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 19:03:03,355 Stage-2 map = 0%, reduce = 0%
2013-09-11 19:03:16,392 Stage-2 map = 28%, reduce = 0%
2013-09-11 19:03:22,408 Stage-2 map = 50%, reduce = 0%
2013-09-11 19:03:31,433 Stage-2 map = 78%, reduce = 0%
2013-09-11 19:03:39,456 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.73 sec
2013-09-11 19:03:40,460 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.73 sec
2013-09-11 19:03:41,464 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.73 sec
2013-09-11 19:03:42,469 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.73 sec
2013-09-11 19:03:43,473 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.73 sec
2013-09-11 19:03:44,478 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.73 sec
2013-09-11 19:03:45,482 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.73 sec
2013-09-11 19:03:46,487 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.73 sec
2013-09-11 19:03:47,491 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.73 sec
2013-09-11 19:03:48,496 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.73 sec
2013-09-11 19:03:49,500 Stage-2 map = 100%, reduce = 68%, Cumulative CPU 43.73 sec
2013-09-11 19:03:50,504 Stage-2 map = 100%, reduce = 68%, Cumulative CPU 43.73 sec
2013-09-11 19:03:51,509 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 50.23 sec
2013-09-11 19:03:52,514 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 50.23 sec
MapReduce Total cumulative CPU time: 50 seconds 230 msec
Ended Job = job_201309101627_0308
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 147.84 sec HDFS Read: 84944733 HDFS Write: 241346048 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 50.23 sec HDFS Read: 241349358 HDFS Write: 268 SUCCESS
Total MapReduce CPU Time Spent: 3 minutes 18 seconds 70 msec
OK
Time taken: 121.826 seconds, Fetched: 10 row(s)
hive> quit;
-- an even more complex aggregation; not worth running on large tables.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_27368@mturlrep13_201309111904_1529836462.txt
hive> ;
hive> quit;
times: 1
query: SELECT UserID FROM hits_10m WHERE UserID = 12345678901234567890;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_27852@mturlrep13_201309111904_1081537639.txt
hive> SELECT UserID FROM hits_10m WHERE UserID = 12345678901234567890;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0309
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 0
2013-09-11 19:04:14,784 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:04:19,814 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 7.75 sec
2013-09-11 19:04:20,820 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 7.75 sec
2013-09-11 19:04:21,827 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 7.75 sec
2013-09-11 19:04:22,832 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 7.75 sec
2013-09-11 19:04:23,838 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 7.75 sec
2013-09-11 19:04:24,843 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 15.39 sec
2013-09-11 19:04:25,848 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 15.39 sec
2013-09-11 19:04:26,854 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 15.39 sec
MapReduce Total cumulative CPU time: 15 seconds 390 msec
Ended Job = job_201309101627_0309
MapReduce Jobs Launched:
Job 0: Map: 4 Cumulative CPU: 15.39 sec HDFS Read: 57312623 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 15 seconds 390 msec
OK
Time taken: 21.676 seconds
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_29092@mturlrep13_201309111904_253068464.txt
hive> ;
hive> quit;
times: 2
query: SELECT UserID FROM hits_10m WHERE UserID = 12345678901234567890;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_29537@mturlrep13_201309111904_1909407625.txt
hive> SELECT UserID FROM hits_10m WHERE UserID = 12345678901234567890;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0310
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 0
2013-09-11 19:04:40,721 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:04:44,746 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 7.65 sec
2013-09-11 19:04:45,753 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 7.65 sec
2013-09-11 19:04:46,761 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 7.65 sec
2013-09-11 19:04:47,767 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 7.65 sec
2013-09-11 19:04:48,774 Stage-1 map = 75%, reduce = 0%, Cumulative CPU 11.23 sec
2013-09-11 19:04:49,780 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 15.19 sec
2013-09-11 19:04:50,786 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 15.19 sec
MapReduce Total cumulative CPU time: 15 seconds 190 msec
Ended Job = job_201309101627_0310
MapReduce Jobs Launched:
Job 0: Map: 4 Cumulative CPU: 15.19 sec HDFS Read: 57312623 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 15 seconds 190 msec
OK
Time taken: 18.26 seconds
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_30764@mturlrep13_201309111904_2146737469.txt
hive> ;
hive> quit;
times: 3
query: SELECT UserID FROM hits_10m WHERE UserID = 12345678901234567890;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_31195@mturlrep13_201309111904_832602397.txt
hive> SELECT UserID FROM hits_10m WHERE UserID = 12345678901234567890;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0311
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 0
2013-09-11 19:05:04,452 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:05:09,482 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 7.95 sec
2013-09-11 19:05:10,490 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 7.95 sec
2013-09-11 19:05:11,498 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 7.95 sec
2013-09-11 19:05:12,504 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 7.95 sec
2013-09-11 19:05:13,510 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 15.6 sec
2013-09-11 19:05:14,516 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 15.6 sec
2013-09-11 19:05:15,522 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 15.6 sec
MapReduce Total cumulative CPU time: 15 seconds 600 msec
Ended Job = job_201309101627_0311
MapReduce Jobs Launched:
Job 0: Map: 4 Cumulative CPU: 15.6 sec HDFS Read: 57312623 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 15 seconds 600 msec
OK
Time taken: 19.128 seconds
hive> quit;
-- heavy filtering on a UInt64 column.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_667@mturlrep13_201309111905_1082929224.txt
hive> ;
hive> quit;
times: 1
query: SELECT count(*) AS c FROM hits_10m WHERE URL LIKE '%metrika%';
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_1182@mturlrep13_201309111905_294903948.txt
hive> SELECT count(*) AS c FROM hits_10m WHERE URL LIKE '%metrika%';;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0312
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 19:05:36,814 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:05:43,841 Stage-1 map = 43%, reduce = 0%
2013-09-11 19:05:44,854 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.28 sec
2013-09-11 19:05:45,861 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.28 sec
2013-09-11 19:05:46,869 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.28 sec
2013-09-11 19:05:47,876 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.28 sec
2013-09-11 19:05:48,882 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.28 sec
2013-09-11 19:05:49,888 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.28 sec
2013-09-11 19:05:50,894 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.28 sec
2013-09-11 19:05:51,901 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.6 sec
2013-09-11 19:05:52,906 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.6 sec
2013-09-11 19:05:53,912 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.6 sec
2013-09-11 19:05:54,917 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.6 sec
2013-09-11 19:05:55,923 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.6 sec
2013-09-11 19:05:56,928 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.6 sec
2013-09-11 19:05:57,935 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 37.72 sec
2013-09-11 19:05:58,941 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 37.72 sec
MapReduce Total cumulative CPU time: 37 seconds 720 msec
Ended Job = job_201309101627_0312
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 37.72 sec HDFS Read: 109451651 HDFS Write: 5 SUCCESS
Total MapReduce CPU Time Spent: 37 seconds 720 msec
OK
8428
Time taken: 32.101 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_2715@mturlrep13_201309111906_1847551068.txt
hive> ;
hive> quit;
times: 2
query: SELECT count(*) AS c FROM hits_10m WHERE URL LIKE '%metrika%';
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_3181@mturlrep13_201309111906_1226066334.txt
hive> SELECT count(*) AS c FROM hits_10m WHERE URL LIKE '%metrika%';;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0313
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 19:06:12,741 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:06:19,778 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.88 sec
2013-09-11 19:06:20,785 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.88 sec
2013-09-11 19:06:21,792 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.88 sec
2013-09-11 19:06:22,798 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.88 sec
2013-09-11 19:06:23,804 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.88 sec
2013-09-11 19:06:24,810 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.88 sec
2013-09-11 19:06:25,816 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.88 sec
2013-09-11 19:06:26,821 Stage-1 map = 75%, reduce = 0%, Cumulative CPU 26.71 sec
2013-09-11 19:06:27,827 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.82 sec
2013-09-11 19:06:28,832 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.82 sec
2013-09-11 19:06:29,837 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.82 sec
2013-09-11 19:06:30,842 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.82 sec
2013-09-11 19:06:31,847 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.82 sec
2013-09-11 19:06:32,855 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 38.11 sec
2013-09-11 19:06:33,861 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 38.11 sec
2013-09-11 19:06:34,866 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 38.11 sec
MapReduce Total cumulative CPU time: 38 seconds 110 msec
Ended Job = job_201309101627_0313
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 38.11 sec HDFS Read: 109451651 HDFS Write: 5 SUCCESS
Total MapReduce CPU Time Spent: 38 seconds 110 msec
OK
8428
Time taken: 30.408 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_4693@mturlrep13_201309111906_1529180994.txt
hive> ;
hive> quit;
times: 3
query: SELECT count(*) AS c FROM hits_10m WHERE URL LIKE '%metrika%';
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_5305@mturlrep13_201309111906_779445337.txt
hive> SELECT count(*) AS c FROM hits_10m WHERE URL LIKE '%metrika%';;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0314
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 19:06:48,153 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:06:56,192 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.5 sec
2013-09-11 19:06:57,200 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.5 sec
2013-09-11 19:06:58,207 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.5 sec
2013-09-11 19:06:59,213 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.5 sec
2013-09-11 19:07:00,219 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.5 sec
2013-09-11 19:07:01,225 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.5 sec
2013-09-11 19:07:02,232 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.5 sec
2013-09-11 19:07:03,239 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.91 sec
2013-09-11 19:07:04,244 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.91 sec
2013-09-11 19:07:05,250 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.91 sec
2013-09-11 19:07:06,255 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.91 sec
2013-09-11 19:07:07,261 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.91 sec
2013-09-11 19:07:08,266 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.91 sec
2013-09-11 19:07:09,274 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 37.92 sec
2013-09-11 19:07:10,280 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 37.92 sec
2013-09-11 19:07:11,285 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 37.92 sec
MapReduce Total cumulative CPU time: 37 seconds 920 msec
Ended Job = job_201309101627_0314
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 37.92 sec HDFS Read: 109451651 HDFS Write: 5 SUCCESS
Total MapReduce CPU Time Spent: 37 seconds 920 msec
OK
8428
Time taken: 30.38 seconds, Fetched: 1 row(s)
hive> quit;
-- filtering by substring search within a string.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_6820@mturlrep13_201309111907_920541373.txt
hive> ;
hive> quit;
times: 1
query: SELECT SearchPhrase, MAX(URL), count(*) AS c FROM hits_10m WHERE URL LIKE '%metrika%' AND SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_7269@mturlrep13_201309111907_775629306.txt
hive> SELECT SearchPhrase, MAX(URL), count(*) AS c FROM hits_10m WHERE URL LIKE '%metrika%' AND SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0315
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 19:07:32,401 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:07:39,430 Stage-1 map = 32%, reduce = 0%
2013-09-11 19:07:40,443 Stage-1 map = 39%, reduce = 0%, Cumulative CPU 9.06 sec
2013-09-11 19:07:41,451 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.91 sec
2013-09-11 19:07:42,458 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.91 sec
2013-09-11 19:07:43,465 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.91 sec
2013-09-11 19:07:44,471 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.91 sec
2013-09-11 19:07:45,478 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.91 sec
2013-09-11 19:07:46,485 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.91 sec
2013-09-11 19:07:47,492 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.91 sec
2013-09-11 19:07:48,498 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 18.91 sec
2013-09-11 19:07:49,504 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.92 sec
2013-09-11 19:07:50,509 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.92 sec
2013-09-11 19:07:51,515 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.92 sec
2013-09-11 19:07:52,521 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.92 sec
2013-09-11 19:07:53,527 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.92 sec
2013-09-11 19:07:54,534 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 42.18 sec
2013-09-11 19:07:55,540 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 42.18 sec
MapReduce Total cumulative CPU time: 42 seconds 180 msec
Ended Job = job_201309101627_0315
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0316
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 19:07:59,142 Stage-2 map = 0%, reduce = 0%
2013-09-11 19:08:01,152 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.82 sec
2013-09-11 19:08:02,158 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.82 sec
2013-09-11 19:08:03,163 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.82 sec
2013-09-11 19:08:04,168 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.82 sec
2013-09-11 19:08:05,174 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.82 sec
2013-09-11 19:08:06,181 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.82 sec
2013-09-11 19:08:07,186 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.82 sec
2013-09-11 19:08:08,191 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.24 sec
2013-09-11 19:08:09,197 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.24 sec
2013-09-11 19:08:10,203 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.24 sec
MapReduce Total cumulative CPU time: 2 seconds 240 msec
Ended Job = job_201309101627_0316
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 42.18 sec HDFS Read: 136675723 HDFS Write: 5172 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.24 sec HDFS Read: 5941 HDFS Write: 984 SUCCESS
Total MapReduce CPU Time Spent: 44 seconds 420 msec
OK
Time taken: 47.807 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_9501@mturlrep13_201309111908_1204289107.txt
hive> ;
hive> quit;
times: 2
query: SELECT SearchPhrase, MAX(URL), count(*) AS c FROM hits_10m WHERE URL LIKE '%metrika%' AND SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_9961@mturlrep13_201309111908_623470333.txt
hive> SELECT SearchPhrase, MAX(URL), count(*) AS c FROM hits_10m WHERE URL LIKE '%metrika%' AND SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0317
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 19:08:23,302 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:08:31,343 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.52 sec
2013-09-11 19:08:32,351 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.52 sec
2013-09-11 19:08:33,359 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.52 sec
2013-09-11 19:08:34,366 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.52 sec
2013-09-11 19:08:35,373 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.52 sec
2013-09-11 19:08:36,379 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.52 sec
2013-09-11 19:08:37,386 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.52 sec
2013-09-11 19:08:38,394 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.52 sec
2013-09-11 19:08:39,400 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.06 sec
2013-09-11 19:08:40,406 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.06 sec
2013-09-11 19:08:41,412 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.06 sec
2013-09-11 19:08:42,419 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.06 sec
2013-09-11 19:08:43,426 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.06 sec
2013-09-11 19:08:44,433 Stage-1 map = 100%, reduce = 58%, Cumulative CPU 39.17 sec
2013-09-11 19:08:45,440 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 41.39 sec
2013-09-11 19:08:46,446 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 41.39 sec
MapReduce Total cumulative CPU time: 41 seconds 390 msec
Ended Job = job_201309101627_0317
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0318
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 19:08:48,932 Stage-2 map = 0%, reduce = 0%
2013-09-11 19:08:50,941 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.78 sec
2013-09-11 19:08:51,947 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.78 sec
2013-09-11 19:08:52,954 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.78 sec
2013-09-11 19:08:53,960 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.78 sec
2013-09-11 19:08:54,965 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.78 sec
2013-09-11 19:08:55,971 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.78 sec
2013-09-11 19:08:56,976 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.78 sec
2013-09-11 19:08:57,983 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 0.78 sec
2013-09-11 19:08:58,989 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.3 sec
2013-09-11 19:08:59,995 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.3 sec
2013-09-11 19:09:01,000 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.3 sec
MapReduce Total cumulative CPU time: 2 seconds 300 msec
Ended Job = job_201309101627_0318
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 41.39 sec HDFS Read: 136675723 HDFS Write: 5172 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.3 sec HDFS Read: 5941 HDFS Write: 984 SUCCESS
Total MapReduce CPU Time Spent: 43 seconds 690 msec
OK
Time taken: 45.178 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_12180@mturlrep13_201309111909_1253260332.txt
hive> ;
hive> quit;
times: 3
query: SELECT SearchPhrase, MAX(URL), count(*) AS c FROM hits_10m WHERE URL LIKE '%metrika%' AND SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_12625@mturlrep13_201309111909_397580064.txt
hive> SELECT SearchPhrase, MAX(URL), count(*) AS c FROM hits_10m WHERE URL LIKE '%metrika%' AND SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0319
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 19:09:13,988 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:09:22,031 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.22 sec
2013-09-11 19:09:23,039 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.22 sec
2013-09-11 19:09:24,047 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.22 sec
2013-09-11 19:09:25,054 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.22 sec
2013-09-11 19:09:26,060 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.22 sec
2013-09-11 19:09:27,066 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.22 sec
2013-09-11 19:09:28,072 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.22 sec
2013-09-11 19:09:29,080 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.22 sec
2013-09-11 19:09:30,086 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.92 sec
2013-09-11 19:09:31,091 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.92 sec
2013-09-11 19:09:32,097 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.92 sec
2013-09-11 19:09:33,103 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.92 sec
2013-09-11 19:09:34,108 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.92 sec
2013-09-11 19:09:35,116 Stage-1 map = 100%, reduce = 58%, Cumulative CPU 38.05 sec
2013-09-11 19:09:36,160 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 40.26 sec
2013-09-11 19:09:37,167 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 40.26 sec
MapReduce Total cumulative CPU time: 40 seconds 260 msec
Ended Job = job_201309101627_0319
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0320
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 19:09:39,694 Stage-2 map = 0%, reduce = 0%
2013-09-11 19:09:41,703 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.75 sec
2013-09-11 19:09:42,708 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.75 sec
2013-09-11 19:09:43,713 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.75 sec
2013-09-11 19:09:44,718 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.75 sec
2013-09-11 19:09:45,722 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.75 sec
2013-09-11 19:09:46,727 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.75 sec
2013-09-11 19:09:47,732 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.75 sec
2013-09-11 19:09:48,737 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.75 sec
2013-09-11 19:09:49,743 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.23 sec
2013-09-11 19:09:50,749 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.23 sec
MapReduce Total cumulative CPU time: 2 seconds 230 msec
Ended Job = job_201309101627_0320
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 40.26 sec HDFS Read: 136675723 HDFS Write: 5172 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.23 sec HDFS Read: 5941 HDFS Write: 984 SUCCESS
Total MapReduce CPU Time Spent: 42 seconds 490 msec
OK
Time taken: 44.192 seconds, Fetched: 10 row(s)
hive> quit;
-- retrieving large columns, filtering by string.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_14855@mturlrep13_201309111909_548656441.txt
hive> ;
hive> quit;
times: 1
query: SELECT SearchPhrase, MAX(URL), MAX(Title), count(*) AS c, count(DISTINCT UserID) FROM hits_10m WHERE Title LIKE '%Яндекс%' AND URL NOT LIKE '%.yandex.%' AND SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_15305@mturlrep13_201309111910_808798094.txt
hive> SELECT SearchPhrase, MAX(URL), MAX(Title), count(*) AS c, count(DISTINCT UserID) FROM hits_10m WHERE Title LIKE '%Яндекс%' AND URL NOT LIKE '%.yandex.%' AND SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0321
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 19:10:12,656 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:10:19,683 Stage-1 map = 22%, reduce = 0%
2013-09-11 19:10:22,695 Stage-1 map = 43%, reduce = 0%
2013-09-11 19:10:23,708 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.32 sec
2013-09-11 19:10:24,746 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.32 sec
2013-09-11 19:10:25,755 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.32 sec
2013-09-11 19:10:26,761 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.32 sec
2013-09-11 19:10:27,769 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.32 sec
2013-09-11 19:10:28,775 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.32 sec
2013-09-11 19:10:29,781 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.32 sec
2013-09-11 19:10:30,787 Stage-1 map = 73%, reduce = 8%, Cumulative CPU 24.32 sec
2013-09-11 19:10:31,793 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 24.32 sec
2013-09-11 19:10:32,799 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 24.32 sec
2013-09-11 19:10:33,804 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 36.21 sec
2013-09-11 19:10:34,810 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.39 sec
2013-09-11 19:10:35,816 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.39 sec
2013-09-11 19:10:36,824 Stage-1 map = 100%, reduce = 58%, Cumulative CPU 50.53 sec
2013-09-11 19:10:37,829 Stage-1 map = 100%, reduce = 58%, Cumulative CPU 50.53 sec
2013-09-11 19:10:38,835 Stage-1 map = 100%, reduce = 58%, Cumulative CPU 50.53 sec
2013-09-11 19:10:39,841 Stage-1 map = 100%, reduce = 63%, Cumulative CPU 50.53 sec
2013-09-11 19:10:40,848 Stage-1 map = 100%, reduce = 63%, Cumulative CPU 50.53 sec
2013-09-11 19:10:41,855 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 52.6 sec
2013-09-11 19:10:42,861 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 52.6 sec
2013-09-11 19:10:43,867 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 52.6 sec
MapReduce Total cumulative CPU time: 52 seconds 600 msec
Ended Job = job_201309101627_0321
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0322
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 19:10:47,360 Stage-2 map = 0%, reduce = 0%
2013-09-11 19:10:48,366 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-11 19:10:49,371 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-11 19:10:50,377 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-11 19:10:51,382 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-11 19:10:52,387 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-11 19:10:53,392 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-11 19:10:54,396 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-11 19:10:55,402 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-11 19:10:56,407 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.2 sec
2013-09-11 19:10:57,412 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.2 sec
2013-09-11 19:10:58,418 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.2 sec
MapReduce Total cumulative CPU time: 2 seconds 200 msec
Ended Job = job_201309101627_0322
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 52.6 sec HDFS Read: 298803179 HDFS Write: 12221 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.2 sec HDFS Read: 12990 HDFS Write: 2646 SUCCESS
Total MapReduce CPU Time Spent: 54 seconds 800 msec
OK
Time taken: 56.389 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_18714@mturlrep13_201309111911_81889019.txt
hive> ;
hive> quit;
times: 2
query: SELECT SearchPhrase, MAX(URL), MAX(Title), count(*) AS c, count(DISTINCT UserID) FROM hits_10m WHERE Title LIKE '%Яндекс%' AND URL NOT LIKE '%.yandex.%' AND SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_19205@mturlrep13_201309111911_2026522553.txt
hive> SELECT SearchPhrase, MAX(URL), MAX(Title), count(*) AS c, count(DISTINCT UserID) FROM hits_10m WHERE Title LIKE '%Яндекс%' AND URL NOT LIKE '%.yandex.%' AND SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0323
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 19:11:11,864 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:11:19,897 Stage-1 map = 29%, reduce = 0%
2013-09-11 19:11:22,917 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 25.28 sec
2013-09-11 19:11:23,924 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 25.28 sec
2013-09-11 19:11:24,933 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 25.28 sec
2013-09-11 19:11:25,939 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 25.28 sec
2013-09-11 19:11:26,946 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 25.28 sec
2013-09-11 19:11:27,952 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 25.28 sec
2013-09-11 19:11:28,958 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 25.28 sec
2013-09-11 19:11:29,963 Stage-1 map = 76%, reduce = 17%, Cumulative CPU 25.28 sec
2013-09-11 19:11:30,969 Stage-1 map = 76%, reduce = 17%, Cumulative CPU 25.28 sec
2013-09-11 19:11:31,974 Stage-1 map = 89%, reduce = 17%, Cumulative CPU 36.84 sec
2013-09-11 19:11:32,980 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 49.31 sec
2013-09-11 19:11:33,985 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 49.31 sec
2013-09-11 19:11:34,991 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 49.31 sec
2013-09-11 19:11:35,998 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 53.48 sec
2013-09-11 19:11:37,004 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 53.48 sec
2013-09-11 19:11:38,011 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 53.48 sec
MapReduce Total cumulative CPU time: 53 seconds 480 msec
Ended Job = job_201309101627_0323
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0324
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 19:11:40,494 Stage-2 map = 0%, reduce = 0%
2013-09-11 19:11:42,503 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-11 19:11:43,508 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-11 19:11:44,514 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-11 19:11:45,519 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-11 19:11:46,524 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-11 19:11:47,529 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-11 19:11:48,534 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-11 19:11:49,540 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 0.76 sec
2013-09-11 19:11:50,545 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.15 sec
2013-09-11 19:11:51,551 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.15 sec
MapReduce Total cumulative CPU time: 2 seconds 150 msec
Ended Job = job_201309101627_0324
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 53.48 sec HDFS Read: 298803179 HDFS Write: 12221 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.15 sec HDFS Read: 12988 HDFS Write: 2646 SUCCESS
Total MapReduce CPU Time Spent: 55 seconds 630 msec
OK
Time taken: 47.357 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_21577@mturlrep13_201309111911_1538548655.txt
hive> ;
hive> quit;
times: 3
query: SELECT SearchPhrase, MAX(URL), MAX(Title), count(*) AS c, count(DISTINCT UserID) FROM hits_10m WHERE Title LIKE '%Яндекс%' AND URL NOT LIKE '%.yandex.%' AND SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_22017@mturlrep13_201309111911_1851586484.txt
hive> SELECT SearchPhrase, MAX(URL), MAX(Title), count(*) AS c, count(DISTINCT UserID) FROM hits_10m WHERE Title LIKE '%Яндекс%' AND URL NOT LIKE '%.yandex.%' AND SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0325
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 19:12:06,075 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:12:13,105 Stage-1 map = 29%, reduce = 0%
2013-09-11 19:12:15,124 Stage-1 map = 39%, reduce = 0%, Cumulative CPU 12.29 sec
2013-09-11 19:12:16,133 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.56 sec
2013-09-11 19:12:17,141 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.56 sec
2013-09-11 19:12:18,148 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.56 sec
2013-09-11 19:12:19,155 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.56 sec
2013-09-11 19:12:20,162 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.56 sec
2013-09-11 19:12:21,169 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.56 sec
2013-09-11 19:12:22,176 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.56 sec
2013-09-11 19:12:23,182 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 24.56 sec
2013-09-11 19:12:24,188 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 24.56 sec
2013-09-11 19:12:25,194 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 24.56 sec
2013-09-11 19:12:26,209 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.93 sec
2013-09-11 19:12:27,215 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.93 sec
2013-09-11 19:12:28,221 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.93 sec
2013-09-11 19:12:29,229 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 53.09 sec
2013-09-11 19:12:30,235 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 53.09 sec
MapReduce Total cumulative CPU time: 53 seconds 90 msec
Ended Job = job_201309101627_0325
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0326
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 19:12:33,748 Stage-2 map = 0%, reduce = 0%
2013-09-11 19:12:34,753 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.88 sec
2013-09-11 19:12:35,759 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.88 sec
2013-09-11 19:12:36,764 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.88 sec
2013-09-11 19:12:37,769 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.88 sec
2013-09-11 19:12:38,775 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.88 sec
2013-09-11 19:12:39,780 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.88 sec
2013-09-11 19:12:40,785 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.88 sec
2013-09-11 19:12:41,790 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.88 sec
2013-09-11 19:12:42,796 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.28 sec
2013-09-11 19:12:43,802 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.28 sec
2013-09-11 19:12:44,808 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.28 sec
MapReduce Total cumulative CPU time: 2 seconds 280 msec
Ended Job = job_201309101627_0326
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 53.09 sec HDFS Read: 298803179 HDFS Write: 12221 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.28 sec HDFS Read: 12990 HDFS Write: 2646 SUCCESS
Total MapReduce CPU Time Spent: 55 seconds 370 msec
OK
Time taken: 47.653 seconds, Fetched: 10 row(s)
hive> quit;
-- slightly larger columns.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_24402@mturlrep13_201309111912_1264895416.txt
hive> ;
hive> quit;
times: 1
query: SELECT * FROM hits_10m WHERE URL LIKE '%metrika%' ORDER BY EventTime LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_24883@mturlrep13_201309111912_1271282115.txt
hive> SELECT * FROM hits_10m WHERE URL LIKE '%metrika%' ORDER BY EventTime LIMIT 10;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0327
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 19:13:05,825 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:13:15,872 Stage-1 map = 7%, reduce = 0%
2013-09-11 19:13:18,884 Stage-1 map = 14%, reduce = 0%
2013-09-11 19:13:21,898 Stage-1 map = 22%, reduce = 0%
2013-09-11 19:13:27,920 Stage-1 map = 29%, reduce = 0%
2013-09-11 19:13:30,932 Stage-1 map = 36%, reduce = 0%
2013-09-11 19:13:36,953 Stage-1 map = 43%, reduce = 0%
2013-09-11 19:13:37,964 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 93.08 sec
2013-09-11 19:13:38,971 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 93.08 sec
2013-09-11 19:13:39,978 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 93.08 sec
2013-09-11 19:13:40,984 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 93.08 sec
2013-09-11 19:13:41,990 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 93.08 sec
2013-09-11 19:13:42,995 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 93.08 sec
2013-09-11 19:13:44,001 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 93.08 sec
2013-09-11 19:13:45,007 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 93.08 sec
2013-09-11 19:13:46,011 Stage-1 map = 50%, reduce = 17%, Cumulative CPU 93.08 sec
2013-09-11 19:13:47,016 Stage-1 map = 50%, reduce = 17%, Cumulative CPU 93.08 sec
2013-09-11 19:13:48,020 Stage-1 map = 50%, reduce = 17%, Cumulative CPU 93.08 sec
2013-09-11 19:13:49,060 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 93.08 sec
2013-09-11 19:13:50,074 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 93.08 sec
2013-09-11 19:13:51,078 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 93.08 sec
2013-09-11 19:13:52,082 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 93.08 sec
2013-09-11 19:13:53,086 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 93.08 sec
2013-09-11 19:13:54,090 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 93.08 sec
2013-09-11 19:13:55,095 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 93.08 sec
2013-09-11 19:13:56,100 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 93.08 sec
2013-09-11 19:13:57,105 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 93.08 sec
2013-09-11 19:13:58,110 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 93.08 sec
2013-09-11 19:13:59,115 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 93.08 sec
2013-09-11 19:14:00,120 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 93.08 sec
2013-09-11 19:14:01,125 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 93.08 sec
2013-09-11 19:14:02,130 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 93.08 sec
2013-09-11 19:14:03,140 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 155.04 sec
2013-09-11 19:14:04,145 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 155.04 sec
2013-09-11 19:14:05,153 Stage-1 map = 93%, reduce = 17%, Cumulative CPU 159.81 sec
2013-09-11 19:14:06,158 Stage-1 map = 93%, reduce = 17%, Cumulative CPU 159.81 sec
2013-09-11 19:14:07,163 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 159.81 sec
2013-09-11 19:14:08,168 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 159.81 sec
2013-09-11 19:14:09,173 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 159.81 sec
2013-09-11 19:14:10,178 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 175.38 sec
2013-09-11 19:14:11,183 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 175.38 sec
2013-09-11 19:14:12,188 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 175.38 sec
2013-09-11 19:14:13,193 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 175.38 sec
2013-09-11 19:14:14,197 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 175.38 sec
2013-09-11 19:14:15,202 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 175.38 sec
2013-09-11 19:14:16,210 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 177.7 sec
2013-09-11 19:14:17,215 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 177.7 sec
2013-09-11 19:14:18,221 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 177.7 sec
MapReduce Total cumulative CPU time: 2 minutes 57 seconds 700 msec
Ended Job = job_201309101627_0327
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 177.7 sec HDFS Read: 1082943442 HDFS Write: 5318 SUCCESS
Total MapReduce CPU Time Spent: 2 minutes 57 seconds 700 msec
OK
Time taken: 82.942 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_26833@mturlrep13_201309111914_284626838.txt
hive> ;
hive> quit;
times: 2
query: SELECT * FROM hits_10m WHERE URL LIKE '%metrika%' ORDER BY EventTime LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_27262@mturlrep13_201309111914_572123966.txt
hive> SELECT * FROM hits_10m WHERE URL LIKE '%metrika%' ORDER BY EventTime LIMIT 10;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0328
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 19:14:31,952 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:14:39,983 Stage-1 map = 4%, reduce = 0%
2013-09-11 19:14:42,994 Stage-1 map = 7%, reduce = 0%
2013-09-11 19:14:46,006 Stage-1 map = 14%, reduce = 0%
2013-09-11 19:14:49,018 Stage-1 map = 22%, reduce = 0%
2013-09-11 19:14:52,064 Stage-1 map = 29%, reduce = 0%
2013-09-11 19:14:55,075 Stage-1 map = 36%, reduce = 0%
2013-09-11 19:15:01,097 Stage-1 map = 43%, reduce = 0%
2013-09-11 19:15:02,183 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 76.71 sec
2013-09-11 19:15:03,191 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 76.71 sec
2013-09-11 19:15:04,196 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 76.71 sec
2013-09-11 19:15:05,202 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 76.71 sec
2013-09-11 19:15:06,208 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 76.71 sec
2013-09-11 19:15:07,213 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 76.71 sec
2013-09-11 19:15:08,219 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 76.71 sec
2013-09-11 19:15:09,224 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 76.71 sec
2013-09-11 19:15:10,230 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 76.71 sec
2013-09-11 19:15:11,235 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 76.71 sec
2013-09-11 19:15:12,240 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 76.71 sec
2013-09-11 19:15:13,247 Stage-1 map = 50%, reduce = 17%, Cumulative CPU 76.71 sec
2013-09-11 19:15:14,252 Stage-1 map = 50%, reduce = 17%, Cumulative CPU 76.71 sec
2013-09-11 19:15:15,257 Stage-1 map = 50%, reduce = 17%, Cumulative CPU 76.71 sec
2013-09-11 19:15:16,263 Stage-1 map = 50%, reduce = 17%, Cumulative CPU 76.71 sec
2013-09-11 19:15:17,267 Stage-1 map = 50%, reduce = 17%, Cumulative CPU 76.71 sec
2013-09-11 19:15:18,272 Stage-1 map = 50%, reduce = 17%, Cumulative CPU 76.71 sec
2013-09-11 19:15:19,276 Stage-1 map = 50%, reduce = 17%, Cumulative CPU 76.71 sec
2013-09-11 19:15:20,281 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 76.71 sec
2013-09-11 19:15:21,286 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 76.71 sec
2013-09-11 19:15:22,291 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 76.71 sec
2013-09-11 19:15:23,296 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 76.71 sec
2013-09-11 19:15:24,306 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 76.71 sec
2013-09-11 19:15:25,311 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 76.71 sec
2013-09-11 19:15:26,316 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 76.71 sec
2013-09-11 19:15:27,381 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 76.71 sec
2013-09-11 19:15:28,387 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 76.71 sec
2013-09-11 19:15:29,392 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 76.71 sec
2013-09-11 19:15:30,396 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 76.71 sec
2013-09-11 19:15:31,401 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 76.71 sec
2013-09-11 19:15:32,406 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 76.71 sec
2013-09-11 19:15:33,411 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 76.71 sec
2013-09-11 19:15:34,416 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 76.71 sec
2013-09-11 19:15:35,421 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 76.71 sec
2013-09-11 19:15:36,425 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 76.71 sec
2013-09-11 19:15:37,429 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 76.71 sec
2013-09-11 19:15:38,434 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 111.68 sec
2013-09-11 19:15:39,439 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 111.68 sec
2013-09-11 19:15:40,443 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 150.15 sec
2013-09-11 19:15:41,447 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 150.15 sec
2013-09-11 19:15:42,452 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 150.15 sec
2013-09-11 19:15:43,457 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 150.15 sec
2013-09-11 19:15:44,462 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 150.15 sec
2013-09-11 19:15:45,466 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 150.15 sec
2013-09-11 19:15:46,501 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 153.5 sec
2013-09-11 19:15:47,506 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 153.5 sec
2013-09-11 19:15:48,511 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 153.5 sec
2013-09-11 19:15:49,516 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 153.5 sec
MapReduce Total cumulative CPU time: 2 minutes 33 seconds 500 msec
Ended Job = job_201309101627_0328
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 153.5 sec HDFS Read: 1082943442 HDFS Write: 5318 SUCCESS
Total MapReduce CPU Time Spent: 2 minutes 33 seconds 500 msec
OK
Time taken: 85.578 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_30831@mturlrep13_201309111915_2050397294.txt
hive> ;
hive> quit;
times: 3
query: SELECT * FROM hits_10m WHERE URL LIKE '%metrika%' ORDER BY EventTime LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_31290@mturlrep13_201309111915_470444048.txt
hive> SELECT * FROM hits_10m WHERE URL LIKE '%metrika%' ORDER BY EventTime LIMIT 10;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0329
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 19:16:06,376 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:16:16,414 Stage-1 map = 7%, reduce = 0%
2013-09-11 19:16:19,425 Stage-1 map = 14%, reduce = 0%
2013-09-11 19:16:22,437 Stage-1 map = 22%, reduce = 0%
2013-09-11 19:16:25,449 Stage-1 map = 29%, reduce = 0%
2013-09-11 19:16:31,471 Stage-1 map = 36%, reduce = 0%
2013-09-11 19:16:34,482 Stage-1 map = 43%, reduce = 0%
2013-09-11 19:16:37,501 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 94.38 sec
2013-09-11 19:16:38,508 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 94.38 sec
2013-09-11 19:16:39,514 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 94.38 sec
2013-09-11 19:16:40,521 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 94.38 sec
2013-09-11 19:16:41,526 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 94.38 sec
2013-09-11 19:16:42,532 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 94.38 sec
2013-09-11 19:16:43,538 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 94.38 sec
2013-09-11 19:16:44,543 Stage-1 map = 50%, reduce = 17%, Cumulative CPU 94.38 sec
2013-09-11 19:16:45,549 Stage-1 map = 50%, reduce = 17%, Cumulative CPU 94.38 sec
2013-09-11 19:16:46,554 Stage-1 map = 50%, reduce = 17%, Cumulative CPU 94.38 sec
2013-09-11 19:16:47,559 Stage-1 map = 54%, reduce = 17%, Cumulative CPU 94.38 sec
2013-09-11 19:16:48,565 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 94.38 sec
2013-09-11 19:16:49,570 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 94.38 sec
2013-09-11 19:16:50,575 Stage-1 map = 61%, reduce = 17%, Cumulative CPU 94.38 sec
2013-09-11 19:16:51,580 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 94.38 sec
2013-09-11 19:16:52,585 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 94.38 sec
2013-09-11 19:16:53,590 Stage-1 map = 69%, reduce = 17%, Cumulative CPU 94.38 sec
2013-09-11 19:16:54,595 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 94.38 sec
2013-09-11 19:16:55,600 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 94.38 sec
2013-09-11 19:16:56,605 Stage-1 map = 76%, reduce = 17%, Cumulative CPU 94.38 sec
2013-09-11 19:16:57,610 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 94.38 sec
2013-09-11 19:16:58,616 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 94.38 sec
2013-09-11 19:16:59,621 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 94.38 sec
2013-09-11 19:17:00,626 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 94.38 sec
2013-09-11 19:17:01,630 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 94.38 sec
2013-09-11 19:17:02,644 Stage-1 map = 84%, reduce = 17%, Cumulative CPU 94.38 sec
2013-09-11 19:17:03,668 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 164.11 sec
2013-09-11 19:17:04,676 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 164.11 sec
2013-09-11 19:17:05,681 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 172.49 sec
2013-09-11 19:17:06,687 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 172.49 sec
2013-09-11 19:17:07,691 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 179.48 sec
2013-09-11 19:17:08,696 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 179.48 sec
2013-09-11 19:17:09,701 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 179.48 sec
2013-09-11 19:17:10,707 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 181.61 sec
2013-09-11 19:17:11,712 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 181.61 sec
2013-09-11 19:17:12,717 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 181.61 sec
MapReduce Total cumulative CPU time: 3 minutes 1 seconds 610 msec
Ended Job = job_201309101627_0329
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 181.61 sec HDFS Read: 1082943442 HDFS Write: 5318 SUCCESS
Total MapReduce CPU Time Spent: 3 minutes 1 seconds 610 msec
OK
Time taken: 77.012 seconds, Fetched: 10 row(s)
hive> quit;
-- bad query - fetching all columns.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_745@mturlrep13_201309111917_444909204.txt
hive> ;
hive> quit;
times: 1
query: SELECT SearchPhrase, EventTime FROM hits_10m WHERE SearchPhrase != '' ORDER BY EventTime LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_1250@mturlrep13_201309111917_461322150.txt
hive> SELECT SearchPhrase, EventTime FROM hits_10m WHERE SearchPhrase != '' ORDER BY EventTime LIMIT 10;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0330
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 19:17:33,067 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:17:40,099 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.68 sec
2013-09-11 19:17:41,107 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.68 sec
2013-09-11 19:17:42,114 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.68 sec
2013-09-11 19:17:43,120 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.68 sec
2013-09-11 19:17:44,125 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.68 sec
2013-09-11 19:17:45,131 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.68 sec
2013-09-11 19:17:46,136 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.68 sec
2013-09-11 19:17:47,142 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.68 sec
2013-09-11 19:17:48,148 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 28.05 sec
2013-09-11 19:17:49,153 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.3 sec
2013-09-11 19:17:50,158 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.3 sec
2013-09-11 19:17:51,163 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.3 sec
2013-09-11 19:17:52,168 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.3 sec
2013-09-11 19:17:53,173 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.3 sec
2013-09-11 19:17:54,179 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.3 sec
2013-09-11 19:17:55,185 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 43.51 sec
2013-09-11 19:17:56,191 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 43.51 sec
2013-09-11 19:17:57,196 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 43.51 sec
MapReduce Total cumulative CPU time: 43 seconds 510 msec
Ended Job = job_201309101627_0330
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 43.51 sec HDFS Read: 28228143 HDFS Write: 766 SUCCESS
Total MapReduce CPU Time Spent: 43 seconds 510 msec
OK
Time taken: 33.875 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_2899@mturlrep13_201309111917_223797260.txt
hive> ;
hive> quit;
times: 2
query: SELECT SearchPhrase, EventTime FROM hits_10m WHERE SearchPhrase != '' ORDER BY EventTime LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_3331@mturlrep13_201309111918_1567198931.txt
hive> SELECT SearchPhrase, EventTime FROM hits_10m WHERE SearchPhrase != '' ORDER BY EventTime LIMIT 10;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0331
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 19:18:10,975 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:18:18,012 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.48 sec
2013-09-11 19:18:19,020 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.48 sec
2013-09-11 19:18:20,027 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.48 sec
2013-09-11 19:18:21,033 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.48 sec
2013-09-11 19:18:22,039 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.48 sec
2013-09-11 19:18:23,045 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.48 sec
2013-09-11 19:18:24,051 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.48 sec
2013-09-11 19:18:25,056 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 37.82 sec
2013-09-11 19:18:26,062 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.82 sec
2013-09-11 19:18:27,067 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.82 sec
2013-09-11 19:18:28,072 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.82 sec
2013-09-11 19:18:29,077 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.82 sec
2013-09-11 19:18:30,082 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.82 sec
2013-09-11 19:18:31,087 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.82 sec
2013-09-11 19:18:32,092 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.82 sec
2013-09-11 19:18:33,099 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 42.82 sec
2013-09-11 19:18:34,105 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 42.82 sec
MapReduce Total cumulative CPU time: 42 seconds 820 msec
Ended Job = job_201309101627_0331
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 42.82 sec HDFS Read: 28228143 HDFS Write: 766 SUCCESS
Total MapReduce CPU Time Spent: 42 seconds 820 msec
OK
Time taken: 31.269 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_5060@mturlrep13_201309111918_467637722.txt
hive> ;
hive> quit;
times: 3
query: SELECT SearchPhrase, EventTime FROM hits_10m WHERE SearchPhrase != '' ORDER BY EventTime LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_5623@mturlrep13_201309111918_113416530.txt
hive> SELECT SearchPhrase, EventTime FROM hits_10m WHERE SearchPhrase != '' ORDER BY EventTime LIMIT 10;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0332
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 19:18:47,895 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:18:54,933 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.56 sec
2013-09-11 19:18:55,941 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.56 sec
2013-09-11 19:18:56,948 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.56 sec
2013-09-11 19:18:57,955 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.56 sec
2013-09-11 19:18:58,961 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.56 sec
2013-09-11 19:18:59,966 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.56 sec
2013-09-11 19:19:00,977 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.56 sec
2013-09-11 19:19:01,983 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 36.79 sec
2013-09-11 19:19:02,990 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 36.79 sec
2013-09-11 19:19:03,996 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.65 sec
2013-09-11 19:19:05,001 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.65 sec
2013-09-11 19:19:06,007 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.65 sec
2013-09-11 19:19:07,012 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.65 sec
2013-09-11 19:19:08,018 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.65 sec
2013-09-11 19:19:09,024 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.65 sec
2013-09-11 19:19:10,031 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 41.96 sec
2013-09-11 19:19:11,037 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 41.96 sec
MapReduce Total cumulative CPU time: 41 seconds 960 msec
Ended Job = job_201309101627_0332
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 41.96 sec HDFS Read: 28228143 HDFS Write: 766 SUCCESS
Total MapReduce CPU Time Spent: 41 seconds 960 msec
OK
Time taken: 31.323 seconds, Fetched: 10 row(s)
hive> quit;
-- large sort.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_7228@mturlrep13_201309111919_1142490525.txt
hive> ;
hive> quit;
times: 1
query: SELECT SearchPhrase FROM hits_10m WHERE SearchPhrase != '' ORDER BY SearchPhrase LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_7745@mturlrep13_201309111919_1527730811.txt
hive> SELECT SearchPhrase FROM hits_10m WHERE SearchPhrase != '' ORDER BY SearchPhrase LIMIT 10;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0333
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 19:19:32,813 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:19:39,842 Stage-1 map = 43%, reduce = 0%
2013-09-11 19:19:40,855 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.55 sec
2013-09-11 19:19:41,862 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.55 sec
2013-09-11 19:19:42,869 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.55 sec
2013-09-11 19:19:43,875 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.55 sec
2013-09-11 19:19:44,881 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.55 sec
2013-09-11 19:19:45,887 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.55 sec
2013-09-11 19:19:46,893 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.55 sec
2013-09-11 19:19:47,900 Stage-1 map = 97%, reduce = 8%, Cumulative CPU 30.4 sec
2013-09-11 19:19:48,906 Stage-1 map = 100%, reduce = 8%, Cumulative CPU 40.45 sec
2013-09-11 19:19:49,911 Stage-1 map = 100%, reduce = 8%, Cumulative CPU 40.45 sec
2013-09-11 19:19:50,916 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 40.45 sec
2013-09-11 19:19:51,921 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 40.45 sec
2013-09-11 19:19:52,926 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 40.45 sec
2013-09-11 19:19:53,931 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 40.45 sec
2013-09-11 19:19:54,936 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 40.45 sec
2013-09-11 19:19:55,942 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 40.45 sec
2013-09-11 19:19:56,947 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 40.45 sec
2013-09-11 19:19:57,955 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 46.75 sec
2013-09-11 19:19:58,961 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 46.75 sec
MapReduce Total cumulative CPU time: 46 seconds 750 msec
Ended Job = job_201309101627_0333
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 46.75 sec HDFS Read: 27820105 HDFS Write: 666 SUCCESS
Total MapReduce CPU Time Spent: 46 seconds 750 msec
OK
ялта интурист
! как одеть трехнедельного ребенка при температуре 20 градусов
! отель rattana beach hotel 3*
! официальный сайт ооо "группа аист"г москва, ул коцюбинского, д 4, офис 343
! официальный сайт ооо "группа аист"г москва, ул коцюбинского, д 4, офис 343
!( центробежный скважинный калибр форумы)
!(!(storm master silmarils))
!(!(storm master silmarils))
!(!(title:(схема sputnik hi 4000)))
!(44-фз о контрактной системе)
Time taken: 35.874 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_9340@mturlrep13_201309111920_1347531078.txt
hive> ;
hive> quit;
times: 2
query: SELECT SearchPhrase FROM hits_10m WHERE SearchPhrase != '' ORDER BY SearchPhrase LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_9842@mturlrep13_201309111920_1738943444.txt
hive> SELECT SearchPhrase FROM hits_10m WHERE SearchPhrase != '' ORDER BY SearchPhrase LIMIT 10;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0334
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 19:20:13,005 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:20:20,042 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.62 sec
2013-09-11 19:20:21,051 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.62 sec
2013-09-11 19:20:22,057 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.62 sec
2013-09-11 19:20:23,064 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.62 sec
2013-09-11 19:20:24,070 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.62 sec
2013-09-11 19:20:25,076 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.62 sec
2013-09-11 19:20:26,082 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.62 sec
2013-09-11 19:20:27,088 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.62 sec
2013-09-11 19:20:28,094 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.25 sec
2013-09-11 19:20:29,099 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.25 sec
2013-09-11 19:20:30,104 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.25 sec
2013-09-11 19:20:31,109 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.25 sec
2013-09-11 19:20:32,114 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.25 sec
2013-09-11 19:20:33,120 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.25 sec
2013-09-11 19:20:34,125 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.25 sec
2013-09-11 19:20:35,131 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.25 sec
2013-09-11 19:20:36,138 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 45.44 sec
2013-09-11 19:20:37,144 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 45.44 sec
MapReduce Total cumulative CPU time: 45 seconds 440 msec
Ended Job = job_201309101627_0334
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 45.44 sec HDFS Read: 27820105 HDFS Write: 666 SUCCESS
Total MapReduce CPU Time Spent: 45 seconds 440 msec
OK
ялта интурист
! как одеть трехнедельного ребенка при температуре 20 градусов
! отель rattana beach hotel 3*
! официальный сайт ооо "группа аист"г москва, ул коцюбинского, д 4, офис 343
! официальный сайт ооо "группа аист"г москва, ул коцюбинского, д 4, офис 343
!( центробежный скважинный калибр форумы)
!(!(storm master silmarils))
!(!(storm master silmarils))
!(!(title:(схема sputnik hi 4000)))
!(44-фз о контрактной системе)
Time taken: 32.533 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_11939@mturlrep13_201309111920_1151781772.txt
hive> ;
hive> quit;
times: 3
query: SELECT SearchPhrase FROM hits_10m WHERE SearchPhrase != '' ORDER BY SearchPhrase LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_12380@mturlrep13_201309111920_806872053.txt
hive> SELECT SearchPhrase FROM hits_10m WHERE SearchPhrase != '' ORDER BY SearchPhrase LIMIT 10;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0335
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 19:20:49,926 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:20:57,966 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.91 sec
2013-09-11 19:20:58,974 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.91 sec
2013-09-11 19:20:59,981 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.91 sec
2013-09-11 19:21:00,987 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.91 sec
2013-09-11 19:21:01,993 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.91 sec
2013-09-11 19:21:02,999 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.91 sec
2013-09-11 19:21:04,006 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.91 sec
2013-09-11 19:21:05,013 Stage-1 map = 75%, reduce = 0%, Cumulative CPU 29.4 sec
2013-09-11 19:21:06,019 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.14 sec
2013-09-11 19:21:07,024 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.14 sec
2013-09-11 19:21:08,030 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.14 sec
2013-09-11 19:21:09,036 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.14 sec
2013-09-11 19:21:10,041 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.14 sec
2013-09-11 19:21:11,047 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.14 sec
2013-09-11 19:21:12,052 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.14 sec
2013-09-11 19:21:13,058 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.14 sec
2013-09-11 19:21:14,066 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 45.25 sec
2013-09-11 19:21:15,072 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 45.25 sec
MapReduce Total cumulative CPU time: 45 seconds 250 msec
Ended Job = job_201309101627_0335
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 45.25 sec HDFS Read: 27820105 HDFS Write: 666 SUCCESS
Total MapReduce CPU Time Spent: 45 seconds 250 msec
OK
ялта интурист
! как одеть трехнедельного ребенка при температуре 20 градусов
! отель rattana beach hotel 3*
! официальный сайт ооо "группа аист"г москва, ул коцюбинского, д 4, офис 343
! официальный сайт ооо "группа аист"г москва, ул коцюбинского, д 4, офис 343
!( центробежный скважинный калибр форумы)
!(!(storm master silmarils))
!(!(storm master silmarils))
!(!(title:(схема sputnik hi 4000)))
!(44-фз о контрактной системе)
Time taken: 32.402 seconds, Fetched: 10 row(s)
hive> quit;
-- large sort by strings.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_14015@mturlrep13_201309111921_1013339895.txt
hive> ;
hive> quit;
times: 1
query: SELECT SearchPhrase, EventTime FROM hits_10m WHERE SearchPhrase != '' ORDER BY EventTime, SearchPhrase LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_14464@mturlrep13_201309111921_1424399418.txt
hive> SELECT SearchPhrase, EventTime FROM hits_10m WHERE SearchPhrase != '' ORDER BY EventTime, SearchPhrase LIMIT 10;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0336
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 19:21:37,862 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:21:44,890 Stage-1 map = 43%, reduce = 0%
2013-09-11 19:21:45,902 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.73 sec
2013-09-11 19:21:46,908 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.73 sec
2013-09-11 19:21:47,915 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.73 sec
2013-09-11 19:21:48,921 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.73 sec
2013-09-11 19:21:49,926 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.73 sec
2013-09-11 19:21:50,932 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.73 sec
2013-09-11 19:21:51,938 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.73 sec
2013-09-11 19:21:52,944 Stage-1 map = 72%, reduce = 17%, Cumulative CPU 20.73 sec
2013-09-11 19:21:53,953 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 40.75 sec
2013-09-11 19:21:54,959 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 40.75 sec
2013-09-11 19:21:55,964 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 40.75 sec
2013-09-11 19:21:56,970 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 40.75 sec
2013-09-11 19:21:57,975 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 40.75 sec
2013-09-11 19:21:58,980 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 40.75 sec
2013-09-11 19:21:59,984 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 40.75 sec
2013-09-11 19:22:00,989 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 40.75 sec
2013-09-11 19:22:01,996 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 47.37 sec
2013-09-11 19:22:03,001 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 47.37 sec
2013-09-11 19:22:04,007 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 47.37 sec
MapReduce Total cumulative CPU time: 47 seconds 370 msec
Ended Job = job_201309101627_0336
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 47.37 sec HDFS Read: 28228143 HDFS Write: 762 SUCCESS
Total MapReduce CPU Time Spent: 47 seconds 370 msec
OK
Time taken: 36.108 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_16194@mturlrep13_201309111922_1473276999.txt
hive> ;
hive> quit;
times: 2
query: SELECT SearchPhrase, EventTime FROM hits_10m WHERE SearchPhrase != '' ORDER BY EventTime, SearchPhrase LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_16658@mturlrep13_201309111922_883471210.txt
hive> SELECT SearchPhrase, EventTime FROM hits_10m WHERE SearchPhrase != '' ORDER BY EventTime, SearchPhrase LIMIT 10;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0337
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 19:22:16,915 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:22:24,952 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.15 sec
2013-09-11 19:22:25,959 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.15 sec
2013-09-11 19:22:26,966 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.15 sec
2013-09-11 19:22:27,972 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.15 sec
2013-09-11 19:22:28,978 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.15 sec
2013-09-11 19:22:29,984 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.15 sec
2013-09-11 19:22:30,990 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.15 sec
2013-09-11 19:22:32,000 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.15 sec
2013-09-11 19:22:33,005 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.72 sec
2013-09-11 19:22:34,011 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.72 sec
2013-09-11 19:22:35,016 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.72 sec
2013-09-11 19:22:36,022 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.72 sec
2013-09-11 19:22:37,027 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.72 sec
2013-09-11 19:22:38,032 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.72 sec
2013-09-11 19:22:39,038 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.72 sec
2013-09-11 19:22:40,043 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.72 sec
2013-09-11 19:22:41,051 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 46.37 sec
2013-09-11 19:22:42,056 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 46.37 sec
MapReduce Total cumulative CPU time: 46 seconds 370 msec
Ended Job = job_201309101627_0337
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 46.37 sec HDFS Read: 28228143 HDFS Write: 762 SUCCESS
Total MapReduce CPU Time Spent: 46 seconds 370 msec
OK
Time taken: 32.47 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_18370@mturlrep13_201309111922_201833234.txt
hive> ;
hive> quit;
times: 3
query: SELECT SearchPhrase, EventTime FROM hits_10m WHERE SearchPhrase != '' ORDER BY EventTime, SearchPhrase LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_18842@mturlrep13_201309111922_1636378734.txt
hive> SELECT SearchPhrase, EventTime FROM hits_10m WHERE SearchPhrase != '' ORDER BY EventTime, SearchPhrase LIMIT 10;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0338
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 19:22:56,099 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:23:03,136 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.89 sec
2013-09-11 19:23:04,144 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.89 sec
2013-09-11 19:23:05,151 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.89 sec
2013-09-11 19:23:06,158 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.89 sec
2013-09-11 19:23:07,164 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.89 sec
2013-09-11 19:23:08,170 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.89 sec
2013-09-11 19:23:09,177 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.89 sec
2013-09-11 19:23:10,183 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.89 sec
2013-09-11 19:23:11,190 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 41.16 sec
2013-09-11 19:23:12,195 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 41.16 sec
2013-09-11 19:23:13,200 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 41.16 sec
2013-09-11 19:23:14,205 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 41.16 sec
2013-09-11 19:23:15,210 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 41.16 sec
2013-09-11 19:23:16,216 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 41.16 sec
2013-09-11 19:23:17,223 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 41.16 sec
2013-09-11 19:23:18,229 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 41.16 sec
2013-09-11 19:23:19,236 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 47.73 sec
2013-09-11 19:23:20,242 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 47.73 sec
MapReduce Total cumulative CPU time: 47 seconds 730 msec
Ended Job = job_201309101627_0338
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 47.73 sec HDFS Read: 28228143 HDFS Write: 762 SUCCESS
Total MapReduce CPU Time Spent: 47 seconds 730 msec
OK
Time taken: 32.441 seconds, Fetched: 10 row(s)
hive> quit;
-- large sort by a tuple.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_20518@mturlrep13_201309111923_1417644101.txt
hive> ;
hive> quit;
times: 1
query: SELECT CounterID, avg(length(URL)) AS l, count(*) AS c FROM hits_10m WHERE URL != '' GROUP BY CounterID HAVING count(*) > 100000 ORDER BY l DESC LIMIT 25;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_21000@mturlrep13_201309111923_777732561.txt
hive> SELECT CounterID, avg(length(URL)) AS l, count(*) AS c FROM hits_10m WHERE URL != '' GROUP BY CounterID HAVING count(*) > 100000 ORDER BY l DESC LIMIT 25;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0339
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 19:23:40,966 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:23:47,994 Stage-1 map = 14%, reduce = 0%
2013-09-11 19:23:51,006 Stage-1 map = 22%, reduce = 0%
2013-09-11 19:23:54,019 Stage-1 map = 36%, reduce = 0%
2013-09-11 19:23:57,032 Stage-1 map = 43%, reduce = 0%
2013-09-11 19:24:00,051 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 46.36 sec
2013-09-11 19:24:01,058 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 46.36 sec
2013-09-11 19:24:02,065 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 46.36 sec
2013-09-11 19:24:03,071 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 46.36 sec
2013-09-11 19:24:04,078 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 46.36 sec
2013-09-11 19:24:05,084 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 46.36 sec
2013-09-11 19:24:06,090 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 46.36 sec
2013-09-11 19:24:07,096 Stage-1 map = 65%, reduce = 4%, Cumulative CPU 46.36 sec
2013-09-11 19:24:08,101 Stage-1 map = 65%, reduce = 13%, Cumulative CPU 46.36 sec
2013-09-11 19:24:09,107 Stage-1 map = 65%, reduce = 13%, Cumulative CPU 46.36 sec
2013-09-11 19:24:10,113 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 46.36 sec
2013-09-11 19:24:11,119 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 46.36 sec
2013-09-11 19:24:12,124 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 46.36 sec
2013-09-11 19:24:13,130 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 46.36 sec
2013-09-11 19:24:14,136 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 46.36 sec
2013-09-11 19:24:15,142 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 46.36 sec
2013-09-11 19:24:16,148 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 46.36 sec
2013-09-11 19:24:17,153 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 66.94 sec
2013-09-11 19:24:18,159 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 66.94 sec
2013-09-11 19:24:19,164 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 90.24 sec
2013-09-11 19:24:20,170 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 90.24 sec
2013-09-11 19:24:21,175 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 90.24 sec
2013-09-11 19:24:22,180 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 90.24 sec
2013-09-11 19:24:23,185 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 90.24 sec
2013-09-11 19:24:24,190 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 90.24 sec
2013-09-11 19:24:25,196 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 90.24 sec
2013-09-11 19:24:26,202 Stage-1 map = 100%, reduce = 51%, Cumulative CPU 90.24 sec
2013-09-11 19:24:27,207 Stage-1 map = 100%, reduce = 51%, Cumulative CPU 90.24 sec
2013-09-11 19:24:28,212 Stage-1 map = 100%, reduce = 68%, Cumulative CPU 90.24 sec
2013-09-11 19:24:29,219 Stage-1 map = 100%, reduce = 83%, Cumulative CPU 99.81 sec
2013-09-11 19:24:30,224 Stage-1 map = 100%, reduce = 83%, Cumulative CPU 99.81 sec
2013-09-11 19:24:31,230 Stage-1 map = 100%, reduce = 92%, Cumulative CPU 99.81 sec
2013-09-11 19:24:32,468 Stage-1 map = 100%, reduce = 92%, Cumulative CPU 99.81 sec
2013-09-11 19:24:33,480 Stage-1 map = 100%, reduce = 92%, Cumulative CPU 99.81 sec
2013-09-11 19:24:34,496 Stage-1 map = 100%, reduce = 98%, Cumulative CPU 99.81 sec
2013-09-11 19:24:35,501 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 116.36 sec
2013-09-11 19:24:36,506 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 116.36 sec
2013-09-11 19:24:37,511 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 116.36 sec
MapReduce Total cumulative CPU time: 1 minutes 56 seconds 360 msec
Ended Job = job_201309101627_0339
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0340
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 19:24:41,030 Stage-2 map = 0%, reduce = 0%
2013-09-11 19:24:43,038 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.93 sec
2013-09-11 19:24:44,043 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.93 sec
2013-09-11 19:24:45,048 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.93 sec
2013-09-11 19:24:46,052 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.93 sec
2013-09-11 19:24:47,056 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.93 sec
2013-09-11 19:24:48,060 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.93 sec
2013-09-11 19:24:49,065 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.93 sec
2013-09-11 19:24:50,070 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.44 sec
2013-09-11 19:24:51,075 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.44 sec
2013-09-11 19:24:52,080 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.44 sec
MapReduce Total cumulative CPU time: 2 seconds 440 msec
Ended Job = job_201309101627_0340
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 116.36 sec HDFS Read: 117363067 HDFS Write: 794 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.44 sec HDFS Read: 1563 HDFS Write: 571 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 58 seconds 800 msec
OK
Time taken: 81.295 seconds, Fetched: 19 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_23567@mturlrep13_201309111924_1954971285.txt
hive> ;
hive> quit;
times: 2
query: SELECT CounterID, avg(length(URL)) AS l, count(*) AS c FROM hits_10m WHERE URL != '' GROUP BY CounterID HAVING count(*) > 100000 ORDER BY l DESC LIMIT 25;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_23997@mturlrep13_201309111924_515409695.txt
hive> SELECT CounterID, avg(length(URL)) AS l, count(*) AS c FROM hits_10m WHERE URL != '' GROUP BY CounterID HAVING count(*) > 100000 ORDER BY l DESC LIMIT 25;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0341
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 19:25:06,659 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:25:13,685 Stage-1 map = 14%, reduce = 0%
2013-09-11 19:25:16,697 Stage-1 map = 22%, reduce = 0%
2013-09-11 19:25:19,708 Stage-1 map = 36%, reduce = 0%
2013-09-11 19:25:22,720 Stage-1 map = 43%, reduce = 0%
2013-09-11 19:25:24,736 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 46.82 sec
2013-09-11 19:25:25,743 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 46.82 sec
2013-09-11 19:25:26,752 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 46.82 sec
2013-09-11 19:25:27,758 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 46.82 sec
2013-09-11 19:25:28,764 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 46.82 sec
2013-09-11 19:25:29,770 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 46.82 sec
2013-09-11 19:25:30,776 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 46.82 sec
2013-09-11 19:25:31,782 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 46.82 sec
2013-09-11 19:25:32,788 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 46.82 sec
2013-09-11 19:25:33,793 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 46.82 sec
2013-09-11 19:25:34,799 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 46.82 sec
2013-09-11 19:25:35,804 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 46.82 sec
2013-09-11 19:25:36,809 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 46.82 sec
2013-09-11 19:25:37,815 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 46.82 sec
2013-09-11 19:25:38,821 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 46.82 sec
2013-09-11 19:25:39,827 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 46.82 sec
2013-09-11 19:25:40,832 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 46.82 sec
2013-09-11 19:25:41,838 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 67.29 sec
2013-09-11 19:25:42,843 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 89.57 sec
2013-09-11 19:25:43,848 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 89.57 sec
2013-09-11 19:25:44,854 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 89.57 sec
2013-09-11 19:25:45,860 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 89.57 sec
2013-09-11 19:25:46,865 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 89.57 sec
2013-09-11 19:25:47,871 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 89.57 sec
2013-09-11 19:25:48,876 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 89.57 sec
2013-09-11 19:25:49,884 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 89.57 sec
2013-09-11 19:25:50,891 Stage-1 map = 100%, reduce = 68%, Cumulative CPU 89.57 sec
2013-09-11 19:25:51,896 Stage-1 map = 100%, reduce = 68%, Cumulative CPU 89.57 sec
2013-09-11 19:25:52,901 Stage-1 map = 100%, reduce = 68%, Cumulative CPU 89.57 sec
2013-09-11 19:25:53,906 Stage-1 map = 100%, reduce = 83%, Cumulative CPU 89.57 sec
2013-09-11 19:25:54,913 Stage-1 map = 100%, reduce = 87%, Cumulative CPU 99.36 sec
2013-09-11 19:25:55,919 Stage-1 map = 100%, reduce = 87%, Cumulative CPU 99.36 sec
2013-09-11 19:25:56,925 Stage-1 map = 100%, reduce = 95%, Cumulative CPU 99.36 sec
2013-09-11 19:25:57,931 Stage-1 map = 100%, reduce = 95%, Cumulative CPU 99.36 sec
2013-09-11 19:25:58,936 Stage-1 map = 100%, reduce = 95%, Cumulative CPU 99.36 sec
2013-09-11 19:25:59,952 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 114.98 sec
2013-09-11 19:26:00,959 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 114.98 sec
MapReduce Total cumulative CPU time: 1 minutes 54 seconds 980 msec
Ended Job = job_201309101627_0341
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0342
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 19:26:03,406 Stage-2 map = 0%, reduce = 0%
2013-09-11 19:26:05,415 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-11 19:26:06,420 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-11 19:26:07,426 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-11 19:26:08,431 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-11 19:26:09,436 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-11 19:26:10,441 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-11 19:26:11,445 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-11 19:26:12,450 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 0.76 sec
2013-09-11 19:26:13,456 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.17 sec
2013-09-11 19:26:14,461 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.17 sec
MapReduce Total cumulative CPU time: 2 seconds 170 msec
Ended Job = job_201309101627_0342
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 114.98 sec HDFS Read: 117363067 HDFS Write: 794 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.17 sec HDFS Read: 1563 HDFS Write: 571 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 57 seconds 150 msec
OK
Time taken: 76.426 seconds, Fetched: 19 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_27255@mturlrep13_201309111926_1222835314.txt
hive> ;
hive> quit;
times: 3
query: SELECT CounterID, avg(length(URL)) AS l, count(*) AS c FROM hits_10m WHERE URL != '' GROUP BY CounterID HAVING count(*) > 100000 ORDER BY l DESC LIMIT 25;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_27706@mturlrep13_201309111926_904396134.txt
hive> SELECT CounterID, avg(length(URL)) AS l, count(*) AS c FROM hits_10m WHERE URL != '' GROUP BY CounterID HAVING count(*) > 100000 ORDER BY l DESC LIMIT 25;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0343
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 19:26:27,528 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:26:35,559 Stage-1 map = 14%, reduce = 0%
2013-09-11 19:26:38,572 Stage-1 map = 22%, reduce = 0%
2013-09-11 19:26:41,585 Stage-1 map = 36%, reduce = 0%
2013-09-11 19:26:44,599 Stage-1 map = 43%, reduce = 0%
2013-09-11 19:26:45,611 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 44.89 sec
2013-09-11 19:26:46,618 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 44.89 sec
2013-09-11 19:26:47,627 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 44.89 sec
2013-09-11 19:26:48,633 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 44.89 sec
2013-09-11 19:26:49,638 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 44.89 sec
2013-09-11 19:26:50,644 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 44.89 sec
2013-09-11 19:26:51,649 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 44.89 sec
2013-09-11 19:26:52,655 Stage-1 map = 57%, reduce = 8%, Cumulative CPU 44.89 sec
2013-09-11 19:26:53,661 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 44.89 sec
2013-09-11 19:26:54,666 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 44.89 sec
2013-09-11 19:26:55,673 Stage-1 map = 69%, reduce = 17%, Cumulative CPU 44.89 sec
2013-09-11 19:26:56,679 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 44.89 sec
2013-09-11 19:26:57,685 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 44.89 sec
2013-09-11 19:26:58,691 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 44.89 sec
2013-09-11 19:26:59,696 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 44.89 sec
2013-09-11 19:27:00,717 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 44.89 sec
2013-09-11 19:27:01,723 Stage-1 map = 92%, reduce = 17%, Cumulative CPU 44.89 sec
2013-09-11 19:27:02,728 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 65.71 sec
2013-09-11 19:27:03,734 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 88.34 sec
2013-09-11 19:27:04,740 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 92.03 sec
2013-09-11 19:27:05,745 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 92.03 sec
2013-09-11 19:27:06,751 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 92.03 sec
2013-09-11 19:27:07,757 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 92.03 sec
2013-09-11 19:27:08,763 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 92.03 sec
2013-09-11 19:27:09,769 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 92.03 sec
2013-09-11 19:27:10,775 Stage-1 map = 100%, reduce = 50%, Cumulative CPU 92.03 sec
2013-09-11 19:27:11,781 Stage-1 map = 100%, reduce = 68%, Cumulative CPU 92.03 sec
2013-09-11 19:27:12,786 Stage-1 map = 100%, reduce = 68%, Cumulative CPU 92.03 sec
2013-09-11 19:27:13,792 Stage-1 map = 100%, reduce = 72%, Cumulative CPU 92.03 sec
2013-09-11 19:27:14,799 Stage-1 map = 100%, reduce = 87%, Cumulative CPU 100.81 sec
2013-09-11 19:27:15,805 Stage-1 map = 100%, reduce = 87%, Cumulative CPU 100.81 sec
2013-09-11 19:27:16,811 Stage-1 map = 100%, reduce = 92%, Cumulative CPU 100.81 sec
2013-09-11 19:27:17,818 Stage-1 map = 100%, reduce = 92%, Cumulative CPU 100.81 sec
2013-09-11 19:27:18,823 Stage-1 map = 100%, reduce = 92%, Cumulative CPU 100.81 sec
2013-09-11 19:27:19,829 Stage-1 map = 100%, reduce = 98%, Cumulative CPU 100.81 sec
2013-09-11 19:27:20,846 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 114.65 sec
2013-09-11 19:27:21,852 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 114.65 sec
MapReduce Total cumulative CPU time: 1 minutes 54 seconds 650 msec
Ended Job = job_201309101627_0343
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0344
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 19:27:24,306 Stage-2 map = 0%, reduce = 0%
2013-09-11 19:27:26,313 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.75 sec
2013-09-11 19:27:27,318 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.75 sec
2013-09-11 19:27:28,323 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.75 sec
2013-09-11 19:27:29,327 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.75 sec
2013-09-11 19:27:30,331 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.75 sec
2013-09-11 19:27:31,335 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.75 sec
2013-09-11 19:27:32,340 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.75 sec
2013-09-11 19:27:33,345 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 0.75 sec
2013-09-11 19:27:34,351 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.15 sec
2013-09-11 19:27:35,356 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.15 sec
MapReduce Total cumulative CPU time: 2 seconds 150 msec
Ended Job = job_201309101627_0344
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 114.65 sec HDFS Read: 117363067 HDFS Write: 794 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.15 sec HDFS Read: 1563 HDFS Write: 571 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 56 seconds 800 msec
OK
Time taken: 75.21 seconds, Fetched: 19 row(s)
hive> quit;
-- compute average URL lengths for large counters.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_30248@mturlrep13_201309111927_36693068.txt
hive> ;
hive> quit;
times: 1
query: SELECT SUBSTRING(SUBSTRING(Referer, FIND_IN_SET('//', Referer) + 2), 1, GREATEST(0, FIND_IN_SET('/', SUBSTRING(Referer, FIND_IN_SET('//', Referer) + 2)) - 1)), avg(length(Referer)) AS l, count(*) AS c, MAX(Referer) FROM hits_10m WHERE Referer != '' GROUP BY SUBSTRING(SUBSTRING(Referer, FIND_IN_SET('//', Referer) + 2), 1, GREATEST(0, FIND_IN_SET('/', SUBSTRING(Referer, FIND_IN_SET('//', Referer) + 2)) - 1)) HAVING count(*) > 100000 ORDER BY l DESC LIMIT 25;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_30881@mturlrep13_201309111927_1823096494.txt
hive> SELECT SUBSTRING(SUBSTRING(Referer, FIND_IN_SET('//', Referer) + 2), 1, GREATEST(0, FIND_IN_SET('/', SUBSTRING(Referer, FIND_IN_SET('//', Referer) + 2)) - 1)), avg(length(Referer)) AS l, count(*) AS c, MAX(Referer) FROM hits_10m WHERE Referer != '' GROUP BY SUBSTRING(SUBSTRING(Referer, FIND_IN_SET('//', Referer) + 2), 1, GREATEST(0, FIND_IN_SET('/', SUBSTRING(Referer, FIND_IN_SET('//', Referer) + 2)) - 1)) HAVING count(*) > 100000 ORDER BY l DESC LIMIT 25;;
FAILED: SemanticException [Error 10011]: Line 1:336 Invalid function 'GREATEST'
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_31193@mturlrep13_201309111927_277766864.txt
hive> ;
hive> quit;
times: 2
query: SELECT SUBSTRING(SUBSTRING(Referer, FIND_IN_SET('//', Referer) + 2), 1, GREATEST(0, FIND_IN_SET('/', SUBSTRING(Referer, FIND_IN_SET('//', Referer) + 2)) - 1)), avg(length(Referer)) AS l, count(*) AS c, MAX(Referer) FROM hits_10m WHERE Referer != '' GROUP BY SUBSTRING(SUBSTRING(Referer, FIND_IN_SET('//', Referer) + 2), 1, GREATEST(0, FIND_IN_SET('/', SUBSTRING(Referer, FIND_IN_SET('//', Referer) + 2)) - 1)) HAVING count(*) > 100000 ORDER BY l DESC LIMIT 25;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_31619@mturlrep13_201309111927_804737407.txt
hive> SELECT SUBSTRING(SUBSTRING(Referer, FIND_IN_SET('//', Referer) + 2), 1, GREATEST(0, FIND_IN_SET('/', SUBSTRING(Referer, FIND_IN_SET('//', Referer) + 2)) - 1)), avg(length(Referer)) AS l, count(*) AS c, MAX(Referer) FROM hits_10m WHERE Referer != '' GROUP BY SUBSTRING(SUBSTRING(Referer, FIND_IN_SET('//', Referer) + 2), 1, GREATEST(0, FIND_IN_SET('/', SUBSTRING(Referer, FIND_IN_SET('//', Referer) + 2)) - 1)) HAVING count(*) > 100000 ORDER BY l DESC LIMIT 25;;
FAILED: SemanticException [Error 10011]: Line 1:336 Invalid function 'GREATEST'
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_31900@mturlrep13_201309111928_2860526.txt
hive> ;
hive> quit;
times: 3
query: SELECT SUBSTRING(SUBSTRING(Referer, FIND_IN_SET('//', Referer) + 2), 1, GREATEST(0, FIND_IN_SET('/', SUBSTRING(Referer, FIND_IN_SET('//', Referer) + 2)) - 1)), avg(length(Referer)) AS l, count(*) AS c, MAX(Referer) FROM hits_10m WHERE Referer != '' GROUP BY SUBSTRING(SUBSTRING(Referer, FIND_IN_SET('//', Referer) + 2), 1, GREATEST(0, FIND_IN_SET('/', SUBSTRING(Referer, FIND_IN_SET('//', Referer) + 2)) - 1)) HAVING count(*) > 100000 ORDER BY l DESC LIMIT 25;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_32327@mturlrep13_201309111928_474072567.txt
hive> SELECT SUBSTRING(SUBSTRING(Referer, FIND_IN_SET('//', Referer) + 2), 1, GREATEST(0, FIND_IN_SET('/', SUBSTRING(Referer, FIND_IN_SET('//', Referer) + 2)) - 1)), avg(length(Referer)) AS l, count(*) AS c, MAX(Referer) FROM hits_10m WHERE Referer != '' GROUP BY SUBSTRING(SUBSTRING(Referer, FIND_IN_SET('//', Referer) + 2), 1, GREATEST(0, FIND_IN_SET('/', SUBSTRING(Referer, FIND_IN_SET('//', Referer) + 2)) - 1)) HAVING count(*) > 100000 ORDER BY l DESC LIMIT 25;;
FAILED: SemanticException [Error 10011]: Line 1:336 Invalid function 'GREATEST'
hive> quit;
-- the same, but broken down by domain.;
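Note on the repeated failure above: the Hive build used in this run (circa 2013, pre-1.1) does not ship a GREATEST UDF, so every attempt dies at parse time. A hedged rewrite of just the clamping fragment, assuming the intent was only to keep the SUBSTRING length non-negative, could use IF instead (sketch, not part of the recorded benchmark):

```sql
-- Hypothetical workaround: GREATEST(0, x) replaced with IF(x > 0, x, 0),
-- which old Hive does support. Only the length argument changes.
SELECT SUBSTRING(SUBSTRING(Referer, FIND_IN_SET('//', Referer) + 2), 1,
         IF(FIND_IN_SET('/', SUBSTRING(Referer, FIND_IN_SET('//', Referer) + 2)) - 1 > 0,
            FIND_IN_SET('/', SUBSTRING(Referer, FIND_IN_SET('//', Referer) + 2)) - 1,
            0)),
       avg(length(Referer)) AS l, count(*) AS c, MAX(Referer)
FROM hits_10m
WHERE Referer != ''
GROUP BY SUBSTRING(SUBSTRING(Referer, FIND_IN_SET('//', Referer) + 2), 1,
         IF(FIND_IN_SET('/', SUBSTRING(Referer, FIND_IN_SET('//', Referer) + 2)) - 1 > 0,
            FIND_IN_SET('/', SUBSTRING(Referer, FIND_IN_SET('//', Referer) + 2)) - 1,
            0))
HAVING count(*) > 100000
ORDER BY l DESC
LIMIT 25;
```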
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_32599@mturlrep13_201309111928_1292767865.txt
hive> ;
hive> quit;
times: 1
query: SELECT sum(ResolutionWidth), sum(ResolutionWidth + 1), sum(ResolutionWidth + 2), sum(ResolutionWidth + 3), sum(ResolutionWidth + 4), sum(ResolutionWidth + 5), sum(ResolutionWidth + 6), sum(ResolutionWidth + 7), sum(ResolutionWidth + 8), sum(ResolutionWidth + 9), sum(ResolutionWidth + 10), sum(ResolutionWidth + 11), sum(ResolutionWidth + 12), sum(ResolutionWidth + 13), sum(ResolutionWidth + 14), sum(ResolutionWidth + 15), sum(ResolutionWidth + 16), sum(ResolutionWidth + 17), sum(ResolutionWidth + 18), sum(ResolutionWidth + 19), sum(ResolutionWidth + 20), sum(ResolutionWidth + 21), sum(ResolutionWidth + 22), sum(ResolutionWidth + 23), sum(ResolutionWidth + 24), sum(ResolutionWidth + 25), sum(ResolutionWidth + 26), sum(ResolutionWidth + 27), sum(ResolutionWidth + 28), sum(ResolutionWidth + 29), sum(ResolutionWidth + 30), sum(ResolutionWidth + 31), sum(ResolutionWidth + 32), sum(ResolutionWidth + 33), sum(ResolutionWidth + 34), sum(ResolutionWidth + 35), sum(ResolutionWidth + 36), sum(ResolutionWidth + 37), sum(ResolutionWidth + 38), sum(ResolutionWidth + 39), sum(ResolutionWidth + 40), sum(ResolutionWidth + 41), sum(ResolutionWidth + 42), sum(ResolutionWidth + 43), sum(ResolutionWidth + 44), sum(ResolutionWidth + 45), sum(ResolutionWidth + 46), sum(ResolutionWidth + 47), sum(ResolutionWidth + 48), sum(ResolutionWidth + 49), sum(ResolutionWidth + 50), sum(ResolutionWidth + 51), sum(ResolutionWidth + 52), sum(ResolutionWidth + 53), sum(ResolutionWidth + 54), sum(ResolutionWidth + 55), sum(ResolutionWidth + 56), sum(ResolutionWidth + 57), sum(ResolutionWidth + 58), sum(ResolutionWidth + 59), sum(ResolutionWidth + 60), sum(ResolutionWidth + 61), sum(ResolutionWidth + 62), sum(ResolutionWidth + 63), sum(ResolutionWidth + 64), sum(ResolutionWidth + 65), sum(ResolutionWidth + 66), sum(ResolutionWidth + 67), sum(ResolutionWidth + 68), sum(ResolutionWidth + 69), sum(ResolutionWidth + 70), sum(ResolutionWidth + 71), sum(ResolutionWidth + 72), sum(ResolutionWidth + 73), 
sum(ResolutionWidth + 74), sum(ResolutionWidth + 75), sum(ResolutionWidth + 76), sum(ResolutionWidth + 77), sum(ResolutionWidth + 78), sum(ResolutionWidth + 79), sum(ResolutionWidth + 80), sum(ResolutionWidth + 81), sum(ResolutionWidth + 82), sum(ResolutionWidth + 83), sum(ResolutionWidth + 84), sum(ResolutionWidth + 85), sum(ResolutionWidth + 86), sum(ResolutionWidth + 87), sum(ResolutionWidth + 88), sum(ResolutionWidth + 89) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_624@mturlrep13_201309111928_489280386.txt
hive> SELECT sum(ResolutionWidth), sum(ResolutionWidth + 1), sum(ResolutionWidth + 2), sum(ResolutionWidth + 3), sum(ResolutionWidth + 4), sum(ResolutionWidth + 5), sum(ResolutionWidth + 6), sum(ResolutionWidth + 7), sum(ResolutionWidth + 8), sum(ResolutionWidth + 9), sum(ResolutionWidth + 10), sum(ResolutionWidth + 11), sum(ResolutionWidth + 12), sum(ResolutionWidth + 13), sum(ResolutionWidth + 14), sum(ResolutionWidth + 15), sum(ResolutionWidth + 16), sum(ResolutionWidth + 17), sum(ResolutionWidth + 18), sum(ResolutionWidth + 19), sum(ResolutionWidth + 20), sum(ResolutionWidth + 21), sum(ResolutionWidth + 22), sum(ResolutionWidth + 23), sum(ResolutionWidth + 24), sum(ResolutionWidth + 25), sum(ResolutionWidth + 26), sum(ResolutionWidth + 27), sum(ResolutionWidth + 28), sum(ResolutionWidth + 29), sum(ResolutionWidth + 30), sum(ResolutionWidth + 31), sum(ResolutionWidth + 32), sum(ResolutionWidth + 33), sum(ResolutionWidth + 34), sum(ResolutionWidth + 35), sum(ResolutionWidth + 36), sum(ResolutionWidth + 37), sum(ResolutionWidth + 38), sum(ResolutionWidth + 39), sum(ResolutionWidth + 40), sum(ResolutionWidth + 41), sum(ResolutionWidth + 42), sum(ResolutionWidth + 43), sum(ResolutionWidth + 44), sum(ResolutionWidth + 45), sum(ResolutionWidth + 46), sum(ResolutionWidth + 47), sum(ResolutionWidth + 48), sum(ResolutionWidth + 49), sum(ResolutionWidth + 50), sum(ResolutionWidth + 51), sum(ResolutionWidth + 52), sum(ResolutionWidth + 53), sum(ResolutionWidth + 54), sum(ResolutionWidth + 55), sum(ResolutionWidth + 56), sum(ResolutionWidth + 57), sum(ResolutionWidth + 58), sum(ResolutionWidth + 59), sum(ResolutionWidth + 60), sum(ResolutionWidth + 61), sum(ResolutionWidth + 62), sum(ResolutionWidth + 63), sum(ResolutionWidth + 64), sum(ResolutionWidth + 65), sum(ResolutionWidth + 66), sum(ResolutionWidth + 67), sum(ResolutionWidth + 68), sum(ResolutionWidth + 69), sum(ResolutionWidth + 70), sum(ResolutionWidth + 71), sum(ResolutionWidth + 72), sum(ResolutionWidth + 73), 
sum(ResolutionWidth + 74), sum(ResolutionWidth + 75), sum(ResolutionWidth + 76), sum(ResolutionWidth + 77), sum(ResolutionWidth + 78), sum(ResolutionWidth + 79), sum(ResolutionWidth + 80), sum(ResolutionWidth + 81), sum(ResolutionWidth + 82), sum(ResolutionWidth + 83), sum(ResolutionWidth + 84), sum(ResolutionWidth + 85), sum(ResolutionWidth + 86), sum(ResolutionWidth + 87), sum(ResolutionWidth + 88), sum(ResolutionWidth + 89) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0345
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 19:28:32,803 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:28:45,869 Stage-1 map = 7%, reduce = 0%
2013-09-11 19:28:51,894 Stage-1 map = 14%, reduce = 0%
2013-09-11 19:28:57,917 Stage-1 map = 22%, reduce = 0%
2013-09-11 19:29:04,947 Stage-1 map = 25%, reduce = 0%, Cumulative CPU 69.54 sec
2013-09-11 19:29:05,953 Stage-1 map = 25%, reduce = 0%, Cumulative CPU 69.54 sec
2013-09-11 19:29:06,959 Stage-1 map = 29%, reduce = 0%, Cumulative CPU 69.54 sec
2013-09-11 19:29:07,965 Stage-1 map = 29%, reduce = 0%, Cumulative CPU 69.54 sec
2013-09-11 19:29:08,970 Stage-1 map = 29%, reduce = 0%, Cumulative CPU 69.54 sec
2013-09-11 19:29:09,987 Stage-1 map = 29%, reduce = 0%, Cumulative CPU 69.54 sec
2013-09-11 19:29:10,992 Stage-1 map = 29%, reduce = 0%, Cumulative CPU 69.54 sec
2013-09-11 19:29:11,998 Stage-1 map = 29%, reduce = 0%, Cumulative CPU 69.54 sec
2013-09-11 19:29:13,003 Stage-1 map = 32%, reduce = 0%, Cumulative CPU 69.54 sec
2013-09-11 19:29:14,008 Stage-1 map = 36%, reduce = 0%, Cumulative CPU 69.54 sec
2013-09-11 19:29:15,013 Stage-1 map = 36%, reduce = 0%, Cumulative CPU 69.54 sec
2013-09-11 19:29:16,017 Stage-1 map = 36%, reduce = 0%, Cumulative CPU 69.54 sec
2013-09-11 19:29:17,022 Stage-1 map = 36%, reduce = 0%, Cumulative CPU 69.54 sec
2013-09-11 19:29:18,027 Stage-1 map = 36%, reduce = 0%, Cumulative CPU 69.54 sec
2013-09-11 19:29:19,031 Stage-1 map = 39%, reduce = 0%, Cumulative CPU 69.54 sec
2013-09-11 19:29:20,036 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 69.54 sec
2013-09-11 19:29:21,040 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 69.54 sec
2013-09-11 19:29:22,045 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 69.54 sec
2013-09-11 19:29:23,050 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 69.54 sec
2013-09-11 19:29:24,055 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 69.54 sec
2013-09-11 19:29:25,080 Stage-1 map = 47%, reduce = 0%, Cumulative CPU 90.43 sec
2013-09-11 19:29:26,085 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 112.27 sec
2013-09-11 19:29:27,090 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 112.27 sec
2013-09-11 19:29:28,095 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 112.27 sec
2013-09-11 19:29:29,100 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 112.27 sec
2013-09-11 19:29:30,105 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 112.27 sec
2013-09-11 19:29:31,109 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 112.27 sec
2013-09-11 19:29:32,114 Stage-1 map = 50%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:29:33,119 Stage-1 map = 50%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:29:34,123 Stage-1 map = 50%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:29:35,177 Stage-1 map = 54%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:29:36,182 Stage-1 map = 54%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:29:37,186 Stage-1 map = 54%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:29:38,191 Stage-1 map = 54%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:29:39,195 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:29:40,226 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:29:41,230 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:29:42,234 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:29:43,239 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:29:44,243 Stage-1 map = 61%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:29:45,267 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:29:46,271 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:29:47,276 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:29:48,280 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:29:49,284 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:29:50,289 Stage-1 map = 69%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:29:51,293 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:29:52,297 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:29:53,301 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:29:54,306 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:29:55,311 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:29:56,316 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:29:57,321 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:29:58,332 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:29:59,337 Stage-1 map = 76%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:30:00,342 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:30:01,347 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:30:02,352 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:30:03,357 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 112.27 sec
2013-09-11 19:30:04,363 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 195.43 sec
2013-09-11 19:30:05,368 Stage-1 map = 84%, reduce = 17%, Cumulative CPU 195.43 sec
2013-09-11 19:30:06,374 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 195.43 sec
2013-09-11 19:30:07,379 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 195.43 sec
2013-09-11 19:30:08,384 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 195.43 sec
2013-09-11 19:30:09,389 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 195.43 sec
2013-09-11 19:30:10,395 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 195.43 sec
2013-09-11 19:30:11,400 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 195.43 sec
2013-09-11 19:30:12,405 Stage-1 map = 92%, reduce = 17%, Cumulative CPU 195.43 sec
2013-09-11 19:30:13,411 Stage-1 map = 93%, reduce = 17%, Cumulative CPU 205.31 sec
2013-09-11 19:30:14,416 Stage-1 map = 93%, reduce = 17%, Cumulative CPU 205.31 sec
2013-09-11 19:30:15,421 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 205.31 sec
2013-09-11 19:30:16,426 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 205.31 sec
2013-09-11 19:30:17,431 Stage-1 map = 97%, reduce = 25%, Cumulative CPU 205.31 sec
2013-09-11 19:30:18,436 Stage-1 map = 97%, reduce = 25%, Cumulative CPU 205.31 sec
2013-09-11 19:30:19,441 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 222.12 sec
2013-09-11 19:30:20,446 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 222.12 sec
2013-09-11 19:30:21,451 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 222.12 sec
2013-09-11 19:30:22,456 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 222.12 sec
2013-09-11 19:30:23,463 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 224.88 sec
2013-09-11 19:30:24,469 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 224.88 sec
2013-09-11 19:30:25,475 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 224.88 sec
MapReduce Total cumulative CPU time: 3 minutes 44 seconds 880 msec
Ended Job = job_201309101627_0345
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 224.88 sec HDFS Read: 7797536 HDFS Write: 1080 SUCCESS
Total MapReduce CPU Time Spent: 3 minutes 44 seconds 880 msec
OK
Time taken: 123.601 seconds, Fetched: 1 row(s)
hive> quit;
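The 90-term query above stresses per-row expression evaluation rather than I/O. Such a query is tedious to write by hand; a minimal sketch for generating it (the table name hits_10m is taken from the log, the generator itself is not part of the benchmark harness):

```python
# Build the 90-term SELECT: sum(ResolutionWidth), sum(ResolutionWidth + 1), ...,
# sum(ResolutionWidth + 89), matching the query recorded in this log.
terms = ["sum(ResolutionWidth)"] + [
    "sum(ResolutionWidth + %d)" % i for i in range(1, 90)
]
query = "SELECT " + ", ".join(terms) + " FROM hits_10m;"
print(query)
```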
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_3693@mturlrep13_201309111930_78582272.txt
hive> ;
hive> quit;
times: 2
query: SELECT sum(ResolutionWidth), sum(ResolutionWidth + 1), sum(ResolutionWidth + 2), sum(ResolutionWidth + 3), sum(ResolutionWidth + 4), sum(ResolutionWidth + 5), sum(ResolutionWidth + 6), sum(ResolutionWidth + 7), sum(ResolutionWidth + 8), sum(ResolutionWidth + 9), sum(ResolutionWidth + 10), sum(ResolutionWidth + 11), sum(ResolutionWidth + 12), sum(ResolutionWidth + 13), sum(ResolutionWidth + 14), sum(ResolutionWidth + 15), sum(ResolutionWidth + 16), sum(ResolutionWidth + 17), sum(ResolutionWidth + 18), sum(ResolutionWidth + 19), sum(ResolutionWidth + 20), sum(ResolutionWidth + 21), sum(ResolutionWidth + 22), sum(ResolutionWidth + 23), sum(ResolutionWidth + 24), sum(ResolutionWidth + 25), sum(ResolutionWidth + 26), sum(ResolutionWidth + 27), sum(ResolutionWidth + 28), sum(ResolutionWidth + 29), sum(ResolutionWidth + 30), sum(ResolutionWidth + 31), sum(ResolutionWidth + 32), sum(ResolutionWidth + 33), sum(ResolutionWidth + 34), sum(ResolutionWidth + 35), sum(ResolutionWidth + 36), sum(ResolutionWidth + 37), sum(ResolutionWidth + 38), sum(ResolutionWidth + 39), sum(ResolutionWidth + 40), sum(ResolutionWidth + 41), sum(ResolutionWidth + 42), sum(ResolutionWidth + 43), sum(ResolutionWidth + 44), sum(ResolutionWidth + 45), sum(ResolutionWidth + 46), sum(ResolutionWidth + 47), sum(ResolutionWidth + 48), sum(ResolutionWidth + 49), sum(ResolutionWidth + 50), sum(ResolutionWidth + 51), sum(ResolutionWidth + 52), sum(ResolutionWidth + 53), sum(ResolutionWidth + 54), sum(ResolutionWidth + 55), sum(ResolutionWidth + 56), sum(ResolutionWidth + 57), sum(ResolutionWidth + 58), sum(ResolutionWidth + 59), sum(ResolutionWidth + 60), sum(ResolutionWidth + 61), sum(ResolutionWidth + 62), sum(ResolutionWidth + 63), sum(ResolutionWidth + 64), sum(ResolutionWidth + 65), sum(ResolutionWidth + 66), sum(ResolutionWidth + 67), sum(ResolutionWidth + 68), sum(ResolutionWidth + 69), sum(ResolutionWidth + 70), sum(ResolutionWidth + 71), sum(ResolutionWidth + 72), sum(ResolutionWidth + 73), 
sum(ResolutionWidth + 74), sum(ResolutionWidth + 75), sum(ResolutionWidth + 76), sum(ResolutionWidth + 77), sum(ResolutionWidth + 78), sum(ResolutionWidth + 79), sum(ResolutionWidth + 80), sum(ResolutionWidth + 81), sum(ResolutionWidth + 82), sum(ResolutionWidth + 83), sum(ResolutionWidth + 84), sum(ResolutionWidth + 85), sum(ResolutionWidth + 86), sum(ResolutionWidth + 87), sum(ResolutionWidth + 88), sum(ResolutionWidth + 89) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_4150@mturlrep13_201309111930_88890337.txt
hive> SELECT sum(ResolutionWidth), sum(ResolutionWidth + 1), sum(ResolutionWidth + 2), sum(ResolutionWidth + 3), sum(ResolutionWidth + 4), sum(ResolutionWidth + 5), sum(ResolutionWidth + 6), sum(ResolutionWidth + 7), sum(ResolutionWidth + 8), sum(ResolutionWidth + 9), sum(ResolutionWidth + 10), sum(ResolutionWidth + 11), sum(ResolutionWidth + 12), sum(ResolutionWidth + 13), sum(ResolutionWidth + 14), sum(ResolutionWidth + 15), sum(ResolutionWidth + 16), sum(ResolutionWidth + 17), sum(ResolutionWidth + 18), sum(ResolutionWidth + 19), sum(ResolutionWidth + 20), sum(ResolutionWidth + 21), sum(ResolutionWidth + 22), sum(ResolutionWidth + 23), sum(ResolutionWidth + 24), sum(ResolutionWidth + 25), sum(ResolutionWidth + 26), sum(ResolutionWidth + 27), sum(ResolutionWidth + 28), sum(ResolutionWidth + 29), sum(ResolutionWidth + 30), sum(ResolutionWidth + 31), sum(ResolutionWidth + 32), sum(ResolutionWidth + 33), sum(ResolutionWidth + 34), sum(ResolutionWidth + 35), sum(ResolutionWidth + 36), sum(ResolutionWidth + 37), sum(ResolutionWidth + 38), sum(ResolutionWidth + 39), sum(ResolutionWidth + 40), sum(ResolutionWidth + 41), sum(ResolutionWidth + 42), sum(ResolutionWidth + 43), sum(ResolutionWidth + 44), sum(ResolutionWidth + 45), sum(ResolutionWidth + 46), sum(ResolutionWidth + 47), sum(ResolutionWidth + 48), sum(ResolutionWidth + 49), sum(ResolutionWidth + 50), sum(ResolutionWidth + 51), sum(ResolutionWidth + 52), sum(ResolutionWidth + 53), sum(ResolutionWidth + 54), sum(ResolutionWidth + 55), sum(ResolutionWidth + 56), sum(ResolutionWidth + 57), sum(ResolutionWidth + 58), sum(ResolutionWidth + 59), sum(ResolutionWidth + 60), sum(ResolutionWidth + 61), sum(ResolutionWidth + 62), sum(ResolutionWidth + 63), sum(ResolutionWidth + 64), sum(ResolutionWidth + 65), sum(ResolutionWidth + 66), sum(ResolutionWidth + 67), sum(ResolutionWidth + 68), sum(ResolutionWidth + 69), sum(ResolutionWidth + 70), sum(ResolutionWidth + 71), sum(ResolutionWidth + 72), sum(ResolutionWidth + 73), 
sum(ResolutionWidth + 74), sum(ResolutionWidth + 75), sum(ResolutionWidth + 76), sum(ResolutionWidth + 77), sum(ResolutionWidth + 78), sum(ResolutionWidth + 79), sum(ResolutionWidth + 80), sum(ResolutionWidth + 81), sum(ResolutionWidth + 82), sum(ResolutionWidth + 83), sum(ResolutionWidth + 84), sum(ResolutionWidth + 85), sum(ResolutionWidth + 86), sum(ResolutionWidth + 87), sum(ResolutionWidth + 88), sum(ResolutionWidth + 89) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0346
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 19:30:39,700 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:30:50,746 Stage-1 map = 7%, reduce = 0%
2013-09-11 19:30:59,777 Stage-1 map = 14%, reduce = 0%
2013-09-11 19:31:05,802 Stage-1 map = 22%, reduce = 0%, Cumulative CPU 50.77 sec
2013-09-11 19:31:06,807 Stage-1 map = 22%, reduce = 0%, Cumulative CPU 50.77 sec
2013-09-11 19:31:07,813 Stage-1 map = 22%, reduce = 0%, Cumulative CPU 50.77 sec
2013-09-11 19:31:08,820 Stage-1 map = 22%, reduce = 0%, Cumulative CPU 50.77 sec
2013-09-11 19:31:09,825 Stage-1 map = 22%, reduce = 0%, Cumulative CPU 50.77 sec
2013-09-11 19:31:10,829 Stage-1 map = 22%, reduce = 0%, Cumulative CPU 50.77 sec
2013-09-11 19:31:11,833 Stage-1 map = 25%, reduce = 0%, Cumulative CPU 50.77 sec
2013-09-11 19:31:12,838 Stage-1 map = 25%, reduce = 0%, Cumulative CPU 50.77 sec
2013-09-11 19:31:13,842 Stage-1 map = 25%, reduce = 0%, Cumulative CPU 50.77 sec
2013-09-11 19:31:14,847 Stage-1 map = 29%, reduce = 0%, Cumulative CPU 50.77 sec
2013-09-11 19:31:15,852 Stage-1 map = 29%, reduce = 0%, Cumulative CPU 50.77 sec
2013-09-11 19:31:16,857 Stage-1 map = 29%, reduce = 0%, Cumulative CPU 50.77 sec
2013-09-11 19:31:17,861 Stage-1 map = 29%, reduce = 0%, Cumulative CPU 50.77 sec
2013-09-11 19:31:18,866 Stage-1 map = 29%, reduce = 0%, Cumulative CPU 50.77 sec
2013-09-11 19:31:19,870 Stage-1 map = 29%, reduce = 0%, Cumulative CPU 50.77 sec
2013-09-11 19:31:20,875 Stage-1 map = 36%, reduce = 0%, Cumulative CPU 50.77 sec
2013-09-11 19:31:21,879 Stage-1 map = 36%, reduce = 0%, Cumulative CPU 50.77 sec
2013-09-11 19:31:22,883 Stage-1 map = 36%, reduce = 0%, Cumulative CPU 50.77 sec
2013-09-11 19:31:23,887 Stage-1 map = 36%, reduce = 0%, Cumulative CPU 50.77 sec
2013-09-11 19:31:24,892 Stage-1 map = 36%, reduce = 0%, Cumulative CPU 50.77 sec
2013-09-11 19:31:25,896 Stage-1 map = 36%, reduce = 0%, Cumulative CPU 50.77 sec
2013-09-11 19:31:26,900 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 50.77 sec
2013-09-11 19:31:27,904 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 50.77 sec
2013-09-11 19:31:28,908 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 50.77 sec
2013-09-11 19:31:29,912 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 50.77 sec
2013-09-11 19:31:30,917 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 50.77 sec
2013-09-11 19:31:31,921 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 50.77 sec
2013-09-11 19:31:32,928 Stage-1 map = 46%, reduce = 0%, Cumulative CPU 81.31 sec
2013-09-11 19:31:33,934 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 112.9 sec
2013-09-11 19:31:34,939 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 112.9 sec
2013-09-11 19:31:35,944 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 112.9 sec
2013-09-11 19:31:36,949 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 112.9 sec
2013-09-11 19:31:37,954 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 112.9 sec
2013-09-11 19:31:38,960 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 112.9 sec
2013-09-11 19:31:39,965 Stage-1 map = 50%, reduce = 17%, Cumulative CPU 112.9 sec
2013-09-11 19:31:40,970 Stage-1 map = 50%, reduce = 17%, Cumulative CPU 112.9 sec
2013-09-11 19:31:41,975 Stage-1 map = 50%, reduce = 17%, Cumulative CPU 112.9 sec
2013-09-11 19:31:42,980 Stage-1 map = 54%, reduce = 17%, Cumulative CPU 112.9 sec
2013-09-11 19:31:43,985 Stage-1 map = 54%, reduce = 17%, Cumulative CPU 112.9 sec
2013-09-11 19:31:44,990 Stage-1 map = 54%, reduce = 17%, Cumulative CPU 112.9 sec
2013-09-11 19:31:45,994 Stage-1 map = 54%, reduce = 17%, Cumulative CPU 112.9 sec
2013-09-11 19:31:47,031 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 112.9 sec
2013-09-11 19:31:48,036 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 112.9 sec
2013-09-11 19:31:49,041 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 112.9 sec
2013-09-11 19:31:50,046 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 112.9 sec
2013-09-11 19:31:51,052 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 112.9 sec
2013-09-11 19:31:52,081 Stage-1 map = 61%, reduce = 17%, Cumulative CPU 112.9 sec
2013-09-11 19:31:53,086 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 112.9 sec
2013-09-11 19:31:54,091 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 112.9 sec
2013-09-11 19:31:55,096 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 112.9 sec
2013-09-11 19:31:56,101 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 112.9 sec
2013-09-11 19:31:57,115 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 112.9 sec
2013-09-11 19:31:58,119 Stage-1 map = 69%, reduce = 17%, Cumulative CPU 112.9 sec
2013-09-11 19:31:59,128 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 112.9 sec
2013-09-11 19:32:00,132 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 112.9 sec
2013-09-11 19:32:01,137 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 112.9 sec
2013-09-11 19:32:02,165 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 112.9 sec
2013-09-11 19:32:03,170 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 112.9 sec
2013-09-11 19:32:04,175 Stage-1 map = 76%, reduce = 17%, Cumulative CPU 112.9 sec
2013-09-11 19:32:05,181 Stage-1 map = 76%, reduce = 17%, Cumulative CPU 181.27 sec
2013-09-11 19:32:06,186 Stage-1 map = 76%, reduce = 17%, Cumulative CPU 181.27 sec
2013-09-11 19:32:07,191 Stage-1 map = 76%, reduce = 17%, Cumulative CPU 181.27 sec
2013-09-11 19:32:08,196 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 181.27 sec
2013-09-11 19:32:09,201 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 181.27 sec
2013-09-11 19:32:10,206 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 181.27 sec
2013-09-11 19:32:11,210 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 181.27 sec
2013-09-11 19:32:12,215 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 181.27 sec
2013-09-11 19:32:13,220 Stage-1 map = 84%, reduce = 17%, Cumulative CPU 181.27 sec
2013-09-11 19:32:14,226 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 181.27 sec
2013-09-11 19:32:15,230 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 181.27 sec
2013-09-11 19:32:16,236 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 181.27 sec
2013-09-11 19:32:17,241 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 181.27 sec
2013-09-11 19:32:18,246 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 181.27 sec
2013-09-11 19:32:19,251 Stage-1 map = 92%, reduce = 17%, Cumulative CPU 181.27 sec
2013-09-11 19:32:20,256 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 200.03 sec
2013-09-11 19:32:21,260 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 200.03 sec
2013-09-11 19:32:22,268 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 200.03 sec
2013-09-11 19:32:23,273 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 200.03 sec
2013-09-11 19:32:24,278 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 200.03 sec
2013-09-11 19:32:25,282 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 221.55 sec
2013-09-11 19:32:26,287 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 221.55 sec
2013-09-11 19:32:27,291 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 221.55 sec
2013-09-11 19:32:28,296 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 221.55 sec
2013-09-11 19:32:29,300 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 221.55 sec
2013-09-11 19:32:30,304 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 221.55 sec
2013-09-11 19:32:31,311 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 224.51 sec
2013-09-11 19:32:32,315 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 224.51 sec
2013-09-11 19:32:33,320 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 224.51 sec
MapReduce Total cumulative CPU time: 3 minutes 44 seconds 510 msec
Ended Job = job_201309101627_0346
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 224.51 sec HDFS Read: 7797536 HDFS Write: 1080 SUCCESS
Total MapReduce CPU Time Spent: 3 minutes 44 seconds 510 msec
OK
Time taken: 121.982 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_6746@mturlrep13_201309111932_688661171.txt
hive> ;
hive> quit;
times: 3
query: SELECT sum(ResolutionWidth), sum(ResolutionWidth + 1), sum(ResolutionWidth + 2), sum(ResolutionWidth + 3), sum(ResolutionWidth + 4), sum(ResolutionWidth + 5), sum(ResolutionWidth + 6), sum(ResolutionWidth + 7), sum(ResolutionWidth + 8), sum(ResolutionWidth + 9), sum(ResolutionWidth + 10), sum(ResolutionWidth + 11), sum(ResolutionWidth + 12), sum(ResolutionWidth + 13), sum(ResolutionWidth + 14), sum(ResolutionWidth + 15), sum(ResolutionWidth + 16), sum(ResolutionWidth + 17), sum(ResolutionWidth + 18), sum(ResolutionWidth + 19), sum(ResolutionWidth + 20), sum(ResolutionWidth + 21), sum(ResolutionWidth + 22), sum(ResolutionWidth + 23), sum(ResolutionWidth + 24), sum(ResolutionWidth + 25), sum(ResolutionWidth + 26), sum(ResolutionWidth + 27), sum(ResolutionWidth + 28), sum(ResolutionWidth + 29), sum(ResolutionWidth + 30), sum(ResolutionWidth + 31), sum(ResolutionWidth + 32), sum(ResolutionWidth + 33), sum(ResolutionWidth + 34), sum(ResolutionWidth + 35), sum(ResolutionWidth + 36), sum(ResolutionWidth + 37), sum(ResolutionWidth + 38), sum(ResolutionWidth + 39), sum(ResolutionWidth + 40), sum(ResolutionWidth + 41), sum(ResolutionWidth + 42), sum(ResolutionWidth + 43), sum(ResolutionWidth + 44), sum(ResolutionWidth + 45), sum(ResolutionWidth + 46), sum(ResolutionWidth + 47), sum(ResolutionWidth + 48), sum(ResolutionWidth + 49), sum(ResolutionWidth + 50), sum(ResolutionWidth + 51), sum(ResolutionWidth + 52), sum(ResolutionWidth + 53), sum(ResolutionWidth + 54), sum(ResolutionWidth + 55), sum(ResolutionWidth + 56), sum(ResolutionWidth + 57), sum(ResolutionWidth + 58), sum(ResolutionWidth + 59), sum(ResolutionWidth + 60), sum(ResolutionWidth + 61), sum(ResolutionWidth + 62), sum(ResolutionWidth + 63), sum(ResolutionWidth + 64), sum(ResolutionWidth + 65), sum(ResolutionWidth + 66), sum(ResolutionWidth + 67), sum(ResolutionWidth + 68), sum(ResolutionWidth + 69), sum(ResolutionWidth + 70), sum(ResolutionWidth + 71), sum(ResolutionWidth + 72), sum(ResolutionWidth + 73), 
sum(ResolutionWidth + 74), sum(ResolutionWidth + 75), sum(ResolutionWidth + 76), sum(ResolutionWidth + 77), sum(ResolutionWidth + 78), sum(ResolutionWidth + 79), sum(ResolutionWidth + 80), sum(ResolutionWidth + 81), sum(ResolutionWidth + 82), sum(ResolutionWidth + 83), sum(ResolutionWidth + 84), sum(ResolutionWidth + 85), sum(ResolutionWidth + 86), sum(ResolutionWidth + 87), sum(ResolutionWidth + 88), sum(ResolutionWidth + 89) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_7179@mturlrep13_201309111932_74538691.txt
hive> SELECT sum(ResolutionWidth), sum(ResolutionWidth + 1), sum(ResolutionWidth + 2), sum(ResolutionWidth + 3), sum(ResolutionWidth + 4), sum(ResolutionWidth + 5), sum(ResolutionWidth + 6), sum(ResolutionWidth + 7), sum(ResolutionWidth + 8), sum(ResolutionWidth + 9), sum(ResolutionWidth + 10), sum(ResolutionWidth + 11), sum(ResolutionWidth + 12), sum(ResolutionWidth + 13), sum(ResolutionWidth + 14), sum(ResolutionWidth + 15), sum(ResolutionWidth + 16), sum(ResolutionWidth + 17), sum(ResolutionWidth + 18), sum(ResolutionWidth + 19), sum(ResolutionWidth + 20), sum(ResolutionWidth + 21), sum(ResolutionWidth + 22), sum(ResolutionWidth + 23), sum(ResolutionWidth + 24), sum(ResolutionWidth + 25), sum(ResolutionWidth + 26), sum(ResolutionWidth + 27), sum(ResolutionWidth + 28), sum(ResolutionWidth + 29), sum(ResolutionWidth + 30), sum(ResolutionWidth + 31), sum(ResolutionWidth + 32), sum(ResolutionWidth + 33), sum(ResolutionWidth + 34), sum(ResolutionWidth + 35), sum(ResolutionWidth + 36), sum(ResolutionWidth + 37), sum(ResolutionWidth + 38), sum(ResolutionWidth + 39), sum(ResolutionWidth + 40), sum(ResolutionWidth + 41), sum(ResolutionWidth + 42), sum(ResolutionWidth + 43), sum(ResolutionWidth + 44), sum(ResolutionWidth + 45), sum(ResolutionWidth + 46), sum(ResolutionWidth + 47), sum(ResolutionWidth + 48), sum(ResolutionWidth + 49), sum(ResolutionWidth + 50), sum(ResolutionWidth + 51), sum(ResolutionWidth + 52), sum(ResolutionWidth + 53), sum(ResolutionWidth + 54), sum(ResolutionWidth + 55), sum(ResolutionWidth + 56), sum(ResolutionWidth + 57), sum(ResolutionWidth + 58), sum(ResolutionWidth + 59), sum(ResolutionWidth + 60), sum(ResolutionWidth + 61), sum(ResolutionWidth + 62), sum(ResolutionWidth + 63), sum(ResolutionWidth + 64), sum(ResolutionWidth + 65), sum(ResolutionWidth + 66), sum(ResolutionWidth + 67), sum(ResolutionWidth + 68), sum(ResolutionWidth + 69), sum(ResolutionWidth + 70), sum(ResolutionWidth + 71), sum(ResolutionWidth + 72), sum(ResolutionWidth + 73), 
sum(ResolutionWidth + 74), sum(ResolutionWidth + 75), sum(ResolutionWidth + 76), sum(ResolutionWidth + 77), sum(ResolutionWidth + 78), sum(ResolutionWidth + 79), sum(ResolutionWidth + 80), sum(ResolutionWidth + 81), sum(ResolutionWidth + 82), sum(ResolutionWidth + 83), sum(ResolutionWidth + 84), sum(ResolutionWidth + 85), sum(ResolutionWidth + 86), sum(ResolutionWidth + 87), sum(ResolutionWidth + 88), sum(ResolutionWidth + 89) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0347
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-11 19:32:47,696 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:32:58,740 Stage-1 map = 4%, reduce = 0%
2013-09-11 19:33:01,753 Stage-1 map = 7%, reduce = 0%
2013-09-11 19:33:07,785 Stage-1 map = 14%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:08,792 Stage-1 map = 14%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:09,798 Stage-1 map = 14%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:10,803 Stage-1 map = 14%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:11,809 Stage-1 map = 14%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:12,814 Stage-1 map = 14%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:13,819 Stage-1 map = 22%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:14,824 Stage-1 map = 22%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:15,829 Stage-1 map = 22%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:16,834 Stage-1 map = 22%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:17,840 Stage-1 map = 22%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:18,845 Stage-1 map = 22%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:19,850 Stage-1 map = 22%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:20,855 Stage-1 map = 22%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:21,860 Stage-1 map = 22%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:22,865 Stage-1 map = 29%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:23,870 Stage-1 map = 29%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:24,876 Stage-1 map = 29%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:25,881 Stage-1 map = 29%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:26,886 Stage-1 map = 29%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:27,891 Stage-1 map = 29%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:28,896 Stage-1 map = 36%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:29,901 Stage-1 map = 36%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:30,906 Stage-1 map = 36%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:31,911 Stage-1 map = 36%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:32,915 Stage-1 map = 36%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:33,920 Stage-1 map = 36%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:34,925 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:35,930 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:36,935 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:37,939 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:38,944 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:39,952 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 39.05 sec
2013-09-11 19:33:40,959 Stage-1 map = 47%, reduce = 0%, Cumulative CPU 76.73 sec
2013-09-11 19:33:41,965 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 114.74 sec
2013-09-11 19:33:42,970 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 114.74 sec
2013-09-11 19:33:43,975 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 114.74 sec
2013-09-11 19:33:44,981 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 114.74 sec
2013-09-11 19:33:45,986 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 114.74 sec
2013-09-11 19:33:46,991 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 114.74 sec
2013-09-11 19:33:47,996 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 114.74 sec
2013-09-11 19:33:49,020 Stage-1 map = 50%, reduce = 17%, Cumulative CPU 114.74 sec
2013-09-11 19:33:50,025 Stage-1 map = 50%, reduce = 17%, Cumulative CPU 114.74 sec
2013-09-11 19:33:51,030 Stage-1 map = 50%, reduce = 17%, Cumulative CPU 114.74 sec
2013-09-11 19:33:52,035 Stage-1 map = 50%, reduce = 17%, Cumulative CPU 114.74 sec
2013-09-11 19:33:53,040 Stage-1 map = 50%, reduce = 17%, Cumulative CPU 114.74 sec
2013-09-11 19:33:54,045 Stage-1 map = 50%, reduce = 17%, Cumulative CPU 114.74 sec
2013-09-11 19:33:55,050 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 114.74 sec
2013-09-11 19:33:56,055 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 114.74 sec
2013-09-11 19:33:57,060 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 114.74 sec
2013-09-11 19:33:58,064 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 114.74 sec
2013-09-11 19:33:59,113 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 114.74 sec
2013-09-11 19:34:00,118 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 114.74 sec
2013-09-11 19:34:01,123 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 114.74 sec
2013-09-11 19:34:02,128 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 114.74 sec
2013-09-11 19:34:03,132 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 114.74 sec
2013-09-11 19:34:04,146 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 114.74 sec
2013-09-11 19:34:05,152 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 167.26 sec
2013-09-11 19:34:06,161 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 167.26 sec
2013-09-11 19:34:07,166 Stage-1 map = 69%, reduce = 17%, Cumulative CPU 167.26 sec
2013-09-11 19:34:08,171 Stage-1 map = 69%, reduce = 17%, Cumulative CPU 167.26 sec
2013-09-11 19:34:09,196 Stage-1 map = 69%, reduce = 17%, Cumulative CPU 167.26 sec
2013-09-11 19:34:10,201 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 167.26 sec
2013-09-11 19:34:11,206 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 167.26 sec
2013-09-11 19:34:12,211 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 167.26 sec
2013-09-11 19:34:13,217 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 167.26 sec
2013-09-11 19:34:14,237 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 167.26 sec
2013-09-11 19:34:15,242 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 167.26 sec
2013-09-11 19:34:16,247 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 167.26 sec
2013-09-11 19:34:17,252 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 167.26 sec
2013-09-11 19:34:18,257 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 167.26 sec
2013-09-11 19:34:19,262 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 167.26 sec
2013-09-11 19:34:20,267 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 167.26 sec
2013-09-11 19:34:21,272 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 167.26 sec
2013-09-11 19:34:22,277 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 167.26 sec
2013-09-11 19:34:23,282 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 167.26 sec
2013-09-11 19:34:24,287 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 167.26 sec
2013-09-11 19:34:25,292 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 167.26 sec
2013-09-11 19:34:26,297 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 167.26 sec
2013-09-11 19:34:27,302 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 167.26 sec
2013-09-11 19:34:28,307 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 167.26 sec
2013-09-11 19:34:29,311 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 167.26 sec
2013-09-11 19:34:30,316 Stage-1 map = 93%, reduce = 17%, Cumulative CPU 193.91 sec
2013-09-11 19:34:31,320 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 193.91 sec
2013-09-11 19:34:32,325 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 193.91 sec
2013-09-11 19:34:33,330 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 193.91 sec
2013-09-11 19:34:34,335 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 193.91 sec
2013-09-11 19:34:35,339 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 193.91 sec
2013-09-11 19:34:36,344 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 226.92 sec
2013-09-11 19:34:37,348 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 226.92 sec
2013-09-11 19:34:38,353 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 226.92 sec
2013-09-11 19:34:39,358 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 226.92 sec
2013-09-11 19:34:40,364 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 230.01 sec
2013-09-11 19:34:41,369 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 230.01 sec
MapReduce Total cumulative CPU time: 3 minutes 50 seconds 10 msec
Ended Job = job_201309101627_0347
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 230.01 sec HDFS Read: 7797536 HDFS Write: 1080 SUCCESS
Total MapReduce CPU Time Spent: 3 minutes 50 seconds 10 msec
OK
Time taken: 122.089 seconds, Fetched: 1 row(s)
hive> quit;
-- many dumb aggregate functions.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_9487@mturlrep13_201309111934_1498253935.txt
hive> ;
hive> quit;
times: 1
query: SELECT SearchEngineID, ClientIP, count(*) AS c, sum(Refresh), avg(ResolutionWidth) FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchEngineID, ClientIP ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_9958@mturlrep13_201309111934_990229488.txt
hive> SELECT SearchEngineID, ClientIP, count(*) AS c, sum(Refresh), avg(ResolutionWidth) FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchEngineID, ClientIP ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0348
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 19:35:01,929 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:35:08,962 Stage-1 map = 29%, reduce = 0%
2013-09-11 19:35:11,984 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 25.29 sec
2013-09-11 19:35:12,991 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 25.29 sec
2013-09-11 19:35:13,999 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 25.29 sec
2013-09-11 19:35:15,006 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 25.29 sec
2013-09-11 19:35:16,013 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 25.29 sec
2013-09-11 19:35:17,021 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 25.29 sec
2013-09-11 19:35:18,028 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 25.29 sec
2013-09-11 19:35:19,035 Stage-1 map = 80%, reduce = 8%, Cumulative CPU 25.29 sec
2013-09-11 19:35:20,041 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 25.29 sec
2013-09-11 19:35:21,048 Stage-1 map = 89%, reduce = 17%, Cumulative CPU 36.02 sec
2013-09-11 19:35:22,054 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 47.81 sec
2013-09-11 19:35:23,061 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 47.81 sec
2013-09-11 19:35:24,068 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 47.81 sec
2013-09-11 19:35:25,074 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 47.81 sec
2013-09-11 19:35:26,080 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 47.81 sec
2013-09-11 19:35:27,087 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 47.81 sec
2013-09-11 19:35:28,094 Stage-1 map = 100%, reduce = 54%, Cumulative CPU 47.81 sec
2013-09-11 19:35:29,102 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 62.62 sec
2013-09-11 19:35:30,109 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 62.62 sec
2013-09-11 19:35:31,116 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 62.62 sec
MapReduce Total cumulative CPU time: 1 minutes 2 seconds 620 msec
Ended Job = job_201309101627_0348
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0349
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 19:35:34,611 Stage-2 map = 0%, reduce = 0%
2013-09-11 19:35:41,634 Stage-2 map = 52%, reduce = 0%
2013-09-11 19:35:43,642 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 11.27 sec
2013-09-11 19:35:44,647 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 11.27 sec
2013-09-11 19:35:45,653 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 11.27 sec
2013-09-11 19:35:46,657 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 11.27 sec
2013-09-11 19:35:47,662 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 11.27 sec
2013-09-11 19:35:48,667 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 11.27 sec
2013-09-11 19:35:49,672 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 11.27 sec
2013-09-11 19:35:50,678 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 11.27 sec
2013-09-11 19:35:51,683 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 13.93 sec
2013-09-11 19:35:52,688 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 13.93 sec
2013-09-11 19:35:53,694 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 13.93 sec
MapReduce Total cumulative CPU time: 13 seconds 930 msec
Ended Job = job_201309101627_0349
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 62.62 sec HDFS Read: 69312553 HDFS Write: 31841963 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 13.93 sec HDFS Read: 31842732 HDFS Write: 372 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 16 seconds 550 msec
OK
Time taken: 61.8 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_13220@mturlrep13_201309111935_1450466202.txt
hive> ;
hive> quit;
times: 2
query: SELECT SearchEngineID, ClientIP, count(*) AS c, sum(Refresh), avg(ResolutionWidth) FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchEngineID, ClientIP ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_13656@mturlrep13_201309111935_882257175.txt
hive> SELECT SearchEngineID, ClientIP, count(*) AS c, sum(Refresh), avg(ResolutionWidth) FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchEngineID, ClientIP ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0350
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 19:36:06,591 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:36:14,621 Stage-1 map = 39%, reduce = 0%
2013-09-11 19:36:15,633 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.79 sec
2013-09-11 19:36:16,640 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.79 sec
2013-09-11 19:36:17,650 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.79 sec
2013-09-11 19:36:18,657 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.79 sec
2013-09-11 19:36:19,663 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.79 sec
2013-09-11 19:36:20,670 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.79 sec
2013-09-11 19:36:21,677 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.79 sec
2013-09-11 19:36:22,683 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.79 sec
2013-09-11 19:36:23,689 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 23.79 sec
2013-09-11 19:36:24,694 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 47.4 sec
2013-09-11 19:36:25,700 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 47.4 sec
2013-09-11 19:36:26,706 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 47.4 sec
2013-09-11 19:36:27,711 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 47.4 sec
2013-09-11 19:36:28,717 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 47.4 sec
2013-09-11 19:36:29,723 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 47.4 sec
2013-09-11 19:36:30,728 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 47.4 sec
2013-09-11 19:36:31,734 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 47.4 sec
2013-09-11 19:36:32,740 Stage-1 map = 100%, reduce = 92%, Cumulative CPU 47.4 sec
2013-09-11 19:36:33,748 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 62.39 sec
2013-09-11 19:36:34,754 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 62.39 sec
MapReduce Total cumulative CPU time: 1 minutes 2 seconds 390 msec
Ended Job = job_201309101627_0350
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0351
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 19:36:38,242 Stage-2 map = 0%, reduce = 0%
2013-09-11 19:36:45,265 Stage-2 map = 52%, reduce = 0%
2013-09-11 19:36:47,275 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 12.17 sec
2013-09-11 19:36:48,280 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 12.17 sec
2013-09-11 19:36:49,285 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 12.17 sec
2013-09-11 19:36:50,289 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 12.17 sec
2013-09-11 19:36:51,294 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 12.17 sec
2013-09-11 19:36:52,299 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 12.17 sec
2013-09-11 19:36:53,303 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 12.17 sec
2013-09-11 19:36:54,308 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 12.17 sec
2013-09-11 19:36:55,313 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 12.17 sec
2013-09-11 19:36:56,319 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 14.91 sec
2013-09-11 19:36:57,324 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 14.91 sec
MapReduce Total cumulative CPU time: 14 seconds 910 msec
Ended Job = job_201309101627_0351
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 62.39 sec HDFS Read: 69312553 HDFS Write: 31841963 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 14.91 sec HDFS Read: 31842732 HDFS Write: 372 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 17 seconds 300 msec
OK
Time taken: 58.141 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_16233@mturlrep13_201309111936_495916780.txt
hive> ;
hive> quit;
times: 3
query: SELECT SearchEngineID, ClientIP, count(*) AS c, sum(Refresh), avg(ResolutionWidth) FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchEngineID, ClientIP ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_16705@mturlrep13_201309111937_103662092.txt
hive> SELECT SearchEngineID, ClientIP, count(*) AS c, sum(Refresh), avg(ResolutionWidth) FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchEngineID, ClientIP ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0352
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 19:37:11,503 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:37:18,531 Stage-1 map = 36%, reduce = 0%
2013-09-11 19:37:20,547 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.6 sec
2013-09-11 19:37:21,555 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.6 sec
2013-09-11 19:37:22,563 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.6 sec
2013-09-11 19:37:23,569 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.6 sec
2013-09-11 19:37:24,576 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.6 sec
2013-09-11 19:37:25,582 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.6 sec
2013-09-11 19:37:26,589 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.6 sec
2013-09-11 19:37:27,596 Stage-1 map = 88%, reduce = 8%, Cumulative CPU 24.6 sec
2013-09-11 19:37:28,602 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 24.6 sec
2013-09-11 19:37:29,608 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.04 sec
2013-09-11 19:37:30,613 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.04 sec
2013-09-11 19:37:31,618 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.04 sec
2013-09-11 19:37:32,624 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.04 sec
2013-09-11 19:37:33,630 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.04 sec
2013-09-11 19:37:34,635 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.04 sec
2013-09-11 19:37:35,640 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.04 sec
2013-09-11 19:37:36,645 Stage-1 map = 100%, reduce = 55%, Cumulative CPU 48.04 sec
2013-09-11 19:37:37,652 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 62.72 sec
2013-09-11 19:37:38,658 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 62.72 sec
2013-09-11 19:37:39,664 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 62.72 sec
MapReduce Total cumulative CPU time: 1 minutes 2 seconds 720 msec
Ended Job = job_201309101627_0352
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0353
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 19:37:42,202 Stage-2 map = 0%, reduce = 0%
2013-09-11 19:37:50,230 Stage-2 map = 52%, reduce = 0%
2013-09-11 19:37:51,236 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 11.46 sec
2013-09-11 19:37:52,243 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 11.46 sec
2013-09-11 19:37:53,248 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 11.46 sec
2013-09-11 19:37:54,253 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 11.46 sec
2013-09-11 19:37:55,258 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 11.46 sec
2013-09-11 19:37:56,263 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 11.46 sec
2013-09-11 19:37:57,268 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 11.46 sec
2013-09-11 19:37:58,272 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 11.46 sec
2013-09-11 19:37:59,277 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 11.46 sec
2013-09-11 19:38:00,283 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 14.2 sec
2013-09-11 19:38:01,289 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 14.2 sec
MapReduce Total cumulative CPU time: 14 seconds 200 msec
Ended Job = job_201309101627_0353
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 62.72 sec HDFS Read: 69312553 HDFS Write: 31841963 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 14.2 sec HDFS Read: 31842732 HDFS Write: 372 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 16 seconds 920 msec
OK
Time taken: 58.209 seconds, Fetched: 10 row(s)
hive> quit;
-- complex aggregation; for large tables there may not be enough RAM.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_19487@mturlrep13_201309111938_449325071.txt
hive> ;
hive> quit;
times: 1
query: SELECT WatchID, ClientIP, count(*) AS c, sum(Refresh), avg(ResolutionWidth) FROM hits_10m WHERE SearchPhrase != '' GROUP BY WatchID, ClientIP ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_19952@mturlrep13_201309111938_524378054.txt
hive> SELECT WatchID, ClientIP, count(*) AS c, sum(Refresh), avg(ResolutionWidth) FROM hits_10m WHERE SearchPhrase != '' GROUP BY WatchID, ClientIP ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0354
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 19:38:23,549 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:38:30,587 Stage-1 map = 29%, reduce = 0%
2013-09-11 19:38:33,607 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.07 sec
2013-09-11 19:38:34,614 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.07 sec
2013-09-11 19:38:35,622 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.07 sec
2013-09-11 19:38:36,628 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.07 sec
2013-09-11 19:38:37,635 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.07 sec
2013-09-11 19:38:38,642 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.07 sec
2013-09-11 19:38:39,649 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.07 sec
2013-09-11 19:38:40,655 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.07 sec
2013-09-11 19:38:41,660 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 24.07 sec
2013-09-11 19:38:42,665 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 24.07 sec
2013-09-11 19:38:43,671 Stage-1 map = 89%, reduce = 17%, Cumulative CPU 36.3 sec
2013-09-11 19:38:44,676 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.95 sec
2013-09-11 19:38:45,681 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.95 sec
2013-09-11 19:38:46,686 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.95 sec
2013-09-11 19:38:47,691 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.95 sec
2013-09-11 19:38:48,696 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.95 sec
2013-09-11 19:38:49,702 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.95 sec
2013-09-11 19:38:50,707 Stage-1 map = 100%, reduce = 86%, Cumulative CPU 48.95 sec
2013-09-11 19:38:51,715 Stage-1 map = 100%, reduce = 93%, Cumulative CPU 57.98 sec
2013-09-11 19:38:52,721 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 66.7 sec
2013-09-11 19:38:53,727 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 66.7 sec
MapReduce Total cumulative CPU time: 1 minutes 6 seconds 700 msec
Ended Job = job_201309101627_0354
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0355
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 19:38:57,273 Stage-2 map = 0%, reduce = 0%
2013-09-11 19:39:07,307 Stage-2 map = 50%, reduce = 0%, Cumulative CPU 10.01 sec
2013-09-11 19:39:08,312 Stage-2 map = 50%, reduce = 0%, Cumulative CPU 10.01 sec
2013-09-11 19:39:09,316 Stage-2 map = 50%, reduce = 0%, Cumulative CPU 10.01 sec
2013-09-11 19:39:10,321 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 17.48 sec
2013-09-11 19:39:11,326 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 17.48 sec
2013-09-11 19:39:12,331 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 17.48 sec
2013-09-11 19:39:13,336 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 17.48 sec
2013-09-11 19:39:14,341 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 17.48 sec
2013-09-11 19:39:15,346 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 17.48 sec
2013-09-11 19:39:16,350 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 17.48 sec
2013-09-11 19:39:17,356 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 17.48 sec
2013-09-11 19:39:18,361 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 17.48 sec
2013-09-11 19:39:19,366 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 22.34 sec
2013-09-11 19:39:20,372 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 22.34 sec
2013-09-11 19:39:21,377 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 22.34 sec
MapReduce Total cumulative CPU time: 22 seconds 340 msec
Ended Job = job_201309101627_0355
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 66.7 sec HDFS Read: 112931901 HDFS Write: 72725701 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 22.34 sec HDFS Read: 72726470 HDFS Write: 417 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 29 seconds 40 msec
OK
Time taken: 67.908 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_22641@mturlrep13_201309111939_2035435508.txt
hive> ;
hive> quit;
times: 2
query: SELECT WatchID, ClientIP, count(*) AS c, sum(Refresh), avg(ResolutionWidth) FROM hits_10m WHERE SearchPhrase != '' GROUP BY WatchID, ClientIP ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_23093@mturlrep13_201309111939_526667313.txt
hive> SELECT WatchID, ClientIP, count(*) AS c, sum(Refresh), avg(ResolutionWidth) FROM hits_10m WHERE SearchPhrase != '' GROUP BY WatchID, ClientIP ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0356
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 19:39:35,562 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:39:42,588 Stage-1 map = 36%, reduce = 0%
2013-09-11 19:39:44,605 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.84 sec
2013-09-11 19:39:45,612 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.84 sec
2013-09-11 19:39:46,620 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.84 sec
2013-09-11 19:39:47,626 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.84 sec
2013-09-11 19:39:48,632 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.84 sec
2013-09-11 19:39:49,638 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.84 sec
2013-09-11 19:39:50,644 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.84 sec
2013-09-11 19:39:51,650 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.84 sec
2013-09-11 19:39:52,655 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 24.84 sec
2013-09-11 19:39:53,660 Stage-1 map = 93%, reduce = 17%, Cumulative CPU 36.18 sec
2013-09-11 19:39:54,665 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.13 sec
2013-09-11 19:39:55,671 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.13 sec
2013-09-11 19:39:56,676 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.13 sec
2013-09-11 19:39:57,682 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.13 sec
2013-09-11 19:39:58,688 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.13 sec
2013-09-11 19:39:59,693 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.13 sec
2013-09-11 19:40:00,698 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.13 sec
2013-09-11 19:40:01,705 Stage-1 map = 100%, reduce = 84%, Cumulative CPU 48.13 sec
2013-09-11 19:40:02,710 Stage-1 map = 100%, reduce = 84%, Cumulative CPU 48.13 sec
2013-09-11 19:40:03,717 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 66.55 sec
2013-09-11 19:40:04,723 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 66.55 sec
MapReduce Total cumulative CPU time: 1 minutes 6 seconds 550 msec
Ended Job = job_201309101627_0356
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0357
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 19:40:08,243 Stage-2 map = 0%, reduce = 0%
2013-09-11 19:40:18,275 Stage-2 map = 50%, reduce = 0%
2013-09-11 19:40:21,285 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.04 sec
2013-09-11 19:40:22,290 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.04 sec
2013-09-11 19:40:23,295 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.04 sec
2013-09-11 19:40:24,299 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.04 sec
2013-09-11 19:40:25,303 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.04 sec
2013-09-11 19:40:26,308 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.04 sec
2013-09-11 19:40:27,312 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.04 sec
2013-09-11 19:40:28,319 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 18.04 sec
2013-09-11 19:40:29,325 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 18.04 sec
2013-09-11 19:40:30,331 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 22.87 sec
2013-09-11 19:40:31,336 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 22.87 sec
2013-09-11 19:40:32,341 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 22.87 sec
MapReduce Total cumulative CPU time: 22 seconds 870 msec
Ended Job = job_201309101627_0357
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 66.55 sec HDFS Read: 112931901 HDFS Write: 72725701 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 22.87 sec HDFS Read: 72726470 HDFS Write: 417 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 29 seconds 420 msec
OK
Time taken: 65.342 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_26473@mturlrep13_201309111940_1279166240.txt
hive> ;
hive> quit;
times: 3
query: SELECT WatchID, ClientIP, count(*) AS c, sum(Refresh), avg(ResolutionWidth) FROM hits_10m WHERE SearchPhrase != '' GROUP BY WatchID, ClientIP ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_26931@mturlrep13_201309111940_790078242.txt
hive> SELECT WatchID, ClientIP, count(*) AS c, sum(Refresh), avg(ResolutionWidth) FROM hits_10m WHERE SearchPhrase != '' GROUP BY WatchID, ClientIP ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0358
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 19:40:46,365 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:40:53,395 Stage-1 map = 36%, reduce = 0%
2013-09-11 19:40:55,412 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.12 sec
2013-09-11 19:40:56,420 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.12 sec
2013-09-11 19:40:57,429 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.12 sec
2013-09-11 19:40:58,435 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.12 sec
2013-09-11 19:40:59,442 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.12 sec
2013-09-11 19:41:00,449 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.12 sec
2013-09-11 19:41:01,457 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.12 sec
2013-09-11 19:41:02,463 Stage-1 map = 68%, reduce = 8%, Cumulative CPU 24.12 sec
2013-09-11 19:41:03,469 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 24.12 sec
2013-09-11 19:41:04,475 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 46.93 sec
2013-09-11 19:41:05,481 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.42 sec
2013-09-11 19:41:06,487 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.42 sec
2013-09-11 19:41:07,493 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.42 sec
2013-09-11 19:41:08,499 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.42 sec
2013-09-11 19:41:09,505 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.42 sec
2013-09-11 19:41:10,511 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.42 sec
2013-09-11 19:41:11,518 Stage-1 map = 100%, reduce = 50%, Cumulative CPU 48.42 sec
2013-09-11 19:41:12,524 Stage-1 map = 100%, reduce = 84%, Cumulative CPU 48.42 sec
2013-09-11 19:41:13,531 Stage-1 map = 100%, reduce = 92%, Cumulative CPU 56.93 sec
2013-09-11 19:41:14,537 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 65.67 sec
2013-09-11 19:41:15,544 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 65.67 sec
MapReduce Total cumulative CPU time: 1 minutes 5 seconds 670 msec
Ended Job = job_201309101627_0358
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0359
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 19:41:19,228 Stage-2 map = 0%, reduce = 0%
2013-09-11 19:41:29,262 Stage-2 map = 50%, reduce = 0%
2013-09-11 19:41:32,273 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 17.08 sec
2013-09-11 19:41:33,278 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 17.08 sec
2013-09-11 19:41:34,283 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 17.08 sec
2013-09-11 19:41:35,287 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 17.08 sec
2013-09-11 19:41:36,291 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 17.08 sec
2013-09-11 19:41:37,296 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 17.08 sec
2013-09-11 19:41:38,301 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 17.08 sec
2013-09-11 19:41:39,307 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 17.08 sec
2013-09-11 19:41:40,311 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 17.08 sec
2013-09-11 19:41:41,317 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 21.69 sec
2013-09-11 19:41:42,323 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 21.69 sec
2013-09-11 19:41:43,328 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 21.69 sec
MapReduce Total cumulative CPU time: 21 seconds 690 msec
Ended Job = job_201309101627_0359
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 65.67 sec HDFS Read: 112931901 HDFS Write: 72725701 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 21.69 sec HDFS Read: 72726470 HDFS Write: 417 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 27 seconds 360 msec
OK
Time taken: 65.41 seconds, Fetched: 10 row(s)
hive> quit;
-- aggregation by two fields that doesn't actually aggregate anything. For large tables this query won't run to completion.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_29665@mturlrep13_201309111941_1737478499.txt
hive> ;
hive> quit;
times: 1
query: SELECT WatchID, ClientIP, count(*) AS c, sum(Refresh), avg(ResolutionWidth) FROM hits_10m GROUP BY WatchID, ClientIP ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_30133@mturlrep13_201309111941_1994654944.txt
hive> SELECT WatchID, ClientIP, count(*) AS c, sum(Refresh), avg(ResolutionWidth) FROM hits_10m GROUP BY WatchID, ClientIP ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0360
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 19:42:06,244 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:42:13,269 Stage-1 map = 7%, reduce = 0%
2013-09-11 19:42:16,281 Stage-1 map = 18%, reduce = 0%
2013-09-11 19:42:19,293 Stage-1 map = 29%, reduce = 0%
2013-09-11 19:42:22,306 Stage-1 map = 36%, reduce = 0%
2013-09-11 19:42:25,316 Stage-1 map = 43%, reduce = 0%
2013-09-11 19:42:27,330 Stage-1 map = 46%, reduce = 0%, Cumulative CPU 25.54 sec
2013-09-11 19:42:28,336 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 53.11 sec
2013-09-11 19:42:29,343 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 53.11 sec
2013-09-11 19:42:30,349 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 53.11 sec
2013-09-11 19:42:31,354 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 53.11 sec
2013-09-11 19:42:32,359 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 53.11 sec
2013-09-11 19:42:33,365 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 53.11 sec
2013-09-11 19:42:34,370 Stage-1 map = 54%, reduce = 13%, Cumulative CPU 53.11 sec
2013-09-11 19:42:35,375 Stage-1 map = 57%, reduce = 13%, Cumulative CPU 53.11 sec
2013-09-11 19:42:36,381 Stage-1 map = 57%, reduce = 13%, Cumulative CPU 53.11 sec
2013-09-11 19:42:37,386 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 53.11 sec
2013-09-11 19:42:38,391 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 53.11 sec
2013-09-11 19:42:39,397 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 53.11 sec
2013-09-11 19:42:40,403 Stage-1 map = 76%, reduce = 17%, Cumulative CPU 53.11 sec
2013-09-11 19:42:41,408 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 53.11 sec
2013-09-11 19:42:42,423 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 53.11 sec
2013-09-11 19:42:43,428 Stage-1 map = 84%, reduce = 17%, Cumulative CPU 53.11 sec
2013-09-11 19:42:44,433 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 53.11 sec
2013-09-11 19:42:45,439 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 53.11 sec
2013-09-11 19:42:46,444 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 76.02 sec
2013-09-11 19:42:47,449 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 76.02 sec
2013-09-11 19:42:48,454 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 102.55 sec
2013-09-11 19:42:49,458 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 102.55 sec
2013-09-11 19:42:50,462 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 102.55 sec
2013-09-11 19:42:51,466 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 102.55 sec
2013-09-11 19:42:52,471 Stage-1 map = 100%, reduce = 46%, Cumulative CPU 102.55 sec
2013-09-11 19:42:53,475 Stage-1 map = 100%, reduce = 46%, Cumulative CPU 102.55 sec
2013-09-11 19:42:54,479 Stage-1 map = 100%, reduce = 46%, Cumulative CPU 102.55 sec
2013-09-11 19:42:55,484 Stage-1 map = 100%, reduce = 52%, Cumulative CPU 102.55 sec
2013-09-11 19:42:57,407 Stage-1 map = 100%, reduce = 52%, Cumulative CPU 102.55 sec
2013-09-11 19:42:58,412 Stage-1 map = 100%, reduce = 70%, Cumulative CPU 102.55 sec
2013-09-11 19:42:59,417 Stage-1 map = 100%, reduce = 70%, Cumulative CPU 102.55 sec
2013-09-11 19:43:00,422 Stage-1 map = 100%, reduce = 70%, Cumulative CPU 102.55 sec
2013-09-11 19:43:01,427 Stage-1 map = 100%, reduce = 74%, Cumulative CPU 102.55 sec
2013-09-11 19:43:02,432 Stage-1 map = 100%, reduce = 74%, Cumulative CPU 102.55 sec
2013-09-11 19:43:03,438 Stage-1 map = 100%, reduce = 74%, Cumulative CPU 102.55 sec
2013-09-11 19:43:04,443 Stage-1 map = 100%, reduce = 78%, Cumulative CPU 102.55 sec
2013-09-11 19:43:05,450 Stage-1 map = 100%, reduce = 78%, Cumulative CPU 139.29 sec
2013-09-11 19:43:06,456 Stage-1 map = 100%, reduce = 78%, Cumulative CPU 139.29 sec
2013-09-11 19:43:07,461 Stage-1 map = 100%, reduce = 83%, Cumulative CPU 139.29 sec
2013-09-11 19:43:08,467 Stage-1 map = 100%, reduce = 83%, Cumulative CPU 139.29 sec
2013-09-11 19:43:09,472 Stage-1 map = 100%, reduce = 83%, Cumulative CPU 139.29 sec
2013-09-11 19:43:10,477 Stage-1 map = 100%, reduce = 88%, Cumulative CPU 139.29 sec
2013-09-11 19:43:11,487 Stage-1 map = 100%, reduce = 88%, Cumulative CPU 139.29 sec
2013-09-11 19:43:12,493 Stage-1 map = 100%, reduce = 88%, Cumulative CPU 139.29 sec
2013-09-11 19:43:13,498 Stage-1 map = 100%, reduce = 90%, Cumulative CPU 139.29 sec
2013-09-11 19:43:14,503 Stage-1 map = 100%, reduce = 93%, Cumulative CPU 139.29 sec
2013-09-11 19:43:15,508 Stage-1 map = 100%, reduce = 93%, Cumulative CPU 139.29 sec
2013-09-11 19:43:16,514 Stage-1 map = 100%, reduce = 94%, Cumulative CPU 151.55 sec
2013-09-11 19:43:17,625 Stage-1 map = 100%, reduce = 96%, Cumulative CPU 151.55 sec
2013-09-11 19:43:18,631 Stage-1 map = 100%, reduce = 96%, Cumulative CPU 151.55 sec
2013-09-11 19:43:19,637 Stage-1 map = 100%, reduce = 99%, Cumulative CPU 151.55 sec
2013-09-11 19:43:20,642 Stage-1 map = 100%, reduce = 99%, Cumulative CPU 151.55 sec
2013-09-11 19:43:21,646 Stage-1 map = 100%, reduce = 99%, Cumulative CPU 151.55 sec
2013-09-11 19:43:22,850 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 169.64 sec
2013-09-11 19:43:23,855 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 169.64 sec
MapReduce Total cumulative CPU time: 2 minutes 49 seconds 640 msec
Ended Job = job_201309101627_0360
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0361
Hadoop job information for Stage-2: number of mappers: 2; number of reducers: 1
2013-09-11 19:43:27,476 Stage-2 map = 0%, reduce = 0%
2013-09-11 19:43:40,514 Stage-2 map = 36%, reduce = 0%
2013-09-11 19:43:49,541 Stage-2 map = 72%, reduce = 0%
2013-09-11 19:43:51,548 Stage-2 map = 74%, reduce = 0%, Cumulative CPU 29.17 sec
2013-09-11 19:43:52,553 Stage-2 map = 74%, reduce = 0%, Cumulative CPU 29.17 sec
2013-09-11 19:43:53,557 Stage-2 map = 74%, reduce = 0%, Cumulative CPU 29.17 sec
2013-09-11 19:43:54,561 Stage-2 map = 74%, reduce = 0%, Cumulative CPU 29.17 sec
2013-09-11 19:43:55,565 Stage-2 map = 88%, reduce = 0%, Cumulative CPU 29.17 sec
2013-09-11 19:43:56,817 Stage-2 map = 88%, reduce = 0%, Cumulative CPU 29.17 sec
2013-09-11 19:43:57,821 Stage-2 map = 88%, reduce = 0%, Cumulative CPU 29.17 sec
2013-09-11 19:43:58,825 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 29.17 sec
2013-09-11 19:43:59,829 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 29.17 sec
2013-09-11 19:44:00,834 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 29.17 sec
2013-09-11 19:44:01,838 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 29.17 sec
2013-09-11 19:44:02,842 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 29.17 sec
2013-09-11 19:44:03,846 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 29.17 sec
2013-09-11 19:44:04,850 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 29.17 sec
2013-09-11 19:44:05,855 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 75.56 sec
2013-09-11 19:44:06,860 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 75.56 sec
2013-09-11 19:44:07,865 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 75.56 sec
2013-09-11 19:44:08,869 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 79.0 sec
2013-09-11 19:44:09,874 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 79.0 sec
2013-09-11 19:44:10,878 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 79.0 sec
2013-09-11 19:44:11,882 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 79.0 sec
2013-09-11 19:44:12,886 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 79.0 sec
2013-09-11 19:44:13,891 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 79.0 sec
2013-09-11 19:44:14,896 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 79.0 sec
2013-09-11 19:44:15,900 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 79.0 sec
2013-09-11 19:44:16,904 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 79.0 sec
2013-09-11 19:44:17,908 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 79.0 sec
2013-09-11 19:44:18,913 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 79.0 sec
2013-09-11 19:44:19,917 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 79.0 sec
2013-09-11 19:44:20,921 Stage-2 map = 100%, reduce = 67%, Cumulative CPU 79.0 sec
2013-09-11 19:44:21,925 Stage-2 map = 100%, reduce = 67%, Cumulative CPU 79.0 sec
2013-09-11 19:44:23,426 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 88.07 sec
2013-09-11 19:44:24,431 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 88.07 sec
2013-09-11 19:44:25,436 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 88.07 sec
MapReduce Total cumulative CPU time: 1 minutes 28 seconds 70 msec
Ended Job = job_201309101627_0361
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 169.64 sec HDFS Read: 85707829 HDFS Write: 413932232 SUCCESS
Job 1: Map: 2 Reduce: 1 Cumulative CPU: 88.07 sec HDFS Read: 413942944 HDFS Write: 420 SUCCESS
Total MapReduce CPU Time Spent: 4 minutes 17 seconds 710 msec
OK
Time taken: 149.238 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_1184@mturlrep13_201309111944_629860670.txt
hive> ;
hive> quit;
times: 2
query: SELECT WatchID, ClientIP, count(*) AS c, sum(Refresh), avg(ResolutionWidth) FROM hits_10m GROUP BY WatchID, ClientIP ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_1658@mturlrep13_201309111944_819122527.txt
hive> SELECT WatchID, ClientIP, count(*) AS c, sum(Refresh), avg(ResolutionWidth) FROM hits_10m GROUP BY WatchID, ClientIP ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0362
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 19:44:39,567 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:44:49,606 Stage-1 map = 7%, reduce = 0%
2013-09-11 19:44:52,619 Stage-1 map = 14%, reduce = 0%
2013-09-11 19:44:55,633 Stage-1 map = 29%, reduce = 0%
2013-09-11 19:44:58,646 Stage-1 map = 36%, reduce = 0%
2013-09-11 19:45:01,658 Stage-1 map = 43%, reduce = 0%
2013-09-11 19:45:03,673 Stage-1 map = 46%, reduce = 0%, Cumulative CPU 25.75 sec
2013-09-11 19:45:04,680 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 53.01 sec
2013-09-11 19:45:05,689 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 53.01 sec
2013-09-11 19:45:06,695 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 53.01 sec
2013-09-11 19:45:07,702 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 53.01 sec
2013-09-11 19:45:08,708 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 53.01 sec
2013-09-11 19:45:09,714 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 53.01 sec
2013-09-11 19:45:10,720 Stage-1 map = 54%, reduce = 8%, Cumulative CPU 53.01 sec
2013-09-11 19:45:11,726 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 53.01 sec
2013-09-11 19:45:12,732 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 53.01 sec
2013-09-11 19:45:13,738 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 53.01 sec
2013-09-11 19:45:14,744 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 53.01 sec
2013-09-11 19:45:15,750 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 53.01 sec
2013-09-11 19:45:16,756 Stage-1 map = 76%, reduce = 17%, Cumulative CPU 53.01 sec
2013-09-11 19:45:17,762 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 53.01 sec
2013-09-11 19:45:18,768 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 53.01 sec
2013-09-11 19:45:19,774 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 53.01 sec
2013-09-11 19:45:20,780 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 53.01 sec
2013-09-11 19:45:21,786 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 76.34 sec
2013-09-11 19:45:22,792 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 76.34 sec
2013-09-11 19:45:23,798 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 102.7 sec
2013-09-11 19:45:24,803 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 102.7 sec
2013-09-11 19:45:25,808 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 102.7 sec
2013-09-11 19:45:26,814 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 102.7 sec
2013-09-11 19:45:27,820 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 102.7 sec
2013-09-11 19:45:28,826 Stage-1 map = 100%, reduce = 50%, Cumulative CPU 102.7 sec
2013-09-11 19:45:29,831 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 102.7 sec
2013-09-11 19:45:30,837 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 102.7 sec
2013-09-11 19:45:31,842 Stage-1 map = 100%, reduce = 68%, Cumulative CPU 102.7 sec
2013-09-11 19:45:33,451 Stage-1 map = 100%, reduce = 70%, Cumulative CPU 102.7 sec
2013-09-11 19:45:34,456 Stage-1 map = 100%, reduce = 70%, Cumulative CPU 102.7 sec
2013-09-11 19:45:35,462 Stage-1 map = 100%, reduce = 75%, Cumulative CPU 102.7 sec
2013-09-11 19:45:36,468 Stage-1 map = 100%, reduce = 75%, Cumulative CPU 102.7 sec
2013-09-11 19:45:37,473 Stage-1 map = 100%, reduce = 75%, Cumulative CPU 102.7 sec
2013-09-11 19:45:38,479 Stage-1 map = 100%, reduce = 79%, Cumulative CPU 102.7 sec
2013-09-11 19:45:39,485 Stage-1 map = 100%, reduce = 79%, Cumulative CPU 102.7 sec
2013-09-11 19:45:40,490 Stage-1 map = 100%, reduce = 79%, Cumulative CPU 102.7 sec
2013-09-11 19:45:41,496 Stage-1 map = 100%, reduce = 84%, Cumulative CPU 102.7 sec
2013-09-11 19:45:42,502 Stage-1 map = 100%, reduce = 84%, Cumulative CPU 102.7 sec
2013-09-11 19:45:43,508 Stage-1 map = 100%, reduce = 84%, Cumulative CPU 102.7 sec
2013-09-11 19:45:44,514 Stage-1 map = 100%, reduce = 89%, Cumulative CPU 102.7 sec
2013-09-11 19:45:45,519 Stage-1 map = 100%, reduce = 89%, Cumulative CPU 102.7 sec
2013-09-11 19:45:46,525 Stage-1 map = 100%, reduce = 89%, Cumulative CPU 102.7 sec
2013-09-11 19:45:47,537 Stage-1 map = 100%, reduce = 94%, Cumulative CPU 102.7 sec
2013-09-11 19:45:48,542 Stage-1 map = 100%, reduce = 94%, Cumulative CPU 102.7 sec
2013-09-11 19:45:49,548 Stage-1 map = 100%, reduce = 94%, Cumulative CPU 102.7 sec
2013-09-11 19:45:50,554 Stage-1 map = 100%, reduce = 99%, Cumulative CPU 102.7 sec
2013-09-11 19:45:51,561 Stage-1 map = 100%, reduce = 99%, Cumulative CPU 134.99 sec
2013-09-11 19:45:52,567 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 168.39 sec
2013-09-11 19:45:53,572 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 168.39 sec
MapReduce Total cumulative CPU time: 2 minutes 48 seconds 390 msec
Ended Job = job_201309101627_0362
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0363
Hadoop job information for Stage-2: number of mappers: 2; number of reducers: 1
2013-09-11 19:45:57,372 Stage-2 map = 0%, reduce = 0%
2013-09-11 19:46:10,035 Stage-2 map = 36%, reduce = 0%, Cumulative CPU 19.18 sec
2013-09-11 19:46:11,039 Stage-2 map = 36%, reduce = 0%, Cumulative CPU 19.18 sec
2013-09-11 19:46:12,044 Stage-2 map = 36%, reduce = 0%, Cumulative CPU 19.18 sec
2013-09-11 19:46:13,049 Stage-2 map = 36%, reduce = 0%, Cumulative CPU 19.18 sec
2013-09-11 19:46:14,053 Stage-2 map = 36%, reduce = 0%, Cumulative CPU 19.18 sec
2013-09-11 19:46:15,058 Stage-2 map = 36%, reduce = 0%, Cumulative CPU 19.18 sec
2013-09-11 19:46:16,062 Stage-2 map = 36%, reduce = 0%, Cumulative CPU 19.18 sec
2013-09-11 19:46:17,067 Stage-2 map = 36%, reduce = 0%, Cumulative CPU 19.18 sec
2013-09-11 19:46:18,071 Stage-2 map = 36%, reduce = 0%, Cumulative CPU 19.18 sec
2013-09-11 19:46:19,076 Stage-2 map = 48%, reduce = 0%, Cumulative CPU 19.18 sec
2013-09-11 19:46:20,081 Stage-2 map = 72%, reduce = 0%, Cumulative CPU 19.18 sec
2013-09-11 19:46:21,086 Stage-2 map = 74%, reduce = 0%, Cumulative CPU 37.8 sec
2013-09-11 19:46:22,091 Stage-2 map = 74%, reduce = 0%, Cumulative CPU 37.8 sec
2013-09-11 19:46:23,095 Stage-2 map = 74%, reduce = 0%, Cumulative CPU 37.8 sec
2013-09-11 19:46:24,100 Stage-2 map = 74%, reduce = 0%, Cumulative CPU 37.8 sec
2013-09-11 19:46:25,104 Stage-2 map = 88%, reduce = 0%, Cumulative CPU 37.8 sec
2013-09-11 19:46:26,109 Stage-2 map = 88%, reduce = 0%, Cumulative CPU 37.8 sec
2013-09-11 19:46:27,115 Stage-2 map = 88%, reduce = 0%, Cumulative CPU 37.8 sec
2013-09-11 19:46:28,120 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 37.8 sec
2013-09-11 19:46:29,124 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 37.8 sec
2013-09-11 19:46:30,128 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 37.8 sec
2013-09-11 19:46:31,133 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 37.8 sec
2013-09-11 19:46:32,138 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 37.8 sec
2013-09-11 19:46:33,142 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 37.8 sec
2013-09-11 19:46:34,147 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 37.8 sec
2013-09-11 19:46:35,152 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 37.8 sec
2013-09-11 19:46:36,157 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 37.8 sec
2013-09-11 19:46:37,161 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 37.8 sec
2013-09-11 19:46:38,166 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 37.8 sec
2013-09-11 19:46:39,171 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 37.8 sec
2013-09-11 19:46:40,175 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 77.37 sec
2013-09-11 19:46:41,180 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 77.37 sec
2013-09-11 19:46:42,184 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 77.37 sec
2013-09-11 19:46:43,189 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 77.37 sec
2013-09-11 19:46:44,194 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 77.37 sec
2013-09-11 19:46:45,198 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 77.37 sec
2013-09-11 19:46:46,202 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 77.37 sec
2013-09-11 19:46:47,207 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 77.37 sec
2013-09-11 19:46:48,211 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 77.37 sec
2013-09-11 19:46:49,216 Stage-2 map = 100%, reduce = 67%, Cumulative CPU 77.37 sec
2013-09-11 19:46:50,222 Stage-2 map = 100%, reduce = 67%, Cumulative CPU 77.37 sec
2013-09-11 19:46:51,227 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 87.68 sec
2013-09-11 19:46:52,232 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 87.68 sec
MapReduce Total cumulative CPU time: 1 minutes 27 seconds 680 msec
Ended Job = job_201309101627_0363
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 168.39 sec HDFS Read: 85707829 HDFS Write: 413932232 SUCCESS
Job 1: Map: 2 Reduce: 1 Cumulative CPU: 87.68 sec HDFS Read: 413942944 HDFS Write: 420 SUCCESS
Total MapReduce CPU Time Spent: 4 minutes 16 seconds 70 msec
OK
Time taken: 141.031 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_5940@mturlrep13_201309111946_577038948.txt
hive> ;
hive> quit;
times: 3
query: SELECT WatchID, ClientIP, count(*) AS c, sum(Refresh), avg(ResolutionWidth) FROM hits_10m GROUP BY WatchID, ClientIP ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_6372@mturlrep13_201309111946_455901052.txt
hive> SELECT WatchID, ClientIP, count(*) AS c, sum(Refresh), avg(ResolutionWidth) FROM hits_10m GROUP BY WatchID, ClientIP ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0364
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 19:47:05,315 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:47:13,350 Stage-1 map = 14%, reduce = 0%
2013-09-11 19:47:16,363 Stage-1 map = 22%, reduce = 0%
2013-09-11 19:47:19,377 Stage-1 map = 32%, reduce = 0%
2013-09-11 19:47:22,391 Stage-1 map = 43%, reduce = 0%
2013-09-11 19:47:25,410 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 52.14 sec
2013-09-11 19:47:26,416 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 52.14 sec
2013-09-11 19:47:27,424 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 52.14 sec
2013-09-11 19:47:28,430 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 52.14 sec
2013-09-11 19:47:29,436 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 52.14 sec
2013-09-11 19:47:30,442 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 52.14 sec
2013-09-11 19:47:31,448 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 52.14 sec
2013-09-11 19:47:32,453 Stage-1 map = 54%, reduce = 8%, Cumulative CPU 52.14 sec
2013-09-11 19:47:33,459 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 52.14 sec
2013-09-11 19:47:34,465 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 52.14 sec
2013-09-11 19:47:35,470 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 52.14 sec
2013-09-11 19:47:36,476 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 52.14 sec
2013-09-11 19:47:37,482 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 52.14 sec
2013-09-11 19:47:38,488 Stage-1 map = 76%, reduce = 17%, Cumulative CPU 52.14 sec
2013-09-11 19:47:39,493 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 52.14 sec
2013-09-11 19:47:40,499 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 52.14 sec
2013-09-11 19:47:41,505 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 52.14 sec
2013-09-11 19:47:42,511 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 52.14 sec
2013-09-11 19:47:43,517 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 75.85 sec
2013-09-11 19:47:44,523 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 75.85 sec
2013-09-11 19:47:45,529 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 103.18 sec
2013-09-11 19:47:46,534 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 103.18 sec
2013-09-11 19:47:47,539 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 103.18 sec
2013-09-11 19:47:48,544 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 103.18 sec
2013-09-11 19:47:49,550 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 103.18 sec
2013-09-11 19:47:50,555 Stage-1 map = 100%, reduce = 50%, Cumulative CPU 103.18 sec
2013-09-11 19:47:51,561 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 103.18 sec
2013-09-11 19:47:52,566 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 103.18 sec
2013-09-11 19:47:53,571 Stage-1 map = 100%, reduce = 69%, Cumulative CPU 103.18 sec
2013-09-11 19:47:54,577 Stage-1 map = 100%, reduce = 70%, Cumulative CPU 103.18 sec
2013-09-11 19:47:55,582 Stage-1 map = 100%, reduce = 70%, Cumulative CPU 103.18 sec
2013-09-11 19:47:56,587 Stage-1 map = 100%, reduce = 73%, Cumulative CPU 103.18 sec
2013-09-11 19:47:57,592 Stage-1 map = 100%, reduce = 75%, Cumulative CPU 103.18 sec
2013-09-11 19:47:59,138 Stage-1 map = 100%, reduce = 75%, Cumulative CPU 103.18 sec
2013-09-11 19:48:00,143 Stage-1 map = 100%, reduce = 78%, Cumulative CPU 103.18 sec
2013-09-11 19:48:01,149 Stage-1 map = 100%, reduce = 78%, Cumulative CPU 103.18 sec
2013-09-11 19:48:02,154 Stage-1 map = 100%, reduce = 78%, Cumulative CPU 103.18 sec
2013-09-11 19:48:03,159 Stage-1 map = 100%, reduce = 82%, Cumulative CPU 103.18 sec
2013-09-11 19:48:04,164 Stage-1 map = 100%, reduce = 82%, Cumulative CPU 103.18 sec
2013-09-11 19:48:05,169 Stage-1 map = 100%, reduce = 82%, Cumulative CPU 103.18 sec
2013-09-11 19:48:06,177 Stage-1 map = 100%, reduce = 87%, Cumulative CPU 151.44 sec
2013-09-11 19:48:07,182 Stage-1 map = 100%, reduce = 87%, Cumulative CPU 151.44 sec
2013-09-11 19:48:08,188 Stage-1 map = 100%, reduce = 87%, Cumulative CPU 151.44 sec
2013-09-11 19:48:09,193 Stage-1 map = 100%, reduce = 92%, Cumulative CPU 151.44 sec
2013-09-11 19:48:10,199 Stage-1 map = 100%, reduce = 92%, Cumulative CPU 151.44 sec
2013-09-11 19:48:11,210 Stage-1 map = 100%, reduce = 92%, Cumulative CPU 151.44 sec
2013-09-11 19:48:12,216 Stage-1 map = 100%, reduce = 96%, Cumulative CPU 151.44 sec
2013-09-11 19:48:13,222 Stage-1 map = 100%, reduce = 96%, Cumulative CPU 151.44 sec
2013-09-11 19:48:14,228 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 169.64 sec
2013-09-11 19:48:15,234 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 169.64 sec
2013-09-11 19:48:16,239 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 169.64 sec
MapReduce Total cumulative CPU time: 2 minutes 49 seconds 640 msec
Ended Job = job_201309101627_0364
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0365
Hadoop job information for Stage-2: number of mappers: 2; number of reducers: 1
2013-09-11 19:48:18,890 Stage-2 map = 0%, reduce = 0%
2013-09-11 19:48:32,931 Stage-2 map = 36%, reduce = 0%
2013-09-11 19:48:41,958 Stage-2 map = 72%, reduce = 0%
2013-09-11 19:48:43,966 Stage-2 map = 74%, reduce = 0%, Cumulative CPU 29.01 sec
2013-09-11 19:48:44,971 Stage-2 map = 74%, reduce = 0%, Cumulative CPU 29.01 sec
2013-09-11 19:48:45,976 Stage-2 map = 74%, reduce = 0%, Cumulative CPU 29.01 sec
2013-09-11 19:48:46,981 Stage-2 map = 74%, reduce = 0%, Cumulative CPU 29.01 sec
2013-09-11 19:48:47,986 Stage-2 map = 88%, reduce = 0%, Cumulative CPU 29.01 sec
2013-09-11 19:48:48,991 Stage-2 map = 88%, reduce = 0%, Cumulative CPU 29.01 sec
2013-09-11 19:48:49,996 Stage-2 map = 88%, reduce = 0%, Cumulative CPU 29.01 sec
2013-09-11 19:48:51,001 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 29.01 sec
2013-09-11 19:48:52,006 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 29.01 sec
2013-09-11 19:48:53,011 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 29.01 sec
2013-09-11 19:48:54,028 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 29.01 sec
2013-09-11 19:48:55,033 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 29.01 sec
2013-09-11 19:48:56,038 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 29.01 sec
2013-09-11 19:48:57,043 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 29.01 sec
2013-09-11 19:48:58,048 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 29.01 sec
2013-09-11 19:48:59,053 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 29.01 sec
2013-09-11 19:49:00,058 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 29.01 sec
2013-09-11 19:49:01,063 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 77.48 sec
2013-09-11 19:49:02,068 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 77.48 sec
2013-09-11 19:49:03,072 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 77.48 sec
2013-09-11 19:49:04,077 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 77.48 sec
2013-09-11 19:49:05,081 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 77.48 sec
2013-09-11 19:49:06,086 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 79.28 sec
2013-09-11 19:49:07,091 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 79.28 sec
2013-09-11 19:49:08,095 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 79.28 sec
2013-09-11 19:49:09,100 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 79.28 sec
2013-09-11 19:49:10,104 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 79.28 sec
2013-09-11 19:49:11,109 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 79.28 sec
2013-09-11 19:49:12,113 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 79.28 sec
2013-09-11 19:49:13,117 Stage-2 map = 100%, reduce = 67%, Cumulative CPU 79.28 sec
2013-09-11 19:49:14,122 Stage-2 map = 100%, reduce = 67%, Cumulative CPU 79.28 sec
2013-09-11 19:49:15,556 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 87.92 sec
2013-09-11 19:49:16,561 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 87.92 sec
MapReduce Total cumulative CPU time: 1 minutes 27 seconds 920 msec
Ended Job = job_201309101627_0365
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 169.64 sec HDFS Read: 85707829 HDFS Write: 413932232 SUCCESS
Job 1: Map: 2 Reduce: 1 Cumulative CPU: 87.92 sec HDFS Read: 413942944 HDFS Write: 420 SUCCESS
Total MapReduce CPU Time Spent: 4 minutes 17 seconds 560 msec
OK
Time taken: 138.703 seconds, Fetched: 10 row(s)
hive> quit;
-- the same query, but also without filtering.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_9642@mturlrep13_201309111949_27347921.txt
hive> ;
hive> quit;
times: 1
query: SELECT URL, count(*) AS c FROM hits_10m GROUP BY URL ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_10116@mturlrep13_201309111949_204794215.txt
hive> SELECT URL, count(*) AS c FROM hits_10m GROUP BY URL ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0366
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 19:49:39,677 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:49:46,707 Stage-1 map = 14%, reduce = 0%
2013-09-11 19:49:49,719 Stage-1 map = 22%, reduce = 0%
2013-09-11 19:49:52,731 Stage-1 map = 29%, reduce = 0%
2013-09-11 19:49:55,744 Stage-1 map = 39%, reduce = 0%
2013-09-11 19:49:58,755 Stage-1 map = 43%, reduce = 0%
2013-09-11 19:50:00,769 Stage-1 map = 46%, reduce = 0%, Cumulative CPU 27.34 sec
2013-09-11 19:50:01,775 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 55.01 sec
2013-09-11 19:50:02,783 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 55.01 sec
2013-09-11 19:50:03,789 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 55.01 sec
2013-09-11 19:50:04,794 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 55.01 sec
2013-09-11 19:50:05,800 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 55.01 sec
2013-09-11 19:50:06,806 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 55.01 sec
2013-09-11 19:50:07,812 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 55.01 sec
2013-09-11 19:50:08,818 Stage-1 map = 57%, reduce = 8%, Cumulative CPU 55.01 sec
2013-09-11 19:50:09,824 Stage-1 map = 57%, reduce = 8%, Cumulative CPU 55.01 sec
2013-09-11 19:50:10,830 Stage-1 map = 57%, reduce = 8%, Cumulative CPU 55.01 sec
2013-09-11 19:50:11,836 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 55.01 sec
2013-09-11 19:50:12,841 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 55.01 sec
2013-09-11 19:50:13,847 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 55.01 sec
2013-09-11 19:50:14,853 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 55.01 sec
2013-09-11 19:50:15,859 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 55.01 sec
2013-09-11 19:50:16,864 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 55.01 sec
2013-09-11 19:50:17,871 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 55.01 sec
2013-09-11 19:50:18,877 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 55.01 sec
2013-09-11 19:50:19,882 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 55.01 sec
2013-09-11 19:50:20,888 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 55.01 sec
2013-09-11 19:50:21,893 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 79.56 sec
2013-09-11 19:50:22,898 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 106.88 sec
2013-09-11 19:50:23,903 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 106.88 sec
2013-09-11 19:50:24,909 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 106.88 sec
2013-09-11 19:50:25,914 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 106.88 sec
2013-09-11 19:50:26,919 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 106.88 sec
2013-09-11 19:50:27,924 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 106.88 sec
2013-09-11 19:50:28,929 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 106.88 sec
2013-09-11 19:50:29,935 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 106.88 sec
2013-09-11 19:50:32,380 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 106.88 sec
2013-09-11 19:50:34,489 Stage-1 map = 100%, reduce = 72%, Cumulative CPU 106.88 sec
2013-09-11 19:50:35,495 Stage-1 map = 100%, reduce = 77%, Cumulative CPU 106.88 sec
2013-09-11 19:50:36,500 Stage-1 map = 100%, reduce = 77%, Cumulative CPU 106.88 sec
2013-09-11 19:50:37,506 Stage-1 map = 100%, reduce = 82%, Cumulative CPU 106.88 sec
2013-09-11 19:50:38,511 Stage-1 map = 100%, reduce = 87%, Cumulative CPU 106.88 sec
2013-09-11 19:50:39,517 Stage-1 map = 100%, reduce = 87%, Cumulative CPU 106.88 sec
2013-09-11 19:50:40,522 Stage-1 map = 100%, reduce = 92%, Cumulative CPU 106.88 sec
2013-09-11 19:50:41,531 Stage-1 map = 100%, reduce = 96%, Cumulative CPU 132.09 sec
2013-09-11 19:50:42,536 Stage-1 map = 100%, reduce = 96%, Cumulative CPU 132.09 sec
2013-09-11 19:50:43,542 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 158.78 sec
2013-09-11 19:50:44,569 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 158.78 sec
MapReduce Total cumulative CPU time: 2 minutes 38 seconds 780 msec
Ended Job = job_201309101627_0366
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0367
Hadoop job information for Stage-2: number of mappers: 2; number of reducers: 1
2013-09-11 19:50:48,203 Stage-2 map = 0%, reduce = 0%
2013-09-11 19:50:55,225 Stage-2 map = 25%, reduce = 0%
2013-09-11 19:50:57,233 Stage-2 map = 50%, reduce = 0%, Cumulative CPU 13.37 sec
2013-09-11 19:50:58,238 Stage-2 map = 75%, reduce = 0%, Cumulative CPU 13.37 sec
2013-09-11 19:50:59,243 Stage-2 map = 75%, reduce = 0%, Cumulative CPU 13.37 sec
2013-09-11 19:51:00,248 Stage-2 map = 75%, reduce = 0%, Cumulative CPU 13.37 sec
2013-09-11 19:51:02,193 Stage-2 map = 87%, reduce = 0%, Cumulative CPU 13.37 sec
2013-09-11 19:51:03,198 Stage-2 map = 87%, reduce = 0%, Cumulative CPU 13.37 sec
2013-09-11 19:51:04,203 Stage-2 map = 87%, reduce = 17%, Cumulative CPU 13.37 sec
2013-09-11 19:51:05,207 Stage-2 map = 87%, reduce = 17%, Cumulative CPU 13.37 sec
2013-09-11 19:51:06,213 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 37.58 sec
2013-09-11 19:51:07,218 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 37.58 sec
2013-09-11 19:51:08,223 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 37.58 sec
2013-09-11 19:51:09,227 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 37.58 sec
2013-09-11 19:51:10,232 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 37.58 sec
2013-09-11 19:51:11,237 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 37.58 sec
2013-09-11 19:51:12,242 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 37.58 sec
2013-09-11 19:51:13,591 Stage-2 map = 100%, reduce = 67%, Cumulative CPU 37.58 sec
2013-09-11 19:51:14,596 Stage-2 map = 100%, reduce = 67%, Cumulative CPU 37.58 sec
2013-09-11 19:51:15,601 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 43.87 sec
2013-09-11 19:51:16,606 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 43.87 sec
MapReduce Total cumulative CPU time: 43 seconds 870 msec
Ended Job = job_201309101627_0367
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 158.78 sec HDFS Read: 109451651 HDFS Write: 399298510 SUCCESS
Job 1: Map: 2 Reduce: 1 Cumulative CPU: 43.87 sec HDFS Read: 399308173 HDFS Write: 445 SUCCESS
Total MapReduce CPU Time Spent: 3 minutes 22 seconds 650 msec
OK
Time taken: 107.308 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_13554@mturlrep13_201309111951_64701143.txt
hive> ;
hive> quit;
times: 2
query: SELECT URL, count(*) AS c FROM hits_10m GROUP BY URL ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_13986@mturlrep13_201309111951_819836063.txt
hive> SELECT URL, count(*) AS c FROM hits_10m GROUP BY URL ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0368
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 19:51:31,116 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:51:38,143 Stage-1 map = 14%, reduce = 0%
2013-09-11 19:51:41,156 Stage-1 map = 22%, reduce = 0%
2013-09-11 19:51:44,168 Stage-1 map = 32%, reduce = 0%
2013-09-11 19:51:47,181 Stage-1 map = 43%, reduce = 0%
2013-09-11 19:51:51,201 Stage-1 map = 46%, reduce = 0%, Cumulative CPU 26.66 sec
2013-09-11 19:51:52,207 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 54.03 sec
2013-09-11 19:51:53,215 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 54.03 sec
2013-09-11 19:51:54,221 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 54.03 sec
2013-09-11 19:51:55,227 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 54.03 sec
2013-09-11 19:51:56,233 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 54.03 sec
2013-09-11 19:51:57,239 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 54.03 sec
2013-09-11 19:51:58,244 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 54.03 sec
2013-09-11 19:51:59,249 Stage-1 map = 57%, reduce = 8%, Cumulative CPU 54.03 sec
2013-09-11 19:52:00,255 Stage-1 map = 57%, reduce = 8%, Cumulative CPU 54.03 sec
2013-09-11 19:52:01,259 Stage-1 map = 57%, reduce = 8%, Cumulative CPU 54.03 sec
2013-09-11 19:52:02,265 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 54.03 sec
2013-09-11 19:52:03,271 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 54.03 sec
2013-09-11 19:52:04,276 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 54.03 sec
2013-09-11 19:52:05,281 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 54.03 sec
2013-09-11 19:52:06,287 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 96.31 sec
2013-09-11 19:52:07,292 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 96.31 sec
2013-09-11 19:52:08,298 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 96.31 sec
2013-09-11 19:52:09,303 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 96.31 sec
2013-09-11 19:52:10,309 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 96.31 sec
2013-09-11 19:52:11,314 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 96.31 sec
2013-09-11 19:52:12,318 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 104.46 sec
2013-09-11 19:52:13,323 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 114.6 sec
2013-09-11 19:52:14,328 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 114.6 sec
2013-09-11 19:52:15,333 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 114.6 sec
2013-09-11 19:52:16,338 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 114.6 sec
2013-09-11 19:52:17,343 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 114.6 sec
2013-09-11 19:52:18,349 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 114.6 sec
2013-09-11 19:52:19,354 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 114.6 sec
2013-09-11 19:52:20,359 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 114.6 sec
2013-09-11 19:52:21,364 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 114.6 sec
2013-09-11 19:52:23,754 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 114.6 sec
2013-09-11 19:52:25,932 Stage-1 map = 100%, reduce = 75%, Cumulative CPU 114.6 sec
2013-09-11 19:52:26,938 Stage-1 map = 100%, reduce = 75%, Cumulative CPU 114.6 sec
2013-09-11 19:52:27,944 Stage-1 map = 100%, reduce = 80%, Cumulative CPU 114.6 sec
2013-09-11 19:52:28,949 Stage-1 map = 100%, reduce = 85%, Cumulative CPU 114.6 sec
2013-09-11 19:52:29,955 Stage-1 map = 100%, reduce = 85%, Cumulative CPU 114.6 sec
2013-09-11 19:52:30,961 Stage-1 map = 100%, reduce = 90%, Cumulative CPU 114.6 sec
2013-09-11 19:52:31,966 Stage-1 map = 100%, reduce = 94%, Cumulative CPU 114.6 sec
2013-09-11 19:52:32,974 Stage-1 map = 100%, reduce = 97%, Cumulative CPU 137.85 sec
2013-09-11 19:52:33,979 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 159.23 sec
2013-09-11 19:52:34,984 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 159.23 sec
MapReduce Total cumulative CPU time: 2 minutes 39 seconds 230 msec
Ended Job = job_201309101627_0368
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0369
Hadoop job information for Stage-2: number of mappers: 2; number of reducers: 1
2013-09-11 19:52:38,543 Stage-2 map = 0%, reduce = 0%
2013-09-11 19:52:45,564 Stage-2 map = 25%, reduce = 0%
2013-09-11 19:52:48,575 Stage-2 map = 75%, reduce = 0%, Cumulative CPU 13.71 sec
2013-09-11 19:52:49,579 Stage-2 map = 75%, reduce = 0%, Cumulative CPU 13.71 sec
2013-09-11 19:52:50,583 Stage-2 map = 75%, reduce = 0%, Cumulative CPU 13.71 sec
2013-09-11 19:52:51,588 Stage-2 map = 87%, reduce = 0%, Cumulative CPU 13.71 sec
2013-09-11 19:52:52,593 Stage-2 map = 87%, reduce = 0%, Cumulative CPU 13.71 sec
2013-09-11 19:52:53,597 Stage-2 map = 87%, reduce = 0%, Cumulative CPU 13.71 sec
2013-09-11 19:52:54,602 Stage-2 map = 87%, reduce = 0%, Cumulative CPU 13.71 sec
2013-09-11 19:52:55,607 Stage-2 map = 87%, reduce = 17%, Cumulative CPU 13.71 sec
2013-09-11 19:52:56,611 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 37.61 sec
2013-09-11 19:52:57,615 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 37.61 sec
2013-09-11 19:52:58,619 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 37.61 sec
2013-09-11 19:52:59,623 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 37.61 sec
2013-09-11 19:53:00,627 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 37.61 sec
2013-09-11 19:53:01,631 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 37.61 sec
2013-09-11 19:53:02,635 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 37.61 sec
2013-09-11 19:53:03,639 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 37.61 sec
2013-09-11 19:53:04,644 Stage-2 map = 100%, reduce = 67%, Cumulative CPU 37.61 sec
2013-09-11 19:53:05,647 Stage-2 map = 100%, reduce = 67%, Cumulative CPU 37.61 sec
2013-09-11 19:53:06,652 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 44.99 sec
2013-09-11 19:53:07,656 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 44.99 sec
MapReduce Total cumulative CPU time: 44 seconds 990 msec
Ended Job = job_201309101627_0369
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 159.23 sec HDFS Read: 109451651 HDFS Write: 399298510 SUCCESS
Job 1: Map: 2 Reduce: 1 Cumulative CPU: 44.99 sec HDFS Read: 399308173 HDFS Write: 445 SUCCESS
Total MapReduce CPU Time Spent: 3 minutes 24 seconds 220 msec
OK
Time taken: 104.847 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_16749@mturlrep13_201309111953_313353407.txt
hive> ;
hive> quit;
times: 3
query: SELECT URL, count(*) AS c FROM hits_10m GROUP BY URL ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_17196@mturlrep13_201309111953_1275155665.txt
hive> SELECT URL, count(*) AS c FROM hits_10m GROUP BY URL ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0370
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 19:53:21,695 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:53:28,723 Stage-1 map = 14%, reduce = 0%
2013-09-11 19:53:31,736 Stage-1 map = 22%, reduce = 0%
2013-09-11 19:53:34,749 Stage-1 map = 29%, reduce = 0%
2013-09-11 19:53:37,762 Stage-1 map = 43%, reduce = 0%
2013-09-11 19:53:42,786 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 56.41 sec
2013-09-11 19:53:43,792 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 56.41 sec
2013-09-11 19:53:44,801 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 56.41 sec
2013-09-11 19:53:45,807 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 56.41 sec
2013-09-11 19:53:46,812 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 56.41 sec
2013-09-11 19:53:47,817 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 56.41 sec
2013-09-11 19:53:48,823 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 56.41 sec
2013-09-11 19:53:49,829 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 56.41 sec
2013-09-11 19:53:50,834 Stage-1 map = 61%, reduce = 8%, Cumulative CPU 56.41 sec
2013-09-11 19:53:51,840 Stage-1 map = 61%, reduce = 8%, Cumulative CPU 56.41 sec
2013-09-11 19:53:52,846 Stage-1 map = 61%, reduce = 8%, Cumulative CPU 56.41 sec
2013-09-11 19:53:53,852 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 56.41 sec
2013-09-11 19:53:54,857 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 56.41 sec
2013-09-11 19:53:55,863 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 56.41 sec
2013-09-11 19:53:56,869 Stage-1 map = 84%, reduce = 17%, Cumulative CPU 56.41 sec
2013-09-11 19:53:57,875 Stage-1 map = 84%, reduce = 17%, Cumulative CPU 56.41 sec
2013-09-11 19:53:58,881 Stage-1 map = 84%, reduce = 17%, Cumulative CPU 56.41 sec
2013-09-11 19:53:59,887 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 56.41 sec
2013-09-11 19:54:00,893 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 56.41 sec
2013-09-11 19:54:01,898 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 80.95 sec
2013-09-11 19:54:02,903 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 80.95 sec
2013-09-11 19:54:03,908 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 107.94 sec
2013-09-11 19:54:04,913 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 107.94 sec
2013-09-11 19:54:05,918 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 107.94 sec
2013-09-11 19:54:06,924 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 116.14 sec
2013-09-11 19:54:07,929 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 116.14 sec
2013-09-11 19:54:08,935 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 116.14 sec
2013-09-11 19:54:09,940 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 116.14 sec
2013-09-11 19:54:10,945 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 116.14 sec
2013-09-11 19:54:11,951 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 116.14 sec
2013-09-11 19:54:12,956 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 116.14 sec
2013-09-11 19:54:15,279 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 116.14 sec
2013-09-11 19:54:16,285 Stage-1 map = 100%, reduce = 73%, Cumulative CPU 116.14 sec
2013-09-11 19:54:17,533 Stage-1 map = 100%, reduce = 73%, Cumulative CPU 116.14 sec
2013-09-11 19:54:18,539 Stage-1 map = 100%, reduce = 80%, Cumulative CPU 116.14 sec
2013-09-11 19:54:19,545 Stage-1 map = 100%, reduce = 80%, Cumulative CPU 116.14 sec
2013-09-11 19:54:20,551 Stage-1 map = 100%, reduce = 80%, Cumulative CPU 116.14 sec
2013-09-11 19:54:21,556 Stage-1 map = 100%, reduce = 90%, Cumulative CPU 116.14 sec
2013-09-11 19:54:22,562 Stage-1 map = 100%, reduce = 90%, Cumulative CPU 116.14 sec
2013-09-11 19:54:23,568 Stage-1 map = 100%, reduce = 90%, Cumulative CPU 116.14 sec
2013-09-11 19:54:24,573 Stage-1 map = 100%, reduce = 90%, Cumulative CPU 116.14 sec
2013-09-11 19:54:25,581 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 161.84 sec
2013-09-11 19:54:26,586 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 161.84 sec
MapReduce Total cumulative CPU time: 2 minutes 41 seconds 840 msec
Ended Job = job_201309101627_0370
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0371
Hadoop job information for Stage-2: number of mappers: 2; number of reducers: 1
2013-09-11 19:54:29,028 Stage-2 map = 0%, reduce = 0%
2013-09-11 19:54:37,052 Stage-2 map = 25%, reduce = 0%
2013-09-11 19:54:39,060 Stage-2 map = 50%, reduce = 0%, Cumulative CPU 13.75 sec
2013-09-11 19:54:40,065 Stage-2 map = 75%, reduce = 0%, Cumulative CPU 13.75 sec
2013-09-11 19:54:41,070 Stage-2 map = 75%, reduce = 0%, Cumulative CPU 13.75 sec
2013-09-11 19:54:42,074 Stage-2 map = 75%, reduce = 0%, Cumulative CPU 13.75 sec
2013-09-11 19:54:43,079 Stage-2 map = 87%, reduce = 0%, Cumulative CPU 13.75 sec
2013-09-11 19:54:44,667 Stage-2 map = 87%, reduce = 0%, Cumulative CPU 13.75 sec
2013-09-11 19:54:45,672 Stage-2 map = 87%, reduce = 0%, Cumulative CPU 13.75 sec
2013-09-11 19:54:46,677 Stage-2 map = 87%, reduce = 17%, Cumulative CPU 13.75 sec
2013-09-11 19:54:47,682 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 37.64 sec
2013-09-11 19:54:48,687 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 37.64 sec
2013-09-11 19:54:49,692 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 37.64 sec
2013-09-11 19:54:50,697 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 37.64 sec
2013-09-11 19:54:51,702 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 37.64 sec
2013-09-11 19:54:52,707 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 37.64 sec
2013-09-11 19:54:53,712 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 37.64 sec
2013-09-11 19:54:54,717 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 37.64 sec
2013-09-11 19:54:55,781 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 37.64 sec
2013-09-11 19:54:56,785 Stage-2 map = 100%, reduce = 67%, Cumulative CPU 37.64 sec
2013-09-11 19:54:57,791 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 45.18 sec
2013-09-11 19:54:58,796 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 45.18 sec
2013-09-11 19:54:59,801 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 45.18 sec
MapReduce Total cumulative CPU time: 45 seconds 180 msec
Ended Job = job_201309101627_0371
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 161.84 sec HDFS Read: 109451651 HDFS Write: 399298510 SUCCESS
Job 1: Map: 2 Reduce: 1 Cumulative CPU: 45.18 sec HDFS Read: 399308173 HDFS Write: 445 SUCCESS
Total MapReduce CPU Time Spent: 3 minutes 27 seconds 20 msec
OK
Time taken: 106.335 seconds, Fetched: 10 row(s)
hive> quit;
-- aggregation by URL.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_20260@mturlrep13_201309111955_930446055.txt
hive> ;
hive> quit;
times: 1
query: SELECT 1, URL, count(*) AS c FROM hits_10m GROUP BY 1, URL ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_20919@mturlrep13_201309111955_1684376997.txt
hive> SELECT 1, URL, count(*) AS c FROM hits_10m GROUP BY 1, URL ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0372
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 19:55:24,288 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:55:31,317 Stage-1 map = 7%, reduce = 0%
2013-09-11 19:55:34,328 Stage-1 map = 22%, reduce = 0%
2013-09-11 19:55:37,340 Stage-1 map = 29%, reduce = 0%
2013-09-11 19:55:40,353 Stage-1 map = 36%, reduce = 0%
2013-09-11 19:55:43,364 Stage-1 map = 43%, reduce = 0%
2013-09-11 19:55:48,389 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 59.66 sec
2013-09-11 19:55:49,396 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 59.66 sec
2013-09-11 19:55:50,403 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 59.66 sec
2013-09-11 19:55:51,409 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 59.66 sec
2013-09-11 19:55:52,414 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 59.66 sec
2013-09-11 19:55:53,420 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 59.66 sec
2013-09-11 19:55:54,425 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 59.66 sec
2013-09-11 19:55:55,431 Stage-1 map = 57%, reduce = 4%, Cumulative CPU 59.66 sec
2013-09-11 19:55:56,436 Stage-1 map = 57%, reduce = 4%, Cumulative CPU 59.66 sec
2013-09-11 19:55:57,443 Stage-1 map = 57%, reduce = 4%, Cumulative CPU 59.66 sec
2013-09-11 19:55:58,448 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 59.66 sec
2013-09-11 19:55:59,454 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 59.66 sec
2013-09-11 19:56:00,459 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 59.66 sec
2013-09-11 19:56:01,464 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 59.66 sec
2013-09-11 19:56:02,470 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 59.66 sec
2013-09-11 19:56:03,475 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 59.66 sec
2013-09-11 19:56:04,479 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 59.66 sec
2013-09-11 19:56:05,484 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 59.66 sec
2013-09-11 19:56:06,489 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 59.66 sec
2013-09-11 19:56:07,494 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 111.58 sec
2013-09-11 19:56:08,499 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 116.3 sec
2013-09-11 19:56:09,503 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 116.3 sec
2013-09-11 19:56:10,509 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 123.6 sec
2013-09-11 19:56:11,514 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 123.6 sec
2013-09-11 19:56:12,518 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 123.6 sec
2013-09-11 19:56:13,523 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 123.6 sec
2013-09-11 19:56:14,529 Stage-1 map = 100%, reduce = 21%, Cumulative CPU 123.6 sec
2013-09-11 19:56:15,534 Stage-1 map = 100%, reduce = 21%, Cumulative CPU 123.6 sec
2013-09-11 19:56:16,538 Stage-1 map = 100%, reduce = 29%, Cumulative CPU 123.6 sec
2013-09-11 19:56:18,600 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 123.6 sec
2013-09-11 19:56:19,605 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 123.6 sec
2013-09-11 19:56:20,610 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 123.6 sec
2013-09-11 19:56:21,616 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 123.6 sec
2013-09-11 19:56:22,621 Stage-1 map = 100%, reduce = 50%, Cumulative CPU 123.6 sec
2013-09-11 19:56:23,627 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 123.6 sec
2013-09-11 19:56:24,633 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 123.6 sec
2013-09-11 19:56:25,638 Stage-1 map = 100%, reduce = 70%, Cumulative CPU 123.6 sec
2013-09-11 19:56:26,644 Stage-1 map = 100%, reduce = 74%, Cumulative CPU 123.6 sec
2013-09-11 19:56:27,649 Stage-1 map = 100%, reduce = 74%, Cumulative CPU 123.6 sec
2013-09-11 19:56:28,655 Stage-1 map = 100%, reduce = 79%, Cumulative CPU 123.6 sec
2013-09-11 19:56:29,661 Stage-1 map = 100%, reduce = 84%, Cumulative CPU 123.6 sec
2013-09-11 19:56:30,666 Stage-1 map = 100%, reduce = 84%, Cumulative CPU 123.6 sec
2013-09-11 19:56:31,672 Stage-1 map = 100%, reduce = 88%, Cumulative CPU 123.6 sec
2013-09-11 19:56:32,677 Stage-1 map = 100%, reduce = 93%, Cumulative CPU 123.6 sec
2013-09-11 19:56:33,683 Stage-1 map = 100%, reduce = 93%, Cumulative CPU 123.6 sec
2013-09-11 19:56:34,692 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 169.4 sec
2013-09-11 19:56:35,701 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 169.4 sec
2013-09-11 19:56:36,706 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 169.4 sec
MapReduce Total cumulative CPU time: 2 minutes 49 seconds 400 msec
Ended Job = job_201309101627_0372
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0373
Hadoop job information for Stage-2: number of mappers: 2; number of reducers: 1
2013-09-11 19:56:39,223 Stage-2 map = 0%, reduce = 0%
2013-09-11 19:56:47,249 Stage-2 map = 25%, reduce = 0%
2013-09-11 19:56:50,259 Stage-2 map = 62%, reduce = 0%, Cumulative CPU 14.04 sec
2013-09-11 19:56:51,263 Stage-2 map = 62%, reduce = 0%, Cumulative CPU 14.04 sec
2013-09-11 19:56:52,268 Stage-2 map = 62%, reduce = 0%, Cumulative CPU 14.04 sec
2013-09-11 19:56:53,273 Stage-2 map = 88%, reduce = 0%, Cumulative CPU 14.04 sec
2013-09-11 19:56:54,278 Stage-2 map = 88%, reduce = 0%, Cumulative CPU 14.04 sec
2013-09-11 19:56:55,283 Stage-2 map = 88%, reduce = 0%, Cumulative CPU 14.04 sec
2013-09-11 19:56:56,288 Stage-2 map = 88%, reduce = 0%, Cumulative CPU 14.04 sec
2013-09-11 19:56:57,293 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 14.04 sec
2013-09-11 19:56:58,297 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 14.04 sec
2013-09-11 19:56:59,302 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 38.58 sec
2013-09-11 19:57:00,306 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 38.58 sec
2013-09-11 19:57:01,310 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 38.58 sec
2013-09-11 19:57:02,315 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 38.58 sec
2013-09-11 19:57:03,320 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 38.58 sec
2013-09-11 19:57:04,324 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 38.58 sec
2013-09-11 19:57:05,328 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 38.58 sec
2013-09-11 19:57:06,333 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 38.58 sec
2013-09-11 19:57:07,343 Stage-2 map = 100%, reduce = 67%, Cumulative CPU 44.0 sec
2013-09-11 19:57:08,348 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 46.0 sec
2013-09-11 19:57:09,352 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 46.0 sec
2013-09-11 19:57:10,357 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 46.0 sec
MapReduce Total cumulative CPU time: 46 seconds 0 msec
Ended Job = job_201309101627_0373
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 169.4 sec HDFS Read: 109451651 HDFS Write: 402873759 SUCCESS
Job 1: Map: 2 Reduce: 1 Cumulative CPU: 46.0 sec HDFS Read: 402889658 HDFS Write: 465 SUCCESS
Total MapReduce CPU Time Spent: 3 minutes 35 seconds 400 msec
OK
Time taken: 116.435 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_24131@mturlrep13_201309111957_2140771388.txt
hive> ;
hive> quit;
times: 2
query: SELECT 1, URL, count(*) AS c FROM hits_10m GROUP BY 1, URL ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_24572@mturlrep13_201309111957_461782804.txt
hive> SELECT 1, URL, count(*) AS c FROM hits_10m GROUP BY 1, URL ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0374
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 19:57:25,327 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:57:32,354 Stage-1 map = 14%, reduce = 0%
2013-09-11 19:57:35,367 Stage-1 map = 22%, reduce = 0%
2013-09-11 19:57:38,379 Stage-1 map = 29%, reduce = 0%
2013-09-11 19:57:41,393 Stage-1 map = 36%, reduce = 0%
2013-09-11 19:57:44,406 Stage-1 map = 43%, reduce = 0%
2013-09-11 19:57:47,423 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 57.83 sec
2013-09-11 19:57:48,429 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 57.83 sec
2013-09-11 19:57:49,437 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 57.83 sec
2013-09-11 19:57:50,443 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 57.83 sec
2013-09-11 19:57:51,449 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 57.83 sec
2013-09-11 19:57:52,455 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 57.83 sec
2013-09-11 19:57:53,460 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 57.83 sec
2013-09-11 19:57:54,466 Stage-1 map = 57%, reduce = 4%, Cumulative CPU 57.83 sec
2013-09-11 19:57:55,472 Stage-1 map = 57%, reduce = 8%, Cumulative CPU 57.83 sec
2013-09-11 19:57:56,477 Stage-1 map = 57%, reduce = 8%, Cumulative CPU 57.83 sec
2013-09-11 19:57:57,488 Stage-1 map = 73%, reduce = 13%, Cumulative CPU 57.83 sec
2013-09-11 19:57:58,494 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 57.83 sec
2013-09-11 19:57:59,500 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 57.83 sec
2013-09-11 19:58:00,505 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 57.83 sec
2013-09-11 19:58:01,511 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 57.83 sec
2013-09-11 19:58:02,516 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 57.83 sec
2013-09-11 19:58:03,522 Stage-1 map = 92%, reduce = 17%, Cumulative CPU 57.83 sec
2013-09-11 19:58:04,528 Stage-1 map = 92%, reduce = 17%, Cumulative CPU 57.83 sec
2013-09-11 19:58:05,534 Stage-1 map = 92%, reduce = 17%, Cumulative CPU 57.83 sec
2013-09-11 19:58:06,539 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 57.83 sec
2013-09-11 19:58:07,545 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 117.68 sec
2013-09-11 19:58:08,550 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 117.68 sec
2013-09-11 19:58:09,555 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 120.18 sec
2013-09-11 19:58:10,561 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 120.18 sec
2013-09-11 19:58:11,569 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 120.18 sec
2013-09-11 19:58:12,577 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 120.18 sec
2013-09-11 19:58:13,582 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 120.18 sec
2013-09-11 19:58:14,587 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 120.18 sec
2013-09-11 19:58:15,593 Stage-1 map = 100%, reduce = 50%, Cumulative CPU 120.18 sec
2013-09-11 19:58:17,436 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 120.18 sec
2013-09-11 19:58:18,441 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 120.18 sec
2013-09-11 19:58:20,040 Stage-1 map = 100%, reduce = 74%, Cumulative CPU 120.18 sec
2013-09-11 19:58:21,044 Stage-1 map = 100%, reduce = 74%, Cumulative CPU 120.18 sec
2013-09-11 19:58:22,050 Stage-1 map = 100%, reduce = 81%, Cumulative CPU 120.18 sec
2013-09-11 19:58:23,055 Stage-1 map = 100%, reduce = 81%, Cumulative CPU 120.18 sec
2013-09-11 19:58:24,060 Stage-1 map = 100%, reduce = 81%, Cumulative CPU 120.18 sec
2013-09-11 19:58:25,065 Stage-1 map = 100%, reduce = 90%, Cumulative CPU 120.18 sec
2013-09-11 19:58:26,071 Stage-1 map = 100%, reduce = 90%, Cumulative CPU 120.18 sec
2013-09-11 19:58:27,076 Stage-1 map = 100%, reduce = 90%, Cumulative CPU 120.18 sec
2013-09-11 19:58:28,082 Stage-1 map = 100%, reduce = 98%, Cumulative CPU 120.18 sec
2013-09-11 19:58:29,089 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 166.94 sec
2013-09-11 19:58:30,095 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 166.94 sec
MapReduce Total cumulative CPU time: 2 minutes 46 seconds 940 msec
Ended Job = job_201309101627_0374
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0375
Hadoop job information for Stage-2: number of mappers: 2; number of reducers: 1
2013-09-11 19:58:32,596 Stage-2 map = 0%, reduce = 0%
2013-09-11 19:58:40,620 Stage-2 map = 25%, reduce = 0%
2013-09-11 19:58:43,629 Stage-2 map = 62%, reduce = 0%, Cumulative CPU 14.59 sec
2013-09-11 19:58:44,634 Stage-2 map = 62%, reduce = 0%, Cumulative CPU 14.59 sec
2013-09-11 19:58:45,638 Stage-2 map = 62%, reduce = 0%, Cumulative CPU 14.59 sec
2013-09-11 19:58:47,175 Stage-2 map = 88%, reduce = 0%, Cumulative CPU 14.59 sec
2013-09-11 19:58:48,180 Stage-2 map = 88%, reduce = 0%, Cumulative CPU 14.59 sec
2013-09-11 19:58:49,185 Stage-2 map = 88%, reduce = 0%, Cumulative CPU 14.59 sec
2013-09-11 19:58:50,189 Stage-2 map = 88%, reduce = 0%, Cumulative CPU 14.59 sec
2013-09-11 19:58:51,194 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 14.59 sec
2013-09-11 19:58:52,199 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 39.13 sec
2013-09-11 19:58:53,203 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 39.13 sec
2013-09-11 19:58:54,208 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 39.13 sec
2013-09-11 19:58:55,213 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 39.13 sec
2013-09-11 19:58:56,217 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 39.13 sec
2013-09-11 19:58:57,222 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 39.13 sec
2013-09-11 19:58:58,227 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 39.13 sec
2013-09-11 19:58:59,232 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 39.13 sec
2013-09-11 19:59:00,237 Stage-2 map = 100%, reduce = 67%, Cumulative CPU 39.13 sec
2013-09-11 19:59:01,242 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 46.59 sec
2013-09-11 19:59:02,248 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 46.59 sec
2013-09-11 19:59:03,253 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 46.59 sec
MapReduce Total cumulative CPU time: 46 seconds 590 msec
Ended Job = job_201309101627_0375
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 166.94 sec HDFS Read: 109451651 HDFS Write: 402873759 SUCCESS
Job 1: Map: 2 Reduce: 1 Cumulative CPU: 46.59 sec HDFS Read: 402889658 HDFS Write: 465 SUCCESS
Total MapReduce CPU Time Spent: 3 minutes 33 seconds 530 msec
OK
Time taken: 106.259 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_27337@mturlrep13_201309111959_812215985.txt
hive> ;
hive> quit;
times: 3
query: SELECT 1, URL, count(*) AS c FROM hits_10m GROUP BY 1, URL ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_27769@mturlrep13_201309111959_1007582094.txt
hive> SELECT 1, URL, count(*) AS c FROM hits_10m GROUP BY 1, URL ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0376
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 19:59:17,040 Stage-1 map = 0%, reduce = 0%
2013-09-11 19:59:25,072 Stage-1 map = 14%, reduce = 0%
2013-09-11 19:59:28,085 Stage-1 map = 22%, reduce = 0%
2013-09-11 19:59:31,097 Stage-1 map = 29%, reduce = 0%
2013-09-11 19:59:34,111 Stage-1 map = 43%, reduce = 0%
2013-09-11 19:59:39,135 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 55.82 sec
2013-09-11 19:59:40,142 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 55.82 sec
2013-09-11 19:59:41,150 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 55.82 sec
2013-09-11 19:59:42,156 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 55.82 sec
2013-09-11 19:59:43,161 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 55.82 sec
2013-09-11 19:59:44,167 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 55.82 sec
2013-09-11 19:59:45,174 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 55.82 sec
2013-09-11 19:59:46,180 Stage-1 map = 54%, reduce = 4%, Cumulative CPU 55.82 sec
2013-09-11 19:59:47,185 Stage-1 map = 57%, reduce = 8%, Cumulative CPU 55.82 sec
2013-09-11 19:59:48,191 Stage-1 map = 57%, reduce = 8%, Cumulative CPU 55.82 sec
2013-09-11 19:59:49,196 Stage-1 map = 65%, reduce = 13%, Cumulative CPU 55.82 sec
2013-09-11 19:59:50,202 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 55.82 sec
2013-09-11 19:59:51,208 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 55.82 sec
2013-09-11 19:59:52,214 Stage-1 map = 76%, reduce = 17%, Cumulative CPU 55.82 sec
2013-09-11 19:59:53,219 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 55.82 sec
2013-09-11 19:59:54,225 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 55.82 sec
2013-09-11 19:59:55,230 Stage-1 map = 84%, reduce = 17%, Cumulative CPU 55.82 sec
2013-09-11 19:59:56,246 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 55.82 sec
2013-09-11 19:59:57,252 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 55.82 sec
2013-09-11 19:59:58,257 Stage-1 map = 92%, reduce = 17%, Cumulative CPU 55.82 sec
2013-09-11 19:59:59,263 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 81.61 sec
2013-09-11 20:00:00,269 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 81.61 sec
2013-09-11 20:00:01,274 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 109.62 sec
2013-09-11 20:00:02,280 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 109.62 sec
2013-09-11 20:00:03,287 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 109.62 sec
2013-09-11 20:00:04,292 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 109.62 sec
2013-09-11 20:00:05,298 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 109.62 sec
2013-09-11 20:00:06,303 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 109.62 sec
2013-09-11 20:00:08,494 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 129.4 sec
2013-09-11 20:00:09,500 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 129.4 sec
2013-09-11 20:00:10,505 Stage-1 map = 100%, reduce = 69%, Cumulative CPU 129.4 sec
2013-09-11 20:00:11,511 Stage-1 map = 100%, reduce = 70%, Cumulative CPU 129.4 sec
2013-09-11 20:00:12,517 Stage-1 map = 100%, reduce = 70%, Cumulative CPU 129.4 sec
2013-09-11 20:00:13,522 Stage-1 map = 100%, reduce = 70%, Cumulative CPU 129.4 sec
2013-09-11 20:00:14,528 Stage-1 map = 100%, reduce = 78%, Cumulative CPU 129.4 sec
2013-09-11 20:00:15,533 Stage-1 map = 100%, reduce = 78%, Cumulative CPU 129.4 sec
2013-09-11 20:00:16,539 Stage-1 map = 100%, reduce = 78%, Cumulative CPU 129.4 sec
2013-09-11 20:00:17,545 Stage-1 map = 100%, reduce = 87%, Cumulative CPU 129.4 sec
2013-09-11 20:00:18,551 Stage-1 map = 100%, reduce = 87%, Cumulative CPU 129.4 sec
2013-09-11 20:00:19,559 Stage-1 map = 100%, reduce = 87%, Cumulative CPU 129.4 sec
2013-09-11 20:00:20,564 Stage-1 map = 100%, reduce = 95%, Cumulative CPU 129.4 sec
2013-09-11 20:00:21,572 Stage-1 map = 100%, reduce = 97%, Cumulative CPU 146.62 sec
2013-09-11 20:00:22,578 Stage-1 map = 100%, reduce = 97%, Cumulative CPU 146.62 sec
2013-09-11 20:00:23,583 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 164.63 sec
2013-09-11 20:00:24,588 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 164.63 sec
MapReduce Total cumulative CPU time: 2 minutes 44 seconds 630 msec
Ended Job = job_201309101627_0376
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0377
Hadoop job information for Stage-2: number of mappers: 2; number of reducers: 1
2013-09-11 20:00:27,378 Stage-2 map = 0%, reduce = 0%
2013-09-11 20:00:35,403 Stage-2 map = 25%, reduce = 0%
2013-09-11 20:00:38,414 Stage-2 map = 62%, reduce = 0%, Cumulative CPU 13.92 sec
2013-09-11 20:00:39,419 Stage-2 map = 62%, reduce = 0%, Cumulative CPU 13.92 sec
2013-09-11 20:00:40,424 Stage-2 map = 62%, reduce = 0%, Cumulative CPU 13.92 sec
2013-09-11 20:00:41,430 Stage-2 map = 88%, reduce = 0%, Cumulative CPU 13.92 sec
2013-09-11 20:00:42,587 Stage-2 map = 88%, reduce = 0%, Cumulative CPU 13.92 sec
2013-09-11 20:00:43,592 Stage-2 map = 88%, reduce = 0%, Cumulative CPU 13.92 sec
2013-09-11 20:00:44,598 Stage-2 map = 88%, reduce = 0%, Cumulative CPU 13.92 sec
2013-09-11 20:00:45,602 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 13.92 sec
2013-09-11 20:00:46,607 Stage-2 map = 88%, reduce = 17%, Cumulative CPU 13.92 sec
2013-09-11 20:00:47,612 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 38.24 sec
2013-09-11 20:00:48,616 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 38.24 sec
2013-09-11 20:00:49,621 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 38.24 sec
2013-09-11 20:00:50,626 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 38.24 sec
2013-09-11 20:00:51,631 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 38.24 sec
2013-09-11 20:00:52,636 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 38.24 sec
2013-09-11 20:00:53,641 Stage-2 map = 100%, reduce = 17%, Cumulative CPU 38.24 sec
2013-09-11 20:00:54,646 Stage-2 map = 100%, reduce = 67%, Cumulative CPU 38.24 sec
2013-09-11 20:00:55,651 Stage-2 map = 100%, reduce = 67%, Cumulative CPU 38.24 sec
2013-09-11 20:00:56,656 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 45.8 sec
2013-09-11 20:00:57,662 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 45.8 sec
MapReduce Total cumulative CPU time: 45 seconds 800 msec
Ended Job = job_201309101627_0377
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 164.63 sec HDFS Read: 109451651 HDFS Write: 402873759 SUCCESS
Job 1: Map: 2 Reduce: 1 Cumulative CPU: 45.8 sec HDFS Read: 402889658 HDFS Write: 465 SUCCESS
Total MapReduce CPU Time Spent: 3 minutes 30 seconds 430 msec
OK
Time taken: 108.254 seconds, Fetched: 10 row(s)
hive> quit;
-- aggregation by URL and a number.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_31307@mturlrep13_201309112001_502707874.txt
hive> ;
hive> quit;
times: 1
query: SELECT ClientIP, ClientIP - 1, ClientIP - 2, ClientIP - 3, count(*) AS c FROM hits_10m GROUP BY ClientIP, ClientIP - 1, ClientIP - 2, ClientIP - 3 ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_31761@mturlrep13_201309112001_1366550388.txt
hive> SELECT ClientIP, ClientIP - 1, ClientIP - 2, ClientIP - 3, count(*) AS c FROM hits_10m GROUP BY ClientIP, ClientIP - 1, ClientIP - 2, ClientIP - 3 ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0378
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 20:01:22,869 Stage-1 map = 0%, reduce = 0%
2013-09-11 20:01:29,898 Stage-1 map = 29%, reduce = 0%
2013-09-11 20:01:32,911 Stage-1 map = 43%, reduce = 0%
2013-09-11 20:01:33,921 Stage-1 map = 46%, reduce = 0%, Cumulative CPU 15.13 sec
2013-09-11 20:01:34,929 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 29.89 sec
2013-09-11 20:01:35,937 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 29.89 sec
2013-09-11 20:01:36,944 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 29.89 sec
2013-09-11 20:01:37,951 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 29.89 sec
2013-09-11 20:01:38,957 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 29.89 sec
2013-09-11 20:01:39,962 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 29.89 sec
2013-09-11 20:01:40,968 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 29.89 sec
2013-09-11 20:01:41,974 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 29.89 sec
2013-09-11 20:01:42,980 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 29.89 sec
2013-09-11 20:01:43,986 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 29.89 sec
2013-09-11 20:01:44,992 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 29.89 sec
2013-09-11 20:01:45,998 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 60.94 sec
2013-09-11 20:01:47,004 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 60.94 sec
2013-09-11 20:01:48,009 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 60.94 sec
2013-09-11 20:01:49,015 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 60.94 sec
2013-09-11 20:01:50,020 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 60.94 sec
2013-09-11 20:01:51,026 Stage-1 map = 100%, reduce = 21%, Cumulative CPU 60.94 sec
2013-09-11 20:01:52,032 Stage-1 map = 100%, reduce = 21%, Cumulative CPU 60.94 sec
2013-09-11 20:01:53,040 Stage-1 map = 100%, reduce = 58%, Cumulative CPU 62.99 sec
2013-09-11 20:01:54,046 Stage-1 map = 100%, reduce = 58%, Cumulative CPU 62.99 sec
2013-09-11 20:01:55,052 Stage-1 map = 100%, reduce = 58%, Cumulative CPU 62.99 sec
2013-09-11 20:01:56,066 Stage-1 map = 100%, reduce = 58%, Cumulative CPU 62.99 sec
2013-09-11 20:01:57,072 Stage-1 map = 100%, reduce = 93%, Cumulative CPU 62.99 sec
2013-09-11 20:01:58,078 Stage-1 map = 100%, reduce = 93%, Cumulative CPU 62.99 sec
2013-09-11 20:01:59,084 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 73.53 sec
2013-09-11 20:02:00,090 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 73.53 sec
MapReduce Total cumulative CPU time: 1 minutes 13 seconds 530 msec
Ended Job = job_201309101627_0378
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0379
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 20:02:02,569 Stage-2 map = 0%, reduce = 0%
2013-09-11 20:02:14,605 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 15.88 sec
2013-09-11 20:02:15,610 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 15.88 sec
2013-09-11 20:02:16,615 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 15.88 sec
2013-09-11 20:02:17,620 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 15.88 sec
2013-09-11 20:02:18,625 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 15.88 sec
2013-09-11 20:02:19,630 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 15.88 sec
2013-09-11 20:02:20,635 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 15.88 sec
2013-09-11 20:02:21,641 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 15.88 sec
2013-09-11 20:02:22,646 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 15.88 sec
2013-09-11 20:02:23,651 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 20.04 sec
2013-09-11 20:02:24,656 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 20.04 sec
2013-09-11 20:02:25,661 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 20.04 sec
MapReduce Total cumulative CPU time: 20 seconds 40 msec
Ended Job = job_201309101627_0379
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 73.53 sec HDFS Read: 31344843 HDFS Write: 51717050 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 20.04 sec HDFS Read: 51717819 HDFS Write: 490 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 33 seconds 570 msec
OK
Time taken: 72.762 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_1702@mturlrep13_201309112002_1101587188.txt
hive> ;
hive> quit;
times: 2
query: SELECT ClientIP, ClientIP - 1, ClientIP - 2, ClientIP - 3, count(*) AS c FROM hits_10m GROUP BY ClientIP, ClientIP - 1, ClientIP - 2, ClientIP - 3 ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_2174@mturlrep13_201309112002_314206404.txt
hive> SELECT ClientIP, ClientIP - 1, ClientIP - 2, ClientIP - 3, count(*) AS c FROM hits_10m GROUP BY ClientIP, ClientIP - 1, ClientIP - 2, ClientIP - 3 ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0380
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 20:02:38,506 Stage-1 map = 0%, reduce = 0%
2013-09-11 20:02:46,538 Stage-1 map = 32%, reduce = 0%
2013-09-11 20:02:49,550 Stage-1 map = 43%, reduce = 0%
2013-09-11 20:02:50,562 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 30.29 sec
2013-09-11 20:02:51,569 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 30.29 sec
2013-09-11 20:02:52,577 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 30.29 sec
2013-09-11 20:02:53,583 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 30.29 sec
2013-09-11 20:02:54,590 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 30.29 sec
2013-09-11 20:02:55,596 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 30.29 sec
2013-09-11 20:02:56,602 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 30.29 sec
2013-09-11 20:02:57,608 Stage-1 map = 68%, reduce = 8%, Cumulative CPU 30.29 sec
2013-09-11 20:02:58,613 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 30.29 sec
2013-09-11 20:02:59,618 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 30.29 sec
2013-09-11 20:03:00,623 Stage-1 map = 92%, reduce = 17%, Cumulative CPU 30.29 sec
2013-09-11 20:03:01,628 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 61.31 sec
2013-09-11 20:03:02,633 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 61.31 sec
2013-09-11 20:03:03,638 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 61.31 sec
2013-09-11 20:03:04,644 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 61.31 sec
2013-09-11 20:03:05,649 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 61.31 sec
2013-09-11 20:03:06,654 Stage-1 map = 100%, reduce = 21%, Cumulative CPU 61.31 sec
2013-09-11 20:03:07,661 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 63.28 sec
2013-09-11 20:03:08,669 Stage-1 map = 100%, reduce = 63%, Cumulative CPU 64.61 sec
2013-09-11 20:03:09,675 Stage-1 map = 100%, reduce = 63%, Cumulative CPU 64.61 sec
2013-09-11 20:03:10,680 Stage-1 map = 100%, reduce = 63%, Cumulative CPU 64.61 sec
2013-09-11 20:03:11,685 Stage-1 map = 100%, reduce = 63%, Cumulative CPU 64.61 sec
2013-09-11 20:03:12,691 Stage-1 map = 100%, reduce = 93%, Cumulative CPU 64.61 sec
2013-09-11 20:03:13,696 Stage-1 map = 100%, reduce = 93%, Cumulative CPU 64.61 sec
2013-09-11 20:03:14,702 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 74.03 sec
2013-09-11 20:03:15,708 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 74.03 sec
2013-09-11 20:03:16,713 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 74.03 sec
MapReduce Total cumulative CPU time: 1 minutes 14 seconds 30 msec
Ended Job = job_201309101627_0380
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0381
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 20:03:20,309 Stage-2 map = 0%, reduce = 0%
2013-09-11 20:03:31,341 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 15.54 sec
2013-09-11 20:03:32,346 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 15.54 sec
2013-09-11 20:03:33,351 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 15.54 sec
2013-09-11 20:03:34,355 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 15.54 sec
2013-09-11 20:03:35,361 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 15.54 sec
2013-09-11 20:03:36,365 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 15.54 sec
2013-09-11 20:03:37,370 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 15.54 sec
2013-09-11 20:03:38,375 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 15.54 sec
2013-09-11 20:03:39,380 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 15.54 sec
2013-09-11 20:03:40,385 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 19.84 sec
2013-09-11 20:03:41,392 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 19.84 sec
2013-09-11 20:03:42,397 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 19.84 sec
MapReduce Total cumulative CPU time: 19 seconds 840 msec
Ended Job = job_201309101627_0381
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 74.03 sec HDFS Read: 31344843 HDFS Write: 51717050 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 19.84 sec HDFS Read: 51717819 HDFS Write: 490 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 33 seconds 870 msec
OK
Time taken: 71.2 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_4511@mturlrep13_201309112003_47105365.txt
hive> ;
hive> quit;
times: 3
query: SELECT ClientIP, ClientIP - 1, ClientIP - 2, ClientIP - 3, count(*) AS c FROM hits_10m GROUP BY ClientIP, ClientIP - 1, ClientIP - 2, ClientIP - 3 ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_5089@mturlrep13_201309112003_1512209035.txt
hive> SELECT ClientIP, ClientIP - 1, ClientIP - 2, ClientIP - 3, count(*) AS c FROM hits_10m GROUP BY ClientIP, ClientIP - 1, ClientIP - 2, ClientIP - 3 ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0382
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 20:03:55,705 Stage-1 map = 0%, reduce = 0%
2013-09-11 20:04:03,735 Stage-1 map = 29%, reduce = 0%
2013-09-11 20:04:06,747 Stage-1 map = 43%, reduce = 0%
2013-09-11 20:04:07,758 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 31.07 sec
2013-09-11 20:04:08,765 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 31.07 sec
2013-09-11 20:04:09,773 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 31.07 sec
2013-09-11 20:04:10,780 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 31.07 sec
2013-09-11 20:04:11,786 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 31.07 sec
2013-09-11 20:04:12,792 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 31.07 sec
2013-09-11 20:04:13,797 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 31.07 sec
2013-09-11 20:04:14,802 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 31.07 sec
2013-09-11 20:04:15,808 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 31.07 sec
2013-09-11 20:04:16,813 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 31.07 sec
2013-09-11 20:04:17,818 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 31.07 sec
2013-09-11 20:04:18,824 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 46.19 sec
2013-09-11 20:04:19,829 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 63.28 sec
2013-09-11 20:04:20,835 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 63.28 sec
2013-09-11 20:04:21,840 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 63.28 sec
2013-09-11 20:04:22,845 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 63.28 sec
2013-09-11 20:04:23,851 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 63.28 sec
2013-09-11 20:04:24,857 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 63.28 sec
2013-09-11 20:04:25,863 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 63.28 sec
2013-09-11 20:04:26,870 Stage-1 map = 100%, reduce = 63%, Cumulative CPU 65.42 sec
2013-09-11 20:04:27,876 Stage-1 map = 100%, reduce = 63%, Cumulative CPU 65.42 sec
2013-09-11 20:04:28,880 Stage-1 map = 100%, reduce = 63%, Cumulative CPU 65.42 sec
2013-09-11 20:04:29,886 Stage-1 map = 100%, reduce = 63%, Cumulative CPU 65.42 sec
2013-09-11 20:04:30,891 Stage-1 map = 100%, reduce = 92%, Cumulative CPU 65.42 sec
2013-09-11 20:04:31,896 Stage-1 map = 100%, reduce = 92%, Cumulative CPU 65.42 sec
2013-09-11 20:04:32,902 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 76.37 sec
2013-09-11 20:04:33,908 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 76.37 sec
MapReduce Total cumulative CPU time: 1 minutes 16 seconds 370 msec
Ended Job = job_201309101627_0382
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0383
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 20:04:36,402 Stage-2 map = 0%, reduce = 0%
2013-09-11 20:04:48,438 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 15.09 sec
2013-09-11 20:04:49,443 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 15.09 sec
2013-09-11 20:04:50,448 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 15.09 sec
2013-09-11 20:04:51,452 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 15.09 sec
2013-09-11 20:04:52,456 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 15.09 sec
2013-09-11 20:04:53,461 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 15.09 sec
2013-09-11 20:04:54,466 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 15.09 sec
2013-09-11 20:04:55,471 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 15.09 sec
2013-09-11 20:04:56,476 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 15.09 sec
2013-09-11 20:04:57,481 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 19.29 sec
2013-09-11 20:04:58,486 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 19.29 sec
MapReduce Total cumulative CPU time: 19 seconds 290 msec
Ended Job = job_201309101627_0383
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 76.37 sec HDFS Read: 31344843 HDFS Write: 51717050 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 19.29 sec HDFS Read: 51717819 HDFS Write: 490 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 35 seconds 660 msec
OK
Time taken: 70.36 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_7537@mturlrep13_201309112005_898269574.txt
hive> ;
hive> quit;
times: 1
query:
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_8303@mturlrep13_201309112005_242909153.txt
hive> ;
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_8520@mturlrep13_201309112005_740096707.txt
hive> ;
hive> quit;
times: 2
query:
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_9324@mturlrep13_201309112005_1365009510.txt
hive> ;
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_9538@mturlrep13_201309112005_81537852.txt
hive> ;
hive> quit;
times: 3
query:
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_9966@mturlrep13_201309112005_1125410388.txt
hive> ;
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_10208@mturlrep13_201309112005_1554289038.txt
hive> ;
hive> quit;
times: 1
query: SELECT URL, count(*) AS PageViews FROM hits_10m WHERE CounterID = 34 AND EventDate >= TIMESTAMP('2013-07-01') AND EventDate <= TIMESTAMP('2013-07-31') AND NOT DontCountHits != 0 AND NOT Refresh != 0 AND URL != '' GROUP BY URL ORDER BY PageViews DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_10658@mturlrep13_201309112005_874745706.txt
hive> SELECT URL, count(*) AS PageViews FROM hits_10m WHERE CounterID = 34 AND EventDate >= TIMESTAMP('2013-07-01') AND EventDate <= TIMESTAMP('2013-07-31') AND NOT DontCountHits != 0 AND NOT Refresh != 0 AND URL != '' GROUP BY URL ORDER BY PageViews DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0384
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 20:05:43,792 Stage-1 map = 0%, reduce = 0%
2013-09-11 20:05:50,819 Stage-1 map = 32%, reduce = 0%
2013-09-11 20:05:52,837 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.84 sec
2013-09-11 20:05:53,845 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.84 sec
2013-09-11 20:05:54,853 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.84 sec
2013-09-11 20:05:55,859 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.84 sec
2013-09-11 20:05:56,866 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.84 sec
2013-09-11 20:05:57,873 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.84 sec
2013-09-11 20:05:58,880 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.84 sec
2013-09-11 20:05:59,885 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 18.84 sec
2013-09-11 20:06:00,891 Stage-1 map = 89%, reduce = 17%, Cumulative CPU 27.75 sec
2013-09-11 20:06:01,896 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.3 sec
2013-09-11 20:06:02,902 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.3 sec
2013-09-11 20:06:03,907 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.3 sec
2013-09-11 20:06:04,913 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.3 sec
2013-09-11 20:06:05,920 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 41.53 sec
2013-09-11 20:06:06,926 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 41.53 sec
2013-09-11 20:06:07,932 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 41.53 sec
MapReduce Total cumulative CPU time: 41 seconds 530 msec
Ended Job = job_201309101627_0384
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0385
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 20:06:10,591 Stage-2 map = 0%, reduce = 0%
2013-09-11 20:06:12,599 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec
2013-09-11 20:06:13,604 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec
2013-09-11 20:06:14,609 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec
2013-09-11 20:06:15,614 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec
2013-09-11 20:06:16,618 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec
2013-09-11 20:06:17,622 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec
2013-09-11 20:06:18,627 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec
2013-09-11 20:06:19,633 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 0.74 sec
2013-09-11 20:06:20,638 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.08 sec
2013-09-11 20:06:21,644 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.08 sec
MapReduce Total cumulative CPU time: 2 seconds 80 msec
Ended Job = job_201309101627_0385
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 41.53 sec HDFS Read: 118784021 HDFS Write: 192 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.08 sec HDFS Read: 961 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 43 seconds 610 msec
OK
Time taken: 47.898 seconds
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_13025@mturlrep13_201309112006_366484138.txt
hive> ;
hive> quit;
times: 2
query: SELECT URL, count(*) AS PageViews FROM hits_10m WHERE CounterID = 34 AND EventDate >= TIMESTAMP('2013-07-01') AND EventDate <= TIMESTAMP('2013-07-31') AND NOT DontCountHits != 0 AND NOT Refresh != 0 AND URL != '' GROUP BY URL ORDER BY PageViews DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_13471@mturlrep13_201309112006_1061969476.txt
hive> SELECT URL, count(*) AS PageViews FROM hits_10m WHERE CounterID = 34 AND EventDate >= TIMESTAMP('2013-07-01') AND EventDate <= TIMESTAMP('2013-07-31') AND NOT DontCountHits != 0 AND NOT Refresh != 0 AND URL != '' GROUP BY URL ORDER BY PageViews DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0386
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 20:06:36,447 Stage-1 map = 0%, reduce = 0%
2013-09-11 20:06:43,482 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 9.43 sec
2013-09-11 20:06:44,489 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.35 sec
2013-09-11 20:06:45,496 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.35 sec
2013-09-11 20:06:46,503 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.35 sec
2013-09-11 20:06:47,508 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.35 sec
2013-09-11 20:06:48,514 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.35 sec
2013-09-11 20:06:49,520 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.35 sec
2013-09-11 20:06:50,526 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.35 sec
2013-09-11 20:06:51,533 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.7 sec
2013-09-11 20:06:52,538 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.7 sec
2013-09-11 20:06:53,543 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.7 sec
2013-09-11 20:06:54,548 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.7 sec
2013-09-11 20:06:55,554 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.7 sec
2013-09-11 20:06:56,560 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.7 sec
2013-09-11 20:06:57,568 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 41.93 sec
2013-09-11 20:06:58,810 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 41.93 sec
MapReduce Total cumulative CPU time: 41 seconds 930 msec
Ended Job = job_201309101627_0386
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0387
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 20:07:01,348 Stage-2 map = 0%, reduce = 0%
2013-09-11 20:07:03,357 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.71 sec
2013-09-11 20:07:04,362 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.71 sec
2013-09-11 20:07:05,367 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.71 sec
2013-09-11 20:07:06,371 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.71 sec
2013-09-11 20:07:07,376 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.71 sec
2013-09-11 20:07:08,381 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.71 sec
2013-09-11 20:07:09,387 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.71 sec
2013-09-11 20:07:10,392 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 0.71 sec
2013-09-11 20:07:11,397 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.1 sec
2013-09-11 20:07:12,402 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.1 sec
MapReduce Total cumulative CPU time: 2 seconds 100 msec
Ended Job = job_201309101627_0387
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 41.93 sec HDFS Read: 118784021 HDFS Write: 192 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.1 sec HDFS Read: 961 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 44 seconds 30 msec
OK
Time taken: 44.248 seconds
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_15787@mturlrep13_201309112007_1440494394.txt
hive> ;
hive> quit;
times: 3
query: SELECT URL, count(*) AS PageViews FROM hits_10m WHERE CounterID = 34 AND EventDate >= TIMESTAMP('2013-07-01') AND EventDate <= TIMESTAMP('2013-07-31') AND NOT DontCountHits != 0 AND NOT Refresh != 0 AND URL != '' GROUP BY URL ORDER BY PageViews DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_16251@mturlrep13_201309112007_1706863363.txt
hive> SELECT URL, count(*) AS PageViews FROM hits_10m WHERE CounterID = 34 AND EventDate >= TIMESTAMP('2013-07-01') AND EventDate <= TIMESTAMP('2013-07-31') AND NOT DontCountHits != 0 AND NOT Refresh != 0 AND URL != '' GROUP BY URL ORDER BY PageViews DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0388
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 20:07:26,087 Stage-1 map = 0%, reduce = 0%
2013-09-11 20:07:34,130 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 9.01 sec
2013-09-11 20:07:35,137 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.94 sec
2013-09-11 20:07:36,145 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.94 sec
2013-09-11 20:07:37,152 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.94 sec
2013-09-11 20:07:38,158 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.94 sec
2013-09-11 20:07:39,165 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.94 sec
2013-09-11 20:07:40,172 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.94 sec
2013-09-11 20:07:41,179 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.94 sec
2013-09-11 20:07:42,185 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.34 sec
2013-09-11 20:07:43,190 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.34 sec
2013-09-11 20:07:44,195 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.34 sec
2013-09-11 20:07:45,201 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.34 sec
2013-09-11 20:07:46,206 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.34 sec
2013-09-11 20:07:47,214 Stage-1 map = 100%, reduce = 58%, Cumulative CPU 39.41 sec
2013-09-11 20:07:48,220 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 41.49 sec
2013-09-11 20:07:49,226 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 41.49 sec
MapReduce Total cumulative CPU time: 41 seconds 490 msec
Ended Job = job_201309101627_0388
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0389
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 20:07:51,717 Stage-2 map = 0%, reduce = 0%
2013-09-11 20:07:53,726 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.91 sec
2013-09-11 20:07:54,731 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.91 sec
2013-09-11 20:07:55,737 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.91 sec
2013-09-11 20:07:56,742 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.91 sec
2013-09-11 20:07:57,747 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.91 sec
2013-09-11 20:07:58,752 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.91 sec
2013-09-11 20:07:59,757 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.91 sec
2013-09-11 20:08:00,763 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 0.91 sec
2013-09-11 20:08:01,768 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.27 sec
2013-09-11 20:08:02,774 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.27 sec
2013-09-11 20:08:03,780 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.27 sec
MapReduce Total cumulative CPU time: 2 seconds 270 msec
Ended Job = job_201309101627_0389
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 41.49 sec HDFS Read: 118784021 HDFS Write: 192 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.27 sec HDFS Read: 961 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 43 seconds 760 msec
OK
Time taken: 45.022 seconds
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_18797@mturlrep13_201309112008_775683828.txt
hive> ;
hive> quit;
times: 1
query: SELECT Title, count(*) AS PageViews FROM hits_10m WHERE CounterID = 34 AND EventDate >= TIMESTAMP('2013-07-01') AND EventDate <= TIMESTAMP('2013-07-31') AND NOT DontCountHits != 0 AND NOT Refresh != 0 AND Title != '' GROUP BY Title ORDER BY PageViews DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_19297@mturlrep13_201309112008_761964500.txt
hive> SELECT Title, count(*) AS PageViews FROM hits_10m WHERE CounterID = 34 AND EventDate >= TIMESTAMP('2013-07-01') AND EventDate <= TIMESTAMP('2013-07-31') AND NOT DontCountHits != 0 AND NOT Refresh != 0 AND Title != '' GROUP BY Title ORDER BY PageViews DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0390
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 20:08:25,097 Stage-1 map = 0%, reduce = 0%
2013-09-11 20:08:32,126 Stage-1 map = 36%, reduce = 0%
2013-09-11 20:08:34,143 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.7 sec
2013-09-11 20:08:35,151 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.7 sec
2013-09-11 20:08:36,159 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.7 sec
2013-09-11 20:08:37,166 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.7 sec
2013-09-11 20:08:38,172 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.7 sec
2013-09-11 20:08:39,181 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.7 sec
2013-09-11 20:08:40,191 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.7 sec
2013-09-11 20:08:41,197 Stage-1 map = 84%, reduce = 17%, Cumulative CPU 18.7 sec
2013-09-11 20:08:42,203 Stage-1 map = 93%, reduce = 17%, Cumulative CPU 27.69 sec
2013-09-11 20:08:43,208 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.14 sec
2013-09-11 20:08:44,214 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.14 sec
2013-09-11 20:08:45,219 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.14 sec
2013-09-11 20:08:46,227 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.14 sec
2013-09-11 20:08:47,235 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 41.23 sec
2013-09-11 20:08:48,242 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 41.23 sec
2013-09-11 20:08:49,248 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 41.23 sec
MapReduce Total cumulative CPU time: 41 seconds 230 msec
Ended Job = job_201309101627_0390
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0391
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 20:08:51,794 Stage-2 map = 0%, reduce = 0%
2013-09-11 20:08:53,820 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.72 sec
2013-09-11 20:08:54,826 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.72 sec
2013-09-11 20:08:55,832 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.72 sec
2013-09-11 20:08:56,836 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.72 sec
2013-09-11 20:08:57,842 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.72 sec
2013-09-11 20:08:58,848 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.72 sec
2013-09-11 20:08:59,853 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.72 sec
2013-09-11 20:09:00,858 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 0.72 sec
2013-09-11 20:09:01,864 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.02 sec
2013-09-11 20:09:02,870 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.02 sec
MapReduce Total cumulative CPU time: 2 seconds 20 msec
Ended Job = job_201309101627_0391
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 41.23 sec HDFS Read: 115339269 HDFS Write: 192 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.02 sec HDFS Read: 961 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 43 seconds 250 msec
OK
Time taken: 47.69 seconds
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_21543@mturlrep13_201309112009_592955581.txt
hive> ;
hive> quit;
times: 2
query: SELECT Title, count(*) AS PageViews FROM hits_10m WHERE CounterID = 34 AND EventDate >= TIMESTAMP('2013-07-01') AND EventDate <= TIMESTAMP('2013-07-31') AND NOT DontCountHits != 0 AND NOT Refresh != 0 AND Title != '' GROUP BY Title ORDER BY PageViews DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_21971@mturlrep13_201309112009_2025172247.txt
hive> SELECT Title, count(*) AS PageViews FROM hits_10m WHERE CounterID = 34 AND EventDate >= TIMESTAMP('2013-07-01') AND EventDate <= TIMESTAMP('2013-07-31') AND NOT DontCountHits != 0 AND NOT Refresh != 0 AND Title != '' GROUP BY Title ORDER BY PageViews DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0392
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 20:09:16,753 Stage-1 map = 0%, reduce = 0%
2013-09-11 20:09:23,786 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 9.28 sec
2013-09-11 20:09:24,794 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.17 sec
2013-09-11 20:09:25,802 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.17 sec
2013-09-11 20:09:26,809 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.17 sec
2013-09-11 20:09:27,816 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.17 sec
2013-09-11 20:09:28,823 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.17 sec
2013-09-11 20:09:29,829 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.17 sec
2013-09-11 20:09:30,835 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.17 sec
2013-09-11 20:09:31,842 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.06 sec
2013-09-11 20:09:32,847 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.06 sec
2013-09-11 20:09:33,853 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.06 sec
2013-09-11 20:09:34,858 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.06 sec
2013-09-11 20:09:35,864 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.06 sec
2013-09-11 20:09:36,870 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.06 sec
2013-09-11 20:09:37,877 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 41.27 sec
2013-09-11 20:09:38,884 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 41.27 sec
MapReduce Total cumulative CPU time: 41 seconds 270 msec
Ended Job = job_201309101627_0392
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0393
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 20:09:42,378 Stage-2 map = 0%, reduce = 0%
2013-09-11 20:09:43,384 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.83 sec
2013-09-11 20:09:44,390 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.83 sec
2013-09-11 20:09:45,395 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.83 sec
2013-09-11 20:09:46,400 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.83 sec
2013-09-11 20:09:47,405 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.83 sec
2013-09-11 20:09:48,410 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.83 sec
2013-09-11 20:09:49,415 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.83 sec
2013-09-11 20:09:50,420 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.83 sec
2013-09-11 20:09:51,426 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.2 sec
2013-09-11 20:09:52,432 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.2 sec
2013-09-11 20:09:53,437 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.2 sec
MapReduce Total cumulative CPU time: 2 seconds 200 msec
Ended Job = job_201309101627_0393
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 41.27 sec HDFS Read: 115339269 HDFS Write: 192 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.2 sec HDFS Read: 961 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 43 seconds 470 msec
OK
Time taken: 44.993 seconds
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_24204@mturlrep13_201309112009_1407745219.txt
hive> ;
hive> quit;
times: 3
query: SELECT Title, count(*) AS PageViews FROM hits_10m WHERE CounterID = 34 AND EventDate >= TIMESTAMP('2013-07-01') AND EventDate <= TIMESTAMP('2013-07-31') AND NOT DontCountHits != 0 AND NOT Refresh != 0 AND Title != '' GROUP BY Title ORDER BY PageViews DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_24634@mturlrep13_201309112009_1617977624.txt
hive> SELECT Title, count(*) AS PageViews FROM hits_10m WHERE CounterID = 34 AND EventDate >= TIMESTAMP('2013-07-01') AND EventDate <= TIMESTAMP('2013-07-31') AND NOT DontCountHits != 0 AND NOT Refresh != 0 AND Title != '' GROUP BY Title ORDER BY PageViews DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0394
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 20:10:07,429 Stage-1 map = 0%, reduce = 0%
2013-09-11 20:10:14,464 Stage-1 map = 46%, reduce = 0%, Cumulative CPU 9.2 sec
2013-09-11 20:10:15,472 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.91 sec
2013-09-11 20:10:16,480 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.91 sec
2013-09-11 20:10:17,486 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.91 sec
2013-09-11 20:10:18,492 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.91 sec
2013-09-11 20:10:19,499 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.91 sec
2013-09-11 20:10:20,505 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.91 sec
2013-09-11 20:10:21,511 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.91 sec
2013-09-11 20:10:22,517 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.24 sec
2013-09-11 20:10:23,523 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.24 sec
2013-09-11 20:10:24,528 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.24 sec
2013-09-11 20:10:25,533 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.24 sec
2013-09-11 20:10:26,539 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.24 sec
2013-09-11 20:10:27,544 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.24 sec
2013-09-11 20:10:28,552 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 41.33 sec
2013-09-11 20:10:29,557 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 41.33 sec
MapReduce Total cumulative CPU time: 41 seconds 330 msec
Ended Job = job_201309101627_0394
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0395
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 20:10:33,096 Stage-2 map = 0%, reduce = 0%
2013-09-11 20:10:34,102 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.69 sec
2013-09-11 20:10:35,108 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.69 sec
2013-09-11 20:10:36,113 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.69 sec
2013-09-11 20:10:37,118 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.69 sec
2013-09-11 20:10:38,123 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.69 sec
2013-09-11 20:10:39,127 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.69 sec
2013-09-11 20:10:40,132 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.69 sec
2013-09-11 20:10:41,137 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.69 sec
2013-09-11 20:10:42,143 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.09 sec
2013-09-11 20:10:43,148 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.09 sec
2013-09-11 20:10:44,154 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.09 sec
MapReduce Total cumulative CPU time: 2 seconds 90 msec
Ended Job = job_201309101627_0395
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 41.33 sec HDFS Read: 115339269 HDFS Write: 192 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.09 sec HDFS Read: 961 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 43 seconds 420 msec
OK
Time taken: 45.13 seconds
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_27651@mturlrep13_201309112010_413283612.txt
hive> ;
hive> quit;
times: 1
query: SELECT URL, count(*) AS PageViews FROM hits_10m WHERE CounterID = 34 AND EventDate >= TIMESTAMP('2013-07-01') AND EventDate <= TIMESTAMP('2013-07-31') AND NOT Refresh != 0 AND IsLink != 0 AND NOT IsDownload != 0 GROUP BY URL ORDER BY PageViews DESC LIMIT 1000;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_28105@mturlrep13_201309112010_1811330810.txt
hive> SELECT URL, count(*) AS PageViews FROM hits_10m WHERE CounterID = 34 AND EventDate >= TIMESTAMP('2013-07-01') AND EventDate <= TIMESTAMP('2013-07-31') AND NOT Refresh != 0 AND IsLink != 0 AND NOT IsDownload != 0 GROUP BY URL ORDER BY PageViews DESC LIMIT 1000;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0396
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 20:11:08,317 Stage-1 map = 0%, reduce = 0%
2013-09-11 20:11:15,347 Stage-1 map = 32%, reduce = 0%
2013-09-11 20:11:17,363 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.33 sec
2013-09-11 20:11:18,371 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.33 sec
2013-09-11 20:11:19,380 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.33 sec
2013-09-11 20:11:20,386 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.33 sec
2013-09-11 20:11:21,393 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.33 sec
2013-09-11 20:11:22,399 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.33 sec
2013-09-11 20:11:23,406 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.33 sec
2013-09-11 20:11:24,413 Stage-1 map = 68%, reduce = 8%, Cumulative CPU 19.33 sec
2013-09-11 20:11:25,420 Stage-1 map = 84%, reduce = 17%, Cumulative CPU 19.33 sec
2013-09-11 20:11:26,427 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.65 sec
2013-09-11 20:11:27,432 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.65 sec
2013-09-11 20:11:28,438 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.65 sec
2013-09-11 20:11:29,443 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.65 sec
2013-09-11 20:11:30,451 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 41.61 sec
2013-09-11 20:11:31,458 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 41.61 sec
2013-09-11 20:11:32,464 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 41.61 sec
MapReduce Total cumulative CPU time: 41 seconds 610 msec
Ended Job = job_201309101627_0396
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0397
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 20:11:34,967 Stage-2 map = 0%, reduce = 0%
2013-09-11 20:11:36,976 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.77 sec
2013-09-11 20:11:37,981 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.77 sec
2013-09-11 20:11:38,986 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.77 sec
2013-09-11 20:11:39,990 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.77 sec
2013-09-11 20:11:40,995 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.77 sec
2013-09-11 20:11:42,000 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.77 sec
2013-09-11 20:11:43,038 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.77 sec
2013-09-11 20:11:44,044 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 0.77 sec
2013-09-11 20:11:45,050 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.18 sec
2013-09-11 20:11:46,056 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.18 sec
2013-09-11 20:11:47,062 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.18 sec
MapReduce Total cumulative CPU time: 2 seconds 180 msec
Ended Job = job_201309101627_0397
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 41.61 sec HDFS Read: 118662691 HDFS Write: 192 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.18 sec HDFS Read: 959 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 43 seconds 790 msec
OK
Time taken: 48.915 seconds
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_30480@mturlrep13_201309112011_933995601.txt
hive> ;
hive> quit;
times: 2
query: SELECT URL, count(*) AS PageViews FROM hits_10m WHERE CounterID = 34 AND EventDate >= TIMESTAMP('2013-07-01') AND EventDate <= TIMESTAMP('2013-07-31') AND NOT Refresh != 0 AND IsLink != 0 AND NOT IsDownload != 0 GROUP BY URL ORDER BY PageViews DESC LIMIT 1000;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_30907@mturlrep13_201309112011_251050369.txt
hive> SELECT URL, count(*) AS PageViews FROM hits_10m WHERE CounterID = 34 AND EventDate >= TIMESTAMP('2013-07-01') AND EventDate <= TIMESTAMP('2013-07-31') AND NOT Refresh != 0 AND IsLink != 0 AND NOT IsDownload != 0 GROUP BY URL ORDER BY PageViews DESC LIMIT 1000;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0398
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 20:12:01,259 Stage-1 map = 0%, reduce = 0%
2013-09-11 20:12:08,296 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.56 sec
2013-09-11 20:12:09,304 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.56 sec
2013-09-11 20:12:10,313 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.56 sec
2013-09-11 20:12:11,320 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.56 sec
2013-09-11 20:12:12,326 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.56 sec
2013-09-11 20:12:13,333 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.56 sec
2013-09-11 20:12:14,339 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.56 sec
2013-09-11 20:12:15,346 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.56 sec
2013-09-11 20:12:16,352 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 36.26 sec
2013-09-11 20:12:17,358 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 36.26 sec
2013-09-11 20:12:18,363 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 36.26 sec
2013-09-11 20:12:19,368 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 36.26 sec
2013-09-11 20:12:20,373 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 36.26 sec
2013-09-11 20:12:21,379 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 36.26 sec
2013-09-11 20:12:22,387 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 40.33 sec
2013-09-11 20:12:23,393 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 40.33 sec
MapReduce Total cumulative CPU time: 40 seconds 330 msec
Ended Job = job_201309101627_0398
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0399
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 20:12:26,898 Stage-2 map = 0%, reduce = 0%
2013-09-11 20:12:27,903 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.75 sec
2013-09-11 20:12:28,909 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.75 sec
2013-09-11 20:12:29,915 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.75 sec
2013-09-11 20:12:30,920 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.75 sec
2013-09-11 20:12:31,925 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.75 sec
2013-09-11 20:12:32,929 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.75 sec
2013-09-11 20:12:33,934 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.75 sec
2013-09-11 20:12:34,939 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.75 sec
2013-09-11 20:12:35,944 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.25 sec
2013-09-11 20:12:36,950 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.25 sec
2013-09-11 20:12:37,956 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.25 sec
MapReduce Total cumulative CPU time: 2 seconds 250 msec
Ended Job = job_201309101627_0399
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 40.33 sec HDFS Read: 118662691 HDFS Write: 192 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.25 sec HDFS Read: 961 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 42 seconds 580 msec
OK
Time taken: 45.122 seconds
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_782@mturlrep13_201309112012_979888029.txt
hive> ;
hive> quit;
times: 3
query: SELECT URL, count(*) AS PageViews FROM hits_10m WHERE CounterID = 34 AND EventDate >= TIMESTAMP('2013-07-01') AND EventDate <= TIMESTAMP('2013-07-31') AND NOT Refresh != 0 AND IsLink != 0 AND NOT IsDownload != 0 GROUP BY URL ORDER BY PageViews DESC LIMIT 1000;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_1252@mturlrep13_201309112012_1568189503.txt
hive> SELECT URL, count(*) AS PageViews FROM hits_10m WHERE CounterID = 34 AND EventDate >= TIMESTAMP('2013-07-01') AND EventDate <= TIMESTAMP('2013-07-31') AND NOT Refresh != 0 AND IsLink != 0 AND NOT IsDownload != 0 GROUP BY URL ORDER BY PageViews DESC LIMIT 1000;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0400
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 20:12:51,075 Stage-1 map = 0%, reduce = 0%
2013-09-11 20:12:59,115 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 9.84 sec
2013-09-11 20:13:00,123 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.99 sec
2013-09-11 20:13:01,130 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.99 sec
2013-09-11 20:13:02,136 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.99 sec
2013-09-11 20:13:03,142 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.99 sec
2013-09-11 20:13:04,149 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.99 sec
2013-09-11 20:13:05,156 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.99 sec
2013-09-11 20:13:06,163 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.99 sec
2013-09-11 20:13:07,169 Stage-1 map = 93%, reduce = 17%, Cumulative CPU 28.64 sec
2013-09-11 20:13:08,176 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.72 sec
2013-09-11 20:13:09,181 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.72 sec
2013-09-11 20:13:10,188 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.72 sec
2013-09-11 20:13:11,193 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.72 sec
2013-09-11 20:13:12,201 Stage-1 map = 100%, reduce = 58%, Cumulative CPU 41.17 sec
2013-09-11 20:13:13,208 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 42.56 sec
2013-09-11 20:13:14,214 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 42.56 sec
MapReduce Total cumulative CPU time: 42 seconds 560 msec
Ended Job = job_201309101627_0400
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0401
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 20:13:16,674 Stage-2 map = 0%, reduce = 0%
2013-09-11 20:13:18,684 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.8 sec
2013-09-11 20:13:19,689 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.8 sec
2013-09-11 20:13:20,695 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.8 sec
2013-09-11 20:13:21,700 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.8 sec
2013-09-11 20:13:22,705 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.8 sec
2013-09-11 20:13:23,711 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.8 sec
2013-09-11 20:13:24,716 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.8 sec
2013-09-11 20:13:25,721 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 0.8 sec
2013-09-11 20:13:26,727 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.2 sec
2013-09-11 20:13:27,734 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.2 sec
2013-09-11 20:13:28,740 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.2 sec
MapReduce Total cumulative CPU time: 2 seconds 200 msec
Ended Job = job_201309101627_0401
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 42.56 sec HDFS Read: 118662691 HDFS Write: 192 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.2 sec HDFS Read: 961 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 44 seconds 760 msec
OK
Time taken: 44.996 seconds
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_3657@mturlrep13_201309112013_152749800.txt
hive> ;
hive> quit;
times: 1
query: SELECT TraficSourceID, SearchEngineID, AdvEngineID, CASE WHEN SearchEngineID = 0 AND AdvEngineID = 0 THEN Referer ELSE '' END AS Src, URL AS Dst, count(*) AS PageViews FROM hits_10m WHERE CounterID = 34 AND EventDate >= TIMESTAMP('2013-07-01') AND EventDate <= TIMESTAMP('2013-07-31') AND NOT Refresh != 0 GROUP BY TraficSourceID, SearchEngineID, AdvEngineID, Src, Dst ORDER BY PageViews DESC LIMIT 1000;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_4123@mturlrep13_201309112013_250715100.txt
hive> SELECT TraficSourceID, SearchEngineID, AdvEngineID, CASE WHEN SearchEngineID = 0 AND AdvEngineID = 0 THEN Referer ELSE '' END AS Src, URL AS Dst, count(*) AS PageViews FROM hits_10m WHERE CounterID = 34 AND EventDate >= TIMESTAMP('2013-07-01') AND EventDate <= TIMESTAMP('2013-07-31') AND NOT Refresh != 0 GROUP BY TraficSourceID, SearchEngineID, AdvEngineID, Src, Dst ORDER BY PageViews DESC LIMIT 1000; ;
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_4384@mturlrep13_201309112013_11229958.txt
hive> ;
hive> quit;
times: 2
query: SELECT TraficSourceID, SearchEngineID, AdvEngineID, CASE WHEN SearchEngineID = 0 AND AdvEngineID = 0 THEN Referer ELSE '' END AS Src, URL AS Dst, count(*) AS PageViews FROM hits_10m WHERE CounterID = 34 AND EventDate >= TIMESTAMP('2013-07-01') AND EventDate <= TIMESTAMP('2013-07-31') AND NOT Refresh != 0 GROUP BY TraficSourceID, SearchEngineID, AdvEngineID, Src, Dst ORDER BY PageViews DESC LIMIT 1000;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_4967@mturlrep13_201309112013_1609437791.txt
hive> SELECT TraficSourceID, SearchEngineID, AdvEngineID, CASE WHEN SearchEngineID = 0 AND AdvEngineID = 0 THEN Referer ELSE '' END AS Src, URL AS Dst, count(*) AS PageViews FROM hits_10m WHERE CounterID = 34 AND EventDate >= TIMESTAMP('2013-07-01') AND EventDate <= TIMESTAMP('2013-07-31') AND NOT Refresh != 0 GROUP BY TraficSourceID, SearchEngineID, AdvEngineID, Src, Dst ORDER BY PageViews DESC LIMIT 1000; ;
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_5327@mturlrep13_201309112013_984749990.txt
hive> ;
hive> quit;
times: 3
query: SELECT TraficSourceID, SearchEngineID, AdvEngineID, CASE WHEN SearchEngineID = 0 AND AdvEngineID = 0 THEN Referer ELSE '' END AS Src, URL AS Dst, count(*) AS PageViews FROM hits_10m WHERE CounterID = 34 AND EventDate >= TIMESTAMP('2013-07-01') AND EventDate <= TIMESTAMP('2013-07-31') AND NOT Refresh != 0 GROUP BY TraficSourceID, SearchEngineID, AdvEngineID, Src, Dst ORDER BY PageViews DESC LIMIT 1000;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_5791@mturlrep13_201309112014_528239556.txt
hive> SELECT TraficSourceID, SearchEngineID, AdvEngineID, CASE WHEN SearchEngineID = 0 AND AdvEngineID = 0 THEN Referer ELSE '' END AS Src, URL AS Dst, count(*) AS PageViews FROM hits_10m WHERE CounterID = 34 AND EventDate >= TIMESTAMP('2013-07-01') AND EventDate <= TIMESTAMP('2013-07-31') AND NOT Refresh != 0 GROUP BY TraficSourceID, SearchEngineID, AdvEngineID, Src, Dst ORDER BY PageViews DESC LIMIT 1000; ;
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_6065@mturlrep13_201309112014_1043898379.txt
hive> ;
hive> quit;
times: 1
query: SELECT URLHash, EventDate, count(*) AS PageViews FROM hits_10m WHERE CounterID = 34 AND EventDate >= TIMESTAMP('2013-07-01') AND EventDate <= TIMESTAMP('2013-07-31') AND NOT Refresh != 0 AND TraficSourceID IN (-1, 6) AND RefererHash = 6202628419148573758 GROUP BY URLHash, EventDate ORDER BY PageViews DESC LIMIT 100000;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_6531@mturlrep13_201309112014_1491442849.txt
hive> SELECT URLHash, EventDate, count(*) AS PageViews FROM hits_10m WHERE CounterID = 34 AND EventDate >= TIMESTAMP('2013-07-01') AND EventDate <= TIMESTAMP('2013-07-31') AND NOT Refresh != 0 AND TraficSourceID IN (-1, 6) AND RefererHash = 6202628419148573758 GROUP BY URLHash, EventDate ORDER BY PageViews DESC LIMIT 100000; ;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0402
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 20:14:25,167 Stage-1 map = 0%, reduce = 0%
2013-09-11 20:14:32,197 Stage-1 map = 36%, reduce = 0%
2013-09-11 20:14:33,208 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.37 sec
2013-09-11 20:14:34,216 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.37 sec
2013-09-11 20:14:35,224 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.37 sec
2013-09-11 20:14:36,231 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.37 sec
2013-09-11 20:14:37,237 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.37 sec
2013-09-11 20:14:38,243 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.37 sec
2013-09-11 20:14:39,250 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.37 sec
2013-09-11 20:14:40,257 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.37 sec
2013-09-11 20:14:41,263 Stage-1 map = 84%, reduce = 17%, Cumulative CPU 19.37 sec
2013-09-11 20:14:42,271 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.01 sec
2013-09-11 20:14:43,276 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.01 sec
2013-09-11 20:14:44,282 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.01 sec
2013-09-11 20:14:45,287 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.01 sec
2013-09-11 20:14:46,293 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.01 sec
2013-09-11 20:14:47,301 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 42.21 sec
2013-09-11 20:14:48,308 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 42.21 sec
MapReduce Total cumulative CPU time: 42 seconds 210 msec
Ended Job = job_201309101627_0402
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0403
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 20:14:50,774 Stage-2 map = 0%, reduce = 0%
2013-09-11 20:14:52,782 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.75 sec
2013-09-11 20:14:53,788 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.75 sec
2013-09-11 20:14:54,792 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.75 sec
2013-09-11 20:14:55,797 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.75 sec
2013-09-11 20:14:56,802 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.75 sec
2013-09-11 20:14:57,807 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.75 sec
2013-09-11 20:14:58,813 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.75 sec
2013-09-11 20:14:59,819 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 0.75 sec
2013-09-11 20:15:00,824 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.13 sec
2013-09-11 20:15:01,831 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.13 sec
MapReduce Total cumulative CPU time: 2 seconds 130 msec
Ended Job = job_201309101627_0403
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 42.21 sec HDFS Read: 148406904 HDFS Write: 192 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.13 sec HDFS Read: 961 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 44 seconds 340 msec
OK
Time taken: 46.841 seconds
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_8933@mturlrep13_201309112015_1862976498.txt
hive> ;
hive> quit;
times: 2
query: SELECT URLHash, EventDate, count(*) AS PageViews FROM hits_10m WHERE CounterID = 34 AND EventDate >= TIMESTAMP('2013-07-01') AND EventDate <= TIMESTAMP('2013-07-31') AND NOT Refresh != 0 AND TraficSourceID IN (-1, 6) AND RefererHash = 6202628419148573758 GROUP BY URLHash, EventDate ORDER BY PageViews DESC LIMIT 100000;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_9402@mturlrep13_201309112015_389918397.txt
hive> SELECT URLHash, EventDate, count(*) AS PageViews FROM hits_10m WHERE CounterID = 34 AND EventDate >= TIMESTAMP('2013-07-01') AND EventDate <= TIMESTAMP('2013-07-31') AND NOT Refresh != 0 AND TraficSourceID IN (-1, 6) AND RefererHash = 6202628419148573758 GROUP BY URLHash, EventDate ORDER BY PageViews DESC LIMIT 100000; ;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0404
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-11 20:15:15,126 Stage-1 map = 0%, reduce = 0%
2013-09-11 20:15:23,165 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.36 sec
2013-09-11 20:15:24,174 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.36 sec
2013-09-11 20:15:25,181 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.36 sec
2013-09-11 20:15:26,188 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.36 sec
2013-09-11 20:15:27,195 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.36 sec
2013-09-11 20:15:28,201 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.36 sec
2013-09-11 20:15:29,208 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.36 sec
2013-09-11 20:15:30,214 Stage-1 map = 97%, reduce = 8%, Cumulative CPU 26.7 sec
2013-09-11 20:15:31,220 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.89 sec
2013-09-11 20:15:32,226 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.89 sec
2013-09-11 20:15:33,231 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.89 sec
2013-09-11 20:15:34,237 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.89 sec
2013-09-11 20:15:35,243 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.89 sec
2013-09-11 20:15:36,251 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 40.1 sec
2013-09-11 20:15:37,257 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 40.1 sec
2013-09-11 20:15:38,263 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 40.1 sec
MapReduce Total cumulative CPU time: 40 seconds 100 msec
Ended Job = job_201309101627_0404
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0405
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-11 20:15:40,779 Stage-2 map = 0%, reduce = 0%
2013-09-11 20:15:42,789 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec
2013-09-11 20:15:43,794 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec