ClickHouse/dbms/benchmark/hive/log/log_10m/log_10m_
2016-02-08 00:58:58 +03:00

start time: Tue Sep 10 19:59:13 MSK 2013
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_30031@mturlrep13_201309101959_1531819947.txt
hive> ;
hive> quit;
times: 1
query: SELECT count(*) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_30467@mturlrep13_201309101959_742405451.txt
hive> SELECT count(*) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
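(The three hints Hive prints above are ordinary session settings. A minimal sketch of applying them before a run; the values here are illustrative and do not come from this log:)

```sql
-- Target bytes of input per reducer (here ~256 MB, an illustrative value)
SET hive.exec.reducers.bytes.per.reducer=268435456;
-- Cap how many reducers the planner may choose
SET hive.exec.reducers.max=4;
-- Or pin an exact reducer count, overriding the estimate entirely
SET mapred.reduce.tasks=1;
```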
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0102
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 19:59:33,010 Stage-1 map = 0%, reduce = 0%
2013-09-10 19:59:40,041 Stage-1 map = 7%, reduce = 0%
2013-09-10 19:59:46,067 Stage-1 map = 14%, reduce = 0%
2013-09-10 19:59:49,084 Stage-1 map = 22%, reduce = 0%, Cumulative CPU 45.21 sec
2013-09-10 19:59:50,090 Stage-1 map = 22%, reduce = 0%, Cumulative CPU 45.21 sec
2013-09-10 19:59:51,096 Stage-1 map = 22%, reduce = 0%, Cumulative CPU 45.21 sec
2013-09-10 19:59:52,102 Stage-1 map = 29%, reduce = 0%, Cumulative CPU 45.21 sec
2013-09-10 19:59:53,107 Stage-1 map = 29%, reduce = 0%, Cumulative CPU 45.21 sec
2013-09-10 19:59:54,112 Stage-1 map = 29%, reduce = 0%, Cumulative CPU 45.21 sec
2013-09-10 19:59:55,145 Stage-1 map = 36%, reduce = 0%, Cumulative CPU 45.21 sec
2013-09-10 19:59:56,150 Stage-1 map = 36%, reduce = 0%, Cumulative CPU 45.21 sec
2013-09-10 19:59:57,155 Stage-1 map = 36%, reduce = 0%, Cumulative CPU 45.21 sec
2013-09-10 19:59:58,160 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 45.21 sec
2013-09-10 19:59:59,165 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 45.21 sec
2013-09-10 20:00:00,170 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 45.21 sec
2013-09-10 20:00:01,177 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 70.99 sec
2013-09-10 20:00:02,183 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 70.99 sec
2013-09-10 20:00:03,189 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 70.99 sec
2013-09-10 20:00:04,194 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 70.99 sec
2013-09-10 20:00:05,221 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 70.99 sec
2013-09-10 20:00:06,226 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 70.99 sec
2013-09-10 20:00:07,231 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 70.99 sec
2013-09-10 20:00:08,237 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 70.99 sec
2013-09-10 20:00:09,242 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 70.99 sec
2013-09-10 20:00:10,247 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 70.99 sec
2013-09-10 20:00:11,253 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 70.99 sec
2013-09-10 20:00:12,258 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 70.99 sec
2013-09-10 20:00:13,264 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 70.99 sec
2013-09-10 20:00:14,269 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 70.99 sec
2013-09-10 20:00:15,303 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 70.99 sec
2013-09-10 20:00:16,308 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 70.99 sec
2013-09-10 20:00:17,313 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 70.99 sec
2013-09-10 20:00:18,317 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 70.99 sec
2013-09-10 20:00:19,322 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 70.99 sec
2013-09-10 20:00:20,327 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 70.99 sec
2013-09-10 20:00:21,332 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 70.99 sec
2013-09-10 20:00:22,337 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 70.99 sec
2013-09-10 20:00:23,342 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 70.99 sec
2013-09-10 20:00:24,347 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 70.99 sec
2013-09-10 20:00:25,353 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 70.99 sec
2013-09-10 20:00:26,358 Stage-1 map = 92%, reduce = 17%, Cumulative CPU 70.99 sec
2013-09-10 20:00:27,362 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 107.19 sec
2013-09-10 20:00:28,367 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 107.19 sec
2013-09-10 20:00:29,371 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 145.24 sec
2013-09-10 20:00:30,376 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 145.24 sec
2013-09-10 20:00:31,380 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 145.24 sec
2013-09-10 20:00:32,385 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 145.24 sec
2013-09-10 20:00:33,390 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 145.24 sec
2013-09-10 20:00:34,397 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 147.71 sec
2013-09-10 20:00:35,402 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 147.71 sec
MapReduce Total cumulative CPU time: 2 minutes 27 seconds 710 msec
Ended Job = job_201309101627_0102
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 147.71 sec HDFS Read: 1082943442 HDFS Write: 9 SUCCESS
Total MapReduce CPU Time Spent: 2 minutes 27 seconds 710 msec
OK
10000000
Time taken: 72.111 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_32544@mturlrep13_201309102000_1646880380.txt
hive> ;
hive> quit;
times: 2
query: SELECT count(*) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_518@mturlrep13_201309102000_754325810.txt
hive> SELECT count(*) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0103
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 20:00:48,452 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:00:56,488 Stage-1 map = 7%, reduce = 0%
2013-09-10 20:00:59,500 Stage-1 map = 14%, reduce = 0%
2013-09-10 20:01:05,526 Stage-1 map = 22%, reduce = 0%
2013-09-10 20:01:08,538 Stage-1 map = 29%, reduce = 0%
2013-09-10 20:01:11,550 Stage-1 map = 36%, reduce = 0%
2013-09-10 20:01:14,561 Stage-1 map = 43%, reduce = 0%
2013-09-10 20:01:16,575 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 80.56 sec
2013-09-10 20:01:17,582 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 80.56 sec
2013-09-10 20:01:18,587 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 80.56 sec
2013-09-10 20:01:19,593 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 80.56 sec
2013-09-10 20:01:20,598 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 80.56 sec
2013-09-10 20:01:21,603 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 80.56 sec
2013-09-10 20:01:22,608 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 80.56 sec
2013-09-10 20:01:23,613 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 80.56 sec
2013-09-10 20:01:24,619 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 80.56 sec
2013-09-10 20:01:25,624 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 80.56 sec
2013-09-10 20:01:26,629 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 80.56 sec
2013-09-10 20:01:27,634 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 80.56 sec
2013-09-10 20:01:28,639 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 80.56 sec
2013-09-10 20:01:29,645 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 80.56 sec
2013-09-10 20:01:30,650 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 80.56 sec
2013-09-10 20:01:31,654 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 80.56 sec
2013-09-10 20:01:32,659 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 80.56 sec
2013-09-10 20:01:33,665 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 80.56 sec
2013-09-10 20:01:34,670 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 80.56 sec
2013-09-10 20:01:35,675 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 80.56 sec
2013-09-10 20:01:36,680 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 80.56 sec
2013-09-10 20:01:37,685 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 80.56 sec
2013-09-10 20:01:38,690 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 80.56 sec
2013-09-10 20:01:39,695 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 80.56 sec
2013-09-10 20:01:40,700 Stage-1 map = 93%, reduce = 17%, Cumulative CPU 110.63 sec
2013-09-10 20:01:41,705 Stage-1 map = 93%, reduce = 17%, Cumulative CPU 110.63 sec
2013-09-10 20:01:42,710 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 110.63 sec
2013-09-10 20:01:43,715 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 145.16 sec
2013-09-10 20:01:44,719 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 145.16 sec
2013-09-10 20:01:45,723 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 145.16 sec
2013-09-10 20:01:46,727 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 145.16 sec
2013-09-10 20:01:47,732 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 145.16 sec
2013-09-10 20:01:48,737 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 145.16 sec
2013-09-10 20:01:49,744 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 147.56 sec
2013-09-10 20:01:50,761 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 147.56 sec
2013-09-10 20:01:51,766 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 147.56 sec
MapReduce Total cumulative CPU time: 2 minutes 27 seconds 560 msec
Ended Job = job_201309101627_0103
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 147.56 sec HDFS Read: 1082943442 HDFS Write: 9 SUCCESS
Total MapReduce CPU Time Spent: 2 minutes 27 seconds 560 msec
OK
10000000
Time taken: 70.619 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_2229@mturlrep13_201309102001_150357587.txt
hive> ;
hive> quit;
times: 3
query: SELECT count(*) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_2633@mturlrep13_201309102001_764470607.txt
hive> SELECT count(*) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0104
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 20:02:04,867 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:02:12,918 Stage-1 map = 7%, reduce = 0%
2013-09-10 20:02:15,932 Stage-1 map = 14%, reduce = 0%
2013-09-10 20:02:21,957 Stage-1 map = 22%, reduce = 0%
2013-09-10 20:02:24,969 Stage-1 map = 29%, reduce = 0%
2013-09-10 20:02:27,981 Stage-1 map = 36%, reduce = 0%
2013-09-10 20:02:30,995 Stage-1 map = 43%, reduce = 0%
2013-09-10 20:02:32,005 Stage-1 map = 46%, reduce = 0%, Cumulative CPU 34.44 sec
2013-09-10 20:02:33,013 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 76.6 sec
2013-09-10 20:02:34,019 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 76.6 sec
2013-09-10 20:02:35,032 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 76.6 sec
2013-09-10 20:02:36,044 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 76.6 sec
2013-09-10 20:02:37,049 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 76.6 sec
2013-09-10 20:02:38,055 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 76.6 sec
2013-09-10 20:02:39,060 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 76.6 sec
2013-09-10 20:02:40,065 Stage-1 map = 54%, reduce = 17%, Cumulative CPU 76.6 sec
2013-09-10 20:02:41,071 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 76.6 sec
2013-09-10 20:02:42,076 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 76.6 sec
2013-09-10 20:02:43,082 Stage-1 map = 61%, reduce = 17%, Cumulative CPU 76.6 sec
2013-09-10 20:02:44,087 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 76.6 sec
2013-09-10 20:02:45,092 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 76.6 sec
2013-09-10 20:02:46,112 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 76.6 sec
2013-09-10 20:02:47,116 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 76.6 sec
2013-09-10 20:02:48,131 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 76.6 sec
2013-09-10 20:02:49,136 Stage-1 map = 69%, reduce = 17%, Cumulative CPU 76.6 sec
2013-09-10 20:02:50,142 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 122.35 sec
2013-09-10 20:02:51,147 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 122.35 sec
2013-09-10 20:02:52,151 Stage-1 map = 76%, reduce = 17%, Cumulative CPU 122.35 sec
2013-09-10 20:02:53,156 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 122.35 sec
2013-09-10 20:02:54,161 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 122.35 sec
2013-09-10 20:02:55,167 Stage-1 map = 84%, reduce = 17%, Cumulative CPU 122.35 sec
2013-09-10 20:02:56,195 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 122.35 sec
2013-09-10 20:02:57,200 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 122.35 sec
2013-09-10 20:02:58,205 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 137.05 sec
2013-09-10 20:02:59,210 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 137.05 sec
2013-09-10 20:03:00,215 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 149.43 sec
2013-09-10 20:03:01,220 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 149.43 sec
2013-09-10 20:03:02,224 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 149.43 sec
2013-09-10 20:03:03,228 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 149.43 sec
2013-09-10 20:03:04,232 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 149.43 sec
2013-09-10 20:03:05,239 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 151.13 sec
2013-09-10 20:03:06,246 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 151.13 sec
2013-09-10 20:03:07,252 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 151.13 sec
MapReduce Total cumulative CPU time: 2 minutes 31 seconds 130 msec
Ended Job = job_201309101627_0104
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 151.13 sec HDFS Read: 1082943442 HDFS Write: 9 SUCCESS
Total MapReduce CPU Time Spent: 2 minutes 31 seconds 130 msec
OK
10000000
Time taken: 69.712 seconds, Fetched: 1 row(s)
hive> quit;
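(Each query in this log is run three times, and the wall-clock result for each run appears in a `Time taken:` line. A minimal sketch of pulling those timings out of a log like this and reporting the best of the runs; `best_of_runs` is a hypothetical helper, not part of the benchmark harness:)

```python
import re

# Matches Hive's summary line, e.g. "Time taken: 72.111 seconds, Fetched: 1 row(s)"
TIME_RE = re.compile(r"Time taken: ([0-9.]+) seconds")

def best_of_runs(log_text: str) -> float:
    """Return the fastest wall-clock time among all runs found in the log."""
    times = [float(m.group(1)) for m in TIME_RE.finditer(log_text)]
    if not times:
        raise ValueError("no 'Time taken' lines found")
    return min(times)

# The three runs of SELECT count(*) above produced these summary lines:
sample = """
Time taken: 72.111 seconds, Fetched: 1 row(s)
Time taken: 70.619 seconds, Fetched: 1 row(s)
Time taken: 69.712 seconds, Fetched: 1 row(s)
"""
print(best_of_runs(sample))  # prints 69.712
```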
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_4005@mturlrep13_201309102003_1205264534.txt
hive> ;
hive> quit;
times: 1
query: SELECT count(*) FROM hits_10m WHERE AdvEngineID != 0;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_4453@mturlrep13_201309102003_994011847.txt
hive> SELECT count(*) FROM hits_10m WHERE AdvEngineID != 0;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0105
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 20:03:27,324 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:03:32,353 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.26 sec
2013-09-10 20:03:33,361 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.26 sec
2013-09-10 20:03:34,368 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.26 sec
2013-09-10 20:03:35,374 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.26 sec
2013-09-10 20:03:36,380 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.26 sec
2013-09-10 20:03:37,386 Stage-1 map = 75%, reduce = 0%, Cumulative CPU 17.88 sec
2013-09-10 20:03:38,392 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 23.98 sec
2013-09-10 20:03:39,398 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 23.98 sec
2013-09-10 20:03:40,407 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 25.98 sec
2013-09-10 20:03:41,413 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 25.98 sec
2013-09-10 20:03:42,420 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 25.98 sec
MapReduce Total cumulative CPU time: 25 seconds 980 msec
Ended Job = job_201309101627_0105
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 25.98 sec HDFS Read: 907716 HDFS Write: 7 SUCCESS
Total MapReduce CPU Time Spent: 25 seconds 980 msec
OK
171127
Time taken: 24.908 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_5972@mturlrep13_201309102003_1892761219.txt
hive> ;
hive> quit;
times: 2
query: SELECT count(*) FROM hits_10m WHERE AdvEngineID != 0;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_6381@mturlrep13_201309102003_2036843139.txt
hive> SELECT count(*) FROM hits_10m WHERE AdvEngineID != 0;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0106
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 20:03:55,419 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:04:00,446 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.18 sec
2013-09-10 20:04:01,454 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.18 sec
2013-09-10 20:04:02,462 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.18 sec
2013-09-10 20:04:03,469 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.18 sec
2013-09-10 20:04:04,475 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.18 sec
2013-09-10 20:04:05,481 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 23.79 sec
2013-09-10 20:04:06,487 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 23.79 sec
2013-09-10 20:04:07,493 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 23.79 sec
2013-09-10 20:04:08,501 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 25.71 sec
2013-09-10 20:04:09,508 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 25.71 sec
2013-09-10 20:04:10,515 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 25.71 sec
MapReduce Total cumulative CPU time: 25 seconds 710 msec
Ended Job = job_201309101627_0106
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 25.71 sec HDFS Read: 907716 HDFS Write: 7 SUCCESS
Total MapReduce CPU Time Spent: 25 seconds 710 msec
OK
171127
Time taken: 22.567 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_7629@mturlrep13_201309102004_1501464055.txt
hive> ;
hive> quit;
times: 3
query: SELECT count(*) FROM hits_10m WHERE AdvEngineID != 0;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_8043@mturlrep13_201309102004_2101165091.txt
hive> SELECT count(*) FROM hits_10m WHERE AdvEngineID != 0;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0107
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 20:04:23,561 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:04:28,586 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.66 sec
2013-09-10 20:04:29,594 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.66 sec
2013-09-10 20:04:30,602 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.66 sec
2013-09-10 20:04:31,608 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.66 sec
2013-09-10 20:04:32,613 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.66 sec
2013-09-10 20:04:33,618 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 23.37 sec
2013-09-10 20:04:34,623 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 23.37 sec
2013-09-10 20:04:35,628 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 23.37 sec
2013-09-10 20:04:36,635 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 25.28 sec
2013-09-10 20:04:37,641 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 25.28 sec
2013-09-10 20:04:38,648 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 25.28 sec
MapReduce Total cumulative CPU time: 25 seconds 280 msec
Ended Job = job_201309101627_0107
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 25.28 sec HDFS Read: 907716 HDFS Write: 7 SUCCESS
Total MapReduce CPU Time Spent: 25 seconds 280 msec
OK
171127
Time taken: 22.452 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_9266@mturlrep13_201309102004_1013038557.txt
hive> ;
hive> quit;
times: 1
query: SELECT sum(AdvEngineID), count(*), avg(ResolutionWidth) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_9693@mturlrep13_201309102004_1030085538.txt
hive> SELECT sum(AdvEngineID), count(*), avg(ResolutionWidth) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0108
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 20:04:59,466 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:05:06,500 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.22 sec
2013-09-10 20:05:07,507 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.22 sec
2013-09-10 20:05:08,515 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.22 sec
2013-09-10 20:05:09,520 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.22 sec
2013-09-10 20:05:10,526 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.22 sec
2013-09-10 20:05:11,531 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.22 sec
2013-09-10 20:05:12,538 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.22 sec
2013-09-10 20:05:13,544 Stage-1 map = 97%, reduce = 0%, Cumulative CPU 23.47 sec
2013-09-10 20:05:14,550 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 31.35 sec
2013-09-10 20:05:15,554 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 31.35 sec
2013-09-10 20:05:16,559 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 31.35 sec
2013-09-10 20:05:17,564 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 31.35 sec
2013-09-10 20:05:18,569 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 31.35 sec
2013-09-10 20:05:19,576 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 33.46 sec
2013-09-10 20:05:20,581 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 33.46 sec
2013-09-10 20:05:21,587 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 33.46 sec
MapReduce Total cumulative CPU time: 33 seconds 460 msec
Ended Job = job_201309101627_0108
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 33.46 sec HDFS Read: 8109219 HDFS Write: 30 SUCCESS
Total MapReduce CPU Time Spent: 33 seconds 460 msec
OK
Time taken: 32.261 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_11697@mturlrep13_201309102005_1481049011.txt
hive> ;
hive> quit;
times: 2
query: SELECT sum(AdvEngineID), count(*), avg(ResolutionWidth) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_12122@mturlrep13_201309102005_1165569106.txt
hive> SELECT sum(AdvEngineID), count(*), avg(ResolutionWidth) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0109
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 20:05:35,519 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:05:41,547 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.67 sec
2013-09-10 20:05:42,554 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.67 sec
2013-09-10 20:05:43,560 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.67 sec
2013-09-10 20:05:44,565 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.67 sec
2013-09-10 20:05:45,570 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.67 sec
2013-09-10 20:05:46,575 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.67 sec
2013-09-10 20:05:47,581 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.67 sec
2013-09-10 20:05:48,586 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 32.39 sec
2013-09-10 20:05:49,591 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 32.39 sec
2013-09-10 20:05:50,597 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 32.98 sec
2013-09-10 20:05:51,602 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 32.98 sec
2013-09-10 20:05:52,607 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 32.98 sec
2013-09-10 20:05:53,611 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 32.98 sec
2013-09-10 20:05:54,618 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 34.53 sec
2013-09-10 20:05:55,624 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 34.53 sec
2013-09-10 20:05:56,629 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 34.53 sec
MapReduce Total cumulative CPU time: 34 seconds 530 msec
Ended Job = job_201309101627_0109
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 34.53 sec HDFS Read: 8109219 HDFS Write: 30 SUCCESS
Total MapReduce CPU Time Spent: 34 seconds 530 msec
OK
Time taken: 29.475 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_13449@mturlrep13_201309102005_1900832859.txt
hive> ;
hive> quit;
times: 3
query: SELECT sum(AdvEngineID), count(*), avg(ResolutionWidth) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_13873@mturlrep13_201309102006_801912719.txt
hive> SELECT sum(AdvEngineID), count(*), avg(ResolutionWidth) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0110
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 20:06:10,558 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:06:16,584 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.41 sec
2013-09-10 20:06:17,590 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.41 sec
2013-09-10 20:06:18,599 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.41 sec
2013-09-10 20:06:19,606 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.41 sec
2013-09-10 20:06:20,611 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.41 sec
2013-09-10 20:06:21,616 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.41 sec
2013-09-10 20:06:22,622 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.41 sec
2013-09-10 20:06:23,627 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 32.36 sec
2013-09-10 20:06:24,632 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 32.36 sec
2013-09-10 20:06:25,638 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 32.36 sec
2013-09-10 20:06:26,643 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 32.36 sec
2013-09-10 20:06:27,648 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 32.36 sec
2013-09-10 20:06:28,653 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 32.36 sec
2013-09-10 20:06:29,657 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 32.36 sec
2013-09-10 20:06:30,664 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 34.45 sec
2013-09-10 20:06:31,670 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 34.45 sec
MapReduce Total cumulative CPU time: 34 seconds 450 msec
Ended Job = job_201309101627_0110
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 34.45 sec HDFS Read: 8109219 HDFS Write: 30 SUCCESS
Total MapReduce CPU Time Spent: 34 seconds 450 msec
OK
Time taken: 29.445 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_15232@mturlrep13_201309102006_1831985638.txt
hive> ;
hive> quit;
times: 1
query: SELECT sum(UserID) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_15654@mturlrep13_201309102006_663118057.txt
hive> SELECT sum(UserID) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0111
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 20:06:52,411 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:06:59,445 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.89 sec
2013-09-10 20:07:00,452 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.89 sec
2013-09-10 20:07:01,459 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.89 sec
2013-09-10 20:07:02,465 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.89 sec
2013-09-10 20:07:03,470 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.89 sec
2013-09-10 20:07:04,476 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.89 sec
2013-09-10 20:07:05,481 Stage-1 map = 75%, reduce = 0%, Cumulative CPU 21.89 sec
2013-09-10 20:07:06,487 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 29.59 sec
2013-09-10 20:07:07,492 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 29.59 sec
2013-09-10 20:07:08,497 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 29.59 sec
2013-09-10 20:07:09,502 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 29.59 sec
2013-09-10 20:07:10,506 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 29.59 sec
2013-09-10 20:07:11,511 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 29.59 sec
2013-09-10 20:07:12,518 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 31.64 sec
2013-09-10 20:07:13,524 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 31.64 sec
MapReduce Total cumulative CPU time: 31 seconds 640 msec
Ended Job = job_201309101627_0111
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 31.64 sec HDFS Read: 57312623 HDFS Write: 21 SUCCESS
Total MapReduce CPU Time Spent: 31 seconds 640 msec
OK
-4662894107982093709
Time taken: 30.886 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_16961@mturlrep13_201309102007_560489574.txt
hive> ;
hive> quit;
times: 2
query: SELECT sum(UserID) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_17378@mturlrep13_201309102007_948891522.txt
hive> SELECT sum(UserID) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0112
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 20:07:27,959 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:07:33,989 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.72 sec
2013-09-10 20:07:34,997 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.72 sec
2013-09-10 20:07:36,004 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.72 sec
2013-09-10 20:07:37,010 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.72 sec
2013-09-10 20:07:38,016 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.72 sec
2013-09-10 20:07:39,022 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 16.72 sec
2013-09-10 20:07:40,028 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 31.86 sec
2013-09-10 20:07:41,032 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 31.86 sec
2013-09-10 20:07:42,038 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 31.86 sec
2013-09-10 20:07:43,043 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 31.86 sec
2013-09-10 20:07:44,049 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 31.86 sec
2013-09-10 20:07:45,054 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 31.86 sec
2013-09-10 20:07:46,059 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 31.86 sec
2013-09-10 20:07:47,066 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 33.75 sec
2013-09-10 20:07:48,071 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 33.75 sec
MapReduce Total cumulative CPU time: 33 seconds 750 msec
Ended Job = job_201309101627_0112
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 33.75 sec HDFS Read: 57312623 HDFS Write: 21 SUCCESS
Total MapReduce CPU Time Spent: 33 seconds 750 msec
OK
-4662894107982093709
Time taken: 28.62 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_18627@mturlrep13_201309102007_1570970698.txt
hive> ;
hive> quit;
times: 3
query: SELECT sum(UserID) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_19048@mturlrep13_201309102007_265385188.txt
hive> SELECT sum(UserID) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0113
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 20:08:01,232 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:08:07,261 Stage-1 map = 25%, reduce = 0%, Cumulative CPU 7.12 sec
2013-09-10 20:08:08,269 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.41 sec
2013-09-10 20:08:09,277 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.41 sec
2013-09-10 20:08:10,282 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.41 sec
2013-09-10 20:08:11,288 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.41 sec
2013-09-10 20:08:12,294 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 15.41 sec
2013-09-10 20:08:13,300 Stage-1 map = 75%, reduce = 0%, Cumulative CPU 21.76 sec
2013-09-10 20:08:14,304 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 29.62 sec
2013-09-10 20:08:15,309 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 29.62 sec
2013-09-10 20:08:16,315 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 29.62 sec
2013-09-10 20:08:17,320 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 29.62 sec
2013-09-10 20:08:18,325 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 29.62 sec
2013-09-10 20:08:19,329 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 29.62 sec
2013-09-10 20:08:20,334 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 29.62 sec
2013-09-10 20:08:21,341 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 31.69 sec
2013-09-10 20:08:22,347 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 31.69 sec
MapReduce Total cumulative CPU time: 31 seconds 690 msec
Ended Job = job_201309101627_0113
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 31.69 sec HDFS Read: 57312623 HDFS Write: 21 SUCCESS
Total MapReduce CPU Time Spent: 31 seconds 690 msec
OK
-4662894107982093709
Time taken: 28.461 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_20298@mturlrep13_201309102008_320778143.txt
hive> ;
hive> quit;
times: 1
query: SELECT count(DISTINCT UserID) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_20705@mturlrep13_201309102008_862349350.txt
hive> SELECT count(DISTINCT UserID) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0114
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 20:08:42,650 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:08:49,678 Stage-1 map = 43%, reduce = 0%
2013-09-10 20:08:52,698 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.9 sec
2013-09-10 20:08:53,705 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.9 sec
2013-09-10 20:08:54,712 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.9 sec
2013-09-10 20:08:55,718 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.9 sec
2013-09-10 20:08:56,723 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.9 sec
2013-09-10 20:08:57,729 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.9 sec
2013-09-10 20:08:58,734 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.9 sec
2013-09-10 20:08:59,739 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.9 sec
2013-09-10 20:09:00,744 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 27.9 sec
2013-09-10 20:09:01,749 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 27.9 sec
2013-09-10 20:09:02,754 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 54.85 sec
2013-09-10 20:09:03,758 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 54.85 sec
2013-09-10 20:09:04,763 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 54.85 sec
2013-09-10 20:09:05,768 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 54.85 sec
2013-09-10 20:09:06,774 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 54.85 sec
2013-09-10 20:09:07,779 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 54.85 sec
2013-09-10 20:09:08,784 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 54.85 sec
2013-09-10 20:09:09,790 Stage-1 map = 100%, reduce = 88%, Cumulative CPU 54.85 sec
2013-09-10 20:09:10,797 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 62.65 sec
2013-09-10 20:09:11,802 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 62.65 sec
MapReduce Total cumulative CPU time: 1 minutes 2 seconds 650 msec
Ended Job = job_201309101627_0114
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 62.65 sec HDFS Read: 57312623 HDFS Write: 8 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 2 seconds 650 msec
OK
2037258
Time taken: 39.152 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_22014@mturlrep13_201309102009_2085887753.txt
hive> ;
hive> quit;
times: 2
query: SELECT count(DISTINCT UserID) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_22418@mturlrep13_201309102009_1216544523.txt
hive> SELECT count(DISTINCT UserID) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0115
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 20:09:25,056 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:09:33,088 Stage-1 map = 43%, reduce = 0%
2013-09-10 20:09:35,102 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.41 sec
2013-09-10 20:09:36,108 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.41 sec
2013-09-10 20:09:37,115 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.41 sec
2013-09-10 20:09:38,121 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.41 sec
2013-09-10 20:09:39,127 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.41 sec
2013-09-10 20:09:40,134 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.41 sec
2013-09-10 20:09:41,140 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.41 sec
2013-09-10 20:09:42,150 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.41 sec
2013-09-10 20:09:43,155 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 27.41 sec
2013-09-10 20:09:44,160 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 27.41 sec
2013-09-10 20:09:45,165 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.74 sec
2013-09-10 20:09:46,171 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.74 sec
2013-09-10 20:09:47,176 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.74 sec
2013-09-10 20:09:48,182 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.74 sec
2013-09-10 20:09:49,187 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.74 sec
2013-09-10 20:09:50,193 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 56.57 sec
2013-09-10 20:09:51,198 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 56.57 sec
2013-09-10 20:09:52,205 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 63.64 sec
2013-09-10 20:09:53,211 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 63.64 sec
2013-09-10 20:09:54,217 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 63.64 sec
MapReduce Total cumulative CPU time: 1 minutes 3 seconds 640 msec
Ended Job = job_201309101627_0115
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 63.64 sec HDFS Read: 57312623 HDFS Write: 8 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 3 seconds 640 msec
OK
2037258
Time taken: 36.46 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_23691@mturlrep13_201309102009_809944135.txt
hive> ;
hive> quit;
times: 3
query: SELECT count(DISTINCT UserID) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_24111@mturlrep13_201309102009_812926126.txt
hive> SELECT count(DISTINCT UserID) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0116
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 20:10:08,163 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:10:15,213 Stage-1 map = 43%, reduce = 0%
2013-09-10 20:10:17,227 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.08 sec
2013-09-10 20:10:18,233 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.08 sec
2013-09-10 20:10:19,240 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.08 sec
2013-09-10 20:10:20,246 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.08 sec
2013-09-10 20:10:21,252 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.08 sec
2013-09-10 20:10:22,258 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.08 sec
2013-09-10 20:10:23,265 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.08 sec
2013-09-10 20:10:24,270 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.08 sec
2013-09-10 20:10:25,275 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 27.08 sec
2013-09-10 20:10:26,280 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 39.75 sec
2013-09-10 20:10:27,285 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 54.02 sec
2013-09-10 20:10:28,290 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 54.02 sec
2013-09-10 20:10:29,295 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 54.02 sec
2013-09-10 20:10:30,300 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 54.02 sec
2013-09-10 20:10:31,305 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 54.02 sec
2013-09-10 20:10:32,311 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 54.02 sec
2013-09-10 20:10:33,315 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 54.02 sec
2013-09-10 20:10:34,321 Stage-1 map = 100%, reduce = 89%, Cumulative CPU 54.02 sec
2013-09-10 20:10:35,327 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 61.73 sec
2013-09-10 20:10:36,333 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 61.73 sec
MapReduce Total cumulative CPU time: 1 minutes 1 seconds 730 msec
Ended Job = job_201309101627_0116
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 61.73 sec HDFS Read: 57312623 HDFS Write: 8 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 1 seconds 730 msec
OK
2037258
Time taken: 36.551 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_26095@mturlrep13_201309102010_1548501792.txt
hive> ;
hive> quit;
times: 1
query: SELECT count(DISTINCT SearchPhrase) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_26522@mturlrep13_201309102010_1522319778.txt
hive> SELECT count(DISTINCT SearchPhrase) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0117
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 20:10:57,776 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:11:04,804 Stage-1 map = 43%, reduce = 0%
2013-09-10 20:11:05,815 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.53 sec
2013-09-10 20:11:06,821 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.53 sec
2013-09-10 20:11:07,829 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.53 sec
2013-09-10 20:11:08,835 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.53 sec
2013-09-10 20:11:09,840 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.53 sec
2013-09-10 20:11:10,848 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.53 sec
2013-09-10 20:11:11,854 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.53 sec
2013-09-10 20:11:12,861 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.53 sec
2013-09-10 20:11:13,866 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 21.53 sec
2013-09-10 20:11:14,871 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.24 sec
2013-09-10 20:11:15,876 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.24 sec
2013-09-10 20:11:16,880 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.24 sec
2013-09-10 20:11:17,885 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.24 sec
2013-09-10 20:11:18,890 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.24 sec
2013-09-10 20:11:19,895 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.24 sec
2013-09-10 20:11:20,899 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.24 sec
2013-09-10 20:11:21,904 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.24 sec
2013-09-10 20:11:22,909 Stage-1 map = 100%, reduce = 92%, Cumulative CPU 42.24 sec
2013-09-10 20:11:23,916 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 50.0 sec
2013-09-10 20:11:24,922 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 50.0 sec
MapReduce Total cumulative CPU time: 50 seconds 0 msec
Ended Job = job_201309101627_0117
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 50.0 sec HDFS Read: 27820105 HDFS Write: 8 SUCCESS
Total MapReduce CPU Time Spent: 50 seconds 0 msec
OK
1110413
Time taken: 36.986 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_27838@mturlrep13_201309102011_1558608875.txt
hive> ;
hive> quit;
times: 2
query: SELECT count(DISTINCT SearchPhrase) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_28243@mturlrep13_201309102011_1322698871.txt
hive> SELECT count(DISTINCT SearchPhrase) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0118
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 20:11:37,820 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:11:45,856 Stage-1 map = 43%, reduce = 0%
2013-09-10 20:11:46,868 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.55 sec
2013-09-10 20:11:47,874 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.55 sec
2013-09-10 20:11:48,881 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.55 sec
2013-09-10 20:11:49,887 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.55 sec
2013-09-10 20:11:50,893 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.55 sec
2013-09-10 20:11:51,899 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.55 sec
2013-09-10 20:11:52,906 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.55 sec
2013-09-10 20:11:53,911 Stage-1 map = 72%, reduce = 17%, Cumulative CPU 22.55 sec
2013-09-10 20:11:54,917 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 32.95 sec
2013-09-10 20:11:55,922 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.61 sec
2013-09-10 20:11:56,926 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.61 sec
2013-09-10 20:11:57,931 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.61 sec
2013-09-10 20:11:58,936 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.61 sec
2013-09-10 20:11:59,941 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.61 sec
2013-09-10 20:12:00,946 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.61 sec
2013-09-10 20:12:01,962 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.61 sec
2013-09-10 20:12:02,967 Stage-1 map = 100%, reduce = 92%, Cumulative CPU 44.61 sec
2013-09-10 20:12:03,975 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 52.41 sec
2013-09-10 20:12:04,981 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 52.41 sec
MapReduce Total cumulative CPU time: 52 seconds 410 msec
Ended Job = job_201309101627_0118
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 52.41 sec HDFS Read: 27820105 HDFS Write: 8 SUCCESS
Total MapReduce CPU Time Spent: 52 seconds 410 msec
OK
1110413
Time taken: 34.466 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_29534@mturlrep13_201309102012_702635740.txt
hive> ;
hive> quit;
times: 3
query: SELECT count(DISTINCT SearchPhrase) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_29941@mturlrep13_201309102012_2027998571.txt
hive> SELECT count(DISTINCT SearchPhrase) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0119
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 20:12:17,991 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:12:26,021 Stage-1 map = 43%, reduce = 0%
2013-09-10 20:12:27,031 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.05 sec
2013-09-10 20:12:28,038 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.05 sec
2013-09-10 20:12:29,045 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.05 sec
2013-09-10 20:12:30,051 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.05 sec
2013-09-10 20:12:31,057 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.05 sec
2013-09-10 20:12:32,063 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.05 sec
2013-09-10 20:12:33,070 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.05 sec
2013-09-10 20:12:34,076 Stage-1 map = 72%, reduce = 17%, Cumulative CPU 22.05 sec
2013-09-10 20:12:35,081 Stage-1 map = 99%, reduce = 17%, Cumulative CPU 33.57 sec
2013-09-10 20:12:36,086 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.61 sec
2013-09-10 20:12:37,091 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.61 sec
2013-09-10 20:12:38,096 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.61 sec
2013-09-10 20:12:39,101 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.61 sec
2013-09-10 20:12:40,107 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.61 sec
2013-09-10 20:12:41,112 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.61 sec
2013-09-10 20:12:42,117 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 44.61 sec
2013-09-10 20:12:43,122 Stage-1 map = 100%, reduce = 94%, Cumulative CPU 44.61 sec
2013-09-10 20:12:44,130 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 52.1 sec
2013-09-10 20:12:45,135 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 52.1 sec
MapReduce Total cumulative CPU time: 52 seconds 100 msec
Ended Job = job_201309101627_0119
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 52.1 sec HDFS Read: 27820105 HDFS Write: 8 SUCCESS
Total MapReduce CPU Time Spent: 52 seconds 100 msec
OK
1110413
Time taken: 34.424 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_31270@mturlrep13_201309102012_1959011426.txt
hive> ;
hive> quit;
times: 1
query: SELECT min(EventDate), max(EventDate) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_31694@mturlrep13_201309102012_1837865750.txt
hive> SELECT min(EventDate), max(EventDate) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0120
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 20:13:04,874 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:13:10,902 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.53 sec
2013-09-10 20:13:11,910 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.53 sec
2013-09-10 20:13:12,917 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.53 sec
2013-09-10 20:13:13,923 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.53 sec
2013-09-10 20:13:14,928 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.53 sec
2013-09-10 20:13:15,934 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.53 sec
2013-09-10 20:13:16,939 Stage-1 map = 75%, reduce = 0%, Cumulative CPU 21.15 sec
2013-09-10 20:13:17,945 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 28.11 sec
2013-09-10 20:13:18,950 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 28.11 sec
2013-09-10 20:13:19,956 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 28.11 sec
2013-09-10 20:13:20,961 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 28.11 sec
2013-09-10 20:13:21,965 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 28.11 sec
2013-09-10 20:13:22,970 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 28.11 sec
2013-09-10 20:13:23,975 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 28.11 sec
2013-09-10 20:13:24,982 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 30.21 sec
2013-09-10 20:13:25,988 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 30.21 sec
MapReduce Total cumulative CPU time: 30 seconds 210 msec
Ended Job = job_201309101627_0120
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 30.21 sec HDFS Read: 597016 HDFS Write: 6 SUCCESS
Total MapReduce CPU Time Spent: 30 seconds 210 msec
OK
Time taken: 31.036 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_484@mturlrep13_201309102013_979592639.txt
hive> ;
hive> quit;
times: 2
query: SELECT min(EventDate), max(EventDate) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_912@mturlrep13_201309102013_1524044255.txt
hive> SELECT min(EventDate), max(EventDate) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0121
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 20:13:40,111 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:13:45,139 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.05 sec
2013-09-10 20:13:46,146 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.05 sec
2013-09-10 20:13:47,154 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.05 sec
2013-09-10 20:13:48,159 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.05 sec
2013-09-10 20:13:49,165 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.05 sec
2013-09-10 20:13:50,171 Stage-1 map = 75%, reduce = 0%, Cumulative CPU 19.88 sec
2013-09-10 20:13:51,176 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 26.39 sec
2013-09-10 20:13:52,181 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 26.39 sec
2013-09-10 20:13:53,188 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 28.17 sec
2013-09-10 20:13:54,195 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 28.17 sec
2013-09-10 20:13:55,201 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 28.17 sec
MapReduce Total cumulative CPU time: 28 seconds 170 msec
Ended Job = job_201309101627_0121
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 28.17 sec HDFS Read: 597016 HDFS Write: 6 SUCCESS
Total MapReduce CPU Time Spent: 28 seconds 170 msec
OK
Time taken: 23.561 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_2425@mturlrep13_201309102013_943607470.txt
hive> ;
hive> quit;
times: 3
query: SELECT min(EventDate), max(EventDate) FROM hits_10m;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_2853@mturlrep13_201309102014_7623515.txt
hive> SELECT min(EventDate), max(EventDate) FROM hits_10m;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0122
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 20:14:08,198 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:14:14,228 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.03 sec
2013-09-10 20:14:15,235 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.03 sec
2013-09-10 20:14:16,243 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.03 sec
2013-09-10 20:14:17,249 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.03 sec
2013-09-10 20:14:18,254 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.03 sec
2013-09-10 20:14:19,260 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 14.03 sec
2013-09-10 20:14:20,265 Stage-1 map = 75%, reduce = 0%, Cumulative CPU 20.82 sec
2013-09-10 20:14:21,270 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 28.54 sec
2013-09-10 20:14:22,276 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 28.54 sec
2013-09-10 20:14:23,284 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 30.46 sec
2013-09-10 20:14:24,289 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 30.46 sec
MapReduce Total cumulative CPU time: 30 seconds 460 msec
Ended Job = job_201309101627_0122
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 30.46 sec HDFS Read: 597016 HDFS Write: 6 SUCCESS
Total MapReduce CPU Time Spent: 30 seconds 460 msec
OK
Time taken: 23.42 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_4095@mturlrep13_201309102014_942872156.txt
hive> ;
hive> quit;
times: 1
query: SELECT AdvEngineID, count(*) AS c FROM hits_10m WHERE AdvEngineID != 0 GROUP BY AdvEngineID ORDER BY c DESC;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_4560@mturlrep13_201309102014_1712040963.txt
hive> SELECT AdvEngineID, count(*) AS c FROM hits_10m WHERE AdvEngineID != 0 GROUP BY AdvEngineID ORDER BY c DESC;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0123
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:14:44,237 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:14:49,266 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.11 sec
2013-09-10 20:14:50,273 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.11 sec
2013-09-10 20:14:51,282 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.11 sec
2013-09-10 20:14:52,288 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.11 sec
2013-09-10 20:14:53,294 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.11 sec
2013-09-10 20:14:54,301 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 24.77 sec
2013-09-10 20:14:55,307 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 24.77 sec
2013-09-10 20:14:56,313 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 24.77 sec
2013-09-10 20:14:57,320 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 28.57 sec
2013-09-10 20:14:58,326 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 28.57 sec
2013-09-10 20:14:59,332 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 28.57 sec
MapReduce Total cumulative CPU time: 28 seconds 570 msec
Ended Job = job_201309101627_0123
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0124
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:15:01,785 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:15:03,793 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.82 sec
2013-09-10 20:15:04,798 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.82 sec
2013-09-10 20:15:05,809 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.82 sec
2013-09-10 20:15:06,814 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.82 sec
2013-09-10 20:15:07,820 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.82 sec
2013-09-10 20:15:08,825 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.82 sec
2013-09-10 20:15:09,830 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.82 sec
2013-09-10 20:15:10,835 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 0.82 sec
2013-09-10 20:15:11,840 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.23 sec
2013-09-10 20:15:12,846 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.23 sec
MapReduce Total cumulative CPU time: 2 seconds 230 msec
Ended Job = job_201309101627_0124
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 28.57 sec HDFS Read: 907716 HDFS Write: 384 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.23 sec HDFS Read: 1151 HDFS Write: 60 SUCCESS
Total MapReduce CPU Time Spent: 30 seconds 800 msec
OK
Time taken: 38.302 seconds, Fetched: 9 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_7012@mturlrep13_201309102015_1321687480.txt
hive> ;
hive> quit;
times: 2
query: SELECT AdvEngineID, count(*) AS c FROM hits_10m WHERE AdvEngineID != 0 GROUP BY AdvEngineID ORDER BY c DESC;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_7736@mturlrep13_201309102015_1218789323.txt
hive> SELECT AdvEngineID, count(*) AS c FROM hits_10m WHERE AdvEngineID != 0 GROUP BY AdvEngineID ORDER BY c DESC;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0125
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:15:25,872 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:15:30,907 Stage-1 map = 25%, reduce = 0%, Cumulative CPU 5.97 sec
2013-09-10 20:15:31,916 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.9 sec
2013-09-10 20:15:32,924 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.9 sec
2013-09-10 20:15:33,931 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.9 sec
2013-09-10 20:15:34,938 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.9 sec
2013-09-10 20:15:35,944 Stage-1 map = 75%, reduce = 0%, Cumulative CPU 18.67 sec
2013-09-10 20:15:36,950 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 24.98 sec
2013-09-10 20:15:37,957 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 24.98 sec
2013-09-10 20:15:38,965 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 26.8 sec
2013-09-10 20:15:39,972 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 28.6 sec
2013-09-10 20:15:40,978 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 28.6 sec
MapReduce Total cumulative CPU time: 28 seconds 600 msec
Ended Job = job_201309101627_0125
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0126
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:15:44,517 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:15:45,522 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec
2013-09-10 20:15:46,528 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec
2013-09-10 20:15:47,534 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec
2013-09-10 20:15:48,539 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec
2013-09-10 20:15:49,544 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec
2013-09-10 20:15:50,549 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec
2013-09-10 20:15:51,555 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec
2013-09-10 20:15:52,561 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec
2013-09-10 20:15:53,567 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.24 sec
2013-09-10 20:15:54,573 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.24 sec
2013-09-10 20:15:55,578 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.24 sec
MapReduce Total cumulative CPU time: 2 seconds 240 msec
Ended Job = job_201309101627_0126
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 28.6 sec HDFS Read: 907716 HDFS Write: 384 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.24 sec HDFS Read: 1149 HDFS Write: 60 SUCCESS
Total MapReduce CPU Time Spent: 30 seconds 840 msec
OK
Time taken: 37.076 seconds, Fetched: 9 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_9595@mturlrep13_201309102015_955881835.txt
hive> ;
hive> quit;
times: 3
query: SELECT AdvEngineID, count(*) AS c FROM hits_10m WHERE AdvEngineID != 0 GROUP BY AdvEngineID ORDER BY c DESC;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_10020@mturlrep13_201309102016_1021454413.txt
hive> SELECT AdvEngineID, count(*) AS c FROM hits_10m WHERE AdvEngineID != 0 GROUP BY AdvEngineID ORDER BY c DESC;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0127
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:16:08,481 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:16:13,508 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.5 sec
2013-09-10 20:16:14,515 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.5 sec
2013-09-10 20:16:15,522 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.5 sec
2013-09-10 20:16:16,528 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.5 sec
2013-09-10 20:16:17,535 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.5 sec
2013-09-10 20:16:18,542 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 23.29 sec
2013-09-10 20:16:19,547 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 23.29 sec
2013-09-10 20:16:20,553 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 23.29 sec
2013-09-10 20:16:21,560 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 25.17 sec
2013-09-10 20:16:22,567 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 27.08 sec
2013-09-10 20:16:23,574 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 27.08 sec
MapReduce Total cumulative CPU time: 27 seconds 80 msec
Ended Job = job_201309101627_0127
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0128
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:16:27,070 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:16:28,075 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-10 20:16:29,081 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-10 20:16:30,090 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-10 20:16:31,096 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-10 20:16:32,101 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-10 20:16:33,106 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-10 20:16:34,111 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-10 20:16:35,117 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.76 sec
2013-09-10 20:16:36,122 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.12 sec
2013-09-10 20:16:37,127 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.12 sec
2013-09-10 20:16:38,133 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.12 sec
MapReduce Total cumulative CPU time: 2 seconds 120 msec
Ended Job = job_201309101627_0128
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 27.08 sec HDFS Read: 907716 HDFS Write: 384 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.12 sec HDFS Read: 1153 HDFS Write: 60 SUCCESS
Total MapReduce CPU Time Spent: 29 seconds 200 msec
OK
Time taken: 37.118 seconds, Fetched: 9 row(s)
hive> quit;
-- heavy filtering. Almost nothing remains after the filter, but we still do an aggregation.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_11865@mturlrep13_201309102016_504239459.txt
hive> ;
hive> quit;
times: 1
query: SELECT RegionID, count(DISTINCT UserID) AS u FROM hits_10m GROUP BY RegionID ORDER BY u DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_12310@mturlrep13_201309102016_364343400.txt
hive> SELECT RegionID, count(DISTINCT UserID) AS u FROM hits_10m GROUP BY RegionID ORDER BY u DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0129
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:16:59,009 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:17:06,036 Stage-1 map = 43%, reduce = 0%
2013-09-10 20:17:10,057 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 29.36 sec
2013-09-10 20:17:11,064 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 29.36 sec
2013-09-10 20:17:12,072 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 29.36 sec
2013-09-10 20:17:13,079 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 29.36 sec
2013-09-10 20:17:14,086 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 29.36 sec
2013-09-10 20:17:15,092 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 29.36 sec
2013-09-10 20:17:16,099 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 29.36 sec
2013-09-10 20:17:17,105 Stage-1 map = 72%, reduce = 8%, Cumulative CPU 29.36 sec
2013-09-10 20:17:18,111 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 29.36 sec
2013-09-10 20:17:19,117 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 29.36 sec
2013-09-10 20:17:20,122 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 29.36 sec
2013-09-10 20:17:21,127 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 57.96 sec
2013-09-10 20:17:22,133 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 57.96 sec
2013-09-10 20:17:23,139 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 57.96 sec
2013-09-10 20:17:24,146 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 57.96 sec
2013-09-10 20:17:25,151 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 57.96 sec
2013-09-10 20:17:26,158 Stage-1 map = 100%, reduce = 58%, Cumulative CPU 63.53 sec
2013-09-10 20:17:27,165 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 70.82 sec
2013-09-10 20:17:28,171 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 70.82 sec
MapReduce Total cumulative CPU time: 1 minutes 10 seconds 820 msec
Ended Job = job_201309101627_0129
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0130
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:17:30,606 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:17:32,615 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.51 sec
2013-09-10 20:17:33,621 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.51 sec
2013-09-10 20:17:34,626 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.51 sec
2013-09-10 20:17:35,631 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.51 sec
2013-09-10 20:17:36,637 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.51 sec
2013-09-10 20:17:37,642 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.51 sec
2013-09-10 20:17:38,648 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.51 sec
2013-09-10 20:17:39,653 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.51 sec
2013-09-10 20:17:40,659 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.05 sec
2013-09-10 20:17:41,665 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.05 sec
2013-09-10 20:17:42,670 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.05 sec
MapReduce Total cumulative CPU time: 3 seconds 50 msec
Ended Job = job_201309101627_0130
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 70.82 sec HDFS Read: 67340015 HDFS Write: 100142 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 3.05 sec HDFS Read: 100911 HDFS Write: 96 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 13 seconds 870 msec
OK
Time taken: 53.485 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_14188@mturlrep13_201309102017_126837512.txt
hive> ;
hive> quit;
times: 2
query: SELECT RegionID, count(DISTINCT UserID) AS u FROM hits_10m GROUP BY RegionID ORDER BY u DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_14629@mturlrep13_201309102017_1747557025.txt
hive> SELECT RegionID, count(DISTINCT UserID) AS u FROM hits_10m GROUP BY RegionID ORDER BY u DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0131
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:17:56,720 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:18:03,746 Stage-1 map = 43%, reduce = 0%
2013-09-10 20:18:06,763 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.46 sec
2013-09-10 20:18:07,770 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.46 sec
2013-09-10 20:18:08,777 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.46 sec
2013-09-10 20:18:09,783 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.46 sec
2013-09-10 20:18:10,789 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.46 sec
2013-09-10 20:18:11,795 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.46 sec
2013-09-10 20:18:12,801 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 28.46 sec
2013-09-10 20:18:13,812 Stage-1 map = 96%, reduce = 8%, Cumulative CPU 28.46 sec
2013-09-10 20:18:14,818 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 28.46 sec
2013-09-10 20:18:15,823 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 28.46 sec
2013-09-10 20:18:16,829 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 57.8 sec
2013-09-10 20:18:17,834 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 57.8 sec
2013-09-10 20:18:18,839 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 57.8 sec
2013-09-10 20:18:19,845 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 57.8 sec
2013-09-10 20:18:20,850 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 57.8 sec
2013-09-10 20:18:21,855 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 57.8 sec
2013-09-10 20:18:22,863 Stage-1 map = 100%, reduce = 58%, Cumulative CPU 63.32 sec
2013-09-10 20:18:23,869 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 70.39 sec
2013-09-10 20:18:24,875 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 70.39 sec
MapReduce Total cumulative CPU time: 1 minutes 10 seconds 390 msec
Ended Job = job_201309101627_0131
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0132
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:18:27,568 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:18:30,579 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.46 sec
2013-09-10 20:18:31,584 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.46 sec
2013-09-10 20:18:32,589 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.46 sec
2013-09-10 20:18:33,594 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.46 sec
2013-09-10 20:18:34,599 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.46 sec
2013-09-10 20:18:35,604 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.46 sec
2013-09-10 20:18:36,609 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.46 sec
2013-09-10 20:18:37,614 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.0 sec
2013-09-10 20:18:38,620 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.0 sec
2013-09-10 20:18:39,625 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.0 sec
MapReduce Total cumulative CPU time: 3 seconds 0 msec
Ended Job = job_201309101627_0132
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 70.39 sec HDFS Read: 67340015 HDFS Write: 100142 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 3.0 sec HDFS Read: 100911 HDFS Write: 96 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 13 seconds 390 msec
OK
Time taken: 51.134 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_16555@mturlrep13_201309102018_646695690.txt
hive> ;
hive> quit;
times: 3
query: SELECT RegionID, count(DISTINCT UserID) AS u FROM hits_10m GROUP BY RegionID ORDER BY u DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_16977@mturlrep13_201309102018_162818540.txt
hive> SELECT RegionID, count(DISTINCT UserID) AS u FROM hits_10m GROUP BY RegionID ORDER BY u DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0133
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:18:53,456 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:19:00,484 Stage-1 map = 43%, reduce = 0%
2013-09-10 20:19:02,498 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.76 sec
2013-09-10 20:19:03,506 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.76 sec
2013-09-10 20:19:04,513 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.76 sec
2013-09-10 20:19:05,520 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.76 sec
2013-09-10 20:19:06,526 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.76 sec
2013-09-10 20:19:07,533 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.76 sec
2013-09-10 20:19:08,540 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.76 sec
2013-09-10 20:19:09,546 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.76 sec
2013-09-10 20:19:10,558 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 27.76 sec
2013-09-10 20:19:11,564 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 27.76 sec
2013-09-10 20:19:12,570 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 42.08 sec
2013-09-10 20:19:13,576 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 56.65 sec
2013-09-10 20:19:14,582 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 56.65 sec
2013-09-10 20:19:15,588 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 56.65 sec
2013-09-10 20:19:16,594 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 56.65 sec
2013-09-10 20:19:17,600 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 56.65 sec
2013-09-10 20:19:18,608 Stage-1 map = 100%, reduce = 58%, Cumulative CPU 62.31 sec
2013-09-10 20:19:19,614 Stage-1 map = 100%, reduce = 98%, Cumulative CPU 62.31 sec
2013-09-10 20:19:20,621 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 69.64 sec
2013-09-10 20:19:21,627 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 69.64 sec
MapReduce Total cumulative CPU time: 1 minutes 9 seconds 640 msec
Ended Job = job_201309101627_0133
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0134
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:19:25,083 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:19:27,101 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.52 sec
2013-09-10 20:19:28,107 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.52 sec
2013-09-10 20:19:29,112 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.52 sec
2013-09-10 20:19:30,117 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.52 sec
2013-09-10 20:19:31,122 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.52 sec
2013-09-10 20:19:32,127 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.52 sec
2013-09-10 20:19:33,133 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.52 sec
2013-09-10 20:19:34,138 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 1.52 sec
2013-09-10 20:19:35,143 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.06 sec
2013-09-10 20:19:36,149 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.06 sec
MapReduce Total cumulative CPU time: 3 seconds 60 msec
Ended Job = job_201309101627_0134
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 69.64 sec HDFS Read: 67340015 HDFS Write: 100142 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 3.06 sec HDFS Read: 100911 HDFS Write: 96 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 12 seconds 700 msec
OK
Time taken: 50.893 seconds, Fetched: 10 row(s)
hive> quit;
-- aggregation, average number of keys.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_18929@mturlrep13_201309102019_1852281941.txt
hive> ;
hive> quit;
times: 1
query: SELECT RegionID, sum(AdvEngineID), count(*) AS c, avg(ResolutionWidth), count(DISTINCT UserID) FROM hits_10m GROUP BY RegionID ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_19369@mturlrep13_201309102019_1237243258.txt
hive> SELECT RegionID, sum(AdvEngineID), count(*) AS c, avg(ResolutionWidth), count(DISTINCT UserID) FROM hits_10m GROUP BY RegionID ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0135
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:19:56,153 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:20:03,183 Stage-1 map = 29%, reduce = 0%
2013-09-10 20:20:06,195 Stage-1 map = 43%, reduce = 0%
2013-09-10 20:20:09,213 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.32 sec
2013-09-10 20:20:10,221 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.32 sec
2013-09-10 20:20:11,229 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.32 sec
2013-09-10 20:20:12,236 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.32 sec
2013-09-10 20:20:13,242 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.32 sec
2013-09-10 20:20:14,249 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.32 sec
2013-09-10 20:20:15,255 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.32 sec
2013-09-10 20:20:16,261 Stage-1 map = 64%, reduce = 8%, Cumulative CPU 34.32 sec
2013-09-10 20:20:17,275 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 34.32 sec
2013-09-10 20:20:18,328 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 34.32 sec
2013-09-10 20:20:19,334 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 34.32 sec
2013-09-10 20:20:20,340 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 34.32 sec
2013-09-10 20:20:21,347 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 34.32 sec
2013-09-10 20:20:22,353 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 67.37 sec
2013-09-10 20:20:23,358 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 67.37 sec
2013-09-10 20:20:24,363 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 67.37 sec
2013-09-10 20:20:25,369 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 67.37 sec
2013-09-10 20:20:26,375 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 67.37 sec
2013-09-10 20:20:27,380 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 67.37 sec
2013-09-10 20:20:28,386 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 67.37 sec
2013-09-10 20:20:29,391 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 67.37 sec
2013-09-10 20:20:30,397 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 67.37 sec
2013-09-10 20:20:31,404 Stage-1 map = 100%, reduce = 58%, Cumulative CPU 74.61 sec
2013-09-10 20:20:32,409 Stage-1 map = 100%, reduce = 96%, Cumulative CPU 74.61 sec
2013-09-10 20:20:33,415 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 83.92 sec
2013-09-10 20:20:34,421 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 83.92 sec
MapReduce Total cumulative CPU time: 1 minutes 23 seconds 920 msec
Ended Job = job_201309101627_0135
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0136
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:20:36,980 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:20:39,990 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.64 sec
2013-09-10 20:20:40,995 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.64 sec
2013-09-10 20:20:42,000 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.64 sec
2013-09-10 20:20:43,005 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.64 sec
2013-09-10 20:20:44,010 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.64 sec
2013-09-10 20:20:45,016 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.64 sec
2013-09-10 20:20:46,021 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.64 sec
2013-09-10 20:20:47,027 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.21 sec
2013-09-10 20:20:48,032 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.21 sec
2013-09-10 20:20:49,038 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.21 sec
MapReduce Total cumulative CPU time: 3 seconds 210 msec
Ended Job = job_201309101627_0136
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 83.92 sec HDFS Read: 74853201 HDFS Write: 148871 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 3.21 sec HDFS Read: 149640 HDFS Write: 414 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 27 seconds 130 msec
OK
Time taken: 62.752 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_22157@mturlrep13_201309102020_335437932.txt
hive> ;
hive> quit;
times: 2
query: SELECT RegionID, sum(AdvEngineID), count(*) AS c, avg(ResolutionWidth), count(DISTINCT UserID) FROM hits_10m GROUP BY RegionID ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_22568@mturlrep13_201309102020_882304972.txt
hive> SELECT RegionID, sum(AdvEngineID), count(*) AS c, avg(ResolutionWidth), count(DISTINCT UserID) FROM hits_10m GROUP BY RegionID ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0137
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:21:02,043 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:21:10,075 Stage-1 map = 36%, reduce = 0%
2013-09-10 20:21:13,089 Stage-1 map = 43%, reduce = 0%
2013-09-10 20:21:15,103 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.24 sec
2013-09-10 20:21:16,110 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.24 sec
2013-09-10 20:21:17,119 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.24 sec
2013-09-10 20:21:18,125 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.24 sec
2013-09-10 20:21:19,132 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.24 sec
2013-09-10 20:21:20,138 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.24 sec
2013-09-10 20:21:21,145 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.24 sec
2013-09-10 20:21:22,151 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.24 sec
2013-09-10 20:21:23,157 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 34.24 sec
2013-09-10 20:21:24,162 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 34.24 sec
2013-09-10 20:21:25,168 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 34.24 sec
2013-09-10 20:21:26,174 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 34.24 sec
2013-09-10 20:21:27,179 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 49.89 sec
2013-09-10 20:21:28,185 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.04 sec
2013-09-10 20:21:29,190 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.04 sec
2013-09-10 20:21:30,195 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.04 sec
2013-09-10 20:21:31,202 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.04 sec
2013-09-10 20:21:32,209 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.04 sec
2013-09-10 20:21:33,214 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.04 sec
2013-09-10 20:21:34,220 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.04 sec
2013-09-10 20:21:35,225 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.04 sec
2013-09-10 20:21:36,231 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.04 sec
2013-09-10 20:21:37,239 Stage-1 map = 100%, reduce = 58%, Cumulative CPU 75.13 sec
2013-09-10 20:21:38,244 Stage-1 map = 100%, reduce = 96%, Cumulative CPU 75.13 sec
2013-09-10 20:21:39,250 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 84.3 sec
2013-09-10 20:21:40,256 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 84.3 sec
2013-09-10 20:21:41,262 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 84.3 sec
MapReduce Total cumulative CPU time: 1 minutes 24 seconds 300 msec
Ended Job = job_201309101627_0137
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0138
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:21:43,758 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:21:45,767 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.81 sec
2013-09-10 20:21:46,772 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.81 sec
2013-09-10 20:21:47,776 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.81 sec
2013-09-10 20:21:48,781 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.81 sec
2013-09-10 20:21:49,785 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.81 sec
2013-09-10 20:21:50,789 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.81 sec
2013-09-10 20:21:51,793 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.81 sec
2013-09-10 20:21:52,798 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.81 sec
2013-09-10 20:21:53,803 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.33 sec
2013-09-10 20:21:54,809 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.33 sec
2013-09-10 20:21:55,814 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.33 sec
MapReduce Total cumulative CPU time: 3 seconds 330 msec
Ended Job = job_201309101627_0138
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 84.3 sec HDFS Read: 74853201 HDFS Write: 148871 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 3.33 sec HDFS Read: 149640 HDFS Write: 414 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 27 seconds 630 msec
OK
Time taken: 61.04 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_24690@mturlrep13_201309102021_1595959377.txt
hive> ;
hive> quit;
times: 3
query: SELECT RegionID, sum(AdvEngineID), count(*) AS c, avg(ResolutionWidth), count(DISTINCT UserID) FROM hits_10m GROUP BY RegionID ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_25093@mturlrep13_201309102022_1638369950.txt
hive> SELECT RegionID, sum(AdvEngineID), count(*) AS c, avg(ResolutionWidth), count(DISTINCT UserID) FROM hits_10m GROUP BY RegionID ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0139
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:22:08,691 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:22:16,719 Stage-1 map = 36%, reduce = 0%
2013-09-10 20:22:19,729 Stage-1 map = 43%, reduce = 0%
2013-09-10 20:22:21,743 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 32.81 sec
2013-09-10 20:22:22,750 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 32.81 sec
2013-09-10 20:22:23,757 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 32.81 sec
2013-09-10 20:22:24,763 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 32.81 sec
2013-09-10 20:22:25,772 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 32.81 sec
2013-09-10 20:22:26,789 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 32.81 sec
2013-09-10 20:22:27,794 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 32.81 sec
2013-09-10 20:22:28,800 Stage-1 map = 88%, reduce = 8%, Cumulative CPU 32.81 sec
2013-09-10 20:22:29,804 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 32.81 sec
2013-09-10 20:22:30,809 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 32.81 sec
2013-09-10 20:22:31,815 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 32.81 sec
2013-09-10 20:22:32,820 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 48.46 sec
2013-09-10 20:22:33,825 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 66.91 sec
2013-09-10 20:22:34,830 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 66.91 sec
2013-09-10 20:22:35,835 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 66.91 sec
2013-09-10 20:22:36,840 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 66.91 sec
2013-09-10 20:22:37,846 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 66.91 sec
2013-09-10 20:22:38,851 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 66.91 sec
2013-09-10 20:22:39,857 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 66.91 sec
2013-09-10 20:22:40,862 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 66.91 sec
2013-09-10 20:22:41,867 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 66.91 sec
2013-09-10 20:22:42,872 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 66.91 sec
2013-09-10 20:22:43,879 Stage-1 map = 100%, reduce = 58%, Cumulative CPU 74.06 sec
2013-09-10 20:22:44,885 Stage-1 map = 100%, reduce = 95%, Cumulative CPU 74.06 sec
2013-09-10 20:22:45,890 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 83.62 sec
2013-09-10 20:22:46,895 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 83.62 sec
MapReduce Total cumulative CPU time: 1 minutes 23 seconds 620 msec
Ended Job = job_201309101627_0139
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0140
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:22:49,470 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:22:52,481 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.58 sec
2013-09-10 20:22:53,486 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.58 sec
2013-09-10 20:22:54,491 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.58 sec
2013-09-10 20:22:55,496 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.58 sec
2013-09-10 20:22:56,501 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.58 sec
2013-09-10 20:22:57,505 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.58 sec
2013-09-10 20:22:58,510 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.58 sec
2013-09-10 20:22:59,516 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.1 sec
2013-09-10 20:23:00,521 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.1 sec
2013-09-10 20:23:01,527 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 3.1 sec
MapReduce Total cumulative CPU time: 3 seconds 100 msec
Ended Job = job_201309101627_0140
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 83.62 sec HDFS Read: 74853201 HDFS Write: 148871 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 3.1 sec HDFS Read: 149640 HDFS Write: 414 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 26 seconds 720 msec
OK
Time taken: 60.141 seconds, Fetched: 10 row(s)
hive> quit;
-- aggregation with a medium number of keys, several aggregate functions.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_27176@mturlrep13_201309102023_1934793750.txt
hive> ;
hive> quit;
times: 1
query: SELECT MobilePhoneModel, count(DISTINCT UserID) AS u FROM hits_10m WHERE MobilePhoneModel != '' GROUP BY MobilePhoneModel ORDER BY u DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_27614@mturlrep13_201309102023_1296494329.txt
hive> SELECT MobilePhoneModel, count(DISTINCT UserID) AS u FROM hits_10m WHERE MobilePhoneModel != '' GROUP BY MobilePhoneModel ORDER BY u DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0141
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:23:21,374 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:23:28,411 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.04 sec
2013-09-10 20:23:29,419 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.04 sec
2013-09-10 20:23:30,426 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.04 sec
2013-09-10 20:23:31,433 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.04 sec
2013-09-10 20:23:32,439 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.04 sec
2013-09-10 20:23:33,446 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.04 sec
2013-09-10 20:23:34,453 Stage-1 map = 75%, reduce = 0%, Cumulative CPU 17.69 sec
2013-09-10 20:23:35,460 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 23.96 sec
2013-09-10 20:23:36,466 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 23.96 sec
2013-09-10 20:23:37,472 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 23.96 sec
2013-09-10 20:23:38,478 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 23.96 sec
2013-09-10 20:23:39,483 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 23.96 sec
2013-09-10 20:23:40,489 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 23.96 sec
2013-09-10 20:23:41,496 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 23.96 sec
2013-09-10 20:23:42,503 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 30.53 sec
2013-09-10 20:23:43,510 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 30.53 sec
MapReduce Total cumulative CPU time: 30 seconds 530 msec
Ended Job = job_201309101627_0141
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0142
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:23:46,001 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:23:48,011 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.82 sec
2013-09-10 20:23:49,017 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.82 sec
2013-09-10 20:23:50,022 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.82 sec
2013-09-10 20:23:51,028 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.82 sec
2013-09-10 20:23:52,033 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.82 sec
2013-09-10 20:23:53,038 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.82 sec
2013-09-10 20:23:54,043 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.82 sec
2013-09-10 20:23:55,049 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 0.82 sec
2013-09-10 20:23:56,055 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.21 sec
2013-09-10 20:23:57,060 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.21 sec
MapReduce Total cumulative CPU time: 2 seconds 210 msec
Ended Job = job_201309101627_0142
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 30.53 sec HDFS Read: 58273488 HDFS Write: 21128 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.21 sec HDFS Read: 21897 HDFS Write: 127 SUCCESS
Total MapReduce CPU Time Spent: 32 seconds 740 msec
OK
Time taken: 45.242 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_29548@mturlrep13_201309102023_2146255552.txt
hive> ;
hive> quit;
times: 2
query: SELECT MobilePhoneModel, count(DISTINCT UserID) AS u FROM hits_10m WHERE MobilePhoneModel != '' GROUP BY MobilePhoneModel ORDER BY u DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_29955@mturlrep13_201309102024_1375294080.txt
hive> SELECT MobilePhoneModel, count(DISTINCT UserID) AS u FROM hits_10m WHERE MobilePhoneModel != '' GROUP BY MobilePhoneModel ORDER BY u DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0143
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:24:10,138 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:24:16,170 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.73 sec
2013-09-10 20:24:17,178 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.73 sec
2013-09-10 20:24:18,185 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.73 sec
2013-09-10 20:24:19,192 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.73 sec
2013-09-10 20:24:20,198 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.73 sec
2013-09-10 20:24:21,205 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.73 sec
2013-09-10 20:24:22,212 Stage-1 map = 75%, reduce = 0%, Cumulative CPU 17.85 sec
2013-09-10 20:24:23,218 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 24.46 sec
2013-09-10 20:24:24,224 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 24.46 sec
2013-09-10 20:24:25,230 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 24.46 sec
2013-09-10 20:24:26,236 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 24.46 sec
2013-09-10 20:24:27,242 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 24.46 sec
2013-09-10 20:24:28,247 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 24.46 sec
2013-09-10 20:24:29,253 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 24.46 sec
2013-09-10 20:24:30,261 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 31.02 sec
2013-09-10 20:24:31,267 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 31.02 sec
2013-09-10 20:24:32,273 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 31.02 sec
MapReduce Total cumulative CPU time: 31 seconds 20 msec
Ended Job = job_201309101627_0143
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0144
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:24:35,773 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:24:37,782 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.02 sec
2013-09-10 20:24:38,787 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.02 sec
2013-09-10 20:24:39,792 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.02 sec
2013-09-10 20:24:40,797 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.02 sec
2013-09-10 20:24:41,801 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.02 sec
2013-09-10 20:24:42,806 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.02 sec
2013-09-10 20:24:43,812 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 1.02 sec
2013-09-10 20:24:44,817 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.49 sec
2013-09-10 20:24:45,823 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.49 sec
2013-09-10 20:24:46,829 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.49 sec
MapReduce Total cumulative CPU time: 2 seconds 490 msec
Ended Job = job_201309101627_0144
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 31.02 sec HDFS Read: 58273488 HDFS Write: 21128 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.49 sec HDFS Read: 21897 HDFS Write: 127 SUCCESS
Total MapReduce CPU Time Spent: 33 seconds 510 msec
OK
Time taken: 44.206 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_31926@mturlrep13_201309102024_227013922.txt
hive> ;
hive> quit;
times: 3
query: SELECT MobilePhoneModel, count(DISTINCT UserID) AS u FROM hits_10m WHERE MobilePhoneModel != '' GROUP BY MobilePhoneModel ORDER BY u DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_32329@mturlrep13_201309102024_666768593.txt
hive> SELECT MobilePhoneModel, count(DISTINCT UserID) AS u FROM hits_10m WHERE MobilePhoneModel != '' GROUP BY MobilePhoneModel ORDER BY u DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0145
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:25:00,728 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:25:05,756 Stage-1 map = 25%, reduce = 0%, Cumulative CPU 5.9 sec
2013-09-10 20:25:06,765 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.21 sec
2013-09-10 20:25:07,772 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.21 sec
2013-09-10 20:25:08,779 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.21 sec
2013-09-10 20:25:09,786 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.21 sec
2013-09-10 20:25:10,792 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.21 sec
2013-09-10 20:25:11,798 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 12.21 sec
2013-09-10 20:25:12,804 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 28.24 sec
2013-09-10 20:25:13,810 Stage-1 map = 100%, reduce = 29%, Cumulative CPU 28.24 sec
2013-09-10 20:25:14,818 Stage-1 map = 100%, reduce = 63%, Cumulative CPU 31.4 sec
2013-09-10 20:25:15,824 Stage-1 map = 100%, reduce = 63%, Cumulative CPU 31.4 sec
2013-09-10 20:25:16,831 Stage-1 map = 100%, reduce = 63%, Cumulative CPU 31.4 sec
2013-09-10 20:25:17,837 Stage-1 map = 100%, reduce = 63%, Cumulative CPU 31.4 sec
2013-09-10 20:25:18,843 Stage-1 map = 100%, reduce = 63%, Cumulative CPU 31.4 sec
2013-09-10 20:25:19,849 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 34.7 sec
2013-09-10 20:25:20,855 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 34.7 sec
2013-09-10 20:25:21,861 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 34.7 sec
MapReduce Total cumulative CPU time: 34 seconds 700 msec
Ended Job = job_201309101627_0145
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0146
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:25:24,374 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:25:26,382 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.88 sec
2013-09-10 20:25:27,388 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.88 sec
2013-09-10 20:25:28,393 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.88 sec
2013-09-10 20:25:29,398 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.88 sec
2013-09-10 20:25:30,402 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.88 sec
2013-09-10 20:25:31,407 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.88 sec
2013-09-10 20:25:32,412 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.88 sec
2013-09-10 20:25:33,418 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 0.88 sec
2013-09-10 20:25:34,423 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.41 sec
2013-09-10 20:25:35,429 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.41 sec
MapReduce Total cumulative CPU time: 2 seconds 410 msec
Ended Job = job_201309101627_0146
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 34.7 sec HDFS Read: 58273488 HDFS Write: 21128 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.41 sec HDFS Read: 21895 HDFS Write: 127 SUCCESS
Total MapReduce CPU Time Spent: 37 seconds 110 msec
OK
Time taken: 43.002 seconds, Fetched: 10 row(s)
hive> quit;
-- heavy filtering on strings, then aggregation by strings.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_2839@mturlrep13_201309102025_1906588406.txt
hive> ;
hive> quit;
times: 1
query: SELECT MobilePhone, MobilePhoneModel, count(DISTINCT UserID) AS u FROM hits_10m WHERE MobilePhoneModel != '' GROUP BY MobilePhone, MobilePhoneModel ORDER BY u DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_3277@mturlrep13_201309102025_1190023510.txt
hive> SELECT MobilePhone, MobilePhoneModel, count(DISTINCT UserID) AS u FROM hits_10m WHERE MobilePhoneModel != '' GROUP BY MobilePhone, MobilePhoneModel ORDER BY u DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0147
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:25:56,480 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:26:02,515 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.68 sec
2013-09-10 20:26:03,523 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.68 sec
2013-09-10 20:26:04,531 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.68 sec
2013-09-10 20:26:05,538 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.68 sec
2013-09-10 20:26:06,545 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.68 sec
2013-09-10 20:26:07,552 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.68 sec
2013-09-10 20:26:08,558 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.68 sec
2013-09-10 20:26:09,564 Stage-1 map = 100%, reduce = 8%, Cumulative CPU 23.83 sec
2013-09-10 20:26:10,570 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 23.83 sec
2013-09-10 20:26:11,577 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 23.83 sec
2013-09-10 20:26:12,583 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 23.83 sec
2013-09-10 20:26:13,589 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 23.83 sec
2013-09-10 20:26:14,595 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 23.83 sec
2013-09-10 20:26:15,601 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 23.83 sec
2013-09-10 20:26:16,609 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 31.35 sec
2013-09-10 20:26:17,636 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 31.35 sec
2013-09-10 20:26:18,643 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 31.35 sec
MapReduce Total cumulative CPU time: 31 seconds 350 msec
Ended Job = job_201309101627_0147
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0148
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:26:22,244 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:26:23,249 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.91 sec
2013-09-10 20:26:24,255 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.91 sec
2013-09-10 20:26:25,261 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.91 sec
2013-09-10 20:26:26,266 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.91 sec
2013-09-10 20:26:27,272 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.91 sec
2013-09-10 20:26:28,277 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.91 sec
2013-09-10 20:26:29,282 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.91 sec
2013-09-10 20:26:30,287 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.91 sec
2013-09-10 20:26:31,294 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.34 sec
2013-09-10 20:26:32,299 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.34 sec
2013-09-10 20:26:33,305 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.34 sec
MapReduce Total cumulative CPU time: 2 seconds 340 msec
Ended Job = job_201309101627_0148
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 31.35 sec HDFS Read: 59259422 HDFS Write: 22710 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.34 sec HDFS Read: 23479 HDFS Write: 149 SUCCESS
Total MapReduce CPU Time Spent: 33 seconds 690 msec
OK
Time taken: 46.938 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_5533@mturlrep13_201309102026_343654465.txt
hive> ;
hive> quit;
times: 2
query: SELECT MobilePhone, MobilePhoneModel, count(DISTINCT UserID) AS u FROM hits_10m WHERE MobilePhoneModel != '' GROUP BY MobilePhone, MobilePhoneModel ORDER BY u DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_5957@mturlrep13_201309102026_864824933.txt
hive> SELECT MobilePhone, MobilePhoneModel, count(DISTINCT UserID) AS u FROM hits_10m WHERE MobilePhoneModel != '' GROUP BY MobilePhone, MobilePhoneModel ORDER BY u DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0149
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:26:46,474 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:26:52,507 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.69 sec
2013-09-10 20:26:53,515 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.69 sec
2013-09-10 20:26:54,522 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.69 sec
2013-09-10 20:26:55,528 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.69 sec
2013-09-10 20:26:56,534 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.69 sec
2013-09-10 20:26:57,540 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.69 sec
2013-09-10 20:26:58,546 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 23.76 sec
2013-09-10 20:26:59,552 Stage-1 map = 100%, reduce = 13%, Cumulative CPU 23.76 sec
2013-09-10 20:27:00,559 Stage-1 map = 100%, reduce = 29%, Cumulative CPU 23.76 sec
2013-09-10 20:27:01,568 Stage-1 map = 100%, reduce = 63%, Cumulative CPU 27.2 sec
2013-09-10 20:27:02,574 Stage-1 map = 100%, reduce = 63%, Cumulative CPU 27.2 sec
2013-09-10 20:27:03,580 Stage-1 map = 100%, reduce = 63%, Cumulative CPU 27.2 sec
2013-09-10 20:27:04,586 Stage-1 map = 100%, reduce = 63%, Cumulative CPU 27.2 sec
2013-09-10 20:27:05,592 Stage-1 map = 100%, reduce = 63%, Cumulative CPU 27.2 sec
2013-09-10 20:27:06,599 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 30.79 sec
2013-09-10 20:27:07,605 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 30.79 sec
MapReduce Total cumulative CPU time: 30 seconds 790 msec
Ended Job = job_201309101627_0149
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0150
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:27:10,095 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:27:12,104 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.87 sec
2013-09-10 20:27:13,109 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.87 sec
2013-09-10 20:27:14,115 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.87 sec
2013-09-10 20:27:15,120 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.87 sec
2013-09-10 20:27:16,125 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.87 sec
2013-09-10 20:27:17,130 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.87 sec
2013-09-10 20:27:18,136 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.87 sec
2013-09-10 20:27:19,141 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.87 sec
2013-09-10 20:27:20,148 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.35 sec
2013-09-10 20:27:21,154 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.35 sec
2013-09-10 20:27:22,160 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.35 sec
MapReduce Total cumulative CPU time: 2 seconds 350 msec
Ended Job = job_201309101627_0150
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 30.79 sec HDFS Read: 59259422 HDFS Write: 22710 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.35 sec HDFS Read: 23479 HDFS Write: 149 SUCCESS
Total MapReduce CPU Time Spent: 33 seconds 140 msec
OK
Time taken: 43.048 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_7889@mturlrep13_201309102027_433190922.txt
hive> ;
hive> quit;
times: 3
query: SELECT MobilePhone, MobilePhoneModel, count(DISTINCT UserID) AS u FROM hits_10m WHERE MobilePhoneModel != '' GROUP BY MobilePhone, MobilePhoneModel ORDER BY u DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_8295@mturlrep13_201309102027_581718614.txt
hive> SELECT MobilePhone, MobilePhoneModel, count(DISTINCT UserID) AS u FROM hits_10m WHERE MobilePhoneModel != '' GROUP BY MobilePhone, MobilePhoneModel ORDER BY u DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0151
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:27:35,347 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:27:41,379 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.72 sec
2013-09-10 20:27:42,387 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.72 sec
2013-09-10 20:27:43,395 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.72 sec
2013-09-10 20:27:44,401 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.72 sec
2013-09-10 20:27:45,408 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.72 sec
2013-09-10 20:27:46,415 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 11.72 sec
2013-09-10 20:27:47,422 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 23.93 sec
2013-09-10 20:27:48,428 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 23.93 sec
2013-09-10 20:27:49,434 Stage-1 map = 100%, reduce = 29%, Cumulative CPU 23.93 sec
2013-09-10 20:27:50,442 Stage-1 map = 100%, reduce = 63%, Cumulative CPU 27.74 sec
2013-09-10 20:27:51,449 Stage-1 map = 100%, reduce = 63%, Cumulative CPU 28.35 sec
2013-09-10 20:27:52,455 Stage-1 map = 100%, reduce = 63%, Cumulative CPU 28.35 sec
2013-09-10 20:27:53,461 Stage-1 map = 100%, reduce = 63%, Cumulative CPU 28.35 sec
2013-09-10 20:27:54,467 Stage-1 map = 100%, reduce = 63%, Cumulative CPU 28.35 sec
2013-09-10 20:27:55,473 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 31.22 sec
2013-09-10 20:27:56,479 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 31.22 sec
2013-09-10 20:27:57,486 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 31.22 sec
MapReduce Total cumulative CPU time: 31 seconds 220 msec
Ended Job = job_201309101627_0151
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0152
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:28:00,985 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:28:02,995 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.96 sec
2013-09-10 20:28:04,000 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.96 sec
2013-09-10 20:28:05,005 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.96 sec
2013-09-10 20:28:06,010 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.96 sec
2013-09-10 20:28:07,016 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.96 sec
2013-09-10 20:28:08,021 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.96 sec
2013-09-10 20:28:09,027 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.96 sec
2013-09-10 20:28:10,033 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.4 sec
2013-09-10 20:28:11,039 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.4 sec
2013-09-10 20:28:12,044 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.4 sec
MapReduce Total cumulative CPU time: 2 seconds 400 msec
Ended Job = job_201309101627_0152
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 31.22 sec HDFS Read: 59259422 HDFS Write: 22710 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.4 sec HDFS Read: 23479 HDFS Write: 149 SUCCESS
Total MapReduce CPU Time Spent: 33 seconds 620 msec
OK
Time taken: 44.124 seconds, Fetched: 10 row(s)
hive> quit;
-- heavy filtering on a string column, then aggregation by a pair of a number and a string.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_10235@mturlrep13_201309102028_1893210303.txt
hive> ;
hive> quit;
times: 1
query: SELECT SearchPhrase, count(*) AS c FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_10677@mturlrep13_201309102028_1721692497.txt
hive> SELECT SearchPhrase, count(*) AS c FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0153
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:28:32,602 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:28:39,632 Stage-1 map = 43%, reduce = 0%
2013-09-10 20:28:40,644 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.88 sec
2013-09-10 20:28:41,652 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.88 sec
2013-09-10 20:28:42,659 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.88 sec
2013-09-10 20:28:43,665 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.88 sec
2013-09-10 20:28:44,671 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.88 sec
2013-09-10 20:28:45,677 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.88 sec
2013-09-10 20:28:46,684 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.88 sec
2013-09-10 20:28:47,691 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.88 sec
2013-09-10 20:28:48,697 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 41.87 sec
2013-09-10 20:28:49,702 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 41.87 sec
2013-09-10 20:28:50,708 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 41.87 sec
2013-09-10 20:28:51,714 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 43.54 sec
2013-09-10 20:28:52,720 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 43.54 sec
2013-09-10 20:28:53,726 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 43.54 sec
2013-09-10 20:28:54,732 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 43.54 sec
2013-09-10 20:28:55,738 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 43.54 sec
2013-09-10 20:28:56,744 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 43.54 sec
2013-09-10 20:28:57,752 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 56.79 sec
2013-09-10 20:28:58,758 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 56.79 sec
2013-09-10 20:28:59,763 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 56.79 sec
MapReduce Total cumulative CPU time: 56 seconds 790 msec
Ended Job = job_201309101627_0153
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0154
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:29:03,287 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:29:10,313 Stage-2 map = 50%, reduce = 0%
2013-09-10 20:29:12,321 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.87 sec
2013-09-10 20:29:13,327 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.87 sec
2013-09-10 20:29:14,332 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.87 sec
2013-09-10 20:29:15,336 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.87 sec
2013-09-10 20:29:16,340 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.87 sec
2013-09-10 20:29:17,345 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.87 sec
2013-09-10 20:29:18,350 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.87 sec
2013-09-10 20:29:19,354 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.87 sec
2013-09-10 20:29:20,360 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 13.87 sec
2013-09-10 20:29:21,365 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 18.4 sec
2013-09-10 20:29:22,370 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 18.4 sec
2013-09-10 20:29:23,375 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 18.4 sec
MapReduce Total cumulative CPU time: 18 seconds 400 msec
Ended Job = job_201309101627_0154
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 56.79 sec HDFS Read: 27820105 HDFS Write: 79726641 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 18.4 sec HDFS Read: 79727410 HDFS Write: 275 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 15 seconds 190 msec
OK
Time taken: 60.48 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_12624@mturlrep13_201309102029_1863431716.txt
hive> ;
hive> quit;
times: 2
query: SELECT SearchPhrase, count(*) AS c FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_13040@mturlrep13_201309102029_973897390.txt
hive> SELECT SearchPhrase, count(*) AS c FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0155
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:29:37,380 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:29:44,415 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.02 sec
2013-09-10 20:29:45,423 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.02 sec
2013-09-10 20:29:46,431 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.02 sec
2013-09-10 20:29:47,438 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.02 sec
2013-09-10 20:29:48,445 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.02 sec
2013-09-10 20:29:49,452 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.02 sec
2013-09-10 20:29:50,458 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.02 sec
2013-09-10 20:29:51,465 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.02 sec
2013-09-10 20:29:52,472 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.52 sec
2013-09-10 20:29:53,478 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.52 sec
2013-09-10 20:29:54,484 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.52 sec
2013-09-10 20:29:55,490 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.52 sec
2013-09-10 20:29:56,496 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.52 sec
2013-09-10 20:29:57,502 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.52 sec
2013-09-10 20:29:58,508 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.52 sec
2013-09-10 20:29:59,514 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.52 sec
2013-09-10 20:30:00,520 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.52 sec
2013-09-10 20:30:01,529 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 56.86 sec
2013-09-10 20:30:02,535 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 56.86 sec
2013-09-10 20:30:03,541 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 56.86 sec
MapReduce Total cumulative CPU time: 56 seconds 860 msec
Ended Job = job_201309101627_0155
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0156
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:30:06,136 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:30:14,165 Stage-2 map = 50%, reduce = 0%
2013-09-10 20:30:16,174 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.7 sec
2013-09-10 20:30:17,180 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.7 sec
2013-09-10 20:30:18,186 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.7 sec
2013-09-10 20:30:19,191 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.7 sec
2013-09-10 20:30:20,195 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.7 sec
2013-09-10 20:30:21,200 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.7 sec
2013-09-10 20:30:22,205 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.7 sec
2013-09-10 20:30:23,210 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.7 sec
2013-09-10 20:30:24,216 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 13.7 sec
2013-09-10 20:30:25,221 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 18.4 sec
2013-09-10 20:30:26,227 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 18.4 sec
2013-09-10 20:30:27,232 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 18.4 sec
MapReduce Total cumulative CPU time: 18 seconds 400 msec
Ended Job = job_201309101627_0156
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 56.86 sec HDFS Read: 27820105 HDFS Write: 79726641 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 18.4 sec HDFS Read: 79727410 HDFS Write: 275 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 15 seconds 260 msec
OK
Time taken: 58.248 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_15653@mturlrep13_201309102030_1115123601.txt
hive> ;
hive> quit;
times: 3
query: SELECT SearchPhrase, count(*) AS c FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_16121@mturlrep13_201309102030_1858922257.txt
hive> SELECT SearchPhrase, count(*) AS c FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0157
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:30:40,546 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:30:48,583 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.04 sec
2013-09-10 20:30:49,592 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.04 sec
2013-09-10 20:30:50,600 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.04 sec
2013-09-10 20:30:51,608 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.04 sec
2013-09-10 20:30:52,614 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.04 sec
2013-09-10 20:30:53,621 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.04 sec
2013-09-10 20:30:54,628 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.04 sec
2013-09-10 20:30:55,635 Stage-1 map = 96%, reduce = 8%, Cumulative CPU 21.04 sec
2013-09-10 20:30:56,641 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.64 sec
2013-09-10 20:30:57,648 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.64 sec
2013-09-10 20:30:58,655 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.64 sec
2013-09-10 20:30:59,661 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.64 sec
2013-09-10 20:31:00,667 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.64 sec
2013-09-10 20:31:01,673 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.64 sec
2013-09-10 20:31:02,679 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.64 sec
2013-09-10 20:31:03,685 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 42.64 sec
2013-09-10 20:31:04,693 Stage-1 map = 100%, reduce = 58%, Cumulative CPU 49.56 sec
2013-09-10 20:31:05,699 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 56.8 sec
2013-09-10 20:31:06,706 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 56.8 sec
MapReduce Total cumulative CPU time: 56 seconds 800 msec
Ended Job = job_201309101627_0157
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0158
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:31:10,220 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:31:17,244 Stage-2 map = 50%, reduce = 0%
2013-09-10 20:31:20,255 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.82 sec
2013-09-10 20:31:21,260 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.82 sec
2013-09-10 20:31:22,265 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.82 sec
2013-09-10 20:31:23,269 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.82 sec
2013-09-10 20:31:24,273 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.82 sec
2013-09-10 20:31:25,277 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.82 sec
2013-09-10 20:31:26,281 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.82 sec
2013-09-10 20:31:27,286 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 13.82 sec
2013-09-10 20:31:28,291 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 13.82 sec
2013-09-10 20:31:29,296 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 18.28 sec
2013-09-10 20:31:30,301 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 18.28 sec
MapReduce Total cumulative CPU time: 18 seconds 280 msec
Ended Job = job_201309101627_0158
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 56.8 sec HDFS Read: 27820105 HDFS Write: 79726641 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 18.28 sec HDFS Read: 79727410 HDFS Write: 275 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 15 seconds 80 msec
OK
Time taken: 57.158 seconds, Fetched: 10 row(s)
hive> quit;
-- moderate filtering on a string column, then aggregation by string, with a large number of keys.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_18105@mturlrep13_201309102031_1802187609.txt
hive> ;
hive> quit;
times: 1
query: SELECT SearchPhrase, count(DISTINCT UserID) AS u FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchPhrase ORDER BY u DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_18522@mturlrep13_201309102031_2086673923.txt
hive> SELECT SearchPhrase, count(DISTINCT UserID) AS u FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchPhrase ORDER BY u DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0159
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:31:51,697 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:31:58,726 Stage-1 map = 36%, reduce = 0%
2013-09-10 20:32:00,742 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.86 sec
2013-09-10 20:32:01,750 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.86 sec
2013-09-10 20:32:02,758 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.86 sec
2013-09-10 20:32:03,765 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.86 sec
2013-09-10 20:32:04,772 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.86 sec
2013-09-10 20:32:05,778 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.86 sec
2013-09-10 20:32:06,785 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.86 sec
2013-09-10 20:32:07,792 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 21.86 sec
2013-09-10 20:32:08,798 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 21.86 sec
2013-09-10 20:32:09,804 Stage-1 map = 93%, reduce = 17%, Cumulative CPU 34.11 sec
2013-09-10 20:32:10,810 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 45.7 sec
2013-09-10 20:32:11,815 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 45.7 sec
2013-09-10 20:32:12,821 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 45.7 sec
2013-09-10 20:32:13,827 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 45.7 sec
2013-09-10 20:32:14,833 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 45.7 sec
2013-09-10 20:32:15,838 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 45.7 sec
2013-09-10 20:32:16,845 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 45.7 sec
2013-09-10 20:32:17,852 Stage-1 map = 100%, reduce = 94%, Cumulative CPU 53.79 sec
2013-09-10 20:32:18,858 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 62.45 sec
2013-09-10 20:32:19,864 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 62.45 sec
2013-09-10 20:32:20,870 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 62.45 sec
MapReduce Total cumulative CPU time: 1 minutes 2 seconds 450 msec
Ended Job = job_201309101627_0159
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0160
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:32:23,381 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:32:31,407 Stage-2 map = 50%, reduce = 0%
2013-09-10 20:32:33,415 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.98 sec
2013-09-10 20:32:34,420 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.98 sec
2013-09-10 20:32:35,425 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.98 sec
2013-09-10 20:32:36,430 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.98 sec
2013-09-10 20:32:37,435 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.98 sec
2013-09-10 20:32:38,439 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.98 sec
2013-09-10 20:32:39,444 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.98 sec
2013-09-10 20:32:40,450 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.98 sec
2013-09-10 20:32:41,454 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 13.98 sec
2013-09-10 20:32:42,460 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 18.54 sec
2013-09-10 20:32:43,465 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 18.54 sec
2013-09-10 20:32:44,470 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 18.54 sec
MapReduce Total cumulative CPU time: 18 seconds 540 msec
Ended Job = job_201309101627_0160
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 62.45 sec HDFS Read: 84536695 HDFS Write: 79726544 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 18.54 sec HDFS Read: 79727313 HDFS Write: 293 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 20 seconds 990 msec
OK
Time taken: 62.742 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_20555@mturlrep13_201309102032_475578712.txt
hive> ;
hive> quit;
times: 2
query: SELECT SearchPhrase, count(DISTINCT UserID) AS u FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchPhrase ORDER BY u DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_20972@mturlrep13_201309102032_588341028.txt
hive> SELECT SearchPhrase, count(DISTINCT UserID) AS u FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchPhrase ORDER BY u DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0161
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:32:58,361 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:33:05,390 Stage-1 map = 43%, reduce = 0%
2013-09-10 20:33:06,404 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.79 sec
2013-09-10 20:33:07,412 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.79 sec
2013-09-10 20:33:08,419 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.79 sec
2013-09-10 20:33:09,426 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.79 sec
2013-09-10 20:33:10,433 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.79 sec
2013-09-10 20:33:11,439 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.79 sec
2013-09-10 20:33:12,445 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 22.79 sec
2013-09-10 20:33:13,453 Stage-1 map = 92%, reduce = 17%, Cumulative CPU 22.79 sec
2013-09-10 20:33:14,459 Stage-1 map = 93%, reduce = 17%, Cumulative CPU 34.01 sec
2013-09-10 20:33:15,464 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 46.13 sec
2013-09-10 20:33:16,469 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 46.13 sec
2013-09-10 20:33:17,474 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 46.13 sec
2013-09-10 20:33:18,480 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 46.13 sec
2013-09-10 20:33:19,487 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 46.13 sec
2013-09-10 20:33:20,493 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 46.13 sec
2013-09-10 20:33:21,500 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 46.13 sec
2013-09-10 20:33:22,506 Stage-1 map = 100%, reduce = 85%, Cumulative CPU 46.13 sec
2013-09-10 20:33:23,514 Stage-1 map = 100%, reduce = 92%, Cumulative CPU 54.52 sec
2013-09-10 20:33:24,520 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 63.25 sec
2013-09-10 20:33:25,526 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 63.25 sec
MapReduce Total cumulative CPU time: 1 minutes 3 seconds 250 msec
Ended Job = job_201309101627_0161
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0162
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:33:28,009 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:33:36,038 Stage-2 map = 50%, reduce = 0%
2013-09-10 20:33:38,047 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 14.62 sec
2013-09-10 20:33:39,052 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 14.62 sec
2013-09-10 20:33:40,057 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 14.62 sec
2013-09-10 20:33:41,062 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 14.62 sec
2013-09-10 20:33:42,067 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 14.62 sec
2013-09-10 20:33:43,072 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 14.62 sec
2013-09-10 20:33:44,077 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 14.62 sec
2013-09-10 20:33:45,083 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 14.62 sec
2013-09-10 20:33:46,088 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 14.62 sec
2013-09-10 20:33:47,094 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 19.02 sec
2013-09-10 20:33:48,100 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 19.02 sec
2013-09-10 20:33:49,106 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 19.02 sec
MapReduce Total cumulative CPU time: 19 seconds 20 msec
Ended Job = job_201309101627_0162
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 63.25 sec HDFS Read: 84536695 HDFS Write: 79726544 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 19.02 sec HDFS Read: 79727313 HDFS Write: 293 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 22 seconds 270 msec
OK
Time taken: 59.043 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_22965@mturlrep13_201309102033_23929963.txt
hive> ;
hive> quit;
times: 3
query: SELECT SearchPhrase, count(DISTINCT UserID) AS u FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchPhrase ORDER BY u DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_23374@mturlrep13_201309102033_541536373.txt
hive> SELECT SearchPhrase, count(DISTINCT UserID) AS u FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchPhrase ORDER BY u DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0163
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:34:02,415 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:34:10,447 Stage-1 map = 39%, reduce = 0%
2013-09-10 20:34:11,460 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.54 sec
2013-09-10 20:34:12,468 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.54 sec
2013-09-10 20:34:13,476 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.54 sec
2013-09-10 20:34:14,482 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.54 sec
2013-09-10 20:34:15,489 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.54 sec
2013-09-10 20:34:16,496 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.54 sec
2013-09-10 20:34:17,504 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.54 sec
2013-09-10 20:34:18,510 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.54 sec
2013-09-10 20:34:19,517 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 34.51 sec
2013-09-10 20:34:20,523 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 46.08 sec
2013-09-10 20:34:21,528 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 46.08 sec
2013-09-10 20:34:22,534 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 46.08 sec
2013-09-10 20:34:23,540 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 46.08 sec
2013-09-10 20:34:24,546 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 46.08 sec
2013-09-10 20:34:25,552 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 46.08 sec
2013-09-10 20:34:26,558 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 46.08 sec
2013-09-10 20:34:27,617 Stage-1 map = 100%, reduce = 53%, Cumulative CPU 46.08 sec
2013-09-10 20:34:28,624 Stage-1 map = 100%, reduce = 88%, Cumulative CPU 46.08 sec
2013-09-10 20:34:29,632 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 62.7 sec
2013-09-10 20:34:30,639 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 62.7 sec
MapReduce Total cumulative CPU time: 1 minutes 2 seconds 700 msec
Ended Job = job_201309101627_0163
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0164
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:34:33,122 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:34:41,155 Stage-2 map = 50%, reduce = 0%
2013-09-10 20:34:43,164 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 12.71 sec
2013-09-10 20:34:44,169 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 12.71 sec
2013-09-10 20:34:45,174 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 12.71 sec
2013-09-10 20:34:46,178 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 12.71 sec
2013-09-10 20:34:47,183 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 12.71 sec
2013-09-10 20:34:48,187 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 12.71 sec
2013-09-10 20:34:49,192 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 12.71 sec
2013-09-10 20:34:50,197 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 12.71 sec
2013-09-10 20:34:51,202 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 12.71 sec
2013-09-10 20:34:52,208 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 17.18 sec
2013-09-10 20:34:53,213 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 17.18 sec
2013-09-10 20:34:54,218 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 17.18 sec
MapReduce Total cumulative CPU time: 17 seconds 180 msec
Ended Job = job_201309101627_0164
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 62.7 sec HDFS Read: 84536695 HDFS Write: 79726544 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 17.18 sec HDFS Read: 79727313 HDFS Write: 293 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 19 seconds 880 msec
OK
Time taken: 59.131 seconds, Fetched: 10 row(s)
hive> quit;
-- slightly more complex aggregation.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_25402@mturlrep13_201309102035_1149364562.txt
hive> ;
hive> quit;
times: 1
query: SELECT SearchEngineID, SearchPhrase, count(*) AS c FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchEngineID, SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_25922@mturlrep13_201309102035_1476813424.txt
hive> SELECT SearchEngineID, SearchPhrase, count(*) AS c FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchEngineID, SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0165
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:35:15,161 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:35:22,191 Stage-1 map = 43%, reduce = 0%
2013-09-10 20:35:23,203 Stage-1 map = 47%, reduce = 0%, Cumulative CPU 12.86 sec
2013-09-10 20:35:24,211 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.67 sec
2013-09-10 20:35:25,219 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.67 sec
2013-09-10 20:35:26,225 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.67 sec
2013-09-10 20:35:27,231 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.67 sec
2013-09-10 20:35:28,238 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.67 sec
2013-09-10 20:35:29,244 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.67 sec
2013-09-10 20:35:30,251 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 23.67 sec
2013-09-10 20:35:31,257 Stage-1 map = 92%, reduce = 17%, Cumulative CPU 23.67 sec
2013-09-10 20:35:32,262 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 49.57 sec
2013-09-10 20:35:33,268 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 49.57 sec
2013-09-10 20:35:34,273 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 49.57 sec
2013-09-10 20:35:35,279 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 49.57 sec
2013-09-10 20:35:36,285 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 49.57 sec
2013-09-10 20:35:37,290 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 49.57 sec
2013-09-10 20:35:38,296 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 49.57 sec
2013-09-10 20:35:39,302 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 49.57 sec
2013-09-10 20:35:40,308 Stage-1 map = 100%, reduce = 92%, Cumulative CPU 49.57 sec
2013-09-10 20:35:41,315 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 65.45 sec
2013-09-10 20:35:42,321 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 65.45 sec
MapReduce Total cumulative CPU time: 1 minutes 5 seconds 450 msec
Ended Job = job_201309101627_0165
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0166
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:35:45,794 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:35:52,817 Stage-2 map = 50%, reduce = 0%
2013-09-10 20:35:55,829 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 14.25 sec
2013-09-10 20:35:56,834 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 14.25 sec
2013-09-10 20:35:57,839 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 14.25 sec
2013-09-10 20:35:58,844 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 14.25 sec
2013-09-10 20:35:59,849 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 14.25 sec
2013-09-10 20:36:00,854 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 14.25 sec
2013-09-10 20:36:01,859 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 14.25 sec
2013-09-10 20:36:02,864 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 14.25 sec
2013-09-10 20:36:03,871 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 14.25 sec
2013-09-10 20:36:04,875 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 18.73 sec
2013-09-10 20:36:05,880 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 18.73 sec
2013-09-10 20:36:06,885 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 18.73 sec
MapReduce Total cumulative CPU time: 18 seconds 730 msec
Ended Job = job_201309101627_0166
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 65.45 sec HDFS Read: 30310112 HDFS Write: 84160093 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 18.73 sec HDFS Read: 84160862 HDFS Write: 297 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 24 seconds 180 msec
OK
Time taken: 61.752 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_28465@mturlrep13_201309102036_191417244.txt
hive> ;
hive> quit;
times: 2
query: SELECT SearchEngineID, SearchPhrase, count(*) AS c FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchEngineID, SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_28883@mturlrep13_201309102036_2013031139.txt
hive> SELECT SearchEngineID, SearchPhrase, count(*) AS c FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchEngineID, SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0167
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:36:19,984 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:36:28,024 Stage-1 map = 47%, reduce = 0%, Cumulative CPU 13.29 sec
2013-09-10 20:36:29,031 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.97 sec
2013-09-10 20:36:30,039 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.97 sec
2013-09-10 20:36:31,045 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.97 sec
2013-09-10 20:36:32,051 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.97 sec
2013-09-10 20:36:33,057 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.97 sec
2013-09-10 20:36:34,064 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.97 sec
2013-09-10 20:36:35,071 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.97 sec
2013-09-10 20:36:36,077 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 27.97 sec
2013-09-10 20:36:37,083 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 52.14 sec
2013-09-10 20:36:38,088 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 52.14 sec
2013-09-10 20:36:39,094 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 52.14 sec
2013-09-10 20:36:40,100 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 52.14 sec
2013-09-10 20:36:41,106 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 52.14 sec
2013-09-10 20:36:42,113 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 52.14 sec
2013-09-10 20:36:43,119 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 52.14 sec
2013-09-10 20:36:44,125 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 52.14 sec
2013-09-10 20:36:45,133 Stage-1 map = 100%, reduce = 96%, Cumulative CPU 59.87 sec
2013-09-10 20:36:46,162 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 67.67 sec
2013-09-10 20:36:47,168 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 67.67 sec
MapReduce Total cumulative CPU time: 1 minutes 7 seconds 670 msec
Ended Job = job_201309101627_0167
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0168
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:36:50,673 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:36:57,699 Stage-2 map = 50%, reduce = 0%
2013-09-10 20:37:00,711 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 14.57 sec
2013-09-10 20:37:01,717 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 14.57 sec
2013-09-10 20:37:02,722 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 14.57 sec
2013-09-10 20:37:03,727 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 14.57 sec
2013-09-10 20:37:04,732 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 14.57 sec
2013-09-10 20:37:05,737 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 14.57 sec
2013-09-10 20:37:06,742 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 14.57 sec
2013-09-10 20:37:07,747 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 14.57 sec
2013-09-10 20:37:08,753 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 14.57 sec
2013-09-10 20:37:09,759 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 19.06 sec
2013-09-10 20:37:10,766 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 19.06 sec
2013-09-10 20:37:11,771 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 19.06 sec
MapReduce Total cumulative CPU time: 19 seconds 60 msec
Ended Job = job_201309101627_0168
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 67.67 sec HDFS Read: 30310112 HDFS Write: 84160093 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 19.06 sec HDFS Read: 84160862 HDFS Write: 297 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 26 seconds 730 msec
OK
Time taken: 59.316 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_30836@mturlrep13_201309102037_1364975139.txt
hive> ;
hive> quit;
times: 3
query: SELECT SearchEngineID, SearchPhrase, count(*) AS c FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchEngineID, SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_31259@mturlrep13_201309102037_375310228.txt
hive> SELECT SearchEngineID, SearchPhrase, count(*) AS c FROM hits_10m WHERE SearchPhrase != '' GROUP BY SearchEngineID, SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0169
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:37:26,291 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:37:33,329 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.31 sec
2013-09-10 20:37:34,338 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.31 sec
2013-09-10 20:37:35,345 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.31 sec
2013-09-10 20:37:36,353 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.31 sec
2013-09-10 20:37:37,359 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.31 sec
2013-09-10 20:37:38,366 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.31 sec
2013-09-10 20:37:39,373 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.31 sec
2013-09-10 20:37:40,379 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.31 sec
2013-09-10 20:37:41,386 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 41.79 sec
2013-09-10 20:37:42,392 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 41.79 sec
2013-09-10 20:37:43,397 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 41.79 sec
2013-09-10 20:37:44,402 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 41.79 sec
2013-09-10 20:37:45,408 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 41.79 sec
2013-09-10 20:37:46,414 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 41.79 sec
2013-09-10 20:37:47,430 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 41.79 sec
2013-09-10 20:37:48,436 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 41.79 sec
2013-09-10 20:37:49,442 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 41.79 sec
2013-09-10 20:37:50,450 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 57.25 sec
2013-09-10 20:37:51,457 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 57.25 sec
2013-09-10 20:37:52,463 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 57.25 sec
MapReduce Total cumulative CPU time: 57 seconds 250 msec
Ended Job = job_201309101627_0169
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0170
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:37:55,055 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:38:03,082 Stage-2 map = 50%, reduce = 0%
2013-09-10 20:38:06,093 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.95 sec
2013-09-10 20:38:07,099 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.95 sec
2013-09-10 20:38:08,103 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.95 sec
2013-09-10 20:38:09,108 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.95 sec
2013-09-10 20:38:10,112 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.95 sec
2013-09-10 20:38:11,116 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.95 sec
2013-09-10 20:38:12,121 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 13.95 sec
2013-09-10 20:38:13,132 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 13.95 sec
2013-09-10 20:38:14,138 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 13.95 sec
2013-09-10 20:38:15,145 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 18.87 sec
2013-09-10 20:38:16,151 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 18.87 sec
MapReduce Total cumulative CPU time: 18 seconds 870 msec
Ended Job = job_201309101627_0170
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 57.25 sec HDFS Read: 30310112 HDFS Write: 84160093 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 18.87 sec HDFS Read: 84160862 HDFS Write: 297 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 16 seconds 120 msec
OK
Time taken: 58.243 seconds, Fetched: 10 row(s)
hive> quit;
-- aggregation by a number and a string, large number of keys.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_793@mturlrep13_201309102038_41120761.txt
hive> ;
hive> quit;
times: 1
query: SELECT UserID, count(*) AS c FROM hits_10m GROUP BY UserID ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_1406@mturlrep13_201309102038_1726333638.txt
hive> SELECT UserID, count(*) AS c FROM hits_10m GROUP BY UserID ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0171
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:38:38,249 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:38:45,278 Stage-1 map = 43%, reduce = 0%
2013-09-10 20:38:48,297 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.52 sec
2013-09-10 20:38:49,304 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.52 sec
2013-09-10 20:38:50,311 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.52 sec
2013-09-10 20:38:51,317 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.52 sec
2013-09-10 20:38:52,325 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.52 sec
2013-09-10 20:38:53,333 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.52 sec
2013-09-10 20:38:54,339 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.52 sec
2013-09-10 20:38:55,346 Stage-1 map = 72%, reduce = 8%, Cumulative CPU 27.52 sec
2013-09-10 20:38:56,352 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 27.52 sec
2013-09-10 20:38:57,358 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 27.52 sec
2013-09-10 20:38:58,365 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 27.52 sec
2013-09-10 20:38:59,371 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.77 sec
2013-09-10 20:39:00,377 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.77 sec
2013-09-10 20:39:01,383 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.77 sec
2013-09-10 20:39:02,390 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.77 sec
2013-09-10 20:39:03,396 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.77 sec
2013-09-10 20:39:04,401 Stage-1 map = 100%, reduce = 55%, Cumulative CPU 55.77 sec
2013-09-10 20:39:05,409 Stage-1 map = 100%, reduce = 96%, Cumulative CPU 62.77 sec
2013-09-10 20:39:06,415 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 70.11 sec
2013-09-10 20:39:07,421 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 70.11 sec
MapReduce Total cumulative CPU time: 1 minutes 10 seconds 110 msec
Ended Job = job_201309101627_0171
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0172
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:39:10,920 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:39:20,949 Stage-2 map = 50%, reduce = 0%
2013-09-10 20:39:23,958 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 19.25 sec
2013-09-10 20:39:24,964 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 19.25 sec
2013-09-10 20:39:25,969 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 19.25 sec
2013-09-10 20:39:26,974 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 19.25 sec
2013-09-10 20:39:27,979 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 19.25 sec
2013-09-10 20:39:28,984 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 19.25 sec
2013-09-10 20:39:29,988 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 19.25 sec
2013-09-10 20:39:30,994 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 19.25 sec
2013-09-10 20:39:31,998 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 19.25 sec
2013-09-10 20:39:33,003 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 19.25 sec
2013-09-10 20:39:34,009 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 23.7 sec
2013-09-10 20:39:35,014 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 23.7 sec
MapReduce Total cumulative CPU time: 23 seconds 700 msec
Ended Job = job_201309101627_0172
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 70.11 sec HDFS Read: 57312623 HDFS Write: 55475412 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 23.7 sec HDFS Read: 55476181 HDFS Write: 246 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 33 seconds 810 msec
OK
Time taken: 66.769 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_3462@mturlrep13_201309102039_1149451168.txt
hive> ;
hive> quit;
times: 2
query: SELECT UserID, count(*) AS c FROM hits_10m GROUP BY UserID ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_3871@mturlrep13_201309102039_1735915084.txt
hive> SELECT UserID, count(*) AS c FROM hits_10m GROUP BY UserID ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0173
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:39:48,055 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:39:56,086 Stage-1 map = 43%, reduce = 0%
2013-09-10 20:39:58,100 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.32 sec
2013-09-10 20:39:59,108 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.32 sec
2013-09-10 20:40:00,116 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.32 sec
2013-09-10 20:40:01,122 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.32 sec
2013-09-10 20:40:02,128 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.32 sec
2013-09-10 20:40:03,135 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.32 sec
2013-09-10 20:40:04,141 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 27.32 sec
2013-09-10 20:40:05,147 Stage-1 map = 72%, reduce = 8%, Cumulative CPU 27.32 sec
2013-09-10 20:40:06,152 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 27.32 sec
2013-09-10 20:40:07,158 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 39.83 sec
2013-09-10 20:40:08,163 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.14 sec
2013-09-10 20:40:09,168 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.14 sec
2013-09-10 20:40:10,174 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.14 sec
2013-09-10 20:40:11,179 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.14 sec
2013-09-10 20:40:12,185 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.14 sec
2013-09-10 20:40:13,190 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 55.14 sec
2013-09-10 20:40:14,196 Stage-1 map = 100%, reduce = 55%, Cumulative CPU 55.14 sec
2013-09-10 20:40:15,218 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 69.62 sec
2013-09-10 20:40:16,223 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 69.62 sec
2013-09-10 20:40:17,229 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 69.62 sec
MapReduce Total cumulative CPU time: 1 minutes 9 seconds 620 msec
Ended Job = job_201309101627_0173
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0174
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:40:19,719 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:40:30,752 Stage-2 map = 50%, reduce = 0%
2013-09-10 20:40:34,764 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.83 sec
2013-09-10 20:40:35,768 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.83 sec
2013-09-10 20:40:36,772 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.83 sec
2013-09-10 20:40:37,776 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.83 sec
2013-09-10 20:40:38,781 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.83 sec
2013-09-10 20:40:39,785 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.83 sec
2013-09-10 20:40:40,790 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.83 sec
2013-09-10 20:40:41,807 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 18.83 sec
2013-09-10 20:40:42,811 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 18.83 sec
2013-09-10 20:40:43,816 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 23.17 sec
2013-09-10 20:40:44,820 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 23.17 sec
MapReduce Total cumulative CPU time: 23 seconds 170 msec
Ended Job = job_201309101627_0174
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 69.62 sec HDFS Read: 57312623 HDFS Write: 55475412 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 23.17 sec HDFS Read: 55476181 HDFS Write: 246 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 32 seconds 790 msec
OK
Time taken: 64.078 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_6839@mturlrep13_201309102040_775437631.txt
hive> ;
hive> quit;
times: 3
query: SELECT UserID, count(*) AS c FROM hits_10m GROUP BY UserID ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_7242@mturlrep13_201309102040_2081233093.txt
hive> SELECT UserID, count(*) AS c FROM hits_10m GROUP BY UserID ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0175
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:40:57,539 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:41:05,573 Stage-1 map = 43%, reduce = 0%
2013-09-10 20:41:07,588 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 26.59 sec
2013-09-10 20:41:08,596 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 26.59 sec
2013-09-10 20:41:09,603 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 26.59 sec
2013-09-10 20:41:10,609 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 26.59 sec
2013-09-10 20:41:11,616 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 26.59 sec
2013-09-10 20:41:12,623 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 26.59 sec
2013-09-10 20:41:13,629 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 26.59 sec
2013-09-10 20:41:14,635 Stage-1 map = 96%, reduce = 8%, Cumulative CPU 26.59 sec
2013-09-10 20:41:15,641 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 26.59 sec
2013-09-10 20:41:16,647 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 39.34 sec
2013-09-10 20:41:17,653 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 54.36 sec
2013-09-10 20:41:18,658 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 54.36 sec
2013-09-10 20:41:19,664 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 54.36 sec
2013-09-10 20:41:20,670 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 54.36 sec
2013-09-10 20:41:21,677 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 54.36 sec
2013-09-10 20:41:22,683 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 54.36 sec
2013-09-10 20:41:23,689 Stage-1 map = 100%, reduce = 55%, Cumulative CPU 54.36 sec
2013-09-10 20:41:24,697 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 68.52 sec
2013-09-10 20:41:25,704 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 68.52 sec
2013-09-10 20:41:26,709 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 68.52 sec
MapReduce Total cumulative CPU time: 1 minutes 8 seconds 520 msec
Ended Job = job_201309101627_0175
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0176
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:41:29,212 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:41:40,246 Stage-2 map = 50%, reduce = 0%
2013-09-10 20:41:44,259 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.47 sec
2013-09-10 20:41:45,264 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.47 sec
2013-09-10 20:41:46,268 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.47 sec
2013-09-10 20:41:47,273 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.47 sec
2013-09-10 20:41:48,278 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.47 sec
2013-09-10 20:41:49,283 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.47 sec
2013-09-10 20:41:50,287 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 18.47 sec
2013-09-10 20:41:51,293 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 18.47 sec
2013-09-10 20:41:52,298 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 18.47 sec
2013-09-10 20:41:53,303 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 22.87 sec
2013-09-10 20:41:54,309 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 22.87 sec
MapReduce Total cumulative CPU time: 22 seconds 870 msec
Ended Job = job_201309101627_0176
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 68.52 sec HDFS Read: 57312623 HDFS Write: 55475412 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 22.87 sec HDFS Read: 55476181 HDFS Write: 246 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 31 seconds 390 msec
OK
Time taken: 63.944 seconds, Fetched: 10 row(s)
hive> quit;
-- aggregation over a very large number of keys; may run out of memory.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_9222@mturlrep13_201309102042_1508377166.txt
hive> ;
hive> quit;
times: 1
query: SELECT UserID, SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_9653@mturlrep13_201309102042_1395015461.txt
hive> SELECT UserID, SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0177
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:42:17,156 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:42:24,184 Stage-1 map = 36%, reduce = 0%
2013-09-10 20:42:27,198 Stage-1 map = 43%, reduce = 0%
2013-09-10 20:42:29,212 Stage-1 map = 47%, reduce = 0%, Cumulative CPU 16.85 sec
2013-09-10 20:42:30,219 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.6 sec
2013-09-10 20:42:31,227 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.6 sec
2013-09-10 20:42:32,234 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.6 sec
2013-09-10 20:42:33,240 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.6 sec
2013-09-10 20:42:34,246 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.6 sec
2013-09-10 20:42:35,251 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.6 sec
2013-09-10 20:42:36,257 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.6 sec
2013-09-10 20:42:37,262 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 34.6 sec
2013-09-10 20:42:38,272 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 34.6 sec
2013-09-10 20:42:39,277 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 34.6 sec
2013-09-10 20:42:40,283 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 34.6 sec
2013-09-10 20:42:41,289 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 34.6 sec
2013-09-10 20:42:42,295 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.15 sec
2013-09-10 20:42:43,301 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.15 sec
2013-09-10 20:42:44,306 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.15 sec
2013-09-10 20:42:45,312 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.15 sec
2013-09-10 20:42:46,318 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.15 sec
2013-09-10 20:42:47,324 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.15 sec
2013-09-10 20:42:48,329 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.15 sec
2013-09-10 20:42:49,336 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.15 sec
2013-09-10 20:42:50,341 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.15 sec
2013-09-10 20:42:51,347 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.15 sec
2013-09-10 20:42:52,352 Stage-1 map = 100%, reduce = 86%, Cumulative CPU 68.15 sec
2013-09-10 20:42:53,360 Stage-1 map = 100%, reduce = 86%, Cumulative CPU 84.75 sec
2013-09-10 20:42:54,366 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 89.79 sec
2013-09-10 20:42:55,371 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 89.79 sec
2013-09-10 20:42:56,377 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 89.79 sec
MapReduce Total cumulative CPU time: 1 minutes 29 seconds 790 msec
Ended Job = job_201309101627_0177
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0178
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:42:58,964 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:43:10,003 Stage-2 map = 46%, reduce = 0%
2013-09-10 20:43:13,012 Stage-2 map = 50%, reduce = 0%
2013-09-10 20:43:16,023 Stage-2 map = 96%, reduce = 0%
2013-09-10 20:43:18,031 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 24.46 sec
2013-09-10 20:43:19,036 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 24.46 sec
2013-09-10 20:43:20,041 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 24.46 sec
2013-09-10 20:43:21,046 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 24.46 sec
2013-09-10 20:43:22,050 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 24.46 sec
2013-09-10 20:43:23,055 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 24.46 sec
2013-09-10 20:43:24,059 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 24.46 sec
2013-09-10 20:43:25,063 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 24.46 sec
2013-09-10 20:43:26,067 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 24.46 sec
2013-09-10 20:43:27,072 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 24.46 sec
2013-09-10 20:43:28,077 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 28.87 sec
2013-09-10 20:43:29,082 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 28.87 sec
2013-09-10 20:43:30,087 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 28.87 sec
MapReduce Total cumulative CPU time: 28 seconds 870 msec
Ended Job = job_201309101627_0178
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 89.79 sec HDFS Read: 84536695 HDFS Write: 146202868 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 28.87 sec HDFS Read: 146210123 HDFS Write: 256 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 58 seconds 660 msec
OK
Time taken: 82.627 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_11739@mturlrep13_201309102043_755615207.txt
hive> ;
hive> quit;
times: 2
query: SELECT UserID, SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_12158@mturlrep13_201309102043_667274494.txt
hive> SELECT UserID, SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0179
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:43:44,421 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:43:51,451 Stage-1 map = 43%, reduce = 0%
2013-09-10 20:43:55,477 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.5 sec
2013-09-10 20:43:56,485 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.5 sec
2013-09-10 20:43:57,492 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.5 sec
2013-09-10 20:43:58,500 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.5 sec
2013-09-10 20:43:59,508 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.5 sec
2013-09-10 20:44:00,514 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.5 sec
2013-09-10 20:44:01,521 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.5 sec
2013-09-10 20:44:02,527 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.5 sec
2013-09-10 20:44:03,534 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 33.5 sec
2013-09-10 20:44:04,541 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 33.5 sec
2013-09-10 20:44:05,547 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 33.5 sec
2013-09-10 20:44:06,553 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 33.5 sec
2013-09-10 20:44:07,559 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 49.48 sec
2013-09-10 20:44:08,565 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 67.45 sec
2013-09-10 20:44:09,570 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 67.45 sec
2013-09-10 20:44:10,575 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 67.45 sec
2013-09-10 20:44:11,580 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 67.45 sec
2013-09-10 20:44:12,586 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 67.45 sec
2013-09-10 20:44:13,594 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 67.45 sec
2013-09-10 20:44:14,600 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 67.45 sec
2013-09-10 20:44:15,606 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 67.45 sec
2013-09-10 20:44:16,612 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 67.45 sec
2013-09-10 20:44:17,618 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 67.45 sec
2013-09-10 20:44:18,624 Stage-1 map = 100%, reduce = 84%, Cumulative CPU 67.45 sec
2013-09-10 20:44:19,630 Stage-1 map = 100%, reduce = 84%, Cumulative CPU 67.45 sec
2013-09-10 20:44:20,638 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 89.72 sec
2013-09-10 20:44:21,644 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 89.72 sec
2013-09-10 20:44:22,650 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 89.72 sec
MapReduce Total cumulative CPU time: 1 minutes 29 seconds 720 msec
Ended Job = job_201309101627_0179
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0180
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:44:25,163 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:44:36,195 Stage-2 map = 46%, reduce = 0%
2013-09-10 20:44:39,205 Stage-2 map = 50%, reduce = 0%
2013-09-10 20:44:42,214 Stage-2 map = 96%, reduce = 0%
2013-09-10 20:44:44,222 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 23.57 sec
2013-09-10 20:44:45,227 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 23.57 sec
2013-09-10 20:44:46,231 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 23.57 sec
2013-09-10 20:44:47,236 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 23.57 sec
2013-09-10 20:44:48,240 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 23.57 sec
2013-09-10 20:44:49,244 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 23.57 sec
2013-09-10 20:44:50,249 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 23.57 sec
2013-09-10 20:44:51,254 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 23.57 sec
2013-09-10 20:44:52,259 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 23.57 sec
2013-09-10 20:44:53,265 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 24.62 sec
2013-09-10 20:44:54,271 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 28.03 sec
2013-09-10 20:44:55,276 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 28.03 sec
2013-09-10 20:44:56,282 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 28.03 sec
MapReduce Total cumulative CPU time: 28 seconds 30 msec
Ended Job = job_201309101627_0180
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 89.72 sec HDFS Read: 84536695 HDFS Write: 146202868 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 28.03 sec HDFS Read: 146210123 HDFS Write: 256 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 57 seconds 750 msec
OK
Time taken: 80.369 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_14245@mturlrep13_201309102044_1237091624.txt
hive> ;
hive> quit;
times: 3
query: SELECT UserID, SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_14650@mturlrep13_201309102045_372590589.txt
hive> SELECT UserID, SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0181
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:45:10,380 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:45:17,409 Stage-1 map = 43%, reduce = 0%
2013-09-10 20:45:22,436 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.02 sec
2013-09-10 20:45:23,444 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.02 sec
2013-09-10 20:45:24,451 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.02 sec
2013-09-10 20:45:25,459 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.02 sec
2013-09-10 20:45:26,465 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.02 sec
2013-09-10 20:45:27,472 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.02 sec
2013-09-10 20:45:28,478 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 34.02 sec
2013-09-10 20:45:29,486 Stage-1 map = 92%, reduce = 17%, Cumulative CPU 34.02 sec
2013-09-10 20:45:30,493 Stage-1 map = 92%, reduce = 17%, Cumulative CPU 34.02 sec
2013-09-10 20:45:31,499 Stage-1 map = 92%, reduce = 17%, Cumulative CPU 34.02 sec
2013-09-10 20:45:32,505 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 34.02 sec
2013-09-10 20:45:33,510 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 50.52 sec
2013-09-10 20:45:34,516 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.88 sec
2013-09-10 20:45:35,521 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.88 sec
2013-09-10 20:45:36,526 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.88 sec
2013-09-10 20:45:37,531 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.88 sec
2013-09-10 20:45:38,537 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.88 sec
2013-09-10 20:45:39,543 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.88 sec
2013-09-10 20:45:40,549 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.88 sec
2013-09-10 20:45:41,555 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.88 sec
2013-09-10 20:45:42,561 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.88 sec
2013-09-10 20:45:43,566 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.88 sec
2013-09-10 20:45:44,572 Stage-1 map = 100%, reduce = 86%, Cumulative CPU 68.88 sec
2013-09-10 20:45:45,578 Stage-1 map = 100%, reduce = 86%, Cumulative CPU 68.88 sec
2013-09-10 20:45:46,585 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 89.92 sec
2013-09-10 20:45:47,591 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 89.92 sec
2013-09-10 20:45:48,597 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 89.92 sec
MapReduce Total cumulative CPU time: 1 minutes 29 seconds 920 msec
Ended Job = job_201309101627_0181
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0182
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:45:52,108 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:46:02,139 Stage-2 map = 46%, reduce = 0%
2013-09-10 20:46:05,147 Stage-2 map = 50%, reduce = 0%
2013-09-10 20:46:08,156 Stage-2 map = 96%, reduce = 0%
2013-09-10 20:46:10,163 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 24.28 sec
2013-09-10 20:46:11,168 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 24.28 sec
2013-09-10 20:46:12,173 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 24.28 sec
2013-09-10 20:46:13,177 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 24.28 sec
2013-09-10 20:46:14,181 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 24.28 sec
2013-09-10 20:46:15,186 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 24.28 sec
2013-09-10 20:46:16,190 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 24.28 sec
2013-09-10 20:46:17,195 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 24.28 sec
2013-09-10 20:46:18,200 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 24.28 sec
2013-09-10 20:46:19,205 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 24.28 sec
2013-09-10 20:46:20,210 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 24.28 sec
2013-09-10 20:46:21,215 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 28.79 sec
2013-09-10 20:46:22,220 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 28.79 sec
MapReduce Total cumulative CPU time: 28 seconds 790 msec
Ended Job = job_201309101627_0182
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 89.92 sec HDFS Read: 84536695 HDFS Write: 146202868 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 28.79 sec HDFS Read: 146210123 HDFS Write: 256 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 58 seconds 710 msec
OK
Time taken: 80.288 seconds, Fetched: 10 row(s)
hive> quit;
-- an even more complex aggregation.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_17569@mturlrep13_201309102046_1421699291.txt
hive> ;
hive> quit;
times: 1
query: SELECT UserID, SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, SearchPhrase LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_17985@mturlrep13_201309102046_1356986531.txt
hive> SELECT UserID, SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, SearchPhrase LIMIT 10;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0183
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:46:44,167 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:46:51,195 Stage-1 map = 32%, reduce = 0%
2013-09-10 20:46:54,211 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 18.03 sec
2013-09-10 20:46:55,217 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 18.03 sec
2013-09-10 20:46:56,226 Stage-1 map = 47%, reduce = 0%, Cumulative CPU 26.65 sec
2013-09-10 20:46:57,232 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.42 sec
2013-09-10 20:46:58,239 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.42 sec
2013-09-10 20:46:59,246 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.42 sec
2013-09-10 20:47:00,252 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.42 sec
2013-09-10 20:47:01,259 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.42 sec
2013-09-10 20:47:02,265 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.42 sec
2013-09-10 20:47:03,271 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 33.42 sec
2013-09-10 20:47:04,276 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 33.42 sec
2013-09-10 20:47:05,281 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 33.42 sec
2013-09-10 20:47:06,287 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 33.42 sec
2013-09-10 20:47:07,292 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 33.42 sec
2013-09-10 20:47:08,297 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 33.42 sec
2013-09-10 20:47:09,302 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 66.29 sec
2013-09-10 20:47:10,307 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 66.29 sec
2013-09-10 20:47:11,312 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 66.29 sec
2013-09-10 20:47:12,317 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 66.29 sec
2013-09-10 20:47:13,323 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 66.29 sec
2013-09-10 20:47:14,328 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 66.29 sec
2013-09-10 20:47:15,333 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 66.29 sec
2013-09-10 20:47:16,340 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 75.8 sec
2013-09-10 20:47:17,346 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 75.8 sec
2013-09-10 20:47:18,352 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 75.8 sec
MapReduce Total cumulative CPU time: 1 minutes 15 seconds 800 msec
Ended Job = job_201309101627_0183
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 75.8 sec HDFS Read: 84536695 HDFS Write: 889 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 15 seconds 800 msec
OK
Time taken: 44.05 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_19485@mturlrep13_201309102047_352390381.txt
hive> ;
hive> quit;
times: 2
query: SELECT UserID, SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, SearchPhrase LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_19906@mturlrep13_201309102047_307105282.txt
hive> SELECT UserID, SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, SearchPhrase LIMIT 10;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0184
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:47:32,371 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:47:39,398 Stage-1 map = 39%, reduce = 0%
2013-09-10 20:47:42,409 Stage-1 map = 43%, reduce = 0%
2013-09-10 20:47:43,421 Stage-1 map = 46%, reduce = 0%, Cumulative CPU 17.21 sec
2013-09-10 20:47:44,427 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 35.34 sec
2013-09-10 20:47:45,435 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 35.34 sec
2013-09-10 20:47:46,441 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 35.34 sec
2013-09-10 20:47:47,448 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 35.34 sec
2013-09-10 20:47:48,454 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 35.34 sec
2013-09-10 20:47:49,460 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 35.34 sec
2013-09-10 20:47:50,465 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 35.34 sec
2013-09-10 20:47:51,470 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 35.34 sec
2013-09-10 20:47:52,476 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 35.34 sec
2013-09-10 20:47:53,483 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 55.87 sec
2013-09-10 20:47:54,488 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 55.87 sec
2013-09-10 20:47:55,494 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 60.92 sec
2013-09-10 20:47:56,499 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 69.81 sec
2013-09-10 20:47:57,504 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 69.81 sec
2013-09-10 20:47:58,508 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 69.81 sec
2013-09-10 20:47:59,513 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 69.81 sec
2013-09-10 20:48:00,519 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 69.81 sec
2013-09-10 20:48:01,524 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 69.81 sec
2013-09-10 20:48:02,530 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 69.81 sec
2013-09-10 20:48:03,537 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 77.42 sec
2013-09-10 20:48:04,544 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 77.42 sec
2013-09-10 20:48:05,549 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 77.42 sec
MapReduce Total cumulative CPU time: 1 minutes 17 seconds 420 msec
Ended Job = job_201309101627_0184
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 77.42 sec HDFS Read: 84536695 HDFS Write: 889 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 17 seconds 420 msec
OK
Time taken: 41.574 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_21404@mturlrep13_201309102048_126554804.txt
hive> ;
hive> quit;
times: 3
query: SELECT UserID, SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, SearchPhrase LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_21810@mturlrep13_201309102048_1454592884.txt
hive> SELECT UserID, SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, SearchPhrase LIMIT 10;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0185
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:48:18,529 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:48:26,558 Stage-1 map = 43%, reduce = 0%
2013-09-10 20:48:30,578 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 35.15 sec
2013-09-10 20:48:31,586 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 35.15 sec
2013-09-10 20:48:32,594 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 35.15 sec
2013-09-10 20:48:33,601 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 35.15 sec
2013-09-10 20:48:34,606 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 35.15 sec
2013-09-10 20:48:35,613 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 35.15 sec
2013-09-10 20:48:36,618 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 35.15 sec
2013-09-10 20:48:37,623 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 35.15 sec
2013-09-10 20:48:38,632 Stage-1 map = 92%, reduce = 17%, Cumulative CPU 35.15 sec
2013-09-10 20:48:39,638 Stage-1 map = 92%, reduce = 17%, Cumulative CPU 35.15 sec
2013-09-10 20:48:40,654 Stage-1 map = 92%, reduce = 17%, Cumulative CPU 35.15 sec
2013-09-10 20:48:41,660 Stage-1 map = 96%, reduce = 17%, Cumulative CPU 35.15 sec
2013-09-10 20:48:42,665 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 50.57 sec
2013-09-10 20:48:43,670 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.89 sec
2013-09-10 20:48:44,675 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.89 sec
2013-09-10 20:48:45,680 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.89 sec
2013-09-10 20:48:46,690 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 68.89 sec
2013-09-10 20:48:47,695 Stage-1 map = 100%, reduce = 21%, Cumulative CPU 68.89 sec
2013-09-10 20:48:48,701 Stage-1 map = 100%, reduce = 21%, Cumulative CPU 68.89 sec
2013-09-10 20:48:49,706 Stage-1 map = 100%, reduce = 21%, Cumulative CPU 68.89 sec
2013-09-10 20:48:50,711 Stage-1 map = 100%, reduce = 21%, Cumulative CPU 68.89 sec
2013-09-10 20:48:51,718 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 78.48 sec
2013-09-10 20:48:52,724 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 78.48 sec
MapReduce Total cumulative CPU time: 1 minutes 18 seconds 480 msec
Ended Job = job_201309101627_0185
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 78.48 sec HDFS Read: 84536695 HDFS Write: 889 SUCCESS
Total MapReduce CPU Time Spent: 1 minutes 18 seconds 480 msec
OK
Time taken: 41.516 seconds, Fetched: 10 row(s)
hive> quit;
-- the same, but without sorting.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_23317@mturlrep13_201309102048_633775434.txt
hive> ;
hive> quit;
times: 1
query: SELECT UserID, minute(EventTime), SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, minute(EventTime), SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_23745@mturlrep13_201309102049_424444339.txt
hive> SELECT UserID, minute(EventTime), SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, minute(EventTime), SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0186
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:49:13,392 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:49:20,429 Stage-1 map = 7%, reduce = 0%
2013-09-10 20:49:23,442 Stage-1 map = 22%, reduce = 0%
2013-09-10 20:49:26,455 Stage-1 map = 29%, reduce = 0%
2013-09-10 20:49:29,469 Stage-1 map = 36%, reduce = 0%
2013-09-10 20:49:32,480 Stage-1 map = 43%, reduce = 0%
2013-09-10 20:49:34,495 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 54.74 sec
2013-09-10 20:49:35,502 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 54.74 sec
2013-09-10 20:49:36,510 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 54.74 sec
2013-09-10 20:49:37,517 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 54.74 sec
2013-09-10 20:49:38,523 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 54.74 sec
2013-09-10 20:49:39,529 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 54.74 sec
2013-09-10 20:49:40,534 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 54.74 sec
2013-09-10 20:49:41,540 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 54.74 sec
2013-09-10 20:49:42,545 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 54.74 sec
2013-09-10 20:49:43,551 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 54.74 sec
2013-09-10 20:49:44,557 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 54.74 sec
2013-09-10 20:49:45,562 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 54.74 sec
2013-09-10 20:49:46,567 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 54.74 sec
2013-09-10 20:49:47,571 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 54.74 sec
2013-09-10 20:49:48,576 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 54.74 sec
2013-09-10 20:49:49,581 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 54.74 sec
2013-09-10 20:49:50,585 Stage-1 map = 92%, reduce = 17%, Cumulative CPU 54.74 sec
2013-09-10 20:49:51,590 Stage-1 map = 92%, reduce = 17%, Cumulative CPU 54.74 sec
2013-09-10 20:49:52,596 Stage-1 map = 92%, reduce = 17%, Cumulative CPU 54.74 sec
2013-09-10 20:49:53,603 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 104.41 sec
2013-09-10 20:49:54,608 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 109.4 sec
2013-09-10 20:49:55,613 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 109.4 sec
2013-09-10 20:49:56,619 Stage-1 map = 100%, reduce = 29%, Cumulative CPU 109.4 sec
2013-09-10 20:49:57,624 Stage-1 map = 100%, reduce = 29%, Cumulative CPU 109.4 sec
2013-09-10 20:49:58,630 Stage-1 map = 100%, reduce = 29%, Cumulative CPU 109.4 sec
2013-09-10 20:49:59,636 Stage-1 map = 100%, reduce = 50%, Cumulative CPU 109.4 sec
2013-09-10 20:50:00,642 Stage-1 map = 100%, reduce = 50%, Cumulative CPU 109.4 sec
2013-09-10 20:50:01,648 Stage-1 map = 100%, reduce = 50%, Cumulative CPU 109.4 sec
2013-09-10 20:50:02,654 Stage-1 map = 100%, reduce = 72%, Cumulative CPU 109.4 sec
2013-09-10 20:50:03,660 Stage-1 map = 100%, reduce = 72%, Cumulative CPU 109.4 sec
2013-09-10 20:50:04,665 Stage-1 map = 100%, reduce = 72%, Cumulative CPU 109.4 sec
2013-09-10 20:50:06,204 Stage-1 map = 100%, reduce = 72%, Cumulative CPU 109.4 sec
2013-09-10 20:50:07,210 Stage-1 map = 100%, reduce = 79%, Cumulative CPU 109.4 sec
2013-09-10 20:50:08,216 Stage-1 map = 100%, reduce = 79%, Cumulative CPU 109.4 sec
2013-09-10 20:50:09,222 Stage-1 map = 100%, reduce = 79%, Cumulative CPU 109.4 sec
2013-09-10 20:50:10,228 Stage-1 map = 100%, reduce = 87%, Cumulative CPU 109.4 sec
2013-09-10 20:50:11,233 Stage-1 map = 100%, reduce = 87%, Cumulative CPU 109.4 sec
2013-09-10 20:50:12,238 Stage-1 map = 100%, reduce = 87%, Cumulative CPU 109.4 sec
2013-09-10 20:50:13,245 Stage-1 map = 100%, reduce = 96%, Cumulative CPU 109.4 sec
2013-09-10 20:50:14,251 Stage-1 map = 100%, reduce = 97%, Cumulative CPU 128.24 sec
2013-09-10 20:50:15,258 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 147.79 sec
2013-09-10 20:50:16,263 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 147.79 sec
MapReduce Total cumulative CPU time: 2 minutes 27 seconds 790 msec
Ended Job = job_201309101627_0186
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0187
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:50:19,831 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:50:32,871 Stage-2 map = 28%, reduce = 0%
2013-09-10 20:50:38,890 Stage-2 map = 50%, reduce = 0%
2013-09-10 20:50:48,919 Stage-2 map = 78%, reduce = 0%
2013-09-10 20:50:56,946 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.44 sec
2013-09-10 20:50:57,951 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.44 sec
2013-09-10 20:50:58,956 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.44 sec
2013-09-10 20:50:59,960 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.44 sec
2013-09-10 20:51:00,964 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.44 sec
2013-09-10 20:51:01,968 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.44 sec
2013-09-10 20:51:02,972 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.44 sec
2013-09-10 20:51:03,977 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.44 sec
2013-09-10 20:51:04,980 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.44 sec
2013-09-10 20:51:05,984 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.44 sec
2013-09-10 20:51:06,989 Stage-2 map = 100%, reduce = 67%, Cumulative CPU 43.44 sec
2013-09-10 20:51:07,994 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 49.97 sec
2013-09-10 20:51:08,999 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 49.97 sec
2013-09-10 20:51:10,004 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 49.97 sec
MapReduce Total cumulative CPU time: 49 seconds 970 msec
Ended Job = job_201309101627_0187
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 147.79 sec HDFS Read: 84944733 HDFS Write: 241346048 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 49.97 sec HDFS Read: 241349358 HDFS Write: 268 SUCCESS
Total MapReduce CPU Time Spent: 3 minutes 17 seconds 760 msec
OK
Time taken: 126.359 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_26780@mturlrep13_201309102051_75093307.txt
hive> ;
hive> quit;
times: 2
query: SELECT UserID, minute(EventTime), SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, minute(EventTime), SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_27199@mturlrep13_201309102051_564952806.txt
hive> SELECT UserID, minute(EventTime), SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, minute(EventTime), SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0188
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:51:23,085 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:51:31,119 Stage-1 map = 11%, reduce = 0%
2013-09-10 20:51:34,132 Stage-1 map = 22%, reduce = 0%
2013-09-10 20:51:37,144 Stage-1 map = 29%, reduce = 0%
2013-09-10 20:51:40,158 Stage-1 map = 43%, reduce = 0%
2013-09-10 20:51:43,177 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 52.96 sec
2013-09-10 20:51:44,183 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 52.96 sec
2013-09-10 20:51:45,190 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 52.96 sec
2013-09-10 20:51:46,196 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 52.96 sec
2013-09-10 20:51:47,202 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 52.96 sec
2013-09-10 20:51:48,230 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 52.96 sec
2013-09-10 20:51:49,236 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 52.96 sec
2013-09-10 20:51:50,242 Stage-1 map = 54%, reduce = 8%, Cumulative CPU 52.96 sec
2013-09-10 20:51:51,247 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 52.96 sec
2013-09-10 20:51:52,253 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 52.96 sec
2013-09-10 20:51:53,259 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 81.09 sec
2013-09-10 20:51:54,265 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 81.09 sec
2013-09-10 20:51:55,271 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 81.09 sec
2013-09-10 20:51:56,276 Stage-1 map = 76%, reduce = 17%, Cumulative CPU 81.09 sec
2013-09-10 20:51:57,282 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 81.09 sec
2013-09-10 20:51:58,288 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 81.09 sec
2013-09-10 20:51:59,294 Stage-1 map = 84%, reduce = 17%, Cumulative CPU 81.09 sec
2013-09-10 20:52:00,300 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 81.09 sec
2013-09-10 20:52:01,305 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 81.09 sec
2013-09-10 20:52:02,311 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 95.63 sec
2013-09-10 20:52:03,315 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 95.63 sec
2013-09-10 20:52:04,320 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 109.67 sec
2013-09-10 20:52:05,325 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 109.67 sec
2013-09-10 20:52:06,330 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 109.67 sec
2013-09-10 20:52:07,334 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 109.67 sec
2013-09-10 20:52:08,339 Stage-1 map = 100%, reduce = 50%, Cumulative CPU 109.67 sec
2013-09-10 20:52:09,344 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 109.67 sec
2013-09-10 20:52:10,349 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 109.67 sec
2013-09-10 20:52:11,355 Stage-1 map = 100%, reduce = 70%, Cumulative CPU 109.67 sec
2013-09-10 20:52:12,360 Stage-1 map = 100%, reduce = 74%, Cumulative CPU 109.67 sec
2013-09-10 20:52:13,366 Stage-1 map = 100%, reduce = 74%, Cumulative CPU 109.67 sec
2013-09-10 20:52:14,371 Stage-1 map = 100%, reduce = 78%, Cumulative CPU 109.67 sec
2013-09-10 20:52:15,377 Stage-1 map = 100%, reduce = 82%, Cumulative CPU 109.67 sec
2013-09-10 20:52:16,382 Stage-1 map = 100%, reduce = 82%, Cumulative CPU 109.67 sec
2013-09-10 20:52:17,388 Stage-1 map = 100%, reduce = 84%, Cumulative CPU 109.67 sec
2013-09-10 20:52:18,393 Stage-1 map = 100%, reduce = 88%, Cumulative CPU 109.67 sec
2013-09-10 20:52:19,399 Stage-1 map = 100%, reduce = 88%, Cumulative CPU 109.67 sec
2013-09-10 20:52:20,404 Stage-1 map = 100%, reduce = 92%, Cumulative CPU 109.67 sec
2013-09-10 20:52:21,410 Stage-1 map = 100%, reduce = 97%, Cumulative CPU 109.67 sec
2013-09-10 20:52:22,417 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 148.66 sec
2013-09-10 20:52:23,428 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 148.66 sec
2013-09-10 20:52:24,433 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 148.66 sec
MapReduce Total cumulative CPU time: 2 minutes 28 seconds 660 msec
Ended Job = job_201309101627_0188
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0189
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:52:27,956 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:52:40,999 Stage-2 map = 28%, reduce = 0%
2013-09-10 20:52:47,512 Stage-2 map = 50%, reduce = 0%
2013-09-10 20:52:53,533 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 31.33 sec
2013-09-10 20:52:54,538 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 31.33 sec
2013-09-10 20:52:55,543 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 31.33 sec
2013-09-10 20:52:56,548 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 31.33 sec
2013-09-10 20:52:57,552 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 31.33 sec
2013-09-10 20:52:58,557 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 31.33 sec
2013-09-10 20:52:59,561 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 31.33 sec
2013-09-10 20:53:00,565 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 31.33 sec
2013-09-10 20:53:01,569 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 31.33 sec
2013-09-10 20:53:02,573 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 31.33 sec
2013-09-10 20:53:03,577 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.09 sec
2013-09-10 20:53:04,582 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.09 sec
2013-09-10 20:53:05,587 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.09 sec
2013-09-10 20:53:06,591 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.09 sec
2013-09-10 20:53:07,596 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.09 sec
2013-09-10 20:53:08,600 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.09 sec
2013-09-10 20:53:09,604 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.09 sec
2013-09-10 20:53:10,609 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.09 sec
2013-09-10 20:53:11,613 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.09 sec
2013-09-10 20:53:12,617 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.09 sec
2013-09-10 20:53:13,621 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 43.09 sec
2013-09-10 20:53:14,636 Stage-2 map = 100%, reduce = 68%, Cumulative CPU 43.09 sec
2013-09-10 20:53:15,641 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 49.5 sec
2013-09-10 20:53:16,645 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 49.5 sec
2013-09-10 20:53:17,650 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 49.5 sec
MapReduce Total cumulative CPU time: 49 seconds 500 msec
Ended Job = job_201309101627_0189
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 148.66 sec HDFS Read: 84944733 HDFS Write: 241346048 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 49.5 sec HDFS Read: 241349358 HDFS Write: 268 SUCCESS
Total MapReduce CPU Time Spent: 3 minutes 18 seconds 160 msec
OK
Time taken: 121.853 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_29500@mturlrep13_201309102053_2142354545.txt
hive> ;
hive> quit;
times: 3
query: SELECT UserID, minute(EventTime), SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, minute(EventTime), SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_29928@mturlrep13_201309102053_368055123.txt
hive> SELECT UserID, minute(EventTime), SearchPhrase, count(*) AS c FROM hits_10m GROUP BY UserID, minute(EventTime), SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0190
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:53:30,426 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:53:38,459 Stage-1 map = 14%, reduce = 0%
2013-09-10 20:53:41,471 Stage-1 map = 22%, reduce = 0%
2013-09-10 20:53:44,485 Stage-1 map = 32%, reduce = 0%
2013-09-10 20:53:47,498 Stage-1 map = 43%, reduce = 0%
2013-09-10 20:53:50,516 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 52.53 sec
2013-09-10 20:53:51,523 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 52.53 sec
2013-09-10 20:53:52,538 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 52.53 sec
2013-09-10 20:53:53,544 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 52.53 sec
2013-09-10 20:53:54,550 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 52.53 sec
2013-09-10 20:53:55,556 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 52.53 sec
2013-09-10 20:53:56,562 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 52.53 sec
2013-09-10 20:53:57,568 Stage-1 map = 57%, reduce = 8%, Cumulative CPU 52.53 sec
2013-09-10 20:53:58,574 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 52.53 sec
2013-09-10 20:53:59,579 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 52.53 sec
2013-09-10 20:54:00,585 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 52.53 sec
2013-09-10 20:54:01,592 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 52.53 sec
2013-09-10 20:54:02,598 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 52.53 sec
2013-09-10 20:54:03,604 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 52.53 sec
2013-09-10 20:54:04,610 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 52.53 sec
2013-09-10 20:54:05,616 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 52.53 sec
2013-09-10 20:54:06,622 Stage-1 map = 92%, reduce = 17%, Cumulative CPU 52.53 sec
2013-09-10 20:54:07,628 Stage-1 map = 92%, reduce = 17%, Cumulative CPU 52.53 sec
2013-09-10 20:54:08,634 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 77.54 sec
2013-09-10 20:54:09,640 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 77.54 sec
2013-09-10 20:54:10,645 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 104.77 sec
2013-09-10 20:54:11,650 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 104.77 sec
2013-09-10 20:54:12,655 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 104.77 sec
2013-09-10 20:54:13,661 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 104.77 sec
2013-09-10 20:54:14,666 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 104.77 sec
2013-09-10 20:54:15,671 Stage-1 map = 100%, reduce = 50%, Cumulative CPU 104.77 sec
2013-09-10 20:54:16,677 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 104.77 sec
2013-09-10 20:54:17,682 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 104.77 sec
2013-09-10 20:54:18,688 Stage-1 map = 100%, reduce = 70%, Cumulative CPU 104.77 sec
2013-09-10 20:54:19,693 Stage-1 map = 100%, reduce = 73%, Cumulative CPU 104.77 sec
2013-09-10 20:54:20,699 Stage-1 map = 100%, reduce = 73%, Cumulative CPU 104.77 sec
2013-09-10 20:54:21,704 Stage-1 map = 100%, reduce = 77%, Cumulative CPU 104.77 sec
2013-09-10 20:54:23,614 Stage-1 map = 100%, reduce = 81%, Cumulative CPU 104.77 sec
2013-09-10 20:54:24,620 Stage-1 map = 100%, reduce = 84%, Cumulative CPU 104.77 sec
2013-09-10 20:54:25,626 Stage-1 map = 100%, reduce = 86%, Cumulative CPU 104.77 sec
2013-09-10 20:54:26,632 Stage-1 map = 100%, reduce = 86%, Cumulative CPU 104.77 sec
2013-09-10 20:54:27,638 Stage-1 map = 100%, reduce = 90%, Cumulative CPU 104.77 sec
2013-09-10 20:54:28,643 Stage-1 map = 100%, reduce = 95%, Cumulative CPU 104.77 sec
2013-09-10 20:54:29,649 Stage-1 map = 100%, reduce = 95%, Cumulative CPU 104.77 sec
2013-09-10 20:54:30,656 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 146.68 sec
2013-09-10 20:54:31,662 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 146.68 sec
MapReduce Total cumulative CPU time: 2 minutes 26 seconds 680 msec
Ended Job = job_201309101627_0190
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0191
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:54:35,128 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:54:48,165 Stage-2 map = 28%, reduce = 0%
2013-09-10 20:54:54,183 Stage-2 map = 50%, reduce = 0%, Cumulative CPU 20.36 sec
2013-09-10 20:54:55,188 Stage-2 map = 50%, reduce = 0%, Cumulative CPU 20.36 sec
2013-09-10 20:54:56,192 Stage-2 map = 50%, reduce = 0%, Cumulative CPU 20.36 sec
2013-09-10 20:54:57,197 Stage-2 map = 50%, reduce = 0%, Cumulative CPU 20.36 sec
2013-09-10 20:54:58,202 Stage-2 map = 50%, reduce = 0%, Cumulative CPU 20.36 sec
2013-09-10 20:54:59,207 Stage-2 map = 50%, reduce = 0%, Cumulative CPU 20.36 sec
2013-09-10 20:55:00,211 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 20.36 sec
2013-09-10 20:55:01,216 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 20.36 sec
2013-09-10 20:55:02,220 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 20.36 sec
2013-09-10 20:55:03,224 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 20.36 sec
2013-09-10 20:55:04,229 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 20.36 sec
2013-09-10 20:55:05,233 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 20.36 sec
2013-09-10 20:55:06,238 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 20.36 sec
2013-09-10 20:55:07,242 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 20.36 sec
2013-09-10 20:55:08,247 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 20.36 sec
2013-09-10 20:55:09,251 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 20.36 sec
2013-09-10 20:55:10,256 Stage-2 map = 78%, reduce = 0%, Cumulative CPU 20.36 sec
2013-09-10 20:55:11,261 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 42.54 sec
2013-09-10 20:55:12,265 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 42.54 sec
2013-09-10 20:55:13,270 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 42.54 sec
2013-09-10 20:55:14,274 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 42.54 sec
2013-09-10 20:55:15,286 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 42.54 sec
2013-09-10 20:55:16,291 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 42.54 sec
2013-09-10 20:55:17,296 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 42.54 sec
2013-09-10 20:55:18,301 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 42.54 sec
2013-09-10 20:55:19,307 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 42.54 sec
2013-09-10 20:55:20,311 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 42.54 sec
2013-09-10 20:55:21,316 Stage-2 map = 100%, reduce = 69%, Cumulative CPU 42.54 sec
2013-09-10 20:55:22,321 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 48.86 sec
2013-09-10 20:55:23,326 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 48.86 sec
2013-09-10 20:55:24,330 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 48.86 sec
MapReduce Total cumulative CPU time: 48 seconds 860 msec
Ended Job = job_201309101627_0191
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 146.68 sec HDFS Read: 84944733 HDFS Write: 241346048 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 48.86 sec HDFS Read: 241349358 HDFS Write: 268 SUCCESS
Total MapReduce CPU Time Spent: 3 minutes 15 seconds 540 msec
OK
Time taken: 121.047 seconds, Fetched: 10 row(s)
hive> quit;
-- an even more complex aggregation; not worth running on large tables.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_506@mturlrep13_201309102055_4203966.txt
hive> ;
hive> quit;
times: 1
query: SELECT UserID FROM hits_10m WHERE UserID = 12345678901234567890;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_956@mturlrep13_201309102055_26161103.txt
hive> SELECT UserID FROM hits_10m WHERE UserID = 12345678901234567890;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0192
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 0
2013-09-10 20:55:43,857 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:55:48,887 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 7.59 sec
2013-09-10 20:55:49,894 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 7.59 sec
2013-09-10 20:55:50,902 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 7.59 sec
2013-09-10 20:55:51,908 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 7.59 sec
2013-09-10 20:55:52,914 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 7.59 sec
2013-09-10 20:55:53,920 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 15.27 sec
2013-09-10 20:55:54,925 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 15.27 sec
2013-09-10 20:55:55,931 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 15.27 sec
MapReduce Total cumulative CPU time: 15 seconds 270 msec
Ended Job = job_201309101627_0192
MapReduce Jobs Launched:
Job 0: Map: 4 Cumulative CPU: 15.27 sec HDFS Read: 57312623 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 15 seconds 270 msec
OK
Time taken: 21.635 seconds
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_2371@mturlrep13_201309102055_813101102.txt
hive> ;
hive> quit;
times: 2
query: SELECT UserID FROM hits_10m WHERE UserID = 12345678901234567890;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_2777@mturlrep13_201309102056_2011806893.txt
hive> SELECT UserID FROM hits_10m WHERE UserID = 12345678901234567890;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0193
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 0
2013-09-10 20:56:09,574 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:56:13,596 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 7.59 sec
2013-09-10 20:56:14,604 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 7.59 sec
2013-09-10 20:56:15,610 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 7.59 sec
2013-09-10 20:56:16,616 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 7.59 sec
2013-09-10 20:56:17,621 Stage-1 map = 75%, reduce = 0%, Cumulative CPU 11.18 sec
2013-09-10 20:56:18,626 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 15.35 sec
2013-09-10 20:56:19,632 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 15.35 sec
MapReduce Total cumulative CPU time: 15 seconds 350 msec
Ended Job = job_201309101627_0193
MapReduce Jobs Launched:
Job 0: Map: 4 Cumulative CPU: 15.35 sec HDFS Read: 57312623 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 15 seconds 350 msec
OK
Time taken: 18.086 seconds
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_3888@mturlrep13_201309102056_1592116097.txt
hive> ;
hive> quit;
times: 3
query: SELECT UserID FROM hits_10m WHERE UserID = 12345678901234567890;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_4301@mturlrep13_201309102056_499033559.txt
hive> SELECT UserID FROM hits_10m WHERE UserID = 12345678901234567890;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0194
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 0
2013-09-10 20:56:32,600 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:56:37,626 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 7.67 sec
2013-09-10 20:56:38,633 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 7.67 sec
2013-09-10 20:56:39,640 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 7.67 sec
2013-09-10 20:56:40,645 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 7.67 sec
2013-09-10 20:56:41,651 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 15.06 sec
2013-09-10 20:56:42,656 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 15.06 sec
2013-09-10 20:56:43,661 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 15.06 sec
MapReduce Total cumulative CPU time: 15 seconds 60 msec
Ended Job = job_201309101627_0194
MapReduce Jobs Launched:
Job 0: Map: 4 Cumulative CPU: 15.06 sec HDFS Read: 57312623 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 15 seconds 60 msec
OK
Time taken: 18.296 seconds
hive> quit;
-- heavy filtering on a UInt64-typed column.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_5720@mturlrep13_201309102056_94435251.txt
hive> ;
hive> quit;
times: 1
query: SELECT count(*) AS c FROM hits_10m WHERE URL LIKE '%metrika%';
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_6130@mturlrep13_201309102056_174777183.txt
hive> SELECT count(*) AS c FROM hits_10m WHERE URL LIKE '%metrika%';;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0195
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 20:57:03,409 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:57:10,436 Stage-1 map = 43%, reduce = 0%
2013-09-10 20:57:11,449 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.79 sec
2013-09-10 20:57:12,456 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.79 sec
2013-09-10 20:57:13,463 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.79 sec
2013-09-10 20:57:14,468 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.79 sec
2013-09-10 20:57:15,474 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.79 sec
2013-09-10 20:57:16,480 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.79 sec
2013-09-10 20:57:17,486 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.79 sec
2013-09-10 20:57:18,492 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 17.79 sec
2013-09-10 20:57:19,497 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.67 sec
2013-09-10 20:57:20,502 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.67 sec
2013-09-10 20:57:21,506 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.67 sec
2013-09-10 20:57:22,511 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.67 sec
2013-09-10 20:57:23,517 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 35.67 sec
2013-09-10 20:57:24,524 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 37.81 sec
2013-09-10 20:57:25,529 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 37.81 sec
MapReduce Total cumulative CPU time: 37 seconds 810 msec
Ended Job = job_201309101627_0195
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 37.81 sec HDFS Read: 109451651 HDFS Write: 5 SUCCESS
Total MapReduce CPU Time Spent: 37 seconds 810 msec
OK
8428
Time taken: 32.044 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_7386@mturlrep13_201309102057_110374524.txt
hive> ;
hive> quit;
times: 2
query: SELECT count(*) AS c FROM hits_10m WHERE URL LIKE '%metrika%';
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_7806@mturlrep13_201309102057_1983544680.txt
hive> SELECT count(*) AS c FROM hits_10m WHERE URL LIKE '%metrika%';;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0196
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 20:57:38,517 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:57:46,554 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.33 sec
2013-09-10 20:57:47,562 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.33 sec
2013-09-10 20:57:48,570 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.33 sec
2013-09-10 20:57:49,576 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.33 sec
2013-09-10 20:57:50,582 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.33 sec
2013-09-10 20:57:51,588 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.33 sec
2013-09-10 20:57:52,594 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 17.33 sec
2013-09-10 20:57:53,601 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 34.87 sec
2013-09-10 20:57:54,605 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 34.87 sec
2013-09-10 20:57:55,610 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 34.87 sec
2013-09-10 20:57:56,615 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 34.87 sec
2013-09-10 20:57:57,620 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 34.87 sec
2013-09-10 20:57:58,625 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 34.87 sec
2013-09-10 20:57:59,632 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 36.29 sec
2013-09-10 20:58:00,637 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 36.29 sec
MapReduce Total cumulative CPU time: 36 seconds 290 msec
Ended Job = job_201309101627_0196
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 36.29 sec HDFS Read: 109451651 HDFS Write: 5 SUCCESS
Total MapReduce CPU Time Spent: 36 seconds 290 msec
OK
8428
Time taken: 29.413 seconds, Fetched: 1 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_9045@mturlrep13_201309102058_1522105504.txt
hive> ;
hive> quit;
times: 3
query: SELECT count(*) AS c FROM hits_10m WHERE URL LIKE '%metrika%';
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_9464@mturlrep13_201309102058_1256270560.txt
hive> SELECT count(*) AS c FROM hits_10m WHERE URL LIKE '%metrika%';;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0197
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 20:58:13,484 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:58:21,525 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.01 sec
2013-09-10 20:58:22,534 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.01 sec
2013-09-10 20:58:23,540 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.01 sec
2013-09-10 20:58:24,546 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.01 sec
2013-09-10 20:58:25,551 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.01 sec
2013-09-10 20:58:26,556 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.01 sec
2013-09-10 20:58:27,562 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.01 sec
2013-09-10 20:58:28,568 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 34.97 sec
2013-09-10 20:58:29,572 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 34.97 sec
2013-09-10 20:58:30,577 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 34.97 sec
2013-09-10 20:58:31,582 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 34.97 sec
2013-09-10 20:58:32,587 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 34.97 sec
2013-09-10 20:58:33,591 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 34.97 sec
2013-09-10 20:58:34,599 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 36.93 sec
2013-09-10 20:58:35,604 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 36.93 sec
MapReduce Total cumulative CPU time: 36 seconds 930 msec
Ended Job = job_201309101627_0197
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 36.93 sec HDFS Read: 109451651 HDFS Write: 5 SUCCESS
Total MapReduce CPU Time Spent: 36 seconds 930 msec
OK
8428
Time taken: 29.432 seconds, Fetched: 1 row(s)
hive> quit;
-- filtering by substring search within a string.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_10706@mturlrep13_201309102058_1880832226.txt
hive> ;
hive> quit;
times: 1
query: SELECT SearchPhrase, MAX(URL), count(*) AS c FROM hits_10m WHERE URL LIKE '%metrika%' AND SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_11132@mturlrep13_201309102058_286164766.txt
hive> SELECT SearchPhrase, MAX(URL), count(*) AS c FROM hits_10m WHERE URL LIKE '%metrika%' AND SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0198
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:58:55,183 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:59:02,210 Stage-1 map = 36%, reduce = 0%
2013-09-10 20:59:03,222 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.12 sec
2013-09-10 20:59:04,230 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.12 sec
2013-09-10 20:59:05,237 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.12 sec
2013-09-10 20:59:06,243 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.12 sec
2013-09-10 20:59:07,248 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.12 sec
2013-09-10 20:59:08,253 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.12 sec
2013-09-10 20:59:09,258 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.12 sec
2013-09-10 20:59:10,265 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.12 sec
2013-09-10 20:59:11,270 Stage-1 map = 84%, reduce = 17%, Cumulative CPU 19.12 sec
2013-09-10 20:59:12,276 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.14 sec
2013-09-10 20:59:13,281 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.14 sec
2013-09-10 20:59:14,286 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.14 sec
2013-09-10 20:59:15,291 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.14 sec
2013-09-10 20:59:16,296 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.14 sec
2013-09-10 20:59:17,304 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 42.42 sec
2013-09-10 20:59:18,311 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 42.42 sec
MapReduce Total cumulative CPU time: 42 seconds 420 msec
Ended Job = job_201309101627_0198
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0199
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 20:59:20,834 Stage-2 map = 0%, reduce = 0%
2013-09-10 20:59:22,842 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.78 sec
2013-09-10 20:59:23,847 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.78 sec
2013-09-10 20:59:24,852 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.78 sec
2013-09-10 20:59:25,856 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.78 sec
2013-09-10 20:59:26,860 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.78 sec
2013-09-10 20:59:27,865 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.78 sec
2013-09-10 20:59:28,870 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.78 sec
2013-09-10 20:59:29,875 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 0.78 sec
2013-09-10 20:59:30,881 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.17 sec
2013-09-10 20:59:31,886 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.17 sec
MapReduce Total cumulative CPU time: 2 seconds 170 msec
Ended Job = job_201309101627_0199
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 42.42 sec HDFS Read: 136675723 HDFS Write: 5172 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.17 sec HDFS Read: 5941 HDFS Write: 984 SUCCESS
Total MapReduce CPU Time Spent: 44 seconds 590 msec
OK
Time taken: 46.47 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_13069@mturlrep13_201309102059_1992501004.txt
hive> ;
hive> quit;
times: 2
query: SELECT SearchPhrase, MAX(URL), count(*) AS c FROM hits_10m WHERE URL LIKE '%metrika%' AND SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_13485@mturlrep13_201309102059_1401379353.txt
hive> SELECT SearchPhrase, MAX(URL), count(*) AS c FROM hits_10m WHERE URL LIKE '%metrika%' AND SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0200
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 20:59:44,971 Stage-1 map = 0%, reduce = 0%
2013-09-10 20:59:53,007 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.27 sec
2013-09-10 20:59:54,015 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.27 sec
2013-09-10 20:59:55,023 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.27 sec
2013-09-10 20:59:56,030 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.27 sec
2013-09-10 20:59:57,036 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.27 sec
2013-09-10 20:59:58,042 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.27 sec
2013-09-10 20:59:59,048 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.27 sec
2013-09-10 21:00:00,055 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.27 sec
2013-09-10 21:00:01,061 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 36.85 sec
2013-09-10 21:00:02,067 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 36.85 sec
2013-09-10 21:00:03,072 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 36.85 sec
2013-09-10 21:00:04,078 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 36.85 sec
2013-09-10 21:00:05,083 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 36.85 sec
2013-09-10 21:00:06,091 Stage-1 map = 100%, reduce = 58%, Cumulative CPU 38.79 sec
2013-09-10 21:00:07,097 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 40.76 sec
2013-09-10 21:00:08,103 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 40.76 sec
MapReduce Total cumulative CPU time: 40 seconds 760 msec
Ended Job = job_201309101627_0200
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0201
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 21:00:10,622 Stage-2 map = 0%, reduce = 0%
2013-09-10 21:00:12,630 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.79 sec
2013-09-10 21:00:13,635 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.79 sec
2013-09-10 21:00:14,640 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.79 sec
2013-09-10 21:00:15,645 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.79 sec
2013-09-10 21:00:16,650 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.79 sec
2013-09-10 21:00:17,655 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.79 sec
2013-09-10 21:00:18,661 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.79 sec
2013-09-10 21:00:19,666 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 0.79 sec
2013-09-10 21:00:20,675 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.17 sec
2013-09-10 21:00:21,680 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.17 sec
MapReduce Total cumulative CPU time: 2 seconds 170 msec
Ended Job = job_201309101627_0201
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 40.76 sec HDFS Read: 136675723 HDFS Write: 5172 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.17 sec HDFS Read: 5941 HDFS Write: 984 SUCCESS
Total MapReduce CPU Time Spent: 42 seconds 930 msec
OK
Time taken: 44.122 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_16145@mturlrep13_201309102100_902755262.txt
hive> ;
hive> quit;
times: 3
query: SELECT SearchPhrase, MAX(URL), count(*) AS c FROM hits_10m WHERE URL LIKE '%metrika%' AND SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_16578@mturlrep13_201309102100_20618285.txt
hive> SELECT SearchPhrase, MAX(URL), count(*) AS c FROM hits_10m WHERE URL LIKE '%metrika%' AND SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0202
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 21:00:35,897 Stage-1 map = 0%, reduce = 0%
2013-09-10 21:00:42,923 Stage-1 map = 36%, reduce = 0%
2013-09-10 21:00:43,936 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.83 sec
2013-09-10 21:00:44,944 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.83 sec
2013-09-10 21:00:45,953 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.83 sec
2013-09-10 21:00:46,959 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.83 sec
2013-09-10 21:00:47,965 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.83 sec
2013-09-10 21:00:48,971 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.83 sec
2013-09-10 21:00:49,977 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.83 sec
2013-09-10 21:00:50,984 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.24 sec
2013-09-10 21:00:51,989 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.24 sec
2013-09-10 21:00:52,995 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.24 sec
2013-09-10 21:00:54,000 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.51 sec
2013-09-10 21:00:55,006 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.51 sec
2013-09-10 21:00:56,012 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.51 sec
2013-09-10 21:00:57,019 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 41.27 sec
2013-09-10 21:00:58,025 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 41.27 sec
2013-09-10 21:00:59,031 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 41.27 sec
MapReduce Total cumulative CPU time: 41 seconds 270 msec
Ended Job = job_201309101627_0202
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0203
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 21:01:01,551 Stage-2 map = 0%, reduce = 0%
2013-09-10 21:01:03,560 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec
2013-09-10 21:01:04,565 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec
2013-09-10 21:01:05,571 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec
2013-09-10 21:01:06,576 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec
2013-09-10 21:01:07,581 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec
2013-09-10 21:01:08,586 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec
2013-09-10 21:01:09,591 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec
2013-09-10 21:01:10,596 Stage-2 map = 100%, reduce = 33%, Cumulative CPU 0.74 sec
2013-09-10 21:01:11,602 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.13 sec
2013-09-10 21:01:12,607 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.13 sec
2013-09-10 21:01:13,612 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.13 sec
MapReduce Total cumulative CPU time: 2 seconds 130 msec
Ended Job = job_201309101627_0203
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 41.27 sec HDFS Read: 136675723 HDFS Write: 5172 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.13 sec HDFS Read: 5941 HDFS Write: 984 SUCCESS
Total MapReduce CPU Time Spent: 43 seconds 400 msec
OK
Time taken: 46.136 seconds, Fetched: 10 row(s)
hive> quit;
-- retrieving large columns, filtering by a string.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_18552@mturlrep13_201309102101_957503081.txt
hive> ;
hive> quit;
times: 1
query: SELECT SearchPhrase, MAX(URL), MAX(Title), count(*) AS c, count(DISTINCT UserID) FROM hits_10m WHERE Title LIKE '%Яндекс%' AND URL NOT LIKE '%.yandex.%' AND SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_18999@mturlrep13_201309102101_604919900.txt
hive> SELECT SearchPhrase, MAX(URL), MAX(Title), count(*) AS c, count(DISTINCT UserID) FROM hits_10m WHERE Title LIKE '%Яндекс%' AND URL NOT LIKE '%.yandex.%' AND SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0204
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 21:01:34,309 Stage-1 map = 0%, reduce = 0%
2013-09-10 21:01:41,339 Stage-1 map = 22%, reduce = 0%
2013-09-10 21:01:44,352 Stage-1 map = 43%, reduce = 0%
2013-09-10 21:01:45,365 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.13 sec
2013-09-10 21:01:46,373 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.13 sec
2013-09-10 21:01:47,381 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.13 sec
2013-09-10 21:01:48,388 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.13 sec
2013-09-10 21:01:49,395 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.13 sec
2013-09-10 21:01:50,402 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.13 sec
2013-09-10 21:01:51,408 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.13 sec
2013-09-10 21:01:52,414 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 24.13 sec
2013-09-10 21:01:53,420 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 24.13 sec
2013-09-10 21:01:54,426 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 43.07 sec
2013-09-10 21:01:55,433 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 46.23 sec
2013-09-10 21:01:56,439 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 50.07 sec
2013-09-10 21:01:57,446 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 50.07 sec
2013-09-10 21:01:58,454 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 53.11 sec
2013-09-10 21:01:59,460 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 53.11 sec
2013-09-10 21:02:00,466 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 53.11 sec
MapReduce Total cumulative CPU time: 53 seconds 110 msec
Ended Job = job_201309101627_0204
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0205
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 21:02:04,015 Stage-2 map = 0%, reduce = 0%
2013-09-10 21:02:05,020 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.91 sec
2013-09-10 21:02:06,026 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.91 sec
2013-09-10 21:02:07,032 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.91 sec
2013-09-10 21:02:08,036 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.91 sec
2013-09-10 21:02:09,041 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.91 sec
2013-09-10 21:02:10,046 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.91 sec
2013-09-10 21:02:11,051 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.91 sec
2013-09-10 21:02:12,056 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.91 sec
2013-09-10 21:02:13,077 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.29 sec
2013-09-10 21:02:14,084 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.29 sec
2013-09-10 21:02:15,089 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.29 sec
MapReduce Total cumulative CPU time: 2 seconds 290 msec
Ended Job = job_201309101627_0205
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 53.11 sec HDFS Read: 298803179 HDFS Write: 12221 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.29 sec HDFS Read: 12990 HDFS Write: 2646 SUCCESS
Total MapReduce CPU Time Spent: 55 seconds 400 msec
OK
Time taken: 50.819 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_21073@mturlrep13_201309102102_804313256.txt
hive> ;
hive> quit;
times: 2
query: SELECT SearchPhrase, MAX(URL), MAX(Title), count(*) AS c, count(DISTINCT UserID) FROM hits_10m WHERE Title LIKE '%Яндекс%' AND URL NOT LIKE '%.yandex.%' AND SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_21509@mturlrep13_201309102102_1172673920.txt
hive> SELECT SearchPhrase, MAX(URL), MAX(Title), count(*) AS c, count(DISTINCT UserID) FROM hits_10m WHERE Title LIKE '%Яндекс%' AND URL NOT LIKE '%.yandex.%' AND SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0206
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 21:02:29,084 Stage-1 map = 0%, reduce = 0%
2013-09-10 21:02:36,113 Stage-1 map = 29%, reduce = 0%
2013-09-10 21:02:39,134 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.74 sec
2013-09-10 21:02:40,142 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.74 sec
2013-09-10 21:02:41,150 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.74 sec
2013-09-10 21:02:42,156 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.74 sec
2013-09-10 21:02:43,163 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.74 sec
2013-09-10 21:02:44,170 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.74 sec
2013-09-10 21:02:45,176 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.74 sec
2013-09-10 21:02:46,181 Stage-1 map = 64%, reduce = 8%, Cumulative CPU 24.74 sec
2013-09-10 21:02:47,186 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 24.74 sec
2013-09-10 21:02:48,192 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 24.74 sec
2013-09-10 21:02:49,197 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.69 sec
2013-09-10 21:02:50,203 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.69 sec
2013-09-10 21:02:51,208 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.69 sec
2013-09-10 21:02:52,215 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 52.98 sec
2013-09-10 21:02:53,222 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 52.98 sec
2013-09-10 21:02:54,227 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 52.98 sec
MapReduce Total cumulative CPU time: 52 seconds 980 msec
Ended Job = job_201309101627_0206
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0207
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 21:02:58,034 Stage-2 map = 0%, reduce = 0%
2013-09-10 21:02:59,039 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec
2013-09-10 21:03:00,045 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec
2013-09-10 21:03:01,050 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec
2013-09-10 21:03:02,055 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec
2013-09-10 21:03:03,061 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec
2013-09-10 21:03:04,066 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec
2013-09-10 21:03:05,071 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec
2013-09-10 21:03:06,095 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.74 sec
2013-09-10 21:03:07,100 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.16 sec
2013-09-10 21:03:08,106 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.16 sec
2013-09-10 21:03:09,112 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.16 sec
MapReduce Total cumulative CPU time: 2 seconds 160 msec
Ended Job = job_201309101627_0207
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 52.98 sec HDFS Read: 298803179 HDFS Write: 12221 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.16 sec HDFS Read: 12990 HDFS Write: 2646 SUCCESS
Total MapReduce CPU Time Spent: 55 seconds 140 msec
OK
Time taken: 48.425 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_23586@mturlrep13_201309102103_1680692530.txt
hive> ;
hive> quit;
times: 3
query: SELECT SearchPhrase, MAX(URL), MAX(Title), count(*) AS c, count(DISTINCT UserID) FROM hits_10m WHERE Title LIKE '%Яндекс%' AND URL NOT LIKE '%.yandex.%' AND SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_23990@mturlrep13_201309102103_1640399179.txt
hive> SELECT SearchPhrase, MAX(URL), MAX(Title), count(*) AS c, count(DISTINCT UserID) FROM hits_10m WHERE Title LIKE '%Яндекс%' AND URL NOT LIKE '%.yandex.%' AND SearchPhrase != '' GROUP BY SearchPhrase ORDER BY c DESC LIMIT 10;;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0208
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2
2013-09-10 21:03:23,428 Stage-1 map = 0%, reduce = 0%
2013-09-10 21:03:30,457 Stage-1 map = 29%, reduce = 0%
2013-09-10 21:03:33,485 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.39 sec
2013-09-10 21:03:34,492 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.39 sec
2013-09-10 21:03:35,500 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.39 sec
2013-09-10 21:03:36,507 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.39 sec
2013-09-10 21:03:37,513 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.39 sec
2013-09-10 21:03:38,521 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.39 sec
2013-09-10 21:03:39,526 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.39 sec
2013-09-10 21:03:40,532 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 24.39 sec
2013-09-10 21:03:41,538 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 24.39 sec
2013-09-10 21:03:42,544 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 24.39 sec
2013-09-10 21:03:43,550 Stage-1 map = 89%, reduce = 17%, Cumulative CPU 35.95 sec
2013-09-10 21:03:44,556 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.76 sec
2013-09-10 21:03:45,562 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 48.76 sec
2013-09-10 21:03:46,570 Stage-1 map = 100%, reduce = 58%, Cumulative CPU 50.88 sec
2013-09-10 21:03:47,576 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 52.9 sec
2013-09-10 21:03:48,582 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 52.9 sec
MapReduce Total cumulative CPU time: 52 seconds 900 msec
Ended Job = job_201309101627_0208
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0209
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2013-09-10 21:03:52,133 Stage-2 map = 0%, reduce = 0%
2013-09-10 21:03:53,139 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.85 sec
2013-09-10 21:03:54,145 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.85 sec
2013-09-10 21:03:55,151 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.85 sec
2013-09-10 21:03:56,158 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.85 sec
2013-09-10 21:03:57,164 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.85 sec
2013-09-10 21:03:58,169 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.85 sec
2013-09-10 21:03:59,174 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.85 sec
2013-09-10 21:04:00,179 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.85 sec
2013-09-10 21:04:01,185 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.23 sec
2013-09-10 21:04:02,190 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.23 sec
2013-09-10 21:04:03,196 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.23 sec
MapReduce Total cumulative CPU time: 2 seconds 230 msec
Ended Job = job_201309101627_0209
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 2 Cumulative CPU: 52.9 sec HDFS Read: 298803179 HDFS Write: 12221 SUCCESS
Job 1: Map: 1 Reduce: 1 Cumulative CPU: 2.23 sec HDFS Read: 12990 HDFS Write: 2646 SUCCESS
Total MapReduce CPU Time Spent: 55 seconds 130 msec
OK
Time taken: 48.259 seconds, Fetched: 10 row(s)
hive> quit;
-- slightly more columns.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_26079@mturlrep13_201309102104_979944715.txt
hive> ;
hive> quit;
times: 1
query: SELECT * FROM hits_10m WHERE URL LIKE '%metrika%' ORDER BY EventTime LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_26503@mturlrep13_201309102104_1111113065.txt
hive> SELECT * FROM hits_10m WHERE URL LIKE '%metrika%' ORDER BY EventTime LIMIT 10;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0210
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 21:04:24,063 Stage-1 map = 0%, reduce = 0%
2013-09-10 21:04:34,103 Stage-1 map = 7%, reduce = 0%
2013-09-10 21:04:37,115 Stage-1 map = 14%, reduce = 0%
2013-09-10 21:04:40,129 Stage-1 map = 22%, reduce = 0%
2013-09-10 21:04:43,157 Stage-1 map = 29%, reduce = 0%
2013-09-10 21:04:49,179 Stage-1 map = 36%, reduce = 0%
2013-09-10 21:04:52,190 Stage-1 map = 43%, reduce = 0%
2013-09-10 21:04:54,206 Stage-1 map = 46%, reduce = 0%, Cumulative CPU 83.32 sec
2013-09-10 21:04:55,213 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 87.85 sec
2013-09-10 21:04:56,219 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 87.85 sec
2013-09-10 21:04:57,225 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 87.85 sec
2013-09-10 21:04:58,230 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 87.85 sec
2013-09-10 21:04:59,241 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 87.85 sec
2013-09-10 21:05:00,250 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 87.85 sec
2013-09-10 21:05:01,256 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 87.85 sec
2013-09-10 21:05:02,262 Stage-1 map = 50%, reduce = 17%, Cumulative CPU 87.85 sec
2013-09-10 21:05:03,268 Stage-1 map = 50%, reduce = 17%, Cumulative CPU 87.85 sec
2013-09-10 21:05:04,274 Stage-1 map = 50%, reduce = 17%, Cumulative CPU 87.85 sec
2013-09-10 21:05:05,302 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 87.85 sec
2013-09-10 21:05:06,307 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 87.85 sec
2013-09-10 21:05:07,312 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 87.85 sec
2013-09-10 21:05:08,317 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 87.85 sec
2013-09-10 21:05:09,322 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 87.85 sec
2013-09-10 21:05:10,328 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 87.85 sec
2013-09-10 21:05:11,333 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 87.85 sec
2013-09-10 21:05:12,338 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 87.85 sec
2013-09-10 21:05:13,343 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 87.85 sec
2013-09-10 21:05:14,348 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 87.85 sec
2013-09-10 21:05:15,354 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 87.85 sec
2013-09-10 21:05:16,367 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 87.85 sec
2013-09-10 21:05:17,373 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 87.85 sec
2013-09-10 21:05:18,377 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 87.85 sec
2013-09-10 21:05:19,382 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 87.85 sec
2013-09-10 21:05:20,387 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 87.85 sec
2013-09-10 21:05:21,392 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 87.85 sec
2013-09-10 21:05:22,397 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 87.85 sec
2013-09-10 21:05:23,401 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 129.86 sec
2013-09-10 21:05:24,406 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 129.86 sec
2013-09-10 21:05:25,411 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 167.87 sec
2013-09-10 21:05:26,416 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 167.87 sec
2013-09-10 21:05:27,422 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 171.01 sec
2013-09-10 21:05:28,427 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 171.01 sec
2013-09-10 21:05:29,433 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 171.01 sec
MapReduce Total cumulative CPU time: 2 minutes 51 seconds 10 msec
Ended Job = job_201309101627_0210
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 171.01 sec HDFS Read: 1082943442 HDFS Write: 5318 SUCCESS
Total MapReduce CPU Time Spent: 2 minutes 51 seconds 10 msec
OK
Time taken: 75.657 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_28589@mturlrep13_201309102105_518152082.txt
hive> ;
hive> quit;
times: 2
query: SELECT * FROM hits_10m WHERE URL LIKE '%metrika%' ORDER BY EventTime LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_28997@mturlrep13_201309102105_1789871872.txt
hive> SELECT * FROM hits_10m WHERE URL LIKE '%metrika%' ORDER BY EventTime LIMIT 10;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0211
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 21:05:43,799 Stage-1 map = 0%, reduce = 0%
2013-09-10 21:05:50,827 Stage-1 map = 7%, reduce = 0%
2013-09-10 21:05:56,854 Stage-1 map = 14%, reduce = 0%, Cumulative CPU 27.49 sec
2013-09-10 21:05:57,861 Stage-1 map = 14%, reduce = 0%, Cumulative CPU 27.49 sec
2013-09-10 21:05:58,868 Stage-1 map = 14%, reduce = 0%, Cumulative CPU 27.49 sec
2013-09-10 21:05:59,891 Stage-1 map = 22%, reduce = 0%, Cumulative CPU 27.49 sec
2013-09-10 21:06:00,897 Stage-1 map = 22%, reduce = 0%, Cumulative CPU 27.49 sec
2013-09-10 21:06:01,902 Stage-1 map = 22%, reduce = 0%, Cumulative CPU 27.49 sec
2013-09-10 21:06:02,907 Stage-1 map = 29%, reduce = 0%, Cumulative CPU 27.49 sec
2013-09-10 21:06:03,912 Stage-1 map = 29%, reduce = 0%, Cumulative CPU 27.49 sec
2013-09-10 21:06:04,917 Stage-1 map = 29%, reduce = 0%, Cumulative CPU 27.49 sec
2013-09-10 21:06:05,922 Stage-1 map = 36%, reduce = 0%, Cumulative CPU 27.49 sec
2013-09-10 21:06:06,927 Stage-1 map = 36%, reduce = 0%, Cumulative CPU 27.49 sec
2013-09-10 21:06:07,932 Stage-1 map = 36%, reduce = 0%, Cumulative CPU 27.49 sec
2013-09-10 21:06:08,937 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 27.49 sec
2013-09-10 21:06:09,942 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 27.49 sec
2013-09-10 21:06:10,947 Stage-1 map = 43%, reduce = 0%, Cumulative CPU 27.49 sec
2013-09-10 21:06:11,956 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 74.55 sec
2013-09-10 21:06:12,962 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 74.55 sec
2013-09-10 21:06:13,967 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 74.55 sec
2013-09-10 21:06:14,973 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 74.55 sec
2013-09-10 21:06:15,979 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 74.55 sec
2013-09-10 21:06:16,985 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 74.55 sec
2013-09-10 21:06:17,991 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 74.55 sec
2013-09-10 21:06:18,997 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 74.55 sec
2013-09-10 21:06:20,003 Stage-1 map = 54%, reduce = 17%, Cumulative CPU 74.55 sec
2013-09-10 21:06:21,009 Stage-1 map = 54%, reduce = 17%, Cumulative CPU 74.55 sec
2013-09-10 21:06:22,015 Stage-1 map = 54%, reduce = 17%, Cumulative CPU 74.55 sec
2013-09-10 21:06:23,020 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 74.55 sec
2013-09-10 21:06:24,031 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 74.55 sec
2013-09-10 21:06:25,037 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 74.55 sec
2013-09-10 21:06:26,042 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 74.55 sec
2013-09-10 21:06:27,047 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 74.55 sec
2013-09-10 21:06:28,053 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 74.55 sec
2013-09-10 21:06:29,075 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 74.55 sec
2013-09-10 21:06:30,080 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 74.55 sec
2013-09-10 21:06:31,085 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 74.55 sec
2013-09-10 21:06:32,090 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 74.55 sec
2013-09-10 21:06:33,096 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 74.55 sec
2013-09-10 21:06:34,101 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 74.55 sec
2013-09-10 21:06:35,106 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 74.55 sec
2013-09-10 21:06:36,111 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 74.55 sec
2013-09-10 21:06:37,116 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 74.55 sec
2013-09-10 21:06:38,121 Stage-1 map = 92%, reduce = 17%, Cumulative CPU 74.55 sec
2013-09-10 21:06:39,126 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 114.8 sec
2013-09-10 21:06:40,132 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 114.8 sec
2013-09-10 21:06:41,137 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 152.73 sec
2013-09-10 21:06:42,142 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 152.73 sec
2013-09-10 21:06:43,147 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 152.73 sec
2013-09-10 21:06:44,152 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 152.73 sec
2013-09-10 21:06:45,158 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 152.73 sec
2013-09-10 21:06:46,165 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 155.83 sec
2013-09-10 21:06:47,170 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 155.83 sec
MapReduce Total cumulative CPU time: 2 minutes 35 seconds 830 msec
Ended Job = job_201309101627_0211
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 155.83 sec HDFS Read: 1082943442 HDFS Write: 5318 SUCCESS
Total MapReduce CPU Time Spent: 2 minutes 35 seconds 830 msec
OK
Time taken: 72.161 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_30422@mturlrep13_201309102106_1368463100.txt
hive> ;
hive> quit;
times: 3
query: SELECT * FROM hits_10m WHERE URL LIKE '%metrika%' ORDER BY EventTime LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_30864@mturlrep13_201309102106_502369015.txt
hive> SELECT * FROM hits_10m WHERE URL LIKE '%metrika%' ORDER BY EventTime LIMIT 10;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0212
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 21:07:00,709 Stage-1 map = 0%, reduce = 0%
2013-09-10 21:07:11,769 Stage-1 map = 7%, reduce = 0%
2013-09-10 21:07:14,794 Stage-1 map = 14%, reduce = 0%
2013-09-10 21:07:17,807 Stage-1 map = 22%, reduce = 0%
2013-09-10 21:07:20,819 Stage-1 map = 29%, reduce = 0%
2013-09-10 21:07:23,831 Stage-1 map = 32%, reduce = 0%
2013-09-10 21:07:26,842 Stage-1 map = 36%, reduce = 0%
2013-09-10 21:07:29,853 Stage-1 map = 43%, reduce = 0%
2013-09-10 21:07:30,865 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 74.26 sec
2013-09-10 21:07:31,872 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 74.26 sec
2013-09-10 21:07:32,877 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 74.26 sec
2013-09-10 21:07:33,883 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 74.26 sec
2013-09-10 21:07:34,888 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 74.26 sec
2013-09-10 21:07:35,893 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 74.26 sec
2013-09-10 21:07:36,898 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 74.26 sec
2013-09-10 21:07:37,903 Stage-1 map = 50%, reduce = 17%, Cumulative CPU 74.26 sec
2013-09-10 21:07:38,908 Stage-1 map = 54%, reduce = 17%, Cumulative CPU 74.26 sec
2013-09-10 21:07:39,913 Stage-1 map = 54%, reduce = 17%, Cumulative CPU 74.26 sec
2013-09-10 21:07:40,918 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 74.26 sec
2013-09-10 21:07:41,923 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 74.26 sec
2013-09-10 21:07:42,928 Stage-1 map = 57%, reduce = 17%, Cumulative CPU 74.26 sec
2013-09-10 21:07:43,959 Stage-1 map = 61%, reduce = 17%, Cumulative CPU 74.26 sec
2013-09-10 21:07:44,982 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 74.26 sec
2013-09-10 21:07:46,012 Stage-1 map = 65%, reduce = 17%, Cumulative CPU 74.26 sec
2013-09-10 21:07:47,017 Stage-1 map = 69%, reduce = 17%, Cumulative CPU 74.26 sec
2013-09-10 21:07:48,022 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 74.26 sec
2013-09-10 21:07:49,050 Stage-1 map = 73%, reduce = 17%, Cumulative CPU 74.26 sec
2013-09-10 21:07:50,062 Stage-1 map = 76%, reduce = 17%, Cumulative CPU 74.26 sec
2013-09-10 21:07:51,068 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 74.26 sec
2013-09-10 21:07:52,073 Stage-1 map = 80%, reduce = 17%, Cumulative CPU 74.26 sec
2013-09-10 21:07:53,091 Stage-1 map = 84%, reduce = 17%, Cumulative CPU 74.26 sec
2013-09-10 21:07:54,096 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 74.26 sec
2013-09-10 21:07:55,102 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 140.78 sec
2013-09-10 21:07:56,123 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 140.78 sec
2013-09-10 21:07:57,128 Stage-1 map = 88%, reduce = 17%, Cumulative CPU 140.78 sec
2013-09-10 21:07:58,133 Stage-1 map = 93%, reduce = 17%, Cumulative CPU 149.05 sec
2013-09-10 21:07:59,138 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 149.05 sec
2013-09-10 21:08:00,143 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 159.54 sec
2013-09-10 21:08:01,148 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 159.54 sec
2013-09-10 21:08:02,153 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 159.54 sec
2013-09-10 21:08:03,158 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 159.54 sec
2013-09-10 21:08:04,165 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 161.75 sec
2013-09-10 21:08:05,171 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 161.75 sec
2013-09-10 21:08:06,177 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 161.75 sec
MapReduce Total cumulative CPU time: 2 minutes 41 seconds 750 msec
Ended Job = job_201309101627_0212
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 161.75 sec HDFS Read: 1082943442 HDFS Write: 5318 SUCCESS
Total MapReduce CPU Time Spent: 2 minutes 41 seconds 750 msec
OK
Time taken: 73.239 seconds, Fetched: 10 row(s)
hive> quit;
-- bad query - fetching all columns.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_32256@mturlrep13_201309102108_1366898564.txt
hive> ;
hive> quit;
times: 1
query: SELECT SearchPhrase, EventTime FROM hits_10m WHERE SearchPhrase != '' ORDER BY EventTime LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_32681@mturlrep13_201309102108_1470113641.txt
hive> SELECT SearchPhrase, EventTime FROM hits_10m WHERE SearchPhrase != '' ORDER BY EventTime LIMIT 10;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0213
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 21:08:27,012 Stage-1 map = 0%, reduce = 0%
2013-09-10 21:08:34,045 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.79 sec
2013-09-10 21:08:35,053 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.79 sec
2013-09-10 21:08:36,060 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.79 sec
2013-09-10 21:08:37,067 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.79 sec
2013-09-10 21:08:38,073 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.79 sec
2013-09-10 21:08:39,078 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.79 sec
2013-09-10 21:08:40,084 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.79 sec
2013-09-10 21:08:41,089 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 18.79 sec
2013-09-10 21:08:42,096 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 28.06 sec
2013-09-10 21:08:43,100 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.48 sec
2013-09-10 21:08:44,104 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.48 sec
2013-09-10 21:08:45,108 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.48 sec
2013-09-10 21:08:46,113 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.48 sec
2013-09-10 21:08:47,118 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.48 sec
2013-09-10 21:08:48,124 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.48 sec
2013-09-10 21:08:49,130 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 43.58 sec
2013-09-10 21:08:50,136 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 43.58 sec
2013-09-10 21:08:51,141 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 43.58 sec
MapReduce Total cumulative CPU time: 43 seconds 580 msec
Ended Job = job_201309101627_0213
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 43.58 sec HDFS Read: 28228143 HDFS Write: 766 SUCCESS
Total MapReduce CPU Time Spent: 43 seconds 580 msec
OK
Time taken: 33.927 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_1886@mturlrep13_201309102108_2113013029.txt
hive> ;
hive> quit;
times: 2
query: SELECT SearchPhrase, EventTime FROM hits_10m WHERE SearchPhrase != '' ORDER BY EventTime LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_2309@mturlrep13_201309102108_179448674.txt
hive> SELECT SearchPhrase, EventTime FROM hits_10m WHERE SearchPhrase != '' ORDER BY EventTime LIMIT 10;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0214
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 21:09:04,342 Stage-1 map = 0%, reduce = 0%
2013-09-10 21:09:12,378 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.06 sec
2013-09-10 21:09:13,392 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.06 sec
2013-09-10 21:09:14,406 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.06 sec
2013-09-10 21:09:15,411 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.06 sec
2013-09-10 21:09:16,417 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.06 sec
2013-09-10 21:09:17,422 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.06 sec
2013-09-10 21:09:18,428 Stage-1 map = 75%, reduce = 0%, Cumulative CPU 27.59 sec
2013-09-10 21:09:19,434 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.52 sec
2013-09-10 21:09:20,439 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.52 sec
2013-09-10 21:09:21,444 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.52 sec
2013-09-10 21:09:22,450 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.52 sec
2013-09-10 21:09:23,455 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.52 sec
2013-09-10 21:09:24,460 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.52 sec
2013-09-10 21:09:25,465 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.52 sec
2013-09-10 21:09:26,470 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 37.52 sec
2013-09-10 21:09:27,476 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 42.58 sec
2013-09-10 21:09:28,481 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 42.58 sec
MapReduce Total cumulative CPU time: 42 seconds 580 msec
Ended Job = job_201309101627_0214
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 42.58 sec HDFS Read: 28228143 HDFS Write: 766 SUCCESS
Total MapReduce CPU Time Spent: 42 seconds 580 msec
OK
Time taken: 31.299 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_3670@mturlrep13_201309102109_1995808280.txt
hive> ;
hive> quit;
times: 3
query: SELECT SearchPhrase, EventTime FROM hits_10m WHERE SearchPhrase != '' ORDER BY EventTime LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_4075@mturlrep13_201309102109_1289626769.txt
hive> SELECT SearchPhrase, EventTime FROM hits_10m WHERE SearchPhrase != '' ORDER BY EventTime LIMIT 10;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0215
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 21:09:42,640 Stage-1 map = 0%, reduce = 0%
2013-09-10 21:09:49,676 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.1 sec
2013-09-10 21:09:50,683 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.1 sec
2013-09-10 21:09:51,690 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.1 sec
2013-09-10 21:09:52,696 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.1 sec
2013-09-10 21:09:53,703 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.1 sec
2013-09-10 21:09:54,710 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.1 sec
2013-09-10 21:09:55,716 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.1 sec
2013-09-10 21:09:56,722 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.3 sec
2013-09-10 21:09:57,728 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.3 sec
2013-09-10 21:09:58,733 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.3 sec
2013-09-10 21:09:59,738 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.3 sec
2013-09-10 21:10:00,744 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.3 sec
2013-09-10 21:10:01,750 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.3 sec
2013-09-10 21:10:02,756 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.3 sec
2013-09-10 21:10:03,783 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.3 sec
2013-09-10 21:10:04,790 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 43.5 sec
2013-09-10 21:10:05,796 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 43.5 sec
MapReduce Total cumulative CPU time: 43 seconds 500 msec
Ended Job = job_201309101627_0215
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 43.5 sec HDFS Read: 28228143 HDFS Write: 766 SUCCESS
Total MapReduce CPU Time Spent: 43 seconds 500 msec
OK
Time taken: 31.328 seconds, Fetched: 10 row(s)
hive> quit;
-- big sort.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_6065@mturlrep13_201309102110_1165648233.txt
hive> ;
hive> quit;
times: 1
query: SELECT SearchPhrase FROM hits_10m WHERE SearchPhrase != '' ORDER BY SearchPhrase LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_6867@mturlrep13_201309102110_1247268932.txt
hive> SELECT SearchPhrase FROM hits_10m WHERE SearchPhrase != '' ORDER BY SearchPhrase LIMIT 10;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0216
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 21:10:27,797 Stage-1 map = 0%, reduce = 0%
2013-09-10 21:10:34,833 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.67 sec
2013-09-10 21:10:35,841 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.67 sec
2013-09-10 21:10:36,848 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.67 sec
2013-09-10 21:10:37,854 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.67 sec
2013-09-10 21:10:38,860 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.67 sec
2013-09-10 21:10:39,866 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.67 sec
2013-09-10 21:10:40,872 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.67 sec
2013-09-10 21:10:41,878 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.67 sec
2013-09-10 21:10:42,884 Stage-1 map = 96%, reduce = 8%, Cumulative CPU 19.67 sec
2013-09-10 21:10:43,895 Stage-1 map = 100%, reduce = 8%, Cumulative CPU 39.0 sec
2013-09-10 21:10:44,900 Stage-1 map = 100%, reduce = 8%, Cumulative CPU 39.0 sec
2013-09-10 21:10:45,905 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.0 sec
2013-09-10 21:10:46,910 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.0 sec
2013-09-10 21:10:47,915 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.0 sec
2013-09-10 21:10:48,920 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.0 sec
2013-09-10 21:10:49,926 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.0 sec
2013-09-10 21:10:50,930 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 39.0 sec
2013-09-10 21:10:51,937 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 45.08 sec
2013-09-10 21:10:52,943 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 45.08 sec
MapReduce Total cumulative CPU time: 45 seconds 80 msec
Ended Job = job_201309101627_0216
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 45.08 sec HDFS Read: 27820105 HDFS Write: 666 SUCCESS
Total MapReduce CPU Time Spent: 45 seconds 80 msec
OK
ялта интурист
! как одеть трехнедельного ребенка при температуре 20 градусов
! отель rattana beach hotel 3*
! официальный сайт ооо "группа аист"г москва, ул коцюбинского, д 4, офис 343
! официальный сайт ооо "группа аист"г москва, ул коцюбинского, д 4, офис 343
!( центробежный скважинный калибр форумы)
!(!(storm master silmarils))
!(!(storm master silmarils))
!(!(title:(схема sputnik hi 4000)))
!(44-фз о контрактной системе)
Time taken: 35.466 seconds, Fetched: 10 row(s)
hive> quit;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_8346@mturlrep13_201309102110_1245725421.txt
hive> ;
hive> quit;
times: 2
query: SELECT SearchPhrase FROM hits_10m WHERE SearchPhrase != '' ORDER BY SearchPhrase LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_8754@mturlrep13_201309102110_1375499441.txt
hive> SELECT SearchPhrase FROM hits_10m WHERE SearchPhrase != '' ORDER BY SearchPhrase LIMIT 10;;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_8981@mturlrep13_201309102111_72221677.txt
hive> ;
hive> quit;
times: 3
query: SELECT SearchPhrase FROM hits_10m WHERE SearchPhrase != '' ORDER BY SearchPhrase LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_9402@mturlrep13_201309102111_58410451.txt
hive> SELECT SearchPhrase FROM hits_10m WHERE SearchPhrase != '' ORDER BY SearchPhrase LIMIT 10;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0217
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 21:11:13,005 Stage-1 map = 0%, reduce = 0%
2013-09-10 21:11:21,043 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.54 sec
2013-09-10 21:11:22,051 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.54 sec
2013-09-10 21:11:23,058 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.54 sec
2013-09-10 21:11:24,064 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.54 sec
2013-09-10 21:11:25,071 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.54 sec
2013-09-10 21:11:26,077 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.54 sec
2013-09-10 21:11:27,084 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 19.54 sec
2013-09-10 21:11:28,090 Stage-1 map = 97%, reduce = 17%, Cumulative CPU 28.93 sec
2013-09-10 21:11:29,096 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.94 sec
2013-09-10 21:11:30,101 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.94 sec
2013-09-10 21:11:31,106 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.94 sec
2013-09-10 21:11:32,111 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.94 sec
2013-09-10 21:11:33,116 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.94 sec
2013-09-10 21:11:34,121 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.94 sec
2013-09-10 21:11:35,127 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 38.94 sec
2013-09-10 21:11:36,142 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 44.86 sec
2013-09-10 21:11:37,148 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 44.86 sec
2013-09-10 21:11:38,154 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 44.86 sec
MapReduce Total cumulative CPU time: 44 seconds 860 msec
Ended Job = job_201309101627_0217
MapReduce Jobs Launched:
Job 0: Map: 4 Reduce: 1 Cumulative CPU: 44.86 sec HDFS Read: 27820105 HDFS Write: 666 SUCCESS
Total MapReduce CPU Time Spent: 44 seconds 860 msec
OK
ялта интурист
! как одеть трехнедельного ребенка при температуре 20 градусов
! отель rattana beach hotel 3*
! официальный сайт ооо "группа аист"г москва, ул коцюбинского, д 4, офис 343
! официальный сайт ооо "группа аист"г москва, ул коцюбинского, д 4, офис 343
!( центробежный скважинный калибр форумы)
!(!(storm master silmarils))
!(!(storm master silmarils))
!(!(title:(схема sputnik hi 4000)))
!(44-фз о контрактной системе)
Time taken: 32.221 seconds, Fetched: 10 row(s)
hive> quit;
-- large sort by string values.;
status
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_10685@mturlrep13_201309102111_636633590.txt
hive> ;
hive> quit;
times: 1
query: SELECT SearchPhrase, EventTime FROM hits_10m WHERE SearchPhrase != '' ORDER BY EventTime, SearchPhrase LIMIT 10;
spawn hive
Logging initialized using configuration in file:/opt/hive/conf/hive-log4j.properties
Hive history file=/tmp/kartavyy/hive_job_log_kartavyy_11102@mturlrep13_201309102111_1946455694.txt
hive> SELECT SearchPhrase, EventTime FROM hits_10m WHERE SearchPhrase != '' ORDER BY EventTime, SearchPhrase LIMIT 10;;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Kill Command = /usr/libexec/../bin/hadoop job -kill job_201309101627_0218
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1
2013-09-10 21:11:58,014 Stage-1 map = 0%, reduce = 0%
2013-09-10 21:12:05,044 Stage-1 map = 43%, reduce = 0%
2013-09-10 21:12:06,056 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.29 sec
2013-09-10 21:12:07,063 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.29 sec
2013-09-10 21:12:08,070 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.29 sec
2013-09-10 21:12:09,076 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.29 sec
2013-09-10 21:12:10,081 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.29 sec
2013-09-10 21:12:11,087 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.29 sec
2013-09-10 21:12:12,093 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.29 sec
2013-09-10 21:12:13,100 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 20.29 sec
2013-09-10 21:12:14,106 Stage-1 map = 97%, reduce = 8%, Cumulative CPU 30.09 sec
2013-09-10 21:12:15,112 Stage-1 map = 100%, reduce = 8%, Cumulative CPU 40.62 sec
2013-09-10 21:12:16,118 Stage-1 map = 100%, reduce = 8%, Cumulative CPU 40.62 sec
2013-09-10 21:12:17,123 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 40.62 sec
2013-09-10 21:12:18,128 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 40.62 sec
2013-09-10 21:12:19,133 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 40.62 sec
2013-09-10 21:12:20,138 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 40.62 sec
2013-09-10 21:12:21,143 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 40.62 sec
2013-09-10 21:12:22,148 Stage-1 map = 100%, reduce = 17%, Cumulative CPU 40.62 sec
2013-09-10 21:12:23,154 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 47.82 sec
status
spawn hive
restart server: /bin/echo restart
restart