Mirror of https://github.com/ClickHouse/ClickHouse.git (synced 2024-09-29 21:20:49 +00:00)

Commit abc73eb6ba: Merge remote-tracking branch 'origin/master' into CLICKHOUSE-2910
CHANGELOG.md

@@ -1,3 +1,8 @@
+# ClickHouse release 1.1.54381, 2018-05-14
+
+## Bug fixes:
+* Fixed a nodes leak in ZooKeeper when ClickHouse loses connection to ZooKeeper server.
+
 # ClickHouse release 1.1.54380, 2018-04-21
 
 ## New features:

CHANGELOG_RU.md (translated from Russian)

@@ -1,3 +1,8 @@
+# ClickHouse release 1.1.54381, 2018-05-14
+
+## Bug fixes:
+* Fixed a bug leading to a "leak" of metadata in ZooKeeper when the connection to the ZooKeeper server is lost.
+
 # ClickHouse release 1.1.54380, 2018-04-21
 
 ## New features:

ci/README.md (50 lines changed)

@@ -1,4 +1,4 @@
-### Build and test ClickHouse on various platforms
+## Build and test ClickHouse on various platforms
 
 Quick and dirty scripts.
 
@@ -13,17 +13,23 @@ Another example, check build on ARM 64:
 ./run-with-docker.sh multiarch/ubuntu-core:arm64-bionic jobs/quick-build/run.sh
 ```
 
-Look at `default_config` and `jobs/quick-build/config`
+Another example, check build on FreeBSD:
+
+```
+./prepare-vagrant-image-freebsd.sh
+./run-with-vagrant.sh freebsd jobs/quick-build/run.sh
+```
+
+Look at `default_config` and `jobs/quick-build/run.sh`
 
 Various possible options. We are not going to automate testing all of them.
 
-##### CPU architectures:
+#### CPU architectures:
 - x86_64;
 - AArch64.
 
 x86_64 is the main CPU architecture. We also have minimal support for AArch64.
 
-##### Operating systems:
+#### Operating systems:
 - Linux;
 - FreeBSD.
 
@@ -31,7 +37,7 @@ We also target Mac OS X, but it's more difficult to test.
 Linux is the main. FreeBSD is also supported as production OS.
 Mac OS is intended only for development and has minimal support: client should work, server should just start.
 
-##### Linux distributions:
+#### Linux distributions:
 For build:
 - Ubuntu Bionic;
 - Ubuntu Trusty.
@@ -42,83 +48,83 @@ For run:
 
 We should support almost any Linux to run ClickHouse. That's why we test also on old distributions.
 
-##### How to obtain sources:
+#### How to obtain sources:
 - use sources from local working copy;
 - clone sources from github;
 - download source tarball.
 
-##### Compilers:
+#### Compilers:
 - gcc-7;
 - gcc-8;
 - clang-6;
 - clang-svn.
 
-##### Compiler installation:
+#### Compiler installation:
 - from OS packages;
 - build from sources.
 
-##### C++ standard library implementation:
+#### C++ standard library implementation:
 - libc++;
 - libstdc++ with C++11 ABI;
 - libstdc++ with old ABI.
 
 When building with clang, libc++ is used. When building with gcc, we choose libstdc++ with C++11 ABI.
 
-##### Linkers:
+#### Linkers:
 - lld;
 - gold;
 
 When building with clang on x86_64, lld is used. Otherwise we use gold.
 
-##### Build types:
+#### Build types:
 - RelWithDebInfo;
 - Debug;
 - ASan;
 - TSan.
 
-##### Build types, extra:
+#### Build types, extra:
 - -g0 for quick build;
 - enable test coverage;
 - debug tcmalloc.
 
-##### What to build:
+#### What to build:
 - only `clickhouse` target;
 - all targets;
 - debian packages;
 
 We also have intent to build RPM and simple tgz packages.
 
-##### Where to get third-party libraries:
+#### Where to get third-party libraries:
 - from contrib directory (submodules);
 - from OS packages.
 
 The only production option is to use libraries from contrib directory.
 Using libraries from OS packages is discouraged, but we also support this option.
 
-##### Linkage types:
+#### Linkage types:
 - static;
 - shared;
 
 Static linking is the only option for production usage.
 We also have support for shared linking, but it is intended only for developers.
 
-##### Make tools:
+#### Make tools:
 - make;
 - ninja.
 
-##### Installation options:
+#### Installation options:
 - run built `clickhouse` binary directly;
 - install from packages.
 
-##### How to obtain packages:
+#### How to obtain packages:
 - build them;
 - download from repository.
 
-##### Sanity checks:
+#### Sanity checks:
 - check that clickhouse binary has no dependencies on unexpected shared libraries;
 - check that source code has no style violations.
 
-##### Tests:
+#### Tests:
 - Functional tests;
 - Integration tests;
 - Unit tests;

@@ -127,10 +133,10 @@ We also have support for shared linking, but it is intended only for developers.
 - Tests for external dictionaries (should be moved to integration tests);
 - Jepsen like tests for quorum inserts (not yet available in opensource).
 
-##### Tests extra:
+#### Tests extra:
 - Run functional tests with Valgrind.
 
-##### Static analyzers:
+#### Static analyzers:
 - CppCheck;
 - clang-tidy;
 - Coverity.
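The "Sanity checks" item above (no dependencies on unexpected shared libraries) can be sketched as a small filter over `ldd`-style output. This is a hypothetical illustration: the allow-list below is an assumption, not the project's actual list, and `check_deps` is not a function from the repository.

```shell
# check_deps: succeed only if the given ldd-style output resolves ("=>")
# no library outside a small allow-list (illustrative, not authoritative).
check_deps() {
    local allowed='linux-vdso|ld-linux|libc\.so|libm\.so|libdl\.so|librt\.so|libpthread\.so'
    ! printf '%s\n' "$1" | grep -vE "$allowed" | grep -q '=>'
}

# Only expected dependencies: the check passes.
check_deps '	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6' && echo "deps ok"
```

On a real binary this would be invoked as `check_deps "$(ldd clickhouse)"`.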
@@ -4,8 +4,8 @@ set -e -x
 source default-config
 
 # TODO Non debian systems
-$SUDO apt-get install -y subversion
-apt-cache search cmake3 | grep -P '^cmake3 ' && $SUDO apt-get -y install cmake3 || $SUDO apt-get -y install cmake
+./install-os-packages.sh svn
+./install-os-packages.sh cmake
 
 mkdir "${WORKSPACE}/llvm"
 
@@ -3,7 +3,7 @@ set -e -x
 
 source default-config
 
-$SUDO apt-get install -y curl
+./install-os-packages.sh curl
 
 if [[ "${GCC_SOURCES_VERSION}" == "latest" ]]; then
     GCC_SOURCES_VERSION=$(curl -sSL https://ftpmirror.gnu.org/gcc/ | grep -oE 'gcc-[0-9]+(\.[0-9]+)+' | sort -Vr | head -n1)
@@ -3,7 +3,7 @@ set -e -x
 
 source default-config
 
-$SUDO apt-get install -y jq
+./install-os-packages.sh jq
 
 [[ -d "${WORKSPACE}/sources" ]] || die "Run get-sources.sh first"
 
@@ -9,7 +9,7 @@ SCRIPTPATH=$(pwd)
 WORKSPACE=${SCRIPTPATH}/workspace
 PROJECT_ROOT=$(cd $SCRIPTPATH/.. && pwd)
 
-# All scripts take no arguments. All arguments must be in config.
+# Almost all scripts take no arguments. Arguments should be in config.
 
 # get-sources
 SOURCES_METHOD=local # clone, local, tarball

@@ -44,7 +44,7 @@ DOCKER_UBUNTU_TAG_ARCH=arm64 # How the architecture is named in Docker
 DOCKER_UBUNTU_QEMU_VER=v2.9.1
 DOCKER_UBUNTU_REPO=multiarch/ubuntu-core
 
-THREADS=$(grep -c ^processor /proc/cpuinfo || nproc || sysctl -a | grep -F 'hw.ncpu')
+THREADS=$(grep -c ^processor /proc/cpuinfo || nproc || sysctl -a | grep -F 'hw.ncpu' | grep -oE '[0-9]+')
 
 # All scripts should return 0 in case of success, 1 in case of permanent error,
 # 2 in case of temporary error, any other code in case of permanent error.

@@ -55,7 +55,7 @@ function die {
 
 [[ $EUID -ne 0 ]] && SUDO=sudo
 
-command -v apt-get && $SUDO apt-get update
+./install-os-packages.sh prepare
 
 # Configuration parameters may be overridden with CONFIG environment variable pointing to config file.
 [[ -n "$CONFIG" ]] && source $CONFIG
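The `CONFIG` override in `default-config` above can be exercised like this. The file path and variable values below are illustrative only; the variable names come from `default-config` itself.

```shell
# Write a hypothetical override file, then source it the same way
# default-config does at the end of the script.
cat > /tmp/demo-ci-config <<'EOF'
SOURCES_METHOD=clone
BUILD_TYPE=Debug
EOF

CONFIG=/tmp/demo-ci-config
[[ -n "$CONFIG" ]] && source "$CONFIG"
echo "$SOURCES_METHOD $BUILD_TYPE"
```

Since `default-config` sources `$CONFIG` last, values in the override file win over the defaults.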
@@ -4,12 +4,12 @@ set -e -x
 source default-config
 
 if [[ "$SOURCES_METHOD" == "clone" ]]; then
-    $SUDO apt-get install -y git
+    ./install-os-packages.sh git
     SOURCES_DIR="${WORKSPACE}/sources"
     mkdir -p "${SOURCES_DIR}"
     git clone --recursive --branch "$SOURCES_BRANCH" "$SOURCES_CLONE_URL" "${SOURCES_DIR}"
     pushd "${SOURCES_DIR}"
-    git checkout "$SOURCES_COMMIT"
+    git checkout --recurse-submodules "$SOURCES_COMMIT"
     popd
 elif [[ "$SOURCES_METHOD" == "local" ]]; then
     ln -f -s "${PROJECT_ROOT}" "${WORKSPACE}/sources"
|
|||||||
|
|
||||||
source default-config
|
source default-config
|
||||||
|
|
||||||
# TODO Non debian systems
|
|
||||||
# TODO Install from PPA on older Ubuntu
|
# TODO Install from PPA on older Ubuntu
|
||||||
|
|
||||||
if [ -f '/etc/lsb-release' ]; then
|
./install-os-packages.sh ${COMPILER}-${COMPILER_PACKAGE_VERSION}
|
||||||
source /etc/lsb-release
|
|
||||||
if [[ "$DISTRIB_ID" == "Ubuntu" ]]; then
|
if [[ "$COMPILER" == "gcc" ]]; then
|
||||||
if [[ "$COMPILER" == "gcc" ]]; then
|
if command -v gcc-${COMPILER_PACKAGE_VERSION}; then export CC=gcc-${COMPILER_PACKAGE_VERSION} CXX=g++-${COMPILER_PACKAGE_VERSION};
|
||||||
$SUDO apt-get -y install gcc-${COMPILER_PACKAGE_VERSION} g++-${COMPILER_PACKAGE_VERSION}
|
elif command -v gcc${COMPILER_PACKAGE_VERSION}; then export CC=gcc${COMPILER_PACKAGE_VERSION} CXX=g++${COMPILER_PACKAGE_VERSION};
|
||||||
export CC=gcc-${COMPILER_PACKAGE_VERSION}
|
elif command -v gcc; then export CC=gcc CXX=g++;
|
||||||
export CXX=g++-${COMPILER_PACKAGE_VERSION}
|
|
||||||
elif [[ "$COMPILER" == "clang" ]]; then
|
|
||||||
[[ $(uname -m) == "x86_64" ]] && LLD="lld-${COMPILER_PACKAGE_VERSION}"
|
|
||||||
$SUDO apt-get -y install clang-${COMPILER_PACKAGE_VERSION} "$LLD" libc++-dev libc++abi-dev
|
|
||||||
export CC=clang-${COMPILER_PACKAGE_VERSION}
|
|
||||||
export CXX=clang++-${COMPILER_PACKAGE_VERSION}
|
|
||||||
else
|
|
||||||
die "Unknown compiler specified"
|
|
||||||
fi
|
fi
|
||||||
else
|
elif [[ "$COMPILER" == "clang" ]]; then
|
||||||
die "Unknown Linux variant"
|
if command -v clang-${COMPILER_PACKAGE_VERSION}; then export CC=clang-${COMPILER_PACKAGE_VERSION} CXX=clang++-${COMPILER_PACKAGE_VERSION};
|
||||||
|
elif command -v clang${COMPILER_PACKAGE_VERSION}; then export CC=clang${COMPILER_PACKAGE_VERSION} CXX=clang++${COMPILER_PACKAGE_VERSION};
|
||||||
|
elif command -v clang; then export CC=clang CXX=clang++;
|
||||||
fi
|
fi
|
||||||
else
|
else
|
||||||
die "Unknown OS"
|
die "Unknown compiler specified"
|
||||||
fi
|
fi
|
||||||
|
@@ -3,10 +3,12 @@ set -e -x
 
 source default-config
 
-# TODO Non-debian systems
-$SUDO apt-get -y install libssl-dev libicu-dev libreadline-dev libmysqlclient-dev unixodbc-dev
+./install-os-packages.sh libssl-dev
+./install-os-packages.sh libicu-dev
+./install-os-packages.sh libreadline-dev
+./install-os-packages.sh libmariadbclient-dev
+./install-os-packages.sh libunixodbc-dev
 
 if [[ "$ENABLE_EMBEDDED_COMPILER" == 1 && "$USE_LLVM_LIBRARIES_FROM_SYSTEM" == 1 ]]; then
-    $SUDO apt-get -y install liblld-5.0-dev libclang-5.0-dev
+    ./install-os-packages.sh llvm-libs-5.0
 fi
ci/install-os-packages.sh (new executable file, 120 lines)

@@ -0,0 +1,120 @@
+#!/usr/bin/env bash
+set -e -x
+
+# Dispatches package installation on various OS and distributions
+
+WHAT=$1
+
+[[ $EUID -ne 0 ]] && SUDO=sudo
+
+command -v apt-get && PACKAGE_MANAGER=apt
+command -v yum && PACKAGE_MANAGER=yum
+command -v pkg && PACKAGE_MANAGER=pkg
+
+case $PACKAGE_MANAGER in
+    apt)
+        case $WHAT in
+            prepare)
+                $SUDO apt-get update
+                ;;
+            svn)
+                $SUDO apt-get install -y subversion
+                ;;
+            gcc*)
+                $SUDO apt-get install -y $WHAT ${WHAT/cc/++}
+                ;;
+            clang*)
+                $SUDO apt-get install -y $WHAT libc++-dev libc++abi-dev
+                [[ $(uname -m) == "x86_64" ]] && $SUDO apt-get install -y ${WHAT/clang/lld} || true
+                ;;
+            git)
+                $SUDO apt-get install -y git
+                ;;
+            cmake)
+                $SUDO apt-get install -y cmake3 || $SUDO apt-get install -y cmake
+                ;;
+            curl)
+                $SUDO apt-get install -y curl
+                ;;
+            jq)
+                $SUDO apt-get install -y jq
+                ;;
+            libssl-dev)
+                $SUDO apt-get install -y libssl-dev
+                ;;
+            libicu-dev)
+                $SUDO apt-get install -y libicu-dev
+                ;;
+            libreadline-dev)
+                $SUDO apt-get install -y libreadline-dev
+                ;;
+            libunixodbc-dev)
+                $SUDO apt-get install -y unixodbc-dev
+                ;;
+            libmariadbclient-dev)
+                $SUDO apt-get install -y libmariadbclient-dev
+                ;;
+            llvm-libs*)
+                $SUDO apt-get install -y ${WHAT/llvm-libs/liblld}-dev ${WHAT/llvm-libs/libclang}-dev
+                ;;
+            qemu-user-static)
+                $SUDO apt-get install -y qemu-user-static
+                ;;
+            vagrant-virtualbox)
+                $SUDO apt-get install -y vagrant virtualbox
+                ;;
+            *)
+                echo "Unknown package"; exit 1;
+                ;;
+        esac
+        ;;
+    pkg)
+        case $WHAT in
+            prepare)
+                ;;
+            svn)
+                $SUDO pkg install -y subversion
+                ;;
+            gcc*)
+                $SUDO pkg install -y ${WHAT/-/}
+                ;;
+            clang*)
+                $SUDO pkg install -y clang-devel
+                ;;
+            git)
+                $SUDO pkg install -y git
+                ;;
+            cmake)
+                $SUDO pkg install -y cmake
+                ;;
+            curl)
+                $SUDO pkg install -y curl
+                ;;
+            jq)
+                $SUDO pkg install -y jq
+                ;;
+            libssl-dev)
+                $SUDO pkg install -y openssl
+                ;;
+            libicu-dev)
+                $SUDO pkg install -y icu
+                ;;
+            libreadline-dev)
+                $SUDO pkg install -y readline
+                ;;
+            libunixodbc-dev)
+                $SUDO pkg install -y unixODBC libltdl
+                ;;
+            libmariadbclient-dev)
+                $SUDO pkg install -y mariadb102-client
+                ;;
+            *)
+                echo "Unknown package"; exit 1;
+                ;;
+        esac
+        ;;
+    *)
+        echo "Unknown distributive"; exit 1;
+        ;;
+esac
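The new `install-os-packages.sh` above follows a two-level dispatch: detect the package manager once, then map a logical package name to the distribution-specific one. A minimal standalone sketch of the same pattern, with `echo` standing in for the real install commands (the function name and reduced package set are illustrative, not part of the script):

```shell
# install_pkg: map (package manager, logical name) -> concrete install command.
install_pkg() {
    local pm=$1 what=$2
    case $pm in
        apt)
            case $what in
                svn) echo "apt-get install -y subversion" ;;
                libunixodbc-dev) echo "apt-get install -y unixodbc-dev" ;;
                *) echo "Unknown package" >&2; return 1 ;;
            esac ;;
        pkg)
            case $what in
                svn) echo "pkg install -y subversion" ;;
                libunixodbc-dev) echo "pkg install -y unixODBC libltdl" ;;
                *) echo "Unknown package" >&2; return 1 ;;
            esac ;;
        *) echo "Unknown distributive" >&2; return 1 ;;
    esac
}

install_pkg apt libunixodbc-dev
```

The benefit is that callers (`get-sources.sh`, `install-libraries.sh`, etc.) only name what they need; the mapping from `libunixodbc-dev` to `unixodbc-dev` or `unixODBC libltdl` lives in one place.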
ci/jobs/quick-build/README.md (new file, 5 lines)

@@ -0,0 +1,5 @@
+## Build with debug mode and without many libraries
+
+This job is intended as a first check that the build is not broken on a wide variety of platforms.
+
+Results of this build are not intended for production usage.
ci/jobs/quick-build/config (deleted)

@@ -1,12 +0,0 @@
-SOURCES_METHOD=local
-COMPILER=clang
-COMPILER_INSTALL_METHOD=packages
-COMPILER_PACKAGE_VERSION=6.0
-USE_LLVM_LIBRARIES_FROM_SYSTEM=0
-BUILD_METHOD=normal
-BUILD_TARGETS=clickhouse
-BUILD_TYPE=Debug
-ENABLE_EMBEDDED_COMPILER=0
-CMAKE_FLAGS="-D CMAKE_C_FLAGS_ADD=-g0 -D CMAKE_CXX_FLAGS_ADD=-g0 -D ENABLE_TCMALLOC=0 -D ENABLE_CAPNP=0 -D ENABLE_RDKAFKA=0 -D ENABLE_UNWIND=0 -D ENABLE_ICU=0"
-
-# TODO it doesn't build with -D ENABLE_NETSSL=0 -D ENABLE_MONGODB=0 -D ENABLE_MYSQL=0 -D ENABLE_DATA_ODBC=0
@@ -7,11 +7,24 @@ set -e -x
 # or:
 # ./run-with-docker.sh ubuntu:bionic jobs/quick-build/run.sh
 
-CONFIG="$(dirname $0)"/config
 cd "$(dirname $0)"/../..
 
 . default-config
 
+SOURCES_METHOD=local
+COMPILER=clang
+COMPILER_INSTALL_METHOD=packages
+COMPILER_PACKAGE_VERSION=6.0
+USE_LLVM_LIBRARIES_FROM_SYSTEM=0
+BUILD_METHOD=normal
+BUILD_TARGETS=clickhouse
+BUILD_TYPE=Debug
+ENABLE_EMBEDDED_COMPILER=0
+
+CMAKE_FLAGS="-D CMAKE_C_FLAGS_ADD=-g0 -D CMAKE_CXX_FLAGS_ADD=-g0 -D ENABLE_TCMALLOC=0 -D ENABLE_CAPNP=0 -D ENABLE_RDKAFKA=0 -D ENABLE_UNWIND=0 -D ENABLE_ICU=0 -D ENABLE_POCO_MONGODB=0 -D ENABLE_POCO_NETSSL=0 -D ENABLE_POCO_ODBC=0 -D ENABLE_MYSQL=0"
+
+[[ $(uname) == "FreeBSD" ]] && COMPILER_PACKAGE_VERSION=devel && export COMPILER_PATH=/usr/local/bin
+
 . get-sources.sh
 . prepare-toolchain.sh
 . install-libraries.sh
@@ -6,7 +6,7 @@ source default-config
 ./check-docker.sh
 
 # http://fl47l1n3.net/2015/12/24/binfmt/
-$SUDO apt-get -y install qemu-user-static
+./install-os-packages.sh qemu-user-static
 
 pushd docker-multiarch
 
@@ -3,11 +3,10 @@ set -e -x
 
 source default-config
 
-# TODO Non debian systems
-apt-cache search cmake3 | grep -P '^cmake3 ' && $SUDO apt-get -y install cmake3 || $SUDO apt-get -y install cmake
+./install-os-packages.sh cmake
 
 if [[ "$COMPILER_INSTALL_METHOD" == "packages" ]]; then
-    . install-compiler-from-packages.sh;
+    . install-compiler-from-packages.sh
 elif [[ "$COMPILER_INSTALL_METHOD" == "sources" ]]; then
     . install-compiler-from-sources.sh
 else
@@ -3,11 +3,10 @@ set -e -x
 
 source default-config
 
-$SUDO apt-get -y install vagrant virtualbox
+./install-os-packages.sh vagrant-virtualbox
 
 pushd "vagrant-freebsd"
 vagrant up
 vagrant ssh-config > vagrant-ssh
 ssh -F vagrant-ssh default 'uname -a'
-scp -F vagrant-ssh -r ../../ci default:~
 popd
@@ -1,6 +1,9 @@
 #!/usr/bin/env bash
 set -e -x
 
+mkdir -p /var/cache/ccache
+DOCKER_ENV+=" --mount=type=bind,source=/var/cache/ccache,destination=/ccache -e CCACHE_DIR=/ccache "
+
 PROJECT_ROOT="$(cd "$(dirname "$0")/.."; pwd -P)"
 [[ -n "$CONFIG" ]] && DOCKER_ENV="--env=CONFIG"
 docker run -t --network=host --mount=type=bind,source=${PROJECT_ROOT},destination=/ClickHouse --workdir=/ClickHouse/ci $DOCKER_ENV "$1" "$2"
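The ccache bind-mount added to `run-with-docker.sh` above persists the compiler cache across container runs. The flag construction can be reproduced in isolation; the `/tmp` host path here is a stand-in for the `/var/cache/ccache` used by the real script.

```shell
# Accumulate docker flags the same way the script does, against a demo path.
CCACHE_HOST_DIR=/tmp/demo-ccache
mkdir -p "$CCACHE_HOST_DIR"
DOCKER_ENV+=" --mount=type=bind,source=$CCACHE_HOST_DIR,destination=/ccache -e CCACHE_DIR=/ccache "
echo "$DOCKER_ENV"
```

Inside the container, ccache then writes to `/ccache`, which is the bind-mounted host directory, so cache contents outlive the container.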
ci/run-with-vagrant.sh (new executable file, 14 lines)

@@ -0,0 +1,14 @@
+#!/usr/bin/env bash
+set -e -x
+
+[[ -r "vagrant-${1}/vagrant-ssh" ]] || die "Run prepare-vagrant-image-... first."
+
+pushd vagrant-$1
+
+shopt -s extglob
+
+vagrant ssh -c "mkdir -p ClickHouse"
+scp -q -F vagrant-ssh -r ../../!(*build*) default:~/ClickHouse
+vagrant ssh -c "cd ClickHouse/ci; $2"
+
+popd
@@ -1,6 +1,9 @@
 if (ARCH_FREEBSD)
     find_library (EXECINFO_LIBRARY execinfo)
+    find_library (ELF_LIBRARY elf)
     message (STATUS "Using execinfo: ${EXECINFO_LIBRARY}")
+    message (STATUS "Using elf: ${ELF_LIBRARY}")
 else ()
     set (EXECINFO_LIBRARY "")
+    set (ELF_LIBRARY "")
 endif ()
@@ -1,5 +1,5 @@
 option (ENABLE_EMBEDDED_COMPILER "Set to TRUE to enable support for 'compile' option for query execution" 1)
-option (USE_INTERNAL_LLVM_LIBRARY "Use bundled or system LLVM library. Default: system library for quicker developer builds." 0)
+option (USE_INTERNAL_LLVM_LIBRARY "Use bundled or system LLVM library. Default: system library for quicker developer builds." ${APPLE})
 
 if (ENABLE_EMBEDDED_COMPILER)
     if (USE_INTERNAL_LLVM_LIBRARY AND NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/llvm/llvm/CMakeLists.txt")
@ -8,8 +8,21 @@ if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/poco/CMakeLists.txt")
|
|||||||
set (MISSING_INTERNAL_POCO_LIBRARY 1)
|
set (MISSING_INTERNAL_POCO_LIBRARY 1)
|
||||||
endif ()
|
endif ()
|
||||||
|
|
||||||
|
set (POCO_COMPONENTS Net XML SQL Data)
|
||||||
|
if (NOT DEFINED ENABLE_POCO_NETSSL OR ENABLE_POCO_NETSSL)
|
||||||
|
list (APPEND POCO_COMPONENTS Crypto NetSSL)
|
||||||
|
endif ()
|
||||||
|
if (NOT DEFINED ENABLE_POCO_MONGODB OR ENABLE_POCO_MONGODB)
|
||||||
|
list (APPEND POCO_COMPONENTS MongoDB)
|
||||||
|
endif ()
|
||||||
|
# TODO: after new poco release with SQL library rename ENABLE_POCO_ODBC -> ENABLE_POCO_SQLODBC
|
||||||
|
if (NOT DEFINED ENABLE_POCO_ODBC OR ENABLE_POCO_ODBC)
|
||||||
|
list (APPEND POCO_COMPONENTS DataODBC)
|
||||||
|
#list (APPEND POCO_COMPONENTS SQLODBC) # future
|
||||||
|
endif ()
|
||||||
|
|
||||||
if (NOT USE_INTERNAL_POCO_LIBRARY)
|
if (NOT USE_INTERNAL_POCO_LIBRARY)
|
||||||
find_package (Poco COMPONENTS Net NetSSL XML SQL Data Crypto DataODBC MongoDB)
|
find_package (Poco COMPONENTS ${POCO_COMPONENTS})
|
||||||
endif ()
|
endif ()
|
||||||
|
|
||||||
if (Poco_INCLUDE_DIRS AND Poco_Foundation_LIBRARY)
|
if (Poco_INCLUDE_DIRS AND Poco_Foundation_LIBRARY)
|
||||||
@ -46,13 +59,12 @@ elseif (NOT MISSING_INTERNAL_POCO_LIBRARY)
|
|||||||
"${ClickHouse_SOURCE_DIR}/contrib/poco/Util/include/"
|
"${ClickHouse_SOURCE_DIR}/contrib/poco/Util/include/"
|
||||||
)
|
)
|
||||||
|
|
||||||
if (NOT DEFINED POCO_ENABLE_MONGODB OR POCO_ENABLE_MONGODB)
|
if (NOT DEFINED ENABLE_POCO_MONGODB OR ENABLE_POCO_MONGODB)
|
||||||
set (Poco_MongoDB_FOUND 1)
|
set (USE_POCO_MONGODB 1)
|
||||||
set (Poco_MongoDB_LIBRARY PocoMongoDB)
|
set (Poco_MongoDB_LIBRARY PocoMongoDB)
|
||||||
set (Poco_MongoDB_INCLUDE_DIRS "${ClickHouse_SOURCE_DIR}/contrib/poco/MongoDB/include/")
|
set (Poco_MongoDB_INCLUDE_DIRS "${ClickHouse_SOURCE_DIR}/contrib/poco/MongoDB/include/")
|
||||||
endif ()
|
endif ()
|
||||||
|
|
||||||
|
|
||||||
if (EXISTS "${ClickHouse_SOURCE_DIR}/contrib/poco/SQL/ODBC/include/")
|
if (EXISTS "${ClickHouse_SOURCE_DIR}/contrib/poco/SQL/ODBC/include/")
|
||||||
set (Poco_SQL_FOUND 1)
|
set (Poco_SQL_FOUND 1)
|
||||||
set (Poco_SQL_LIBRARY PocoSQL)
|
set (Poco_SQL_LIBRARY PocoSQL)
|
||||||
@ -60,8 +72,8 @@ elseif (NOT MISSING_INTERNAL_POCO_LIBRARY)
|
|||||||
"${ClickHouse_SOURCE_DIR}/contrib/poco/SQL/include"
|
"${ClickHouse_SOURCE_DIR}/contrib/poco/SQL/include"
|
||||||
"${ClickHouse_SOURCE_DIR}/contrib/poco/Data/include"
|
"${ClickHouse_SOURCE_DIR}/contrib/poco/Data/include"
|
||||||
)
|
)
|
||||||
if (ODBC_FOUND)
|
if ((NOT DEFINED ENABLE_POCO_ODBC OR ENABLE_POCO_ODBC) AND ODBC_FOUND)
|
||||||
set (Poco_SQLODBC_FOUND 1)
|
set (USE_POCO_SQLODBC 1)
|
||||||
set (Poco_SQLODBC_INCLUDE_DIRS
|
set (Poco_SQLODBC_INCLUDE_DIRS
|
||||||
"${ClickHouse_SOURCE_DIR}/contrib/poco/SQL/ODBC/include/"
|
"${ClickHouse_SOURCE_DIR}/contrib/poco/SQL/ODBC/include/"
|
||||||
"${ClickHouse_SOURCE_DIR}/contrib/poco/Data/ODBC/include/"
|
"${ClickHouse_SOURCE_DIR}/contrib/poco/Data/ODBC/include/"
|
||||||
@ -73,8 +85,8 @@ elseif (NOT MISSING_INTERNAL_POCO_LIBRARY)
|
|||||||
set (Poco_Data_FOUND 1)
|
set (Poco_Data_FOUND 1)
|
||||||
set (Poco_Data_INCLUDE_DIRS "${ClickHouse_SOURCE_DIR}/contrib/poco/Data/include")
|
set (Poco_Data_INCLUDE_DIRS "${ClickHouse_SOURCE_DIR}/contrib/poco/Data/include")
|
||||||
set (Poco_Data_LIBRARY PocoData)
|
set (Poco_Data_LIBRARY PocoData)
|
||||||
if (ODBC_FOUND)
|
if ((NOT DEFINED ENABLE_POCO_ODBC OR ENABLE_POCO_ODBC) AND ODBC_FOUND)
|
||||||
set (Poco_DataODBC_FOUND 1)
|
set (USE_POCO_DATAODBC 1)
|
||||||
set (Poco_DataODBC_INCLUDE_DIRS
|
set (Poco_DataODBC_INCLUDE_DIRS
|
||||||
"${ClickHouse_SOURCE_DIR}/contrib/poco/Data/ODBC/include/"
|
"${ClickHouse_SOURCE_DIR}/contrib/poco/Data/ODBC/include/"
|
||||||
${ODBC_INCLUDE_DIRECTORIES}
|
${ODBC_INCLUDE_DIRECTORIES}
|
||||||
@ -84,8 +96,8 @@ elseif (NOT MISSING_INTERNAL_POCO_LIBRARY)
|
|||||||
endif ()
|
endif ()
|
||||||
|
|
||||||
# TODO! fix internal ssl
|
# TODO! fix internal ssl
|
||||||
if (OPENSSL_FOUND AND NOT USE_INTERNAL_SSL_LIBRARY)
|
if (OPENSSL_FOUND AND NOT USE_INTERNAL_SSL_LIBRARY AND (NOT DEFINED ENABLE_POCO_NETSSL OR ENABLE_POCO_NETSSL))
|
||||||
set (Poco_NetSSL_FOUND 1)
|
set (USE_POCO_NETSSL 1)
|
||||||
set (Poco_NetSSL_LIBRARY PocoNetSSL)
|
set (Poco_NetSSL_LIBRARY PocoNetSSL)
|
||||||
set (Poco_Crypto_LIBRARY PocoCrypto)
|
set (Poco_Crypto_LIBRARY PocoCrypto)
|
||||||
endif ()
|
endif ()
|
||||||
@ -103,7 +115,7 @@ elseif (NOT MISSING_INTERNAL_POCO_LIBRARY)
|
|||||||
set (Poco_XML_LIBRARY PocoXML)
|
set (Poco_XML_LIBRARY PocoXML)
|
||||||
endif ()
|
endif ()
|
||||||
|
|
||||||
message(STATUS "Using Poco: ${Poco_INCLUDE_DIRS} : ${Poco_Foundation_LIBRARY},${Poco_Util_LIBRARY},${Poco_Net_LIBRARY},${Poco_NetSSL_LIBRARY},${Poco_XML_LIBRARY},${Poco_Data_LIBRARY},${Poco_DataODBC_LIBRARY},${Poco_MongoDB_LIBRARY}; MongoDB=${Poco_MongoDB_FOUND}, DataODBC=${Poco_DataODBC_FOUND}, NetSSL=${Poco_NetSSL_FOUND}")
|
message(STATUS "Using Poco: ${Poco_INCLUDE_DIRS} : ${Poco_Foundation_LIBRARY},${Poco_Util_LIBRARY},${Poco_Net_LIBRARY},${Poco_NetSSL_LIBRARY},${Poco_XML_LIBRARY},${Poco_Data_LIBRARY},${Poco_DataODBC_LIBRARY},${Poco_MongoDB_LIBRARY}; MongoDB=${USE_POCO_MONGODB}, DataODBC=${Poco_DataODBC_FOUND}, NetSSL=${USE_POCO_NETSSL}")
|
||||||
|
|
||||||
# How to make sutable poco:
|
# How to make sutable poco:
|
||||||
# use branch:
|
# use branch:
|
||||||
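The `NOT DEFINED X OR X` tests introduced throughout this file implement "on unless explicitly disabled": an undefined option counts as enabled, and only an explicit `-D ENABLE_POCO_ODBC=0` (as the quick-build job now passes) turns a component off. A minimal standalone sketch of the idiom, using a hypothetical `ENABLE_FOO` option rather than project code:

```cmake
# Enabled when ENABLE_FOO is unset or set to a true value;
# disabled only by an explicit -D ENABLE_FOO=0 on the command line.
if (NOT DEFINED ENABLE_FOO OR ENABLE_FOO)
    list (APPEND COMPONENTS Foo)
endif ()
```

This keeps default builds unchanged while letting stripped-down CI jobs opt out of individual Poco components.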
|
5
contrib/CMakeLists.txt
vendored
5
contrib/CMakeLists.txt
vendored
@ -128,7 +128,7 @@ if (USE_INTERNAL_POCO_LIBRARY)
|
|||||||
set (_save ${ENABLE_TESTS})
|
set (_save ${ENABLE_TESTS})
|
||||||
set (ENABLE_TESTS 0)
|
set (ENABLE_TESTS 0)
|
||||||
set (CMAKE_DISABLE_FIND_PACKAGE_ZLIB 1)
|
set (CMAKE_DISABLE_FIND_PACKAGE_ZLIB 1)
|
||||||
if (USE_INTERNAL_SSL_LIBRARY)
|
if (USE_INTERNAL_SSL_LIBRARY OR (DEFINED ENABLE_POCO_NETSSL AND NOT ENABLE_POCO_NETSSL))
|
||||||
set (DISABLE_INTERNAL_OPENSSL 1 CACHE INTERNAL "")
|
set (DISABLE_INTERNAL_OPENSSL 1 CACHE INTERNAL "")
|
||||||
set (ENABLE_NETSSL 0 CACHE INTERNAL "") # TODO!
|
set (ENABLE_NETSSL 0 CACHE INTERNAL "") # TODO!
|
||||||
set (ENABLE_CRYPTO 0 CACHE INTERNAL "") # TODO!
|
set (ENABLE_CRYPTO 0 CACHE INTERNAL "") # TODO!
|
||||||
@@ -141,7 +141,8 @@ if (USE_INTERNAL_POCO_LIBRARY)
     set (ENABLE_TESTS ${_save})
     set (CMAKE_CXX_FLAGS ${save_CMAKE_CXX_FLAGS})
     set (CMAKE_C_FLAGS ${save_CMAKE_C_FLAGS})
-    if (OPENSSL_FOUND AND TARGET Crypto)
+
+    if (OPENSSL_FOUND AND TARGET Crypto AND (NOT DEFINED ENABLE_POCO_NETSSL OR ENABLE_POCO_NETSSL))
         # Bug in poco https://github.com/pocoproject/poco/pull/2100 found on macos
         target_include_directories(Crypto PUBLIC ${OPENSSL_INCLUDE_DIR})
     endif ()
contrib/llvm (2 changes, vendored)
@@ -1 +1 @@
-Subproject commit 6b3975cf38d5c9436e1311b7e54ad93ef1a9aa9c
+Subproject commit 163def217817c90fb982a6daf384744d8472b92b
contrib/poco (2 changes, vendored)
@@ -1 +1 @@
-Subproject commit 2d5a158303adf9d47b980cdcfdb26cee1460704e
+Subproject commit 3a2d0a833a22ef5e1164a9ada54e3253cb038904
@@ -104,9 +104,7 @@ if (USE_EMBEDDED_COMPILER)
     if (TERMCAP_LIBRARY)
         list(APPEND REQUIRED_LLVM_LIBRARIES ${TERMCAP_LIBRARY})
     endif ()
-    if (LTDL_LIBRARY)
-        list(APPEND REQUIRED_LLVM_LIBRARIES ${LTDL_LIBRARY})
-    endif ()
+    list(APPEND REQUIRED_LLVM_LIBRARIES ${CMAKE_DL_LIBS})

     target_link_libraries (dbms ${REQUIRED_LLVM_LIBRARIES})
     target_include_directories (dbms BEFORE PUBLIC ${LLVM_INCLUDE_DIRS})
@@ -149,6 +147,7 @@ target_link_libraries (clickhouse_common_io
         ${Poco_Data_LIBRARY}
         ${ZLIB_LIBRARIES}
         ${EXECINFO_LIBRARY}
+        ${ELF_LIBRARY}
         ${Boost_SYSTEM_LIBRARY}
         ${CMAKE_DL_LIBS}
 )
@@ -172,7 +171,7 @@ if (NOT USE_INTERNAL_BOOST_LIBRARY)
     target_include_directories (clickhouse_common_io BEFORE PUBLIC ${Boost_INCLUDE_DIRS})
 endif ()

-if (Poco_SQLODBC_FOUND)
+if (USE_POCO_SQLODBC)
     target_link_libraries (clickhouse_common_io ${Poco_SQL_LIBRARY})
     target_link_libraries (dbms ${Poco_SQLODBC_LIBRARY} ${Poco_SQL_LIBRARY})
     if (NOT USE_INTERNAL_POCO_LIBRARY)
@@ -186,7 +185,7 @@ if (Poco_Data_FOUND AND NOT USE_INTERNAL_POCO_LIBRARY)
     target_include_directories (dbms PRIVATE ${Poco_Data_INCLUDE_DIRS})
 endif()

-if (Poco_DataODBC_FOUND)
+if (USE_POCO_DATAODBC)
     target_link_libraries (clickhouse_common_io ${Poco_Data_LIBRARY})
     target_link_libraries (dbms ${Poco_DataODBC_LIBRARY})
     if (NOT USE_INTERNAL_POCO_LIBRARY)
@@ -194,12 +193,11 @@ if (Poco_DataODBC_FOUND)
     endif()
 endif()

-
-if (Poco_MongoDB_FOUND)
+if (USE_POCO_MONGODB)
     target_link_libraries (dbms ${Poco_MongoDB_LIBRARY})
 endif()

-if (Poco_NetSSL_FOUND)
+if (USE_POCO_NETSSL)
     target_link_libraries (clickhouse_common_io ${Poco_NetSSL_LIBRARY})
 endif()
@@ -21,7 +21,7 @@
 #include <Interpreters/ClientInfo.h>

 #include <Common/config.h>
-#if Poco_NetSSL_FOUND
+#if USE_POCO_NETSSL
 #include <Poco/Net/SecureStreamSocket.h>
 #endif

@@ -57,7 +57,7 @@ void Connection::connect()

     if (static_cast<bool>(secure))
     {
-#if Poco_NetSSL_FOUND
+#if USE_POCO_NETSSL
         socket = std::make_unique<Poco::Net::SecureStreamSocket>();
 #else
         throw Exception{"tcp_secure protocol is disabled because poco library was built without NetSSL support.", ErrorCodes::SUPPORT_IS_DISABLED};
@@ -369,15 +369,17 @@ ConfigProcessor::Files ConfigProcessor::getConfigMergeFiles(const std::string &
     Files files;

     Poco::Path merge_dir_path(config_path);
-    merge_dir_path.setExtension("d");
-
-    std::vector<std::string> merge_dirs;
-    merge_dirs.push_back(merge_dir_path.toString());
-    if (merge_dir_path.getBaseName() != "conf")
-    {
-        merge_dir_path.setBaseName("conf");
-        merge_dirs.push_back(merge_dir_path.toString());
-    }
+    std::set<std::string> merge_dirs;
+
+    /// Add path_to_config/config_name.d dir
+    merge_dir_path.setExtension("d");
+    merge_dirs.insert(merge_dir_path.toString());
+    /// Add path_to_config/conf.d dir
+    merge_dir_path.setBaseName("conf");
+    merge_dirs.insert(merge_dir_path.toString());
+    /// Add path_to_config/config.d dir
+    merge_dir_path.setBaseName("config");
+    merge_dirs.insert(merge_dir_path.toString());

     for (const std::string & merge_dir_name : merge_dirs)
     {
@@ -10,7 +10,7 @@
 #cmakedefine01 USE_CAPNP
 #cmakedefine01 USE_EMBEDDED_COMPILER
 #cmakedefine01 LLVM_HAS_RTTI
-#cmakedefine01 Poco_SQLODBC_FOUND
-#cmakedefine01 Poco_DataODBC_FOUND
-#cmakedefine01 Poco_MongoDB_FOUND
-#cmakedefine01 Poco_NetSSL_FOUND
+#cmakedefine01 USE_POCO_SQLODBC
+#cmakedefine01 USE_POCO_DATAODBC
+#cmakedefine01 USE_POCO_MONGODB
+#cmakedefine01 USE_POCO_NETSSL
@@ -35,10 +35,10 @@ const char * auto_config_build[]
     "USE_VECTORCLASS", "@USE_VECTORCLASS@",
     "USE_RDKAFKA", "@USE_RDKAFKA@",
     "USE_CAPNP", "@USE_CAPNP@",
-    "USE_Poco_SQLODBC", "@Poco_SQLODBC_FOUND@",
-    "USE_Poco_DataODBC", "@Poco_DataODBC_FOUND@",
-    "USE_Poco_MongoDB", "@Poco_MongoDB_FOUND@",
-    "USE_Poco_NetSSL", "@Poco_NetSSL_FOUND@",
+    "USE_POCO_SQLODBC", "@USE_POCO_SQLODBC@",
+    "USE_POCO_DATAODBC", "@USE_POCO_DATAODBC@",
+    "USE_POCO_MONGODB", "@USE_POCO_MONGODB@",
+    "USE_POCO_NETSSL", "@USE_POCO_NETSSL@",

     nullptr, nullptr
 };
@@ -16,10 +16,10 @@
 #include <mutex>

 #include <Common/config.h>
-#if Poco_MongoDB_FOUND
+#if USE_POCO_MONGODB
 #include <Dictionaries/MongoDBDictionarySource.h>
 #endif
-#if Poco_SQLODBC_FOUND || Poco_DataODBC_FOUND
+#if USE_POCO_SQLODBC || USE_POCO_DATAODBC
 #pragma GCC diagnostic push
 #pragma GCC diagnostic ignored "-Wunused-parameter"
 #include <Poco/Data/ODBC/Connector.h>
@@ -88,7 +88,7 @@ Block createSampleBlock(const DictionaryStructure & dict_struct)
 DictionarySourceFactory::DictionarySourceFactory()
     : log(&Poco::Logger::get("DictionarySourceFactory"))
 {
-#if Poco_SQLODBC_FOUND || Poco_DataODBC_FOUND
+#if USE_POCO_SQLODBC || USE_POCO_DATAODBC
     Poco::Data::ODBC::Connector::registerConnector();
 #endif
 }
@@ -139,7 +139,7 @@ DictionarySourcePtr DictionarySourceFactory::create(
     }
     else if ("mongodb" == source_type)
     {
-#if Poco_MongoDB_FOUND
+#if USE_POCO_MONGODB
         return std::make_unique<MongoDBDictionarySource>(dict_struct, config, config_prefix + ".mongodb", sample_block);
 #else
         throw Exception{"Dictionary source of type `mongodb` is disabled because poco library was built without mongodb support.",
@@ -148,7 +148,7 @@ DictionarySourcePtr DictionarySourceFactory::create(
     }
     else if ("odbc" == source_type)
     {
-#if Poco_SQLODBC_FOUND || Poco_DataODBC_FOUND
+#if USE_POCO_SQLODBC || USE_POCO_DATAODBC
         return std::make_unique<ODBCDictionarySource>(dict_struct, config, config_prefix + ".odbc", sample_block, context);
 #else
         throw Exception{"Dictionary source of type `odbc` is disabled because poco library was built without ODBC support.",
@@ -168,7 +168,7 @@ DictionarySourcePtr DictionarySourceFactory::create(
     if (dict_struct.has_expressions)
         throw Exception{"Dictionary source of type `http` does not support attribute expressions", ErrorCodes::LOGICAL_ERROR};

-#if Poco_NetSSL_FOUND
+#if USE_POCO_NETSSL
     // Used for https queries
     std::call_once(ssl_init_once, SSLInit);
 #endif
@@ -1,5 +1,5 @@
 #include <Common/config.h>
-#if Poco_MongoDB_FOUND
+#if USE_POCO_MONGODB

 #include <vector>
 #include <string>
@@ -1,5 +1,5 @@
 #include <Common/config.h>
-#if Poco_MongoDB_FOUND
+#if USE_POCO_MONGODB
 #include <Poco/Util/AbstractConfiguration.h>

 #pragma GCC diagnostic push
@@ -21,7 +21,7 @@ public:

     String getName() const override { return "MySQL"; }

-    Block getHeader() const override { return description.sample_block; };
+    Block getHeader() const override { return description.sample_block; }

 private:
     Block readImpl() override;
@@ -755,7 +755,7 @@ public:
         tuple = typeid_cast<const ColumnTuple *>(materialized_tuple.get());
     }

-    if (tuple)
+    if (tuple && type_tuple->getElements().size() != 1)
     {
         const Columns & tuple_columns = tuple->getColumns();
         const DataTypes & tuple_types = type_tuple->getElements();
@@ -1,7 +1,7 @@
 #include <IO/HTTPCommon.h>

 #include <Common/config.h>
-#if Poco_NetSSL_FOUND
+#if USE_POCO_NETSSL
 #include <Poco/Net/AcceptCertificateHandler.h>
 #include <Poco/Net/Context.h>
 #include <Poco/Net/InvalidCertificateHandler.h>
@@ -30,7 +30,7 @@ std::once_flag ssl_init_once;
 void SSLInit()
 {
     // http://stackoverflow.com/questions/18315472/https-request-in-c-using-poco
-#if Poco_NetSSL_FOUND
+#if USE_POCO_NETSSL
     Poco::Net::initializeSSL();
 #endif
 }
@@ -9,7 +9,7 @@
 #include <Poco/Version.h>
 #include <common/logger_useful.h>

-#if Poco_NetSSL_FOUND
+#if USE_POCO_NETSSL
 #include <Poco/Net/HTTPSClientSession.h>
 #endif

@@ -36,7 +36,7 @@ ReadWriteBufferFromHTTP::ReadWriteBufferFromHTTP(const Poco::URI & uri,
     session
     {
         std::unique_ptr<Poco::Net::HTTPClientSession>(
-#if Poco_NetSSL_FOUND
+#if USE_POCO_NETSSL
             is_ssl ? new Poco::Net::HTTPSClientSession :
 #endif
             new Poco::Net::HTTPClientSession)
@@ -1,5 +1,8 @@
 #pragma once

 #include <memory>
+#include <ctime>
+#include <cstddef>
+

 namespace DB
@@ -32,6 +32,8 @@
 #include <Interpreters/convertFieldToType.h>
 #include <Interpreters/Set.h>
 #include <Interpreters/Join.h>
+#include <Interpreters/ProjectionManipulation.h>
+#include <Interpreters/evaluateConstantExpression.h>

 #include <AggregateFunctions/AggregateFunctionFactory.h>
 #include <AggregateFunctions/parseAggregateFunctionParameters.h>
@@ -58,7 +60,7 @@
 #include <DataTypes/DataTypeFactory.h>
 #include <DataTypes/DataTypeFunction.h>
 #include <Functions/FunctionsMiscellaneous.h>
-#include "ProjectionManipulation.h"
+#include <DataTypes/DataTypeTuple.h>


 namespace DB
@@ -1645,82 +1647,72 @@ void ExpressionAnalyzer::makeExplicitSet(const ASTFunction * node, const Block &
     if (args.children.size() != 2)
         throw Exception("Wrong number of arguments passed to function in", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);

-    const ASTPtr & arg = args.children.at(1);
-
-    DataTypes set_element_types;
     const ASTPtr & left_arg = args.children.at(0);
+    const ASTPtr & right_arg = args.children.at(1);

-    const ASTFunction * left_arg_tuple = typeid_cast<const ASTFunction *>(left_arg.get());
-
-    /** NOTE If tuple in left hand side specified non-explicitly
-      * Example: identity((a, b)) IN ((1, 2), (3, 4))
-      * instead of (a, b)) IN ((1, 2), (3, 4))
-      * then set creation doesn't work correctly.
-      */
-    if (left_arg_tuple && left_arg_tuple->name == "tuple")
-    {
-        for (const auto & arg : left_arg_tuple->arguments->children)
-            set_element_types.push_back(sample_block.getByName(arg->getColumnName()).type);
-    }
-    else
-    {
-        DataTypePtr left_type = sample_block.getByName(left_arg->getColumnName()).type;
-        set_element_types.push_back(left_type);
-    }
-
-    /// The case `x in (1, 2)` distinguishes from the case `x in 1` (also `x in (1)`).
-    bool single_value = false;
-    ASTPtr elements_ast = arg;
-
-    if (ASTFunction * set_func = typeid_cast<ASTFunction *>(arg.get()))
-    {
-        if (set_func->name == "tuple")
-        {
-            if (set_func->arguments->children.empty())
-            {
-                /// Empty set.
-                elements_ast = set_func->arguments;
-            }
-            else
-            {
-                /// Distinguish the case `(x, y) in ((1, 2), (3, 4))` from the case `(x, y) in (1, 2)`.
-                ASTFunction * any_element = typeid_cast<ASTFunction *>(set_func->arguments->children.at(0).get());
-                if (set_element_types.size() >= 2 && (!any_element || any_element->name != "tuple"))
-                    single_value = true;
-                else
-                    elements_ast = set_func->arguments;
-            }
-        }
-        else
-        {
-            if (set_element_types.size() >= 2)
-                throw Exception("Incorrect type of 2nd argument for function " + node->name
-                    + ". Must be subquery or set of " + toString(set_element_types.size()) + "-element tuples.",
-                    ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
-
-            single_value = true;
-        }
-    }
-    else if (typeid_cast<ASTLiteral *>(arg.get()))
-    {
-        single_value = true;
-    }
-    else
-    {
-        throw Exception("Incorrect type of 2nd argument for function " + node->name + ". Must be subquery or set of values.",
-            ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
-    }
-
-    if (single_value)
+    auto getTupleTypeFromAst = [this](const ASTPtr & node) -> DataTypePtr
+    {
+        auto ast_function = typeid_cast<const ASTFunction *>(node.get());
+        if (ast_function && ast_function->name == "tuple" && !ast_function->arguments->children.empty())
+        {
+            /// Won't parse all values of outer tuple.
+            auto element = ast_function->arguments->children.at(0);
+            std::pair<Field, DataTypePtr> value_raw = evaluateConstantExpression(element, context);
+            return std::make_shared<DataTypeTuple>(DataTypes({value_raw.second}));
+        }
+
+        return evaluateConstantExpression(node, context).second;
+    };
+
+    const DataTypePtr & left_arg_type = sample_block.getByName(left_arg->getColumnName()).type;
+    const DataTypePtr & right_arg_type = getTupleTypeFromAst(right_arg);
+
+    std::function<size_t(const DataTypePtr &)> getTupleDepth;
+    getTupleDepth = [&getTupleDepth](const DataTypePtr & type) -> size_t
+    {
+        if (auto tuple_type = typeid_cast<const DataTypeTuple *>(type.get()))
+            return 1 + (tuple_type->getElements().empty() ? 0 : getTupleDepth(tuple_type->getElements().at(0)));
+
+        return 0;
+    };
+
+    size_t left_tuple_depth = getTupleDepth(left_arg_type);
+    size_t right_tuple_depth = getTupleDepth(right_arg_type);
+
+    DataTypes set_element_types = {left_arg_type};
+    auto left_tuple_type = typeid_cast<const DataTypeTuple *>(left_arg_type.get());
+    if (left_tuple_type && left_tuple_type->getElements().size() != 1)
+        set_element_types = left_tuple_type->getElements();
+
+    ASTPtr elements_ast = nullptr;
+
+    /// 1 in 1; (1, 2) in (1, 2); identity(tuple(tuple(tuple(1)))) in tuple(tuple(tuple(1))); etc.
+    if (left_tuple_depth == right_tuple_depth)
     {
         ASTPtr exp_list = std::make_shared<ASTExpressionList>();
-        exp_list->children.push_back(elements_ast);
+        exp_list->children.push_back(right_arg);
         elements_ast = exp_list;
     }
+    /// 1 in (1, 2); (1, 2) in ((1, 2), (3, 4)); etc.
+    else if (left_tuple_depth + 1 == right_tuple_depth)
+    {
+        ASTFunction * set_func = typeid_cast<ASTFunction *>(right_arg.get());
+
+        if (!set_func || set_func->name != "tuple")
+            throw Exception("Incorrect type of 2nd argument for function " + node->name
+                + ". Must be subquery or set of elements with type " + left_arg_type->getName() + ".",
+                ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
+
+        elements_ast = set_func->arguments;
+    }
+    else
+        throw Exception("Invalid types for IN function: "
+            + left_arg_type->getName() + " and " + right_arg_type->getName() + ".",
+            ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);

     SetPtr set = std::make_shared<Set>(SizeLimits(settings.max_rows_in_set, settings.max_bytes_in_set, settings.set_overflow_mode));
     set->createFromAST(set_element_types, elements_ast, context, create_ordered_set);
-    prepared_sets[arg.get()] = std::move(set);
+    prepared_sets[right_arg.get()] = std::move(set);
 }
@@ -2089,8 +2081,10 @@ void ExpressionAnalyzer::getActionsImpl(const ASTPtr & ast, bool no_subqueries,
     /// If the function has an argument-lambda expression, you need to determine its type before the recursive call.
     bool has_lambda_arguments = false;

-    for (auto & child : node->arguments->children)
+    for (size_t arg = 0; arg < node->arguments->children.size(); ++arg)
     {
+        auto & child = node->arguments->children[arg];
+
         ASTFunction * lambda = typeid_cast<ASTFunction *>(child.get());
         if (lambda && lambda->name == "lambda")
         {
@@ -2108,7 +2102,7 @@ void ExpressionAnalyzer::getActionsImpl(const ASTPtr & ast, bool no_subqueries,
             /// Select the name in the next cycle.
             argument_names.emplace_back();
         }
-        else if (prepared_sets.count(child.get()))
+        else if (prepared_sets.count(child.get()) && functionIsInOrGlobalInOperator(node->name) && arg == 1)
         {
             ColumnWithTypeAndName column;
             column.type = std::make_shared<DataTypeSet>();
@@ -2204,9 +2198,9 @@ void ExpressionAnalyzer::getActionsImpl(const ASTPtr & ast, bool no_subqueries,

                 Names captured;
                 Names required = lambda_actions->getRequiredColumns();
-                for (size_t j = 0; j < required.size(); ++j)
-                    if (findColumn(required[j], lambda_arguments) == lambda_arguments.end())
-                        captured.push_back(required[j]);
+                for (const auto & required_arg : required)
+                    if (findColumn(required_arg, lambda_arguments) == lambda_arguments.end())
+                        captured.push_back(required_arg);

                 /// We can not name `getColumnName()`,
                 /// because it does not uniquely define the expression (the types of arguments can be different).
@@ -2226,9 +2220,9 @@ void ExpressionAnalyzer::getActionsImpl(const ASTPtr & ast, bool no_subqueries,

         if (only_consts)
         {
-            for (size_t i = 0; i < argument_names.size(); ++i)
+            for (const auto & argument_name : argument_names)
             {
-                if (!actions_stack.getSampleBlock().has(argument_names[i]))
+                if (!actions_stack.getSampleBlock().has(argument_name))
                 {
                     arguments_present = false;
                     break;
@@ -208,6 +208,7 @@ void Set::createFromAST(const DataTypes & types, ASTPtr node, const Context & co

     MutableColumns columns = header.cloneEmptyColumns();

+    DataTypePtr tuple_type;
     Row tuple_values;
     ASTExpressionList & list = typeid_cast<ASTExpressionList &>(*node);
     for (auto & elem : list.children)
@@ -221,10 +222,22 @@ void Set::createFromAST(const DataTypes & types, ASTPtr node, const Context & co
         }
         else if (ASTFunction * func = typeid_cast<ASTFunction *>(elem.get()))
         {
+            Field function_result;
+            const TupleBackend * tuple = nullptr;
             if (func->name != "tuple")
-                throw Exception("Incorrect element of set. Must be tuple.", ErrorCodes::INCORRECT_ELEMENT_OF_SET);
+            {
+                if (!tuple_type)
+                    tuple_type = std::make_shared<DataTypeTuple>(types);
+
+                function_result = extractValueFromNode(elem, *tuple_type, context);
+                if (function_result.getType() != Field::Types::Tuple)
+                    throw Exception("Invalid type of set. Expected tuple, got " + String(function_result.getTypeName()),
+                        ErrorCodes::INCORRECT_ELEMENT_OF_SET);
+
+                tuple = &function_result.get<Tuple>().t;
+            }

-            size_t tuple_size = func->arguments->children.size();
+            size_t tuple_size = tuple ? tuple->size() : func->arguments->children.size();
             if (tuple_size != num_columns)
                 throw Exception("Incorrect size of tuple in set: " + toString(tuple_size) + " instead of " + toString(num_columns),
                     ErrorCodes::INCORRECT_ELEMENT_OF_SET);
@@ -235,7 +248,8 @@ void Set::createFromAST(const DataTypes & types, ASTPtr node, const Context & co
             size_t i = 0;
             for (; i < tuple_size; ++i)
             {
-                Field value = extractValueFromNode(func->arguments->children[i], *types[i], context);
+                Field value = tuple ? (*tuple)[i]
+                                    : extractValueFromNode(func->arguments->children[i], *types[i], context);

                 /// If at least one of the elements of the tuple has an impossible (outside the range of the type) value, then the entire tuple too.
                 if (value.isNull())
@@ -87,6 +87,8 @@ struct Settings
     \
     M(SettingBool, merge_tree_uniform_read_distribution, true, "Distribute read from MergeTree over threads evenly, ensuring stable average execution time of each thread within one read operation.") \
     \
+    M(SettingUInt64, mysql_max_rows_to_insert, 65536, "The maximum number of rows in MySQL batch insertion of the MySQL storage engine") \
+    \
     M(SettingUInt64, optimize_min_equality_disjunction_chain_length, 3, "The minimum length of the expression `expr = x1 OR ... expr = xN` for optimization ") \
     \
     M(SettingUInt64, min_bytes_to_use_direct_io, 0, "The minimum number of bytes for input/output operations is bypassing the page cache. 0 - disabled.") \
@@ -12,9 +12,8 @@ llvm_map_components_to_libnames(REQUIRED_LLVM_LIBRARIES all)
 if (TERMCAP_LIBRARY)
     list(APPEND REQUIRED_LLVM_LIBRARIES ${TERMCAP_LIBRARY})
 endif ()
-if (LTDL_LIBRARY)
-    list(APPEND REQUIRED_LLVM_LIBRARIES ${LTDL_LIBRARY})
-endif ()
+list(APPEND REQUIRED_LLVM_LIBRARIES ${CMAKE_DL_LIBS})

 message(STATUS "Using LLVM ${LLVM_VERSION}: ${LLVM_INCLUDE_DIRS} : ${REQUIRED_LLVM_LIBRARIES}")
@@ -12,9 +12,7 @@ llvm_map_components_to_libnames(REQUIRED_LLVM_LIBRARIES all)
 if (TERMCAP_LIBRARY)
     list(APPEND REQUIRED_LLVM_LIBRARIES ${TERMCAP_LIBRARY})
 endif ()
-if (LTDL_LIBRARY)
-    list(APPEND REQUIRED_LLVM_LIBRARIES ${LTDL_LIBRARY})
-endif ()
+list(APPEND REQUIRED_LLVM_LIBRARIES ${CMAKE_DL_LIBS})

 message(STATUS "Using LLVM ${LLVM_VERSION}: ${LLVM_INCLUDE_DIRS} : ${REQUIRED_LLVM_LIBRARIES}")
@@ -12,9 +12,8 @@ llvm_map_components_to_libnames(REQUIRED_LLVM_LIBRARIES all)
 if (TERMCAP_LIBRARY)
     list(APPEND REQUIRED_LLVM_LIBRARIES ${TERMCAP_LIBRARY})
 endif ()
-if (LTDL_LIBRARY)
-    list(APPEND REQUIRED_LLVM_LIBRARIES ${LTDL_LIBRARY})
-endif ()
+list(APPEND REQUIRED_LLVM_LIBRARIES ${CMAKE_DL_LIBS})

 message(STATUS "Using LLVM ${LLVM_VERSION}: ${LLVM_INCLUDE_DIRS} : ${REQUIRED_LLVM_LIBRARIES}")
@@ -26,6 +26,7 @@
 #include <Interpreters/DDLWorker.h>
 #include <Interpreters/ProcessList.h>
 #include <Interpreters/loadMetadata.h>
+#include <Interpreters/DNSCacheUpdater.h>
 #include <Storages/StorageReplicatedMergeTree.h>
 #include <Storages/System/attachSystemTables.h>
 #include <AggregateFunctions/registerAggregateFunctions.h>
@@ -38,12 +39,9 @@
 #include "StatusFile.h"
 #include "TCPHandlerFactory.h"

-#if Poco_NetSSL_FOUND
+#if USE_POCO_NETSSL
 #include <Poco/Net/Context.h>
 #include <Poco/Net/SecureServerSocket.h>
-#include <Interpreters/DNSCacheUpdater.h>
-
-
 #endif

 namespace CurrentMetrics
@@ -431,7 +429,7 @@ int Server::main(const std::vector<std::string> & /*args*/)
         /// HTTPS
         if (config().has("https_port"))
         {
-#if Poco_NetSSL_FOUND
+#if USE_POCO_NETSSL
             std::call_once(ssl_init_once, SSLInit);

             Poco::Net::SecureServerSocket socket;
@@ -471,7 +469,7 @@ int Server::main(const std::vector<std::string> & /*args*/)
         /// TCP with SSL
         if (config().has("tcp_port_secure"))
        {
-#if Poco_NetSSL_FOUND
+#if USE_POCO_NETSSL
             Poco::Net::SecureServerSocket socket;
             auto address = socket_bind_listen(socket, listen_host, config().getInt("tcp_port_secure"), /* secure = */ true);
             socket.setReceiveTimeout(settings.receive_timeout);
@@ -1,12 +1,22 @@
 #include <Storages/StorageMySQL.h>

 #if USE_MYSQL

 #include <Storages/StorageFactory.h>
 #include <Storages/transformQueryForExternalDatabase.h>
 #include <Dictionaries/MySQLBlockInputStream.h>
 #include <Interpreters/evaluateConstantExpression.h>
+#include <Interpreters/Settings.h>
+#include <Interpreters/Context.h>
+#include <DataStreams/IBlockOutputStream.h>
+#include <DataStreams/BlockOutputStreamFromRowOutputStream.h>
+#include <DataStreams/ValuesRowOutputStream.h>
+#include <DataStreams/FormatFactory.h>
 #include <Common/parseAddress.h>
+#include <IO/Operators.h>
+#include <IO/WriteHelpers.h>
 #include <Parsers/ASTLiteral.h>
+#include <mysqlxx/Transaction.h>


 namespace DB
@@ -15,20 +25,26 @@ namespace DB

 namespace ErrorCodes
 {
     extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
+    extern const int BAD_ARGUMENTS;
 }


-StorageMySQL::StorageMySQL(
-    const std::string & name,
+StorageMySQL::StorageMySQL(const std::string & name,
     mysqlxx::Pool && pool,
     const std::string & remote_database_name,
     const std::string & remote_table_name,
-    const ColumnsDescription & columns_)
+    const bool replace_query,
+    const std::string & on_duplicate_clause,
+    const ColumnsDescription & columns_,
+    const Context & context)
     : IStorage{columns_}
     , name(name)
     , remote_database_name(remote_database_name)
     , remote_table_name(remote_table_name)
+    , replace_query{replace_query}
+    , on_duplicate_clause{on_duplicate_clause}
     , pool(std::move(pool))
+    , context(context)
 {
 }

@@ -56,18 +72,132 @@ BlockInputStreams StorageMySQL::read(
 }


+class StorageMySQLBlockOutputStream : public IBlockOutputStream
+{
+public:
+    explicit StorageMySQLBlockOutputStream(const StorageMySQL & storage,
+        const std::string & remote_database_name,
+        const std::string & remote_table_name ,
+        const mysqlxx::PoolWithFailover::Entry & entry,
+        const size_t & mysql_max_rows_to_insert)
+        : storage{storage}
+        , remote_database_name{remote_database_name}
+        , remote_table_name{remote_table_name}
+        , entry{entry}
+        , max_batch_rows{mysql_max_rows_to_insert}
+    {
+    }
+
+    Block getHeader() const override { return storage.getSampleBlock(); }
+
+    void write(const Block & block) override
+    {
+        auto blocks = splitBlocks(block, max_batch_rows);
+        mysqlxx::Transaction trans(entry);
+        try
+        {
+            for (const Block & batch_data : blocks)
+            {
+                writeBlockData(batch_data);
+            }
+            trans.commit();
+        }
+        catch(...)
+        {
+            trans.rollback();
+            throw;
+        }
+    }
+
+    void writeBlockData(const Block & block)
+    {
+        WriteBufferFromOwnString sqlbuf;
+        sqlbuf << (storage.replace_query ? "REPLACE" : "INSERT") << " INTO ";
+        sqlbuf << backQuoteIfNeed(remote_database_name) << "." << backQuoteIfNeed(remote_table_name);
+        sqlbuf << " ( " << dumpNamesWithBackQuote(block) << " ) VALUES ";
+
+        auto writer = FormatFactory().getOutput("Values", sqlbuf, storage.getSampleBlock(), storage.context);
+        writer->write(block);
+
+        if (!storage.on_duplicate_clause.empty())
+            sqlbuf << " ON DUPLICATE KEY " << storage.on_duplicate_clause;
+
+        sqlbuf << ";";
+
+        auto query = this->entry->query(sqlbuf.str());
+        query.execute();
+    }
+
+    Blocks splitBlocks(const Block & block, const size_t & max_rows) const
+    {
+        /// Avoid Excessive copy when block is small enough
+        if (block.rows() <= max_rows)
+            return Blocks{std::move(block)};
+
+        const size_t splited_block_size = ceil(block.rows() * 1.0 / max_rows);
+        Blocks splitted_blocks(splited_block_size);
+
+        for (size_t idx = 0; idx < splited_block_size; ++idx)
+            splitted_blocks[idx] = block.cloneEmpty();
+
+        const size_t columns = block.columns();
+        const size_t rows = block.rows();
+        size_t offsets = 0;
+        size_t limits = max_batch_rows;
+        for (size_t idx = 0; idx < splited_block_size; ++idx)
+        {
+            /// For last batch, limits should be the remain size
+            if (idx == splited_block_size - 1) limits = rows - offsets;
+            for (size_t col_idx = 0; col_idx < columns; ++col_idx)
+            {
+                splitted_blocks[idx].getByPosition(col_idx).column = block.getByPosition(col_idx).column->cut(offsets, limits);
+            }
+            offsets += max_batch_rows;
+        }
+
+        return splitted_blocks;
+    }
+
+    std::string dumpNamesWithBackQuote(const Block & block) const
+    {
+        WriteBufferFromOwnString out;
+        for (auto it = block.begin(); it != block.end(); ++it)
+        {
+            if (it != block.begin())
+                out << ", ";
+            out << backQuoteIfNeed(it->name);
+        }
+        return out.str();
+    }
+
+
+private:
+    const StorageMySQL & storage;
+    std::string remote_database_name;
+    std::string remote_table_name;
+    mysqlxx::PoolWithFailover::Entry entry;
+    size_t max_batch_rows;
+};
+
+
+BlockOutputStreamPtr StorageMySQL::write(
+    const ASTPtr & /*query*/, const Settings & settings)
+{
+    return std::make_shared<StorageMySQLBlockOutputStream>(*this, remote_database_name, remote_table_name, pool.Get(), settings.mysql_max_rows_to_insert);
+}
+
 void registerStorageMySQL(StorageFactory & factory)
 {
     factory.registerStorage("MySQL", [](const StorageFactory::Arguments & args)
     {
         ASTs & engine_args = args.engine_args;

-        if (engine_args.size() != 5)
+        if (engine_args.size() < 5 || engine_args.size() > 7)
             throw Exception(
-                "Storage MySQL requires exactly 5 parameters: MySQL('host:port', database, table, 'user', 'password').",
+                "Storage MySQL requires 5-7 parameters: MySQL('host:port', database, table, 'user', 'password'[, replace_query, 'on_duplicate_clause']).",
                 ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);

-        for (size_t i = 0; i < 5; ++i)
+        for (size_t i = 0; i < engine_args.size(); ++i)
             engine_args[i] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[i], args.local_context);

         /// 3306 is the default MySQL port.
@@ -80,12 +210,27 @@ void registerStorageMySQL(StorageFactory & factory)

         mysqlxx::Pool pool(remote_database, parsed_host_port.first, username, password, parsed_host_port.second);

+        bool replace_query = false;
+        std::string on_duplicate_clause;
+        if (engine_args.size() >= 6)
+            replace_query = static_cast<const ASTLiteral &>(*engine_args[5]).value.safeGet<UInt64>() > 0;
+        if (engine_args.size() == 7)
+            on_duplicate_clause = static_cast<const ASTLiteral &>(*engine_args[6]).value.safeGet<String>();
+
+        if (replace_query && !on_duplicate_clause.empty())
+            throw Exception(
+                "Only one of 'replace_query' and 'on_duplicate_clause' can be specified, or none of them",
+                ErrorCodes::BAD_ARGUMENTS);
+
         return StorageMySQL::create(
             args.table_name,
             std::move(pool),
             remote_database,
             remote_table,
-            args.columns);
+            replace_query,
+            on_duplicate_clause,
+            args.columns,
+            args.context);
     });
 }
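The `splitBlocks` method in the diff above caps each statement within the transaction at `mysql_max_rows_to_insert` rows, with the last batch absorbing the remainder. A minimal Python sketch of the same batching rule (`split_rows` is an illustrative name for this note, not a ClickHouse API):

```python
import math

def split_rows(rows, max_rows):
    """Split rows into batches of at most max_rows, mirroring the
    column-wise cut done by splitBlocks (illustrative sketch)."""
    # Avoid an extra copy when the input already fits in one batch.
    if len(rows) <= max_rows:
        return [rows]

    n_batches = math.ceil(len(rows) / max_rows)
    batches = []
    offset = 0
    for idx in range(n_batches):
        # The last batch takes whatever rows remain.
        limit = len(rows) - offset if idx == n_batches - 1 else max_rows
        batches.append(rows[offset:offset + limit])
        offset += max_rows
    return batches
```

All batches except possibly the last have exactly `max_rows` rows, so one oversized INSERT is turned into several bounded ones inside a single transaction.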
@@ -24,7 +24,10 @@ public:
         mysqlxx::Pool && pool,
         const std::string & remote_database_name,
         const std::string & remote_table_name,
-        const ColumnsDescription & columns);
+        const bool replace_query,
+        const std::string & on_duplicate_clause,
+        const ColumnsDescription & columns,
+        const Context & context);

     std::string getName() const override { return "MySQL"; }
     std::string getTableName() const override { return name; }
@@ -37,14 +40,20 @@ public:
         size_t max_block_size,
         unsigned num_streams) override;

+    BlockOutputStreamPtr write(const ASTPtr & query, const Settings & settings) override;
+
 private:
+    friend class StorageMySQLBlockOutputStream;
     std::string name;

     std::string remote_database_name;
     std::string remote_table_name;
+    bool replace_query;
+    std::string on_duplicate_clause;


     mysqlxx::Pool pool;
+    const Context & context;
 };

 }
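The write path added above builds one SQL statement per batch: `REPLACE` instead of `INSERT` when `replace_query` is set, an optional `ON DUPLICATE KEY` suffix otherwise, and back-quoted identifiers throughout. A hedged Python sketch of that statement assembly (`build_statement` and `backquote` are illustrative helpers, not the actual C++ functions):

```python
def backquote(name):
    # Quote an identifier the way MySQL expects: `name`, doubling embedded backquotes.
    return '`' + name.replace('`', '``') + '`'

def build_statement(db, table, columns, values_sql,
                    replace_query=False, on_duplicate_clause=''):
    """Assemble the per-batch INSERT/REPLACE statement (sketch)."""
    if replace_query and on_duplicate_clause:
        # Mirrors the BAD_ARGUMENTS check in the diff: the two are mutually exclusive.
        raise ValueError("Only one of 'replace_query' and 'on_duplicate_clause' can be specified")
    verb = 'REPLACE' if replace_query else 'INSERT'
    cols = ', '.join(backquote(c) for c in columns)
    sql = '{} INTO {}.{} ( {} ) VALUES {}'.format(
        verb, backquote(db), backquote(table), cols, values_sql)
    if on_duplicate_clause:
        sql += ' ON DUPLICATE KEY ' + on_duplicate_clause
    return sql + ';'
```

In the C++ code the `VALUES` payload is produced by the `Values` output format writing straight into the same buffer, so no intermediate row strings are materialized.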
@@ -23,7 +23,7 @@ void registerStorageJoin(StorageFactory & factory);
 void registerStorageView(StorageFactory & factory);
 void registerStorageMaterializedView(StorageFactory & factory);

-#if Poco_SQLODBC_FOUND || Poco_DataODBC_FOUND
+#if USE_POCO_SQLODBC || USE_POCO_DATAODBC
 void registerStorageODBC(StorageFactory & factory);
 #endif

@@ -56,7 +56,7 @@ void registerStorages()
     registerStorageView(factory);
     registerStorageMaterializedView(factory);

-#if Poco_SQLODBC_FOUND || Poco_DataODBC_FOUND
+#if USE_POCO_SQLODBC || USE_POCO_DATAODBC
     registerStorageODBC(factory);
 #endif

@@ -7,12 +7,12 @@ list(REMOVE_ITEM clickhouse_table_functions_headers ITableFunction.h TableFuncti
 add_library(clickhouse_table_functions ${clickhouse_table_functions_sources})
 target_link_libraries(clickhouse_table_functions clickhouse_storages_system dbms ${Poco_Foundation_LIBRARY})

-if (Poco_SQLODBC_FOUND)
+if (USE_POCO_SQLODBC)
     target_link_libraries (clickhouse_table_functions ${Poco_SQLODBC_LIBRARY})
     target_include_directories (clickhouse_table_functions PRIVATE ${ODBC_INCLUDE_DIRECTORIES} ${Poco_SQLODBC_INCLUDE_DIRS})
 endif ()

-if (Poco_DataODBC_FOUND)
+if (USE_POCO_DATAODBC)
     target_link_libraries (clickhouse_table_functions ${Poco_DataODBC_LIBRARY})
     target_include_directories (clickhouse_table_functions PRIVATE ${ODBC_INCLUDE_DIRECTORIES} ${Poco_DataODBC_INCLUDE_DIRS})
 endif ()
@@ -29,8 +29,8 @@ namespace DB

 namespace ErrorCodes
 {
-    extern const int LOGICAL_ERROR;
     extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
+    extern const int BAD_ARGUMENTS;
 }


@@ -89,11 +89,11 @@ StoragePtr TableFunctionMySQL::executeImpl(const ASTPtr & ast_function, const Co

     ASTs & args = typeid_cast<ASTExpressionList &>(*args_func.arguments).children;

-    if (args.size() != 5)
-        throw Exception("Table function 'mysql' requires exactly 5 arguments: host:port, database name, table name, username and password",
+    if (args.size() < 5 || args.size() > 7)
+        throw Exception("Table function 'mysql' requires 5-7 parameters: MySQL('host:port', database, table, 'user', 'password'[, replace_query, 'on_duplicate_clause']).",
             ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);

-    for (size_t i = 0; i < 5; ++i)
+    for (size_t i = 0; i < args.size(); ++i)
         args[i] = evaluateConstantExpressionOrIdentifierAsLiteral(args[i], context);

     std::string host_port = static_cast<const ASTLiteral &>(*args[0]).value.safeGet<String>();
@@ -102,6 +102,18 @@ StoragePtr TableFunctionMySQL::executeImpl(const ASTPtr & ast_function, const Co
     std::string user_name = static_cast<const ASTLiteral &>(*args[3]).value.safeGet<String>();
     std::string password = static_cast<const ASTLiteral &>(*args[4]).value.safeGet<String>();

+    bool replace_query = false;
+    std::string on_duplicate_clause;
+    if (args.size() >= 6)
+        replace_query = static_cast<const ASTLiteral &>(*args[5]).value.safeGet<UInt64>() > 0;
+    if (args.size() == 7)
+        on_duplicate_clause = static_cast<const ASTLiteral &>(*args[6]).value.safeGet<String>();
+
+    if (replace_query && !on_duplicate_clause.empty())
+        throw Exception(
+            "Only one of 'replace_query' and 'on_duplicate_clause' can be specified, or none of them",
+            ErrorCodes::BAD_ARGUMENTS);
+
     /// 3306 is the default MySQL port number
     auto parsed_host_port = parseAddress(host_port, 3306);

@@ -152,7 +164,10 @@ StoragePtr TableFunctionMySQL::executeImpl(const ASTPtr & ast_function, const Co
         std::move(pool),
         database_name,
         table_name,
-        ColumnsDescription{columns});
+        replace_query,
+        on_duplicate_clause,
+        ColumnsDescription{columns},
+        context);

     res->startup();
     return res;
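Both the storage registration and the table function above accept 5-7 arguments and reject `replace_query` combined with `on_duplicate_clause`. A small Python sketch of that validation logic (`parse_mysql_args` is an illustrative name for this note):

```python
def parse_mysql_args(args):
    """Validate a MySQL('host:port', database, table, 'user', 'password'
    [, replace_query, 'on_duplicate_clause']) argument list (sketch)."""
    if len(args) < 5 or len(args) > 7:
        raise ValueError("requires 5-7 parameters")

    host_port, database, table, user, password = args[:5]
    # Optional sixth argument: non-zero means use REPLACE instead of INSERT.
    replace_query = bool(args[5]) if len(args) >= 6 else False
    # Optional seventh argument: suffix for ON DUPLICATE KEY.
    on_duplicate_clause = args[6] if len(args) == 7 else ''

    if replace_query and on_duplicate_clause:
        raise ValueError("Only one of 'replace_query' and 'on_duplicate_clause' "
                         "can be specified, or none of them")

    return dict(host_port=host_port, database=database, table=table,
                user=user, password=password,
                replace_query=replace_query,
                on_duplicate_clause=on_duplicate_clause)
```

The mutual exclusion exists because MySQL's `REPLACE` and `INSERT ... ON DUPLICATE KEY` are two different conflict-resolution mechanisms that cannot appear in the same statement.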
@@ -1,6 +1,6 @@
 #include <TableFunctions/TableFunctionODBC.h>

-#if Poco_SQLODBC_FOUND || Poco_DataODBC_FOUND
+#if USE_POCO_SQLODBC || USE_POCO_DATAODBC
 #include <type_traits>
 #include <ext/scope_guard.h>

@@ -1,7 +1,7 @@
 #pragma once

 #include <Common/config.h>
-#if Poco_SQLODBC_FOUND || Poco_DataODBC_FOUND
+#if USE_POCO_SQLODBC || USE_POCO_DATAODBC

 #include <TableFunctions/ITableFunction.h>

@@ -13,7 +13,7 @@ void registerTableFunctionNumbers(TableFunctionFactory & factory);
 void registerTableFunctionCatBoostPool(TableFunctionFactory & factory);
 void registerTableFunctionFile(TableFunctionFactory & factory);

-#if Poco_SQLODBC_FOUND || Poco_DataODBC_FOUND
+#if USE_POCO_SQLODBC || USE_POCO_DATAODBC
 void registerTableFunctionODBC(TableFunctionFactory & factory);
 #endif

@@ -33,7 +33,7 @@ void registerTableFunctions()
     registerTableFunctionCatBoostPool(factory);
     registerTableFunctionFile(factory);

-#if Poco_SQLODBC_FOUND || Poco_DataODBC_FOUND
+#if USE_POCO_SQLODBC || USE_POCO_DATAODBC
     registerTableFunctionODBC(factory);
 #endif

@@ -14,7 +14,7 @@ Don't use Docker from your system repository.

 * [pip](https://pypi.python.org/pypi/pip). To install: `sudo apt-get install python-pip`
 * [py.test](https://docs.pytest.org/) testing framework. To install: `sudo -H pip install pytest`
-* [docker-compose](https://docs.docker.com/compose/) and additional python libraries. To install: `sudo -H pip install docker-compose docker dicttoxml kazoo`
+* [docker-compose](https://docs.docker.com/compose/) and additional python libraries. To install: `sudo -H pip install docker-compose docker dicttoxml kazoo PyMySQL`

 If you want to run the tests under a non-privileged user, you must add this user to `docker` group: `sudo usermod -aG docker $USER` and re-login.
 (You must close all your sessions (for example, restart your computer))
@@ -48,15 +48,17 @@ class ClickHouseCluster:

         self.base_cmd = ['docker-compose', '--project-directory', self.base_dir, '--project-name', self.project_name]
         self.base_zookeeper_cmd = None
+        self.base_mysql_cmd = []
         self.pre_zookeeper_commands = []
         self.instances = {}
         self.with_zookeeper = False
+        self.with_mysql = False

         self.docker_client = None
         self.is_up = False


-    def add_instance(self, name, config_dir=None, main_configs=[], user_configs=[], macroses={}, with_zookeeper=False,
+    def add_instance(self, name, config_dir=None, main_configs=[], user_configs=[], macroses={}, with_zookeeper=False, with_mysql=False,
                      clickhouse_path_dir=None, hostname=None):
         """Add an instance to the cluster.

@@ -75,7 +77,7 @@ class ClickHouseCluster:

         instance = ClickHouseInstance(
             self, self.base_dir, name, config_dir, main_configs, user_configs, macroses, with_zookeeper,
-            self.zookeeper_config_path, self.base_configs_dir, self.server_bin_path, clickhouse_path_dir, hostname=hostname)
+            self.zookeeper_config_path, with_mysql, self.base_configs_dir, self.server_bin_path, clickhouse_path_dir, hostname=hostname)

         self.instances[name] = instance
         self.base_cmd.extend(['--file', instance.docker_compose_path])
@@ -85,6 +87,12 @@ class ClickHouseCluster:
             self.base_zookeeper_cmd = ['docker-compose', '--project-directory', self.base_dir, '--project-name',
                                        self.project_name, '--file', p.join(HELPERS_DIR, 'docker_compose_zookeeper.yml')]

+        if with_mysql and not self.with_mysql:
+            self.with_mysql = True
+            self.base_cmd.extend(['--file', p.join(HELPERS_DIR, 'docker_compose_mysql.yml')])
+            self.base_mysql_cmd = ['docker-compose', '--project-directory', self.base_dir, '--project-name',
+                                   self.project_name, '--file', p.join(HELPERS_DIR, 'docker_compose_mysql.yml')]
+
         return instance


@@ -124,6 +132,9 @@ class ClickHouseCluster:
             for command in self.pre_zookeeper_commands:
                 self.run_kazoo_commands_with_retries(command, repeats=5)

+        if self.with_mysql and self.base_mysql_cmd:
+            subprocess.check_call(self.base_mysql_cmd + ['up', '-d', '--no-recreate'])
+
         # Uncomment for debugging
         #print ' '.join(self.base_cmd + ['up', '--no-recreate'])

@@ -138,6 +149,7 @@ class ClickHouseCluster:

             instance.client = Client(instance.ip_address, command=self.client_bin_path)

+
         self.is_up = True


@@ -201,7 +213,7 @@ services:
 class ClickHouseInstance:
     def __init__(
             self, cluster, base_path, name, custom_config_dir, custom_main_configs, custom_user_configs, macroses,
-            with_zookeeper, zookeeper_config_path, base_configs_dir, server_bin_path, clickhouse_path_dir, hostname=None):
+            with_zookeeper, zookeeper_config_path, with_mysql, base_configs_dir, server_bin_path, clickhouse_path_dir, hostname=None):

         self.name = name
         self.base_cmd = cluster.base_cmd[:]
@@ -220,6 +232,8 @@ class ClickHouseInstance:
         self.base_configs_dir = base_configs_dir
         self.server_bin_path = server_bin_path

+        self.with_mysql = with_mysql
+
         self.path = p.join(self.cluster.instances_dir, name)
         self.docker_compose_path = p.join(self.path, 'docker_compose.yml')

@@ -269,7 +283,6 @@ class ClickHouseInstance:

         while True:
             status = self.get_docker_handle().status
-
             if status == 'exited':
                 raise Exception("Instance `{}' failed to start. Container status: {}".format(self.name, status))

@@ -356,9 +369,15 @@ class ClickHouseInstance:
         logs_dir = p.abspath(p.join(self.path, 'logs'))
         os.mkdir(logs_dir)

-        depends_on = '[]'
+        depends_on = []
+
+        if self.with_mysql:
+            depends_on.append("mysql1")
+
         if self.with_zookeeper:
-            depends_on = '["zoo1", "zoo2", "zoo3"]'
+            depends_on.append("zoo1")
+            depends_on.append("zoo2")
+            depends_on.append("zoo3")

         with open(self.docker_compose_path, 'w') as docker_compose:
             docker_compose.write(DOCKER_COMPOSE_TEMPLATE.format(
@@ -370,7 +389,7 @@ class ClickHouseInstance:
                 config_d_dir=config_d_dir,
                 db_dir=db_dir,
                 logs_dir=logs_dir,
-                depends_on=depends_on))
+                depends_on=str(depends_on)))


     def destroy_dir(self):
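The `depends_on` change above replaces a hard-coded string with a list so that the MySQL and ZooKeeper service dependencies compose instead of overwriting each other. A Python sketch of the resulting behaviour (`build_depends_on` is an illustrative standalone version of the logic, not a helper from the test framework):

```python
def build_depends_on(with_mysql=False, with_zookeeper=False):
    """Collect docker-compose service dependencies for one instance."""
    depends_on = []
    if with_mysql:
        depends_on.append("mysql1")
    if with_zookeeper:
        depends_on.extend(["zoo1", "zoo2", "zoo3"])
    # The cluster helper renders this into the docker_compose.yml
    # template via str(depends_on), which happens to be valid YAML flow syntax.
    return str(depends_on)
```

With the old string-based code, enabling both MySQL and ZooKeeper would have left only the ZooKeeper dependencies in the generated compose file.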
dbms/tests/integration/helpers/docker_compose_mysql.yml (new file, 9 lines)
@@ -0,0 +1,9 @@
+version: '2'
+services:
+    mysql1:
+        image: mysql:5.7
+        restart: always
+        environment:
+            MYSQL_ROOT_PASSWORD: clickhouse
+        ports:
+            - 3308:3306
@@ -0,0 +1,11 @@
+<yandex>
+    <logger>
+        <level>trace</level>
+        <log>/var/log/clickhouse-server/copier/log.log</log>
+        <errorlog>/var/log/clickhouse-server/copier/log.err.log</errorlog>
+        <size>1000M</size>
+        <count>10</count>
+        <stderr>/var/log/clickhouse-server/copier/stderr</stderr>
+        <stdout>/var/log/clickhouse-server/copier/stdout</stdout>
+    </logger>
+</yandex>
@@ -183,7 +183,7 @@ def execute_task(task, cmd_options):
     copiers_exec_ids = []

     cmd = ['/usr/bin/clickhouse', 'copier',
-           '--config', '/etc/clickhouse-server/config-preprocessed.xml',
+           '--config', '/etc/clickhouse-server/config-copier.xml',
            '--task-path', zk_task_path,
            '--base-dir', '/var/log/clickhouse-server/copier']
     cmd += cmd_options
@@ -0,0 +1,12 @@
+<yandex>
+    <remote_servers>
+        <test_cluster>
+            <shard>
+                <replica>
+                    <host>node1</host>
+                    <port>9000</port>
+                </replica>
+            </shard>
+        </test_cluster>
+    </remote_servers>
+</yandex>
98
dbms/tests/integration/test_storage_mysql/test.py
Normal file
98
dbms/tests/integration/test_storage_mysql/test.py
Normal file
@ -0,0 +1,98 @@
+from contextlib import contextmanager
+
+import pytest
+
+## sudo -H pip install PyMySQL
+import pymysql.cursors
+
+from helpers.cluster import ClickHouseCluster
+
+cluster = ClickHouseCluster(__file__)
+
+node1 = cluster.add_instance('node1', main_configs=['configs/remote_servers.xml'], with_mysql = True)
+create_table_sql_template = """
+    CREATE TABLE `clickhouse`.`{}` (
+    `id` int(11) NOT NULL,
+    `name` varchar(50) NOT NULL,
+    `age` int NOT NULL default 0,
+    `money` int NOT NULL default 0,
+    PRIMARY KEY (`id`)) ENGINE=InnoDB;
+    """
+
+@pytest.fixture(scope="module")
+def started_cluster():
+    try:
+        cluster.start()
+
+        conn = get_mysql_conn()
+        ## create mysql db and table
+        create_mysql_db(conn, 'clickhouse')
+        yield cluster
+
+    finally:
+        cluster.shutdown()
+
+
+def test_insert_select(started_cluster):
+    table_name = 'test_insert_select'
+    conn = get_mysql_conn()
+    create_mysql_table(conn, table_name)
+
+    node1.query('''
+CREATE TABLE {}(id UInt32, name String, age UInt32, money UInt32) ENGINE = MySQL('mysql1:3306', 'clickhouse', '{}', 'root', 'clickhouse');
+'''.format(table_name, table_name))
+    node1.query("INSERT INTO {}(id, name, money) select number, concat('name_', toString(number)), 3 from numbers(10000) ".format(table_name))
+    assert node1.query("SELECT count() FROM {}".format(table_name)).rstrip() == '10000'
+    assert node1.query("SELECT sum(money) FROM {}".format(table_name)).rstrip() == '30000'
+    conn.close()
+
+
+def test_replace_select(started_cluster):
+    table_name = 'test_replace_select'
+    conn = get_mysql_conn()
+    create_mysql_table(conn, table_name)
+
+    node1.query('''
+CREATE TABLE {}(id UInt32, name String, age UInt32, money UInt32) ENGINE = MySQL('mysql1:3306', 'clickhouse', '{}', 'root', 'clickhouse', 1);
+'''.format(table_name, table_name))
+    node1.query("INSERT INTO {}(id, name, money) select number, concat('name_', toString(number)), 3 from numbers(10000) ".format(table_name))
+    node1.query("INSERT INTO {}(id, name, money) select number, concat('name_', toString(number)), 3 from numbers(10000) ".format(table_name))
+    assert node1.query("SELECT count() FROM {}".format(table_name)).rstrip() == '10000'
+    assert node1.query("SELECT sum(money) FROM {}".format(table_name)).rstrip() == '30000'
+    conn.close()
+
+
+def test_insert_on_duplicate_select(started_cluster):
+    table_name = 'test_insert_on_duplicate_select'
+    conn = get_mysql_conn()
+    create_mysql_table(conn, table_name)
+
+    node1.query('''
+CREATE TABLE {}(id UInt32, name String, age UInt32, money UInt32) ENGINE = MySQL('mysql1:3306', 'clickhouse', '{}', 'root', 'clickhouse', 0, 'update money = money + values(money)');
+'''.format(table_name, table_name))
+    node1.query("INSERT INTO {}(id, name, money) select number, concat('name_', toString(number)), 3 from numbers(10000) ".format(table_name))
+    node1.query("INSERT INTO {}(id, name, money) select number, concat('name_', toString(number)), 3 from numbers(10000) ".format(table_name))
+    assert node1.query("SELECT count() FROM {}".format(table_name)).rstrip() == '10000'
+    assert node1.query("SELECT sum(money) FROM {}".format(table_name)).rstrip() == '60000'
+    conn.close()
+
+
+def get_mysql_conn():
+    conn = pymysql.connect(user='root', password='clickhouse', host='127.0.0.1', port=3308)
+    return conn
+
+def create_mysql_db(conn, name):
+    with conn.cursor() as cursor:
+        cursor.execute(
+            "CREATE DATABASE {} DEFAULT CHARACTER SET 'utf8'".format(name))
+
+def create_mysql_table(conn, tableName):
+    with conn.cursor() as cursor:
+        cursor.execute(create_table_sql_template.format(tableName))
+
+
+if __name__ == '__main__':
+    with contextmanager(started_cluster)() as cluster:
+        for name, instance in cluster.instances.items():
+            print name, instance.ip_address
+        raw_input("Cluster created, press any key to destroy...")
dbms/tests/queries/0_stateless/00626_in_syntax.reference (new file, 38 lines)
@ -0,0 +1,38 @@
+1
+1
+1
+1
+1
+1
+-
+1
+1
+1
+1
+1
+1
+1
+-
+0
+0
+1
+0
+1
+1
+-
+0
+1
+1
+1
+0
+1
+0
+1
+1
+0
+-
+1
+1
+1
+-
+(1,2) ((1,2),(3,4)) 1 1
dbms/tests/queries/0_stateless/00626_in_syntax.sql (new file, 44 lines)
@ -0,0 +1,44 @@
+select (1, 2) in tuple((1, 2));
+select (1, 2) in ((1, 2), (3, 4));
+select ((1, 2), (3, 4)) in ((1, 2), (3, 4));
+select ((1, 2), (3, 4)) in (((1, 2), (3, 4)));
+select ((1, 2), (3, 4)) in tuple(((1, 2), (3, 4)));
+select ((1, 2), (3, 4)) in (((1, 2), (3, 4)), ((5, 6), (7, 8)));
+
+select '-';
+select 1 in 1;
+select 1 in tuple(1);
+select tuple(1) in tuple(1);
+select tuple(1) in tuple(tuple(1));
+select tuple(tuple(1)) in tuple(tuple(1));
+select tuple(tuple(1)) in tuple(tuple(tuple(1)));
+select tuple(tuple(tuple(1))) in tuple(tuple(tuple(1)));
+
+select '-';
+select 1 in Null;
+select 1 in tuple(Null);
+select 1 in tuple(Null, 1);
+select tuple(1) in tuple(tuple(Null));
+select tuple(1) in tuple(tuple(Null), tuple(1));
+select tuple(tuple(Null), tuple(1)) in tuple(tuple(Null), tuple(1));
+
+select '-';
+select 1 in (1 + 1, 1 - 1);
+select 1 in (0 + 1, 1, toInt8(sin(5)));
+select (0 + 1, 1, toInt8(sin(5))) in (0 + 1, 1, toInt8(sin(5)));
+select identity(tuple(1)) in (tuple(1), tuple(2));
+select identity(tuple(1)) in (tuple(0), tuple(2));
+select identity(tuple(1)) in (identity(tuple(1)), tuple(2));
+select identity(tuple(1)) in (identity(tuple(0)), tuple(2));
+select identity(tuple(1)) in (identity(tuple(1)), identity(tuple(2)));
+select identity(tuple(1)) in (identity(tuple(1)), identity(identity(tuple(2))));
+select identity(tuple(1)) in (identity(tuple(0)), identity(identity(tuple(2))));
+
+select '-';
+select identity((1, 2)) in (1, 2);
+select identity((1, 2)) in ((1, 2), (3, 4));
+select identity((1, 2)) in ((1, 2), identity((3, 4)));
+
+select '-';
+select (1,2) as x, ((1,2),(3,4)) as y, 1 in x, x in y;
debian/pbuilder-hooks/A00ccache (vendored)
@ -12,5 +12,6 @@ if [ -n "$CCACHE_DIR" ]; then
     chmod -R a+rwx $CCACHE_DIR || true
 fi

+df -h
 ccache --show-stats
 ccache -M ${CCACHE_SIZE:=32G}
@ -657,6 +657,16 @@ The uncompressed cache is advantageous for very short queries in individual cases.
 <uncompressed_cache_size>8589934592</uncompressed_cache_size>
 ```
+
+## user_files_path
+
+The directory with user files. Used in the [file()](../../table_functions/file.md#table_functions-file) table function.
+
+**Example**
+
+```xml
+<user_files_path>/var/lib/clickhouse/user_files/</user_files_path>
+```
+
 <a name="server_settings-users_config"></a>

 ## users_config
@ -4,13 +4,16 @@

 The MySQL engine allows you to perform SELECT queries on data that is stored on a remote MySQL server.

-The engine takes 4 parameters: the server address (host and port); the name of the database; the name of the table; the user's name; the user's password. Example:
+The engine takes 5-7 parameters: the server address (host and port); the name of the database; the name of the table; the user's name; the user's password; whether to use a REPLACE query; the ON DUPLICATE clause. Example:

 ```text
-MySQL('host:port', 'database', 'table', 'user', 'password');
+MySQL('host:port', 'database', 'table', 'user', 'password'[, replace_query, 'on_duplicate_clause']);
 ```

 At this time, simple WHERE clauses such as ```=, !=, >, >=, <, <=``` are executed on the MySQL server.

 The rest of the conditions and the LIMIT sampling constraint are executed in ClickHouse only after the query to MySQL finishes.

+If `replace_query` is set to 1, an `INSERT INTO` query to this table is transformed to `REPLACE INTO`.
+If `on_duplicate_clause` is specified, e.g. `update impression = values(impression) + impression`, it is appended to the end of the MySQL INSERT statement.
+Note that only one of `replace_query` and `on_duplicate_clause` can be specified, or neither of them.
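As an illustration of the two new parameters, here is a sketch grounded in the integration test added by this commit (host, database, table, and credentials are the test fixtures, not production values):

```sql
-- REPLACE variant: re-inserting the same keys overwrites rows instead of duplicating them
CREATE TABLE mysql_replace (id UInt32, name String, age UInt32, money UInt32)
ENGINE = MySQL('mysql1:3306', 'clickhouse', 'some_table', 'root', 'clickhouse', 1);

-- ON DUPLICATE variant: accumulate money on key collisions
CREATE TABLE mysql_upsert (id UInt32, name String, age UInt32, money UInt32)
ENGINE = MySQL('mysql1:3306', 'clickhouse', 'some_table', 'root', 'clickhouse', 0, 'update money = money + values(money)');
```

The test `test_insert_on_duplicate_select` inserts the same 10000 keys twice through the second table and observes the summed `money` column double, confirming the clause is forwarded to MySQL.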
docs/en/table_functions/file.md (new file, 18 lines)
@ -0,0 +1,18 @@
+<a name="table_functions-file"></a>
+
+# file
+
+`file(path, format, structure)` - returns a table created from the file at `path`, parsed according to `format`, with the columns specified in `structure`.
+
+path - a relative path to the file, resolved from [user_files_path](../operations/server_settings/settings.md#user_files_path).
+
+format - the file [format](../formats/index.md).
+
+structure - the table structure in the form 'UserID UInt64, URL String'. Determines column names and types.
+
+**Example**
+
+```sql
+-- getting the first 10 lines of a table that contains 3 columns of UInt32 type from a CSV file
+SELECT * FROM file('test.csv', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32') LIMIT 10
+```
@ -122,9 +122,10 @@ pages:

 - 'Table functions':
     - 'Introduction': 'table_functions/index.md'
-    - 'remote': 'table_functions/remote.md'
+    - 'file': 'table_functions/file.md'
     - 'merge': 'table_functions/merge.md'
     - 'numbers': 'table_functions/numbers.md'
+    - 'remote': 'table_functions/remote.md'

 - 'Formats':
     - 'Introduction': 'formats/index.md'
@ -122,9 +122,10 @@ pages:

 - 'Table functions':
     - 'Introduction': 'table_functions/index.md'
-    - 'remote': 'table_functions/remote.md'
+    - 'file': 'table_functions/file.md'
     - 'merge': 'table_functions/merge.md'
     - 'numbers': 'table_functions/numbers.md'
+    - 'remote': 'table_functions/remote.md'

 - 'Formats':
     - 'Introduction': 'formats/index.md'
@ -660,6 +660,16 @@ ClickHouse will check the min_part_size and min_part_size_ratio conditions
 <uncompressed_cache_size>8589934592</uncompressed_cache_size>
 ```
+
+## user_files_path
+
+The directory with user files. Used in the [file()](../../table_functions/file.md#table_functions-file) table function.
+
+**Example**
+
+```xml
+<user_files_path>/var/lib/clickhouse/user_files/</user_files_path>
+```
+
 <a name="server_settings-users_config"></a>

 ## users_config
docs/ru/table_functions/file.md (new file, 18 lines)
@ -0,0 +1,18 @@
+<a name="table_functions-file"></a>
+
+# file
+
+`file(path, format, structure)` - returns a table with the columns specified in `structure`, created from the file at `path` of type `format`.
+
+path - a relative path to the file, resolved from [user_files_path](../operations/server_settings/settings.md#user_files_path).
+
+format - the [format](../formats/index.md) of the file.
+
+structure - the table structure in the form 'UserID UInt64, URL String'. Determines column names and types.
+
+**Example**
+
+```sql
+-- getting the first 10 rows of a table consisting of three UInt32 columns from a CSV file
+SELECT * FROM file('test.csv', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32') LIMIT 10
+```
@ -17,4 +17,4 @@ endif ()
 target_include_directories (daemon PUBLIC include)
 target_include_directories (daemon PRIVATE ${ClickHouse_SOURCE_DIR}/libs/libpocoext/include)

-target_link_libraries (daemon clickhouse_common_io clickhouse_common_config ${EXECINFO_LIBRARY})
+target_link_libraries (daemon clickhouse_common_io clickhouse_common_config ${EXECINFO_LIBRARY} ${ELF_LIBRARY})
@ -700,7 +700,7 @@ void BaseDaemon::buildLoggers(Poco::Util::AbstractConfiguration & config)
         return;
     config_logger = current_logger;

-    bool is_daemon = config.getBool("application.runAsDaemon", true);
+    bool is_daemon = config.getBool("application.runAsDaemon", false);

     // Split log and error log.
     Poco::AutoPtr<SplitterChannel> split = new SplitterChannel;
@ -883,7 +883,7 @@ void BaseDaemon::initialize(Application & self)
         config().add(map_config, PRIO_APPLICATION - 100);
     }

-    bool is_daemon = config().getBool("application.runAsDaemon", true);
+    bool is_daemon = config().getBool("application.runAsDaemon", false);

     if (is_daemon)
     {
@ -943,28 +943,29 @@ void BaseDaemon::initialize(Application & self)
     if (!log_path.empty())
        log_path = Poco::Path(log_path).setFileName("").toString();

-    if (is_daemon)
-    {
-    /** Redirect stdout, stderr to separate files in the log directory.
+    /** Redirect stdout, stderr to separate files in the log directory (or in the specified file).
      * Some libraries write to stderr in case of errors in debug mode,
      * and this output makes sense even if the program is run in daemon mode.
      * We have to do it before buildLoggers, for errors on logger initialization will be written to these files.
+     * If logger.stderr is specified then stderr will be forcibly redirected to that file.
      */
-    if (!log_path.empty())
+    if ((!log_path.empty() && is_daemon) || config().has("logger.stderr"))
     {
-        std::string stdout_path = log_path + "/stdout";
-        if (!freopen(stdout_path.c_str(), "a+", stdout))
-            throw Poco::OpenFileException("Cannot attach stdout to " + stdout_path);
-
-        std::string stderr_path = log_path + "/stderr";
+        std::string stderr_path = config().getString("logger.stderr", log_path + "/stderr");
         if (!freopen(stderr_path.c_str(), "a+", stderr))
             throw Poco::OpenFileException("Cannot attach stderr to " + stderr_path);
     }

+    if ((!log_path.empty() && is_daemon) || config().has("logger.stdout"))
+    {
+        std::string stdout_path = config().getString("logger.stdout", log_path + "/stdout");
+        if (!freopen(stdout_path.c_str(), "a+", stdout))
+            throw Poco::OpenFileException("Cannot attach stdout to " + stdout_path);
+    }
+
     /// Create pid file.
     if (is_daemon && config().has("pid"))
         pid.seed(config().getString("pid"));
-    }

     /// Change path for logging.
     if (!log_path.empty())
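To illustrate the new `logger.stderr`/`logger.stdout` keys read above via `config().getString(...)`, here is a config sketch modeled on the copier logger config at the top of this diff (the paths are illustrative, not defaults):

```xml
<yandex>
    <logger>
        <log>/var/log/clickhouse-server/clickhouse-server.log</log>
        <errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
        <!-- With this change, specifying these keys forces stderr/stdout
             redirection to the given files even when not running as a daemon -->
        <stderr>/var/log/clickhouse-server/stderr</stderr>
        <stdout>/var/log/clickhouse-server/stdout</stdout>
    </logger>
</yandex>
```

When the keys are absent, redirection still happens only for daemon mode, falling back to `<log_path>/stderr` and `<log_path>/stdout`.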