Fix typo in readme
Commit 11da23f153 (parent dd07029050)
@@ -3,7 +3,7 @@ Motivation
 For reproducible build, we need to control, what exactly version of boost we build,
 because different versions of boost obviously have slightly different behaviour.
-You may already have installed arbitary version of boost on your system, to build another projects.
+You may already have installed arbitrary version of boost on your system, to build another projects.

 We need to have all libraries with C++ interface to be located in tree and to be build together.
 This is needed to allow quickly changing build options, that could introduce changes in ABI of that libraries.

@@ -1,7 +1,7 @@
 This directory contains several hash-map implementations, similar in
 API to SGI's hash_map class, but with different performance
 characteristics. sparse_hash_map uses very little space overhead, 1-2
-bits per entry. dense_hash_map is very fast, particulary on lookup.
+bits per entry. dense_hash_map is very fast, particularly on lookup.
 (sparse_hash_set and dense_hash_set are the set versions of these
 routines.) On the other hand, these classes have requirements that
 may not make them appropriate for all applications.
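For context on the hunk above, here is a minimal usage sketch of dense_hash_map. It assumes the upstream sparsehash API; the include path and the empty-key requirement come from the library's own documentation, not from this diff, and may differ in the vendored copy.

#include <iostream>
#include <string>
#include <sparsehash/dense_hash_map>    // include path may be <google/dense_hash_map> in older copies

int main()
{
    google::dense_hash_map<std::string, int> counts;

    // dense_hash_map requires a dedicated "empty" key before any insertion
    // (and a "deleted" key before erase()); this is one of the requirements
    // alluded to at the end of the hunk above.
    counts.set_empty_key(std::string());

    counts["lookup"] = 1;
    counts["insert"] = 2;

    for (const auto & kv : counts)
        std::cout << kv.first << " -> " << kv.second << "\n";

    return 0;
}

The mandatory empty key (and deleted key, if erasure is needed) is exactly the kind of requirement that can make these classes unsuitable for some applications, as the last line of the hunk notes.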
@@ -5,8 +5,8 @@ was possible segfaults or another faults in ODBC implementations, which can
 crash whole clickhouse-server process.

 This tool works via HTTP, not via pipes, shared memory, or TCP because:
-- It's simplier to implement
-- It's simplier to debug
+- It's simpler to implement
+- It's simpler to debug
 - jdbc-bridge can be implemented in the same way

 ## Usage
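Because the bridge speaks plain HTTP, any HTTP client can talk to it. Below is a hedged sketch using Poco, which ClickHouse already bundles; the port 9018 and the /ping path are illustrative assumptions, and the actual values are given in the Usage section that follows this hunk in the file.

#include <iostream>
#include <Poco/Net/HTTPClientSession.h>
#include <Poco/Net/HTTPRequest.h>
#include <Poco/Net/HTTPResponse.h>
#include <Poco/StreamCopier.h>

int main()
{
    // Host, port and path are placeholders; the real ones are documented in "Usage".
    Poco::Net::HTTPClientSession session("localhost", 9018);

    Poco::Net::HTTPRequest request(Poco::Net::HTTPRequest::HTTP_GET, "/ping");
    session.sendRequest(request);

    Poco::Net::HTTPResponse response;
    std::istream & body = session.receiveResponse(response);

    std::cout << "HTTP " << response.getStatus() << "\n";
    Poco::StreamCopier::copyStream(body, std::cout);
    return 0;
}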
@@ -20,7 +20,7 @@ Reference

 [Using Block Prefetch for Optimized Memory Performance](http://files.rsdn.ru/23380/AMD_block_prefetch_paper.pdf)

-The artical only focused on aligned huge memory copy. You need handle other conditions by your self.
+The article only focused on aligned huge memory copy. You need handle other conditions by your self.


 Results
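The block-prefetch idea from the referenced paper is to pull a large source block into cache first, then write it out with non-temporal (streaming) stores so the destination does not pollute the cache. A minimal sketch of that pattern for 16-byte-aligned buffers follows; it only illustrates the technique and is not the benchmark code from this directory.

#include <cstddef>
#include <immintrin.h>

// Block-prefetch copy for buffers that are 16-byte aligned and whose size is a
// multiple of the block size; unaligned heads/tails are deliberately omitted.
void block_prefetch_copy(char * dst, const char * src, std::size_t size)
{
    constexpr std::size_t block = 4096;     // size of one prefetched block
    constexpr std::size_t cache_line = 64;

    for (std::size_t offset = 0; offset < size; offset += block)
    {
        // Phase 1: pull the whole source block into cache ahead of the copy loop.
        for (std::size_t i = 0; i < block; i += cache_line)
            _mm_prefetch(src + offset + i, _MM_HINT_NTA);

        // Phase 2: copy the block with non-temporal stores so the destination
        // does not evict useful data from the cache.
        for (std::size_t i = 0; i < block; i += 16)
        {
            __m128i chunk = _mm_load_si128(reinterpret_cast<const __m128i *>(src + offset + i));
            _mm_stream_si128(reinterpret_cast<__m128i *>(dst + offset + i), chunk);
        }
    }

    _mm_sfence();   // make the streaming stores visible before the buffer is used
}

Handling unaligned pointers, short copies, and sizes that are not a multiple of the block is left out here; that is what "You need handle other conditions by your self" refers to.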