First, a caveat: I managed to break my Ubuntu 14.04 LTS beyond recovery with the native Nvidia drivers and had to reinstall from scratch.
At least if you run a dual-GPU setup like my mobile workstation's (Intel HD4600 and Nvidia Quadro K1100M), install the Nvidia CUDA package from the Nvidia repository on a fresh Ubuntu installation - and once it works, don't touch it again.
When it works, it works well; you can even switch between Intel and Nvidia graphics without rebooting.
OK, here is the output of deviceQuery:
Device 0: "Quadro K1100M"
CUDA Driver Version / Runtime Version 7.5 / 7.5
CUDA Capability Major/Minor version number: 3.0
Total amount of global memory: 2048 MBytes (2147352576 bytes)
( 2) Multiprocessors, (192) CUDA Cores/MP: 384 CUDA Cores
GPU Max Clock rate: 706 MHz (0.71 GHz)
Memory Clock rate: 1400 Mhz
Memory Bus Width: 128-bit
L2 Cache Size: 262144 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 1 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 7.5, CUDA Runtime Version = 7.5, NumDevs = 1,
Device0 = Quadro K1100M
Result = PASS
LOG: CUDA Runtime version: 7.5.0
LOG: NVIDIA driver version: 352.39
LOG: GPU0 Quadro K1100M (384 CUDA cores, 705MHz), L2 256KB, RAM 2047MB (128bits, 1400MHz), capability 3.0
LOG: NVRTC - CUDA Runtime Compilation vertion 7.5
Now the tests:
CREATE TABLE t_test AS
SELECT x, 'a'::char(100) AS y, 'b'::char(100) AS z
FROM generate_series(1, 5000000) AS x
ORDER BY random();
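Before benchmarking, a quick sanity check that the table has the expected bulk doesn't hurt - with two char(100) columns and 5 million rows it should come out at roughly 1 GB on disk. This is plain PostgreSQL, nothing pg_strom-specific:

-- Check row count and on-disk size of the test table
SELECT count(*) AS row_count,
       pg_size_pretty(pg_total_relation_size('t_test')) AS total_size
FROM t_test;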
SET pg_strom.enabled = OFF;
EXPLAIN ANALYZE SELECT count(*)
FROM t_test
WHERE sqrt(x) > 0
GROUP BY y;
HashAggregate (cost=242892.34..242892.35 rows=1 width=101) (actual time=2550.064..2550.064 rows=1 loops=1)
Group Key: y
-> Seq Scan on t_test (cost=0.00..234559.00 rows=1666667 width=101) (actual time=0.016..779.110 rows=5000000 loops=1)
Filter: (sqrt((x)::double precision) > '0'::double precision)
Planning time: 0.104 ms
Execution time: 2550.131 ms
SET pg_strom.enabled = ON;
EXPLAIN ANALYZE SELECT count(*)
FROM t_test
WHERE sqrt(x) > 0
GROUP BY y;
HashAggregate (cost=177230.88..177230.89 rows=1 width=101) (actual time=25393.766..25393.767 rows=1 loops=1)
Group Key: y
-> Custom Scan (GpuPreAgg) (cost=13929.24..173681.39 rows=260 width=408) (actual time=348.584..25393.123 rows=76 loops=1)
Bulkload: On (density: 100.00%)
Reduction: Local + Global
Device Filter: (sqrt((x)::double precision) > '0'::double precision)
-> Custom Scan (BulkScan) on t_test (cost=9929.24..168897.54 rows=5000000 width=101) (actual time=4.336..628.920 rows=5000000 loops=1)
Planning time: 0.330 ms
Execution time: 25488.189 ms
Whoa, pg_strom is 10x slower for me. Why? I don't know.
It could be a driver issue, because I see heavy CPU spikes during the query - up to 100% on some cores - and the whole system becomes noticeably unresponsive. My driver version is 352.39 instead of the 352.30 used in the original test.
It could also be that the original test paired a comparatively weak CPU (an unspecified 4 GHz AMD, presumably an FX-8350) with a powerful GPU (Nvidia GTX 970, 4 GB), while my test pairs a comparatively powerful CPU (Intel Core i7-4700MQ) with a weak GPU (Nvidia Quadro K1100M, 2 GB).
But does that explain the CPU spikes? We are probably looking at suboptimal host<->device memory transfers here: the GTX 970 not only has double the memory, it also has double the bus width.
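If transfers are really the bottleneck, shrinking the rows should narrow the gap. A sketch of such a follow-up experiment - t_test_narrow is a name I made up for illustration, it is not part of the original benchmark:

-- A narrow copy: same 5 million rows, but without the char(100)
-- padding, so far less data has to be shipped to the GPU
CREATE TABLE t_test_narrow AS
SELECT x, 'a'::char(1) AS y
FROM generate_series(1, 5000000) AS x
ORDER BY random();

SET pg_strom.enabled = ON;
EXPLAIN ANALYZE SELECT count(*)
FROM t_test_narrow
WHERE sqrt(x) > 0
GROUP BY y;

If the GPU plan speeds up disproportionately on the narrow table, the transfer path is the prime suspect.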
I second that we might be seeing the future here.
UPDATE: Please see the next post also.
Looks good to me. 4360.404 ms vs 2165.910 ms
wocuda=# EXPLAIN ANALYZE SELECT count(*)
FROM t_test
WHERE sqrt(x) > 0
GROUP BY y;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------
HashAggregate (cost=242892.45..242892.46 rows=1 width=101) (actual time=4360.312..4360.312 rows=1 loops=1)
Group Key: y
-> Seq Scan on t_test (cost=0.00..234559.11 rows=1666669 width=101) (actual time=4.197..1791.154 rows=5000000 loops=1)
Filter: (sqrt((x)::double precision) > '0'::double precision)
Planning time: 0.134 ms
Execution time: 4360.404 ms
(6 rows)
wocuda=# SET pg_strom.enabled = ON;
SET
wocuda=# EXPLAIN ANALYZE SELECT count(*)
FROM t_test
WHERE sqrt(x) > 0
GROUP BY y;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------
HashAggregate (cost=177230.91..177230.92 rows=1 width=101) (actual time=2006.707..2006.707 rows=1 loops=1)
Group Key: y
-> Custom Scan (GpuPreAgg) (cost=13929.24..173681.41 rows=260 width=408) (actual time=997.161..2005.989 rows=76 loops=1)
Bulkload: On (density: 100.00%)
Reduction: Local + Global
Device Filter: (sqrt((x)::double precision) > '0'::double precision)
-> Custom Scan (BulkScan) on t_test (cost=9929.24..168897.56 rows=5000006 width=101) (actual time=22.665..1975.907 rows=5000000 loops=1)
Planning time: 0.434 ms
Execution time: 2165.910 ms
(9 rows)
Running on:
Gentoo
PostgreSQL 9.5alpha1
CUDA Runtime version: 7.5.0
NVIDIA driver version: 355.11
GPU0 GeForce GTX 960 (1024 CUDA cores, 1278MHz), L2 1024KB, RAM 4095MB (128bits, 3505MHz), capability 5.2
NVRTC - CUDA Runtime Compilation vertion 7.5
AMD Phenom(tm) II X6 1100T Processor
Yes, it's a known issue with my type of graphics card. See the answer by the original author. Your GTX 960 has much more memory bandwidth. And your driver is the latest. After doing some desk research on CUDA and high CPU load, I still have a feeling that the driver has a say in this too.
It's a known issue, and I'm now working on it.
Your GPU (Quadro K1100M) has relatively low memory performance (44.8GB/sec bandwidth); on the other hand, the workload is very memory intensive - GpuPreAgg heavily uses atomic operations.
In addition, the grouping key distribution is the worst case, because the "y" column is 'a' in every row. That means every GPU kernel thread tries to perform its atomic operation on one particular item.
Thank you for addressing the issue so fast.
After doing some desk research on CUDA and high CPU load, I still have a feeling that the driver has a say in this too, but I'm stuck with 352 for the moment. 355 messes up my installation.
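To close with an illustration of KaiGai's explanation: with a single distinct value in y, every GPU thread fights over one aggregate slot, so spreading the grouping keys should relieve the atomic contention. A hypothetical variant of the test table (t_test_spread is my name, not from the original benchmark):

-- 26 distinct group keys ('a'..'z') instead of a single one,
-- so the atomic updates spread across 26 aggregate slots
CREATE TABLE t_test_spread AS
SELECT x,
       chr(97 + (x % 26))::char(100) AS y,
       'b'::char(100) AS z
FROM generate_series(1, 5000000) AS x
ORDER BY random();

SET pg_strom.enabled = ON;
EXPLAIN ANALYZE SELECT count(*)
FROM t_test_spread
WHERE sqrt(x) > 0
GROUP BY y;

If atomic contention is the culprit, the GpuPreAgg step should fare much better here than on the original single-key table.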