Thursday, September 17, 2015

pg_strom - The road ahead!

OK, since my last post, strange things have happened.

I reinstalled CUDA from a binary package instead of installing it directly over the Internet from the Nvidia repository. According to the version numbers, they should be identical.


SET pg_strom.enabled = ON;

EXPLAIN ANALYZE
SELECT count(*)
FROM t_test
WHERE sqrt(x) > 0
GROUP BY y;


HashAggregate  (cost=177230.88..177230.89 rows=1 width=101) (actual time=2694.635..2694.635 rows=1 loops=1)
  Group Key: y
  ->  Custom Scan (GpuPreAgg)  (cost=13929.24..173681.39 rows=260 width=408) (actual time=235.256..2694.053 rows=76 loops=1)
        Bulkload: On (density: 100.00%)
        Reduction: Local + Global
        Device Filter: (sqrt((x)::double precision) > '0'::double precision)
        ->  Custom Scan (BulkScan) on t_test  (cost=9929.24..168897.54 rows=5000000 width=101) (actual time=3.946..599.690 rows=5000000 loops=1)
Planning time: 0.110 ms
Execution time: 2749.609 ms

pg_strom now works as expected! So it was a driver issue.

It's still 7% slower than the CPU-only run on my machine, but that is explained by the weak GPU.

Monday, September 14, 2015

pg_strom - The rough road ahead

Inspired by those articles on pg_strom, the CUDA-based GPGPU accelerator for PostgreSQL, I decided to repeat the tests.

First a caveat. I managed to destroy my Ubuntu 14.04 LTS beyond recoverability with the native Nvidia drivers and had to reinstall from scratch.

At least if you run a dual-GPU setup like the one in my mobile workstation (Intel HD4600 and Nvidia Quadro K1100M), install the Nvidia CUDA package from the Nvidia repository on a fresh Ubuntu installation - and if it works, don't touch it again.

If it works, it works well; you can even switch between Intel and Nvidia graphics without rebooting.

OK, here is the output of deviceQuery:

Device 0: "Quadro K1100M"
  CUDA Driver Version / Runtime Version          7.5 / 7.5
  CUDA Capability Major/Minor version number:    3.0
  Total amount of global memory:                 2048 MBytes (2147352576 bytes)
  ( 2) Multiprocessors, (192) CUDA Cores/MP:     384 CUDA Cores
  GPU Max Clock rate:                            706 MHz (0.71 GHz)
  Memory Clock rate:                             1400 Mhz
  Memory Bus Width:                              128-bit
  L2 Cache Size:                                 262144 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 7.5, CUDA Runtime Version = 7.5, NumDevs = 1, Device0 = Quadro K1100M

Result = PASS

PostgreSQL says:

LOG:  CUDA Runtime version: 7.5.0
LOG:  NVIDIA driver version: 352.39
LOG:  GPU0 Quadro K1100M (384 CUDA cores, 705MHz), L2 256KB, RAM 2047MB (128bits, 1400MHz), capability 3.0
LOG:  NVRTC - CUDA Runtime Compilation vertion 7.5
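
For reference, these LOG lines only show up once pg_strom has been preloaded at server start. A minimal sketch of that setting (one way to do it; editing shared_preload_libraries in postgresql.conf works just as well, and a restart is needed either way):

ALTER SYSTEM SET shared_preload_libraries = 'pg_strom';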

Now the tests:

CREATE TABLE t_test AS
SELECT x, 'a'::char(100) AS y, 'b'::char(100) AS z
FROM generate_series(1, 5000000) AS x
ORDER BY random();

SET pg_strom.enabled = OFF;

EXPLAIN ANALYZE
SELECT count(*)
FROM   t_test
WHERE  sqrt(x) > 0
GROUP BY y;


HashAggregate  (cost=242892.34..242892.35 rows=1 width=101) (actual time=2550.064..2550.064 rows=1 loops=1)
  Group Key: y
  ->  Seq Scan on t_test  (cost=0.00..234559.00 rows=1666667 width=101) (actual time=0.016..779.110 rows=5000000 loops=1)
        Filter: (sqrt((x)::double precision) > '0'::double precision)
Planning time: 0.104 ms
Execution time: 2550.131 ms

SET pg_strom.enabled = ON;

EXPLAIN ANALYZE
SELECT count(*)
FROM   t_test
WHERE  sqrt(x) > 0
GROUP BY y;


HashAggregate  (cost=177230.88..177230.89 rows=1 width=101) (actual time=25393.766..25393.767 rows=1 loops=1)
  Group Key: y
  ->  Custom Scan (GpuPreAgg)  (cost=13929.24..173681.39 rows=260 width=408) (actual time=348.584..25393.123 rows=76 loops=1)
        Bulkload: On (density: 100.00%)
        Reduction: Local + Global
        Device Filter: (sqrt((x)::double precision) > '0'::double precision)
        ->  Custom Scan (BulkScan) on t_test  (cost=9929.24..168897.54 rows=5000000 width=101) (actual time=4.336..628.920 rows=5000000 loops=1)
Planning time: 0.330 ms
Execution time: 25488.189 ms

Whoa, pg_strom is 10x slower for me. Why? I don't know.

It could be a driver issue, because I see heavy CPU spikes during the query - up to 100% on some cores. The whole system becomes tangibly unresponsive. My driver version is 352.39 instead of 352.30.

It could also be that the original test paired a comparatively weak CPU (an unspecified 4 GHz AMD; I therefore assume an FX8350) with a powerful GPU (Nvidia GTX970, 4GB), while my test used a comparatively powerful CPU (Intel Core i7-4700MQ) and a weak GPU (Nvidia Quadro K1100M, 2GB).

But does that explain the CPU spikes? Well, we are probably seeing suboptimal host<->device memory transfers here: the GTX970 not only has double the memory, it also has double the bus width.
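
As a back-of-the-envelope check (assuming two transfers per memory clock, which may not match the effective GDDR5 rate exactly), the deviceQuery numbers above put the K1100M at roughly 128 bit / 8 * 1400 MHz * 2 ≈ 44.8 GB/s of theoretical memory bandwidth - half of what a 256-bit card would reach at the same clock, before even considering the faster memory of a GTX970.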

I second the notion that we might be seeing the future here, but it's not ready for prime time yet...

UPDATE: Please see the next post also.

Thursday, September 10, 2015

Finally - Cilk(TM) Plus in gcc 5.1

gcc 5.1 now has Cilk(TM) Plus support.

Thread-level parallelism and vectorization in 27 lines of C++:

#include <iostream>
#include <cilk/cilk.h>

using namespace std;

int main()
{
    int a[8];
    int b[8];

    // Array notation: set the first four elements to 1, the last four to -1
    b[0:4] = 1;
    b[4:4] = -1;

    // Thread-level parallelism: iterations may run on several worker threads
    cilk_for (int i = 0; i < 8; ++i)
        a[i] = i*i;

    // Vectorization: elementwise multiplication in array notation
    a[:] = a[:] * b[:];

    for (int i = 0; i < 8; ++i)
        cout << a[i] << endl;

    return 0;
}

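Built with something like g++ -fcilkplus cilkdemo.cpp -lcilkrts (the file name is mine), it should print the squares 0, 1, 4, 9 followed by -16, -25, -36, -49, one number per line - the first half multiplied by 1, the second half by -1.
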
I really, really love this...

Tuesday, August 25, 2015

The 'other' cloud - parasitic storage as a service

Since some NoSQL products by default ship with no security at all, this was no real surprise after MongoDB, but the magnitude is astounding.

How about using this for something useful?
  1. Scan the Internet for known products/servers that allow unconditional read/write access
  2. Write storage adapters
  3. Invent a mechanism to store data encrypted and with some redundancy, in case someone gets a wake-up call
  4. Invent a mechanism to rebalance storage if servers become unavailable or new ones are added to the list of storage nodes
  5. Build a service around 1, 2, 3, and 4
There it is, the 'other' cloud, at almost no cost except bandwidth...

But of course, this is purely fictional. The bad guys don't have good ideas and the good guys won't do it. There is no illegal data stored in your session cache.

Just keep ignoring the fine manuals and carry on. Nobody needs database administrators, everybody knows that...

Tuesday, August 18, 2015

So, what is TABLESAMPLE actually good for?

Q: So, what is TABLESAMPLE actually good for?

A: To speed things up that can be answered from a small sample instead of the whole population!

Like so:

CREATE TABLE public.ts_test
(
  id integer NOT NULL DEFAULT nextval('ts_test_id_seq'::regclass),
  val double precision,
  CONSTRAINT ts_test_pkey PRIMARY KEY (id)
);

Insert 100 million random values in the range 0.0 <= x < 1.0:

INSERT INTO public.ts_test (val)
    SELECT random() FROM generate_series(1, 100000000);

And now, some queries to get the standard deviation.

For the whole population:

SELECT round(stddev_pop(val)::numeric,5) FROM ts_test;

0.28869 in 15005 ms
Error: 0%
Speedup: 1x

For a sample of roughly 20,000 rows, i.e. 0.02% of the 100 million (good for 1% error and 99% confidence):

SELECT round(stddev_samp(val)::numeric,5) FROM ts_test TABLESAMPLE BERNOULLI(0.02);

0.28867 in 2387 ms
Error: < 1%
Speedup: 6.3x

SELECT round(stddev_samp(val)::numeric,5) FROM ts_test TABLESAMPLE SYSTEM(0.02);

0.28791 in 25 ms
Error: < 1%
Speedup: 600x
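
By the way, if you want to aim for a particular sample size instead of guessing the percentage, here is a quick sketch (assuming reasonably fresh statistics in pg_class) that derives it from the planner's row estimate:

SELECT 100.0 * 20000 / reltuples AS sample_percent
FROM pg_class
WHERE oid = 'public.ts_test'::regclass;

The result - about 0.02 here - is what goes into BERNOULLI() or SYSTEM().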

With a table that large, both methods BERNOULLI and SYSTEM are accurate enough to give results well below the 1% error target.

(For potential problems with SYSTEM, see this post)

SYSTEM aces BERNOULLI with a speedup of 600x vs. 6.3x relative to a full-population query, because it samples whole pages instead of scanning the entire table row by row.

And this is hard to beat...

Monday, August 17, 2015

ToroDB up and running in five simple steps

A working ToroDB installation in five steps.

You'll need:

ToroDB from 8Kdata:

A working PostgreSQL instance, 9.4 or better:

A suitable JRE, version 6 or higher. It has been mainly tested with Oracle JRE 8.

The mongo shell and mongoimport from MongoDB:

Some test data from citusdata:

"Hitman" Kevin MacLeod from
Licensed under Creative Commons: By Attribution 3.0

Tuesday, August 4, 2015

...gently down the stream

So, for the test I changed the location to San Francisco and after about one hour the table tstest.tstream contains 3705 tweets.

select tag, count(*) as tag_count
from tstest.tagstream
group by tag
order by tag_count desc
limit 10;



So there is something going on on MTV in the Bay Area. :-)

In a production system, I probably would write a background worker for stream injection instead of an external job.

And following discussions on Database Soup, it could indeed be interesting to configure the stream server for maximum throughput, sacrificing durability for speed, and then pull any data that should be preserved over a postgres_fdw link into another server with a more conservative setup.
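
A minimal sketch of that idea, with made-up names throughout: the conservative archive server mounts the stream table via postgres_fdw and periodically copies over whatever is new.

-- On the archive server (all object names are hypothetical)
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER stream_server
  FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'streambox', dbname 'streams');

CREATE USER MAPPING FOR CURRENT_USER
  SERVER stream_server
  OPTIONS (user 'streamreader', password 'secret');

CREATE FOREIGN TABLE tstest.tstream_remote (
  rec    timestamp with time zone NOT NULL,
  tevent jsonb NOT NULL
)
SERVER stream_server
OPTIONS (schema_name 'tstest', table_name 'tstream');

-- Pull everything newer than what has already been archived
INSERT INTO archive.tweets (rec, tevent)
SELECT rec, tevent
FROM tstest.tstream_remote
WHERE rec > (SELECT coalesce(max(rec), '-infinity') FROM archive.tweets);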

BTW: If you plan to keep way more data in the ringbuffer to dig further into history occasionally, this would be a perfect case for using a long-tailed table.

Row, row, row your boat...

In the announcement of PipelineDB, Josh Berkus gave a hint that - and how - similar functionality might be achieved with stock PostgreSQL. Let's see if we can analyze a continuous Twitter stream too...

First, we need a fast table to act as the ringbuffer:

CREATE TABLE tstest.tstream
(
  rec timestamp with time zone NOT NULL DEFAULT clock_timestamp(),
  tevent jsonb NOT NULL,
  CONSTRAINT pk_tstream PRIMARY KEY (rec)
);

To inject new tweets into the buffer and trim the outdated ones, I used this little injector, written in Java using the hosebird client and the example code provided there.

Once it runs in its infinite loop, the table is populated with new tweets, while tweets older than one hour are evicted every five minutes, courtesy of a fixed-rate timer task. So, in the worst case, the buffer table holds one hour and five minutes' worth of tweets.
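
The eviction itself boils down to a single statement; a sketch of what the timer task might execute (the actual call lives in the Java injector):

DELETE FROM tstest.tstream
WHERE rec < (clock_timestamp() - interval '1 hour');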

The analysis view is taken verbatim from the PipelineDB example:

  CREATE OR REPLACE VIEW tstest.tagstream as
    SELECT jsonb_array_elements(tevent #>
      ARRAY['entities','hashtags']) ->> 'text' AS tag
    FROM tstest.tstream
    WHERE rec >
          ( clock_timestamp() - interval '1 hour' );

Plus the missing ')' before ->> 'text'. :-)

As is the most popular tags query:

select tag, count(*) as tag_count
from tstest.tagstream
group by tag
order by tag_count desc
limit 10;

Does it work? Yes.

Since I don't have access to my development machine right now, stay tuned for Part 2 with some results.