Monday, October 20, 2014

pgconf.eu 2014

On the worker nodes:

CREATE TABLE kjb
(
  line text
);



CREATE OR REPLACE FUNCTION map_kjb()
  RETURNS SETOF record AS
$BODY$
DECLARE
  r record;
BEGIN
  FOR r IN (SELECT TRIM(both FROM word), count(1)
            FROM (SELECT regexp_split_to_table(line, E'\\W+') AS word
                  FROM kjb) w
            GROUP BY word) LOOP
    RETURN NEXT r;
  END LOOP;
  RETURN;
END;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 1000
  ROWS 1000;
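
The worker function can be tested directly on a worker node. Since it returns SETOF record, the call needs a column definition list:

-- Direct test on a worker node; the column definition list is
-- required because map_kjb() returns SETOF record.
SELECT * FROM map_kjb() AS (word text, count bigint);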


On the head node:

CREATE TYPE t_mr_wc AS (word text, count bigint);

CREATE OR REPLACE FUNCTION map_kjb()
  RETURNS SETOF record AS
$BODY$
CLUSTER 'head';
RUN ON ALL;
$BODY$
  LANGUAGE plproxy VOLATILE
  COST 1000
  ROWS 1000;


CREATE OR REPLACE FUNCTION reduce_kjb()
  RETURNS SETOF t_mr_wc AS
$BODY$
DECLARE
  result public.t_mr_wc;
BEGIN
  FOR result IN
    SELECT word, sum(count)
    FROM map_kjb() AS (word text, count bigint)
    WHERE word != ''
    GROUP BY word
  LOOP
    RETURN NEXT result;
  END LOOP;
  RETURN;
END;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100
  ROWS 100;


1. Load data into tables.
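
This could look as follows, assuming each worker node holds its share of the input text in a local file (the path is hypothetical):

-- Run on each worker node; loads one line of text per row.
-- Note: with COPY's default text format, tabs and backslashes
-- in the input would have to be escaped.
COPY kjb (line) FROM '/tmp/kjb_part_1.txt';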

2. Get word count:

SELECT * FROM reduce_kjb();

3. Get top 21 word counts: 

SELECT * FROM reduce_kjb() ORDER BY 2 DESC LIMIT 21; 

Wednesday, September 3, 2014

Using Code::Blocks with the Intel C/C++ compiler on Windows in five minutes

Code::Blocks apparently does not support the latest Intel C/C++ compiler out of the box. Trying to set up the necessary include paths for both the Visual Studio compiler and the Intel compiler leads to madness.

But there is an easier way:

Create two Intel compiler entries in Code::Blocks, one for the 32 bit and one for the 64 bit version, each with the correct base directory, e.g. C:\Program Files (x86)\Intel\Composer XE\bin.
The 32 bit toolchain executables are prefixed with ia32\, the 64 bit ones with intel64\.

Once the Intel compiler is installed, the start menu contains a 'Command Prompt with Intel Compiler' entry for each of the 32 and 64 bit targets. It opens a command prompt with all necessary paths and environment variables set correctly.

Just start codeblocks.exe from this command prompt and it inherits the environment. Then you can select the 32 or 64 bit Intel compiler option under 'Project->Build options' and it works without further ado.

The only thing you cannot do is switch between 32 and 64 bit targets on the fly; you have to start Code::Blocks with the correct environment first.

Thursday, March 20, 2014

Eclipse compiler produces faster FP code?

While testing this very simple FP code in Java:

package the.plateisbad;

public class Simple {

  public static void main(String[] args) {
    final long arg_flops;
    double i,x = 0.0d, y = 0.0d;
    final long start, end;

    if (args.length != 2) {
      System.out.println("Usage: the.plateisbad.Simple ");
      return;
    }

    arg_flops = Long.parseLong(args[0]);
    y = Long.parseLong(args[1]);

    System.out.println("Thinking really hard for " + arg_flops + " flops...");

    start = System.currentTimeMillis();

    for (i = 0; i < arg_flops; i++) {
      x = i * y;
    }

    end = System.currentTimeMillis();

    System.out.println("We calculated: " + x + " in " +(end-start)+ " ms");
  }
}



I've stumbled over the fact that it runs considerably faster when compiled with the Eclipse ECJ compiler than with a standard javac.

With ECJ, executed with JDK 1.7:

java -server the.plateisbad.Simple 1000000000 3

Thinking really hard for 1000000000 flops...
We calculated: 2.999999997E9 in 1964 ms

With javac, executed with JDK 1.7:

java -server the.plateisbad.Simple 1000000000 3

Thinking really hard for 1000000000 flops...
We calculated: 2.999999997E9 in 3514 ms

With the new JDK 1.8, there is no noticeable difference between javac and ECJ:

java -server the.plateisbad.Simple 1000000000 3

Thinking really hard for 1000000000 flops...
We calculated: 2.999999997E9 in 3727 ms

but it is always the slowest of the three. The bytecode tells me that ECJ builds a tail-controlled loop that loops while i < arg_flops:

      64: invokestatic  #52                 // Method java/lang/System.currentTimeMillis:()J
      67: lstore        9
      69: dconst_0     
      70: dstore_3     
      71: goto          84
      74: dload_3      
      75: dload         7
      77: dmul         
      78: dstore        5
      80: dload_3      
      81: dconst_1     
      82: dadd         
      83: dstore_3     
      84: dload_3      
      85: lload_1      
      86: l2d          
      87: dcmpg        
      88: iflt          74
      91: invokestatic  #52                 // Method java/lang/System.currentTimeMillis:()J


while javac builds a head-controlled loop that exits when i >= arg_flops:

      67: invokestatic  #13                 // Method java/lang/System.currentTimeMillis:()J
      70: lstore        9
      72: dconst_0     
      73: dstore_3     
      74: dload_3      
      75: lload_1      
      76: l2d          
      77: dcmpg        
      78: ifge          94
      81: dload_3      
      82: dload         7
      84: dmul         
      85: dstore        5
      87: dload_3      
      88: dconst_1     
      89: dadd         
      90: dstore_3     
      91: goto          74
      94: invokestatic  #13                 // Method java/lang/System.currentTimeMillis:()J

And ECJ uses StringBuffer while javac uses StringBuilder for the String operations, but since these are not in the loop, that should not make any difference.

Does somebody know what is going on here?

UPDATE: This seems to be an anomaly. SciMark 2.0 shows no significant differences between ECJ and javac, or between JDK 1.7 and JDK 1.8, with 1.8 being slightly faster.

Wednesday, March 5, 2014

A suggestion to all architects of high-security buildings ;->

Please, don't put the restrooms outside the security gates!

Wednesday, February 12, 2014

Arbitrary parallel (well, almost) ad-hoc queries with PostgreSQL

Contemporary PostgreSQL lacks the ability to run single queries on multiple cores, nodes etc., i.e. it lacks automatic horizontal scaling. While this seems to be under development, what can be done today?

PL/Proxy allows database partitioning and RUN ON ALL executes the function on all nodes simultaneously. PL/Proxy is limited to the partitioned execution of functions and has good reasons for this design. But PostgreSQL can execute dynamic SQL within functions, so let's see how far we can get.

Worker function (on all worker nodes):

CREATE OR REPLACE FUNCTION parallel_query(statement text)
  RETURNS SETOF record AS
$BODY$
DECLARE
  r record;
BEGIN
  IF lower($1) LIKE 'select%' THEN
    FOR r IN EXECUTE $1 LOOP
      RETURN NEXT r;
    END LOOP;
  ELSE
    RAISE EXCEPTION 'Only queries allowed';
  END IF;
END
$BODY$
  LANGUAGE plpgsql VOLATILE;


Proxy function (on all head nodes):

CREATE OR REPLACE FUNCTION parallel_query(statement text)
  RETURNS SETOF record AS
$BODY$
CLUSTER 'head';
RUN ON ALL;
$BODY$
  LANGUAGE plproxy VOLATILE;
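
PL/Proxy also needs to know what the 'head' cluster is. Here is a minimal sketch of the function-based configuration, assuming two worker databases on the same server; the connection strings are examples and have to match the actual worker databases:

CREATE SCHEMA IF NOT EXISTS plproxy;

-- Version number of the cluster configuration; bump it whenever
-- the partition list changes.
CREATE OR REPLACE FUNCTION plproxy.get_cluster_version(cluster_name text)
RETURNS integer AS $$
BEGIN
  IF cluster_name = 'head' THEN
    RETURN 1;
  END IF;
  RAISE EXCEPTION 'Unknown cluster: %', cluster_name;
END;
$$ LANGUAGE plpgsql;

-- Connection strings of the partitions; PL/Proxy requires their
-- number to be a power of two.
CREATE OR REPLACE FUNCTION plproxy.get_cluster_partitions(cluster_name text)
RETURNS SETOF text AS $$
BEGIN
  IF cluster_name = 'head' THEN
    RETURN NEXT 'dbname=worker0 host=localhost';
    RETURN NEXT 'dbname=worker1 host=localhost';
    RETURN;
  END IF;
  RAISE EXCEPTION 'Unknown cluster: %', cluster_name;
END;
$$ LANGUAGE plpgsql;

-- No special settings here; PL/Proxy's defaults apply.
CREATE OR REPLACE FUNCTION plproxy.get_cluster_config(
  cluster_name text, OUT key text, OUT val text)
RETURNS SETOF record AS $$
BEGIN
  RETURN;
END;
$$ LANGUAGE plpgsql;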


Table (on all worker nodes):

CREATE TABLE users
(
  username text NOT NULL,
  CONSTRAINT users_pkey PRIMARY KEY (username)
)
WITH (
  OIDS=FALSE
);


With 10000 rows distributed over two nodes, partitioned by username hash (~5000 on each node), the query

select * from parallel_query('select * from users') as (username text);

returns all 10000 rows. Since the nodes can be databases within the same server, there is no need for additional hardware or server installations. But if more performance is required in the future, more boxes can be added.
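
For reference, test rows of this form can be generated directly on the workers. This is just one possible sketch, hash-partitioning into two nodes; on the second node, run it with = 1 instead:

-- Hypothetical test data generator for worker node 0;
-- hashtext(...) & 1 picks one of the two partitions.
INSERT INTO users (username)
SELECT 'user_name_' || i
FROM generate_series(0, 9999) AS i
WHERE hashtext('user_name_' || i) & 1 = 0;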

All it takes is logical partitioning and a bit of PL/pgSQL if you really need to run parallel queries.

There are some differences though. Take the following query:

select * from parallel_query('select max(username) from users') as (username text);

"user_name_9995"
"user_name_9999"

It now returns two maxima, one for each partition. To get the expected result, a second stage is needed:

select max(username) from parallel_query('select max(username) from users') as (username text);

"user_name_9999"

The same applies to other aggregate functions. Note that for avg() the second stage must combine per-node sums and counts; averaging the per-node averages would give a wrong result if the partitions differ in size.
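
For example, a correct two-stage average over the username length (the column choice is purely for illustration):

-- Two-stage avg(): each node returns its sum and count,
-- the head node combines them.
SELECT sum(s)::numeric / sum(c) AS avg_username_length
FROM parallel_query('select sum(length(username)), count(*) from users')
     AS (s bigint, c bigint);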

The proxy function can finally be hidden in a VIEW:

CREATE OR REPLACE VIEW "users" AS select * from parallel_query('select * from users') as (username text);
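
Queries against the view then look like ordinary table access:

-- The parallel fetch is invisible to the caller.
SELECT count(*) FROM users;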