Wednesday, February 4, 2015

count(*) is faster than count(1) revisited

Whoa, I didn't expect that much resonance with the last post. :-)

"Can you post the `explain analyze` output of both queries? I'm pretty sure this is normal "distribution" caused by other stuff going on in the system"

OK. Since my 'test' ran on a VirtualBox VM with a self-compiled PostgreSQL, which might not be a reliable test setup, I hijacked one of our real PostgreSQL servers:

uname -a
Linux  2.6.32-358.23.2.el6.x86_64 #1 SMP Sat Sep 14 05:32:37 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux

Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz, 16 GB RAM, storage on SAN, ext4.

select version();
"PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-4), 64-bit"

create table hmr(id serial, value real);

insert into hmr (value) select random()*10000000 from generate_series(1,10000000);


CREATE INDEX idx_id
  ON hmr
  USING btree
  (id);


vacuum analyze hmr;

-----------------------------------------------------------------------

explain analyze select count(1) from hmr;

"Aggregate  (cost=169248.00..169248.01 rows=1 width=0) (actual time=1439.521..1439.521 rows=1 loops=1)"
"  ->  Seq Scan on hmr  (cost=0.00..144248.00 rows=10000000 width=0) (actual time=0.006..642.276 rows=10000000 loops=1)"
"Total runtime: 1439.548 ms"


select count(1) from hmr; --811 msec.

-----------------------------------------------------------------------

explain analyze select count(*) from hmr;


"Aggregate  (cost=169248.00..169248.01 rows=1 width=0) (actual time=1299.034..1299.034 rows=1 loops=1)"
"  ->  Seq Scan on hmr  (cost=0.00..144248.00 rows=10000000 width=0) (actual time=0.006..644.912 rows=10000000 loops=1)"
"Total runtime: 1299.061 ms"


select count(*) from hmr; --670 msec 

-----------------------------------------------------------------------

explain analyze select count(id) from hmr;


"Aggregate  (cost=169248.00..169248.01 rows=1 width=4) (actual time=1576.046..1576.046 rows=1 loops=1)"
"  ->  Seq Scan on hmr  (cost=0.00..144248.00 rows=10000000 width=4) (actual time=0.004..636.076 rows=10000000 loops=1)"
"Total runtime: 1576.069 ms"


select count(id) from hmr; --920 msec

----------------------------------------------------------------------- 

And the winner is: still count(*), with count(1) second and count(id) third.

This could be an explanation for count(1) being slower:

"So count(1) is explicitly passing the 1 in and checking it for being NULL before incrementing a counter while count(*) is simply incrementing the counter."
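The strictness difference behind that quote can be seen in the system catalog (a quick check, no guarantee this is the whole story): count(*) resolves to the zero-argument count(), which is not strict, while count(expr) takes an argument and is declared STRICT, so the executor has to null-check that argument on every row.

```sql
-- List both variants of count and whether they are strict.
-- count("any") is the one behind count(1)/count(id); count() is count(*).
SELECT proname,
       pg_get_function_arguments(oid) AS args,
       proisstrict
FROM   pg_proc
WHERE  proname = 'count';
```

On 9.3 this shows proisstrict = t for count("any") and f for the zero-argument count().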

And count(id) might be slower because it takes additional time to consider using the index?
Or because it actually has to read the column data: width=4 vs. width=0?
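At least the reading-the-column part is easy to demonstrate: count(expr) only counts rows where the expression is not NULL, so the value has to be fetched and inspected, unlike count(*). A small throwaway example:

```sql
-- count(col) skips NULLs, so it must read the column; count(*) does not.
CREATE TEMP TABLE t(col int);
INSERT INTO t VALUES (1), (NULL), (3);

SELECT count(*)   AS all_rows,   -- 3
       count(col) AS non_null    -- 2
FROM t;
```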

I'm out of my league here.

2 comments:

  1. This shouldn't be possible since the first two execution plans are identical, but you're right. I have a table with about 10M rows, and even when fully cached, counting it is reliably about 200ms faster when using * instead of 1. That's really weird. I ran it dozens of times with both variants, and it's always the same.

    It's pretty clear passing a constant to count should have the same effect as *, so this may be an easy performance tweak in the next version of PG. Nice.

    ReplyDelete
  2. I know that count(X) is implemented using the same code as count(*), except that the first is declared as STRICT and takes one input parameter, which it ignores (it is only there so that the strictness check will skip NULL rows). So it seems that the code which checks whether 1 is NULL is run for every row and takes a significant amount of time.

    ReplyDelete