Round 2:
Sifting through the PL/R documentation reveals a way to do expensive initializations only once and persist them between function calls.
This leads to the first optimized version of our predictor function:
This file contains bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
CREATE OR REPLACE FUNCTION r_predict2(inp integer) | |
RETURNS text AS | |
$BODY$ | |
if (pg.state.firstpass) | |
{ | |
library(e1071) | |
data <- seq(1,10) | |
classes <- c('b','b','b','b','a','a','a','a','b','b') | |
mysvm = svm (data, classes, type='C', kernel='radial', gamma=0.1, cost=10) | |
assign("pg.state.firstpass", FALSE, env=.GlobalEnv) | |
} | |
result <- predict(mysvm, inp) | |
return(as.character(result[1:1])) | |
$BODY$ | |
LANGUAGE plr IMMUTABLE STRICT | |
COST 100; |
select s.*, r_predict2(s.*) from generate_series(1,1000) s;
671 ms for the first run. 302 ms for each of the following two. Average: 425 ms.
That's a 60% improvement compared to the original code.
But we still need to provide the training data and run the training once.
What if we can't, because e.g. of sheer size, legal or intellectual property restrictions?
Can we do better?
No comments:
Post a Comment