The predict1() function from the first post of this series has a performance problem: the SVM is retrained every time the function is called. Can this be corrected?
Sifting through the PL/R documentation reveals a way to perform expensive initializations only once and persist the results between function calls.
This leads to the first optimized version of our predictor function:
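The key is that PL/R keeps the R interpreter alive for the duration of a database session, so objects assigned to the R global environment survive between calls. A minimal sketch of how r_predict2() could exploit this, assuming the model is an e1071 svm() as in the first post (the signature, the training-data query, and the column names are placeholders, not the actual code):

CREATE OR REPLACE FUNCTION r_predict2(val integer) RETURNS text AS $$
  # Train only once per session: objects cached in .GlobalEnv persist
  # across calls because PL/R reuses the same R interpreter.
  if (!exists("model", envir = .GlobalEnv)) {
    library(e1071)
    # Placeholder: load the real training set here, e.g. via pg.spi.exec().
    train_df <- data.frame(x = 1:100, y = factor(1:100 %% 2))
    assign("model", svm(y ~ x, data = train_df), envir = .GlobalEnv)
  }
  # Every call after the first skips straight to prediction.
  as.character(predict(model, data.frame(x = val)))
$$ LANGUAGE plr;

On the first call the exists() check fails, the model is trained and cached; every subsequent call in the same session pays only the prediction cost, which matches the timing pattern below (slow first run, faster repeats).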
Let's again run this statement three times:
select s.*, r_predict2(s.*) from generate_series(1,1000) s;
671 ms for the first run. 302 ms for each of the following two. Average: 425 ms.
That's a 60% improvement compared to the original code.
But we still need to provide the training data and run the training once.
What if we can't, e.g. because of the sheer size of the training data, or because of legal or intellectual property restrictions?
Can we do better?