Table A.1.
Performance metrics on all assemblies of the finely-discretized dataset, for three of the studied MOGP models (all with q = 16 latent processes) and two baselines. The best and worst value for each metric are highlighted in green and red respectively. Definitions of the metrics are given in Section 3.2. The training time Ttrain is defined at the level of one assembly. PLMC and VLMC were trained on GPU, with minibatch training in the latter case. The α-CI and PVA metrics are missing for the PLMC because predicted variance was not computed with this model: our current implementation requires the formation of a full p × p noise matrix for this, which causes a memory error in this setting.
| Model | R² | RMSE (cm−1) | Errmax (cm−1) | Err | Err | α-CI | PVA | Ttrain (s) | ℛ |
|---|---|---|---|---|---|---|---|---|---|
| VLMC | 0.185 | 1.86 · 10−3 | 0.189 | 2.04 · 10−4 | 0.543 | 1.00 | −5.90 | 8,010 | 266 |
| PLMC | 0.984 | 2.43 · 10−3 | 0.169 | 1.69 · 10−4 | 0.423 | – | – | 122 | 266 |
| Lazy-LMC | 0.999 | 2.97 · 10−3 | 0.145 | 1.75 · 10−4 | 0.469 | 1.00 | −11.5 | 5 · 10−4 | 266 |
| BPR | 0.997 | 1.19 · 10−1 | 0.758 | 1.78 · 10−4 | 0.348 | 0.987 | −1.48 | 6,310 | 13 |
| Training-less SOGP | 0.999 | 2.86 · 10−3 | 0.111 | 1.24 · 10−4 | 0.434 | 0.999 | −6.03 | 12,200 | 21 |
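As a reading aid for the accuracy columns, the sketch below computes R², RMSE, and a maximum-error metric from a vector of predictions, assuming the standard definitions of these quantities; the paper's exact definitions (including the Err columns and the probabilistic metrics α-CI and PVA) are given in Section 3.2 and may differ in normalization.

```python
import numpy as np

def r2(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot (standard definition).
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    # Root-mean-square error, in the units of the target (here cm−1).
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def err_max(y_true, y_pred):
    # Largest absolute pointwise error over the test set.
    return float(np.max(np.abs(y_true - y_pred)))

# Toy example (hypothetical values, not taken from the dataset):
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])
print(r2(y_true, y_pred), rmse(y_true, y_pred), err_max(y_true, y_pred))
```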


