class: center, middle, inverse, title-slide # CRPS-Learning ## Jonathan Berrisch, Florian Ziel ### University of Duisburg-Essen ### 2021-06-28 --- class:middle name: content # Outline - [Motivation](#motivation) - [The Framework of Prediction under Expert Advice](#pred_under_exp_advice) - [The Continuous Ranked Probability Score](#crps) - [Optimality of (Pointwise) CRPS-Learning](#crps_optim) - [A Simple Probabilistic Example](#simple_example) - [The Proposed CRPS-Learning Algorithm](#proposed_algorithm) - [Simulation Results](#simulation) - [Possible Extensions](#extensions) - [Application Study](#application) - [Wrap-Up](#conclusion) - [References](#references) --- name: motivation # Motivation .pull-left[ The idea: - Combine multiple forecasts instead of choosing one - Combination weights may vary over **time**, over the **distribution**, or **both** Two popular options for combining distributions: - Combining across quantiles (this paper) - Horizontal aggregation, Vincentization - Combining across probabilities - Vertical aggregation ] .pull-right[ <div style="position:relative; margin-top:-50px; z-index: 0"> .panelset[ .panel[.panel-name[Time] ![](data:image/png;base64,#index_files/figure-html/unnamed-chunk-1-1.svg)<!-- --> ] .panel[.panel-name[Distribution] ![](data:image/png;base64,#index_files/figure-html/unnamed-chunk-2-1.svg)<!-- --> ]] ] --- name: pred_under_exp_advice # The Framework of Prediction under Expert Advice .pull-left[ ### The sequential framework Each day, `\(t = 1, 2, \ldots, T\)`: - The **forecaster** receives predictions `\(\widehat{X}_{t,k}\)` from `\(K\)` **experts** - The **forecaster** assigns weights `\(w_{t,k}\)` to each **expert** - The **forecaster** calculates her prediction: `\begin{equation} \widetilde{X}_{t} = \sum_{k=1}^K w_{t,k} \widehat{X}_{t,k}. \label{eq_forecast_def} \end{equation}` - The realization for `\(t\)` is observed ] .pull-left[ ### The Regret Weights are updated sequentially according to the past performance of the `\(K\)` experts. That is, a loss function `\(\ell\)` is needed.
This is used to compute the **cumulative regret** `\(R_{t,k}\)` `\begin{equation} R_{t,k} = \widetilde{L}_{t} - \widehat{L}_{t,k} = \sum_{i = 1}^t \left( \ell(\widetilde{X}_{i},Y_i) - \ell(\widehat{X}_{i,k},Y_i) \right) \label{eq_regret} \end{equation}` - <a id='cite-cesa2006prediction'></a><a href='#bib-cesa2006prediction'>Cesa-Bianchi and Lugosi (2006)</a> ] --- name: popular_algs # Popular Algorithms and the Risk .pull-left[ ### Popular Aggregation Algorithms #### The naive combination `\begin{equation} w_{t,k}^{\text{Naive}} = \frac{1}{K} \end{equation}` #### The exponentially weighted average forecaster (EWA) `\begin{align} w_{t,k}^{\text{EWA}} & = \frac{e^{\eta R_{t,k}} }{\sum_{k = 1}^K e^{\eta R_{t,k}}} = \frac{e^{-\eta \ell(\widehat{X}_{t,k},Y_t)} w^{\text{EWA}}_{t-1,k} }{\sum_{k = 1}^K e^{-\eta \ell(\widehat{X}_{t,k},Y_t)} w^{\text{EWA}}_{t-1,k} } \label{eq_ewa_general} \end{align}` ] .pull-right[ ### Optimality In stochastic settings, the cumulative risk should be analyzed <a id='cite-wintenberger2017optimal'></a><a href='#bib-wintenberger2017optimal'>Wintenberger (2017)</a>: `\begin{align} &\underbrace{\widetilde{\mathcal{R}}_t = \sum_{i=1}^t \mathbb{E}[\ell(\widetilde{X}_{i},Y_i)|\mathcal{F}_{i-1}]}_{\text{Cumulative Risk of Forecaster}} \\ &\underbrace{\widehat{\mathcal{R}}_{t,k} = \sum_{i=1}^t \mathbb{E}[\ell(\widehat{X}_{i,k},Y_i)|\mathcal{F}_{i-1}]}_{\text{Cumulative Risk of Experts}} \label{eq_def_cumrisk} \end{align}` ] --- # Optimal Convergence .pull-left[ ### The selection problem `\begin{equation} \frac{1}{t}\left(\widetilde{\mathcal{R}}_t - \widehat{\mathcal{R}}_{t,\min} \right) \stackrel{t\to \infty}{\rightarrow} a \quad \text{with} \quad a \leq 0. \label{eq_opt_select} \end{equation}` The forecaster is asymptotically not worse than the best expert `\(\widehat{\mathcal{R}}_{t,\min}\)`. ### The convex aggregation problem `\begin{equation} \frac{1}{t}\left(\widetilde{\mathcal{R}}_t - \widehat{\mathcal{R}}_{t,\pi} \right) \stackrel{t\to \infty}{\rightarrow} b \quad \text{with} \quad b \leq 0. \label{eq_opt_conv} \end{equation}` The forecaster is asymptotically not worse than the best convex combination `\(\widehat{X}_{t,\pi}\)` in hindsight (**oracle**). ] .pull-right[ Optimal rates with respect to selection \eqref{eq_opt_select} and convex aggregation \eqref{eq_opt_conv} <a href='#bib-wintenberger2017optimal'>Wintenberger (2017)</a>: `\begin{align} \frac{1}{t}\left(\widetilde{\mathcal{R}}_t - \widehat{\mathcal{R}}_{t,\min} \right) & = \mathcal{O}\left(\frac{\log(K)}{t}\right)\label{eq_optp_select} \end{align}` `\begin{align} \frac{1}{t}\left(\widetilde{\mathcal{R}}_t - \widehat{\mathcal{R}}_{t,\pi} \right) & = \mathcal{O}\left(\sqrt{\frac{\log(K)}{t}}\right) \label{eq_optp_conv} \end{align}` Algorithms can satisfy both \eqref{eq_optp_select} and \eqref{eq_optp_conv} depending on: - The loss function - Regularity conditions on `\(Y_t\)` and `\(\widehat{X}_{t,k}\)` - The weighting scheme ] --- name:crps .pull-left[ ## Optimality EWA satisfies the optimal selection convergence \eqref{eq_optp_select} in a deterministic setting if: - The loss `\(\ell\)` is exp-concave - The learning rate `\(\eta\)` is chosen correctly These results can be transferred to stochastic i.i.d. settings <a id='cite-kakade2008generalization'></a><a href='#bib-kakade2008generalization'>Kakade and Tewari (2008)</a> <a id='cite-gaillard2014second'></a><a href='#bib-gaillard2014second'>Gaillard, Stoltz, and Van Erven (2014)</a>.
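To make the recursive EWA update \eqref{eq_ewa_general} concrete, here is a minimal R sketch of one step; the pinball (quantile) loss, the fixed learning rate, and the two toy experts are illustrative assumptions rather than part of the slides:

```r
# Minimal sketch of the recursive EWA update (eq_ewa_general).
# Assumptions: pinball loss as the example loss and a fixed learning rate eta.
pinball <- function(q, y, p) (as.numeric(y <= q) - p) * (q - y)

ewa_update <- function(w_prev, x_experts, y, eta, p = 0.5) {
  loss  <- pinball(x_experts, y, p)    # loss of each expert at time t
  w_new <- exp(-eta * loss) * w_prev   # multiplicative weight update
  w_new / sum(w_new)                   # renormalize to sum to one
}

# Toy usage: two experts issuing constant median forecasts of -1 and 3
set.seed(1)
w <- c(0.5, 0.5)
for (t in 1:250) {
  y <- rnorm(1)                        # observation
  w <- ewa_update(w, c(-1, 3), y, eta = 0.1)
}
w                                      # most weight ends up on the better expert
```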
The optimal convex aggregation convergence \eqref{eq_optp_conv} can be satisfied by applying the gradient trick, i.e., by using the linearized loss `\begin{align} \ell^{\nabla}(x,y) = \ell'(\widetilde{X},y) x \end{align}` where `\(\ell'\)` is a subgradient of `\(\ell\)` at the forecast combination `\(\widetilde{X}\)`. ] .pull-right[ ## The CRPS `\begin{align*} \text{CRPS}(F, y) & = \int_{\mathbb{R}} {(F(x) - \mathbb{1}\{ x > y \})}^2 dx \label{eq_crps} \end{align*}` The CRPS is strictly proper <a id='cite-gneiting2007strictly'></a><a href='#bib-gneiting2007strictly'>Gneiting and Raftery (2007)</a>. Using the CRPS, we can calculate time-adaptive weights `\(w_{t,k}\)`. However, what if the experts' performance is not uniform over all parts of the distribution? The idea: utilize the relation `\begin{align*} \text{CRPS}(F, y) = 2 \int_0^{1} \text{QL}_p(F^{-1}(p), y) \, d p \label{eq_crps_qs} \end{align*}` to combine the quantiles of the probabilistic forecasts individually using the quantile loss `\(\text{QL}_p\)`. ] --- name: crps_optim # CRPS-Learning Optimality QL is convex but not exp-concave.
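As a quick sanity check of the quantile-loss representation \eqref{eq_crps_qs} from the previous slide, a short R sketch for a standard Gaussian forecast (the observed value and the grid resolution are arbitrary choices):

```r
# Sketch: check CRPS(F, y) = 2 * integral_0^1 QL_p(F^{-1}(p), y) dp numerically
# for F = N(0, 1), against the closed-form CRPS of the standard normal.
pinball <- function(q, y, p) (as.numeric(y <= q) - p) * (q - y)

y      <- 0.7                               # arbitrary observation
p_grid <- seq(0.0005, 0.9995, by = 0.001)   # midpoint grid on (0, 1)

crps_via_ql <- 2 * mean(pinball(qnorm(p_grid), y, p_grid))
crps_closed <- y * (2 * pnorm(y) - 1) + 2 * dnorm(y) - 1 / sqrt(pi)

c(via_quantile_loss = crps_via_ql, closed_form = crps_closed)  # ~0.42 in both cases
```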
Bernstein Online Aggregation (BOA) allows us to weaken the exp-concavity condition. It satisfies the following bound: there exists a `\(C>0\)` such that for all `\(x>0\)` `\begin{equation} P\left( \frac{1}{t}\left(\widetilde{\mathcal{R}}_t - \widehat{\mathcal{R}}_{t,\pi} \right) \leq C \log(\log(t)) \left(\sqrt{\frac{\log(K)}{t}} + \frac{\log(K)+x}{t}\right) \right) \geq 1-e^{-x} \label{eq_boa_opt_conv} \end{equation}`
This is almost optimal w.r.t. *convex aggregation* \eqref{eq_optp_conv} <a href='#bib-wintenberger2017optimal'>Wintenberger (2017)</a>. The same algorithm satisfies that there exists a `\(C>0\)` such that for all `\(x>0\)` `\begin{equation} P\left( \frac{1}{t}\left(\widetilde{\mathcal{R}}_t - \widehat{\mathcal{R}}_{t,\min} \right) \leq C\left(\frac{\log(K)+\log(\log(Gt))+ x}{\alpha t}\right)^{\frac{1}{2-\beta}} \right) \geq 1-e^{-x} \label{eq_boa_opt_select} \end{equation}` if `\(Y_t\)` is bounded and the considered loss `\(\ell\)` is convex, `\(G\)`-Lipschitz, and weakly exp-concave in its first coordinate.
This is almost optimal w.r.t. *selection* \eqref{eq_optp_select} <a id='cite-gaillard2018efficient'></a><a href='#bib-gaillard2018efficient'>Gaillard and Wintenberger (2018)</a>.
We show that this holds for the quantile loss under feasible conditions. --- name: simple_example # A Simple Probabilistic Example .pull-left[ A simple example: `\begin{align} Y_t & \sim \mathcal{N}(0,\,1) \\ \widehat{X}_{t,1} & \sim \widehat{F}_{1} = \mathcal{N}(-1,\,1) \\ \widehat{X}_{t,2} & \sim \widehat{F}_{2} = \mathcal{N}(3,\,4) \label{eq:dgp_sim1} \end{align}` - The true weights vary over `\(p\)` - The figures show the ECDFs and the calculated weights using `\(T=25\)` realizations - The pointwise solution creates rough weight estimates - The pointwise solution is better than the constant one - The smooth solution is better than the pointwise one ] .pull-right[ <div style="position:relative; margin-top:-50px; z-index: 0"> .panelset[ .panel[.panel-name[CDFs] <img src="data:image/png;base64,#index_files/figure-html/unnamed-chunk-3-1.svg" style="display: block; margin: auto;" /> ] .panel[.panel-name[Weights] <img src="data:image/png;base64,#index_files/figure-html/unnamed-chunk-4-1.svg" style="display: block; margin: auto;" /> ]] ] --- # The Smoothing Procedure .pull-left[ We use penalized cubic B-splines: let `\(\varphi=(\varphi_1,\ldots, \varphi_L)\)` be bounded basis functions on `\((0,1)\)`. We then approximate `\(w_{t,k}\)` by `\begin{align} w_{t,k}^{\text{smooth}} = \sum_{l=1}^L \beta_l \varphi_l = \beta'\varphi \end{align}` with parameter vector `\(\beta\)`. The latter is estimated by penalized `\(L_2\)`-smoothing, which minimizes `\begin{equation} \| w_{t,k} - \beta' \varphi \|^2_2 + \lambda \| \mathcal{D}^{d} (\beta' \varphi) \|^2_2 \label{eq_function_smooth} \end{equation}` with differential operator `\(\mathcal{D}\)`. Smoothing can be applied ex post or inside the algorithm (
[Simulation](#simulation)). ] .pull-right[ We obtain the constant solution for high values of `\(\lambda\)` when setting `\(d=1\)`: <center> <img src="weights_lambda.gif"> </center> ] --- name:proposed_algorithm # The Proposed CRPS-Learning Algorithm .pull-left-3[ .font90[ **Initialization:** Array of expert predictions: `\(\widehat{X}_{t,k,p}\)` Vector of prediction targets: `\(Y_t\)` Starting weights: `\(w_0=(w_{0,1},\ldots, w_{0,K})\)`, Penalization parameter: `\(\lambda\geq 0\)` B-spline and penalty matrices `\(B\)` and `\(D\)` on `\(\mathcal{P}= (p_1,\ldots,p_M)\)` Hat matrix: `$$\mathcal{H} = B(B'B+ \lambda D'D)^{-1} B'$$` Cumulative regret: `\(R_{0,k} = 0\)` Range parameter: `\(E_{0,k}=0\)` ]] .pull-right-3[ .font90[ **Core**: for(t in 1:T) { for(p in `\(\mathcal{P}\)`) { `\(\widetilde{X}_{t}(p) = \sum_{k=1}^K w_{t-1,k,p} \widehat{X}_{t,k}(p)\)` .grey[\# Prediction] for(k in 1:K){ `\(r_{t,k,p} = \text{QL}_p^{\nabla}(\widehat{X}_{t,k}(p),Y_t) - \text{QL}_p^{\nabla}(\widetilde{X}_{t}(p),Y_t)\)` `\(E_{t,k,p} = \max(E_{t-1,k,p}, |r_{t,k,p}|)\)` `\(\eta_{t,k,p}=\min\left(1/(2E_{t,k,p}), \sqrt{\log(K)/ \sum_{i=1}^t (r^2_{i, k,p})}\right)\)` `\(R_{t,k,p} = R_{t-1,k,p} + \frac{1}{2} \left( r_{t,k,p} \left( 1+ \eta_{t,k,p} r_{t,k,p} \right) + 2E_{t,k,p} \mathbb{1}(\eta_{t,k,p}r_{t,k,p} > \frac{1}{2}) \right)\)` `\(w_{t,k,p} = \eta_{t,k,p} \exp \left(- \eta_{t,k,p} R_{t,k,p} \right) w_{0,k,p} / \left( \frac{1}{K} \sum_{k = 1}^K \eta_{t,k,p} \exp \left( - \eta_{t,k,p} R_{t,k,p}\right) \right)\)` }.grey[\#k]}.grey[\#p] for(k in 1:K){ `\(w_{t,k} = \mathcal{H} w_{t,k}(\mathcal{P})\)` .grey[\# Smoothing] } .grey[\#k]} .grey[\#t] ] ] --- name: simulation # Simulation Study .pull-left[ Data-generating process of the [simple probabilistic example](#simple_example): - Constant solution `\(\lambda \rightarrow \infty\)` - Pointwise solution of the proposed BOAG - Smoothed solution of the proposed BOAG - Weights are smoothed during learning - Smooth weights are used to calculate the regret, adjust weights, etc. - Smooth ex-post solution - Weights are smoothed after learning - The algorithm always uses non-smoothed weights ] .pull-right[ <div style="position:relative; margin-top:-50px; z-index: 0"> .panelset[ .panel[.panel-name[QL Deviation] Deviation from the best attainable `\(\text{QL}_p\)` (1000 runs). ![](data:image/png;base64,#pre_vs_post.gif) ] .panel[.panel-name[CRPS vs. Lambda] CRPS values for different `\(\lambda\)` (1000 runs). ![](data:image/png;base64,#pre_vs_post_lambda.gif) ]] ] --- # Simulation Study The same simulation, carried out for different algorithms (1000 runs): <center> <img src="algos_constant.gif"> </center> --- # Simulation Study .pull-left-1[ **New DGP:** `\begin{align} Y_t & \sim \mathcal{N}\left(\frac{\sin(0.005 \pi t )}{2},\,1\right) \\ \widehat{X}_{t,1} & \sim \widehat{F}_{1} = \mathcal{N}(-1,\,1) \\ \widehat{X}_{t,2} & \sim \widehat{F}_{2} = \mathcal{N}(3,\,4) \label{eq_dgp_sim2} \end{align}`
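Before the summary points below, a compact R sketch of the pointwise update from the algorithm slide, applied to a single quantile of this new DGP. It is a simplified illustration only (no smoothing across `\(p\)`, no forgetting), and all object names are ours rather than profoc's:

```r
# Sketch: pointwise BOA update (see the algorithm slide) for one quantile p of
# the DGP above. Smoothing across p is omitted; all object names are ours.
set.seed(1)
T_ <- 1000; K <- 2; p <- 0.5
w     <- rep(1 / K, K)                  # starting weights w_0
R     <- rep(0, K)                      # cumulative regret
E     <- rep(0, K)                      # range parameter
r2sum <- rep(0, K)                      # running sum of squared regrets
w_path <- matrix(NA_real_, T_, K)

for (t in 1:T_) {
  y <- rnorm(1, sin(0.005 * pi * t) / 2, 1)   # observation from the new DGP
  x <- c(qnorm(p, -1, 1), qnorm(p, 3, 2))     # experts' p-quantile forecasts
  x_comb <- sum(w * x)                        # combined forecast
  grad   <- as.numeric(y <= x_comb) - p       # subgradient of QL_p at x_comb
  r      <- grad * (x - x_comb)               # gradient-trick regret per expert
  E      <- pmax(E, abs(r))
  r2sum  <- r2sum + r^2
  eta    <- pmin(1 / (2 * E), sqrt(log(K) / r2sum))
  R      <- R + 0.5 * (r * (1 + eta * r) + 2 * E * (eta * r > 0.5))
  v      <- eta * exp(-eta * R)
  w      <- (v / K) / mean(v)                 # normalization as in the pseudocode
  w_path[t, ] <- w
}
tail(round(w_path, 3))   # most of the weight ends up on the better expert
```

The full algorithm would additionally smooth the weights across all quantiles via the hat matrix `\(\mathcal{H}\)` after each iteration.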
- The optimal weights now change over time
- A single-run example is depicted on the right
- Without forgetting, the weights become almost constant in the long run ] .pull-right-2[ **Weights of expert 2** <img src="data:image/png;base64,#index_files/figure-html/unnamed-chunk-5-1.svg" style="display: block; margin: auto;" /> ] --- # Simulation Results The simulation using the new DGP, carried out for different algorithms (1000 runs): <center> <img src="algos_changing.gif"> </center> --- name:extensions # Possible Extensions .pull-left[ **Forgetting** - Only part of the old cumulative regret is taken into account - Exponential forgetting of past regret `\begin{align*} R_{t,k} & = R_{t-1,k}(1-\xi) + \ell(\widetilde{F}_{t},Y_t) - \ell(\widehat{F}_{t,k},Y_t) \label{eq_regret_forget} \end{align*}` **Fixed Shares** <a id='cite-herbster1998tracking'></a><a href='#bib-herbster1998tracking'>Herbster and Warmuth (1998)</a> - Adding fixed shares to the weights - Shrinkage towards the constant solution `\begin{align*} \widetilde{w}_{t,k} = \rho \frac{1}{K} + (1-\rho) w_{t,k}. \label{fixed_share_simple} \end{align*}` ] .pull-right[ **Non-Equidistant Knots** - A non-equidistant spline basis could be used - Potentially improves the tail behavior - Destroys the shrinkage towards the constant solution <center> <img src="uneven_grid.gif"> </center> ] --- name: application # Application Study: Overview .pull-left-1[ .font90[ Data: - Forecasting European emission allowances (EUA) - Daily month-ahead prices - Jan 2013 - Dec 2020 (Phase III, 2092 obs.) Combination methods: - Naive, BOAG, EWAG, ML-PolyG, BMA Tuning parameter grids: - Smoothing penalty: `\(\Lambda= \{0\}\cup \{2^x|x\in \{-4,-3.5,\ldots,12\}\}\)` - Learning rates: `\(\mathcal{E}= \{2^x|x\in \{-1,-0.5,\ldots,9\}\}\)` ] ] .pull-right-2[ <img src="data:image/png;base64,#index_files/figure-html/unnamed-chunk-7-1.svg" style="display: block; margin: auto;" /> ] --- # Application Study: Experts .font90[ Simple exponential smoothing with additive errors (**ETS-ANN**): `\begin{align*} Y_{t} = l_{t-1} + \varepsilon_t \quad \text{with} \quad l_t = l_{t-1} + \alpha \varepsilon_t \quad \text{and} \quad \varepsilon_t \sim \mathcal{N}(0,\sigma^2) \end{align*}` Quantile regression (**QuantReg**): for each `\(p \in \mathcal{P}\)` we assume `\begin{align*} F^{-1}_{Y_t}(p) = \beta_{p,0} + \beta_{p,1} Y_{t-1} + \beta_{p,2} |Y_{t-1}-Y_{t-2}| \end{align*}` ARIMA(1,0,1)-GARCH(1,1) with Gaussian errors (**ARMA-GARCH**): `\begin{align*} Y_{t} = \mu + \phi(Y_{t-1}-\mu) + \theta \varepsilon_{t-1} + \varepsilon_t \quad \text{with} \quad \varepsilon_t = \sigma_t Z_t, \quad \sigma_t^2 = \omega + \alpha \varepsilon_{t-1}^2 + \beta \sigma_{t-1}^2 \quad \text{and} \quad Z_t \sim \mathcal{N}(0,1) \end{align*}` ARIMA(0,1,0)-I-EGARCH(1,1) with Gaussian errors (**I-EGARCH**): `\begin{align*} Y_{t} = \mu + Y_{t-1} + \varepsilon_t \quad \text{with} \quad \varepsilon_t = \sigma_t Z_t, \quad \log(\sigma_t^2) = \omega + \alpha Z_{t-1}+ \gamma (|Z_{t-1}|-\mathbb{E}|Z_{t-1}|) + \beta \log(\sigma_{t-1}^2) \quad \text{and} \quad Z_t \sim \mathcal{N}(0,1) \end{align*}` ARIMA(0,1,0)-GARCH(1,1) with Student-t errors (**I-GARCHt**): `\begin{align*} Y_{t} = \mu + Y_{t-1} + \varepsilon_t \quad \text{with} \quad \varepsilon_t = \sigma_t Z_t, \quad \sigma_t^2 = \omega + \alpha \varepsilon_{t-1}^2 + \beta \sigma_{t-1}^2 \quad \text{and} \quad Z_t \sim t(0,1, \nu) \end{align*}` ] --- # Application Study: Results <div style="position:relative; margin-top:-25px; z-index: 0"> .panelset[ .panel[.panel-name[Significance] <table class=" lightable-material" style='font-family: "Source Sans Pro", helvetica, sans-serif; margin-left: auto; margin-right: 
auto;'> <thead> <tr> <th style="text-align:center;"> ETS-ANN </th> <th style="text-align:center;"> QuantReg </th> <th style="text-align:center;"> ARMA-GARCH </th> <th style="text-align:center;"> I-EGARCH </th> <th style="text-align:center;"> I-GARCHt </th> </tr> </thead> <tbody> <tr> <td style="text-align:center;background-color: #FF808C !important;"> 2.103 (>.999) </td> <td style="text-align:center;background-color: #FF808C !important;"> 1.360 (>.999) </td> <td style="text-align:center;background-color: #FFB180 !important;"> 0.522 (0.993) </td> <td style="text-align:center;background-color: #FFB480 !important;"> 0.503 (0.999) </td> <td style="text-align:center;background-color: #F3FF80 !important;"> -0.035 (0.411) </td> </tr> </tbody> </table> <table class=" lightable-material" style='font-family: "Source Sans Pro", helvetica, sans-serif; margin-left: auto; margin-right: auto;'> <thead> <tr> <th style="text-align:left;"> </th> <th style="text-align:center;"> BOAG </th> <th style="text-align:center;"> EWAG </th> <th style="text-align:center;"> ML-PolyG </th> <th style="text-align:center;"> BMA </th> </tr> </thead> <tbody> <tr> <td style="text-align:left;font-weight: bold;"> pointwise </td> <td style="text-align:center;background-color: #99EE80 !important;"> -0.161 (0.067) </td> <td style="text-align:center;background-color: #D6FF80 !important;"> -0.085 (0.177) </td> <td style="text-align:center;background-color: #AFF580 !important;"> -0.136 (0.126) </td> <td style="text-align:center;background-color: #FFF980 !important;"> 0.030 (0.753) </td> </tr> <tr> <td style="text-align:left;font-weight: bold;"> smooth </td> <td style="text-align:center;background-color: #85E780 !important;"> -0.185 (0.037) </td> <td style="text-align:center;background-color: #D1FF80 !important;"> -0.094 (0.150) </td> <td style="text-align:center;background-color: #99EE80 !important;"> -0.161 (0.066) </td> <td style="text-align:center;background-color: #FFF980 !important;"> 0.027 (0.722) </td> </tr> <tr> <td style="text-align:left;font-weight: bold;"> constant </td> <td style="text-align:center;background-color: #AEF580 !important;"> -0.137 (0.020) </td> <td style="text-align:center;background-color: #E0FF80 !important;"> -0.067 (0.144) </td> <td style="text-align:center;background-color: #B3F780 !important;"> -0.132 (0.027) </td> <td style="text-align:center;background-color: #FFF880 !important;"> 0.035 (0.826) </td> </tr> <tr> <td style="text-align:left;font-weight: bold;"> smooth* </td> <td style="text-align:center;background-color: #80E680 !important;"> -0.191 (0.023) </td> <td style="text-align:center;background-color: #9CEF80 !important;"> -0.158 (0.025) </td> <td style="text-align:center;background-color: #80E680 !important;"> -0.190 (0.021) </td> <td style="text-align:center;background-color: #FFFE80 !important;"> -0.009 (0.333) </td> </tr> </tbody> </table> CRPS difference to **Naive** (scaled by `\(10^4\)`) of single experts and four combination methods with four options. Additionally, we show the p-value of the DM-test, testing against **Naive**. The smallest value is bold. 
We also report the optimal ex-post selection, denoted **smooth***. ] .panel[.panel-name[QL] <img src="data:image/png;base64,#index_files/figure-html/unnamed-chunk-9-1.svg" style="display: block; margin: auto;" /> ] .panel[.panel-name[Cumulative Loss Difference] <img src="data:image/png;base64,#index_files/figure-html/unnamed-chunk-10-1.svg" style="display: block; margin: auto;" /> ] .panel[.panel-name[Weights] <img src="data:image/png;base64,#index_files/figure-html/unnamed-chunk-11-1.svg" style="display: block; margin: auto;" /> ] ] --- name: conclusion # Wrap-Up .font90[ .pull-left[ Potential downsides: - Pointwise optimization can induce quantile crossing - This can be solved by sorting the predictions Upsides: - Pointwise learning significantly outperforms the Naive solution - Online learning is much faster than batch methods - Smoothing further improves the predictive performance - Asymptotically not worse than the best convex combination ] .pull-left[ Important: - The choice of the learning rate is crucial - The loss function has to meet certain criteria The [
profoc](https://profoc.berrisch.biz/) R Package: - Implements all algorithms discussed above - Is written using RcppArmadillo
- It's fast - Accepts vectors for most parameters - The best parameter combination is chosen online - Implements forgetting and fixed share - Implements different loss functions and their gradients ] ] <a href="https://github.com/BerriJ" class="github-corner" aria-label="View source on Github"><svg width="80" height="80" viewBox="0 0 250 250" style="fill:#f2f2f2; color:#212121; position: absolute; top: 0; border: 0; right: 0;" aria-hidden="true"><path d="M0,0 L115,115 L130,115 L142,142 L250,250 L250,0 Z"></path><path d="M128.3,109.0 C113.8,99.7 119.0,89.6 119.0,89.6 C122.0,82.7 120.5,78.6 120.5,78.6 C119.2,72.0 123.4,76.3 123.4,76.3 C127.3,80.9 125.5,87.3 125.5,87.3 C122.9,97.6 130.6,101.9 134.4,103.2" fill="currentColor" style="transform-origin: 130px 106px;" class="octo-arm"></path><path d="M115.0,115.0 C114.9,115.1 118.7,116.5 119.8,115.4 L133.7,101.6 C136.9,99.2 139.9,98.4 142.2,98.6 C133.8,88.0 127.5,74.4 143.8,58.0 C148.5,53.4 154.0,51.2 159.7,51.0 C160.3,49.4 163.2,43.6 171.4,40.1 C171.4,40.1 176.1,42.5 178.8,56.2 C183.1,58.6 187.2,61.8 190.9,65.4 C194.5,69.0 197.7,73.2 200.1,77.6 C213.8,80.2 216.3,84.9 216.3,84.9 C212.7,93.1 206.9,96.0 205.4,96.6 C205.1,102.4 203.0,107.8 198.3,112.5 C181.9,128.9 168.3,122.5 157.7,114.1 C157.9,116.9 156.7,120.9 152.7,124.9 L141.0,136.5 C139.8,137.7 141.6,141.9 141.8,141.8 Z" fill="currentColor" class="octo-body"></path></svg></a><style>.github-corner:hover .octo-arm{animation:octocat-wave 560ms ease-in-out}@keyframes octocat-wave{0%,100%{transform:rotate(0)}20%,60%{transform:rotate(-25deg)}40%,80%{transform:rotate(10deg)}}@media (max-width:500px){.github-corner:hover .octo-arm{animation:none}.github-corner .octo-arm{animation:octocat-wave 560ms ease-in-out}}</style> ??? Execution Times: T = 5000 Opera: ML-Poly > 157 ms BOA > 212 ms Profoc: ML-Poly > 17 BOA > 16 --- class: center, middle [
CRPS-Learning](https://arxiv.org/abs/2102.00968) --- name:references # References Cesa-Bianchi, N. and G. Lugosi (2006). _Prediction, learning, and games_. Cambridge University Press. Gaillard, P., G. Stoltz, and T. Van Erven (2014). "A second-order bound with excess losses". In: _Conference on Learning Theory_. PMLR, pp. 176-196. Gaillard, P. and O. Wintenberger (2018). "Efficient online algorithms for fast-rate regret bounds under sparsity". In: _Advances in Neural Information Processing Systems_, pp. 7026-7036. Gneiting, T. and A. E. Raftery (2007). "Strictly proper scoring rules, prediction, and estimation". In: _Journal of the American Statistical Association_ 102.477, pp. 359-378. Herbster, M. and M. K. Warmuth (1998). "Tracking the best expert". In: _Machine Learning_ 32.2, pp. 151-178. Kakade, S. M. and A. Tewari (2008). "On the generalization ability of online strongly convex programming algorithms". In: _NIPS_, pp. 801-808. Wintenberger, O. (2017). "Optimal learning with Bernstein online aggregation". In: _Machine Learning_ 106.1, pp. 119-141. --- class: center, middle [
Back to Outline](#content)