EXACT MANIFOLD GAUSSIAN VARIATIONAL BAYES

Abstract

We propose an optimization algorithm for Variational Inference (VI) in complex models. Our approach relies on natural gradient updates where the variational space is a Riemannian manifold. We develop an efficient algorithm for Gaussian Variational Inference that implicitly satisfies the positive-definite constraint on the variational covariance matrix. Our Exact Manifold Gaussian Variational Bayes (EMGVB) provides exact yet simple update rules and is straightforward to implement. Due to its black-box nature, EMGVB stands as a ready-to-use solution for VI in complex models. Over five datasets, we empirically validate our approach on different statistical and econometric models, discussing its performance with respect to baseline methods.

1. INTRODUCTION

Although Bayesian principles are not new to Machine Learning (ML) (e.g. Mackay, 1992; 1995; Lampinen & Vehtari, 2001), it is only recently that feasible methods have boosted a growing use of Bayesian methods within the field (e.g. Zhang et al., 2018; Trusheim et al., 2018; Osawa et al., 2019; Khan et al., 2018b; Khan & Nielsen, 2018). In typical ML settings the applicability of sampling methods for the challenging computation of the posterior is prohibitive; however, approximate methods such as Variational Inference (VI) have proved suitable and successful (Saul et al., 1996; Wainwright & Jordan, 2008; Hoffman et al., 2013; Blei et al., 2017). VI is generally performed with Stochastic Gradient Descent (SGD) methods (Robbins & Monro, 1951; Hoffman et al., 2013; Salimans & Knowles, 2014), boosted by the use of natural gradients (Hoffman et al., 2013; Wierstra et al., 2014; Khan et al., 2018b), and the updates often take a simple form (Khan & Nielsen, 2018; Osawa et al., 2019; Magris et al., 2022). The majority of VI algorithms rely on the extensive use of models' gradients, and the form of the variational posterior implies additional model-specific derivations that are not easy to adapt to a general, plug-and-play optimizer. Black-box methods (Ranganath et al., 2014) are straightforward to implement and versatile to use, as they avoid model-specific derivations by relying on stochastic sampling (Salimans & Knowles, 2014; Paisley et al., 2012; Kingma & Welling, 2013). The increased variance in the gradient estimates, as opposed to e.g. methods relying on the Reparametrization Trick (RT) (Blundell et al., 2015; Xu et al., 2019), can be alleviated with variance reduction techniques (e.g. Magris et al., 2022). Furthermore, the majority of existing algorithms do not directly address parameters' constraints.
Under the typical Gaussian variational assumption, granting positive-definiteness of the covariance matrix is an acknowledged problem (e.g. Tran et al., 2021a; Khan et al., 2018b; Lin et al., 2020). Only a few algorithms directly tackle the problem (Osawa et al., 2019; Lin et al., 2020), see Section 3. A recent approximate approach based on manifold optimization is provided by Tran et al. (2021a). Building on the theoretical results of Khan & Lin (2017) and Khan et al. (2018a), we develop an exact version of Tran et al. (2021a), resulting in an algorithm that explicitly tackles the positive-definiteness constraint for the variational covariance matrix and resembles the readily-applicable natural-gradient black-box framework of Magris et al. (2022). For its implementation, we discuss recommendations and practicalities, show that EMGVB is of simple implementation, and demonstrate its feasibility in extensive experiments over four datasets, 12 models, and three competing VI optimizers. In Section 2 we review the basis of VI, in Section 3 we review the Manifold Gaussian Variational Bayes approach and other related works, and in Section 4 we discuss our proposed approach. Experiments are found in Section 5, while Section 6 concludes. Appendices A, B complement the main discussion, Appendix C.4 reinforces and expands the experiments, and Appendix D provides proofs.

2. VARIATIONAL INFERENCE

Variational Inference (VI) stands as a convenient and feasible approximate method for Bayesian inference. Let y denote the data and p(y|θ) the likelihood of the data under some model whose k-dimensional parameter is θ. Let p(θ) be the prior distribution on θ. In standard Bayesian inference the posterior is retrieved via Bayes' theorem as p(θ|y) = p(θ)p(y|θ)/p(y). As the marginal likelihood p(y) is generally intractable, Bayesian inference is often difficult for complex models. Though the problem can be tackled with sampling methods, Monte Carlo techniques, although nonparametric and asymptotically exact, may be slow, especially in high-dimensional applications (Salimans et al., 2015). VI approximates the true unknown posterior with a probability density q within a tractable class of distributions Q, such as the exponential family. VI turns the Bayesian inference problem into that of finding the best variational distribution q⋆ ∈ Q minimizing the Kullback-Leibler (KL) divergence from q to p(θ|y): q⋆ = arg min_{q∈Q} D_KL(q||p(θ|y)). It can be shown that the KL minimization problem is equivalent to the maximization of the so-called Lower Bound (LB) on log p(y) (e.g. Tran et al., 2021b). In fixed-form variational Bayes, the parametric form of the variational posterior is set. The optimization problem amounts to finding the optimal variational parameter ζ parametrizing q ≡ q_ζ that maximizes the LB L, that is:

ζ⋆ = arg max_{ζ∈Z} L(ζ) := ∫ q_ζ(θ) log [p(θ)p(y|θ)/q_ζ(θ)] dθ = E_{q_ζ}[log (p(θ)p(y|θ)/q_ζ(θ))],

where E_{q_ζ} means that the expectation is taken with respect to the distribution q_ζ, and Z is the parameter space for ζ. The maximization of the LB is generally tackled with a gradient-descent method such as SGD (Robbins & Monro, 1951), ADAM (Kingma & Ba, 2014), or ADAGRAD (Duchi et al., 2011).
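For concreteness, the LB above can be estimated by naive Monte Carlo: draw θ_s from q_ζ and average log p(θ_s) + log p(y|θ_s) − log q_ζ(θ_s). A minimal NumPy sketch (the function names `elbo_mc`, `log_prior`, `log_lik` are illustrative, not from the paper's code):

```python
import numpy as np

def elbo_mc(log_prior, log_lik, mu, Sigma, S=1000, rng=None):
    """Naive Monte Carlo estimate of the lower bound
    L = E_q[log p(theta) + log p(y|theta) - log q(theta)]
    for a Gaussian variational posterior q = N(mu, Sigma)."""
    rng = np.random.default_rng(rng)
    d = mu.size
    L = np.linalg.cholesky(Sigma)
    thetas = mu + rng.standard_normal((S, d)) @ L.T
    # log q(theta) for a multivariate Gaussian, via the Cholesky factor
    z = np.linalg.solve(L, (thetas - mu).T)
    log_q = (-0.5 * d * np.log(2 * np.pi)
             - np.log(np.diag(L)).sum()
             - 0.5 * (z ** 2).sum(axis=0))
    vals = [log_prior(t) + log_lik(t) - lq for t, lq in zip(thetas, log_q)]
    return np.mean(vals)
```

If q coincides with the (known) normalized posterior, the estimate equals log p(y) exactly sample by sample; in general it lower-bounds it.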
The learning of the parameter ζ based on standard gradient descent is however problematic, as it ignores the information geometry of the distribution q_ζ, is not scale invariant, is unstable, and is very susceptible to the initial values (Wierstra et al., 2014). SGD implicitly relies on the Euclidean norm for capturing the dissimilarity between two distributions, which can be a poor and misleading measure of dissimilarity (Khan & Nielsen, 2018). By using the KL divergence in place of the Euclidean norm, the SGD update results in the following natural-gradient update:

ζ_{t+1} = ζ_t + β_t ∇̃_ζ L(ζ)|_{ζ=ζ_t},

where β_t is a possibly adaptive learning rate, t denotes the iteration, and ∇̃_ζ L denotes the natural gradient of the LB. The above update results in improved steps towards the maximum of the LB when optimizing it for the variational parameter. A major issue in following this approach is that ζ is unconstrained. Think of a Gaussian variational posterior: under the above update, there is no guarantee that the covariance matrix is iteratively updated onto a symmetric and positive-definite matrix. As discussed in the introduction, manifold optimization is an attractive possibility.

3. RELATED WORK

In Tran et al. (2021a), a d-dimensional Gaussian distribution N(µ, Σ) provides the fixed form of the variational posterior q_ζ, with ζ = (µ, vec(Σ)). There are no restrictions on µ, yet the covariance matrix Σ is constrained to the manifold M of symmetric positive-definite matrices, M = {Σ ∈ R^{d×d} : Σ = Σ^⊤, Σ ≻ 0}, see e.g. (Abraham et al., 2012; Hu et al., 2020). The exact form of the Fisher Information Matrix (FIM) for the multivariate normal distribution is provided, e.g., in (Mardia & Marshall, 1984) and reads

I_ζ = [Σ^{-1} 0; 0 I_ζ(Σ)],  with  [I_ζ(Σ)]_{σ_ij,σ_kl} = ½ tr(Σ^{-1} (∂Σ/∂σ_ij) Σ^{-1} (∂Σ/∂σ_kl)),

with [I_ζ(Σ)]_{σ_ij,σ_kl} being the generic element of the d² × d² matrix I_ζ(Σ). The MGVB method relies on the approximation I_ζ(Σ) ≈ Σ^{-1} ⊗ Σ^{-1}, whose inverse Σ ⊗ Σ leads to a convenient approximate form of the natural gradients of the lower bound with respect to µ and Σ, respectively computed as ∇̃_µ L(ζ) = Σ∇_µ L(ζ) and ∇̃_Σ L(ζ) ≈ vec^{-1}((Σ ⊗ Σ)∇_{vec(Σ)} L(ζ)) = Σ∇_Σ L(ζ)Σ, where ⊗ denotes the Kronecker product. In virtue of the natural-gradient definition, the first natural gradient is exact while the second is approximate. Thus, Tran et al. (2021a) adopt the following updates for the parameters of the variational posterior: µ = µ + β∇̃_µ L(ζ) and Σ = R_Σ(β∇̃_Σ L(ζ)), where R_Σ(·) denotes a suitable retraction for Σ on the manifold M. Momentum gradients can be used in place of plain natural ones. In particular, the momentum gradient for the update of Σ relies on a vector transport granting that at each iteration the weighted gradient remains in the tangent space of the manifold M. Refer to Section 4.2 for more information on retraction and vector transport. Besides the relationship between EMGVB and MGVB already discussed, a method handling the positivity constraint for diagonal covariance matrices is the VOGN optimizer (Khan et al., 2018b; Osawa et al., 2019). VOGN relates to the VON update (see Appendix B) as it indirectly updates µ and Σ from the updates of the Gaussian natural parameters.
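The compact form Σ∇_Σ L(ζ)Σ of the devectorized approximate natural gradient follows from the Kronecker identity (A ⊗ B) vec(X) = vec(B X A^⊤). A quick numerical check of this equivalence (illustrative NumPy sketch, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
A = rng.standard_normal((d, d))
Sigma = A @ A.T + d * np.eye(d)          # an SPD "covariance"
G = rng.standard_normal((d, d))
G = 0.5 * (G + G.T)                      # symmetric Euclidean gradient

# (Sigma ⊗ Sigma) vec(G) devectorizes to Sigma G Sigma, by the identity
# (A ⊗ B) vec(X) = vec(B X A.T) with column-stacking vec.
vec = lambda X: X.reshape(-1, order="F")
unvec = lambda v: v.reshape(d, d, order="F")
lhs = unvec(np.kron(Sigma, Sigma) @ vec(G))
rhs = Sigma @ G @ Sigma
assert np.allclose(lhs, rhs)
```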
Following a non-black-box approach, VOGN uses some theoretical results on the Gaussian distribution to recover an update for Σ that involves the Hessian of the likelihood. Such Hessian is estimated as the samples' mean squared gradient, granting the non-negativity of the diagonal covariance update. Osawa et al. (2019) devise the computation of the approximate Hessian in a block-diagonal fashion within the layers of a deep-learning model. Lin et al. (2020) extend the above to handle the positive-definiteness constraint by adding an additional term to the update rule for Σ, applicable to certain partitioned structures of the FIM. The retraction map in (Lin et al., 2020) is more general than that of (Tran et al., 2021a) and is obtained through a different Riemannian metric, from which MGVB is retrieved as a special case. As opposed to EMGVB, the use of the RT in (Lin et al., 2020) requires model-specific computations or auto-differentiation. See (Lin et al., 2021) for an extension to stochastic, non-convex problems. Lin et al. (2020) underline that in Tran et al. (2021a) the chosen form of the retraction is not well-justified, as it is specific to the SPD matrix manifold whereas the natural gradient is computed for the Gaussian manifold. An extensive discussion on this point and its relationship with the EMGVB optimizer here proposed is found in Appendix D.3. Alternative methods that rely on unconstrained transformations (e.g. the Cholesky factor) (e.g. Tan, 2021), or on the adaptive adjustment of the learning rate (e.g. Khan & Lin, 2017), lie outside the manifold context here discussed. Among the methods that do not control for the positive-definiteness constraint, the QBVI update (Magris et al., 2022) provides a comparable black-box method that, unlike other black-box VI algorithms, uses exact natural-gradient updates obtained without the computation of the FIM.

4. EXACT MANIFOLD GAUSSIAN VB

Consider a variational Gaussian distribution q_λ with mean µ and positive-definite covariance matrix Σ. Let λ₁ = Σ^{-1}µ and λ₂ = -½Σ^{-1} be its natural parameters and define λ = (λ₁, vec(λ₂)). The corresponding mean or expectation parameters m = (m₁, vec(m₂)) are given by m₁ = E_{q_λ}[θ] = µ and m₂ = E_{q_λ}[θθ^⊤] = µµ^⊤ + Σ. When required, in place of the somewhat vague notation L, whose precise meaning is to be inferred from the context, we shall use L(m) to explicitly denote the lower bound expressed in terms of the expectation parameter m, as opposed to L(λ) expressed in terms of λ.

Proposition 1. For a differentiable function L, and q_λ being a Gaussian distribution with mean µ and covariance matrix Σ,

∇̃_µ L = Σ∇_µ L,  ∇̃_{Σ^{-1}} L = -2∇̃_{λ₂} L = -2∇_Σ L,

where λ₂ = -½Σ^{-1} denotes the second natural parameter of q_λ.

The covariance matrix Σ is positive definite; its inverse exists and is as well symmetric and positive definite. Therefore Σ^{-1} lies within the manifold M and can be updated with a suitable retraction algorithm, as for Σ in equation 3:

Σ^{-1} = R_{Σ^{-1}}(β∇̃_{Σ^{-1}} L) = R_{Σ^{-1}}(-2β∇_Σ L). (4)

As opposed to the update in eq. 3, which relies on the approximation I_ζ(Σ) ≈ Σ^{-1} ⊗ Σ^{-1} for obtaining a positive-definite update of Σ, we target updating Σ^{-1}, for which the natural gradient is available in exact form, by primarily exploiting the duality between the gradients in the natural- and expectation-parameter spaces (Appendix D.1, eq. 25), which circumvents the computation of the FIM. For coherency with the literature on VI for Bayesian deep learning (e.g. Ranganath et al., 2014, among many others), we specify the variational posterior in terms of the covariance matrix Σ, but update Σ^{-1}. Yet nothing prevents specifying q_λ in terms of its precision matrix Σ^{-1}, as is often the case in Bayesian statistics textbooks, in which case update 4 corresponds to an update of the actual variational precision parameter.
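The natural/expectation parametrizations above can be checked numerically; the helper names below (`to_natural`, `to_expectation`, `from_natural`) are illustrative:

```python
import numpy as np

def to_natural(mu, Sigma):
    """(mu, Sigma) -> natural parameters (lambda1, lambda2)."""
    P = np.linalg.inv(Sigma)
    return P @ mu, -0.5 * P

def to_expectation(mu, Sigma):
    """(mu, Sigma) -> expectation parameters (m1, m2)."""
    return mu, np.outer(mu, mu) + Sigma

def from_natural(l1, l2):
    """Invert the natural parametrization: Sigma = (-2 lambda2)^{-1}, mu = Sigma lambda1."""
    Sigma = np.linalg.inv(-2.0 * l2)
    return Sigma @ l1, Sigma

rng = np.random.default_rng(1)
mu = rng.standard_normal(3)
A = rng.standard_normal((3, 3))
Sigma = A @ A.T + 3 * np.eye(3)

l1, l2 = to_natural(mu, Sigma)
mu2, Sigma2 = from_natural(l1, l2)
assert np.allclose(mu, mu2) and np.allclose(Sigma, Sigma2)

m1, m2 = to_expectation(mu, Sigma)
assert np.allclose(m2 - np.outer(m1, m1), Sigma)   # recover Sigma from m
```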
For updating µ it is reasonable to adopt a plain SGD-like step driven by the natural gradient ∇̃_µ L = Σ∇_µ L, as in (Tran et al., 2021a). We refer to the following update rules as Exact Manifold Gaussian Variational Bayes, or shortly EMGVB:

µ_{t+1} = µ_t + βΣ_t∇_µ L_t  and  Σ^{-1}_{t+1} = R_{Σ^{-1}_t}(-2β∇_Σ L_t),

where the gradients of L are intended as evaluated at the current value of the parameters, e.g. ∇_Σ L_t = ∇_Σ L|_{µ=µ_t, Σ=Σ_t}. With respect to the MGVB update of Tran et al. (2021a), there are no approximations, e.g. regarding the FIM, yet the cost of updating Σ^{-1} appears to be that of introducing an additional inversion for retrieving Σ, which is involved in the EMGVB update for µ. In the following section, we show that with a certain gradient estimator such an inversion is irrelevant. Furthermore, in Appendix B we point out that a covariance matrix inversion is implicit in both MGVB and EMGVB due to the second-order form of the retraction, and we also show that the update for µ is optimal in the sense therein specified.
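A minimal sketch of one EMGVB iteration under these update rules, using the second-order SPD retraction R_{Σ^{-1}}(ξ) = Σ^{-1} + ξ + ½ξΣξ detailed in Section 4.2 (illustrative NumPy code; `emgvb_step` and its plain-gradient inputs are assumptions, momentum and gradient estimation omitted):

```python
import numpy as np

def emgvb_step(mu, Sigma_inv, grad_mu, grad_Sigma, beta):
    """One EMGVB step: natural-gradient update for mu and a retraction
    update of the precision Sigma^{-1}. grad_mu and grad_Sigma are the
    Euclidean gradients of the lower bound at the current parameters."""
    Sigma = np.linalg.inv(Sigma_inv)
    mu_new = mu + beta * Sigma @ grad_mu          # natural gradient: Sigma grad_mu
    xi = -2.0 * beta * grad_Sigma                 # rescaled natural gradient for Sigma^{-1}
    xi = 0.5 * (xi + xi.T)                        # enforce symmetry
    Sinv_new = Sigma_inv + xi + 0.5 * xi @ Sigma @ xi   # SPD retraction
    return mu_new, 0.5 * (Sinv_new + Sinv_new.T)
```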

4.1. IMPLEMENTATION

We elaborate on how to evaluate the gradients ∇_Σ L and ∇_µ L. We follow the black-box approach (Ranganath et al., 2014) under which such gradients are approximated via Monte Carlo (MC) sampling and rely on function queries only. The implementation of the EMGVB updates does not require the model's gradient to be specified, nor to be computed numerically, e.g. with backpropagation. By use of the so-called log-derivative trick (see e.g. (Ranganath et al., 2014)) it is possible to evaluate the gradients of the LB as an expectation with respect to the variational distribution. In particular, for a generic differentiation variable ζ, ∇_ζ L(ζ) = E_{q_λ}[∇_ζ[log q_ζ(θ)] h_ζ(θ)], where h_ζ(θ) = log [p(θ)p(y|θ)/q_ζ(θ)]. In the EMGVB context with q ∼ N(µ, Σ), ζ = (µ, vec(Σ)) and L(ζ) = L(µ, Σ). The gradient of the LB w.r.t. ζ evaluated at ζ = ζ_t can be easily estimated using S samples from the variational posterior through the unbiased estimator

∇̂_ζ L(ζ_t) = ∇_ζ L(ζ)|_{ζ=ζ_t} ≈ (1/S) ∑_{s=1}^S [∇_ζ[log q_ζ(θ_s)] h_ζ(θ_s)]|_{ζ=ζ_t},  θ_s ∼ N(µ_t, Σ_t),

where the h-function is evaluated at the current values of the parameters, i.e. at ζ_t = (µ_t, vec(Σ_t)). For a Gaussian distribution q ∼ N(µ, Σ) it can be shown that (e.g. Wierstra et al., 2014; Magris et al., 2022):

∇_µ log q(θ) = Σ^{-1}(θ - µ),  ∇_Σ log q(θ) = -½(Σ^{-1} - Σ^{-1}(θ - µ)(θ - µ)^⊤ Σ^{-1}).

Equations 7, 8, along with 6 and Proposition 1, immediately lead to the feasible natural-gradient estimators:

∇̃_µ L(ζ_t) ≈ Σ_t ∇̂_µ L(ζ_t) = (1/S) ∑_{s=1}^S (θ_s - µ_t) h_{ζ_t}(θ_s),
∇̃_{Σ^{-1}} L(ζ_t) ≈ -2∇̂_Σ L(ζ_t) = (1/S) ∑_{s=1}^S [Σ_t^{-1} - Σ_t^{-1}(θ_s - µ_t)(θ_s - µ_t)^⊤ Σ_t^{-1}] h_{ζ_t}(θ_s).

As for the MGVB update, the EMGVB update applies exclusively to Gaussian variational posteriors, yet no constraints are imposed on the parametric form of p.
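The estimators above can be sketched directly from the formulas (illustrative NumPy code; `nat_grad_estimates` and the generic `h` callable are assumptions):

```python
import numpy as np

def nat_grad_estimates(mu, Sigma, h, S=200, rng=None):
    """Black-box MC estimators of the natural gradients of the lower bound
    via the score-function (log-derivative) identity:
      nat. grad. for mu        ≈ mean_s (theta_s - mu) h(theta_s)
      nat. grad. for Sigma^{-1} ≈ mean_s [P - P d_s d_s' P] h(theta_s),
    with P = Sigma^{-1}, d_s = theta_s - mu. h plays the role of the
    h-function (or, under a Gaussian prior, the log-likelihood)."""
    rng = np.random.default_rng(rng)
    d = mu.size
    L = np.linalg.cholesky(Sigma)
    P = np.linalg.inv(Sigma)
    g_mu = np.zeros(d)
    g_Sinv = np.zeros((d, d))
    for _ in range(S):
        theta = mu + L @ rng.standard_normal(d)
        delta = theta - mu
        hv = h(theta)
        g_mu += delta * hv
        g_Sinv += (P - P @ np.outer(delta, delta) @ P) * hv
    return g_mu / S, g_Sinv / S
```

Note that only evaluations of `h` are required: no model gradient enters the estimator.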
When considering a Gaussian prior, the implementation of the EMGVB update can take advantage of some analytical results leading to MC estimators of reduced variance, namely estimators implemented over the log-likelihood log p(y|θ_s) rather than over the h-function. In Appendix D.2, we show that, under a Gaussian prior specification, the above updates can also be implemented in terms of the model likelihood rather than in terms of the h-function. The general form of the EMGVB updates then writes:

∇̃_µ L(ζ_t) ≈ c_{µ,t} + (1/S) ∑_{s=1}^S (θ_s - µ_t) log f(θ_s),
∇̃_{Σ^{-1}} L(ζ_t) ≈ C_{Σ,t} + (1/S) ∑_{s=1}^S [Σ_t^{-1} - Σ_t^{-1}(θ_s - µ_t)(θ_s - µ_t)^⊤ Σ_t^{-1}] log f(θ_s),

where (i) if p is Gaussian, C_{Σ,t} = -Σ_t^{-1} + Σ₀^{-1}, c_{µ,t} = -Σ_t Σ₀^{-1}(µ_t - µ₀), and log f(θ_s) = log p(y|θ_s); (ii) whether p is Gaussian or not, C_{Σ,t} = c_{µ,t} = 0 and log f(θ_s) = h_{ζ_t}(θ_s); and θ_s ∼ q_{ζ_t} = N(µ_t, Σ_t), s = 1, ..., S. log p(y|θ_s) and h_{ζ_t}(θ_s) respectively denote the model likelihood and the h-function evaluated at θ_s. Note that the latter depends on t, as it involves the variational posterior evaluated at the parameter values of iteration t. p denotes the prior. It is clear that under the Gaussian specification for p the MC estimator is of reduced variance compared to the general one based on the h-function. Note that the log-likelihood case does not involve an additional inversion for retrieving Σ in c_µ, as Σ is anyway required in the second-order retraction (for both MGVB and EMGVB). This aspect is further discussed in Appendix B. For inverting Σ^{-1} we suggest inverting the Cholesky factor L^{-1} of Σ^{-1} and computing Σ as LL^⊤. This takes advantage of the triangular form of L^{-1}, which can be inverted with back-substitution, k³/3 flops cheaper than inverting Σ^{-1} directly, but still O(k³). L is furthermore used for generating the draws θ_s as either θ_s = µ + Lε or θ_s = µ + L^{-⊤}ε, with ε ∼ N(0, I).
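The Cholesky-based inversion and sampling described above can be sketched as follows (illustrative NumPy code with a hand-rolled forward substitution; the helper names are assumptions, and here `C` denotes the lower Cholesky factor of Σ^{-1}):

```python
import numpy as np

def invert_lower(C):
    """Invert a lower-triangular matrix column by column via forward
    substitution (about k^3/3 flops cheaper than a general inverse)."""
    d = C.shape[0]
    X = np.zeros_like(C)
    for j in range(d):
        for i in range(j, d):
            s = 1.0 if i == j else 0.0
            X[i, j] = (s - C[i, :i] @ X[:i, j]) / C[i, i]
    return X

def sigma_from_precision(Sigma_inv):
    """Recover Sigma from Sigma^{-1} = C C' as Sigma = C^{-T} C^{-1}."""
    C = np.linalg.cholesky(Sigma_inv)
    Cinv = invert_lower(C)
    return Cinv.T @ Cinv, Cinv

def sample_theta(mu, Cinv, S, rng=None):
    """Draw theta_s = mu + C^{-T} eps_s, eps_s ~ N(0, I), so that
    Cov(theta) = C^{-T} C^{-1} = Sigma."""
    rng = np.random.default_rng(rng)
    eps = rng.standard_normal((S, mu.size))
    return mu + eps @ Cinv
```

The same factor thus serves both the inversion and the sampling step, so no extra decomposition is needed per iteration.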
As outlined in Appendix A, we devise the use of control variates to reduce the variance of the stochastic gradient estimators. Though the lower bound is not directly involved in the EMGVB updates, it can be naively estimated at each iteration as L̂_t = (1/S) ∑_{s=1}^S [log p(θ_s) + log p(y|θ_s) - log q_ζ(θ_s)]. As discussed in Appendix A, L̂_t is needed for terminating the optimization routine, for verifying anomalies in how the algorithm works (the LB should actually increase and converge), and for comparing EMGVB with MGVB, see Section 5.

4.2. RETRACTION AND VECTOR TRANSPORT

Aligned with Tran et al. (2021a), we adopt the retraction method advanced in (Jeuris et al., 2012) for the manifold M of symmetric and positive-definite matrices:

R_{Σ^{-1}}(ξ) = Σ^{-1} + ξ + ½ξΣξ,

where ξ ∈ T_{Σ^{-1}}M, with ξ being the rescaled natural gradient β∇̃_{Σ^{-1}} L = -2β∇_Σ L. In practice, whenever applicable, as e.g. in the retraction, for granting the symmetric form of a matrix (or gradient matrix) S, we compute S as ½(S + S^⊤). Vector transport is as well easily implemented by

T_{Σ^{-1}_t → Σ^{-1}_{t+1}}(ξ) = EξE^⊤,  with E = (Σ^{-1}_{t+1} Σ_t)^{1/2},  ξ ∈ T_{Σ^{-1}}M.

We refer to the Manopt toolbox (Boumal et al., 2014) for the practical details of implementing the above two algorithms in a numerically stable fashion. This translates into the momentum gradients

∇̃^{mom.}_{Σ^{-1}} L_{t+1} = ω T_{Σ^{-1}_t → Σ^{-1}_{t+1}}(∇̃^{mom.}_{Σ^{-1}} L_t) + (1 - ω)∇̃_{Σ^{-1}} L_{t+1},
∇̃^{mom.}_µ L_{t+1} = ω∇̃^{mom.}_µ L_t + (1 - ω)∇̃_µ L_{t+1},

where the weight 0 < ω < 1 is a hyper-parameter. The attentive reader may recognize the adoption of the forms of retraction and parallel transport obtained for the SPD (matrix) manifold on the natural gradient obtained for the Gaussian manifold. This apparent inconsistency in mixing elements of different manifold structures is discussed in Appendix D.3. We show that, from a learning perspective, the discrepancy between the SPD-manifold Riemann gradient Σ^{-1}∇_{Σ^{-1}}L Σ^{-1} = -∇_Σ L and the natural gradient ∇̃_{Σ^{-1}} L = -2∇_Σ L = 2Σ^{-1}∇_{Σ^{-1}}L Σ^{-1} is absorbed in the learning rate β. In particular, our update rule can be derived within a fully consistent SPD manifold setting by updating (µ, 2Σ^{-1}). In the above view, we can now further clarify that the wording "Exact" in EMGVB is twofold. (i) In Tran et al. (2021a) the approximate natural gradient Σ∇_Σ L Σ is used in place of the actual one 2Σ∇_Σ L Σ, whose counterpart for Σ^{-1} is the one that EMGVB actually adopts.
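A sketch of the retraction and vector transport above in NumPy (illustrative code; the matrix square root is taken by eigendecomposition, which is applicable since Σ^{-1}_{t+1}Σ_t is similar to an SPD matrix and so has positive real eigenvalues):

```python
import numpy as np

def retract(Sigma_inv, xi):
    """R_{Sigma^{-1}}(xi) = Sigma^{-1} + xi + 0.5 * xi Sigma xi (Jeuris et al., 2012)."""
    Sigma = np.linalg.inv(Sigma_inv)
    out = Sigma_inv + xi + 0.5 * xi @ Sigma @ xi
    return 0.5 * (out + out.T)            # symmetrize against round-off

def _sqrtm(M):
    """Square root of a matrix with positive real eigenvalues."""
    w, V = np.linalg.eig(M)
    return np.real(V @ np.diag(np.sqrt(w.astype(complex))) @ np.linalg.inv(V))

def transport(Sinv_old, Sinv_new, xi):
    """Vector transport T(xi) = E xi E' with E = (Sinv_new Sigma_old)^{1/2}."""
    E = _sqrtm(Sinv_new @ np.linalg.inv(Sinv_old))
    return E @ xi @ E.T
```

For production use, the Manopt toolbox referenced above provides numerically stable implementations of both maps.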
(ii) Even with the adoption of the actual natural gradient -2∇_Σ L = 2Σ^{-1}∇_{Σ^{-1}}L Σ^{-1}, the use of the SPD retraction and vector-transport forms 14, 15, as in Tran et al. (2021a), is not well-justified: in Appendix D.3 these are justified, and EMGVB is shown to be a consistent approach. Note that EMGVB is exact in the above sense, yet still approximate in absolute terms due to the use of retraction. Retractions are approximate forms of the exponential map tracing vectors on the tangent space back to the manifold, which is a generally cumbersome transform to compute and impractical (e.g. Absil et al., 2009; Hu et al., 2020). Algorithm 1 summarizes the EMGVB update for the Gaussian prior-variational posterior case. Computational aspects are discussed in Appendix B.2.

Algorithm 1 EMGVB implementation
1: Set hyper-parameters: 0 < β, ω < 1, S
2: Set the type of gradient estimator, i.e. the function log f(θ_s)
3: Set the prior p(θ), the likelihood form p(y|θ), and initial values µ, Σ^{-1}
4: t = 1, Stop = false
5: Generate: θ_s ∼ q_{µ₁,Σ₁}, s = 1, ..., S
6: Compute: ĝ_µ = Σ∇̂_µ L, ĝ_{Σ^{-1}} = -2∇̂_Σ L ▷ eqs. 11, 12
7: m_µ = ĝ_µ, m_{Σ^{-1}} = ĝ_{Σ^{-1}} ▷ initialize momentum
8: while Stop = false do
9: µ = µ + βm_µ ▷ EMGVB update for µ
10: Σ^{-1}_old = Σ^{-1}, Σ^{-1} = R_{Σ^{-1}}(βm_{Σ^{-1}}) ▷ EMGVB update for Σ^{-1}
11: Generate: θ_s ∼ q_{µ_t,Σ_t}, s = 1, ..., S
12: Compute: ĝ_µ, ĝ_{Σ^{-1}} ▷ as in line 6
13: m_µ = ωm_µ + (1 - ω)ĝ_µ ▷ eq. 16
14: m_{Σ^{-1}} = ωT_{Σ^{-1}_old → Σ^{-1}}(m_{Σ^{-1}}) + (1 - ω)ĝ_{Σ^{-1}} ▷ eq. 17
15: Compute: L̂_t ▷ eq. 13
16: t = t + 1, Stop = f_exit(L̂, P, t_max) ▷ see Appendix A
17: end while

4.3. FURTHER CONSIDERATIONS

Along with the choice of the gradient estimator and the use of momentum, there are other aspects of relevance in the implementation of EMGVB. Details are discussed in Appendix A.

4.4. ISOTROPIC PRIOR

For mid-sized to large-scale problems, the prior is commonly specified as an isotropic Gaussian of mean µ₀, often µ₀ = 0, and precision matrix Σ₀^{-1} = τI, with τ > 0 a scalar precision parameter. The covariance matrix of the variational posterior can be either diagonal or not. While a full covariance specification (d² - d parameters) provides additional degrees of freedom that can improve a model's predictive ability, a diagonal posterior (d parameters) can be practically and computationally convenient, e.g. in large-sized problems. The diagonal-posterior assumption is largely adopted in Bayesian inference and VI (e.g. Blundell et al., 2015; Ganguly & Earp, 2021; Tran et al., 2021b) and in Bayesian ML applications (e.g. Kingma & Welling, 2013; Graves, 2011; Khan et al., 2018b; Osawa et al., 2019); in Appendix A we provide a block-diagonal variant.

4.4.1. ISOTROPIC PRIOR AND DIAGONAL GAUSSIAN POSTERIOR

Assume a d-variate diagonal Gaussian variational specification, that is, q ∼ N(µ, Σ) with diag(Σ) = σ² and Σ_ij = 0 for i, j = 1, ..., d, i ≠ j. In this case Σ^{-1} = diag(1/σ²), where the division is intended element-wise, and ∇_Σ L = diag(∇_{σ²} L) is now represented by a d × 1 vector. Updating Σ^{-1} amounts to updating σ^{-2}: the natural-gradient retraction-based update for σ^{-2} is now based on the equality ∇̃_{σ^{-2}} L = -2∇_{σ²} L, so that the general-case EMGVB update reads

σ^{-2}_{t+1} = R_{σ^{-2}_t}(-2β∇_{σ²} L)  and  µ_{t+1} = µ_t + βσ²_t ⊙ ∇_µ L,

where ⊙ denotes the element-wise product. The corresponding MC estimators for the gradients are

-2∇̂_{σ²} L ≈ c_{σ²,t} + σ^{-2} ⊙ (1/S) ∑_{s=1}^S [1_d - (θ_s - µ_t)² ⊙ σ^{-2}] log p_t(y|θ_s), (19)
σ² ⊙ ∇̂_µ L ≈ c_{µ,t} + (1/S) ∑_{s=1}^S (θ_s - µ_t) log p_t(y|θ_s),

where c_{σ²,t} = -σ^{-2} + τ, c_{µ,t} = -τσ² ⊙ (µ - µ₀), θ_s ∼ N(µ, diag(σ²)), s = 1, ..., S, (θ_s - µ_t)² is intended element-wise, and 1_d = (1, ..., 1)^⊤ ∈ R^d. In the Gaussian case with a general diagonal covariance matrix, retrieving σ² from the updated σ^{-2} is inexpensive, as σ²_i = 1/σ^{-2}_i, indicating that in this context the use of the h-function estimator is never advisable.
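A sketch of the diagonal update, eqs. 19-20 plus the element-wise retraction (illustrative NumPy code; `emgvb_diag_step` is an assumed name, momentum is omitted, and the draws and log-likelihood values are passed in precomputed):

```python
import numpy as np

def emgvb_diag_step(mu, prec, theta, logf, beta, tau, mu0):
    """One EMGVB step for a diagonal Gaussian posterior under an isotropic
    Gaussian prior N(mu0, (1/tau) I). prec is sigma^{-2} (a d-vector),
    theta an (S, d) array of draws from q, logf the S log-likelihood values.
    Uses the element-wise retraction sigma^{-2} + xi + 0.5 * xi^2 * sigma^2."""
    sigma2 = 1.0 / prec
    delta = theta - mu                       # (S, d)
    w = logf[:, None]                        # (S, 1)
    # natural gradient estimate for sigma^{-2}, i.e. -2 * grad wrt sigma^2
    g_prec = -prec + tau + (prec * (1.0 - delta**2 * prec) * w).mean(axis=0)
    # natural gradient estimate for mu: sigma^2 (element-wise) grad_mu
    g_mu = -tau * sigma2 * (mu - mu0) + (delta * w).mean(axis=0)
    xi = beta * g_prec
    prec_new = prec + xi + 0.5 * xi**2 * sigma2   # element-wise retraction
    mu_new = mu + beta * g_mu
    return mu_new, prec_new
```

All operations are element-wise on d-vectors, so the per-iteration cost is O(Sd) with no matrix inversions.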

4.4.2. ISOTROPIC PRIOR AND FULL GAUSSIAN POSTERIOR

Because of the full form of the covariance matrix, this case is rather analogous to the general one. In particular, the factors c_{µ,t} and C_{Σ,t} in eq. 29 are replaced by (i) C_{Σ,t} = -Σ_t^{-1} + τI, c_{µ,t} = -τΣ_t(µ_t - µ₀), or (ii) C_{Σ,t} = 0, c_{µ,t} = 0, respectively under the Gaussian-prior case (log f_t(θ_s) = log p(y|θ_s)) and the general one (log f_t(θ_s) = h_{ζ_t}(θ_s)). The MC estimators 7 and 8 apply: (i) leads to an estimator of reduced variance, while (ii) is identical to the general case.

5. EXPERIMENTS

We validate and explore the empirical validity and feasibility of our suggested optimizer over four datasets and 12 models. These include logistic regression (Labor dataset), different volatility models on S&P 500 returns (Volatility dataset), and linear regression on stock indexes (Istanbul dataset). Details on the datasets and models are summarized in Appendix 11. The main baselines for model comparison are the MGVB optimizer and (sequential) MCMC estimates representative of the true posterior. Additionally, we also include results for the QBVI optimizer (Magris et al., 2022). In this section, we report synthetic results on two tasks: logistic regression (classification) and volatility modeling with the FIGARCH model (regression). Results on the other datasets and models appear in Appendix C.4. Matlab codes are available at github.com/blinded. The bottom rows in Figures 1, 5 clearly show that our results align with the sampling-based MCMC results and with the Maximum Likelihood (ML) estimates. Whereas the marginal posterior approximations are rather close between EMGVB and MGVB, the top rows in Figures 1, 5 show that the parameter learning is qualitatively different. The panels in Figure 1 depict the LB optimization process across the iterations. In a diagonal posterior setting, MGVB is exact and aligns with EMGVB (middle panel); however, for non-diagonal posteriors, EMGVB's lower bound shows an improved convergence rate on both the training and test sets (left and right panels respectively). Furthermore, we observe that the adoption of the h-function estimator has a minimal impact. From the point of view of standard performance measures, Figure 2 shows that, compared to MGVB, at early iterations EMGVB displays a steeper growth in model accuracy, precision, recall, and f1-score, both on the training set and the test set.
Ultimately EMGVB and MGVB measures converge to the same value, yet the exact nature of the EMGVB update leads to convergence after approximately 200 iterations on the training set, as opposed to 500 for MGVB. A similar behavior is observed for the FIGARCH(1,1,1) model, top row of Figure 5. Tables 1 and 2 report such performance measures for the optimized variational posterior, along with the value of the maximized lower bound. EMGVB is very close to the baselines and well-aligned with the MCMC and ML estimates, which, along with the estimates (in Table 3 for the logistic regression), show that EMGVB converges towards the same LB maximum, with a comparable predictive ability with respect to the alternatives. It is thus not surprising that the estimates, performance metrics, and value of the optimized LB are similar across the optimizers: they all converge to the same optimum, but in a qualitatively different way. Also the estimated variational covariance matrix, in the full-covariance cases, closely replicates the one from the MCMC chain (see Tables 4, 6). For the diagonal cases, MCMC and ML covariance matrices are not suitable for a direct comparison (see Appendix C.3). The sanity check in Figure 4 furthermore shows that the learning of both the mean and covariance variational parameters is smooth and steady, without oscillations or anomalies. As expected, the non-diagonal version leads to faster convergence, while the use of the h-function estimator slightly stabilizes the learning process. In Table 7 we also show that the impact of the number of MC samples S on posterior means, likelihood, performance measures, and the optimized lower bound is minor, for both the training and test phases.

6. CONCLUSION

Within a Gaussian variational framework, we propose an algorithm based on manifold optimization that guarantees the positive-definite constraint on the covariance matrix, employing exact analytical solutions for the natural gradients. Our black-box optimizer results in a ready-to-use solution for VI, scalable to structured covariance matrices, that can take advantage of control variates, momentum, and alternative forms of the stochastic gradient estimator. We show the feasibility of our solution on a multitude of models. Our approach aligns with sampling methods and provides advantages over state-of-the-art baselines. Future research may investigate the applicability of our approach to a broader set of variational distributions, explore the advantages and limitations of the black-box framework, or attempt to address the online-inversion bottleneck of manifold-based VI.

A FURTHER CONSIDERATIONS ON EMGVB IMPLEMENTATION

A.1 VARIANCE REDUCTION

As EMGVB does not involve model gradients, the use of the reparametrization trick (RT) (Blundell et al., 2015) is not immediate. While eq. 5 would generally hold, the form of the EMGVB gradient estimators under the RT would differ from eqs. 7, 8: we develop EMGVB as a general and ready-to-use solution for VI that does not require model-specific derivations, yet one may certainly enable the RT within EMGVB. Though the use of the RT is quite popular in VI and ML, as it empirically yields more accurate estimates of the gradient of the variational objective than alternative approaches (Mohamed et al., 2020), note that the variance of the RT estimator can be higher than that of the score-function estimator, and the path-wise RT estimator is not necessarily preferable (Xu et al., 2019; Mohamed et al., 2020). Not less importantly, note that the use of the score estimator is broader, as it does not require log p(y|θ) to be differentiable. Control Variates (CV) stand as a simple and effective approach for reducing the variance of the MC gradient estimator, e.g. (Paisley et al., 2012). The CV estimator

(1/S) ∑_{s=1}^S ∇_ζ[log q(θ_s)](log p(y|θ_s) - c)

is unbiased for the expected gradient, but of equal or smaller variance than the naive MC one. The optimal c minimizing the variance of the CV estimator is c⋆ = Cov(∇_ζ[log q(θ)] log p(y|θ), ∇_ζ log q(θ))/Var(∇_ζ log q(θ)). By enabling CVs, S can be tuned to balance the estimates' variance and computational performance. In Table 7 we assess that for logistic regression values of S as small as 10 appear satisfactory, yet if the iterative computation of the log-likelihood is not prohibitive we suggest the adoption of a more generous value, e.g. S ≈ 100. Magris et al. (2022) furthermore show that the denominator in 21 is analytically tractable for a Gaussian q, reducing the variance of the estimated c⋆ and thus improving the overall CVs' efficiency.
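A per-coordinate sketch of the CV estimator and the plug-in c⋆ above (illustrative NumPy code; in practice c⋆ would be estimated from samples independent of those used for the gradient, to preserve unbiasedness):

```python
import numpy as np

def cv_gradient(scores, logliks):
    """Control-variate score-function estimator, per coordinate:
      g_hat = mean_s scores_s * (loglik_s - c*),
      c*    = Cov(scores * loglik, scores) / Var(scores).
    scores: (S, d) array of grad log q(theta_s); logliks: (S,) array.
    Subtracting c leaves the expectation unchanged since E[scores] = 0."""
    f = scores * logliks[:, None]                  # (S, d)
    var = scores.var(axis=0)
    cov = ((f - f.mean(0)) * (scores - scores.mean(0))).mean(axis=0)
    c = cov / np.maximum(var, 1e-12)
    return (scores * (logliks[:, None] - c)).mean(axis=0)
```

When the log-likelihood is nearly constant across draws, c⋆ absorbs it almost entirely and the estimator's variance collapses towards zero.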
If model gradients are available one may use CV along with the RT to further enhance the efficiency of the expected gradient estimation.

A.2 LB SMOOTHING AND STOPPING CRITERION

The stochastic nature of the gradient estimator introduces some noise in the estimated LB L̂ that can violate its expected non-decreasing behavior across the iterations. By setting a window of size w, we rather consider the moving average of the LB, L̄_t = (1/w) ∑_{i=1}^w L̂_{t-i+1}, whose variance is reduced and whose behavior is stabilized. By keeping track of max L̄, we terminate the learning after max L̄ has not improved for P iterations (patience parameter) or after a maximum number of iterations (t_max) is reached (stopping criterion, the f_exit function in Algorithm 1).
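The smoothing-plus-patience rule can be sketched as follows (illustrative Python; `should_stop` and its arguments are assumptions standing in for the f_exit function of Algorithm 1):

```python
import numpy as np

def should_stop(lb_history, w=10, patience=50, t_max=5000):
    """Stop when the running maximum of the moving-average (window w) of
    the noisy LB estimates has not improved for `patience` iterations,
    or when t_max iterations are reached."""
    t = len(lb_history)
    if t >= t_max:
        return True
    if t < w + patience:
        return False
    smooth = np.convolve(lb_history, np.ones(w) / w, mode="valid")
    best_idx = int(np.argmax(smooth))
    return (len(smooth) - 1 - best_idx) >= patience
```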

A.3 CONSTRAINTS ON MODEL PARAMETERS

EMGVB assumes a Gaussian variational posterior, that is, the parameters are unbounded and defined over the entire real line. Assuming that a model parameter θ is required to lie in a support S, to impose such a constraint it suffices to identify a feasible transform T : R → S and apply the EMGVB update to the unconstrained parameter ψ = T^{-1}(θ). Certainly, by applying VI on ψ we require that the variational posterior assumption holds for ψ rather than for θ. The actual distribution of θ under a Gaussian variational ψ can be computed (or approximated with a sampling method) as N(T^{-1}(θ); µ, Σ)|det(J_{T^{-1}}(θ))|, with J_{T^{-1}} the Jacobian of the inverse transform (Kucukelbir et al., 2015). Example. For the GARCH(1,1) model (see Section 5), the intercept ω, the autoregressive coefficient α of the lag-one squared return, and the moving-average coefficient β of the lag-one conditional variance need to satisfy the stationarity condition α + β < 1 and ω > 0, α ≥ 0, β ≥ 0. Such conditions are unfeasible under a Gaussian variational approximation: we estimate the unconstrained parameters ψ_ω, ψ_α, ψ_β, where ω = T(ψ_ω), α = T(ψ_α)(1 - T(ψ_β)), β = T(ψ_α)T(ψ_β), with T(x) = exp(x)/(1 + exp(x)) for x real, on which the Gaussian prior-posterior assumptions apply.
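The GARCH(1,1) transform of the example can be sketched as (illustrative NumPy code; `garch_params` is an assumed helper name):

```python
import numpy as np

def garch_params(psi):
    """Map unconstrained psi = (psi_omega, psi_alpha, psi_beta) in R^3 to
    GARCH(1,1) parameters via omega = T(psi_omega),
    alpha = T(psi_alpha)(1 - T(psi_beta)), beta = T(psi_alpha) T(psi_beta),
    with T the logistic function, so that omega > 0, alpha, beta >= 0 and
    alpha + beta = T(psi_alpha) < 1 by construction."""
    t = 1.0 / (1.0 + np.exp(-np.asarray(psi, dtype=float)))
    omega = t[0]
    alpha = t[1] * (1.0 - t[2])
    beta = t[1] * t[2]
    return omega, alpha, beta
```

Note the design: rather than transforming α and β separately, the pair is parametrized through their sum T(ψ_α) and split T(ψ_β), which enforces the joint stationarity constraint automatically.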

A.4 GRADIENT CLIPPING

Especially for low values of S, and even more so if a variance-control method is not adopted, the stochastic gradient estimate may be poor and the offset from its actual value large. This may result in updates whose magnitude is too big, either in a positive or negative direction. Especially at early iterations, and with poor initial values, this issue may e.g. cause complex roots in eq. 15. At each iteration t, to control for the magnitude of the stochastic gradient ĝ_t, we rescale its ℓ2-norm ||ĝ_t|| whenever it is larger than a fixed threshold l_max, by replacing ĝ_t with ĝ_t l_max/||ĝ_t||, which caps the norm at l_max while preserving the gradient's direction. Gradient clipping can be applied either to the gradients ∇̃_µ, ∇̃_Σ or to the natural gradients Σ∇_µ, −2∇_Σ, and in any case before obtaining momentum gradients. We suggest applying gradient clipping directly to ∇̃_µ, ∇̃_Σ to promptly mitigate the impact that far-from-the-mean estimates may have on successive computations.
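The clipping rule above can be sketched as (illustrative helper name):

```python
import numpy as np

def clip_gradient(g, l_max):
    """If ||g||_2 > l_max, rescale g to g * l_max / ||g||_2, which caps
    the norm at l_max while preserving the gradient's direction."""
    norm = np.linalg.norm(g)
    return g * (l_max / norm) if norm > l_max else g
```

Gradients below the threshold pass through unchanged; gradients above it are shrunk onto the ball of radius l_max.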

A.5 ADAPTIVE LEARNING RATE

It is convenient to adopt an adaptive learning rate or a scheduler decreasing β after a certain number of iterations. Typical options are reducing β by a certain factor (e.g. 0.2) every set number of iterations (e.g. 100), or decreasing it after iteration t′, e.g. by setting β_t = min(β, β t′/t), where t′ is a fraction (e.g. 0.7) of the maximum number of iterations t_max allowed before the LB optimization is stopped.
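The second scheduler can be written as a one-liner (a sketch; the function name is illustrative):

```python
def scheduled_lr(beta, t, t_prime):
    """beta_t = min(beta, beta * t_prime / t): the rate stays at beta up
    to iteration t_prime and then decays as 1/t."""
    return min(beta, beta * t_prime / t)
```

For example, with β = 0.1 and t′ = 700, the rate is 0.1 for t ≤ 700 and halves by t = 1400.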

A.6 CLASSIFICATION VS. REGRESSION

We point out that the EMGVB framework is applicable to both regression and classification problems. In generic DL classification problems, predictions are based on the class of maximum probability, which is computed by applying a softmax function at the last layer, returning the probability p_i(c_j) of a certain class c_j for the i-th sample, i = 1, ..., M. From these probabilities it is straightforward to compute the model log-likelihood as Σ_{i=1}^{M} y_{i,c_true} log p_i(c_true), with y_{i,c_true} representing the one-hot encoding of the i-th sample, whose true class is c_true. For regression, the parametric form of log p(y|θ) is clearly different and model-specific (e.g. regression with normal errors as opposed to Poisson regression, with the latter being feasible as the use of the score estimator does not require the likelihood to be differentiable). Note, however, that additional parameters may enter into play besides the ones involved in the backbone forward model: e.g. for regression with normal errors tackled with an artificial neural network, the Gaussian-form likelihood involves the regression variance, which is an additional parameter over the network's ones, or, for Student-t errors, the degrees-of-freedom parameter ν (with the constraint ν > 2). See the application in Appendix C.3.
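The classification log-likelihood above can be computed from the last-layer logits as follows (illustrative helper; a numerically stabilized softmax is used):

```python
import numpy as np

def classification_loglik(logits, y_onehot):
    """Sum_i y_{i,c_true} log p_i(c_true), with p_i the softmax of the
    last-layer logits for the i-th sample."""
    z = logits - logits.max(axis=1, keepdims=True)   # stabilized softmax
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float((y_onehot * log_p).sum())
```

For uniform logits over three classes, each sample contributes log(1/3) to the log-likelihood.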

A.7 MEAN-FIELD VARIANT

Assume that for a d-variate model the Gaussian variational posterior is factorized as q_ζ(θ) = q_{ζ_1}(θ_1) q_{ζ_2}(θ_2) ··· q_{ζ_h}(θ_h) = ∏_{i=1}^{h} q_{ζ_i}(θ_i), with h ≤ d. If h = d, this corresponds to a fully diagonal case where each θ_i is a scalar and all the covariances between the parameters are ignored. If h < d, the variational covariance matrix Σ of q_ζ corresponds to a block-diagonal matrix, and some of the θ_i are indeed vectors. In any case, the expected gradients with respect to each block of parameters can be computed independently, given the scalar h_ζ(θ) or log p(y|θ), depending on whether the h-function estimator is used. For a Gaussian prior, its covariance matrix can be diagonal, full, or block-diagonal, with a structure matching or not that of Σ. Eqs. 11, 12, with the condition 29, can be used as a starting point to derive case-specific EMGVB variants based on the form of the prior covariance. Algorithm 2 summarizes the case with an isotropic Gaussian prior of zero mean and variance τ, using the gradient estimator based on the log-likelihood: µ_i and Σ_i (Σ_i^{-1}) respectively denote the mean and the covariance (precision) matrix of the i-th block of Σ. In this case, the block-wise natural gradients are estimated as

∇̃_{µ_i} L = −Σ_i τ^{-1} µ_i + (1/S) Σ_{s=1}^{S} [(θ_{si} − µ_i) log p(y|θ_s)],
∇̃_{Σ_i^{-1}} L = −2 ∇̃_{Σ_i} L = −Σ_i^{-1} + τ^{-1} I + (1/S) Σ_{s=1}^{S} [Σ_i^{-1} − Σ_i^{-1} (θ_{si} − µ_i)(θ_{si} − µ_i)^⊤ Σ_i^{-1}] log p(y|θ_s),

where θ_s is a sample from the variational posterior. θ_s can be obtained by concatenating marginal samples from each block, θ_s = [θ_{s1}, ..., θ_{sh}], with θ_{si} ∼ q_{µ_i,Σ_i}, i = 1, ..., h.

Algorithm 2: EMGVB for a block-diagonal covariance matrix (prior with zero mean and covariance matrix τI); steps 1-5 set the hyper-parameters, the gradient estimator, the prior, the initial values, and the initial draws.
6: Compute: log p(y|θ_s), log p(θ_s)
7: for i = 1, ..., h do
8:   ĝ_{µ_i} = ∇̃_{µ_i} L, ĝ_{Σ_i^{-1}} = −2 ∇̃_{Σ_i} L
9:   m_{µ_i} = ĝ_{µ_i}, m_{Σ_i^{-1}} = ĝ_{Σ_i^{-1}}
10: end for
11: while Stop = false do
12:   L = 0
13:   for i = 1, ..., h do
14:     µ_i = µ_i + β m_{µ_i}
15:     Σ^{-1}_{old,i} = Σ_i^{-1}, Σ_i^{-1} = R_{Σ_i^{-1}}(β m_{Σ_i^{-1}})
16:   end for
17:   Generate: θ_s = [θ_{s1}, ..., θ_{sh}], θ_{si} ∼ q_{µ_i,Σ_i}, s = 1, ..., S, i = 1, ..., h
18:   Compute: log p(y|θ_s), log p(θ_s)
19:   for i = 1, ..., h do
20:     Compute: ĝ_{µ_i}, ĝ_{Σ_i^{-1}}
21:     m_{µ_i} = ω m_{µ_i} + (1 − ω) ĝ_{µ_i}
22:     m_{Σ_i^{-1}} = ω T_{Σ^{-1}_{old,i} → Σ_i^{-1}}(m_{Σ_i^{-1}}) + (1 − ω) ĝ_{Σ_i^{-1}}
23:     L = L − (1/S) Σ_s log q_{µ_i,Σ_i}(θ_{si})
24:   end for
25:   L_t = L + (1/S) Σ_s [log p(θ_s) + log p(y|θ_s)], t = t + 1, Stop = f_exit(L̄, P, t_max)
26: end while
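A Monte Carlo sketch of the block-wise natural-gradient estimates above, for an isotropic zero-mean prior with variance τ (array and function names are illustrative; `logliks` holds log p(y|θ_s) for each joint draw):

```python
import numpy as np

def block_nat_grads(mu_i, Sigma_i, theta_i, logliks, tau):
    """MC estimates of the block-wise natural gradients for block i.
    theta_i: (S, d_i) marginal samples for the block; logliks: (S,)
    values of log p(y | theta_s) at the joint draws."""
    S = theta_i.shape[0]
    P_i = np.linalg.inv(Sigma_i)                      # block precision
    # Natural gradient w.r.t. mu_i: prior term plus MC term.
    g_mu = -Sigma_i @ (mu_i / tau) + np.mean(
        (theta_i - mu_i) * logliks[:, None], axis=0)
    # Natural gradient w.r.t. Sigma_i^{-1} (i.e. -2 times the natural
    # gradient w.r.t. Sigma_i): prior term plus MC term.
    acc = np.zeros_like(Sigma_i)
    for s in range(S):
        d = (theta_i[s] - mu_i)[:, None]
        acc += (P_i - P_i @ d @ d.T @ P_i) * logliks[s]
    g_P = -P_i + np.eye(len(mu_i)) / tau + acc / S
    return g_mu, g_P
```

With a zero log-likelihood signal, only the prior terms survive, which gives a quick sanity check on the implementation.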

B OPTIMALITY AND EFFICIENCY B.1 OPTIMALITY

Several authors (e.g. Khan & Lin, 2017; Khan et al., 2018a) obtained update rules for VI by developing over SGD-like updates for the natural parameters of the variational posterior. By updating the natural parameter λ and exploiting its definition, it is relatively simple to recover the update rules for µ and Σ. Indeed, from the SGD update for the natural parameter, λ_{t+1} = λ_t + β ∇̃_λ L(λ_t), it follows that

µ_{t+1} = Σ_{t+1} (Σ_t^{-1} µ_t + β [∇̃_{λ_1} L(λ_t)]) = Σ_{t+1} ( (Σ_t^{-1} − 2β[∇_Σ L(λ_t)]) µ_t + β[∇_µ L(λ_t)] ),  (22)

and

Σ_{t+1}^{-1} = Σ_t^{-1} − 2β[∇_Σ L(λ_t)].  (23)

By replacing Σ_t^{-1} − 2β[∇_Σ L(λ_t)] with Σ_{t+1}^{-1} in the update for µ, Khan & Lin (2017); Khan et al. (2018a) obtain µ_{t+1} = µ_t + β Σ_{t+1} [∇_µ L(λ_t)]. Eq. 23 does not apply to the EMGVB update, as the update for Σ^{-1} is carried out with a retraction and does not result from an SGD update of the natural parameter λ_2 = −(1/2) Σ_{t+1}^{-1}; yet the form of eq. 22 does apply. We refer to eq. 23 as an indirect update, since it is derived from the natural-parameter update. Note that the µ update exploits Σ_{t+1}^{-1}, resulting in a one-step forward-looking rule. It is relevant to investigate whether update 22 is preferable to the EMGVB update for µ. Intuitively, one might expect that eq. 22 is preferable, as it readily exploits the updated Σ_{t+1} value as soon as it becomes available. The following theorem, however, proves that this is not the case; a proof is provided in Appendix D.4.

Theorem 1. For the Gaussian distribution with parameters ζ = (µ, vec(Σ)), the optimization problem

ζ_{t+1} = arg max_ζ { ⟨ζ, ∇_ζ L(ζ_t)⟩ − (1/β) D_KL(q_ζ || q_{ζ_t}) },

where ⟨·,·⟩ denotes the inner product and ∇_ζ L(ζ_t) = (∇_µ L(ζ), vec(∇_Σ L(ζ)))|_{ζ=ζ_t}, has a concave objective with respect to ζ. The optimal update for µ is available in closed form and analogous to that of the EMGVB update.
The objective in Theorem 1 is that of the mirror descent developed by Nemirovskij & Yudin (1983), where a non-Euclidean geometry is induced by considering a penalized optimization obtained through a proximity function such as the Bregman divergence, which equals the KL divergence for exponential-family distributions. In this regard, see (Raskutti & Mukherjee, 2015). Following Theorem 1, the update for µ in eq. 5 is optimal in terms of the above objective, and, perhaps counter-intuitively, the indirect forward-looking update 22 is proved to provide non-optimal steps toward the maximization of the lower bound. The EMGVB update for µ is thus preferable over the alternative of recovering an indirect update rule for µ starting from an SGD update on the natural parameter, as e.g. in (Khan & Lin, 2017; Khan et al., 2018a): in the terms of Theorem 1, the EMGVB update for µ is the best one could take.

B.2 COMPUTATIONAL ASPECTS

In terms of computational complexity, the exact EMGVB implementation comes at no additional cost. Actually, computing the natural gradient in EMGVB as −2∇_Σ L is cheaper than the one in MGVB, Σ∇_Σ L Σ, which requires O(k³) operations for each matrix multiplication. However, both MGVB and EMGVB share a cumbersome matrix inversion. Going back to eqs. 12 and 11, it is noticeable that, under the most general estimator based on the h-function, Σ is not involved in any computation, neither in the gradient involved in the update for Σ^{-1} nor in that for µ, suggesting that implicitly the EMGVB optimization routine does not require the inversion of Σ^{-1}. The above point, however, ignores that the update for Σ^{-1} is masked by the underlying retraction. For the retraction form in eq. 14, both Σ^{-1} and its inverse Σ are needed, thus implying a matrix inversion at every iteration. That is, the covariance matrix inversion is implicit in both the MGVB and EMGVB methods, which require Σ^{-1} and Σ at every iteration (with little surprise, as the form of the retraction is a second-order approximation of the exponential map). With the h-function estimator, even though neither eq. 7 nor eq. 8 involves Σ, the inversion of Σ^{-1} is still necessary, as Σ is required in the retraction. Similarly, the adoption of the log-likelihood estimator under the Gaussian regime in eq. 29 is not computationally more expensive than the h-function case, as Σ, involved in c_µt, is anyway required in the retraction. As outlined in Section 4, Σ can be conveniently recovered from the Cholesky factor of Σ^{-1}, with a smaller number of flops. Lastly, if Σ^{-1} (Σ) is diagonal, the inversion is trivial and, when applicable, eq. 11 is preferred.

Table 7: Estimated parameters and performance measures on the labour dataset for EMGVB (full posterior) for different sizes of the number of MC draws S used for the estimation of the stochastic gradients. t refers to the run-time per iteration (in milliseconds), L(θ_0) to the LB evaluated at the initial parameters. For each S, a common random seed is used.
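The Cholesky route for recovering Σ from Σ^{-1} mentioned above can be sketched as follows (a minimal NumPy illustration, not the paper's implementation):

```python
import numpy as np

def cov_from_precision(P):
    """Recover Sigma from the Cholesky factor of the precision matrix
    P = Sigma^{-1} = L L^T: solving L X = I for the (triangular) factor
    gives L^{-1}, and Sigma = L^{-T} L^{-1}."""
    L = np.linalg.cholesky(P)                       # lower-triangular factor
    Linv = np.linalg.solve(L, np.eye(P.shape[0]))   # L^{-1}
    return Linv.T @ Linv                            # Sigma = L^{-T} L^{-1}
```

The result satisfies Σ Σ^{-1} = I and inherits symmetry from the factorization.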

C.2 VOLATILITY MODELS

Our second set of experiments involves the estimation of several GARCH-family volatility models. The models in Table 8 differ in the number of estimated parameters, the form of the likelihood function (which can be quite complex, as for the FIGARCH models), and the constraints imposed on the parameters. Besides the GARCH-type models, we include the well-known linear HAR model for realized volatility (Corsi, 2009). We performed a preliminary study to retain only relevant models: e.g., we observed that for a GARCH(1,0,2) β_2 is not significant, so we trained a GARCH(1,0,1), and that the autoregressive coefficient of the squared innovations is significant only at lag one, so we did not consider further lags for α. For α, β, γ we restricted the search up to lag 2. Except for HAR's parameter β_3, all the parameters of all the models are statistically significant under standard ML at the 5% level. Note that the aim of this experiment is that of applying VI and EMGVB to the above class of models, not to discuss their empirical performance or forecasting ability. For the reader unfamiliar with the above (standard) models, their discussion, and notation, we refer e.g. to the accessible introduction of Teräsvirta (2009). As for the Labour data, we report the values of the smoothed lower bound computed at the optimized parameter, L(θ⋆), the model's log-likelihood at the estimated posterior parameter, p(y|θ⋆), and the MSE between the fitted values and the squared daily returns, the latter used as a volatility proxy. Details on the data and hyperparameters are provided in Table 11. Figure 6 provides sample illustrations of the lower bound maximization for the GJR(1,1,1) model (perhaps the most used and effective in applications beyond the standard GARCH(1,0,1)) and the FIGARCH(1,1,1) model, the most complex one among our selection due to the form of the likelihood, the constraints, and its econometric interpretation.
In general, beyond Figure 6, we witness a slight but consistent improvement in the convergence of the lower bound towards its maximum on the train set for EMGVB with respect to MGVB, convergence of the LB at a similar level on the test sets, and MSEs that eventually converge to rather similar values but that in some cases can be quite different at early iterations (which is expected, but irrelevant in applications, as at θ⋆ the measures are rather analogous). These observations are quantitatively supported by the results in Table 8, where all the optimizers lead to rather similar estimates and statistics. A visual inspection of the marginal densities, e.g. in Figures 7 and 5, reveals that in general both EMGVB and MGVB perform quite well compared to MCMC sampling and that the variational Gaussian assumption is quite feasible for all the volatility models. Note that the skew observed e.g. in Figure 7 for the ω parameter and the non-standard form of e.g. ψ for the FIGARCH models are due to the parameter transformation: VI is applied on the unconstrained parameters (ψ_ω, ψ_ϕ), and such variational Gaussians are back-transformed onto the original constrained parameter space, where the distributions are generally no longer Gaussian (appendix Figure 5 as opposed to Figure 5 in the main text).

The regression errors satisfy ε_t ∼ N(0, σ²), and the covariates respectively correspond to the S&P 500 index, Japanese Nikkei index, Brazilian Bovespa index, German DAX index, UK FTSE index, MSCI European index, and MSCI emerging-market index. We estimate the coefficients β_0, ..., β_7 and the transformed parameter ψ_σ = log(σ), from which σ (the standard error of the disturbances) is computed as σ = exp(ψ_σ + Var(ψ_σ)/2), with Var(ψ_σ) read from the variational posterior covariance matrix, while for ML regression σ corresponds to the residuals' root mean squared error.
We consider the following structures for the variational posterior: (i) full covariance matrix (Full), (ii) diagonal covariance matrix (Diagonal), (iii) block-diagonal structure with two blocks of sizes 8 × 8 and 1 × 1 (Block 1), and (iv) block-diagonal structure with blocks of sizes 1 × 1, 3 × 3, 2 × 2, 2 × 2 and 1 × 1 (Block 2). Case (iii) models the covariance between the actual regressors but ignores their covariance with the regression standard error. Case (iv) groups in the 3 × 3 block the indices traded in non-European stock exchanges, and in the remaining 2 × 2 blocks the indices referring to European exchanges and the two MSCI indexes. Furthermore, the covariances between the intercept and the regression's standard error with all the other variables are set to zero. The purpose of this application is that of providing an example for Algorithm 2 and the discussion in Appendix A.7, rather than providing an effective forecasting model supported by a solid econometric rationale. Yet, structures (iii) and (iv) correspond to a quite intuitive grouping of the variables involved in our regression problem, motivating the choice of the dataset. Tables 9 and 10 summarize the estimation results. Table 9 shows that the impact of the different structures of the covariance matrix is somewhat marginal in terms of the performance measures, with respect to each other and with respect to the ML estimates. As for the logistic regression example, we observe that in the most constrained cases (ii) and (iii) the estimates of certain posterior means slightly deviate from the other cases, indicating that the algorithm perhaps terminates at a different local maximum.
Regarding the variational covariances reported in Table 10, there is remarkable accordance between the covariance structures (i), (iii) and ML, while for the diagonal structure (ii) and the block-diagonal structure (iv) the covariances are misaligned with the ML and full-covariance ones, further suggesting that the algorithms converge at different maxima of the lower bound. From a theoretical perspective, if Σ is the covariance matrix of the joint distribution of the eight variates (case (i)), by the properties of the Gaussian distribution, the blocks e.g. in case (iv) and the diagonal entries should match the corresponding elements in Σ. It is however not surprising to observe that the elements in the sub-matrices, e.g. in cases (ii) and (iv), deviate from those of Σ. Indeed, the results refer to independent optimizations of alternative models (over the same dataset) that are not granted to converge at the same maximum (and thus distribution). Across the covariance structures (i) to (iv), the optimal variational parameters correspond to different multivariate distributions that independently maximize the lower bound and that are not constrained to be related to each other. This is indeed confirmed by the differences in the maximized lower bound L(θ⋆) in Table 9 and in the different levels at which the curves in Figure 8 are observed to converge. Thus the blocks in the covariance matrix under case (iv) do not match the entries in Σ. In this light, the ML estimates' variances in the third panel of Table 10 can be compared to those of case (i), but are misleading for the other cases, as the covariance matrix of the asymptotic (Gaussian) distribution of the ML estimator is implicitly full.

Preliminaries: noticing that for exponential-family distributions I_ζ = ∇_λ m and using the chain rule, ∇_λ L(λ) = [∇_λ m] ∇_m L(m) = I_ζ ∇_m L(m), implying that the natural gradient ∇̃_λ L(λ) = I_ζ^{-1} ∇_λ L(λ) = ∇_m L(m) can be easily computed as the Euclidean gradient with respect to the expectation parameters, without requiring the inverse FIM (Khan & Lin, 2017; Khan et al., 2018a). Khan et al.
(2018a) derive ∇_{m_1} L(m) = ∇_µ L(m) − 2[∇_Σ L(m)]µ and ∇_{m_2} L(m) = ∇_Σ L(m), which allows expressing the Euclidean gradients with respect to the expectation parameters as Euclidean gradients with respect to µ and Σ, thus providing an exact relationship between the natural gradients of the LB and its Euclidean gradients with respect to the common (µ, Σ) parametrization of q_λ. Note that the above (and the following) applies to Gaussian distributions only.

Proof: The first natural gradient in Proposition 1 is trivial, as it follows from the definition of natural gradient and the Gaussian FIM in eq. 1. If ξ ≡ ξ(λ) is a smooth reparametrization of the variational density, then I_ξ = −E_{q_ξ(θ)}[∇²_ξ log q_ξ(θ)] = J I_λ J^⊤, with J = ∇_ξ λ the Jacobian matrix (Lehmann & Casella, 1998). If in addition ξ is an invertible function of λ, then J is itself invertible. Therefore, for Σ^{-1} = −2λ_2, the above implies

I_{Σ^{-1}}^{-1} = [(∇_{Σ^{-1}} λ_2)^{-1}]^⊤ I_{λ_2}^{-1} (∇_{Σ^{-1}} λ_2)^{-1}, with (∇_{Σ^{-1}} λ_2)^{-1} = ∇_{λ_2} Σ^{-1}.

Thus, for the natural gradient ∇̃_{Σ^{-1}} L,

∇̃_{Σ^{-1}} L = I_{Σ^{-1}}^{-1} ∇_{Σ^{-1}} L = I_{Σ^{-1}}^{-1} (∇_{Σ^{-1}} λ_2) ∇_{λ_2} L = [(∇_{Σ^{-1}} λ_2)^{-1}]^⊤ I_{λ_2}^{-1} (∇_{Σ^{-1}} λ_2)^{-1} (∇_{Σ^{-1}} λ_2) ∇_{λ_2} L = [∇_{λ_2} Σ^{-1}]^⊤ ∇̃_{λ_2} L = −2 ∇̃_{λ_2} L.

From eqs. 25 and 26, ∇̃_{λ_2} L = ∇_{m_2} L = ∇_Σ L, so that ∇̃_{Σ^{-1}} L = −2 ∇̃_{λ_2} L = −2 ∇_Σ L, which proves the proposition.
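The identity ∇̃_{Σ^{-1}} L = 2Σ^{-1} [∇_{Σ^{-1}} L] Σ^{-1} = −2∇_Σ L can be checked numerically on a test function; the sketch below (an illustrative check, not part of the paper's derivation) uses L(Σ) = tr(AΣ) with symmetric A, for which ∇_Σ L = A and, viewing L as a function of the precision P = Σ^{-1}, ∇_P L = −ΣAΣ:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
A = (A + A.T) / 2                         # symmetric test matrix
S = rng.standard_normal((3, 3))
Sigma = S @ S.T + 3 * np.eye(3)           # SPD covariance

P = np.linalg.inv(Sigma)                  # precision Sigma^{-1}
grad_Sigma = A                            # gradient of L = tr(A Sigma) w.r.t. Sigma
grad_P = -Sigma @ A @ Sigma               # same L seen as a function of P

lhs = 2 * P @ grad_P @ P                  # 2 Sigma^{-1} (grad w.r.t. P) Sigma^{-1}
rhs = -2 * grad_Sigma                     # -2 grad w.r.t. Sigma
assert np.allclose(lhs, rhs)
```

The two sides agree to numerical precision, as the proposition prescribes.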

D.2 GENERAL FORM OF THE EMGVB UPDATE

For a prior p ∼ N(µ_0, Σ_0) and a variational posterior q ∼ N(µ, Σ), by rewriting the LB as

L(ζ) = E_{q_ζ}[h_ζ(θ)] = E_{q_ζ}[log p(θ) − log q_ζ(θ) + log p(y|θ)] = E_{q_ζ}[log p(θ) − log q_ζ(θ)] + E_{q_ζ}[log p(y|θ)],

we decompose ∇_ζ L as ∇_ζ E_{q_ζ}[log p(θ) − log q_ζ(θ)] + ∇_ζ E_{q_ζ}[log p(y|θ)]. As in 6, we apply the log-derivative trick on the last term and write ∇_ζ E_q[log p(y|θ)] = E_{q_ζ}[∇_ζ[log q_ζ(θ)] log p(y|θ)]. On the other hand, it is easy to show that, up to a constant that does not depend on µ and Σ,

E_{q_ζ}[log p(θ) − log q_ζ(θ)] = −(1/2) log |Σ_0| + (1/2) log |Σ| + d/2 − (1/2) tr(Σ_0^{-1} Σ) − (1/2)(µ − µ_0)^⊤ Σ_0^{-1} (µ − µ_0),

so that

∇_Σ E_{q_ζ}[log p(θ) − log q_ζ(θ)] = (1/2) Σ^{-1} − (1/2) Σ_0^{-1},
∇_µ E_{q_ζ}[log p(θ) − log q_ζ(θ)] = −Σ_0^{-1}(µ − µ_0).

For the natural gradients, we have

∇̃_{Σ^{-1}} E_{q_ζ}[log p(θ) − log q_ζ(θ)] = −2 ∇_Σ E_{q_ζ}[log p(θ) − log q_ζ(θ)] = −Σ^{-1} + Σ_0^{-1},
∇̃_µ E_{q_ζ}[log p(θ) − log q_ζ(θ)] = Σ ∇_µ E_{q_ζ}[log p(θ) − log q_ζ(θ)] = −Σ Σ_0^{-1}(µ − µ_0),

while the feasible naive estimators for ∇̃_µ E_{q_ζ}[log p(y|θ)] and ∇̃_{Σ^{-1}} E_{q_ζ}[log p(y|θ)] turn out analogous to the right-hand sides of eqs. 9, 10 with h_ζ replaced by log p(y|θ). This leads to the general form of the EMGVB update, based either on the h-function gradient estimator (generally applicable) or on the above decomposition (applicable under a Gaussian prior):

∇̃_µ L(ζ_t) ≈ c_µt + (1/S) Σ_{s=1}^{S} [(θ_s − µ_t) log f(θ_s)],  (27)
∇̃_{Σ^{-1}} L(ζ_t) ≈ C_Σt + (1/S) Σ_{s=1}^{S} [Σ_t^{-1} − Σ_t^{-1}(θ_s − µ_t)(θ_s − µ_t)^⊤ Σ_t^{-1}] log f(θ_s),  (28)

where

C_Σt = −Σ_t^{-1} + Σ_0^{-1}, c_µt = −Σ_t Σ_0^{-1}(µ_t − µ_0), log f(θ_s) = log p(y|θ_s), if p is Gaussian;
C_Σt = 0, c_µt = 0, log f(θ_s) = h_ζt(θ_s), whether p is Gaussian or not.  (29)

D.3 JUSTIFICATION FOR THE EMGVB UPDATE

For any positive-definite matrix S, the Riemann gradient of a differentiable function L is S ∇_S L S (Hosseini & Sra, 2015).
This is the form of the Riemann gradient obtained from the SPD (Symmetric Positive Definite) matrix manifold, for which the following retraction and parallel-transport equations apply:

R_S(ξ) = S + ξ + (1/2) ξ S^{-1} ξ, with ξ ∈ T_S M,  (30)
T_{S_t → S_{t+1}}(ξ) = E ξ E^⊤, with E = (S_{t+1} S_t^{-1})^{1/2}, ξ ∈ T_S M,  (31)

with ξ being the rescaled Riemann gradient β S ∇_S L S obtained from the SPD manifold. β > 0 rescales the tangent vector and is arbitrary. From an algorithmic perspective, β is interpreted as a learning rate, driving the magnitude of the gradient component in the retraction. For the precision matrix Σ^{-1}, the SPD Riemann gradient reads

Σ^{-1} ∇_{Σ^{-1}} L Σ^{-1}.  (32)

On the other hand, for the natural gradient, ∇̃_{Σ^{-1}} L = −2∇_Σ L = 2 Σ^{-1} ∇_{Σ^{-1}} L Σ^{-1}, where the first equality comes from Proposition 1 and the second is easy to prove with simple matrix algebra, and is furthermore analogous to the form of eq. 2. In this regard, more can be found in (Barfoot, 2020). The natural gradient is obtained from the Gaussian manifold, so that applying the above SPD-manifold retraction and parallel-transport equations to it is technically incorrect. The concept of retraction is general, and indeed eq. 4 denotes a generic retraction function R_{Σ^{-1}}(·), whose specific form is specified in Section 4.2 and coincides with eq. 30. The use of eq. 30, which is specific to the SPD manifold, with the natural gradient obtained from the Gaussian manifold, appears incorrect: the gradient ∇̃_{Σ^{-1}} L = −2∇_Σ L is not a Riemann gradient for the SPD manifold but a natural gradient obtained from the Gaussian manifold. The actual exponential map and retraction for updating Σ^{-1} based on ∇̃_{Σ^{-1}} L need to be separately worked out for the Gaussian manifold by solving a system of ordinary differential equations, whose solutions do not coincide with those obtained within the SPD manifold. In this light, the use of eq. 30 for Σ and the natural gradient in Tran et al.
(2021a) is not well-justified (Lin et al., 2020). Indeed, their approach can be thought of as inexact in two ways. (i) The natural gradient should read 2Σ∇_Σ L Σ in place of Σ∇_Σ L Σ (eq. 5.4 in (Tran et al., 2021a)): the form of retraction and parallel transport therein applied is consistent with the adopted form of the natural gradient (Σ∇_Σ L Σ), which is indeed a Riemann gradient for the SPD manifold; however, the actual natural gradient is 2Σ∇_Σ L Σ. (ii) If the form Σ∇_Σ L Σ in place of 2Σ∇_Σ L Σ is a typo, then the application of the SPD-manifold form of the retraction and parallel transport is inexact, as it is not applied to a Riemann gradient obtained from the SPD manifold but to the natural gradient obtained from the Gaussian manifold, as also discussed in (Lin et al., 2020). In this view, our discussion in Section 4 is subject to the same inexact setting as the above case (ii) -a correct form for the natural gradient, but mixing manifold structures-, under which the adoption of eqs. 30 and 31 is, as in Tran et al. (2021a), not justified (though working in practice). We now show that the forms of retraction and parallel transport in Section 4.2 are justified and arise from the consistent use of the SPD manifold for updating 2Σ^{-1}, from which the update for Σ^{-1} follows and corresponds to the retraction form in Section 4.2. Later, we show that the parallel transport in eq. 15 also applies. Consider updating µ, 2Σ^{-1} in place of µ, Σ^{-1}. According to eq. 32, the Riemann gradient of 2Σ^{-1} for the SPD manifold is

∇̂_{2Σ^{-1}} L = 2Σ^{-1} [∇_{2Σ^{-1}} L] 2Σ^{-1} = 2Σ^{-1} [(1/2) ∂L/∂Σ^{-1}] 2Σ^{-1} = 2 Σ^{-1} ∇_{Σ^{-1}} L Σ^{-1} = ∇̃_{Σ^{-1}} L.

Thus ∇̂_{2Σ^{-1}} L = ∇̃_{Σ^{-1}} L is the Riemann gradient w.r.t. 2Σ^{-1} obtained from the SPD manifold, and 2Σ^{-1} can be updated by the retraction in eq. 30 with the Riemann gradient ∇̂_{2Σ^{-1}} L.
The update is now legit and justified, as it consistently adopts the SPD-manifold Riemann gradient and the SPD-manifold form of the retraction: 2Σ^{-1} ← R_{2Σ^{-1}}(β ∇̂_{2Σ^{-1}} L) = R_{2Σ^{-1}}(β ∇̃_{Σ^{-1}} L), that is,

2Σ_{t+1}^{-1} = 2Σ_t^{-1} + β ∇̃_{Σ^{-1}} L + (1/2) β² ∇̃_{Σ^{-1}} L [(1/2) Σ] ∇̃_{Σ^{-1}} L.  (33)

As β > 0 is arbitrary, we can rewrite the above in terms of an arbitrary β′ = 2β (i.e. a simple reparametrization of the hyperparameter),

2Σ_{t+1}^{-1} = 2Σ_t^{-1} + β′ ∇̃_{Σ^{-1}} L + (1/2) β′² ∇̃_{Σ^{-1}} L [(1/2) Σ] ∇̃_{Σ^{-1}} L.  (34)

The update for Σ^{-1} follows,

Σ_{t+1}^{-1} = Σ_t^{-1} + (β′/2) ∇̃_{Σ^{-1}} L + (1/2)(β′²/4) ∇̃_{Σ^{-1}} L Σ ∇̃_{Σ^{-1}} L
= Σ_t^{-1} + (β′/2) ∇̃_{Σ^{-1}} L + (1/2)(β′/2)² ∇̃_{Σ^{-1}} L Σ ∇̃_{Σ^{-1}} L
= Σ_t^{-1} + β ∇̃_{Σ^{-1}} L + (1/2) β² ∇̃_{Σ^{-1}} L Σ ∇̃_{Σ^{-1}} L  (35)
= R_{Σ^{-1}}(β ∇̃_{Σ^{-1}} L) = R_{Σ^{-1}}(−2β ∇_Σ L),

where β = β′/2. The interpretation of the above equalities is as follows. The retraction on 2Σ^{-1} with the rescaled Riemann gradient 2β ∇̂_{2Σ^{-1}} L leads to an update for Σ^{-1} (eq. 35) which is the same update obtained with the retraction in eq. 30 applied to Σ^{-1} with β ∇̃_{Σ^{-1}} L. This last gradient equals −2β ∇_Σ L (Proposition 1), and the corresponding retraction is the one presented in Section 4.2. Blindly updating Σ^{-1} with R_{Σ^{-1}}(−2β ∇_Σ L) is by itself inexact, as it involves the natural gradient from the Gaussian manifold; however, it is equivalent to the update for Σ^{-1} that one obtains by updating 2Σ^{-1} with the consistent retraction for the SPD manifold, R_{2Σ^{-1}}(β′ ∇̂_{2Σ^{-1}} L). From the above, we also have that, logically, the arbitrary step size β for updating Σ^{-1} is half of that used for updating 2Σ^{-1}, i.e. 2β. A similar argument holds for the vector transport. Consider the vector transport for 2Σ^{-1},

T_{2Σ_t^{-1} → 2Σ_{t+1}^{-1}}(2β ∇̂_{2Σ^{-1}} L) = E (2β ∇̂_{2Σ^{-1}} L) E^⊤, with E = [2Σ_{t+1}^{-1} (2Σ_t^{-1})^{-1}]^{1/2} = [Σ_{t+1}^{-1} Σ_t]^{1/2}.

The vector transport for Σ^{-1} is then

T_{Σ_t^{-1} → Σ_{t+1}^{-1}}(β ∇̃_{Σ^{-1}} L) = (1/2) T_{2Σ_t^{-1} → 2Σ_{t+1}^{-1}}(2β ∇̂_{2Σ^{-1}} L) = E (β ∇̂_{2Σ^{-1}} L) E^⊤  (36)
= E (β ∇̃_{Σ^{-1}} L) E^⊤.  (37)
Now note that E (β ∇̃_{Σ^{-1}} L) E^⊤ = T_{Σ_t^{-1} → Σ_{t+1}^{-1}}(β ∇̃_{Σ^{-1}} L) = T_{Σ_t^{-1} → Σ_{t+1}^{-1}}(−2β ∇_Σ L), as in Section 4.2. The vector transport in the form of eq. 31 is consistently applied to 2Σ^{-1} based on the SPD-manifold Riemann gradient ∇̂_{2Σ^{-1}} L, from which the vector transport for Σ^{-1} follows (eq. 36), and the latter equals the vector transport in eq. 31 applied to Σ^{-1} based on the rescaled natural gradient β ∇̃_{Σ^{-1}} L = −2β ∇_Σ L, eq. 37.
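The SPD retraction (eq. 30) and vector transport (eq. 31) used above can be sketched numerically as follows (an illustration; the matrix square root for E is computed by eigendecomposition, which is valid since S_new S_old^{-1} is similar to an SPD matrix and hence has real positive eigenvalues):

```python
import numpy as np

def retract(S, xi):
    """SPD retraction R_S(xi) = S + xi + (1/2) xi S^{-1} xi (eq. 30)."""
    return S + xi + 0.5 * xi @ np.linalg.solve(S, xi)

def transport(S_old, S_new, xi):
    """Vector transport T(xi) = E xi E^T with E = (S_new S_old^{-1})^{1/2}
    (eq. 31); the principal square root is real for this product."""
    A = S_new @ np.linalg.inv(S_old)
    w, V = np.linalg.eig(A)
    E = np.real(V @ np.diag(np.sqrt(w)) @ np.linalg.inv(V))
    return E @ xi @ E.T

# Small symmetric step from an SPD point: the retraction keeps the
# iterate symmetric positive definite.
S = 2.0 * np.eye(2)
xi = np.array([[0.1, 0.02], [0.02, 0.1]])
S_new = retract(S, xi)
```

As sanity checks, transporting between identical points is the identity map, and transporting from I to 4I rescales the tangent vector by 4.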

D.4 PROOF OF THEOREM 1

For a Gaussian distribution q with parameter ζ = (µ, vec(Σ)), let ∇_ζ L(ζ_t) = (∇_µ L(ζ_t), vec(∇_Σ L(ζ_t))), where ∇_x L(ζ_t) is the derivative of L(ζ) with respect to x evaluated at ζ = ζ_t. Furthermore, adopt the following short-hand notation: q_ζ := q(µ, Σ) and q_{ζ_t} := q(µ_t, Σ_t). By using the well-known form of the KL divergence between two multivariate Gaussian distributions, the objective can be written as

⟨ζ, ∇_ζ L(ζ_t)⟩ − (1/β) D_KL(q_ζ || q_{ζ_t})
= µ^⊤ ∇_µ L(ζ_t) + vec(Σ)^⊤ vec(∇_Σ L(ζ_t)) − (1/(2β)) [ log(|Σ_t|/|Σ|) − d + tr(Σ_t^{-1} Σ) + (µ − µ_t)^⊤ Σ_t^{-1} (µ − µ_t) ]
= µ^⊤ ∇_µ L(ζ_t) + tr(Σ ∇_Σ L(ζ_t)) − (1/(2β)) [ log(|Σ_t|/|Σ|) − d + tr(Σ_t^{-1} Σ) + (µ − µ_t)^⊤ Σ_t^{-1} (µ − µ_t) ].

Note that ∇_µ L(ζ_t) and ∇_Σ L(ζ_t) are now constants and that the Hessian of the above expression amounts to the Hessian of the KL divergence. Furthermore, the FIM of q_ζ equals the Hessian of the function ζ_t → D_KL(q_ζ || q_{ζ_t}) evaluated at ζ_t = ζ, that is, of the above KL divergence: I_ζ = ∇²_{ζ_t} D_KL(q_ζ || q_{ζ_t})|_{ζ_t=ζ}. The multivariate Gaussian distribution is a full-rank exponential family, for which the negative Hessian of the log-likelihood (the FIM) is the covariance matrix of the sufficient statistics. The FIM is a



1 We present the MGVB optimizer by exactly following (Tran et al., 2021a). Lin et al. (2020) assert that in (Tran et al., 2021a) there is a typo, as their I_ζ^{-1}(Σ) term reads Σ^{-1} ⊗ Σ^{-1} in place of 2(Σ^{-1} ⊗ Σ^{-1}), which would lead to the actual natural gradient 2Σ∇_Σ L Σ (e.g. Barfoot, 2020). While their observation is valid, we argue that the omission of the constant is embedded in the approximation, as it is also omitted from the implementation codes for MGVB, where ∇̃_Σ L is computed as Σ∇_Σ L Σ. To clarify, ∇̃_Σ L = 2Σ∇_Σ L Σ is an exact relationship, while ∇̃_Σ L = Σ∇_Σ L Σ is not.

2 This is the general case of practical relevance in applications, ruling out singular Gaussian distributions. For such peculiar distributions, Σ is singular, Σ^{-1} does not exist, and neither does the density. Though this might be theoretically interesting to develop, the discussion is here out of scope. Assuming Σ to be positive-definite is not restrictive and is aligned with (Tran et al., 2021a).

3 Publicly available at key2stats.com/data-set/view/140. See (Mroz, 1984) for details. The data is also adopted in VI applications, e.g. by (Tran et al., 2021b; Magris et al., 2022).

4 Publicly available at the UCI repository, archive.ics.uci.edu/ml/datasets/istanbul+stock+exchange. See (Akbilgic et al., 2014) for details.

5 Publicly available at realized.oxford-man.ox.ac.uk.



ζ. The natural gradient ∇̃_ζ L(ζ) is obtained by rescaling the Euclidean gradient ∇_ζ L(ζ) by the inverse of the Fisher Information Matrix (FIM) I_ζ, i.e. ∇̃_ζ L(ζ) = I_ζ^{-1} ∇_ζ L(ζ). For readability, we shall write L in place of L(ζ).

Figure 1: Top row: lower bound optimization. Bottom row: variational posteriors (for four of the eight parameters).

Figure 2: EMGVB and MGVB performance on the Labour dataset across the iterations.

Figure 3: FIGARCH(1,1,1) model. Top row: lower bound optimization. Bottom row: variational marginals.

Figure 4: Parameter learning across the iterations under different variants of the EMGVB algorithm for the labour dataset.

Figure 5: FIGARCH(1,1,1) model. Variational and MCMC marginals for the unconstrained parameters, as a complement to Figure 5 in the main text.

Figure 7: GJR(1,1,1) model. MCMC and variational marginals for the unconstrained parameters (ψ_ω, ψ_α, ψ_γ, ψ_β) (top row) and the constrained parameters (ω, α, γ, β) (bottom row).

Optimizers' performance for the Labour data on the train and test sets. See Appendix C.4 for extended results, including the use of the h-function estimator and the diagonal and block-diagonal covariance specifications.

Optimizers' estimates and performance for the FIGARCH(1,1,1) model on the Volatility dataset.

Algorithm 2, initialization steps:
1: Set hyper-parameters: 0 < β, ω < 1, S
2: Set the type of gradient estimator, i.e. the function log f(θ_s)
3: Set the prior p(θ; 0, τ), the likelihood p(y|θ), and the initial values µ, Σ^{-1}
4: t = 1, Stop = false
5: Generate: θ_s = [θ_{s1}, ..., θ_{sh}], θ_{si} ∼ q_{µ_i,Σ_i}

Parameters' estimates for the labour dataset. Top: posterior means, middle: variances, bottom: covariances (×10 3 ). † denotes the use of the h-function gradient estimator, diag the use of a diagonal variational posterior.

Parameters' covariance matrices for the labour dataset. Entries are multiplied by 10 2 .

Models' performance for the labour data on the train and test sets. † denotes the use of the h-function gradient estimator, diag the use of a diagonal variational posterior.

Variances of the parameters for the labour dataset. Entries are multiplied by 10 2 .

Parameter estimates (on the actual constrained parameter space) and statistics on models' performance on the train and test sets. † denotes the use of the h-function gradient estimator.

C.3 ISTANBUL DATASET: BLOCK-DIAGONAL COVARIANCE

In this section, we apply EMGVB under different assumptions for the structure of the variational covariance matrix. We use the Istanbul stock exchange dataset of Akbilgic et al. (2014) (details are provided in C.4 and Table 11). To demonstrate the feasibility of the block-diagonal estimation under the mean-field framework outlined in Appendix A.7, we model the Istanbul stock exchange national 100 index (ISE):

ISE_t = β_0 + β_1 SP_t + β_2 NIK_t + β_3 BOV_t + β_4 DAX_t + β_5 FTSE_t + β_6 EU_t + β_7 EM_t + ε_t,

Posterior means, ML estimates and performance measures on the train and test set. σ refers to the disturbances' standard deviation. Block 1 corresponds to case (iii) and Block 2 to case (iv).

Covariance matrices. Top table: covariance matrix of the estimated coefficients for the full variational posterior and covariances of the ML estimate. Second and third tables: block covariance matrices, where Block 1 corresponds to case (iii) and Block 2 to case (iv). Bottom table: variances of the full and diagonal posteriors along with the ML variances. All entries across the tables are multiplied by 10^4.

Preliminaries: Noticing that for exponential-family distributions I_ζ = ∇_λ m and using the chain rule, ∇_λ
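The exponential-family identity I_ζ = ∇_λ m (the Fisher information equals the gradient of the mean parameter with respect to the natural parameter) can be checked numerically on a simple member of the family. The sketch below uses the Bernoulli distribution in its natural parameterization, an illustrative choice not taken from the paper.

```python
import numpy as np

def sigmoid(lam):
    return 1.0 / (1.0 + np.exp(-lam))

lam = 0.7  # arbitrary value of the natural parameter

# Bernoulli with natural parameter lam: success probability p = sigmoid(lam),
# sufficient statistic T(x) = x, mean parameter m(lam) = E[T(x)] = p.
p = sigmoid(lam)

# Fisher information in the natural parameterization: I(lam) = Var[T(x)] = p(1 - p)
fisher = p * (1.0 - p)

# Gradient of the mean parameter, d m / d lam, by central finite differences
h = 1e-6
grad_m = (sigmoid(lam + h) - sigmoid(lam - h)) / (2 * h)
```

Both quantities agree to numerical precision, matching the identity used in the derivation.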


Table 11: Details on the datasets and corresponding models, as well as the hyperparameters used in the experiments.

Table 11 summarizes some information about the datasets and the setup used across the experiments. For the experiments on the Labour and Volatility datasets, the same set of hyperparameters applies to EMGVB, MGVB (and QBVI). While the Labour and Istanbul datasets are readily available, the volatility dataset is extracted from the Oxford-Man Institute realized volatility library. We use daily close-to-close demeaned returns for the GARCH-family models and 5-minute sub-sampled daily measures of realized volatilities (further annualized) for the HAR model.

convex combination of such positive semi-definite matrices, so it is positive-definite. Thus the above expression is convex with respect to ζ_t. The objective is optimized by setting its derivatives with respect to ζ to zero, from which it follows that the optimal updates read as above. The update for µ is analogous to the EMGVB update, thus optimal in terms of Theorem 1. On the other hand, the optimal update for Σ^-1 is the above one, which differs from the EMGVB update. Indeed, EMGVB accounts for the positive-definiteness constraint on Σ^-1, which is not among the hypotheses of the Theorem. Note, however, that the EMGVB update for Σ^-1 corresponds to the above one when the second-order term in the retraction (which indeed accounts for the positive definiteness of Σ^-1) is ignored.
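The convexity argument above rests on a convex combination of a positive-definite and a positive semi-definite matrix remaining positive definite. A minimal numerical check, with a hypothetical positive-definite P and a rank-one PSD Q (both synthetic, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# A positive-definite matrix (A A^T plus a small ridge) and a PSD rank-one matrix
A = rng.standard_normal((3, 3))
P = A @ A.T + 1e-3 * np.eye(3)   # positive definite
v = rng.standard_normal(3)
Q = np.outer(v, v)               # positive semi-definite

# Convex combination with weight beta in (0, 1): since P is positive definite
# and Q is PSD, (1 - beta) P + beta Q is positive definite (all eigenvalues > 0)
beta = 0.3
C = (1 - beta) * P + beta * Q
min_eig = np.min(np.linalg.eigvalsh(C))
```

By Weyl's inequality, the smallest eigenvalue of C is bounded below by (1 - β) times the smallest eigenvalue of P, hence strictly positive.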

