I would like to calculate the proximal operator $\operatorname{Prox}_{\lambda f}(\mathbf{x}_0) = \operatorname{argmin}_{\mathbf{x} \ge 0} \left( f(\mathbf{x}) + \frac{1}{2\lambda} \lVert \mathbf{x} - \mathbf{x}_0 \rVert^2 \right)$ for the function:

$$ f(\mathbf{x}) = -\sum_{i=1}^{m} b_i \log(\mathbf{a}_i \mathbf{x}) $$

where $\mathbf{x} \ge 0$ ($\mathbf{x}$ is a vector with all non-negative elements), $b_i \ge 0 \;\forall i$ (the $b_i$ are non-negative scalars), and $\mathbf{A}$ is a matrix with all non-negative elements whose rows are $\mathbf{a}_1, \dots, \mathbf{a}_m$.
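For concreteness, here is a minimal numpy sketch of $f$ and the prox objective on a small made-up instance (`A`, `b`, `x0`, and `lam` are all hypothetical data, not from the actual problem):

```python
import numpy as np

# hypothetical small instance: all of A, b, x0, lam are made-up data
rng = np.random.default_rng(0)
m, n = 4, 3
A = rng.random((m, n)) + 0.1    # non-negative matrix, rows a_i
b = rng.random(m)               # non-negative weights b_i
x0 = rng.random(n) + 0.1        # prox center
lam = 0.5

def f(x):
    # f(x) = -sum_i b_i * log(a_i x)
    return -np.sum(b * np.log(A @ x))

def prox_objective(x):
    # f(x) + (1 / (2*lam)) * ||x - x0||^2
    return f(x) + np.sum((x - x0) ** 2) / (2.0 * lam)
```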

I am aware that this can be easily accomplished by iterative minimization methods based on the gradient of $f(\mathbf{x})$, but I'm wondering if it has a nice closed-form solution.
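The iterative route I have in mind can be sketched as projected gradient descent with backtracking, using $\nabla f(\mathbf{x}) = -\mathbf{A}^{\mathsf T} \big(\mathbf{b} \oslash (\mathbf{A}\mathbf{x})\big)$ (elementwise division); all data here is made up for illustration:

```python
import numpy as np

# hypothetical small instance: all of A, b, x0, lam are made-up data
rng = np.random.default_rng(0)
m, n = 4, 3
A = rng.random((m, n)) + 0.1    # non-negative matrix, rows a_i
b = rng.random(m)               # non-negative weights b_i
x0 = rng.random(n) + 0.1        # prox center
lam = 0.5

def objective(x):
    # f(x) + ||x - x0||^2 / (2*lam)
    return -np.sum(b * np.log(A @ x)) + np.sum((x - x0) ** 2) / (2 * lam)

def gradient(x):
    # grad f(x) = -A^T (b / (A x)); the quadratic term adds (x - x0)/lam
    return -A.T @ (b / (A @ x)) + (x - x0) / lam

# projected gradient descent: Armijo backtracking, projection onto x >= eps
eps = 1e-12
x = x0.copy()
for _ in range(1000):
    g = gradient(x)
    t = 1.0
    while t > 1e-12:
        x_new = np.maximum(x - t * g, eps)
        if objective(x_new) <= objective(x) + 1e-4 * g @ (x_new - x):
            break
        t *= 0.5
    x = x_new
```

The small clamp at `eps` keeps the iterates inside the domain of the log while approximating the constraint $\mathbf{x} \ge 0$.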

I know that for the scalar function $g(x) = -c \log(x)$, $\operatorname{Prox}_{\lambda g}(x) = \frac{x + \sqrt{x^2 + 4 \lambda c}}{2}$, and that there are some rules for composing proximal operators, particularly for the case $\mathbf{A} \mathbf{A}^{\mathsf T} = \alpha I$, which is not the case here.
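(For completeness: that scalar formula comes from the stationarity condition of $\min_u \big({-c}\log u + \frac{(u-x)^2}{2\lambda}\big)$, i.e. $u^2 - xu - \lambda c = 0$, whose positive root is the expression above. A quick numeric check, with made-up values:)

```python
import numpy as np

def prox_scalar_log(x, lam, c):
    # positive root of u^2 - x*u - lam*c = 0, i.e. the closed form above
    return (x + np.sqrt(x ** 2 + 4 * lam * c)) / 2

# stationarity residual: d/du [-c*log(u) + (u - x)^2 / (2*lam)]
x, lam, c = 1.3, 0.7, 2.0
u = prox_scalar_log(x, lam, c)          # u ≈ 2.0 for these values
residual = -c / u + (u - x) / lam       # should vanish at the minimizer
```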

I would like to know if there are any properties that would allow a closed-form solution to be obtained from the solution for the scalar case.