We focus on the Newton method in which the linear systems are solved by the Conjugate Gradient (CG) method, usually denoted as the Newton-CG method, and on the enhancement of its convergence properties by means of Armijo-type linesearch conditions.

The literature on globally convergent Newton-CG methods is well established as long as the gradient and the Hessian matrix are computed exactly, or are approximated sufficiently accurately in a deterministic way. On the other hand, research is currently very active for problems with inexact information on f and its derivatives, possibly such that the accuracy cannot be controlled in a deterministic way.

This work belongs to this recent stream of research and addresses the solution of (1) when the objective function f is computed with noise and the gradient and Hessian estimates are random. Importantly, the derivative estimates are not required to satisfy suitable accuracy requirements at each iteration, but only with sufficiently high probability. Concerning the evaluation of f, we cover two cases: estimates of f subject to noise that is bounded in absolute value, and estimates of f subject to a controllable error, i.e., computable with a prescribed dynamic accuracy. Such a class of problems has been considered in the literature, and our contribution consists in its solution with linesearch Newton-CG methods; to our knowledge, this case has not been addressed before. We provide two linesearch Newton-CG methods suitable for the class of problems specified above and give bounds on the expected number of iterations required to reach a desired level of accuracy in the optimality gap.

The paper is organized as follows. In Section 2 we give preliminaries on Newton-CG and on the problems considered. In Section 3 we present and study a linesearch Newton-CG algorithm where the function estimates are subject to a prefixed deterministic noise. In Section 4 we propose and study a linesearch Newton-CG algorithm where the function estimates have controllable accuracy. In Section 5 we consider the specific case where f is a finite sum, which is typical of machine-learning applications, and compare our approach with the Inexact Newton methods specially designed for this class of problems. In the rest of the paper, \(\Vert \cdot \Vert \) denotes the 2-norm.
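To fix ideas, the exact-information Newton-CG scheme with an Armijo backtracking linesearch can be sketched as follows. This is a minimal illustrative implementation of the classical truncated Newton-CG template, not the noise-aware algorithms studied in this paper; all names, tolerances, and the negative-curvature fallback to the steepest-descent direction are illustrative choices.

```python
import numpy as np

def newton_cg_armijo(f, grad, hess, x0, tol=1e-8, max_iter=50,
                     cg_tol=0.5, c1=1e-4, tau=0.5):
    """Truncated Newton-CG with Armijo backtracking (illustrative sketch)."""
    x = x0.copy()
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        H = hess(x)
        # Inner CG loop on H d = -g, truncated at a relative residual
        # of cg_tol and stopped early on negative curvature.
        d = np.zeros_like(x)
        r = -g.copy()
        p = r.copy()
        for _ in range(len(x)):
            Hp = H @ p
            curv = p @ Hp
            if curv <= 0:
                if np.allclose(d, 0.0):
                    d = -g  # fall back to steepest descent
                break
            alpha = (r @ r) / curv
            d = d + alpha * p
            r_new = r - alpha * Hp
            if np.linalg.norm(r_new) <= cg_tol * np.linalg.norm(g):
                break
            beta = (r_new @ r_new) / (r @ r)
            p = r_new + beta * p
            r = r_new
        # Armijo condition: f(x + t d) <= f(x) + c1 * t * g^T d
        t, fx, gd = 1.0, f(x), g @ d
        while f(x + t * d) > fx + c1 * t * gd:
            t *= tau
        x = x + t * d
    return x
```

The inner CG loop realizes the inexact (truncated) Newton idea: the system is solved only up to a relative residual, while the Armijo condition globalizes the method by enforcing sufficient decrease along the computed direction.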