# Exploring the parameter landscape

In general the objective function $$\chi^2$$ is a very complex function of the potential parameters $$p_i$$ as well as the weight factors $$w_i$$. The resulting space $$(\{p_i\};\{w_i\})$$ can be excruciatingly difficult to navigate. In this context some basic analysis of the parameters can yield useful insight.

Consider specifically the very simple embedded atom method (EAM) type potential for aluminum that was already considered in several examples: [1], [2], [3]. The EAM potential format used here is a special case of the Tersoff potential with a purely repulsive pair potential

$V(r) = A \exp(-\lambda r),$

the embedding function

$F(\rho) = -D \exp(-\rho),$

and the electron density

$\rho(r) = \exp(-2\mu r).$
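For reference, these three ingredients combine in the standard EAM fashion into the total energy (a generic EAM expression; the form implemented in atomicrex follows the same structure):

$E = \frac{1}{2} \sum_{i \neq j} V(r_{ij}) + \sum_i F\left( \sum_{j \neq i} \rho(r_{ij}) \right).$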

The four parameters are fitted to 108-atom face-centered cubic (FCC) configurations at the equilibrium lattice parameter as well as at 10% compression and expansion. Also included in the fit are the Al bulk modulus and the FCC lattice parameter.

The previous examples started from the initial parameter set

$(A,\lambda,\mu,D) = (500, 2.73, 1.14, 8).$

When using local optimizers, in particular L-BFGS (the internal L-BFGS minimizer as well as several local SciPy minimizers), the final parameter set tends to end up rather close to this initial set. Especially the $$A$$ parameter appears to be rather “immobile”. From the functional form it is apparent that $$A$$ and $$\lambda$$ are coupled, and because both enter through an exponential they are related in a fashion that is unfavorable for parameter optimization.

To quantify their correlation one can, for example, conduct a scan of the $$(A,\lambda)$$ parameter plane. This can be readily achieved using atomicrex's Python interface. Assuming that input files with potential and structure definitions have been prepared (main.xml, potential.xml, structures.xml; see below), the job can be set up as follows:

```python
from __future__ import print_function
import atomicrex
import numpy as np

job = atomicrex.Job()
job.parse_input_file('main.xml')
job.prepare_fitting()
```

After storing the initial parameter vector:

```python
params0 = job.get_potential_parameters()
```


one can sweep over a range of parameters:

```python
del_A = 0.02 * params0[0]
del_Lambda = 0.02 * params0[1]
n = 15
params = list(params0)  # work on a mutable copy of the initial parameters
data = []
for dA in np.arange(-n * del_A, n * del_A, del_A):
    row = []
    data.append(row)
    params[0] = params0[0] + dA
    for dLambda in np.arange(-n * del_Lambda, n * del_Lambda, del_Lambda):
        params[1] = params0[1] + dLambda
        row.append(job.calculate_residual(params))
```

The resulting matrix can be visualized using, e.g., matplotlib. The map clearly illustrates the extreme imbalance between the $$A$$ and $$\lambda$$ parameters, as small (relative) changes in $$\lambda$$ have a much more dramatic effect than equally small changes in $$A$$. This renders the minimization of the objective function cumbersome, even in this very simple example.
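The visualization can be sketched along the following lines (a minimal example; the synthetic surface below merely stands in for the `data` matrix computed via `calculate_residual` above, and the output file name is arbitrary):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for script use
import matplotlib.pyplot as plt

# Grid matching the scan above; the synthetic residual surface is only a
# placeholder for the data matrix obtained from job.calculate_residual.
n = 15
A0, Lambda0 = 500.0, 2.73
dA = np.arange(-n, n) * 0.02 * A0
dLambda = np.arange(-n, n) * 0.02 * Lambda0
data = [[(a / A0) ** 2 + 50 * (l / Lambda0) ** 2 for l in dLambda] for a in dA]

# Contour map of the residual on a logarithmic scale; rows of `data`
# correspond to A offsets, columns to lambda offsets.
fig, ax = plt.subplots()
cs = ax.contourf(Lambda0 + dLambda, A0 + dA, np.log10(np.array(data) + 1e-12))
fig.colorbar(cs, label='log10(residual)')
ax.set_xlabel('lambda')
ax.set_ylabel('A')
fig.savefig('residual_scan.png')
```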

A more systematic way to quantify parameter correlation is to compute the matrix of second derivatives (the Hessian) of the objective function, $$H_{ij} = \partial^2 \chi^2/\partial p_i\partial p_j$$:

```python
hessian = job.calculate_hessian()
for row in hessian:
    for d in row:
        print('{:10g}'.format(d), end='')
    print('')
```

This reveals a very large spread of values, in particular along the diagonal, and the extreme ratio between matrix elements already indicates that this is a poorly conditioned problem. This becomes more apparent upon analyzing the ratio of the eigenvalues:

```python
eig, vecs = np.linalg.eig(hessian)
print(np.max(np.abs(eig)) / np.min(np.abs(eig)))
```


This analysis thus illustrates the origin of the numerical difficulties encountered when optimizing this particular functional form. Yet parameter combinations such as $$A$$ and $$\lambda$$ are frequent in commonly employed potential forms.
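The unfavorable coupling can also be illustrated without atomicrex using a few lines of NumPy (a toy model, not the actual fit: a Gauss-Newton approximation to the Hessian for the pair potential alone, with an arbitrarily chosen sampling range for $$r$$):

```python
import numpy as np

# Sensitivities of V(r) = A * exp(-lambda * r) with respect to its two
# parameters, sampled over a range of interatomic distances.
A, lam = 500.0, 2.73
r = np.linspace(1.5, 4.0, 50)
dV_dA = np.exp(-lam * r)             # dV/dA
dV_dlam = -A * r * np.exp(-lam * r)  # dV/dlambda
J = np.column_stack([dV_dA, dV_dlam])

# Gauss-Newton approximation H = J^T J to the Hessian of a
# least-squares objective built from these residual derivatives.
H = J.T @ J
print(np.linalg.cond(H))  # many orders of magnitude: ill-conditioned
```

The huge condition number reflects both the disparate scales of the two derivatives and their near-collinearity, which is exactly the behavior observed for the full objective function above.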

The present analysis suggests that the eigenvalue spectrum of the Hessian of the objective function can be a useful tool for assessing the behavior of a functional form, especially when difficulties are encountered in exploring the (local) parameter space.

This example is revisited in the section on Monte Carlo sampling of the objective function, which turns out to be an interesting approach for navigating “narrow” parameter landscapes such as the one discussed above.

## Location of files

The example presented above employs input files from this example, which are also included here for convenience.