opt_int

class ModelCachedAmp(amp, w_bkg=1.0)[source]

Bases: Model

This class implements methods to calculate the NLL as well as its derivatives for an amplitude model with cached integrals (Cached Int). It may include data for both signal and background. Note that the cached integrals will produce wrong results when the floating parameters include a mass or a width.

Parameters:
  • amp – AllAmplitude object. The amplitude model.

  • w_bkg – Real number. The weight of background.

grad_hessp_batch(p, data, mcdata, weight, mc_weight)[source]

This method replaces self.nll_grad() for batched data.

\[- \frac{\partial \ln L}{\partial \theta_k } = -\sum_{x_i \in data } w_i \frac{\partial}{\partial \theta_k} \ln f(x_i;\theta_k) + (\sum w_j ) \left( \frac{ \partial }{\partial \theta_k} \sum_{x_i \in mc} f(x_i;\theta_k) \right) \frac{1}{ \sum_{x_i \in mc} f(x_i;\theta_k) }\]
Parameters:
  • data

  • mcdata

  • weight

  • mc_weight

Returns:
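
The gradient formula above can be checked numerically. The following is a minimal NumPy sketch with a hypothetical one-parameter PDF f(x; θ) (not the tf_pwa amplitude): the data term and the MC normalization term are evaluated exactly as in the formula, and compared against a finite-difference derivative of the NLL.

```python
import numpy as np

# Hypothetical unnormalized PDF for illustration (not part of tf_pwa):
def f(x, theta):
    return np.exp(-theta * x**2) + 0.5

def df_dtheta(x, theta):
    return -x**2 * np.exp(-theta * x**2)

rng = np.random.default_rng(0)
data = rng.uniform(-1, 1, 1000)
mc = rng.uniform(-1, 1, 5000)   # MC sample used to estimate the integral
w = np.ones_like(data)          # event weights w_i

def nll(theta):
    # -sum_i w_i ln f(x_i) + (sum_i w_i) * ln sum_mc f(x_j)
    return (-np.sum(w * np.log(f(data, theta)))
            + np.sum(w) * np.log(np.sum(f(mc, theta))))

def nll_grad(theta):
    # Term-by-term transcription of the gradient formula above
    term_data = -np.sum(w * df_dtheta(data, theta) / f(data, theta))
    term_mc = np.sum(w) * np.sum(df_dtheta(mc, theta)) / np.sum(f(mc, theta))
    return term_data + term_mc

theta = 1.3
eps = 1e-5
num = (nll(theta + eps) - nll(theta - eps)) / (2 * eps)
assert np.isclose(num, nll_grad(theta), rtol=1e-4, atol=1e-4)
```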

nll_grad_batch(data, mcdata, weight, mc_weight)[source]

This method replaces self.nll_grad() for batched data.

\[- \frac{\partial \ln L}{\partial \theta_k } = -\sum_{x_i \in data } w_i \frac{\partial}{\partial \theta_k} \ln f(x_i;\theta_k) + (\sum w_j ) \left( \frac{ \partial }{\partial \theta_k} \sum_{x_i \in mc} f(x_i;\theta_k) \right) \frac{1}{ \sum_{x_i \in mc} f(x_i;\theta_k) }\]
Parameters:
  • data

  • mcdata

  • weight

  • mc_weight

Returns:

sum_log_integral_grad_batch(mcdata, ndata)[source]
sum_nll_grad_bacth(data)[source]
class ModelCachedInt(amp, w_bkg=1.0)[source]

Bases: Model

This class implements methods to calculate the NLL as well as its derivatives for an amplitude model with cached integrals (Cached Int). It may include data for both signal and background. Note that the cached integrals will produce wrong results when the floating parameters include a mass or a width.

Parameters:
  • amp – AllAmplitude object. The amplitude model.

  • w_bkg – Real number. The weight of background.
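
A minimal sketch (with hypothetical partial-wave amplitudes, not the tf_pwa internals) of why caching the integrals works when only the complex couplings float: the MC normalization sum factorizes into coefficients times precomputed interference integrals I_ab. A floating mass or width changes the amplitude values on every MC event, so the cached I_ab become stale, which is the reason for the warning above.

```python
import numpy as np

rng = np.random.default_rng(1)
n_mc, n_amp = 1000, 3

# Hypothetical partial-wave amplitudes A_a(x_j) evaluated on the MC sample.
A = rng.normal(size=(n_amp, n_mc)) + 1j * rng.normal(size=(n_amp, n_mc))

# Cached interference integrals I_ab = sum_j A_a(x_j) * conj(A_b(x_j)),
# computed once and reused while only the couplings c_a vary.
I = A @ A.conj().T

def direct_intensity_sum(c):
    # Direct evaluation over MC events: sum_j |sum_a c_a A_a(x_j)|^2
    amp = c @ A
    return np.sum(np.abs(amp) ** 2)

def cached_intensity_sum(c):
    # Same quantity from the cached integrals: sum_ab c_a I_ab conj(c_b)
    return np.real(c @ I @ c.conj())

c = np.array([1.0 + 0.2j, 0.5 - 0.1j, 0.3 + 0.0j])
assert np.isclose(direct_intensity_sum(c), cached_intensity_sum(c))
```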

build_cached_int(mcdata, mc_weight, batch=65000)[source]
get_cached_int(mc_id)[source]
nll_grad_batch(data, mcdata, weight, mc_weight)[source]

This method replaces self.nll_grad() for batched data.

\[- \frac{\partial \ln L}{\partial \theta_k } = -\sum_{x_i \in data } w_i \frac{\partial}{\partial \theta_k} \ln f(x_i;\theta_k) + (\sum w_j ) \left( \frac{ \partial }{\partial \theta_k} \sum_{x_i \in mc} f(x_i;\theta_k) \right) \frac{1}{ \sum_{x_i \in mc} f(x_i;\theta_k) }\]
Parameters:
  • data

  • mcdata

  • weight

  • mc_weight

Returns:

nll_grad_hessian(data, mcdata, weight=1.0, batch=24000, bg=None, mc_weight=1.0)[source]

The parameters are the same as for self.nll(), but this method returns the Hessian as well.

Return NLL:

Real number. The value of NLL.

Return gradients:

List of real numbers. The gradients for each variable.

Return Hessian:

2-D Array of real numbers. The Hessian matrix of the variables.
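
The returned Hessian is typically used to estimate parameter uncertainties: in a maximum-likelihood fit the covariance matrix is approximated by the inverse of the NLL Hessian at the minimum. A toy sketch with a hypothetical quadratic NLL (illustration only, not tf_pwa code):

```python
import numpy as np

# Toy two-parameter NLL with a known curvature matrix (illustration only).
H_true = np.array([[2.0, 0.3],
                   [0.3, 1.5]])

def grad(theta):
    # Gradient of the quadratic NLL 0.5 * theta^T H theta
    return H_true @ theta

def hessian(theta, eps=1e-5):
    # Hessian by central finite differences of the gradient
    n = len(theta)
    H = np.zeros((n, n))
    for k in range(n):
        step = np.zeros(n)
        step[k] = eps
        H[:, k] = (grad(theta + step) - grad(theta - step)) / (2 * eps)
    return H

theta = np.array([0.7, -0.2])
H = hessian(theta)
assert np.allclose(H, H_true, atol=1e-6)

# Standard MLE error estimate: covariance = inverse Hessian
cov = np.linalg.inv(H)
errors = np.sqrt(np.diag(cov))
```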

sum_grad_hessp_data2(f, p, var, data, data2, weight=1.0, trans=<function identity>, resolution_size=1, args=(), kwargs=None)[source]

The parameters are the same as for sum_gradient(), but this function also returns the Hessian, the matrix of second-order derivatives.

Returns:

Real number NLL, list gradient, 2-D list hessian
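
The "hessp" in the name follows the usual optimizer convention for a Hessian-vector product H·p, which can be formed from gradient evaluations alone instead of building the full matrix. A minimal finite-difference sketch with a toy quadratic NLL (hypothetical, not the tf_pwa implementation):

```python
import numpy as np

# Toy NLL with a known Hessian (illustration only, not tf_pwa internals).
H_true = np.array([[3.0, 0.5, 0.0],
                   [0.5, 2.0, 0.2],
                   [0.0, 0.2, 1.0]])

def grad(theta):
    # Gradient of the quadratic NLL 0.5 * theta^T H theta
    return H_true @ theta

def hessp(theta, p, eps=1e-6):
    # Hessian-vector product H @ p from two gradient evaluations,
    # avoiding the cost of assembling the full Hessian.
    return (grad(theta + eps * p) - grad(theta - eps * p)) / (2 * eps)

theta = np.array([0.3, -0.1, 0.8])
p = np.array([1.0, 2.0, -1.0])
assert np.allclose(hessp(theta, p), H_true @ p, atol=1e-6)
```

Optimizers such as SciPy's Newton-CG or trust-region methods consume exactly this kind of `hessp` callable.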

sum_gradient(fs, var, weight=1.0, trans=<function identity>, args=(), kwargs=None)[source]

NLL is the sum of trans(f(data)) * weight; the gradient is the list of derivatives with respect to each variable in var.

Parameters:
  • fs – Function. The amplitude PDF.

  • var – List of strings. Names of the trainable variables in the PDF.

  • weight – Weight factor for each data point. It is either a real number or an array of the same shape as the data.

  • trans – Function. Transformation applied to the data before it is multiplied by weight.

  • kwargs – Further arguments for the PDF function.

Returns:

Real number NLL, list gradient
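
The quantity described above can be illustrated with a hypothetical scalar PDF: trans(f(data)) * weight is summed over the data, and the gradient follows by the chain rule. This is a NumPy sketch, not the TensorFlow implementation; f, df, and theta are stand-ins for the cached amplitude functions and a trainable variable.

```python
import numpy as np

# Hypothetical PDF and its analytic derivative w.r.t. one variable "theta".
def f(theta, x):
    return np.exp(-theta * x) + 1.0

def df(theta, x):
    return -x * np.exp(-theta * x)

x = np.linspace(0.1, 2.0, 50)
weight = np.full_like(x, 0.5)
theta = 0.8
trans = np.log                      # trans, as in the signature above

# NLL-style sum: sum(trans(f(data)) * weight)
value = np.sum(trans(f(theta, x)) * weight)
# Chain rule: d/dtheta sum(log(f) * w) = sum(w * f'/f)
gradient = np.sum(weight * df(theta, x) / f(theta, x))

# Cross-check against a central finite difference of the summed value
eps = 1e-6
num = (np.sum(trans(f(theta + eps, x)) * weight)
       - np.sum(trans(f(theta - eps, x)) * weight)) / (2 * eps)
assert np.isclose(gradient, num, rtol=1e-5, atol=1e-8)
```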

sum_gradient_data2(f, var, data, cached_data, weight=1.0, trans=<function identity>, args=(), kwargs=None)[source]

NLL is the sum of trans(f(data)) * weight; the gradient is the list of derivatives with respect to each variable in var.

Parameters:
  • f – Function. The amplitude PDF.

  • var – List of strings. Names of the trainable variables in the PDF.

  • weight – Weight factor for each data point. It is either a real number or an array of the same shape as the data.

  • trans – Function. Transformation applied to the data before it is multiplied by weight.

  • kwargs – Further arguments for f.

Returns:

Real number NLL, list gradient