model

This module provides methods to calculate the NLL (negative log-likelihood) as well as its derivatives.

class BaseModel(signal, resolution_size=1, extended=False)[source]

Bases: object

This class implements methods to calculate the NLL as well as its derivatives for an amplitude model. It may include data for both signal and background.

Parameters:

signal – Signal Model

get_params(trainable_only=False)[source]

Interface to Amplitude.get_params().

grad_hessp_batch(p, data, mcdata, weight, mc_weight)[source]

Batch version that computes the gradient together with the Hessian-vector product for the vector p; it replaces self.nll_grad().

\[- \frac{\partial \ln L}{\partial \theta_k } = -\sum_{x_i \in data } w_i \frac{\partial}{\partial \theta_k} \ln f(x_i;\theta_k) + (\sum w_j ) \left( \frac{ \partial }{\partial \theta_k} \sum_{x_i \in mc} f(x_i;\theta_k) \right) \frac{1}{ \sum_{x_i \in mc} f(x_i;\theta_k) }\]
Parameters:
  • p – List. The vector to be multiplied by the Hessian.

  • data – Data array

  • mcdata – MC data array

  • weight – Weight of data

  • mc_weight – Weight of MC data

Returns:
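
The Hessian-vector product can be formed without ever building the full Hessian, by differentiating the inner product of the gradient with p a second time. Below is a minimal TensorFlow sketch of that standard trick, not the library's actual implementation; nll_fn, params, and p are placeholder names.

    import tensorflow as tf

    def grad_hessp(nll_fn, params, p):
        """Gradient and Hessian-vector product of a scalar loss (sketch).

        nll_fn : callable returning a scalar NLL tensor
        params : list of tf.Variable
        p      : list of tensors with the same shapes as params
        """
        with tf.GradientTape() as outer:
            with tf.GradientTape() as inner:
                nll = nll_fn()
            grads = inner.gradient(nll, params)
            # <grad, p> is a scalar whose gradient w.r.t. params equals H @ p
            gp = tf.add_n([tf.reduce_sum(g * v) for g, v in zip(grads, p)])
        hessp = outer.gradient(gp, params)
        return nll, grads, hessp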

nll(data, mcdata)[source]

Negative log-likelihood

nll_grad(data, mcdata, batch=65000)[source]
nll_grad_batch(data, mcdata, weight, mc_weight)[source]

Batch version of self.nll_grad(), which it replaces.

\[- \frac{\partial \ln L}{\partial \theta_k } = -\sum_{x_i \in data } w_i \frac{\partial}{\partial \theta_k} \ln f(x_i;\theta_k) + (\sum w_j ) \left( \frac{ \partial }{\partial \theta_k} \sum_{x_i \in mc} f(x_i;\theta_k) \right) \frac{1}{ \sum_{x_i \in mc} f(x_i;\theta_k) }\]
Parameters:
  • data – Data array

  • mcdata – MC data array

  • weight – Weight of data

  • mc_weight – Weight of MC data

Returns:

nll_grad_hessian(data, mcdata, batch=25000)[source]

The parameters are the same as for self.nll(), but the Hessian is returned as well.

Return NLL:

Real number. The value of NLL.

Return gradients:

List of real numbers. The gradients for each variable.

Return Hessian:

2-D Array of real numbers. The Hessian matrix of the variables.

set_params(var)[source]

Interface to Amplitude.set_params().

sum_log_integral_grad_batch(mcdata, ndata)[source]
sum_nll_grad_bacth(data)[source]
sum_resolution(w)[source]
property trainable_variables
class CombineFCN(model=None, data=None, mcdata=None, bg=None, fcns=None, batch=65000, gauss_constr={})[source]

Bases: object

This class implements methods to calculate the NLL as well as its derivatives for a general function.

Parameters:
  • model – List of Model objects.

  • data – List of data arrays.

  • mcdata – List of MC data arrays.

  • bg – List of background arrays.

  • batch – The length of the array to calculate as a vector at a time; how the data array is folded may depend on the GPU's capability.
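
Since the datasets are independent, the combined objective is simply the sum of the per-dataset NLLs, which is what this class evaluates. A toy numpy illustration of that additivity (the linear PDF and the sample sizes are invented for the example):

    import numpy as np

    rng = np.random.default_rng(7)

    def nll(data, mc, theta):
        # toy PDF f(x; theta) = 1 + theta * x, normalized by MC integration
        return (-np.sum(np.log(1.0 + theta * data))
                + len(data) * np.log(np.mean(1.0 + theta * mc)))

    datasets = [rng.uniform(-1, 1, 1000) for _ in range(2)]
    mcsets = [rng.uniform(-1, 1, 10000) for _ in range(2)]

    # the combined NLL is the sum over (data, mc) pairs
    combined = sum(nll(d, m, 0.3) for d, m in zip(datasets, mcsets))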

get_grad(x={})[source]
Parameters:

x – List. Values of variables.

Return gradients:

List of real numbers. The gradients for each variable.

get_grad_hessp(x, p, batch)[source]
Parameters:
  • x – List. Values of variables.

  • p – List. The vector to be multiplied by the Hessian.

  • batch – Batch size used in the calculation.

Return nll:

Real number. The value of NLL.

Return gradients:

List of real numbers. The gradients for each variable.

get_nll(x={})[source]
Parameters:

x – List. Values of variables.

Return nll:

Real number. The value of NLL.

get_nll_grad(x={})[source]
Parameters:

x – List. Values of variables.

Return nll:

Real number. The value of NLL.

Return gradients:

List of real numbers. The gradients for each variable.

get_nll_grad_hessian(x={}, batch=None)[source]
Parameters:

x – List. Values of variables.

Return nll:

Real number. The value of NLL.

Return gradients:

List of real numbers. The gradients for each variable.

Return hessian:

2-D Array of real numbers. The Hessian matrix of the variables.

get_params(trainable_only=False)[source]
grad(x={})[source]
grad_hessp(x, p, batch=None)[source]
nll_grad(x={})[source]
nll_grad_hessian(x={}, batch=None)[source]
class ConstrainModel(amp, w_bkg=1.0, constrain={})[source]

Bases: Model

Negative log-likelihood model with constraints.

get_constrain_grad()[source]
constraint: Gauss(mean, sigma)

Adds the term \(\frac{d}{d\theta_i}\frac{(\theta_i-\bar{\theta_i})^2}{2\sigma^2} = \frac{\theta_i-\bar{\theta_i}}{\sigma^2}\) to the gradient.

get_constrain_hessian()[source]

The second derivative of the constraint term with respect to the constrained parameter.

get_constrain_term()[source]
constraint: Gauss(mean, sigma)

Adds the term \(\frac{(\theta_i-\bar{\theta_i})^2}{2\sigma^2}\) to the NLL.

nll(data, mcdata, weight=1.0, bg=None, batch=None)[source]

Calculate the negative log-likelihood

\[-\ln L = -\sum_{x_i \in data } w_i \ln f(x_i;\theta_i) + (\sum w_i ) \ln \sum_{x_i \in mc } f(x_i;\theta_i) + cons\]
nll_gradient(data, mcdata, weight=1.0, batch=None, bg=None)[source]

Calculate the negative log-likelihood and its gradient

\[\frac{\partial }{\partial \theta_i }(-\ln L) = -\sum_{x_i \in data } w_i \frac{\partial }{\partial \theta_i } \ln f(x_i;\theta_i) + \frac{\sum w_i }{\sum_{x_i \in mc }f(x_i;\theta_i)} \sum_{x_i \in mc } \frac{\partial }{\partial \theta_i } f(x_i;\theta_i) + cons\]
class FCN(model, data, mcdata, bg=None, batch=65000, inmc=None, gauss_constr={})[source]

Bases: object

This class implements methods to calculate the NLL as well as its derivatives for a general function.

Parameters:
  • model – Model object.

  • data – Data array.

  • mcdata – MC data array.

  • bg – Background array.

  • batch – The length of the array to calculate as a vector at a time; how the data array is folded may depend on the GPU's capability.
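
Because nll_grad(x) returns the NLL and its gradient in a single call, an FCN instance plugs directly into SciPy's gradient-based minimizers via jac=True. A runnable sketch with a quadratic stand-in for fcn.nll_grad (in a real fit one would pass fcn.nll_grad itself):

    import numpy as np
    from scipy.optimize import minimize

    # stand-in for fcn.nll_grad: any callable returning (value, gradient)
    def nll_grad(x):
        return np.sum(x**2), 2.0 * x

    result = minimize(nll_grad, x0=np.array([1.0, -2.0]),
                      jac=True, method="L-BFGS-B")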

get_grad(x={})[source]
Parameters:

x – List. Values of variables.

Return gradients:

List of real numbers. The gradients for each variable.

get_grad_hessp(x, p, batch)[source]
get_nll(x={})[source]
Parameters:

x – List. Values of variables.

Return nll:

Real number. The value of NLL.

get_nll_grad(x={})[source]
Parameters:

x – List. Values of variables.

Return nll:

Real number. The value of NLL.

Return gradients:

List of real numbers. The gradients for each variable.

get_nll_grad_hessian(x={}, batch=None)[source]
Parameters:

x – List. Values of variables.

Return nll:

Real number. The value of NLL.

Return gradients:

List of real numbers. The gradients for each variable.

Return hessian:

2-D Array of real numbers. The Hessian matrix of the variables.

get_params(trainable_only=False)[source]
grad(x={})[source]
grad_hessp(x, p, batch=None)[source]
nll_grad(x={})[source]
nll_grad_hessian(x={}, batch=None)[source]
class GaussianConstr(vm, constraint={})[source]

Bases: object

get_constrain_grad()[source]
constraint: Gauss(mean, sigma)

Adds the term \(\frac{d}{d\theta_i}\frac{(\theta_i-\bar{\theta_i})^2}{2\sigma^2} = \frac{\theta_i-\bar{\theta_i}}{\sigma^2}\) to the gradient.

get_constrain_hessian()[source]

The second derivative of the constraint term with respect to the constrained parameter.

get_constrain_term()[source]
constraint: Gauss(mean, sigma)

Adds the term \(\frac{(\theta_i-\bar{\theta_i})^2}{2\sigma^2}\) to the NLL.

update(constraint={})[source]
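
The penalty and the derivatives quoted above are easy to verify numerically. A self-contained sketch of the quadratic constraint term (the parameter values are illustrative):

    import numpy as np

    def gauss_constraint(theta, mean, sigma):
        """Gaussian-constraint penalty added to the NLL,
        together with its first and second derivatives."""
        term = (theta - mean) ** 2 / (2 * sigma**2)
        grad = (theta - mean) / sigma**2
        hess = 1.0 / sigma**2
        return term, grad, hess

    term, grad, hess = gauss_constraint(theta=1.2, mean=1.0, sigma=0.1)
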
class MixLogLikehoodFCN(model, data, mcdata, bg=None, batch=65000, gauss_constr={})[source]

Bases: CombineFCN

This class implements methods to calculate the NLL as well as its derivatives for a general function.

Parameters:
  • model – List of Model objects.

  • data – List of data arrays.

  • mcdata – List of MC data arrays.

  • bg – List of background arrays.

  • batch – The length of the array to calculate as a vector at a time; how the data array is folded may depend on the GPU's capability.

get_nll_grad(x={})[source]
Parameters:

x – List. Values of variables.

Return nll:

Real number. The value of NLL.

Return gradients:

List of real numbers. The gradients for each variable.

class Model(amp, w_bkg=1.0, resolution_size=1, extended=False, **kwargs)[source]

Bases: object

This class implements methods to calculate the NLL as well as its derivatives for an amplitude model. It may include data for both signal and background.

Parameters:
  • amp – AllAmplitude object. The amplitude model.

  • w_bkg – Real number. The weight of background.

get_params(trainable_only=False)[source]

Interface to Amplitude.get_params().

get_weight_data(data, weight=None, bg=None, alpha=True)[source]

Blend data and background together, each multiplied by its weight.

Parameters:
  • data – Data array

  • weight – Weight for data

  • bg – Data array for background

  • alpha – Boolean. If True, the weights are multiplied by a global scale factor \(\alpha\).

Returns:

Data and weight arrays, each of length len(data) + len(bg).
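
A plausible sketch of the blending step on 1-D toy arrays, assuming, as the w_bkg weight of the Model constructor suggests, that background events enter with negative weight so that they are subtracted in the likelihood; this is an illustration, not the library's exact code.

    import numpy as np

    def blend(data, weight, bg, w_bkg):
        """Concatenate data and background samples; the background
        is given negative weight -w_bkg (assumed convention)."""
        if weight is None:
            weight = np.ones(len(data))
        all_data = np.concatenate([data, bg])
        all_weight = np.concatenate([weight, -w_bkg * np.ones(len(bg))])
        return all_data, all_weight  # both of length len(data) + len(bg)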

grad_hessp_batch(p, data, mcdata, weight, mc_weight)[source]

Batch version that computes the gradient together with the Hessian-vector product for the vector p; it replaces self.nll_grad().

\[- \frac{\partial \ln L}{\partial \theta_k } = -\sum_{x_i \in data } w_i \frac{\partial}{\partial \theta_k} \ln f(x_i;\theta_k) + (\sum w_j ) \left( \frac{ \partial }{\partial \theta_k} \sum_{x_i \in mc} f(x_i;\theta_k) \right) \frac{1}{ \sum_{x_i \in mc} f(x_i;\theta_k) }\]
Parameters:
  • p – List. The vector to be multiplied by the Hessian.

  • data – Data array

  • mcdata – MC data array

  • weight – Weight of data

  • mc_weight – Weight of MC data

Returns:

mix_data_bakcground(data, bg)[source]
nll(data, mcdata, weight: Tensor = 1.0, batch=None, bg=None, mc_weight=1.0)[source]

Calculate NLL.

\[-\ln L = -\sum_{x_i \in data } w_i \ln f(x_i;\theta_k) + (\sum w_j ) \ln \sum_{x_i \in mc } f(x_i;\theta_k)\]
Parameters:
  • data – Data array

  • mcdata – MC data array

  • weight – Weight of each data event

  • batch – The length of the array to calculate as a vector at a time; how the data array is folded may depend on the GPU's capability.

  • bg – Background data array. May be None if there is no background.

  • mc_weight – Weight of each MC event

Returns:

Real number. The value of NLL.
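
A toy numpy rendition of the formula above, with the normalization sum taken over a phase-space MC sample (the linear PDF is invented for the illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.uniform(-1, 1, 500)  # observed events x_i
    mc = rng.uniform(-1, 1, 5000)   # phase-space MC sample
    w = np.ones_like(data)          # per-event weights w_i
    theta = 0.5

    f_data = 1.0 + theta * data     # unnormalized PDF on data
    f_mc = 1.0 + theta * mc         # same PDF on the MC sample

    nll = -np.sum(w * np.log(f_data)) + np.sum(w) * np.log(np.sum(f_mc))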

nll_grad(data, mcdata, weight=1.0, batch=65000, bg=None, mc_weight=1.0)[source]

Calculate NLL and its gradients.

\[- \frac{\partial \ln L}{\partial \theta_k } = -\sum_{x_i \in data } w_i \frac{\partial}{\partial \theta_k} \ln f(x_i;\theta_k) + (\sum w_j ) \left( \frac{ \partial }{\partial \theta_k} \sum_{x_i \in mc} f(x_i;\theta_k) \right) \frac{1}{ \sum_{x_i \in mc} f(x_i;\theta_k) }\]

The parameters are the same as for self.nll(), but the gradients are returned as well.

Return NLL:

Real number. The value of NLL.

Return gradients:

List of real numbers. The gradients for each variable.

nll_grad_batch(data, mcdata, weight, mc_weight)[source]

Batch version of self.nll_grad().

\[- \frac{\partial \ln L}{\partial \theta_k } = -\sum_{x_i \in data } w_i \frac{\partial}{\partial \theta_k} \ln f(x_i;\theta_k) + (\sum w_j ) \left( \frac{ \partial }{\partial \theta_k} \sum_{x_i \in mc} f(x_i;\theta_k) \right) \frac{1}{ \sum_{x_i \in mc} f(x_i;\theta_k) }\]
Parameters:
  • data – Data array

  • mcdata – MC data array

  • weight – Weight of data

  • mc_weight – Weight of MC data

Returns:

nll_grad_hessian(data, mcdata, weight=1.0, batch=24000, bg=None, mc_weight=1.0)[source]

The parameters are the same as for self.nll(), but the Hessian is returned as well.

Return NLL:

Real number. The value of NLL.

Return gradients:

List of real numbers. The gradients for each variable.

Return Hessian:

2-D Array of real numbers. The Hessian matrix of the variables.

set_params(var)[source]

Interface to Amplitude.set_params().

sum_log_integral_grad_batch(mcdata, ndata)[source]
sum_nll_grad_bacth(data)[source]
sum_resolution(w)[source]
class Model_new(amp, w_bkg=1.0, w_inmc=0, float_wmc=False)[source]

Bases: Model

This class implements methods to calculate the NLL as well as its derivatives for an amplitude model. It may include data for both signal and background.

Parameters:
  • amp – AllAmplitude object. The amplitude model.

  • w_bkg – Real number. The weight of background.

get_weight_data(data, weight=1.0, bg=None, inmc=None, alpha=True)[source]

Blend data and background together, each multiplied by its weight.

Parameters:
  • data – Data array

  • weight – Weight for data

  • bg – Data array for background

  • alpha – Boolean. If True, the weights are multiplied by a global scale factor \(\alpha\).

Returns:

Data and weight arrays, each of length len(data) + len(bg).

nll(data, mcdata, weight: Tensor = 1.0, batch=None, bg=None)[source]

Calculate NLL.

\[-\ln L = -\sum_{x_i \in data } w_i \ln f(x_i;\theta_k) + (\sum w_j ) \ln \sum_{x_i \in mc } f(x_i;\theta_k)\]
Parameters:
  • data – Data array

  • mcdata – MC data array

  • weight – Weight of each data event

  • batch – The length of the array to calculate as a vector at a time; how the data array is folded may depend on the GPU's capability.

  • bg – Background data array. May be None if there is no background.

Returns:

Real number. The value of NLL.

nll_grad_batch(data, mcdata, weight, mc_weight)[source]

Batch version of self.nll_grad().

nll_grad_hessian(data, mcdata, weight, mc_weight)[source]

The parameters are the same as for self.nll(), but the Hessian is returned as well.

Return NLL:

Real number. The value of NLL.

Return gradients:

List of real numbers. The gradients for each variable.

Return Hessian:

2-D Array of real numbers. The Hessian matrix of the variables.

clip_log(x, _epsilon=1e-06)[source]

Logarithm with the argument clipped from below at _epsilon, so that the NLL stays finite for vanishing PDF values.
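
A minimal sketch of the idea: below _epsilon the logarithm is replaced by a smooth first-order continuation, so the NLL cannot produce -inf or NaN (the library's exact behaviour below the threshold may differ):

    import numpy as np

    def clip_log(x, _epsilon=1e-6):
        """log(x) for x > _epsilon; below that, a linear
        continuation that keeps the value finite and smooth."""
        x = np.asarray(x, dtype=float)
        safe = np.maximum(x, _epsilon)
        return np.where(x > _epsilon,
                        np.log(safe),
                        np.log(_epsilon) + (x - _epsilon) / _epsilon)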

get_shape(x)[source]
sum_grad_hessp(f, p, data, var, weight=1.0, trans=<function identity>, resolution_size=1, args=(), kwargs=None)[source]

The parameters are the same as for sum_gradient(), plus a vector p; instead of the full Hessian, this function returns the Hessian-vector product with p.

Returns:

Real number NLL, list gradient, list Hessian-vector product

sum_gradient(f, data, var, weight=1.0, trans=<function identity>, resolution_size=1, args=(), kwargs=None)[source]

NLL is the sum of trans(f(data)) \(\times\) weight; the gradient is the list of derivatives with respect to each variable in var.

Parameters:
  • f – Function. The amplitude PDF.

  • data – Data array

  • var – List of strings. Names of the trainable variables in the PDF.

  • weight – Weight factor for each data point: either a real number or an array of the same shape as data.

  • trans – Function. Transformation applied to f(data) before it is multiplied by weight.

  • kwargs – Further arguments for f.

Returns:

Real number NLL, list gradient
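
The core of such a function is a weighted reduction under a gradient tape. A simplified TensorFlow sketch of the pattern; batching, resolution_size, and the extra arguments are omitted, and var here holds tf.Variable objects rather than names:

    import tensorflow as tf

    def sum_gradient_sketch(f, data, var, weight=1.0, trans=tf.identity):
        """Weighted sum of trans(f(data)) and its gradient."""
        with tf.GradientTape() as tape:
            total = tf.reduce_sum(weight * trans(f(data)))
        grads = tape.gradient(total, var)
        return total, grads

    # usage: an NLL-style sum with trans = -log
    theta = tf.Variable(0.3)
    x = tf.constant([0.2, 0.5, 0.8])
    nll, g = sum_gradient_sketch(lambda d: 1.0 + theta * d, x, [theta],
                                 trans=lambda y: -tf.math.log(y))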

sum_gradient_new(amp, data, mcdata, weight, mcweight, var, trans=<function log>, w_flatmc=<function <lambda>>, args=(), kwargs=None)[source]

NLL is the sum of trans(f(data)) \(\times\) weight; the gradient is the list of derivatives with respect to each variable in var.

Parameters:
  • amp – Function. The amplitude PDF.

  • data – Data array

  • var – List of strings. Names of the trainable variables in the PDF.

  • weight – Weight factor for each data point: either a real number or an array of the same shape as data.

  • trans – Function. Transformation applied to f(data) before it is multiplied by weight.

  • kwargs – Further arguments for f.

Returns:

Real number NLL, list gradient

sum_hessian(f, data, var, weight=1.0, trans=<function identity>, resolution_size=1, args=(), kwargs=None)[source]

The parameters are the same as for sum_gradient(), but this function also returns the Hessian, the matrix of second-order derivatives.

Returns:

Real number NLL, list gradient, 2-D list hessian
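
The full Hessian can be assembled with nested gradient tapes, differentiating each gradient entry a second time. A compact sketch of the idea for scalar variables, not the library's implementation (entries whose second derivative is identically zero come back as None from the tape):

    import tensorflow as tf

    def sum_hessian_sketch(f, data, var, weight=1.0, trans=tf.identity):
        with tf.GradientTape(persistent=True) as outer:
            with tf.GradientTape() as inner:
                total = tf.reduce_sum(weight * trans(f(data)))
            grads = inner.gradient(total, var)  # one entry per variable
        # second differentiation: row i of the Hessian is d(grads[i])/d(var)
        hessian = [[outer.gradient(g, v) for v in var] for g in grads]
        del outer  # free the persistent tape
        return total, grads, hessian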

sum_hessian_new(amp, data, mcdata, weight, mcweight, var, trans=<function log>, w_flatmc=<function <lambda>>, args=(), kwargs=None)[source]

The parameters are the same as for sum_gradient(), but this function also returns the Hessian, the matrix of second-order derivatives.

Returns:

Real number NLL, list gradient, 2-D list hessian