Welcome to TFPWA’s documentation!

TFPWA is a generic software package intended for Partial Wave Analysis (PWA). It is developed using TensorFlow2 and the calculation is accelerated by GPU. Users may modify the configuration file (in YML format) and write simple scripts to complete the whole analysis. A detailed configuration file sample (with all usable parameters) can be found here.

Install

TFPWA requires some dependency packages. There are two main ways to set them up, conda and virtualenv; you can choose either of them. Alternatively, see 5. Other install method.

1. Virtual environment

To avoid dependency conflicts, we recommend using a virtual environment. There are two main virtual environment tools for python packages, conda and virtualenv; you can choose either of them. Since conda includes cudatoolkit for GPU support, we recommend it for most users.

1.1 conda

  • 1.1.1 Get miniconda for python3 from miniconda3 and install it.

  • 1.1.2 Create a virtual environment by

conda create -n tfpwa

The -n <name> option creates an environment named <name>. You can also use the -p <path> option to create the environment in the <path> directory.

  • 1.1.3 You can activate the environment by

conda activate tfpwa

and then install packages inside the conda environment.

  • 1.1.4 You can exit the environment by

conda deactivate

1.2 virtualenv

  • 1.2.1 You should have a python3 first.

  • 1.2.2 Install virtualenv

python3 -m pip install --user virtualenv
  • 1.2.3 Create a virtual environment

python3 -m virtualenv ./tfpwa

The environment will be stored in the tfpwa directory.

  • 1.2.4 You can activate the environment by

source ./tfpwa/bin/activate
  • 1.2.5 You can exit the environment by

deactivate

2. tensorflow2

The most important package is tensorflow2. We recommend installing tensorflow first, following the install instructions on the tensorflow website (tensorflow.org).

2.1 conda

Here is a simple way to install the tensorflow2 GPU version in a conda environment:

conda install tensorflow-gpu=2.4

This will also install cudatoolkit.

2.2 virtualenv

When using virtualenv, there is also a simple way to install tensorflow2:

python -m pip install tensorflow

Note that you should check your CUDA installation for GPU support.

Note

You can add the -i https://pypi.tuna.tsinghua.edu.cn/simple option to use a PyPI mirror site.

3. Other dependencies

The other dependencies of TFPWA are simple to install.

3.1 Get TFPWA package

Get the package using

git clone https://github.com/jiangyi15/tf-pwa

3.2 conda

3.2.1 Other dependencies

In the conda environment, go into the tf-pwa directory and install the remaining dependencies with

conda install --file requirements-min.txt

Note

We recommend that Ampere card users install with tensorflow_2_6_requirements.txt (see this technical FAQ).

conda install --file tensorflow_2_6_requirements.txt -c conda-forge

3.2.2 TFPWA

Install TFPWA:

python -m pip install -e ./ --no-deps

Use --no-deps to make sure that no PyPI packages will be installed. The -e (editable) option allows the package to be updated by git pull directly.

3.3 virtualenv

In virtualenv, you can install the dependencies and TFPWA together:

python3 -m pip install -e ./

The -e option allows the package to be updated by git pull directly.

4. (optional) Other dependencies.

There are some optional packages, such as uproot for reading ROOT files.

4.1 conda

It can be installed as

conda install uproot -c conda-forge

4.2 virtualenv

It can be installed as

python -m pip install uproot

5. Other install method.

We also provide other install methods.

5.1 conda channel (experimental)

A pre-built conda package (Linux only) is also provided. Just run the following commands to install it:

conda config --add channels jiangyi15
conda install tf-pwa

5.2 pip

When using pip, you will need to install CUDA yourself to use the GPU. Just run the following command:

python3 -m pip install -e .

6. For developers

To contribute to the project, please also install additional developer tools with:

python3 -m pip install -e .[dev]

Amplitude

Helicity Formula

Each decay has an amplitude like

\[A^{A \rightarrow B+C}_{\lambda_{A},\lambda_{B},\lambda_{C}} = H_{\lambda_{B},\lambda_{C}}^{A \rightarrow B+C} D^{J_{A}\star}_{\lambda_{A},\lambda_{B}-\lambda_{C}}(\phi,\theta,0)\]

For a decay chain, the amplitudes can be combined as

\[A^{A \rightarrow R+B,R \rightarrow C+D}_{\lambda_{A},\lambda_{B},\lambda_{C},\lambda_{D}} = \sum_{\lambda_{R}}A^{A \rightarrow R+B}_{\lambda_{A},\lambda_{R},\lambda_{B}} \color{red}{R(m_{R})}\color{black} A^{R \rightarrow C+D} _{\lambda_{R},\lambda_{C},\lambda_{D}}\]

with the helicity angles aligned

\[{\hat{A}}^{A \rightarrow R+B,R \rightarrow C+D}_{\lambda_{A},\lambda_{B},\lambda_{C},\lambda_{D}} = \sum_{\lambda_{B}',\lambda_{C}',\lambda_{D}'}A^{A \rightarrow R+B,R \rightarrow C+D}_{\lambda_{A},\lambda_{B}',\lambda_{C}',\lambda_{D}'} D^{J_{B}\star}_{\lambda_{B}',\lambda_{B}}(\alpha_{B},\beta_{B},\gamma_{B}) D^{J_{C}\star}_{\lambda_{C}',\lambda_{C}}(\alpha_{C},\beta_{C},\gamma_{C}) D^{J_{D}\star}_{\lambda_{D}',\lambda_{D}}(\alpha_{D},\beta_{D},\gamma_{D})\]

and the sum over resonances

\[A_{\lambda_{A},\lambda_{B},\lambda_{C},\lambda_{D}}^{total} = \sum_{R_{1}} {\hat{A}}^{A \rightarrow R_{1}+B,R_{1} \rightarrow C+D}_{\lambda_{A},\lambda_{B},\lambda_{C},\lambda_{D}} + \sum_{R_{2}} {\hat{A}}^{A \rightarrow R_{2}+C,R_{2} \rightarrow B+D}_{\lambda_{A},\lambda_{B},\lambda_{C},\lambda_{D}} + \sum_{R_{3}} {\hat{A}}^{A \rightarrow R_{3}+D,R_{3} \rightarrow B+C}_{\lambda_{A},\lambda_{B},\lambda_{C},\lambda_{D}}\]

then the differential cross-section

\[\frac{\mathrm{d}\sigma}{\mathrm{d}\Phi} = \frac{1}{N}\sum_{\lambda_{A}}\sum_{\lambda_{B},\lambda_{C},\lambda_{D}}|A_{\lambda_{A},\lambda_{B},\lambda_{C},\lambda_{D}}^{total}|^2\]

Amplitude Combination Rules

For a decay process A -> R B, R -> C D, we can get different part of amplitude:

  1. Particle:
    1. Initial state: \(1\)

    2. Final state: \(D(\alpha, \beta, \gamma)\)

    3. Propagator: \(R(m)\)

  2. Decay:

    Two body decay (A -> R B): \(H_{\lambda_R,\lambda_B} D_{\lambda_A, \lambda_R - \lambda_B} (\varphi, \theta,0)\)

Now we can use combination rules to build amplitude for the whole process.

Probability Density:

\(P = |\tilde{A}|^2\) (modulus squared)

Decay Group:

\(\tilde{A} = a_1 A_{R_1} + a_2 A_{R_2} + \cdots\) (addition)

Decay Chain:

\(A_{R} = A_1 \times R \times A_2 \cdots\) (multiplication)

Decay: \(A_i = HD(\varphi, \theta, 0)\)

Particle: \(R(m)\)

The indices are quantum numbers, and they can be summed over automatically.
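Ignoring the helicity indices and D-functions, the combination rules above can be sketched numerically; all masses, widths and couplings below are made-up numbers, not TF-PWA defaults:

```python
import numpy as np

m = np.linspace(0.4, 1.4, 200)  # toy invariant-mass grid

def bw(m, m0, g0):
    """Particle part: a constant-width Breit-Wigner propagator R(m)."""
    return 1.0 / (m0**2 - m**2 - 1j * m0 * g0)

# Decay chain: multiply decay couplings H (taken constant here) and R(m)
chain_R1 = 1.0 * bw(m, 0.77, 0.15) * 1.0
chain_R2 = 1.0 * bw(m, 1.00, 0.10) * 1.0

# Decay group: coherent addition with complex coefficients a_i
a1, a2 = 1.0, 0.5 * np.exp(1j * 0.3)
total = a1 * chain_R1 + a2 * chain_R2

# Probability density: modulus squared
pdf = np.abs(total) ** 2
```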

Default Amplitude Model

The default model for Decay is the helicity amplitude

\[A^{A \rightarrow B C}_{\lambda_A,\lambda_B, \lambda_C} = H_{\lambda_B,\lambda_C}^{A \rightarrow B C} D^{J_{A}*}_{\lambda_A,\lambda_B - \lambda_C}(\phi, \theta, 0).\]

The LS coupling formula is used

\[H_{\lambda_{B},\lambda_{C}}^{A \rightarrow B+C} = \sum_{ls} g_{ls} \sqrt{\frac{2l+1}{2 J_{A}+1}} \langle l 0; s \delta|J_{A} \delta\rangle \langle J_{B} \lambda_{B} ;J_{C} -\lambda_{C} | s \delta \rangle q^{l} B_{l}'(q, q_0, d)\]

\(g_{ls}\) are the fit parameters; the first one is fixed. \(q\) and \(q_0\) are the momenta in the rest frame, evaluated at the invariant mass and at the resonance mass respectively.

\(B_{l}'(q, q_0, d)\) (Bprime) is the Blatt-Weisskopf barrier factor. \(d\) is \(3.0\ \mathrm{GeV}^{-1}\) by default.

The default resonance model is the relativistic Breit-Wigner function

\[R(m) = \frac{1}{m_0^2 - m^2 - i m_0 \Gamma(m)}\]

with running width

\[\Gamma(m) = \Gamma_0 \left(\frac{q}{q_0}\right)^{2L+1}\frac{m_0}{m} B_{L}'^2(q,q_0,d).\]
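The formulas above can be sketched in NumPy as follows. This is not the TF-PWA implementation; the standard Blatt-Weisskopf forms up to \(L = 2\) are assumed:

```python
import numpy as np

def breakup_momentum(m, m1, m2):
    """|p| of the two daughters in the mother rest frame."""
    return np.sqrt((m**2 - (m1 + m2)**2) * (m**2 - (m1 - m2)**2)) / (2 * m)

def bprime(L, q, q0, d=3.0):
    """Blatt-Weisskopf barrier factor B'_L(q, q0, d) for L = 0, 1, 2."""
    z, z0 = (q * d) ** 2, (q0 * d) ** 2
    if L == 0:
        return np.ones_like(z)
    if L == 1:
        return np.sqrt((1 + z0) / (1 + z))
    if L == 2:
        return np.sqrt((9 + 3 * z0 + z0**2) / (9 + 3 * z + z**2))
    raise NotImplementedError("only L <= 2 in this sketch")

def bw_running(m, m0, g0, m1, m2, L=0, d=3.0):
    """Relativistic Breit-Wigner with running width."""
    q, q0 = breakup_momentum(m, m1, m2), breakup_momentum(m0, m1, m2)
    gamma = g0 * (q / q0) ** (2 * L + 1) * (m0 / m) * bprime(L, q, q0, d) ** 2
    return 1.0 / (m0**2 - m**2 - 1j * m0 * gamma)
```

At \(m = m_0\) the running width reduces to \(\Gamma_0\), so the amplitude there is purely imaginary with magnitude \(1/(m_0\Gamma_0)\).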

By using the combination rules, the amplitude is built automatically.

Custom Model

TF-PWA supports custom Particle models; just implement the Particle.get_amp method in a class inherited from Particle:

from tf_pwa.amp import register_particle, Particle

@register_particle("MyModel")
class MyParticle(Particle):
    def get_amp(self, *args, **kwargs):
        print(args, kwargs)
        return 1.0

Then it can be used in config.yml (or Resonances.yml) as model: MyModel. We can access the data used for the amplitude and add some calculation, such as a Breit-Wigner:

from tf_pwa.amp import register_particle, Particle
import tensorflow as tf

@register_particle("BW")
class SimpleBW(Particle):
    def get_amp(self, *args, **kwargs):
        """
        Breit-Wigner formula
        """
        m = args[0]["m"]
        m0 = self.get_mass()
        g0 = self.get_width()
        delta_m = m0*m0 - m * m
        one = tf.ones_like(m)
        ret = 1/tf.complex(delta_m, m0 * g0 * one)
        return ret

Note that we used one to make sure the shapes are the same.

We can also add parameters in the Model init_params using self.add_var(...).

@register_particle("Line")
class LineModel(Particle):
    def init_params(self):
        super(LineModel, self).init_params()
        self.a = self.add_var("a")
    def get_amp(self, data, *args, **kwargs):
        """ model as m + a """
        m = data["m"]
        zeros = tf.zeros_like(m)
        return tf.complex( m + self.a(), zeros)

Then a parameter {particle_name}_a will appear in the parameters; we use self.a() to get its value in get_amp. Note that the return value should be of type tf.complex. All built-in models are located in tf_pwa/amp.py.

Simple Resonance (experimental)

There is a simple method to define Resonance model, like

from tf_pwa.amp import simple_resonance, FloatParams

@simple_resonance("Line_2", params=["a"])
def r_line(m, a: FloatParams = 1.0):
    return m + a

This code will build a class similar to the Line model defined before. By using the inspect module, we can get the FullArgSpec of a function. A keyword argument with the type annotation FloatParams will be treated as a fit parameter.

Note

The first argument has to be the invariant mass m of the resonance.

Decay Topology

A decay chain is a simple tree, from the top particle to the final particles, so a decay chain can be described by Nodes (Decay) and Lines (Particle).

Topology identity: The combination of final particles

For example, the combinations for the decay chains A->RC,R->BD and A->ZB,Z->CD are

{A: [B, C, D], R: [B, D], B: [B], C: [C], D: [D]}

and

{A: [B, C, D], Z: [C, D], B: [B], C: [C], D: [D]}

The items R and Z are not the same, so these are two different topologies.

{A: [B, C, D], B: [B], C: [C], D: [D]}

is the direct A->BCD decay.
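A minimal sketch of computing this topology identity (the function name and the dict-based input format here are made up for illustration, not the TF-PWA API):

```python
def topology_id(chains, finals):
    """Map each particle to the set of final particles it decays into."""
    reach = {p: frozenset([p]) for p in finals}

    def expand(p):
        if p not in reach:
            # union of the final-state sets of all children
            reach[p] = frozenset().union(*(expand(c) for c in chains[p]))
        return reach[p]

    for p in chains:
        expand(p)
    return reach

# A -> R C, R -> B D   versus   A -> Z B, Z -> C D
t1 = topology_id({"A": ["R", "C"], "R": ["B", "D"]}, ["B", "C", "D"])
t2 = topology_id({"A": ["Z", "B"], "Z": ["C", "D"]}, ["B", "C", "D"])
```

The set of combinations differs ({B, D} versus {C, D}), so the two chains belong to different topologies.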

To enumerate all possible decay chain topologies from the final particles:

Starting from a basic line, insert lines to create a full graph.

from a line: A -> B,

insert a line (node0 -> C) and a node (node0):

1. A -> node0, node0 -> B, node0 -> C

insert a line :

1. A -> node0, node0 -> B, node0 -> node1, node1 -> C, node1 -> D

2. A -> node1, node1 -> node0, node0 -> B, node0 -> C, node1 -> D

3. A -> node0, node0 -> node1, node1 -> B, node0 -> C, node1 -> D

These are the three possible decay chains of A -> B,C,D:

1. A -> R+B, R -> C+D

2. A -> R+D, R -> B+C

3. A -> R+C, R -> B+D

The resulting topologies are unique for distinguishable final particles.

Each insertion step deletes one line and adds three new lines. So for a decay process with \(n\) final particles, there are \((2n-3)!!\) possible decay topologies.
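The \((2n-3)!!\) count can be reproduced in a few lines: a topology with \(m\) final particles has \(2m-1\) lines into which the next particle can be inserted, so the counts multiply.

```python
def n_topologies(n):
    """Number of decay topologies for n final particles: (2n-3)!!.

    A tree with m final particles has 2m-1 lines; inserting the next
    particle on any of them multiplies the count by 2m-1.
    """
    count = 1
    for m in range(1, n):
        count *= 2 * m - 1
    return count
```

For example, three final particles give 3 topologies (as enumerated above), four give 15, and five give 105.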

Resonances Parameters

This section is about how Resonances.yml works.

From Resonances.yml to the real model, the following steps are performed.

  1. It is loaded by config.yml and combined with the definitions in the particle part of config.yml.

    For example, if config.yml has

    particle:
       $include: Resonances.yml
       Psi4040:
          float: mg
    

    then the Resonances.yml item

    Psi4040:
       J: 1
       float: m
    

    will become {"Psi4040": {"J": 1, "float": "mg"}}

  2. Some aliases are replaced (m0 -> mass, g0 -> width, …).

  3. If it is used in a decay chain, a Particle object is created.

    The particle class is cls = get_particle_model(item["model"]), and the object is cls(**item).

    All parameters will be stored in config.particle_property[name].
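Step 1 (merging Resonances.yml items with the particle part of config.yml) can be sketched as a plain dictionary update; merge_config is a hypothetical helper, not the TF-PWA API:

```python
def merge_config(resonances, overrides):
    """Combine Resonances.yml items with overrides from config.yml."""
    out = {}
    for name, item in resonances.items():
        merged = dict(item)            # start from the Resonances.yml entry
        merged.update(overrides.get(name, {}))  # config.yml wins on conflicts
        out[name] = merged
    return out

resonances = {"Psi4040": {"J": 1, "float": "m"}}
overrides = {"Psi4040": {"float": "mg"}}
merged = merge_config(resonances, overrides)
```

This reproduces the example above: the J value is kept and the float value is overridden.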

All available parameters can be divided into the following 3 sets.

Common Parameters

Parameters defined in BaseParticle are common parameters including spin, parity, mass and width.

name    default value   comment
J       0               spin, int or half-integer
P       -1              P-parity, +1 or -1
C       None            C-parity, +1 or -1
mass    None            mass, float; it is always required because of the calculation of \(q_0\)
width   None            width, float
spins   None            possible spin projections, [-J, …, J], list

Model Parameters

Parameters defined in the real model; see Available Resonances Model.

These parameters are defined by the user and will be passed to the model class, as the parameters of the __init__(self, **kwargs) method.

For example, the default model (BWR, BaseParticle) has the following parameters:

name            default value            comment
running_width   True                     whether to use running width, bool
bw_l            None (auto determined)   angular momentum used in the running width, int

Other Parameters

There are also some other parameters which are used to control how the program runs.

For example, the following parameters are used by ConfigLoader as simple constraints.

name                   default value   comment
mass_min, mass_max     None            mass range
width_min, width_max   None            width range
float                  []              list of floating parameters

Other examples are the parameters used to build the decay chain of particle R.

name                default value   comment
decay_params        {}              parameters passed to the Decay in which R decays
production_params   {}              parameters passed to the Decay which produces R
model               default         Particle model for R

There are also other commonly used parameters.

name      default value   comment
display   None            plot legend label, a latex string

Phase Space

\(N\) body decay phase space can be defined as

\[\mathrm{d} \Phi(P;p_1,\cdots,p_n) = (2\pi)^4\delta^4(P - \sum {p_i}) \prod \theta(p_i^0)2\pi\delta(p_i^2 - m_i^2)\frac{\mathrm{d}^4 p_i}{(2\pi)^{4}}\]

or, after integrating over \(p_i^0\),

\[\mathrm{d} \Phi(P;p_1,\cdots,p_n) = (2\pi)^4\delta^4(P - \sum {p_i}) \prod \frac{1}{2E_i}\frac{\mathrm{d}^3 \vec{p_i}}{(2\pi)^{3}}\]

by using the property of \(\delta\)-function,

\[\delta(f(x)) = \sum_{i}\frac{\delta(x-x_i)}{|f'(x_i)|}\]

where \(x_i\) is the root of \(f(x)=0\).
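For example, applying this to the energy-conservation delta function that appears in the two-body derivation below, with \(f(|\vec{p}|) = M - \sqrt{|\vec{p}|^2 + m_1^2} - \sqrt{|\vec{p}|^2 + m_2^2}\) and \(|f'(|\vec{p}|)| = |\vec{p}|(E_1 + E_2)/(E_1 E_2) = M|\vec{p}|/(E_1 E_2)\) at the root,

\[\delta(M - \sqrt{|\vec{p}|^2 + m_1^2} - \sqrt{|\vec{p}|^2 + m_2^2}) = \frac{E_1 E_2}{M |\vec{p}|} \delta(|\vec{p}| - |\vec{p}|_0),\]

which is the step used to integrate out \(\mathrm{d}|\vec{p}|\) in the two-body case.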

Phase space obeys the following chain rule,

\[\begin{split}\mathrm{d} \Phi(P;p_1,\cdots,p_n) =& (2\pi)^4\delta^4(P - \sum {p_i}) \prod \frac{1}{2E_i}\frac{\mathrm{d}^3 \vec{p_i}}{(2\pi)^{3}} \\ =& (2\pi)^4\delta^4(P - \sum_{i=0}^{m} {p_i} -q) \prod_{i=0}^{m} \frac{1}{2E_i}\frac{\mathrm{d}^3 \vec{p_i}}{(2\pi)^{3}} \prod_{i=m+1}^{n} \frac{1}{2E_i}\frac{\mathrm{d}^3 \vec{p_i}}{(2\pi)^{3}}\\ & (2\pi)^4\delta^4(q - \sum_{i=m+1}^{n} {p_i})\frac{\mathrm{d}^4 q}{(2\pi)^4}\delta(q^2 - (\sum_{i=m+1}^{n} {p_i})^2)\mathrm{d} q^2 \\ =& \mathrm{d}\Phi(P;p_1,\cdots,p_m,q)\frac{\mathrm{d} q^2}{2\pi}\mathrm{d}\Phi(q;p_{m+1},\cdots p_{n}) \label{chain_decay},\end{split}\]

where \(q = \sum_{i=m+1}^{n}p_i\) is the invariant mass of particles \(m+1,\cdots,n\).

The two-body decay is simple in the center-of-mass frame \(P=(M,0,0,0)\),

\[\begin{split}\mathrm{d} \Phi(P;p_1,p_2) =& (2\pi)^4\delta^4(P - p_1 - p_2) \frac{1}{2E_1}\frac{\mathrm{d}^3 \vec{p_1}}{(2\pi)^{3}} \frac{1}{2E_2}\frac{\mathrm{d}^3 \vec{p_2}}{(2\pi)^{3}} \\ =& 2\pi\delta(M - E_1 - E_2) \frac{1}{2E_1 }\frac{1}{2E_2}\frac{\mathrm{d}^3 \vec{p_2}}{(2\pi)^{3}} \\ =& 2\pi\delta(M - \sqrt{|\vec{p}|^2 + m_1^2} - \sqrt{|\vec{p}|^2 + m_2^2}) \frac{1}{2E_1 }\frac{|\vec{p}|^2}{2E_2}\frac{\mathrm{d} |\vec{p}| \mathrm{d} \Omega}{(2\pi)^{3}} \\ =& \frac{|\vec{p}|}{16 \pi^2 M} \mathrm{d} \Omega\end{split}\]

where \(\mathrm{d} \Omega = \mathrm{d}(\cos\theta)\mathrm{d}\varphi\) and

\[E_1 = \frac{M^2 + m_1^2 - m_2^2 }{2M} ,\ E_2 = \frac{M^2 - m_1^2 + m_2^2 }{2M}\]
\[|\vec{p}| = \frac{\sqrt{(M^2 - (m_1 + m_2)^2)(M^2 -(m_1 - m_2)^2)}}{2M}\]
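These relations are easy to check numerically; the masses below are arbitrary example values:

```python
import numpy as np

def two_body_kinematics(M, m1, m2):
    """Energies and breakup momentum of a two-body decay in the mother rest frame."""
    E1 = (M**2 + m1**2 - m2**2) / (2 * M)
    E2 = (M**2 - m1**2 + m2**2) / (2 * M)
    p = np.sqrt((M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)) / (2 * M)
    return E1, E2, p

E1, E2, p = two_body_kinematics(1.86, 0.494, 0.139)
# E1 + E2 == M and p**2 == E_i**2 - m_i**2 for both daughters
```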

The three-body decay in the center-of-mass frame \(P=(M,0,0,0),q^\star=(m_{23},0,0,0)\),

\[\begin{split}\mathrm{d} \Phi(P;p_1,p_2,p_3) =& \mathrm{d}\Phi(P;p_1,q) \mathrm{d}\Phi(q^\star;p_2^\star,p_3^\star) \frac{\mathrm{d} q^2}{2\pi} \\ =& \frac{|\vec{p_1}||\vec{p_2^\star}|}{(2\pi)^5 16 M m_{23}} \mathrm{d} m_{23}^2 \mathrm{d} \Omega_1 \mathrm{d}\Omega_2^\star \\ =& \frac{|\vec{p_1}||\vec{p_2^\star}|}{(2\pi)^5 8 M} \mathrm{d} m_{23} \mathrm{d} \Omega_1 \mathrm{d}\Omega_2^\star\end{split}\]

The n-body decay in the center-of-mass frame \(P=(M,0,0,0)\),

\[\begin{split}\mathrm{d} \Phi(P;p_1,\cdots,p_n) =& \mathrm{d}\Phi(P;p_1,q_1)\prod_{i=1}^{n-2} \frac{\mathrm{d} q_{i}^2}{2\pi}\mathrm{d}\Phi(q_{i},p_{i+1},p_{i+2})\\ =& \frac{1}{2^{2n-2}(2\pi)^{3n-4}}\frac{|\vec{p_{1}}|}{M} \mathrm{d}\Omega_{1} \prod_{i=1}^{n-2} \frac{|\vec{p_{i+1}^\star}|}{M_{i}} \mathrm{d} M_{i}^2 \mathrm{d}\Omega_{i+1}^\star \\ =& \frac{1}{2^n (2\pi)^{3n-4}}\frac{|\vec{p_{1}}|}{M} \mathrm{d}\Omega_{1} \prod_{i=1}^{n-2} |\vec{p_{i+1}^\star}| \mathrm{d} M_{i} \mathrm{d}\Omega_{i+1}^\star\end{split}\]

where

\[M_{i}^2 = (\sum_{j > i} p_j)^2 ,\ |\vec{p_{i}^\star}| = \frac{\sqrt{(M_i^2 - (M_{i+1} + m_{i+1})^2)(M_i^2 - (M_{i+1} - m_{i+1})^2)}}{2 M_i}\]

with the limits

\[\sum_{j>i} m_{j} < M_{i+1} + m_{i+1} < M_{i} < M_{i-1} - m_{i} < M - \sum_{j \leq i } m_j\]

Phase Space Generator

For n body phase space

\[\mathrm{d} \Phi(P;p_1,\cdots,p_n) = \frac{1}{2^n (2\pi)^{3n-4}} \left( \frac{1}{M}\prod_{i=0}^{n-2}|\vec{p_{i+1}^\star}| \right)\prod_{i=1}^{n-2} \mathrm{d} M_{i} \prod_{i=0}^{n-2} \mathrm{d}\Omega_{i+1}^\star,\]

Taking a weaker condition

\[\sum_{j>i} m_{j} < M_{i} < M - \sum_{j \leq i } m_j,\]

we get a simple bound for the factor term

\[\begin{split}\frac{1}{M}\prod_{i=0}^{n-2}|\vec{p_{i+1}^\star}| =& \frac{1}{M}\prod_{i=0}^{n-2}q(M_i,M_{i+1},m_{i+1}) \\ <& \frac{1}{M}\prod_{i=0}^{n-2}q(\max(M_i),\min(M_{i+1}),m_{i+1})\end{split}\]
    1. Generate \(M_i\) according to the factor term, using the bound above for accept-reject sampling.

    2. Generate the angles \(\mathrm{d}\Omega = \mathrm{d}\cos\theta \mathrm{d}\varphi\).

    3. Boost \(p^\star=(\sqrt{|\vec{p}^\star|^2 + m^2}, |\vec{p}^\star|\sin\theta\cos\varphi, |\vec{p}^\star|\sin\theta\sin\varphi, |\vec{p}^\star|\cos\theta)\) into the same frame.
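The sampling of \(M_i\) in step 1 can be sketched for a three-body decay, where only \(M_1 = m_{23}\) is free (accept-reject against the bound above; the masses are arbitrary example values):

```python
import numpy as np

rng = np.random.default_rng(42)

def q(M, m1, m2):
    """Breakup momentum for M -> m1 m2 (zero below threshold)."""
    s = (M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)
    return np.sqrt(np.maximum(s, 0.0)) / (2 * M)

def gen_m23(n, M, m1, m2, m3):
    """Sample m23 = M_1 for A -> 1 2 3 by accept-reject on the factor term."""
    lo, hi = m2 + m3, M - m1
    # bound on the factor term, from the weaker condition above
    w_max = q(M, lo, m1) * q(hi, m2, m3)
    out = []
    while len(out) < n:
        m23 = rng.uniform(lo, hi, size=n)
        w = q(M, m23, m1) * q(m23, m2, m3)
        out.extend(m23[rng.uniform(0, w_max, size=n) < w])
    return np.array(out[:n])

samples = gen_m23(500, 1.86, 0.494, 0.139, 0.139)
```

All accepted samples lie strictly inside \((m_2+m_3, M-m_1)\), since the weight vanishes at both endpoints.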

Resolution in Partial Wave Analysis

Resolution is an effect of the detector. To treat resolution properly, we need to take a general look at the detection process. We can divide it into two parts. The first part is acceptance, with probability \(\epsilon_{T} (x)\) for a truth value \(x\). The second part is resolution: the measured value \(y\) is a random number based on the truth value \(x\), described by the conditional probability \(R_{T}(y|x)\), which is normalized as \(\int R_{T}(y|x) \mathrm{d} y = 1\). The total effect of the detector is then the transition function

\[T(x,y) = R_{T}(y|x)\epsilon_{T} (x).\]

Given a distribution of the truth value with probability \(p(x)\), the distribution of the measured value has probability

\[p'(y)= \int p(x) T(x,y) \mathrm{d} x.\]

Using Bayes' rule, we can rewrite \(T(x,y)\) as

\[T(x,y) = R(x|y) \epsilon_{R}(y),\]

where

\[\epsilon_{R}(y) = \int T(x,y) \mathrm{d} x, \ R(x|y) = \frac{T(x,y)}{\epsilon_{R}(y)}.\]

\(R(x|y)\) is the posterior probability, i.e. the probability that a certain \(y\) comes from \(x\). \(\epsilon_{R}(y)\) is the projection of \(T(x,y)\) onto \(y\), and is also the normalization factor of \(R(x|y)\).

Then, the probability \(p'(y)\) can be rewritten as

\[p'(y) = \epsilon_{R}(y) \int p(x) R(x|y) \mathrm{d} x.\]

To consider the resolution, we need to determine \(R(x|y)\). Generally, we use simulation to determine \(R(x|y)\). When \(p(x)=1\) is a flat distribution, then the joint distribution of \(x\) and \(y\) has the probability density \(T(x,y)\). We can build a model for this distribution. To get \(R(x|y)\), we only need to do a normalization for \(T(x,y)\).

In PWA, we usually use the MC to do the normalization for signal probability density. We need to calculate the integration of \(p'(y)\) as

\[\int p'(y) \mathrm{d} y = \int p(x) \epsilon_{T} (x) \int R_{T}(y|x) \mathrm{d} y \mathrm{d} x = \int p(x) \epsilon_{T} (x) \mathrm{d} x.\]

The final negative log-likelihood with considering resolution is

\[- \ln L = -\sum \ln \frac{p'(y)}{\int p'(y) \mathrm{d}y} = -\sum \ln \frac{\int p(x) R(x|y) \mathrm{d} x}{ \int p(x) \epsilon_{T} (x) \mathrm{d} x } - \sum \ln \epsilon_{R}(y).\]

The last term is a constant; we can ignore it in the fit. In numerical form, it can be written as

\[- \ln L = -\sum \ln \frac{1}{M}\sum_{x \sim R(x|y)} p(x) + N \ln \sum_{x \sim \epsilon_{T}(x)} p(x).\]

For the second part, for which we already have an MC sample with \(x \sim \epsilon_{T}(x)\), we can use the MC sample to do the sum directly. For the first part, we can generate some \(x\) (\(M\) times) for every \(y\) (\(N\) events). Using the generated samples (\(MN\) events), we can calculate it through the summation.
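The numerical form above can be sketched as follows. This is a toy implementation, not the TF-PWA API; constant offsets such as \(N \ln M\) are dropped, as in the text:

```python
import numpy as np

def nll_with_resolution(p, x_smear, x_phsp):
    """Negative log-likelihood with resolution.

    x_smear: shape (N, M), M truth-level samples x ~ R(x|y) per event y.
    x_phsp:  truth-level MC sample x ~ eps_T(x) for the normalization.
    """
    signal = np.mean(p(x_smear), axis=1)  # (1/M) sum_{x ~ R(x|y)} p(x)
    norm = np.mean(p(x_phsp))             # proportional to sum_{x ~ eps_T} p(x)
    return -np.sum(np.log(signal)) + x_smear.shape[0] * np.log(norm)

# sanity check: a flat p(x) gives a likelihood that is constant,
# i.e. zero up to the dropped constant terms
flat = nll_with_resolution(np.ones_like, np.ones((10, 5)), np.ones(100))
```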

In addition, we can insert importance-sampling weights into the summation as

\[\int p(x) R(x|y) \mathrm{d} x \approx \frac{1}{\sum w_i} \sum_{x\sim \frac{R(x|y)}{w_i(x)}} w_i p(x).\]

We need to keep the normalization. For example, we can use Gauss-Hermite quadrature.

In a simple situation, we use only the mass as the variable of the resolution function. We can build the datasets by replacing the mass with random numbers based on the resolution function, keeping the other variables the same and applying some constraints.

Once we get such datasets, we can use the likelihood method to fit the dataset with resolution. There is an example in checks.

Some examples

Examples for particle model

Examples for Plotter class

Particle and amplitude

Examples for error propagation

Examples for config.yml file

Examples for particle model

The decay system is modeled as:

DecayGroup

DecayChain

Decay

Particle

import matplotlib.pyplot as plt

from tf_pwa.amp import Decay, DecayChain, DecayGroup, Particle
from tf_pwa.vis import plot_decay_struct

We can easily create some instances of Particle and then combine them into Decays.

a = Particle("A")
b = Particle("B")
c = Particle("C")
d = Particle("D")

r = Particle("R")

dec1 = Decay(a, [r, b])
dec2 = Decay(r, [c, d])

DecayChain is a list of Decays.

decay_chain = DecayChain([dec1, dec2])
decay_chain
[A->R+B, R->C+D]

We can plot it using matplotlib.

plot_decay_struct(decay_chain)
plt.show()

DecayGroup is a list of DecayChains with the same initial and final states.

decay_group = DecayGroup([decay_chain])
decay_group
[[A->R+B, R->C+D]]

We can build a simple function to infer the charge from final states.

def charge_infer(dec, charge_map):
    # use particle equality condition
    cached_charge = {Particle(k): v for k, v in charge_map.items()}
    # loop over all decays in the decay chain
    for i, dec_i in dec.depth_first(False):
        # all outgoing particles have charges
        assert all(i in cached_charge for i in dec_i.outs)
        # the charge of the core particle is the sum of the outgoing charges
        cached_charge[dec_i.core] = sum(cached_charge[i] for i in dec_i.outs)
    return cached_charge


charges = {
    "B": -1,
    "C": 0,
    "D": 1,
}

charge_infer(decay_chain, charges)
{B: -1, C: 0, D: 1, R: 1, A: 0}

See more in cal_chain_boost.

Total running time of the script: (0 minutes 0.118 seconds)

Gallery generated by Sphinx-Gallery

Examples for Plotter class

Plotter is the new API for partial wave plots.

First, we can build a simple config.

config_str = """

decay:
    A:
       - [R1, B]
       - [R2, C]
       - [R3, D]
    R1: [C, D]
    R2: [B, D]
    R3: [B, C]

particle:
    $top:
       A: { mass: 1.86, J: 0, P: -1}
    $finals:
       B: { mass: 0.494, J: 0, P: -1}
       C: { mass: 0.139, J: 0, P: -1}
       D: { mass: 0.139, J: 0, P: -1}
    R1: [ R1_a, R1_b ]
    R1_a: { mass: 0.7, width: 0.05, J: 1, P: -1}
    R1_b: { mass: 0.5, width: 0.05, J: 0, P: +1}
    R2: { mass: 0.824, width: 0.05, J: 0, P: +1}
    R3: { mass: 0.824, width: 0.05, J: 0, P: +1}


plot:
    mass:
        R1:
            display: "m(R1)"
        R2:
            display: "m(R2)"
"""

import matplotlib.pyplot as plt
import yaml

from tf_pwa.config_loader import ConfigLoader

config = ConfigLoader(yaml.full_load(config_str))

We set the parameters to balanced values. Then we can generate some toy data and calculate the weights.

input_params = {
    "A->R1_a.BR1_a->C.D_total_0r": 6.0,
    "A->R1_b.BR1_b->C.D_total_0r": 1.0,
    "A->R2.CR2->B.D_total_0r": 2.0,
    "A->R3.DR3->B.C_total_0r": 1.0,
}
config.set_params(input_params)

data = config.generate_toy(1000)
phsp = config.generate_phsp(10000)

The plotter can be created directly from the config:

plotter = config.get_plotter(datasets={"data": [data], "phsp": [phsp]})

Plotting all partial waves is simple:

plotter.plot_frame("m_R1")
plt.show()

We can also plot other variables in the data:

from tf_pwa.data import data_index

m2 = config.get_data_index("mass", "R2")
m3 = config.get_data_index("mass", "R3")


def f(data):
    return data_index(data, m2) - data_index(data, m3)


plt.clf()
plotter.plot_var(f)
plt.xlabel("m(R2)-m(R3)")
plt.show()

There are 3 main parts in a Plotter

  1. PlotAllData: datasets with weights. There are three levels: (1) idx: datasets for a combined fit; (2) type: data, mc, or bg; (3) observations and weights, where the weights are used for the partial waves.

  2. Frame: a function to get the observations. It is similar to RooFit's Frame.

  3. Styles: plot styles for the different components.

The plot process is as follows:

  1. Plotter.plot_item, extra_plot_item, and hidden_plot_item provide the list of histograms for plotting.

  2. Loop over all data to get the observations through the frame.

  3. The Frame provides the binning and weights from the data; their combination is a histogram.

  4. Plot on the axis with the styles.

Total running time of the script: (0 minutes 8.378 seconds)


Particle and amplitude

Amplitude = DecayGroup + Variable

We will use the following parameters for a toy model:

from tf_pwa.amp import DecayChain, DecayGroup, get_decay, get_particle

resonances = {
    "R0": {"J": 0, "P": 1, "mass": 1.0, "width": 0.07},
    "R1": {"J": 0, "P": 1, "mass": 1.0, "width": 0.07},
    "R2": {"J": 1, "P": -1, "mass": 1.225, "width": 0.08},
}

a, b, c, d = [get_particle(i, J=0, P=-1) for i in "ABCD"]
r1, r2, r3 = [get_particle(i, **resonances[i]) for i in resonances.keys()]


decay_group = DecayGroup(
    [
        DecayChain([get_decay(a, [r1, c]), get_decay(r1, [b, d])]),
        DecayChain([get_decay(a, [r2, b]), get_decay(r2, [c, d])]),
        DecayChain([get_decay(a, [r3, b]), get_decay(r3, [c, d])]),
    ]
)

The above parts can be represented as config.yml used by ConfigLoader.

We can get an AmplitudeModel from the decay_group and an optional VarsManager. It has parameters, so we can get and set parameters for the amplitude model:

from tf_pwa.amp import AmplitudeModel
from tf_pwa.variable import VarsManager

vm = VarsManager()
amp = AmplitudeModel(decay_group, vm=vm)

print(amp.get_params())
amp.set_params(
    {
        "A->R0.CR0->B.D_total_0r": 1.0,
        "A->R1.BR1->C.D_total_0r": 1.0,
        "A->R2.BR2->C.D_total_0r": 7.0,
    }
)
{'R0_mass': 1.0, 'R0_width': 0.07, 'R1_mass': 1.0, 'R1_width': 0.07, 'R2_mass': 1.225, 'R2_width': 0.08, 'A->R0.CR0->B.D_total_0r': 1.2912955649770161, 'A->R0.CR0->B.D_total_0i': -2.55000670261145, 'A->R0.C_g_ls_0r': 1.0, 'A->R0.C_g_ls_0i': 0.0, 'R0->B.D_g_ls_0r': 1.0, 'R0->B.D_g_ls_0i': 0.0, 'A->R1.BR1->C.D_total_0r': 1.529102980783036, 'A->R1.BR1->C.D_total_0i': 0.4606078681122159, 'A->R1.B_g_ls_0r': 1.0, 'A->R1.B_g_ls_0i': 0.0, 'R1->C.D_g_ls_0r': 1.0, 'R1->C.D_g_ls_0i': 0.0, 'A->R2.BR2->C.D_total_0r': 1.0615105541261456, 'A->R2.BR2->C.D_total_0i': -1.4925394077297083, 'A->R2.B_g_ls_0r': 1.0, 'A->R2.B_g_ls_0i': 0.0, 'R2->C.D_g_ls_0r': 1.0, 'R2->C.D_g_ls_0i': 0.0}

For the calculation, we generate some phase space data.

from tf_pwa.phasespace import PhaseSpaceGenerator

m_A, m_B, m_C, m_D = 1.8, 0.18, 0.18, 0.18
p1, p2, p3 = PhaseSpaceGenerator(m_A, [m_B, m_C, m_D]).generate(100000)

and then calculate the helicity angles from the data:

from tf_pwa.cal_angle import cal_angle_from_momentum

data = cal_angle_from_momentum({b: p1, c: p2, d: p3}, decay_group)

We can index the mass from the data as

from tf_pwa.data import data_index

m_bd = data_index(data, ("particle", "(B, D)", "m"))
# m_bc = data_index(data, ("particle", "(B, C)", "m"))
m_cd = data_index(data, ("particle", "(C, D)", "m"))

Note

If the DecayGroup does not include a resonance in (B, C), the data will not include its mass either. We can use different DecayGroups for cal_angle and AmplitudeModel as long as they have the same initial and final particles.

The amplitude squared is calculated by the amplitude model simply as

amp_s2 = amp(data)
/home/docs/checkouts/readthedocs.org/user_builds/tf-pwa/checkouts/stable/docs/../tf_pwa/amp/core.py:803: UserWarning: no mass for particle A, set it to 1.800000022927442
  warnings.warn(
/home/docs/checkouts/readthedocs.org/user_builds/tf-pwa/checkouts/stable/docs/../tf_pwa/amp/core.py:803: UserWarning: no mass for particle C, set it to 0.17999999999999977
  warnings.warn(
/home/docs/checkouts/readthedocs.org/user_builds/tf-pwa/checkouts/stable/docs/../tf_pwa/amp/core.py:803: UserWarning: no mass for particle B, set it to 0.17999999999999977
  warnings.warn(
/home/docs/checkouts/readthedocs.org/user_builds/tf-pwa/checkouts/stable/docs/../tf_pwa/amp/core.py:803: UserWarning: no mass for particle D, set it to 0.17999999999999977
  warnings.warn(

Now by using matplotlib we can get the Dalitz plot as

import matplotlib.pyplot as plt

plt.clf()
plt.hist2d(
    m_bd.numpy() ** 2,
    m_cd.numpy() ** 2,
    weights=amp_s2.numpy(),
    bins=60,
    cmin=1,
    cmap="jet",
)
plt.colorbar()
plt.xlabel("$m^2(BD)$")
plt.ylabel("$m^2(CD)$")
plt.show()

Total running time of the script: (0 minutes 4.900 seconds)


Examples for error propagation

Here we use the same config as in particle_amplitude.py.

config_str = """
decay:
    A:
       - [R1, B]
       - [R2, C]
       - [R3, D]
    R1: [C, D]
    R2: [B, D]
    R3: [B, C]

particle:
    $top:
       A: { mass: 1.86, J: 0, P: -1}
    $finals:
       B: { mass: 0.494, J: 0, P: -1}
       C: { mass: 0.139, J: 0, P: -1}
       D: { mass: 0.139, J: 0, P: -1}
    R1: [ R1_a, R1_b ]
    R1_a: { mass: 0.7, width: 0.05, J: 1, P: -1}
    R1_b: { mass: 0.5, width: 0.05, J: 0, P: +1}
    R2: { mass: 0.824, width: 0.05, J: 0, P: +1}
    R3: { mass: 0.824, width: 0.05, J: 0, P: +1}

"""

import matplotlib.pyplot as plt
import yaml

from tf_pwa.config_loader import ConfigLoader
from tf_pwa.histogram import Hist1D

config = ConfigLoader(yaml.full_load(config_str))
input_params = {
    "A->R1_a.BR1_a->C.D_total_0r": 6.0,
    "A->R1_b.BR1_b->C.D_total_0r": 1.0,
    "A->R2.CR2->B.D_total_0r": 2.0,
    "A->R3.DR3->B.C_total_0r": 1.0,
}
config.set_params(input_params)

data = config.generate_toy(1000)
phsp = config.generate_phsp(10000)

After calculating the parameter errors, we have an error matrix config.inv_he (the inverse Hessian). Such a matrix can be saved directly by numpy.save and loaded by numpy.load.

config.get_params_error(data=[data], phsp=[phsp])
Using Model
Time for calculating errors: 1.2023131847381592
hesse_error: [0.06538996515565601, 0.1702258540059552, 0.13332458090882404, 0.21488220658000598, 0.07176595653506987, 0.1430663889457865]

{'A->R1_b.BR1_b->C.D_total_0r': 0.06538996515565601, 'A->R1_b.BR1_b->C.D_total_0i': 0.1702258540059552, 'A->R2.CR2->B.D_total_0r': 0.13332458090882404, 'A->R2.CR2->B.D_total_0i': 0.21488220658000598, 'A->R3.DR3->B.C_total_0r': 0.07176595653506987, 'A->R3.DR3->B.C_total_0i': 0.1430663889457865}
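The save/load round trip mentioned above can be sketched with plain numpy (the matrix and the file name here are stand-ins, not the fit output):

```python
import os
import tempfile

import numpy as np

# a stand-in for config.inv_he: any square error matrix
inv_he = np.array([[0.04, 0.01], [0.01, 0.09]])

# save the matrix to a .npy file ...
path = os.path.join(tempfile.mkdtemp(), "error_matrix.npy")
np.save(path, inv_he)

# ... and load it back later, e.g. to reuse the errors without refitting
loaded = np.load(path)
print(np.allclose(loaded, inv_he))  # True
```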

We can perform the error propagation with the following formula

\[\sigma_{f} = \sqrt{\frac{\partial f}{\partial x_i} V_{ij} \frac{\partial f}{\partial x_j }}\]

by adding some calculations here. Note that we need to use TensorFlow functions instead of those from math or numpy.

import tensorflow as tf

with config.params_trans() as pt:
    a2_r = pt["A->R2.CR2->B.D_total_0r"]
    a2_i = pt["A->R2.CR2->B.D_total_0i"]
    a2_x = a2_r * tf.cos(a2_i)

Then we can calculate the error we need as

print(a2_x.numpy(), pt.get_error(a2_x).numpy())
-0.8322936730942848 0.2979459992851929
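The propagation formula above can be checked with a small standalone numpy sketch (toy numbers, not the fit output):

```python
import numpy as np

# toy covariance matrix V of the parameters x = (r, phi)
V = np.array([[0.04, 0.0],
              [0.0, 0.01]])
r, phi = 2.0, 0.5

# f(x) = r * cos(phi), gradient df/dx = (cos(phi), -r*sin(phi))
grad = np.array([np.cos(phi), -r * np.sin(phi)])

# sigma_f = sqrt(grad . V . grad)
sigma_f = np.sqrt(grad @ V @ grad)
print(sigma_f)  # about 0.2
```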

We can also calculate some more complex examples, such as the ratio of the mass range (0.75, 0.85) over the full phase space. Going even further, we can get the error of the error in the sense of error propagation.

from tf_pwa.data import data_mask

m_R2 = phsp.get_mass("(B, D)")
cut_cond = (m_R2 < 0.85) & (m_R2 > 0.75)

amp = config.get_amplitude()

with config.params_trans() as pt1:
    with config.params_trans() as pt:
        int_mc = tf.reduce_sum(amp(phsp))
        cut_phsp = data_mask(phsp, cut_cond)
        cut_int_mc = tf.reduce_sum(amp(cut_phsp))
        ratio = cut_int_mc / int_mc
    error = pt.get_error(ratio)

print(ratio.numpy(), "+/-", error.numpy())
print(error.numpy(), "+/-", pt1.get_error(error).numpy())
0.32915944692985494 +/- 0.01114039004766758
0.01114039004766758 +/- 0.0004925148705355855

Total running time of the script: (0 minutes 7.039 seconds)


Examples for config.yml file

The configuration file config.yml uses the YAML (https://yaml.org) format to describe the decay process.

The main parts of config.yml are decay and particle.

The decay part describes which particles a particle (or an id for a list of particles) decays into. Its value can be a list or a list of lists: a single list means there is only one decay mode, while a list of lists enumerates the possible decay modes. Each list item can be a particle name (or a dict describing the decay parameters). All names should appear in the particle part.

The particle part describes the parameters of the particles. There are two special sections, $top and $finals, describing the top and final particles. The other entries are either lists of particle names or dicts of particle parameters. A list groups particles of the same type in the decay part; a dict gives the parameters of the named particle.
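As a plain-Python illustration of these rules (the parsed YAML is just nested dicts and lists; the names match the example below):

```python
# parsed form of a decay section: A has several decay modes (a list of
# lists), while R1 has a single decay mode (one list)
decay = {
    "A": [["R1", "B"], ["R2", "C"], ["R3", "D"]],
    "R1": ["C", "D"],
}

def decay_modes(item):
    """Normalize a decay entry to a list of decay modes."""
    # a list of lists already enumerates the modes;
    # a single list means there is only one decay mode
    if all(isinstance(i, list) for i in item):
        return item
    return [item]

print(decay_modes(decay["A"]))   # three modes
print(decay_modes(decay["R1"]))  # one mode: [['C', 'D']]
```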

config_str = """

decay:
    A:
       - [R1, B]
       - [R2, C]
       - [R3, D]
    R1: [C, D]
    R2: [B, D]
    R3: [B, C]

particle:
    $top:
       A: { mass: 1.86, J: 0, P: -1}
    $finals:
       B: { mass: 0.494, J: 0, P: -1}
       C: { mass: 0.139, J: 0, P: -1}
       D: { mass: 0.139, J: 0, P: -1}
    R1: [ R1_a, R1_b ]
    R1_a: { mass: 0.7, width: 0.05, J: 1, P: -1}
    R1_b: { mass: 0.5, width: 0.05, J: 0, P: +1}
    R2: { mass: 0.824, width: 0.05, J: 0, P: +1}
    R3: { mass: 0.824, width: 0.05, J: 0, P: +1}

"""

The config file can be loaded with the yaml library.

import matplotlib.pyplot as plt
import yaml

from tf_pwa.config_loader import ConfigLoader
from tf_pwa.histogram import Hist1D

The simplest way to create the config is to write it to a config.yml file and load it as config = ConfigLoader("config.yml"). Here we use config_str directly.

config = ConfigLoader(yaml.full_load(config_str))

We set the parameters to suitable values, so that we can generate some toy data and calculate the weights. The full parameter list can be found by print(config.get_params()).

input_params = {
    "A->R1_a.BR1_a->C.D_total_0r": 6.0,
    "A->R1_b.BR1_b->C.D_total_0r": 1.0,
    "A->R2.CR2->B.D_total_0r": 2.0,
    "A->R3.DR3->B.C_total_0r": 1.0,
}
config.set_params(input_params)
True

Here we generate some toy data and phase-space (PHSP) MC to show the model

data = config.generate_toy(1000)
phsp = config.generate_phsp(10000)

You can also fit the model to the data. The arguments can be omitted when they are written in config.yml.

fit_result = config.fit([data], [phsp])
# After the fit, you can get the uncertainties as
err = config.get_params_error(fit_result, [data], [phsp])
Using Model
decay chains included:
   [A->R1_a+B, R1_a->C+D]  ls:  ((1, 1),) ((1, 0),)
   [A->R1_b+B, R1_b->C+D]  ls:  ((0, 0),) ((0, 0),)
   [A->R2+C, R2->B+D]  ls:  ((0, 0),) ((0, 0),)
   [A->R3+D, R3->B+C]  ls:  ((0, 0),) ((0, 0),)

########### initial parameters
{
  "R1_a_mass": 0.7,
  "R1_a_width": 0.05,
  "R1_b_mass": 0.5,
  "R1_b_width": 0.05,
  "R2_mass": 0.824,
  "R2_width": 0.05,
  "R3_mass": 0.824,
  "R3_width": 0.05,
  "A->R1_a.BR1_a->C.D_total_0r": 6.0,
  "A->R1_a.BR1_a->C.D_total_0i": 0.0,
  "A->R1_a.B_g_ls_0r": 1.0,
  "A->R1_a.B_g_ls_0i": 0.0,
  "R1_a->C.D_g_ls_0r": 1.0,
  "R1_a->C.D_g_ls_0i": 0.0,
  "A->R1_b.BR1_b->C.D_total_0r": 1.0,
  "A->R1_b.BR1_b->C.D_total_0i": 2.1190235837811127,
  "A->R1_b.B_g_ls_0r": 1.0,
  "A->R1_b.B_g_ls_0i": 0.0,
  "R1_b->C.D_g_ls_0r": 1.0,
  "R1_b->C.D_g_ls_0i": 0.0,
  "A->R2.CR2->B.D_total_0r": 2.0,
  "A->R2.CR2->B.D_total_0i": -0.0028244390488301185,
  "A->R2.C_g_ls_0r": 1.0,
  "A->R2.C_g_ls_0i": 0.0,
  "R2->B.D_g_ls_0r": 1.0,
  "R2->B.D_g_ls_0i": 0.0,
  "A->R3.DR3->B.C_total_0r": 1.0,
  "A->R3.DR3->B.C_total_0i": 2.4902939992162327,
  "A->R3.D_g_ls_0r": 1.0,
  "A->R3.D_g_ls_0i": 0.0,
  "R3->B.C_g_ls_0r": 1.0,
  "R3->B.C_g_ls_0i": 0.0
}
initial NLL:  tf.Tensor(-916.7861574062281, shape=(), dtype=float64)
nll_grad  cost time: 0.4711449146270752
nll_grad  cost time: 0.4763481616973877
nll_grad  cost time: 0.4744119644165039
tf.Tensor(-918.0313860109773, shape=(), dtype=float64)
nll_grad  cost time: 0.4681389331817627
nll_grad  cost time: 0.4773573875427246
tf.Tensor(-918.321175583159, shape=(), dtype=float64)
nll_grad  cost time: 0.4857463836669922
tf.Tensor(-918.6843661108414, shape=(), dtype=float64)
nll_grad  cost time: 0.47803568840026855
nll_grad  cost time: 0.4770476818084717
tf.Tensor(-918.7214532692033, shape=(), dtype=float64)
nll_grad  cost time: 0.49227023124694824
nll_grad  cost time: 0.4756491184234619
tf.Tensor(-918.7405082260302, shape=(), dtype=float64)
nll_grad  cost time: 0.4837212562561035
nll_grad  cost time: 0.4724407196044922
tf.Tensor(-918.7425728780827, shape=(), dtype=float64)
nll_grad  cost time: 0.4707956314086914
tf.Tensor(-918.7465163214356, shape=(), dtype=float64)
nll_grad  cost time: 0.4769773483276367
tf.Tensor(-918.7534349241469, shape=(), dtype=float64)
nll_grad  cost time: 0.4826357364654541
tf.Tensor(-918.7614291844357, shape=(), dtype=float64)
nll_grad  cost time: 0.48675966262817383
tf.Tensor(-918.7615933096431, shape=(), dtype=float64)
nll_grad  cost time: 0.5060186386108398
tf.Tensor(-918.7615933207944, shape=(), dtype=float64)
Optimization terminated successfully.
         Current function value: -918.761593
         Iterations: 11
         Function evaluations: 17
         Gradient evaluations: 17
  message: Optimization terminated successfully.
  success: True
   status: 0
      fun: -918.7615933207944
        x: [ 1.011e+00  2.008e+00  1.885e+00 -8.322e-02  9.890e-01
             2.496e+00]
      nit: 11
      jac: [ 2.629e-06 -5.104e-06 -3.998e-08 -2.505e-07  1.086e-05
             3.129e-06]
 hess_inv: [[ 3.611e-03 -3.190e-04 ...  2.748e-03  3.318e-04]
            [-3.190e-04  1.401e-02 ...  9.223e-05  2.531e-03]
            ...
            [ 2.748e-03  9.223e-05 ...  5.406e-03  1.563e-03]
            [ 3.318e-04  2.531e-03 ...  1.563e-03  1.568e-02]]
     nfev: 17
     njev: 17
fit  cost time: 8.683003902435303
Using Model
Time for calculating errors: 1.1689956188201904
hesse_error: [0.05944145521517815, 0.11791255758827061, 0.10973915034896069, 0.09646762447304948, 0.07283673741311344, 0.12476398105725298]

We can see that the fit results are consistent with the inputs; the first one is fixed.

for var in input_params:
    print(
        f"in: {input_params[var]} => out: {fit_result.params[var]} +/- {err.get(var, 0.)}"
    )
in: 6.0 => out: 6.0 +/- 0.0
in: 1.0 => out: 1.0111732401399116 +/- 0.05944145521517815
in: 2.0 => out: 1.885346604737314 +/- 0.10973915034896069
in: 1.0 => out: 0.9890204748489917 +/- 0.07283673741311344

We can use the amplitude to plot the fit results

amp = config.get_amplitude()

This is the total \(|A|^2\)

weight = amp(phsp)

This is the \(|A_i|^2\) for each decay chain

partial_weight = amp.partial_weight(phsp)

We can plot the data; Hist1D includes some plotting methods based on matplotlib.

data_hist = Hist1D.histogram(
    data.get_mass("(C, D)"), bins=60, range=(0.25, 1.45)
)

Read the mass variable from phsp.

mass_phsp = phsp.get_mass("(C, D)")

The helicity angle can be read as phsp.get_angle("(C, D)", "C"). The first argument determines the decay chain (a decay chain includes all particles); the second determines the angle in the decay chain. The return value is a dict like {"alpha": phi, "beta": theta, "gamma": 0}.

phsp_hist = Hist1D.histogram(
    mass_phsp, weights=weight, bins=60, range=(0.25, 1.45)
)
scale = phsp_hist.scale_to(data_hist)

pw_hist = []
for w in partial_weight:
    # here we used more bins for a smooth plot
    hist = Hist1D.histogram(
        mass_phsp, weights=w, bins=60 * 2, range=(0.25, 1.45)
    )
    pw_hist.append(scale * hist * 2)
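The scale_to step can be sketched in plain numpy: scale the weighted phase-space histogram so its integral matches the data histogram (a simplified stand-in for the assumed Hist1D behavior, with toy numbers):

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.uniform(0.25, 1.45, size=1000)    # stand-in for toy data masses
phsp = rng.uniform(0.25, 1.45, size=10000)   # stand-in for phsp masses
weight = np.ones_like(phsp)                  # stand-in for amp(phsp)

data_hist, edges = np.histogram(data, bins=60, range=(0.25, 1.45))
phsp_hist, _ = np.histogram(phsp, bins=60, range=(0.25, 1.45), weights=weight)

# scale the model histogram to the same number of events as the data
scale = data_hist.sum() / phsp_hist.sum()
print(np.isclose((scale * phsp_hist).sum(), data_hist.sum()))  # True
```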

Then we can draw the histograms with matplotlib

for hist, dec in zip(pw_hist, config.get_decay()):
    hist.draw_kde(label=str(dec))
phsp_hist.draw(label="total amplitude")
data_hist.draw_error(label="toy data", color="black")

plt.legend()
plt.ylim((0, None))
plt.xlabel("M(CD)/ GeV")
plt.ylabel("Events/ 20 MeV")
plt.show()
[figure: ex3 particle config (fit result plot)]

Total running time of the script: (0 minutes 16.396 seconds)



Setup for Developer Environment

Note

Before developing, creating a standalone environment is recommended (see https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-with-commands for more).

The main steps are similar to a normal install; only two extra things need to be done.

The first is writing tests and testing your code. We use the pytest (https://docs.pytest.org/en/stable/) framework; you should install it:

conda install pytest pytest-cov pytest-benchmark

The other is pre-commit, which is needed for developing.

  1. You can install pre-commit as

conda install pre-commit

  2. and then enable pre-commit in the source dir

conda install pylint # local dependences
pre-commit install

You can check if pre-commit is working well by running

pre-commit run -a

It may take some time to install the required packages.

Note

If there are some GLIBC_XXX errors at this step, you can try to install node.js.

Note

For developers using an editor with a formatter, be careful with its options.

The following are all the commands needed:

# create environment
conda create -n tfpwa2 python=3.7 -y
conda activate tfpwa2

# install tf-pwa
conda install --file requirements-min.txt -y
python -m pip install -e . --no-deps
# install pytest
conda install pytest pytest-cov -y
# install pylint local
conda install pylint
# install pre-commit
conda install pre-commit -c conda-forge -y
pre-commit install

API

tf_pwa

Partial Wave Analysis program using TensorFlow

Submodules and Subpackages

amp

Basic Amplitude Calculations. A partial wave analysis process has the following structure:

DecayGroup: addition (+)
DecayChain: multiplication (x)

Decay, Particle(Propagator)

Submodules and Subpackages

Kmatrix
Bl(l, q, q0, d=3)[source]
Fb(l, q, d=3)[source]
KMatrix_single(n_pole, m1, m2, l, d=3, bkg=0, Kb=0)[source]
class KmatrixSingleChannelParticle(*args, **kwargs)[source]

Bases: Particle

K matrix model for single channel multi pole.

\[K = \sum_{i} \frac{m_i \Gamma_i(m)}{m_i^2 - m^2 }\]
\[P = \sum_{i} \frac{\beta_i m_0 \Gamma_0 }{ m_i^2 - m^2}\]

the barrier factor is included in gls

\[R(m) = (1-iK)^{-1} P\]

Required parameters: mass_list: [pole1, pole2] and width_list: [width1, width2].
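For a single channel, K and P are scalars, so the matrix inverse in R(m) reduces to a division. A hypothetical standalone sketch of the formulas above (constant widths, no barrier factor; not the library implementation):

```python
import numpy as np

def kmatrix_single(m, mass_list, width_list, beta_list):
    """Single-channel K-matrix: R(m) = P / (1 - i K) with scalar K and P."""
    # K = sum_i m_i Gamma_i / (m_i^2 - m^2), with constant widths
    K = sum(mi * gi / (mi**2 - m**2) for mi, gi in zip(mass_list, width_list))
    # P = sum_i beta_i m_i Gamma_i / (m_i^2 - m^2)
    P = sum(bi * mi * gi / (mi**2 - m**2)
            for bi, mi, gi in zip(beta_list, mass_list, width_list))
    return P / (1 - 1j * K)

m = np.linspace(0.3, 1.2, 5)
amp = kmatrix_single(m, [0.7, 1.0], [0.05, 0.08], [1.0, 0.5 + 0.5j])
print(np.abs(amp))
```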

[figure: tf_pwa-amp-Kmatrix-1.png]
get_amp(*args, **kwargs)[source]
get_beta()[source]
get_gi()[source]
get_mi()[source]
init_params()[source]
model_name = 'KMatrixSingleChannel'
class KmatrixSplitLSParticle(*args, **kwargs)[source]

Bases: Particle

K matrix model for single channel multi pole and the same channel with different (l, s) coupling.

\[K_{a,b} = \sum_{i} \frac{m_i \sqrt{\Gamma_{a,i}(m)\Gamma_{b,i}(m)}}{m_i^2 - m^2 }\]
\[P_{b} = \sum_{i} \frac{\beta_i m_0 \Gamma_{b,i0} }{ m_i^2 - m^2}\]

the barrier factor is included in gls

\[R(m) = (1-iK)^{-1} P\]
get_amp(*args, **kwargs)[source]
get_beta()[source]
get_gi()[source]
get_gi_frac()[source]
get_ls_amp(m)[source]
get_mi()[source]
init_params()[source]
model_name = 'KMatrixSplitLS'
class ParticleDecayLSKmatrix(*args, **kwargs)[source]

Bases: HelicityDecay

get_ls_amp(data, data_p, **kwargs)[source]
init_params()[source]
model_name = 'LS-decay-Kmatrix'
get_relative_p(m0, m1, m2)[source]
opt_lambdify(args, expr, **kwargs)[source]
amp
class AbsPDF(*args, name='', vm=None, polar=None, use_tf_function=False, no_id_cached=False, jit_compile=False, **kwargs)[source]

Bases: object

cached_available()[source]
get_params(trainable_only=False)[source]
mask_params(var)[source]
set_params(var)[source]
temp_params(var)[source]
property trainable_variables
property variables
class AmplitudeModel(decay_group, **kwargs)[source]

Bases: BaseAmplitudeModel

partial_weight(data, combine=None)[source]
class BaseAmplitudeModel(decay_group, **kwargs)[source]

Bases: AbsPDF

cache_data(data, split=None, batch=None)[source]
cached_available()[source]
chains_particle()[source]
factor_iteration(deep=2)[source]
init_params(name='')[source]
partial_weight(data, combine=None)[source]
partial_weight_interference(data)[source]
pdf(data)[source]
set_used_chains(used_chains)[source]
set_used_res(res)[source]
temp_total_gls_one()[source]
temp_used_res(res)[source]
class CachedAmpAmplitudeModel(decay_group, **kwargs)[source]

Bases: BaseAmplitudeModel

pdf(data)[source]
class CachedShapeAmplitudeModel(*args, **kwargs)[source]

Bases: BaseAmplitudeModel

get_cached_shape_idx()[source]
pdf(data)[source]
class FactorAmplitudeModel(*args, **kwargs)[source]

Bases: BaseAmplitudeModel

get_amp_list(data)[source]
get_amp_list_part(data)[source]
pdf(data)[source]
class P4DirectlyAmplitudeModel(decay_group, **kwargs)[source]

Bases: BaseAmplitudeModel

cal_angle(p4)[source]
pdf(data)[source]
create_amplitude(decay_group, **kwargs)[source]
register_amp_model(name=None, f=None)[source]

register a data mode

Params name:

mode name used in configuration

Params f:

Data Mode class

base

Basic amplitude model

class HelicityDecayCPV(*args, has_barrier_factor=True, l_list=None, barrier_factor_mass=False, has_ql=True, has_bprime=True, aligned=False, allow_cc=True, ls_list=None, barrier_factor_norm=False, params_polar=None, below_threshold=False, force_min_l=False, params_head=None, no_q0=False, helicity_inner_full=False, ls_selector=None, **kwargs)[source]

Bases: HelicityDecay

decay model for CPV

get_g_ls(charge=1)[source]
get_ls_amp(data, data_p, **kwargs)[source]
init_params()[source]
model_name = 'gls-cpv'
class HelicityDecayNP(*args, has_barrier_factor=True, l_list=None, barrier_factor_mass=False, has_ql=True, has_bprime=True, aligned=False, allow_cc=True, ls_list=None, barrier_factor_norm=False, params_polar=None, below_threshold=False, force_min_l=False, params_head=None, no_q0=False, helicity_inner_full=False, ls_selector=None, **kwargs)[source]

Bases: HelicityDecay

Full helicity amplitude

\[A = H_{m_1, m_2} D_{m_0, m_1-m_2}^{J_0 *}(\varphi, \theta,0)\]

fit parameters is \(H_{m_1, m_2}\).

fix_unused_h()[source]
get_H()[source]
get_H_zero_mask()[source]
get_factor()[source]
get_factor_H(data=None, data_p=None, **kwargs)[source]
get_helicity_amp(data=None, data_p=None, **kwargs)[source]
get_ls_amp(data, data_p, **kwargs)[source]
get_zero_index()[source]
init_params()[source]
model_name = 'helicity_full'
class HelicityDecayNPbf(*args, has_barrier_factor=True, l_list=None, barrier_factor_mass=False, has_ql=True, has_bprime=True, aligned=False, allow_cc=True, ls_list=None, barrier_factor_norm=False, params_polar=None, below_threshold=False, force_min_l=False, params_head=None, no_q0=False, helicity_inner_full=False, ls_selector=None, **kwargs)[source]

Bases: HelicityDecayNP

get_H_barrier_factor(data, data_p, **kwargs)[source]
get_helicity_amp(data, data_p, **kwargs)[source]
get_ls_amp(data, data_p, **kwargs)[source]
init_params()[source]
model_name = 'helicity_full-bf'
class HelicityDecayP(*args, has_barrier_factor=True, l_list=None, barrier_factor_mass=False, has_ql=True, has_bprime=True, aligned=False, allow_cc=True, ls_list=None, barrier_factor_norm=False, params_polar=None, below_threshold=False, force_min_l=False, params_head=None, no_q0=False, helicity_inner_full=False, ls_selector=None, **kwargs)[source]

Bases: HelicityDecayNP

\[H_{- m1, - m2} = P_0 P_1 P_2 (-1)^{J_1 + J_2 - J_0} H_{m1, m2}\]
get_helicity_amp(data, data_p, **kwargs)[source]
init_params()[source]
model_name = 'helicity_parity'
class ParticleBW(*args, running_width=True, bw_l=None, width_norm=False, params_head=None, **kwargs)[source]

Bases: Particle

\[R(m) = \frac{1}{m_0^2 - m^2 - i m_0 \Gamma_0}\]

[figure: tf_pwa-amp-base-1.png]
get_amp(data, _data_c=None, **kwargs)[source]
get_num_var()[source]
get_sympy_var()[source]
model_name = 'BW'
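A quick numerical check of the constant-width formula above: the modulus of R(m) peaks at m = m0, where |R| = 1/(m0 Γ0). The numbers here are illustrative only:

```python
import numpy as np

def breit_wigner(m, m0, gamma0):
    # R(m) = 1 / (m0^2 - m^2 - i m0 Gamma0), constant width
    return 1.0 / (m0**2 - m**2 - 1j * m0 * gamma0)

m0, gamma0 = 0.824, 0.05
m = np.linspace(0.6, 1.0, 401)
r = np.abs(breit_wigner(m, m0, gamma0))

print(m[np.argmax(r)])  # close to m0
print(np.isclose(r.max(), 1.0 / (m0 * gamma0), rtol=1e-3))  # True
```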
class ParticleBWR2(*args, running_width=True, bw_l=None, width_norm=False, params_head=None, **kwargs)[source]

Bases: Particle

\[R(m) = \frac{1}{m_0^2 - m^2 - i m_0 \Gamma(m)}\]

The difference between BWR and BWR2 is the behavior when the mass is below the threshold ( \(m_0 = 0.1 < 0.1 + 0.1 = m_1 + m_2\)).

[figure: tf_pwa-amp-base-2.png]
get_amp(data, data_c, **kwargs)[source]
model_name = 'BWR2'
class ParticleBWRBelowThreshold(*args, running_width=True, bw_l=None, width_norm=False, params_head=None, **kwargs)[source]

Bases: Particle

\[R(m) = \frac{1}{m_0^2 - m^2 - i m_0 \Gamma(m)}\]
get_amp(data, data_c, **kwargs)[source]
model_name = 'BWR_below'
class ParticleBWRCoupling(*args, running_width=True, bw_l=None, width_norm=False, params_head=None, **kwargs)[source]

Bases: Particle

Force \(q_0=1/d\) to avoid the below-threshold condition for the BWR model, and remove other constant parts; then \(\Gamma_0\) is the coupling parameter.

\[R(m) = \frac{1}{m_0^2 - m^2 - i m_0 \Gamma_0 \frac{q}{m} q^{2l} B_L'^2(q, 1/d, d)}\]

[figure: tf_pwa-amp-base-3.png]
get_amp(data, data_c, **kwargs)[source]
get_sympy_dom(m, m0, g0, m1=None, m2=None, sheet=0)[source]
model_name = 'BWR_coupling'
class ParticleBWR_normal(*args, running_width=True, bw_l=None, width_norm=False, params_head=None, **kwargs)[source]

Bases: Particle

\[R(m) = \frac{\sqrt{m_0 \Gamma(m)}}{m_0^2 - m^2 - i m_0 \Gamma(m)}\]
get_amp(data, data_c, **kwargs)[source]
model_name = 'BWR_normal'
class ParticleDecay(*args, has_barrier_factor=True, l_list=None, barrier_factor_mass=False, has_ql=True, has_bprime=True, aligned=False, allow_cc=True, ls_list=None, barrier_factor_norm=False, params_polar=None, below_threshold=False, force_min_l=False, params_head=None, no_q0=False, helicity_inner_full=False, ls_selector=None, **kwargs)[source]

Bases: HelicityDecay

get_ls_amp(data, data_p, **kwargs)[source]
model_name = 'particle-decay'
class ParticleExp(*args, running_width=True, bw_l=None, width_norm=False, params_head=None, **kwargs)[source]

Bases: Particle

\[R(m) = e^{-|a| m}\]
get_amp(data, _data_c=None, **kwargs)[source]
init_params()[source]
model_name = 'exp'
class ParticleExpCom(*args, running_width=True, bw_l=None, width_norm=False, params_head=None, **kwargs)[source]

Bases: Particle

\[R(m) = e^{-(a+ib) m^2}\]

lineshape when \(a=1.0, b=10.\)

[figure: tf_pwa-amp-base-4.png]
get_amp(data, _data_c=None, **kwargs)[source]
init_params()[source]
model_name = 'exp_com'
class ParticleGS(*args, **kwargs)[source]

Bases: Particle

Gounaris G.J., Sakurai J.J., Phys. Rev. Lett., 21 (1968), pp. 244-247

c_daug2Mass: mass for daughter particle 2 (\(\pi^{+}\)) 0.13957039

c_daug3Mass: mass for daughter particle 3 (\(\pi^{0}\)) 0.1349768

\[R(m) = \frac{1 + D \Gamma_0 / m_0}{(m_0^2 -m^2) + f(m) - i m_0 \Gamma(m)}\]
\[f(m) = \Gamma_0 \frac{m_0 ^2 }{q_0^3} \left[q^2 [h(m)-h(m_0)] + (m_0^2 - m^2) q_0^2 \frac{d h}{d m}|_{m0} \right]\]
\[h(m) = \frac{2}{\pi} \frac{q}{m} \ln \left(\frac{m+2q}{2m_{\pi}} \right)\]
\[\frac{d h}{d m}|_{m0} = h(m_0) [(8q_0^2)^{-1} - (2m_0^2)^{-1}] + (2\pi m_0^2)^{-1}\]
\[D = \frac{f(0)}{\Gamma_0 m_0} = \frac{3}{\pi}\frac{m_\pi^2}{q_0^2} \ln \left(\frac{m_0 + 2q_0}{2 m_\pi }\right) + \frac{m_0}{2\pi q_0} - \frac{m_\pi^2 m_0}{\pi q_0^3}\]
get_amp(data, data_c, **kwargs)[source]
model_name = 'GS_rho'
class ParticleKmatrix(*args, running_width=True, bw_l=None, width_norm=False, params_head=None, **kwargs)[source]

Bases: Particle

get_amp(data, data_c=None, **kwargs)[source]
get_beta(m, **kwargs)[source]
init_params()[source]
model_name = 'Kmatrix'
class ParticleLass(*args, running_width=True, bw_l=None, width_norm=False, params_head=None, **kwargs)[source]

Bases: Particle

get_amp(data, data_c=None, **kwargs)[source]
\[R(m) = \frac{m}{q \cot \delta_B - i q} + e^{2i \delta_B}\frac{m_0 \Gamma_0 \frac{m_0}{q_0}} {(m_0^2 - m^2) - i m_0\Gamma_0 \frac{q}{m}\frac{m_0}{q_0}}\]
\[\cot \delta_B = \frac{1}{a q} + \frac{1}{2} r q\]
\[e^{2i\delta_B} = \cos 2 \delta_B + i \sin 2\delta_B = \frac{\cot^2\delta_B - 1}{\cot^2 \delta_B + 1} + i \frac{2 \cot \delta_B}{\cot^2 \delta_B + 1}\]
init_params()[source]
model_name = 'LASS'
class ParticleOne(*args, running_width=True, bw_l=None, width_norm=False, params_head=None, **kwargs)[source]

Bases: Particle

\[R(m) = 1\]

[figure: tf_pwa-amp-base-5.png]
get_amp(data, _data_c=None, **kwargs)[source]
init_params()[source]
model_name = 'one'
get_parity_term(j1, p1, j2, p2, j3, p3)[source]
core

Basic Amplitude Calculations. A partial wave analysis process has the following structure:

DecayGroup: addition (+)
DecayChain: multiplication (x)

Decay, Particle(Propagator)

class AmpBase[source]

Bases: object

Base class for amplitude

add_var(names, is_complex=False, shape=(), **kwargs)[source]

default add_var method

amp_shape()[source]
get_factor_variable()[source]
get_params_head()[source]
get_var(name)[source]
get_variable_name(name='')[source]
class AmpDecay(core, outs, name=None, disable=False, p_break=False, c_break=True, curve_style=None, **kwargs)[source]

Bases: Decay, AmpBase

base class for decay with amplitude

amp_index(base_map)[source]
amp_shape()[source]
get_params_head()[source]
list_helicity_inner()[source]
n_helicity_inner()[source]
class AmpDecayChain(*args, is_cp=False, aligned=True, **kwargs)[source]

Bases: DecayChain, AmpBase

get_params_head()[source]
class AngSam3Decay(core, outs, name=None, disable=False, p_break=False, c_break=True, curve_style=None, **kwargs)[source]

Bases: AmpDecay, AmpBase

get_amp(data, data_extra=None, **kwargs)[source]
init_params()[source]
model_name = 'default'
class DecayChain(*args, is_cp=False, aligned=True, **kwargs)[source]

Bases: AmpDecayChain

A list of Decay as a chain decay

amp_index(base_map=None)[source]
amp_shape()[source]
factor_iteration(deep=1)[source]
get_all_factor()[source]
get_amp(data_c, data_p, all_data=None, base_map=None)[source]
get_amp_particle(data_p, data_c, all_data=None)[source]
get_amp_total(charge=1)[source]
get_angle_amp(data_c, data_p, all_data=None, base_map=None)[source]
get_base_map(base_map=None)[source]
get_cp_amp_total(charge=1)[source]
get_factor()[source]
get_factor_angle_amp(data_c, data_p, all_data=None, base_map=None)[source]
get_factor_variable()[source]
get_m_dep(data_c, data_p, all_data=None, base_map=None)[source]
init_params(name='')[source]
model_name = 'default'
product_gls()[source]
class DecayGroup(chains)[source]

Bases: DecayGroup, AmpBase

A Group of Decay Chains with the same final particles.

add_used_chains(used_chains)[source]
amp_index(gen=None, base_map=None)[source]
chains_particle()[source]
factor_iteration(deep=2)[source]
generate_phasespace(num=100000)[source]
get_amp(data)[source]

calculate the amplitude as complex number

get_amp2(data)[source]
get_amp3(data)[source]
get_angle_amp(data)[source]
get_base_map(gen=None, base_map=None)[source]
get_density_matrix()[source]
get_factor()[source]
get_factor_angle_amp(data)[source]
get_factor_variable()[source]
get_id_swap_transpose(key, n)[source]
get_m_dep(data)[source]

get mass dependent items

get_res_map()[source]
get_swap_factor(key)[source]
get_swap_transpose(trans, n)[source]
init_params(name='')[source]
partial_weight(data, combine=None)[source]
partial_weight_interference(data)[source]
set_used_chains(used_chains)[source]
set_used_res(res, only=False)[source]
sum_amp(data, cached=True)[source]

calculate the amplitude modulus square

sum_amp_polarization(data)[source]

sum the amplitude square with the density matrix

\[P = \sum_{m, m', \cdots } A_{m, \cdots} \rho_{m, m'} A^{*}_{m', \cdots}\]
sum_with_polarization(amp)[source]
temp_used_res(res)[source]
class FloatParams(x=0, /)[source]

Bases: float

class HelicityDecay(*args, has_barrier_factor=True, l_list=None, barrier_factor_mass=False, has_ql=True, has_bprime=True, aligned=False, allow_cc=True, ls_list=None, barrier_factor_norm=False, params_polar=None, below_threshold=False, force_min_l=False, params_head=None, no_q0=False, helicity_inner_full=False, ls_selector=None, **kwargs)[source]

Bases: AmpDecay

default decay model

The total amplitude is

\[A = H_{\lambda_{B},\lambda_{C}}^{A \rightarrow B+C} D^{J_A*}_{\lambda_{A}, \lambda_{B}-\lambda_{C}} (\varphi,\theta,0)\]

The helicity coupling is

\[H_{\lambda_{B},\lambda_{C}}^{A \rightarrow B+C} = \sum_{ls} g_{ls} \sqrt{\frac{2l+1}{2 J_{A}+1}} \langle l 0; s \delta|J_{A} \delta\rangle \langle J_{B} \lambda_{B} ;J_{C} -\lambda_{C} | s \delta \rangle q^{l} B_{l}'(q, q_0, d)\]

The fit parameters are \(g_{ls}\)

There are some options

(1). has_bprime=False will remove the \(B_{l}'(q, q_0, d)\) part.

(2). has_barrier_factor=False will remove the \(q^{l} B_{l}'(q, q_0, d)\) part.

(3). barrier_factor_norm=True will replace \(q^l\) with \((q/q_{0})^l\)

(4). below_threshold=True will replace the mass used to calculate \(q_0\) with

\[m_0^{eff} = m^{min} + \frac{m^{max} - m^{min}}{2}\left(1+\tanh \frac{m_0 - \frac{m^{max} + m^{min}}{2}}{m^{max} - m^{min}}\right)\]

(5). l_list=[l1, l2] and ls_list=[[l1, s1], [l2, s2]] options give the list of all possible LS used in the decay.

(6). no_q0=True will set the \(q_0=1\).
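The \(q^{l} B_{l}'(q, q_0, d)\) part in these options can be illustrated with the low-\(l\) Blatt-Weisskopf barrier factors. These are the standard textbook forms under the normalization \(B_{l}'(q_0, q_0, d) = 1\); they are a sketch, not copied from the library:

```python
import math

def bprime(l, q, q0, d=3.0):
    """Blatt-Weisskopf barrier factor B_l'(q, q0, d) for l = 0, 1, 2."""
    z, z0 = (q * d) ** 2, (q0 * d) ** 2
    if l == 0:
        return 1.0
    if l == 1:
        return math.sqrt((1 + z0) / (1 + z))
    if l == 2:
        return math.sqrt((9 + 3 * z0 + z0**2) / (9 + 3 * z + z**2))
    raise NotImplementedError(l)

# at q = q0 the factor is normalized to 1 for every l
print([bprime(l, 0.2, 0.2) for l in range(3)])  # [1.0, 1.0, 1.0]
```

For q below q0 the factor is larger than 1, suppressing the growth of the \(q^l\) term near threshold.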

add_algin(ret, data)[source]
build_ls2hel_eq()[source]
build_simple_data()[source]
check_valid_jp()[source]
factor_iter_names(deep=1, extra=[])[source]
get_amp(data, data_p, **kwargs)[source]
get_angle_amp(data, data_p, **kwargs)[source]
get_angle_g_ls()[source]
get_angle_helicity_amp(data, data_p, **kwargs)[source]
get_angle_ls_amp(data, data_p, **kwargs)[source]
get_barrier_factor(mass, q, q0, d)[source]
get_barrier_factor2(mass, q2, q02, d)[source]
get_barrier_factor_mass(mass)[source]
get_cg_matrix(out_sym=False)[source]

The matrix indexed by \([(l,s),(\lambda_b,\lambda_c)]\). The matrix element is

\[\sqrt{\frac{ 2 l + 1 }{ 2 j_a + 1 }} \langle j_b, j_c, \lambda_b, - \lambda_c | s, \lambda_b - \lambda_c \rangle \langle l, s, 0, \lambda_b - \lambda_c | j_a, \lambda_b - \lambda_c \rangle\]

This is actually the pre-factor of \(g_{ls}\) in the amplitude formula.

Returns:

2-d array of real numbers

get_factor()[source]
get_factor_H(data, data_p, **kwargs)[source]
get_factor_angle_amp(data, data_p, **kwargs)[source]
get_factor_angle_helicity_amp(data, data_p, **kwargs)[source]
get_factor_m_dep(data, data_p, **kwargs)[source]
get_factor_variable()[source]
get_g_ls()[source]
get_helicity_amp(data, data_p, **kwargs)[source]
get_ls_amp(data, data_p, **kwargs)[source]
get_ls_amp_org(data, data_p, **kwargs)[source]
get_ls_list()[source]

get possible ls for decay, with l_list filter possible l

get_m_dep(data, data_p, **kwargs)[source]
get_params_head()[source]
get_relative_momentum(data, from_data=False)[source]
get_relative_momentum2(data, from_data=False)[source]
get_total_ls_list()[source]
init_params()[source]
mask_factor_vars()[source]
model_name = 'default'
set_ls(ls)[source]
class Particle(*args, running_width=True, bw_l=None, width_norm=False, params_head=None, **kwargs)[source]

Bases: BaseParticle, AmpBase

\[R(m) = \frac{1}{m_0^2 - m^2 - i m_0 \Gamma(m)}\]

[figure: tf_pwa-amp-core-1.png]
amp_shape()[source]
get_amp(data, data_c, **kwargs)[source]
get_factor()[source]
get_mass()[source]
get_num_var()[source]
get_subdecay_mass(idx=0)[source]
get_sympy_dom(m, m0, g0, m1=None, m2=None, sheet=0)[source]
get_sympy_var()[source]
get_width()[source]
init_params()[source]
is_fixed_shape()[source]
model_name = 'BWR'
solve_pole(init=None, sheet=0, return_complex=True)[source]
class ParticleX(*args, running_width=True, bw_l=None, width_norm=False, params_head=None, **kwargs)[source]

Bases: Particle

simple particle model for mass, (used in expr)

\[R(m) = m\]

[figure: tf_pwa-amp-core-2.png]
get_amp(data, *args, **kwargs)[source]
model_name = 'x'
class SimpleResonances(*args, **kwargs)[source]

Bases: Particle

get_amp(*args, **kwargs)[source]
data_device(data)[source]
get_decay(core, outs, **kwargs)[source]

method for getting decay of model

get_decay_chain(decays, **kwargs)[source]

method for getting decay of model

get_decay_model(model, num_outs=2)[source]
get_name(self, names)[source]
get_particle(*args, model='default', **kwargs)[source]

method for getting particle of model

get_particle_model(name)[source]
get_particle_model_name(p)[source]
get_relative_p(m_0, m_1, m_2)[source]

relative momentum for 0 -> 1 + 2

get_relative_p2(m_0, m_1, m_2)[source]

relative momentum for 0 -> 1 + 2

index_generator(base_map=None)[source]
load_decfile_particle(fname)[source]
ls_selector_qr(decay, ls_list)[source]
regist_decay(name=None, num_outs=2, f=None)

register a decay model

Params name:

model name used in configuration

Params f:

Model class

regist_particle(name=None, f=None)

register a particle model

Params name:

model name used in configuration

Params f:

Model class

register_decay(name=None, num_outs=2, f=None)[source]

register a decay model

Params name:

model name used in configuration

Params f:

Model class

register_decay_chain(name=None, f=None)[source]

register a decay model

Params name:

model name used in configuration

Params f:

Model class

register_particle(name=None, f=None)[source]

register a particle model

Params name:

model name used in configuration

Params f:

Model class

rename_data_dict(data, idx_map)[source]
simple_cache_fun(f)[source]
simple_deepcopy(dic)[source]
simple_resonance(name, fun=None, params=None)[source]

convert a simple function f(m) into a resonance model

Params name:

model name used in configuration

Params fun:

Model function

Params params:

arguments name list for parameters

trans_model(model)[source]
value_and_grad(f, var)[source]
variable_scope(vm=None)[source]

variable name scope

cov_ten
class CovTenDecayChain(*args, is_cp=False, aligned=True, **kwargs)[source]

Bases: DecayChain

build_coupling_einsum(a, b, c, na, nb, nc, idx_map)[source]
build_decay_einsum(ls, idx_map=None)[source]
build_einsum()[source]
build_l_einsum(decay, l, s, idx_map)[source]
build_s_einsum(decay, l, s, idx_map)[source]
build_wave_function(particle, pi)[source]
get_amp(data_c, data_p, all_data=None, base_map=None, idx_map=None)[source]
get_finals_amp(data_p)[source]
get_m_dep_list(data_c, data_p, all_data=None)[source]
init_params(name='')[source]
model_name = 'cov_ten'
class CovTenDecayNew(*args, **kwargs)[source]

Bases: HelicityDecay

Decay Class for covariant tensor formula

get_amp(data, data_p, **kwargs)[source]
get_angle_amp(data, data_p, **kwargs)[source]
class CovTenDecaySimple(*args, **kwargs)[source]

Bases: CovTenDecayNew

Decay Class for covariant tensor formula

get_all_amp(data, data_p, **kwargs)[source]
get_amp(data, data_p, **kwargs)[source]
init_params()[source]
model_name = 'cov_ten_simple'
class EinSum(name, idx, inputs=None, replace_axis=[])[source]

Bases: object

call(inputs, cached=None)[source]
insert_extra_axis(name, indexs)[source]
set_inputs(name, value)[source]
class EinSumCall(name, idx, function)[source]

Bases: EinSum

call(inputs, cached=None)[source]
class EvalBoost(boost, sign=None)[source]

Bases: object

call(inputs, cached=None)[source]
class EvalP(core, l)[source]

Bases: object

P^{u}_{v}

class EvalT(decay, l)[source]

Bases: object

t^{u}

class EvalT2(decay, l)[source]

Bases: object

t^{u}

class IndexMap[source]

Bases: object

get(name)[source]
create_epsilon()[source]

epsilon^{a}_{bcd}

dot(p1, p2)[source]
mass2(t)[source]
wave_function(J, p)[source]
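A plain-Python sketch of the conventions dot and mass2 are expected to follow, assuming the metric \((+,-,-,-)\) and the four-vector ordering \((E, p_x, p_y, p_z)\) (both are assumptions here, not taken from the source):

```python
def dot(p1, p2):
    # Minkowski product, assuming metric (+, -, -, -) and p = (E, px, py, pz)
    return p1[0] * p2[0] - p1[1] * p2[1] - p1[2] * p2[2] - p1[3] * p2[3]

def mass2(t):
    # invariant mass squared m^2 = p . p
    return dot(t, t)

p_massless = (5.0, 3.0, 0.0, 4.0)  # E = |p| = 5, so m^2 = 0
p_rest = (2.0, 0.0, 0.0, 0.0)      # particle of mass 2 at rest
```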
flatte
class ParticleFlate2(*args, im_sign=-1, l_list=None, has_bprime=True, no_m0=False, no_q0=False, cut_phsp=False, **kwargs)[source]

Bases: ParticleFlateGen

General Flatte-like formula.

\[R(m) = \frac{1}{m_0^2 - m^2 - i m_0 [\sum_{i} \color{red}{g_i^2}\color{black} \frac{q_i}{m} \times \frac{m_0}{|q_{i0}|} \times \frac{|q_i|^{2l_i}}{|q_{i0}|^{2l_i}} B_{l_i}'^2(|q_i|,|q_{i0}|,d)]}\]
\[\begin{split}q_i = \begin{cases} \frac{\sqrt{(m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2)}}{2m} & (m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2) >= 0 \\ \frac{i\sqrt{|(m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2)|}}{2m} & (m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2) < 0 \\ \end{cases}\end{split}\]

It has the same options as FlatteGen.

get_coeff()[source]
model_name = 'Flatte2'
class ParticleFlateGen(*args, im_sign=-1, l_list=None, has_bprime=True, no_m0=False, no_q0=False, cut_phsp=False, **kwargs)[source]

Bases: ParticleFlatte

More general Flatte-like formula

\[R(m) = \frac{1}{m_0^2 - m^2 - i m_0 [\sum_{i} g_i \frac{q_i}{m} \times \frac{m_0}{|q_{i0}|} \times \frac{|q_i|^{2l_i}}{|q_{i0}|^{2l_i}} B_{l_i}'^2(|q_i|,|q_{i0}|,d)]}\]
\[\begin{split}q_i = \begin{cases} \frac{\sqrt{(m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2)}}{2m} & (m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2) >= 0 \\ \frac{i\sqrt{|(m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2)|}}{2m} & (m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2) < 0 \\ \end{cases}\end{split}\]

Required input arguments mass_list: [[m11, m12], [m21, m22]] for \(m_{i,1}, m_{i,2}\), and additional arguments l_list: [l1, l2] for \(l_i\)

has_bprime=False to remove \(B_{l_i}'^2(|q_i|,|q_{i0}|,d)\).

cut_phsp=True to set \(q_i = 0\) when \((m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2) < 0\)

The plot use parameters \(m_0=0.7, m_{0,1}=m_{0,2}=0.1, m_{1,1}=m_{1,2}=0.3, g_0=0.3,g_1=0.2,l_0=0,l_1=1\).

no_m0=True to replace \(i m_0\) with \(i\) in the width part.

no_q0=True to remove \(\frac{m_0}{|q_{i0}|}\) and set others \(q_{i0}=1\).

get_amp(*args, **kwargs)[source]
get_coeff()[source]
get_num_var()[source]
get_sympy_dom(m, m0, *gi, sheet=0)[source]
model_name = 'FlatteGen'
class ParticleFlatte(*args, mass_list=None, im_sign=1, **kwargs)[source]

Bases: Particle

Flatte-like formula

\[R(m) = \frac{1}{m_0^2 - m^2 + i m_0 (\sum_{i} g_i \frac{q_i}{m})}\]
\[\begin{split}q_i = \begin{cases} \frac{\sqrt{(m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2)}}{2m} & (m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2) >= 0 \\ \frac{i\sqrt{|(m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2)|}}{2m} & (m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2) < 0 \\ \end{cases}\end{split}\]

(Source code, png, hires.png, pdf)

_images/tf_pwa-amp-flatte-4.png

Required input arguments mass_list: [[m11, m12], [m21, m22]] for \(m_{i,1}, m_{i,2}\).
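The two formulas above can be sketched together in plain Python; below threshold \(q_i\) is continued to imaginary values, as in the piecewise definition. This is an illustration of the lineshape, not the library code, and the parameter values are made up:

```python
import cmath

def flatte(m, m0, couplings):
    """Flatte lineshape R(m) = 1 / (m0^2 - m^2 + i m0 sum_i g_i q_i / m).

    couplings is a list of (g_i, m_i1, m_i2); below threshold q_i is
    continued to imaginary values, as in the formula above.
    A plain-Python sketch, not the library implementation.
    """
    width = 0j
    for g, ma, mb in couplings:
        f = (m * m - (ma + mb) ** 2) * (m * m - (ma - mb) ** 2)
        q = cmath.sqrt(f) / (2.0 * m) if f >= 0 else 1j * cmath.sqrt(-f) / (2.0 * m)
        width += g * q / m
    return 1.0 / (m0 * m0 - m * m + 1j * m0 * width)

# f0(980)-like values (illustrative): pi pi channel open, K K near threshold
r_near = flatte(0.97, 0.99, [(0.2, 0.139, 0.139), (0.5, 0.494, 0.494)])
r_open = flatte(1.10, 0.99, [(0.2, 0.139, 0.139), (0.5, 0.494, 0.494)])
```

Above both thresholds the width term is real and positive, so the amplitude acquires a negative imaginary part; near a closed threshold the imaginary \(q_i\) instead shifts the real part of the denominator.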

get_amp(*args, **kwargs)[source]
get_num_var()[source]
get_sympy_dom(m, m0, *gi, sheet=0)[source]
get_sympy_var()[source]
get_width(m=None)[source]
init_params()[source]
model_name = 'Flatte'
class ParticleFlatteC(*args, im_sign=-1, **kwargs)[source]

Bases: ParticleFlatte

Flatte-like formula

\[R(m) = \frac{1}{m_0^2 - m^2 - i m_0 (\sum_{i} g_i \frac{q_i}{m})}\]
\[\begin{split}q_i = \begin{cases} \frac{\sqrt{(m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2)}}{2m} & (m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2) >= 0 \\ \frac{i\sqrt{|(m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2)|}}{2m} & (m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2) < 0 \\ \end{cases}\end{split}\]

Required input arguments mass_list: [[m11, m12], [m21, m22]] for \(m_{i,1}, m_{i,2}\).

model_name = 'FlatteC'
cal_monentum(m, ma, mb)[source]
cal_monentum_sympy(m, ma, mb)[source]
interpolation
class HistParticle(*args, **kwargs)[source]

Bases: InterpolationParticle

n_points()[source]
class Interp(*args, **kwargs)[source]

Bases: InterpolationParticle

linear interpolation for complex numbers

interp(m)[source]
model_name = 'interp_c'
class Interp1D3(*args, **kwargs)[source]

Bases: InterpolationParticle

Piecewise third order interpolation

interp(m)[source]
model_name = 'interp1d3'
class Interp1DLang(*args, **kwargs)[source]

Bases: InterpolationParticle

Lagrange interpolation

interp(m)[source]
model_name = 'interp_lagrange'
class Interp1DSpline(*args, **kwargs)[source]

Bases: InterpolationParticle

Spline interpolation function for model independent resonance

init_params()[source]
interp(m)[source]
model_name = 'spline_c'
class Interp1DSplineIdx(*args, **kwargs)[source]

Bases: InterpolationParticle

Spline function in index way.

use

min_m: 0.19
max_m: 0.91
interp_N: 8
with_bound: True

for mass range [0.19, 0.91] and 8 interpolation points

The first and last are fixed to zero unless set with_bound: True.

This is an example of \(k\exp (i k)\) for point k.

(Source code, png, hires.png, pdf)

_images/tf_pwa-amp-interpolation-1.png
init_params()[source]
interp(m)[source]
model_name = 'spline_c_idx'
class InterpHist(*args, **kwargs)[source]

Bases: InterpolationParticle

Interpolation with a constant value in each bin

interp(m)[source]
model_name = 'interp_hist'
class InterpHistIdx(*args, **kwargs)[source]

Bases: HistParticle

Interpolation with a constant value in each bin

use

min_m: 0.19
max_m: 0.91
interp_N: 8
with_bound: True

for mass range [0.19, 0.91] and 7 bins

The first and last are fixed to zero unless set with_bound: True.

This is an example of \(k\exp (i k)\) for point k.

(Source code, png, hires.png, pdf)

_images/tf_pwa-amp-interpolation-2.png
interp(m)[source]
model_name = 'hist_idx'
class InterpL3(*args, **kwargs)[source]

Bases: InterpolationParticle

interp(m)[source]
model_name = 'interp_l3'
class InterpLinearNpy(*args, **kwargs)[source]

Bases: InterpolationParticle

Linear interpolation model from a npy file containing an array of [mi, re(ai), im(ai)] rows. Required file: path_of_file.npy, giving the path of the npy file.

The example is exp(5 I m).

(Source code, png, hires.png, pdf)

_images/tf_pwa-amp-interpolation-3.png
get_point_values()[source]
init_params()[source]
interp(m)[source]
model_name = 'linear_npy'
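The interpolation step itself can be sketched without numpy; here points plays the role of the loaded [mi, re(ai), im(ai)] array (a hypothetical stand-in for the file contents, not the library code):

```python
def interp_linear(m, points):
    """Linearly interpolate complex values from rows of (mi, re, im),
    returning 0 outside the table (a sketch of the 'linear_npy' idea;
    the actual numpy file loading is omitted here)."""
    for (x0, r0, i0), (x1, r1, i1) in zip(points, points[1:]):
        if x0 <= m <= x1:
            t = (m - x0) / (x1 - x0)
            return complex(r0 + t * (r1 - r0), i0 + t * (i1 - i0))
    return 0j

pts = [(1.0, 0.0, 0.0), (2.0, 2.0, 4.0)]
v_mid = interp_linear(1.5, pts)  # halfway between the two rows
v_out = interp_linear(3.0, pts)  # outside the table
```

The real and imaginary parts are interpolated independently, which is what a row layout of [mi, re(ai), im(ai)] naturally suggests.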
class InterpLinearTxt(*args, **kwargs)[source]

Bases: InterpLinearNpy

Linear interpolation model from a txt file containing an array of [mi, re(ai), im(ai)] rows. Required file: path_of_file.txt, giving the path of the txt file.

The example is exp(5 I m).

(Source code, png, hires.png, pdf)

_images/tf_pwa-amp-interpolation-4.png
model_name = 'linear_txt'
class InterpSPPCHIP(*args, **kwargs)[source]

Bases: InterpolationParticle

Shape-Preserving Piecewise Cubic Hermite Interpolation Polynomial. It is monotonic in each interval.

(Source code, png, hires.png, pdf)

_images/tf_pwa-amp-interpolation-5.png
init_params()[source]
interp(m)[source]
model_name = 'sppchip'
class InterpolationParticle(*args, **kwargs)[source]

Bases: Particle

get_amp(data, *args, **kwargs)[source]
get_bin_index(m)[source]
get_point_values()[source]
init_params()[source]
interp(mass)[source]
n_points()[source]
create_sppchip_matrix(points)[source]

matrix to solve f(xi),f(xi+1),f’(xi),f’(xi+1)

do_spline_hmatrix(h_matrix, y, m, idx)[source]
get_matrix_interp1d3(x, xi)[source]
get_matrix_interp1d3_v2(x, xi)[source]
interp1d3(x, xi, yi)[source]
spline_matrix(x, xi, yi, bc_type='not-a-knot')[source]

calculate spline interpolation

spline_x_matrix(x, xi)[source]

build matrix of x for spline interpolation

spline_xi_matrix(xi, bc_type='not-a-knot')[source]

build matrix of xi for spline interpolation solve equation

\[S_i'(x_i) = S_{i-1}'(x_i)\]

and two bound condition. \(S_0'(x_0) = S_{n-1}'(x_n) = 0\)

sppchip(m, xi, y, idx=None, matrix=None)[source]

Shape-Preserving Piecewise Cubic Hermite Interpolation Polynomial. It is monotonic in each interval.

>>> import numpy as np
>>> from scipy.interpolate import pchip_interpolate
>>> x_observed = np.linspace(0.0, 10.0, 11)
>>> y_observed = np.sin(x_observed)
>>> x = np.linspace(min(x_observed), max(x_observed)-1e-12, num=100)
>>> y = pchip_interpolate(x_observed, y_observed, x)
>>> assert np.allclose(y, sppchip(x, x_observed, y_observed).numpy())
sppchip_coeffs(xi, y, matrix=None, eps=1e-12)[source]
kmatrix_simple
class KmatrixSimple(*args, **kwargs)[source]

Bases: KmatrixSplitLSParticle

simple Kmatrix formula.

K-matrix

\[K_{i,j} = \sum_{a} \frac{g_{i,a} g_{j,a}}{m_a^2 - m^2+i\epsilon}\]

P-vector

\[P_{i} = \sum_{a} \frac{\beta_{a} g_{i,a}}{m_a^2 - m^2 +i\epsilon} + f_{bkg,i}\]

total amplitude

\[R(m) = n (1 - K i \rho n^2)^{-1} P\]

barrier factor

\[n_{ii} = q_i^l B'_l(q_i, 1/d, d)\]

phase space factor

\[\rho_{ii} = q_i/m\]

\(q_i\) is 0 when below threshold
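For a single pole and a single channel the matrices above reduce to scalars. A hedged sketch with unit barrier factor \(n = 1\) (an assumption for brevity; the parameter values are illustrative):

```python
import math

def kmatrix_1x1(m, ma, g, beta, m1, m2):
    """Single-pole, single-channel version of the formulas above,
    with unit barrier factor n = 1 (an assumption for brevity):
    K = g^2/(ma^2 - m^2), P = beta g/(ma^2 - m^2),
    rho = q/m, R = P / (1 - i K rho)."""
    f = (m * m - (m1 + m2) ** 2) * (m * m - (m1 - m2) ** 2)
    q = math.sqrt(f) / (2.0 * m) if f > 0 else 0.0  # q = 0 below threshold
    rho = q / m
    k = g * g / (ma * ma - m * m)
    p = beta * g / (ma * ma - m * m)
    return p / (1.0 - 1j * k * rho)

r_above = kmatrix_1x1(1.0, 1.2, 0.5, 1.0, 0.1, 0.1)   # above threshold
r_below = kmatrix_1x1(0.15, 1.2, 0.5, 1.0, 0.1, 0.1)  # below threshold
```

Algebraically this collapses to \(R = \beta g / (m_a^2 - m^2 - i g^2 \rho)\), a Breit-Wigner-like shape; below threshold \(\rho = 0\) and the amplitude is real.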

build_barrier_factor(s)[source]
build_k_matrix(s)[source]
build_p_vector(s)[source]
get_ls_amp(m)[source]
init_params()[source]
model_name = 'KmatrixSimple'
phsp_fractor(m, m1, m2)[source]
barrier_factor(m, m1, m2, l, d=3.0)[source]
get_relative_p(m, m1, m2)[source]
preprocess
class BasePreProcessor(decay_struct, root_config=None, model='defualt', **kwargs)[source]

Bases: HeavyCall

class CachedAmpPreProcessor(*args, **kwargs)[source]

Bases: BasePreProcessor

class CachedAnglePreProcessor(*args, **kwargs)[source]

Bases: BasePreProcessor

build_cached(x)[source]
class CachedShapePreProcessor(*args, **kwargs)[source]

Bases: CachedAmpPreProcessor

build_cached(x)[source]
create_preprocessor(decay_group, **kwargs)[source]
list_to_tuple(data)[source]
register_preprocessor(name=None, f=None)[source]

register a preprocessor

Params name:

preprocessor name used in configuration

Params f:

Preprocessor class

split_ls
class ParticleBWRLS(*args, **kwargs)[source]

Bases: ParticleLS

Breit Wigner with split ls running width

\[R_i (m) = \frac{g_i}{m_0^2 - m^2 - im_0 \Gamma_0 \frac{\rho}{\rho_0} (\sum_{i} g_i^2)}\]

where \(\rho = 2q/m\); the partial width factor is

\[g_i = \gamma_i \frac{q^l}{q_0^l} B_{l_i}'(q,q_0,d)\]

and the normalization is kept as

\[\sum_{i} \gamma_i^2 = 1.\]

The normalization is realized through the angles (\(\cos \theta_0, \sin\theta_0 \cos \theta_1, \cdots, \prod_i \sin\theta_i\))
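The angle parametrization above guarantees \(\sum_i \gamma_i^2 = 1\) by construction; a short plain-Python sketch of the mapping:

```python
import math

def gammas_from_angles(thetas):
    """Map n-1 free angles to n coefficients gamma_i with
    sum(gamma_i**2) == 1, following the parametrization above:
    (cos t0, sin t0 cos t1, ..., prod_i sin t_i)."""
    out, s = [], 1.0
    for t in thetas:
        out.append(s * math.cos(t))
        s *= math.sin(t)
    out.append(s)
    return out

g = gammas_from_angles([0.4, 1.1])  # three couplings from two angles
```

Each partial sum of squares telescopes via \(\cos^2\theta + \sin^2\theta = 1\), so the constraint holds for any angle values.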

factor_gamma(ls)[source]
get_barrier_factor(ls, q2, q02, d)[source]
get_ls_amp(m, ls, q2, q02, d=3.0)[source]
get_ls_amp_frac(m, ls, q2, q02, d=3.0)[source]
get_num_var()[source]
get_sympy_dom(m, m0, g0, thetas, m1=None, m2=None, sheet=0)[source]
get_sympy_var()[source]
init_params()[source]
model_name = 'BWR_LS'
class ParticleBWRLS2(*args, **kwargs)[source]

Bases: ParticleLS

Breit Wigner with split ls running width, each one use their own l,

\[R_i (m) = \frac{1}{m_0^2 - m^2 - im_0 \Gamma_0 \frac{\rho}{\rho_0} (g_i^2)}\]

where \(\rho = 2q/m\); the partial width factor is

\[g_i = \gamma_i \frac{q^l}{q_0^l} B_{l_i}'(q,q_0,d)\]
get_ls_amp(m, ls, q2, q02, d=3.0)[source]
model_name = 'BWR_LS2'
class ParticleDecayLS(*args, **kwargs)[source]

Bases: HelicityDecay

get_barrier_factor2(mass, q2, q02, d)[source]
init_params()[source]
model_name = 'LS-decay'
class ParticleLS(*args, **kwargs)[source]

Bases: Particle

get_amp(*args, **kwargs)[source]
get_ls_amp(m, ls, q2, q02, d=3)[source]
is_fixed_shape()[source]
class ParticleMultiBW(*args, **kwargs)[source]

Bases: ParticleMultiBWR

Combine Multi BW into one particle

dom_fun(m, m0, g0, q2, q02, l, d)[source]
model_name = 'MultiBW'
class ParticleMultiBWR(*args, **kwargs)[source]

Bases: ParticleLS

Combine Multi BWR into one particle

(Source code, png, hires.png, pdf)

_images/tf_pwa-amp-split_ls-1.png
dom_fun(m, m0, g0, q2, q02, l, d)[source]
get_barrier_factor(ls, q2, q02, d)[source]
get_ls_amp(m, ls, q2, q02, d=3.0)[source]
init_params()[source]
mass()[source]
model_name = 'MultiBWR'

app

Submodules and Subpackages

fit
fit(config='config.yml', init_params='init_params.json', method='BFGS')[source]

simple fit script

json_print(dic)[source]

print parameters as json

config_loader

Submodules and Subpackages

base_config
class BaseConfig(file_name, share_dict=None)[source]

Bases: object

load_config(file_name, share_dict=None)[source]
config_loader
class ConfigLoader(file_name, vm=None, share_dict=None)[source]

Bases: BaseConfig

class for loading config.yml

add_constraints(amp)[source]
add_decay_constraints(amp, dic=None)[source]
add_fix_var_constraints(amp, dic=None)[source]
add_free_var_constraints(amp, dic=None)[source]
add_from_trans_constraints(amp, dic=None)[source]
add_gauss_constr_constraints(amp, dic=None)[source]
add_particle_constraints(amp, dic=None)[source]
add_pre_trans_constraints(amp, dic=None)[source]
add_var_equal_constraints(amp, dic=None)[source]
add_var_range_constraints(amp, dic=None)[source]
attach_fix_params_error(params: dict, V_b=None) ndarray[source]

The minimal condition

\[-\frac{\partial\ln L(a,b)}{\partial a} = 0,\]

can be treated as an implicit function \(a(b)\). The gradient is

\[\frac{\partial a }{\partial b} = - \left(\frac{\partial^2 \ln L(a,b)}{\partial a \partial a }\right)^{-1} \frac{\partial^2 \ln L(a,b)}{\partial a\partial b }.\]

The uncertainties from b with error matrix \(V_b\) propagate to a as

\[V_a = \frac{\partial a }{\partial b} V_b \left(\frac{\partial a }{\partial b}\right)^T\]

This matrix will be added to the config.inv_he.
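The propagation step is ordinary linear error propagation, \(V_a = J V_b J^T\); a small numeric sketch with nested-list matrices (independent of the library):

```python
def propagate(jac, v_b):
    """V_a = J V_b J^T for plain nested-list matrices: a numeric sketch
    of the propagation formula above (no library code involved)."""
    n, k = len(jac), len(jac[0])
    # J V_b
    jv = [[sum(jac[i][p] * v_b[p][q] for p in range(k)) for q in range(k)]
          for i in range(n)]
    # (J V_b) J^T
    return [[sum(jv[i][p] * jac[j][p] for p in range(k)) for j in range(n)]
            for i in range(n)]

# one fitted parameter a depending on two fixed parameters b,
# with diagonal V_b = diag(0.2^2, 0.3^2) and J = [1, 2]
v_a = propagate([[1.0, 2.0]], [[0.04, 0.0], [0.0, 0.09]])
```

Here the propagated variance is \(1^2 \cdot 0.04 + 2^2 \cdot 0.09 = 0.40\).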

batch_sum_var(*args, **kwargs)[source]
cal_bins_numbers(adapter, data, phsp, read_data, bg=None, bg_weight=None)
cal_chi2(read_data=None, bins=[[2, 2], [2, 2], [2, 2]], mass=['R_BD', 'R_CD'])
cal_fitfractions(params={}, mcdata=None, res=None, exclude_res=[], batch=25000, method='old')[source]
cal_signal_yields(params={}, mcdata=None, batch=25000)[source]
check_valid_jp(decay_group)[source]
eval_amplitude(*p, extra=None)[source]
fit(data=None, phsp=None, bg=None, inmc=None, batch=65000, method='BFGS', check_grad=False, improve=False, reweight=False, maxiter=None, jac=True, print_init_nll=True, callback=None, grad_scale=1.0, gtol=0.001)[source]
fitNtimes(N, *args, **kwargs)[source]
free_for_extended(amp)[source]
generate_SDP(node, N=1000, include_charge=False, legacy=True)
generate_SDP_p(node, N=1000, legacy=False)
generate_phsp(N=1000, include_charge=False, cal_max=False)
generate_phsp_p(N=1000, cal_max=False)
generate_toy(N=1000, force=True, gen=None, gen_p=None, importance_f=None, max_N=100000, include_charge=False, cal_phsp_max=False)

A more accurate method for generating toy data.

Parameters:
  • N – number of events.

  • force – whether to remove the extra generated data.

  • gen – optional function to generate the phase space sample; the return value has the same format as config.get_data.

  • gen_p – optional function to generate phase space momenta; the return value is a dict like {B: pb, C: pc, D: pd}.

  • max_N – maximum number of events for each try.

generate_toy2(*args, **kwargs)
generate_toy_o(N=1000, force=True, max_N=100000)
generate_toy_p(N=1000, force=True, gen_p=None, importance_f=None, max_N=100000, include_charge=False, cal_phsp_max=False)

generate toy data momenta.

get_SDP_generator(node, include_charge=False, legacy=True)
get_SDP_p_generator(node, legacy=True)
get_all_data()[source]
get_all_frame()
get_all_plotdatas(data=None, phsp=None, bg=None, res=None, use_weighted=False)
get_amplitude(vm=None, name='')[source]
get_chain(idx)[source]
get_chain_property(idx, display=True)

Get chain name and curve style in plot

get_dalitz(a, b)
get_dalitz_boundary(a, b, N=1000)
get_dat_order(standard=False)[source]
get_data(idx)[source]
get_data_file(idx)[source]
get_data_index(sub, name)[source]
get_data_rec(name)[source]
get_decay(full=True)[source]
get_fcn(all_data=None, batch=65000, vm=None, name='')[source]
get_ndf()[source]
get_params(trainable_only=False)[source]
get_params_error(params=None, data=None, phsp=None, bg=None, inmc=None, batch=10000, using_cached=False, method=None, force_pos=True, correct_params=None)[source]

calculate parameter errors

get_particle_function(name, d_norm=False)
get_phsp_generator(include_charge=False, nodes=[])
get_phsp_noeff()[source]
get_phsp_p_generator(nodes=[])
get_phsp_plot(tail='')[source]
get_plotter(legend_file=None, res=None, datasets=None, use_weighted=False)
likelihood_profile(var, var_min, var_max, N=100)[source]
load_cached_data(file_name=None)[source]
static load_config(file_name, share_dict={})[source]
mask_params(params)[source]
params_trans()[source]
plot_adaptive_2dpull(var1, var2, binning=[[2, 2], [2, 2], [2, 2]], ax=matplotlib.pyplot, where={}, cut_zero=True, plot_scatter=True, scatter_style={'c': 'black', 's': 1}, **kwargs)
plot_partial_wave(params=None, data=None, phsp=None, bg=None, prefix='figure/', res=None, save_root=False, chains_id_method=None, phsp_rec=None, cut_function=<function <lambda>>, plot_function=None, **kwargs)

plot partial wave plots

Parameters:
  • self – ConfigLoader object

  • params – parameters, dict or FitResult

  • data – data sample, a list of CalAngleData

  • phsp – phase space sample, a list of CalAngleData (the same size as data)

  • bg – background sample, a list of CalAngleData (the same size as data)

  • prefix – figure saving folder and naming prefix

  • res – combination of resonances in partial waves, list of (list of (string for resonance name or int for decay chain index))

  • save_root – whether to save weights in a root file, bool

  • chains_id_method – method for displaying legend labels, string

  • bin_scale – finer binning in partial waves for a smooth histogram, int

  • batch – batch size for calculating weights, int

  • smooth – whether to plot a smooth binned KDE shape instead of a histogram, bool

  • single_legend – whether to save all legends in one file “legend.pdf”, bool

  • plot_pull – whether to plot the pull distribution, bool

  • format – image format for saving figures, string (such as “.png”, “.jpeg”)

  • linestyle_file – legend linestyle configuration file name (YAML format), string (such as “legend.yml”)

plot_partial_wave_interf(res1, res2, **kwargs)
register_extra_constrains(name, f=None)[source]

add extra_constrains

classmethod register_function(name=None)[source]
reinit_params()[source]
static reweight_init_value(amp, phsp, ns=None)[source]

reset the decay chain totals and make the integration equal to ns

save_cached_data(data, file_name=None)[source]
save_params(file_name)[source]
save_tensorflow_model(dir_name)[source]
set_params(params, neglect_params=None)[source]
class PlotParams(plot_config, decay_struct)[source]

Bases: dict

get_angle_vars(is_align=False)[source]
get_data_index(sub, name)[source]
get_mass_vars()[source]
get_params(params=None)[source]
set_prefix_constrains(vm, base, params_dic, self)[source]
validate_file_name(s)[source]
data
class MultiData(*args, **kwargs)[source]

Bases: SimpleData

get_data(idx) list[source]
get_n_data()[source]
get_phsp_noeff()[source]
process_scale(idx, data)[source]
set_lazy_call(data, idx)[source]
class SimpleData(dic, decay_struct, config=None)[source]

Bases: object

cal_angle(p4, **kwargs)[source]
get_all_data()[source]
get_dat_order(standard=False)[source]
get_data(idx) dict[source]
get_data_file(idx)[source]
get_data_index(sub, name)[source]
get_n_data()[source]
get_phsp_noeff()[source]
get_phsp_plot()[source]
get_weight_sign(idx)[source]
load_cached_data(file_name=None)[source]
load_data(files, weight_sign=1, weight_smear=None, **kwargs) dict[source]
load_extra_var(n_data, **kwargs)[source]
load_p4(fnames)[source]
load_weight_file(weight_files)[source]
process_scale(idx, data)[source]
save_cached_data(data, file_name=None)[source]
savetxt(file_name, data)[source]
set_lazy_call(data, idx)[source]
load_data_mode(dic, decay_struct, default_mode='multi', config=None)[source]
register_data_mode(name=None, f=None)[source]

register a data mode

Params name:

mode name used in configuration

Params f:

Data Mode class

data_root_lhcb
class RootData(*args, **kwargs)[source]

Bases: MultiData

create_data(p4, **kwargs)[source]
get_data(idx)[source]
get_p4(idx)[source]
get_weight(idx)[source]
load_var(idx, tail)[source]
build_matrix(order, matrix)[source]
custom_cond(x, dic, key=None)[source]
cut_data(data)[source]
touch_var(name, data, var, size, default=1)[source]
decay_config
class DecayConfig(dic, share_dict={})[source]

Bases: BaseConfig

decay_chain_cut(decays)[source]
decay_chain_cut_list = {}
decay_cut(decays)[source]
decay_cut_list = {'ls_cut': <function decay_cut_ls>, 'mass_cut': <function decay_cut_mass>}
static decay_item(decay_dict)[source]
disable_allow_cc(decay_group)[source]
get_decay(full=True)[source]
get_decay_struct(decay, particle_map=None, particle_params=None, top=None, finals=None, chain_params={}, process_cut=True)[source]

get decay structure for decay dict

static load_config(file_name, share_dict={})[source]
static particle_item(particle_list, share_dict={})[source]
static particle_item_list(particle_list)[source]
rename_params(params, is_particle=True)[source]
decay_cut_ls(decay)[source]
decay_cut_mass(decay)[source]
set_min_max(dic, name, name_min, name_max)[source]
extra
cal_bins_numbers(self, adapter, data, phsp, read_data, bg=None, bg_weight=None)[source]
cal_chi2(self, read_data=None, bins=[[2, 2], [2, 2], [2, 2]], mass=['R_BD', 'R_CD'])[source]
multi_config
class MultiConfig(file_names, vm=None, total_same=False, share_dict={}, multi_gpu=False)[source]

Bases: object

fit(datas=None, batch=65000, method='BFGS', maxiter=None, print_init_nll=False, callback=None)[source]
get_all_data()[source]
get_amplitudes(vm=None)[source]
get_args_value(bounds_dict)[source]
get_fcn(datas=None, vm=None, batch=65000)[source]
get_fcns(datas=None, vm=None, batch=65000)[source]
get_params(trainable_only=False)[source]
get_params_error(params=None, datas=None, batch=10000, using_cached=False)[source]
params_trans()[source]
plot_partial_wave(params=None, prefix='figure/all', **kwargs)[source]
reinit_params()[source]
save_params(file_name)[source]
set_params(params, neglect_params=None)[source]
particle_function
class ParticleFunction(config, name, d_norm=False)[source]

Bases: object

amp2s(m)[source]
density(m)[source]
mass_linspace(N)[source]
mass_range()[source]
phsp_factor(m)[source]
phsp_fractor(m)[source]
get_particle_function(config, name, d_norm=False)[source]
plot
class LineStyleSet(file_name, color_first=True)[source]

Bases: object

get(id_, default=None)[source]
get_style(id_)[source]
save()[source]
build_read_var_function(all_var, where={})[source]
create_chain_property(self, res)[source]
create_plot_var_dic(plot_params)[source]
default_color_generator(color_first)[source]
export_legend(ax, filename='legend.pdf', ncol=1)[source]

export legend in Axis ax to file filename

get_chain_property(self, idx, display=True)[source]

Get chain name and curve style in plot

get_chain_property_v1(self, idx, display)[source]
get_chain_property_v2(self, idx, display)[source]
get_dalitz(config, a, b)[source]
get_dalitz_boundary(config, a, b, N=1000)[source]
hist_error(data, bins=50, xrange=None, weights=1.0, kind='poisson')[source]
hist_line(data, weights, bins, xrange=None, inter=1, kind='UnivariateSpline')[source]

interpolate data from a histogram into a line

>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> z = np.random.normal(size=1000)
>>> x, y = hist_line(z, None, 50)
>>> a = plt.plot(x, y)
hist_line_step(data, weights, bins, xrange=None, inter=1, kind='quadratic')[source]
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> z = np.random.normal(size=1000)
>>> x, y = hist_line_step(z, None, 50)
>>> a = plt.step(x, y)
plot_adaptive_2dpull(config, var1, var2, binning=[[2, 2], [2, 2], [2, 2]], ax=matplotlib.pyplot, where={}, cut_zero=True, plot_scatter=True, scatter_style={'c': 'black', 's': 1}, **kwargs)[source]
plot_function_2dpull(data_dict, phsp_dict, bg_dict, var1='x', var2='y', binning=[[2, 2], [2, 2], [2, 2]], where={}, ax=matplotlib.pyplot, cut_zero=True, plot_scatter=True, scatter_style={'c': 'black', 's': 1}, cmap='jet', **kwargs)[source]
plot_partial_wave(self, params=None, data=None, phsp=None, bg=None, prefix='figure/', res=None, save_root=False, chains_id_method=None, phsp_rec=None, cut_function=<function <lambda>>, plot_function=None, **kwargs)[source]

plot partial wave plots

Parameters:
  • self – ConfigLoader object

  • params – parameters, dict or FitResult

  • data – data sample, a list of CalAngleData

  • phsp – phase space sample, a list of CalAngleData (the same size as data)

  • bg – background sample, a list of CalAngleData (the same size as data)

  • prefix – figure saving folder and naming prefix

  • res – combination of resonances in partial waves, list of (list of (string for resonance name or int for decay chain index))

  • save_root – whether to save weights in a root file, bool

  • chains_id_method – method for displaying legend labels, string

  • bin_scale – finer binning in partial waves for a smooth histogram, int

  • batch – batch size for calculating weights, int

  • smooth – whether to plot a smooth binned KDE shape instead of a histogram, bool

  • single_legend – whether to save all legends in one file “legend.pdf”, bool

  • plot_pull – whether to plot the pull distribution, bool

  • format – image format for saving figures, string (such as “.png”, “.jpeg”)

  • linestyle_file – legend linestyle configuration file name (YAML format), string (such as “legend.yml”)

plot_partial_wave_interf(self, res1, res2, **kwargs)[source]
plotter
class Frame(var, x_range=None, nbins=None, name=None, display=None, trans=None, **extra)[source]

Bases: object

get_histogram(data, partial=None, bin_scale=1)[source]
set_axis(axis, **config)[source]
class PlotAllData(amp, data, phsp, bg=None, res=None, use_weighted=False)[source]

Bases: object

get_all_histogram(var, bin_scale=3)[source]
class PlotData(dataset, weight=None, partial_weight=None, use_weighted=False)[source]

Bases: object

get_histogram(var, partial=None, **kwargs)[source]
total_size()[source]
class PlotDataGroup(datasets)[source]

Bases: object

get_histogram(var, partial=None, **kwargs)[source]
total_size()[source]
class Plotter(config, legend_file=None, res=None, datasets=None, use_weighted=False)[source]

Bases: object

add_ref_amp(ref_amp, name='reference fit')[source]
forzen_style()[source]
get_all_hist(frame, idx=None, bin_scale=3)[source]

create all partial-wave histograms for an observation frame.

Parameters:
  • name (Frame, or callable) – function to get the observation from the datasets

  • idx (int, optional) – data index, None for all data, defaults to None

  • bin_scale (float, optional) – smooth bin scale, defaults to 3

Returns:

collection of histogram

Return type:

dict

get_label(key)[source]
get_plot_style(example_hist)[source]
get_res_style(key)[source]
old_style(extra_config=None, color_first=True)[source]

context for base style, see matplotlib.rcParams for more configuration

Parameters:
  • extra_config (dict, optional) – new configs, defaults to None

  • color_first (bool, optional) – order of color and linestyle, defaults to True

plot_frame(name, idx=None, ax=matplotlib.pyplot, bin_scale=3)[source]

plot frame for all partial wave

Parameters:
  • name (str) – data variable frame name

  • idx (int, optional) – data index, None for all data, defaults to None

  • bin_scale (float, optional) – smooth bin scale, defaults to 3

  • ax (matplotlib.Axes, optional) – plot on axis ax

Returns:

matplotlib.Axes

plot_frame_with_pull(name, idx=None, bin_scale=3, pull_config=None)[source]

plot frame with pull for all partial wave

Parameters:
  • name (str) – data variable frame name

  • idx (int, optional) – data index, None for all data, defaults to None

  • bin_scale (float, optional) – smooth bin scale, defaults to 3

  • pull_config (dict, optional) – pull plot style, defaults to None

Returns:

matplotlib.Axes for plot and pull

plot_var(frame, idx=None, ax=matplotlib.pyplot, bin_scale=3)[source]

plot data observation for all partial wave

Parameters:
  • name (Frame, or callable) – function to get the observation from the datasets

  • idx (int, optional) – data index, None for all data, defaults to None

  • bin_scale (float, optional) – smooth bin scale, defaults to 3

  • ax (matplotlib.Axes, optional) – plot axis

Returns:

save_all_frame(prefix='figure/', format='png', idx=None, plot_pull=False, pull_config=None)[source]

Save all frames with the given prefix, like ConfigLoader.plot_partial_wave.

Parameters:
  • prefix (str, optional) – prefix for file name, defaults to “figure/”

  • format (str, optional) – figure format, defaults to “png”

  • idx (int, optional) – dataset index, defaults to None

  • plot_pull (bool, optional) – if plot pulls, defaults to False

  • pull_config (dict, optional) – configuration for plot pulls, defaults to None

set_plot_item(example_hist)[source]
class ReadData(var, trans=None)[source]

Bases: object

class StyleSet(file_name)[source]

Bases: object

generate_new_style()[source]
get(key, value=None)[source]
save()[source]
set(key, value)[source]
get_all_frame(self)[source]
get_all_plotdatas(self, data=None, phsp=None, bg=None, res=None, use_weighted=False)[source]
get_plotter(self, legend_file=None, res=None, datasets=None, use_weighted=False)[source]
merge_hist(hists)[source]
sample
class AfterGenerator(gen, f_after=<function AfterGenerator.<lambda>>)[source]

Bases: BaseGenerator

cal_max_weight()[source]
generate(N)[source]
build_phsp_chain(decay_group)[source]

find common decays whose mother particle mass is fixed

build_phsp_chain_sorted(st, final_mi, nodes)[source]

{A: [B,C, D], R: [B,C]} + {R: M} => ((mr, (mb, mc)), md)

create_cal_calangle(config, include_charge=False)[source]
gen_random_charge(N, random=True)[source]
generate_SDP(config, node, N=1000, include_charge=False, legacy=True)[source]
generate_SDP_p(config, node, N=1000, legacy=False)[source]
generate_phsp(config, N=1000, include_charge=False, cal_max=False)[source]
generate_phsp_p(config, N=1000, cal_max=False)[source]
generate_toy(config, N=1000, force=True, gen=None, gen_p=None, importance_f=None, max_N=100000, include_charge=False, cal_phsp_max=False)[source]

A more accurate method for generating toy data.

Parameters:
  • N – number of events.

  • force – if True, remove the extra generated data.

  • gen – optional function to generate phase space; the return value is the same as config.get_data.

  • gen_p – optional function to generate phase-space momenta; the return value is a dict such as {B: pb, C: pc, D: pd}.

  • max_N – max number of events for every try.

generate_toy2(config, *args, **kwargs)[source]
generate_toy_o(config, N=1000, force=True, max_N=100000)[source]
generate_toy_p(config, N=1000, force=True, gen_p=None, importance_f=None, max_N=100000, include_charge=False, cal_phsp_max=False)[source]

generate toy data momenta.

get_SDP_generator(config, node, include_charge=False, legacy=True)[source]
get_SDP_p_generator(config, node, legacy=True)[source]
get_SDP_p_generator_legacy(config, node)[source]
get_phsp_generator(config, include_charge=False, nodes=[])[source]
get_phsp_p_generator(config, nodes=[])[source]
perfer_node(struct, index, nodes)[source]

reorder struct so that node exists in PhaseGenerator

single_sampling(phsp, amp, N)[source]
trans_node_order(struct, index, order_trans, level)[source]

data_trans

Submodules and Subpackages

dalitz
class Dalitz(m0, m1, m2, m3)[source]

Bases: object

generate_p(m12, m23)[source]

generate momentum for Dalitz variables

generate_p(m12, m23, m0, m1, m2, m3)[source]

generate momentum from Dalitz variables m12, m23
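The momentum construction from Dalitz variables rests on the standard two-body breakup momentum. A minimal numpy sketch (an illustration with a hypothetical function name, not the TFPWA implementation):

```python
import numpy as np

def breakup_momentum(m0, m1, m2):
    # magnitude of the daughter momentum for m0 -> m1 + m2,
    # q = sqrt(lambda(m0^2, m1^2, m2^2)) / (2 m0) with the Kallen function
    lam = (m0**2 - (m1 + m2) ** 2) * (m0**2 - (m1 - m2) ** 2)
    return np.sqrt(lam) / (2 * m0)

# e.g. rho(770) -> pi pi
q = breakup_momentum(0.775, 0.139, 0.139)
```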

helicity_angle
class HelicityAngle(decay_chain)[source]

Bases: object

general implementation of the angle to momentum transform

build_data(ms, costheta, phi)[source]

generate momentum with M_name = m

cal_angle(p4)[source]
eval_phsp_factor(ms)[source]
find_variable(dat)[source]
generate_p_mass(name, m, random=False)[source]

generate momentum with M_name = m

get_all_mass(replace_mass)[source]
get_mass_range(name)[source]
get_phsp_factor(name, m)[source]
mass_linspace(name, N)[source]
class HelicityAngle1(decay_chain)[source]

Bases: object

simple implementation of the angle to momentum transform

generate_p(ms, costheta, phi)[source]
generate_p2(ms, costheta, phi)[source]
generate_p_mass(name, m, random=False)[source]

generate momentum with M_name = m

get_phsp_factor(name, m)[source]
create_rotate_p(ps, ms, costheta, phi)[source]
create_rotate_p_decay(decay_chain, mass, data)[source]
generate_p(ms, msp, costheta, phi)[source]

ms(0) -> ms(1) + msp(0), costheta(0), phi(0)
ms(1) -> ms(2) + msp(1), costheta(1), phi(1)
…
ms(n) -> ms(n+1) + msp(n), costheta(n), phi(n)

lorentz_neg(pc)[source]
normal(p)[source]
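Each step of the angle-to-momentum transform builds the daughter momenta from a breakup momentum and the helicity angles (costheta, phi) in the parent rest frame, before rotating and boosting down the chain. A one-step sketch in plain numpy (hypothetical helper, not this module's API):

```python
import numpy as np

def two_body_p4(m0, m1, m2, costheta, phi):
    # daughter 4-momenta (E, px, py, pz) in the m0 rest frame,
    # built from the helicity angles (costheta, phi)
    q = np.sqrt((m0**2 - (m1 + m2) ** 2) * (m0**2 - (m1 - m2) ** 2)) / (2 * m0)
    sintheta = np.sqrt(1 - costheta**2)
    n = np.array([sintheta * np.cos(phi), sintheta * np.sin(phi), costheta])
    e1, e2 = np.sqrt(q**2 + m1**2), np.sqrt(q**2 + m2**2)
    return np.concatenate([[e1], q * n]), np.concatenate([[e2], -q * n])

p1, p2 = two_body_p4(3.0, 1.0, 0.5, 0.3, 1.2)
```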

experimental

Submodules and Subpackages

build_amp
amp_matrix_as_dict(dec, hij)[source]
build_amp2s(dg)[source]
build_amp_matrix(dec, data, weight=None)[source]
build_angle_amp_matrix(dec, data, weight=None)[source]
build_params_vector(dg, data)[source]
build_sum_amplitude(dg, dec_chain, data)[source]
build_sum_angle_amplitude(dg, dec_chain, data)[source]
cached_amp(dg, data, matrix_method=<function build_angle_amp_matrix>)[source]
cached_amp2s(dg, data)[source]
extra_amp
extra_data
class MultiNpzData(*args, **kwargs)[source]

Bases: NpzData

get_data(idx) list[source]
get_phsp_noeff()[source]
class NpzData(dic, decay_struct, config=None)[source]

Bases: SimpleData

get_data(idx) dict[source]
get_particle_p()[source]
load_data(files, weights=None, weights_sign=1, charge=None) dict[source]
extra_function
extra_function(f0=None, using_numpy=True)[source]

Using extra function with numerical differentiation.

It can be used for numpy function or numba.vectorize function interface.

>>> import numpy as np
>>> sin2 = extra_function(np.sin)
>>> a = tf.Variable([1.0,2.0], dtype="float64")
>>> with tf.GradientTape(persistent=True) as tape0:
...     with tf.GradientTape(persistent=True) as tape:
...         b = sin2(a)
...     g, = tape.gradient(b, [a,])
...
>>> h, = tape0.gradient(g, [a,])
>>> assert np.allclose(np.sin([1.0,2.0]), b.numpy())
>>> assert np.allclose(np.cos([1.0,2.0]), g.numpy())
>>> assert np.sum(np.abs(-np.sin([1.0,2.0]) - h.numpy())) < 1e-3

The numerical accuracy is not so good for the second derivative.

factor_system

Module for factor system.

A = a1 (B x C x D) + a2 (E x F)
B = b1 B1 + b2 B2

is a tree structure: A -> [(a1, [(b1, B1), (b2, B2)], C, D), (a2, E, F)]

Each component is a path from root to a leaf: (a1, b1), (a1, b2), (a2,)

We can add some options to change the possible combination. (TODO)
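The path expansion above can be sketched with a small recursive helper (hypothetical, not this module's API): every root-to-leaf chain of coefficients becomes one tuple.

```python
import itertools

def coeff_paths(node):
    # enumerate root-to-leaf coefficient paths of a nested
    # [(coeff, child, ...), ...] tree; plain strings are leaves
    paths = []
    for coeff, *children in node:
        sub = [coeff_paths(c) for c in children if isinstance(c, list)]
        if not sub:
            paths.append((coeff,))
        else:
            for combo in itertools.product(*sub):
                paths.append((coeff,) + tuple(x for t in combo for x in t))
    return paths

tree = [("a1", [("b1",), ("b2",)], "C", "D"), ("a2", "E", "F")]
paths = coeff_paths(tree)  # the three paths listed above
```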

flatten_all(x)[source]
get_all_chain(a)[source]
get_all_partial_amp(amp, data, strip_part=[])[source]
get_chain_name(chain)[source]
get_id_variable(all_var, var)[source]
get_prod_chain(i)[source]
get_split_chain(a)[source]
partial_amp(amp, data, all_va, need_va)[source]
strip_variable(var_all, part=[])[source]
temp_var(vm)[source]
opt_int
build_int_matrix(dec, data, weight=None)[source]
build_int_matrix_batch(dec, data, batch=65000)[source]
build_params_matrix(dec)[source]
build_params_vector(dec, concat=True)[source]
build_sum_amplitude(dg, dec_chain, data)[source]
cached_int_mc(dec, data, batch=65000)[source]
gls_combine(fs)[source]
split_gls(dec_chain)[source]
wrap_function
class Count(idx=0)[source]

Bases: object

add(value=1)[source]
class WrapFun(f, jit_compile=False)[source]

Bases: object

generator

Submodules and Subpackages

breit_wigner
class BWGenerator(m0, gamma0, m_min, m_max)[source]

Bases: BaseGenerator

DataType

alias of ndarray

generate(N)[source]
integral(x)[source]
solve(x)[source]
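The integral/solve pair suggests inverse-CDF sampling. A self-contained sketch for a nonrelativistic Breit-Wigner (Cauchy) line shape, whose unnormalized CDF is an arctan (an illustration of the technique, not BWGenerator's internals):

```python
import numpy as np

def bw_sample(m0, gamma, m_min, m_max, n, seed=0):
    # inverse-CDF sampling of a nonrelativistic Breit-Wigner (Cauchy)
    # truncated to [m_min, m_max]
    rng = np.random.default_rng(seed)
    half = gamma / 2.0
    cdf = lambda x: np.arctan((x - m0) / half)  # unnormalized CDF
    u = rng.uniform(cdf(m_min), cdf(m_max), size=n)
    return m0 + half * np.tan(u)

sample = bw_sample(0.775, 0.15, 0.3, 1.2, 10000)
```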
generator
class ARGenerator(phsp, amp, max_weight=None)[source]

Bases: BaseGenerator

Acceptance-Rejection Sampling

generate(N)[source]
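Acceptance-rejection sampling keeps each candidate event x with probability amp(x)/max_weight. A plain numpy sketch (hypothetical names, not the ARGenerator implementation):

```python
import numpy as np

def accept_reject(phsp, amp, max_weight, seed=1):
    # keep each phase-space event x with probability amp(x) / max_weight
    rng = np.random.default_rng(seed)
    w = amp(phsp)
    keep = rng.uniform(0.0, max_weight, size=len(phsp)) < w
    return phsp[keep]

phsp = np.random.default_rng(2).uniform(-1, 1, 100000)  # flat "phase space"
events = accept_reject(phsp, lambda c: 1 + c**2, max_weight=2.0)
```

The efficiency is the mean weight divided by max_weight, which is why a tight max_weight estimate matters.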
class BaseGenerator[source]

Bases: object

DataType = typing.Any
abstract generate(N: int) Any[source]
class GenTest(N_max, display=True)[source]

Bases: object

add_gen(n_gen)[source]
generate(N)[source]
set_gen(n_gen)[source]
multi_sampling(phsp, amp, N, max_N=200000, force=True, max_weight=None, importance_f=None, display=True)[source]
single_sampling2(phsp, amp, N, max_weight=None, importance_f=None)[source]
interp_nd
class InterpND(xs, z, indexing='ij')[source]

Bases: object

build_coeffs()[source]
generate(N)[source]
intgral_step()[source]
class InterpNDHist(xs, z, indexing='ij')[source]

Bases: object

build_coeffs()[source]
generate(N)[source]
interp_f(x)[source]
intgral_step()[source]
linear_interpolation
class LinearInterp(x, y, epsilon=1e-10)[source]

Bases: BaseGenerator

linear interpolation function for sampling

DataType

alias of ndarray

cal_coeffs()[source]
generate(N)[source]
integral(x)[source]
solve(x)[source]
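The cal_coeffs/integral/solve pattern amounts to inverting the cumulative distribution of a piecewise-linear density. An approximate sketch with np.interp (illustration only: the exact inverse on a linear pdf segment is a square root, so this is accurate only on a fine grid):

```python
import numpy as np

def linear_interp_sample(x, y, n, seed=0):
    # sample from the piecewise-linear density given by knots (x, y):
    # build the trapezoid-rule CDF, then invert it by interpolation
    rng = np.random.default_rng(seed)
    cdf = np.concatenate([[0.0], np.cumsum((y[1:] + y[:-1]) / 2 * np.diff(x))])
    cdf /= cdf[-1]
    return np.interp(rng.uniform(0, 1, n), cdf, x)

xs = np.linspace(0.0, 1.0, 101)
sample = linear_interp_sample(xs, 2 * xs, 20000)  # pdf f(x) = 2x on [0, 1]
```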
class LinearInterpImportance(f, x)[source]

Bases: BaseGenerator

DataType

alias of ndarray

generate(N)[source]
interp_sample(f, xmin, xmax, interp_N, N)[source]
interp_sample_f(f, f_interp, N)[source]
interp_sample_once(f, f_interp, N, max_rnd)[source]
sample_test_function(x)[source]
plane_2d
class Interp2D(x, y, z)[source]

Bases: TriangleGenerator

class TriangleGenerator(x, y, z)[source]

Bases: object

cal_1d_shape()[source]
cal_coeff_left()[source]
cal_coeff_right()[source]
cal_coeffs()[source]

\[z = a x + b y + c\]
\[\begin{pmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{pmatrix} \begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix}\]
\[z_c = a x + b y + c\]
\[s^2 = (x-x_c)^2 + (y-y_c)^2\]
\[s = \frac{\sqrt{a^2+b^2}}{b} (x-x_c)\]
\[x = \frac{b s}{\sqrt{a^2+b^2}} + x_c\]
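The linear system for the plane coefficients can be solved directly; a sketch with toy vertices (not the TriangleGenerator internals):

```python
import numpy as np

# plane z = a x + b y + c through three toy vertices
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
z = np.array([1.0, 3.0, 2.0])
A = np.column_stack([xy, np.ones(3)])  # rows [x_i, y_i, 1]
a, b, c = np.linalg.solve(A, z)
```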

cal_st_xy(x, y, bin_index=slice(None, None, None))[source]
cal_xy_st(s, t, bin_index=slice(None, None, None))[source]
generate(N)[source]
generate_st(N)[source]
solve_left(y, bin_index=slice(None, None, None))[source]
solve_right(y, bin_index=slice(None, None, None))[source]
solve_s(s_r, bin_index=slice(None, None, None))[source]

\[\int \left[ (k_1 s + b_1)^2 - (k_2 s + b_2)^2 \right] ds = \int d s_r\]

t_min_max(s, bin_index=slice(None, None, None))[source]
solve_2(a2, a1, x0, y)[source]

solve \((a_2 x^2 + a_1 x)\big|_{x_0}^{x} = y\)

solve_3(a3, a2, a1, x0, x_max, y)[source]

solve \((a_3 x^3 + a_2 x^2 + a_1 x)\big|_{x_0}^{x} = y\)

square_dalitz_plot
class SDPGenerator(m0, mi, legacy=True)[source]

Bases: BaseGenerator

generate(N)[source]
>>> from tf_pwa.generator.square_dalitz_plot import SDPGenerator
>>> gen = SDPGenerator(3.0, [1.0, 0.5, 0.1])
>>> p1, p2, p3 = gen.generate(100)
generate_SDP(m0, mi, N=1000, legacy=True)[source]

generate the square Dalitz plot distribution for particles 1, 2

The legacy mode includes a cut-off at the threshold.

(Figure: generated square Dalitz plot distribution)
square_dalitz_cut(p)[source]

Copied from an old version of EvtGen.

\[|J| = 4 p q m_{12} \frac{\partial m_{12}}{\partial m'} \frac{\partial \cos\theta_{12}}{\partial \theta'}\]
\[\frac{\partial m_{12}}{\partial m'} = -\frac{\pi}{2} \sin (\pi m') (m_{12}^{max} - m_{12}^{min})\]
\[\frac{\partial \cos\theta_{12}}{\partial \theta'} = -\pi \sin (\pi \theta')\]
square_dalitz_variables(p)[source]

Variables used in the square Dalitz plot; the first two are \(m'\) and \(\theta'\).

\[m' = \frac{1}{\pi}\cos^{-1} \left(2 \frac{m_{12}-m^{min}_{12}}{m^{max}_{12}-m^{min}_{12}} - 1\right)\]
\[\theta' = \frac{1}{\pi} \theta_{12}\]
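The formulas above can be evaluated directly. A sketch with a hypothetical signature and toy masses (here \(\theta'\) is computed from \(\cos\theta_{12}\), which is equivalent to \(\theta_{12}/\pi\)):

```python
import numpy as np

def sdp_variables(m12, costheta12, m0, m1, m2, m3):
    # square-Dalitz variables (m', theta') from m12 and cos(theta12)
    m12_min, m12_max = m1 + m2, m0 - m3
    mp = np.arccos(2 * (m12 - m12_min) / (m12_max - m12_min) - 1) / np.pi
    thetap = np.arccos(costheta12) / np.pi
    return mp, thetap

mp, thetap = sdp_variables(1.8, 0.0, 3.0, 1.0, 0.5, 0.1)
```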

model

Submodules and Subpackages

cfit
class ModelCfitExtended(amp, w_bkg=0.001, bg_f=None, eff_f=None)[source]

Bases: Model

nll(data, mcdata, weight: Tensor = 1.0, batch=None, bg=None, mc_weight=None)[source]

Calculate NLL.

\[\begin{split}-\ln L = -\sum_{x_i \in data } w_i \ln P(x_i;\theta_k) \\ P(x_i;\theta_k) = (1-f_{bg})Amp(x_i; \theta_k) + f_{bg} Bg(x_{i};\theta_k)\end{split}\]
\[-\ln L_{2} = -\ln \left( L \frac{\lambda^{N_{data}}}{N_{data}!} e^{-\lambda} \right) = -\ln L - N_{data} \ln \lambda + \lambda + C\]
\[\lambda = 1/(1-f_{bg}) \int Amp(x_i; \theta_k) d \Phi\]
Parameters:
  • data – Data array

  • mcdata – MCdata array

  • weight – Weight of data.

  • batch – The length of array to calculate as a vector at a time. How to fold the data array may depend on the GPU computability.

  • bg – Background data array. It can be set to None if there is no such thing.

Returns:

Real number. The value of NLL.
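The mixture probability above can be sketched in plain numpy, normalizing the signal and background shapes by MC means over phase-space samples (toy arrays and a hypothetical helper, not the TFPWA API):

```python
import numpy as np

def cfit_nll(amp_data, amp_phsp, bg_data, bg_phsp, f_bg, w=1.0):
    # P = (1 - f_bg) * Amp / int(Amp) + f_bg * Bg / int(Bg),
    # with the integrals estimated as MC means over phase-space samples
    p = ((1 - f_bg) * amp_data / np.mean(amp_phsp)
         + f_bg * bg_data / np.mean(bg_phsp))
    return -np.sum(w * np.log(p))

rng = np.random.default_rng(0)
amp_d, amp_m = rng.uniform(1, 2, 100), rng.uniform(1, 2, 10000)
nll = cfit_nll(amp_d, amp_m, np.ones(100), np.ones(10000), f_bg=0.3)
```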

nll_grad_batch(data, mcdata, weight, mc_weight)[source]
\[P = (1-frac) \frac{amp(data)}{\sum amp(phsp)} + frac \frac{bg(data)}{\sum bg(phsp)}\]
\[nll = - \sum \log(p)\]
\[\frac{\partial nll}{\partial \theta} = -\sum \frac{1}{p} \frac{\partial p}{\partial \theta} = - \sum \frac{\partial \ln \bar{p}}{\partial \theta } +\frac{\partial nll}{\partial I_{sig} } \frac{\partial I_{sig} }{\partial \theta} +\frac{\partial nll}{\partial I_{bg}}\frac{\partial I_{bg}}{\partial \theta}\]
nll_grad_hessian(data, mcdata, weight=1.0, batch=24000, bg=None, mc_weight=1.0)[source]

The parameters are the same as for self.nll(), but it will return the Hessian as well.

\[\frac{\partial^2 f}{\partial x_i \partial x_j} = \frac{\partial y_k}{\partial x_i} \frac{\partial^2 f}{\partial y_k \partial y_l} \frac{\partial y_l}{\partial x_j} + \frac{\partial f}{\partial y_k} \frac{\partial^2 y_k}{\partial x_i \partial x_j}\]
\[y = \{x_i; I_{sig}, I_{bg}\}\]
\[\frac{\partial y_k }{\partial x_i} = (\delta_{ik};\frac{\partial I_{sig}}{\partial x_i}, \frac{\partial I_{bg}}{\partial x_i})\]
Return NLL:

Real number. The value of NLL.

Return gradients:

List of real numbers. The gradients for each variable.

Return Hessian:

2-D Array of real numbers. The Hessian matrix of the variables.

class Model_cfit(amp, w_bkg=0.001, bg_f=None, eff_f=None, resolution_size=1)[source]

Bases: Model

nll(data, mcdata, weight: Tensor = 1.0, batch=None, bg=None, mc_weight=None)[source]

Calculate NLL.

\[\begin{split}-\ln L = -\sum_{x_i \in data } w_i \ln P(x_i;\theta_k) \\ P(x_i;\theta_k) = (1-f_{bg})Amp(x_i; \theta_k) + f_{bg} Bg(x_{i};\theta_k)\end{split}\]
Parameters:
  • data – Data array

  • mcdata – MCdata array

  • weight – Weight of data.

  • batch – The length of array to calculate as a vector at a time. How to fold the data array may depend on the GPU computability.

  • bg – Background data array. It can be set to None if there is no such thing.

Returns:

Real number. The value of NLL.

nll_grad_batch(data, mcdata, weight, mc_weight)[source]
\[P = (1-frac) \frac{amp(data)}{\sum amp(phsp)} + frac \frac{bg(data)}{\sum bg(phsp)}\]
\[nll = - \sum \log(p)\]
\[\frac{\partial nll}{\partial \theta} = -\sum \frac{1}{p} \frac{\partial p}{\partial \theta} = - \sum \frac{\partial \ln \bar{p}}{\partial \theta } +\frac{\partial nll}{\partial I_{sig} } \frac{\partial I_{sig} }{\partial \theta} +\frac{\partial nll}{\partial I_{bg}}\frac{\partial I_{bg}}{\partial \theta}\]
nll_grad_hessian(data, mcdata, weight=1.0, batch=24000, bg=None, mc_weight=1.0)[source]

The parameters are the same as for self.nll(), but it will return the Hessian as well.

\[\frac{\partial^2 f}{\partial x_i \partial x_j} = \frac{\partial y_k}{\partial x_i} \frac{\partial^2 f}{\partial y_k \partial y_l} \frac{\partial y_l}{\partial x_j} + \frac{\partial f}{\partial y_k} \frac{\partial^2 y_k}{\partial x_i \partial x_j}\]
\[y = \{x_i; I_{sig}, I_{bg}\}\]
\[\frac{\partial y_k }{\partial x_i} = (\delta_{ik};\frac{\partial I_{sig}}{\partial x_i}, \frac{\partial I_{bg}}{\partial x_i})\]
Return NLL:

Real number. The value of NLL.

Return gradients:

List of real numbers. The gradients for each variable.

Return Hessian:

2-D Array of real numbers. The Hessian matrix of the variables.

class Model_cfit_cached(amp, w_bkg=0.001, bg_f=None, eff_f=None)[source]

Bases: Model_cfit

nll_grad_batch(data, mcdata, weight, mc_weight)[source]
\[P = (1-frac) \frac{amp(data)}{\sum amp(phsp)} + frac \frac{bg(data)}{\sum bg(phsp)}\]
\[nll = - \sum \log(p)\]
\[\frac{\partial nll}{\partial \theta} = -\sum \frac{1}{p} \frac{\partial p}{\partial \theta} = - \sum \frac{\partial \ln \bar{p}}{\partial \theta } +\frac{\partial nll}{\partial I_{sig} } \frac{\partial I_{sig} }{\partial \theta} +\frac{\partial nll}{\partial I_{bg}}\frac{\partial I_{bg}}{\partial \theta}\]
f_bg(data)[source]
f_eff(data)[source]
custom
class BaseCustomModel(amp, w_bkg=1.0, resolution_size=1, extended=False, **kwargs)[source]

Bases: Model

eval_nll_part(data, weight=None, norm=None, idx=0)[source]
eval_normal_factors(mcdata, weight=None)[source]
nll(data, mcdata, weight: Tensor = 1.0, batch=None, bg=None, mc_weight=1.0)[source]

Calculate NLL.

\[-\ln L = -\sum_{x_i \in data } w_i \ln f(x_i;\theta_k) + (\sum w_j ) \ln \sum_{x_i \in mc } f(x_i;\theta_k)\]
Parameters:
  • data – Data array

  • mcdata – MCdata array

  • weight – Weight of data.

  • batch – The length of array to calculate as a vector at a time. How to fold the data array may depend on the GPU computability.

  • bg – Background data array. It can be set to None if there is no such thing.

Returns:

Real number. The value of NLL.

nll_grad_batch(data, mcdata, weight, mc_weight)[source]

batch version of self.nll_grad()

\[- \frac{\partial \ln L}{\partial \theta_k } = -\sum_{x_i \in data } w_i \frac{\partial}{\partial \theta_k} \ln f(x_i;\theta_k) + (\sum w_j ) \left( \frac{ \partial }{\partial \theta_k} \sum_{x_i \in mc} f(x_i;\theta_k) \right) \frac{1}{ \sum_{x_i \in mc} f(x_i;\theta_k) }\]
Parameters:
  • data

  • mcdata

  • weight

  • mc_weight

Returns:

nll_grad_hessian(data, mcdata, weight=1.0, batch=24000, bg=None, mc_weight=1.0)[source]

The parameters are the same as for self.nll(), but it will return the Hessian as well.

Return NLL:

Real number. The value of NLL.

Return gradients:

List of real numbers. The gradients for each variable.

Return Hessian:

2-D Array of real numbers. The Hessian matrix of the variables.

value_and_grad(fun)[source]
class SimpleCFitModel(amp, w_bkg=1.0, resolution_size=1, extended=False, **kwargs)[source]

Bases: BaseCustomModel

eval_nll_part(data, weight, norm, idx=0)[source]
eval_normal_factors(mcdata, weight)[source]
required_params = ['bg_frac']
class SimpleChi2Model(amp, w_bkg=1.0, resolution_size=1, extended=False, **kwargs)[source]

Bases: BaseCustomModel

fit amp = weight directly. Requires extended = True.

eval_nll_part(data, weight, norm, idx=0)[source]
class SimpleClipNllModel(amp, w_bkg=1.0, resolution_size=1, extended=False, **kwargs)[source]

Bases: SimpleNllModel

eval_nll_part(data, weight, norm, idx=0)[source]
class SimpleNllFracModel(amp, w_bkg=1.0, resolution_size=1, extended=False, **kwargs)[source]

Bases: BaseCustomModel

eval_nll_part(data, weight, norm, idx=0)[source]
eval_normal_factors(mcdata, weight)[source]
required_params = ['constr_frac', 'bg_frac']
class SimpleNllModel(amp, w_bkg=1.0, resolution_size=1, extended=False, **kwargs)[source]

Bases: BaseCustomModel

eval_nll_part(data, weight, norm, idx=0)[source]
eval_normal_factors(mcdata, weight)[source]
model

This module provides methods to calculate the NLL (negative log-likelihood) as well as its derivatives.

class BaseModel(signal, resolution_size=1, extended=False)[source]

Bases: object

This class implements methods to calculate NLL as well as its derivatives for an amplitude model. It may include data for both signal and background.

Parameters:

signal – Signal Model

get_params(trainable_only=False)[source]

It has interface to Amplitude.get_params().

grad_hessp_batch(p, data, mcdata, weight, mc_weight)[source]

self.nll_grad() is replaced by this one.

\[- \frac{\partial \ln L}{\partial \theta_k } = -\sum_{x_i \in data } w_i \frac{\partial}{\partial \theta_k} \ln f(x_i;\theta_k) + (\sum w_j ) \left( \frac{ \partial }{\partial \theta_k} \sum_{x_i \in mc} f(x_i;\theta_k) \right) \frac{1}{ \sum_{x_i \in mc} f(x_i;\theta_k) }\]
Parameters:
  • data

  • mcdata

  • weight

  • mc_weight

Returns:

nll(data, mcdata)[source]

Negative log-Likelihood

nll_grad(data, mcdata, batch=65000)[source]
nll_grad_batch(data, mcdata, weight, mc_weight)[source]

self.nll_grad() is replaced by this one.

\[- \frac{\partial \ln L}{\partial \theta_k } = -\sum_{x_i \in data } w_i \frac{\partial}{\partial \theta_k} \ln f(x_i;\theta_k) + (\sum w_j ) \left( \frac{ \partial }{\partial \theta_k} \sum_{x_i \in mc} f(x_i;\theta_k) \right) \frac{1}{ \sum_{x_i \in mc} f(x_i;\theta_k) }\]
Parameters:
  • data

  • mcdata

  • weight

  • mc_weight

Returns:

nll_grad_hessian(data, mcdata, batch=25000)[source]

The parameters are the same as for self.nll(), but it will return the Hessian as well.

Return NLL:

Real number. The value of NLL.

Return gradients:

List of real numbers. The gradients for each variable.

Return Hessian:

2-D Array of real numbers. The Hessian matrix of the variables.

set_params(var)[source]

It has interface to Amplitude.set_params().

sum_log_integral_grad_batch(mcdata, ndata)[source]
sum_nll_grad_bacth(data)[source]
sum_resolution(w)[source]
property trainable_variables
class CombineFCN(model=None, data=None, mcdata=None, bg=None, fcns=None, batch=65000, gauss_constr={})[source]

Bases: object

This class implements methods to calculate the NLL as well as its derivatives for a general function.

Parameters:
  • model – List of model object.

  • data – List of data array.

  • mcdata – list of MCdata array.

  • bg – list of Background array.

  • batch – The length of array to calculate as a vector at a time. How to fold the data array may depend on the GPU computability.

get_grad(x={})[source]
Parameters:

x – List. Values of variables.

Return gradients:

List of real numbers. The gradients for each variable.

get_grad_hessp(x, p, batch)[source]
Parameters:

x – List. Values of variables.

Return nll:

Real number. The value of NLL.

Return gradients:

List of real numbers. The gradients for each variable.

get_nll(x={})[source]
Parameters:

x – List. Values of variables.

Return nll:

Real number. The value of NLL.

get_nll_grad(x={})[source]
Parameters:

x – List. Values of variables.

Return nll:

Real number. The value of NLL.

Return gradients:

List of real numbers. The gradients for each variable.

get_nll_grad_hessian(x={}, batch=None)[source]
Parameters:

x – List. Values of variables.

Return nll:

Real number. The value of NLL.

Return gradients:

List of real numbers. The gradients for each variable.

Return hessian:

2-D Array of real numbers. The Hessian matrix of the variables.

get_params(trainable_only=False)[source]
grad(x={})[source]
grad_hessp(x, p, batch=None)[source]
nll_grad(x={})[source]
nll_grad_hessian(x={}, batch=None)[source]
class ConstrainModel(amp, w_bkg=1.0, constrain={})[source]

Bases: Model

negative log-likelihood model with constraints

get_constrain_grad()[source]
constraint: Gauss(mean, sigma)

by adding a term \(\frac{d}{d\theta_i}\frac{(\theta_i-\bar{\theta_i})^2}{2\sigma^2} = \frac{\theta_i-\bar{\theta_i}}{\sigma^2}\)

get_constrain_hessian()[source]

the second derivative of the constrained parameters

get_constrain_term()[source]
constraint: Gauss(mean, sigma)

by adding a term \(\frac{(\theta_i-\bar{\theta_i})^2}{2\sigma^2}\)

nll(data, mcdata, weight=1.0, bg=None, batch=None)[source]

calculate negative log-likelihood

\[-\ln L = -\sum_{x_i \in data } w_i \ln f(x_i;\theta_i) + (\sum w_i ) \ln \sum_{x_i \in mc } f(x_i;\theta_i) + cons\]
nll_gradient(data, mcdata, weight=1.0, batch=None, bg=None)[source]

calculate negative log-likelihood with gradient

\[\frac{\partial }{\partial \theta_i }(-\ln L) = -\sum_{x_i \in data } w_i \frac{\partial }{\partial \theta_i } \ln f(x_i;\theta_i) + \frac{\sum w_i }{\sum_{x_i \in mc }f(x_i;\theta_i)} \sum_{x_i \in mc } \frac{\partial }{\partial \theta_i } f(x_i;\theta_i) + cons\]
class FCN(model, data, mcdata, bg=None, batch=65000, inmc=None, gauss_constr={})[source]

Bases: object

This class implements methods to calculate the NLL as well as its derivatives for a general function.

Parameters:
  • model – Model object.

  • data – Data array.

  • mcdata – MCdata array.

  • bg – Background array.

  • batch – The length of array to calculate as a vector at a time. How to fold the data array may depend on the GPU computability.

get_grad(x={})[source]
Parameters:

x – List. Values of variables.

Return gradients:

List of real numbers. The gradients for each variable.

get_grad_hessp(x, p, batch)[source]
get_nll(x={})[source]
Parameters:

x – List. Values of variables.

Return nll:

Real number. The value of NLL.

get_nll_grad(x={})[source]
Parameters:

x – List. Values of variables.

Return nll:

Real number. The value of NLL.

Return gradients:

List of real numbers. The gradients for each variable.

get_nll_grad_hessian(x={}, batch=None)[source]
Parameters:

x – List. Values of variables.

Return nll:

Real number. The value of NLL.

Return gradients:

List of real numbers. The gradients for each variable.

Return hessian:

2-D Array of real numbers. The Hessian matrix of the variables.

get_params(trainable_only=False)[source]
grad(x={})[source]
grad_hessp(x, p, batch=None)[source]
nll_grad(x={})[source]
nll_grad_hessian(x={}, batch=None)[source]
class GaussianConstr(vm, constraint={})[source]

Bases: object

get_constrain_grad()[source]
constraint: Gauss(mean,sigma)

by adding a term \(\frac{d}{d\theta_i}\frac{(\theta_i-\bar{\theta_i})^2}{2\sigma^2} = \frac{\theta_i-\bar{\theta_i}}{\sigma^2}\)

get_constrain_hessian()[source]

the second derivative of the constrained parameters

get_constrain_term()[source]
constraint: Gauss(mean,sigma)

by adding a term \(\frac{(\theta_i-\bar{\theta_i})^2}{2\sigma^2}\)

update(constraint={})[source]
class MixLogLikehoodFCN(model, data, mcdata, bg=None, batch=65000, gauss_constr={})[source]

Bases: CombineFCN

This class implements methods to calculate the NLL as well as its derivatives for a general function.

Parameters:
  • model – List of model object.

  • data – List of data array.

  • mcdata – list of MCdata array.

  • bg – list of Background array.

  • batch – The length of array to calculate as a vector at a time. How to fold the data array may depend on the GPU computability.

get_nll_grad(x={})[source]
Parameters:

x – List. Values of variables.

Return nll:

Real number. The value of NLL.

Return gradients:

List of real numbers. The gradients for each variable.

class Model(amp, w_bkg=1.0, resolution_size=1, extended=False, **kwargs)[source]

Bases: object

This class implements methods to calculate NLL as well as its derivatives for an amplitude model. It may include data for both signal and background.

Parameters:
  • amp – AllAmplitude object. The amplitude model.

  • w_bkg – Real number. The weight of background.

get_params(trainable_only=False)[source]

It has interface to Amplitude.get_params().

get_weight_data(data, weight=None, bg=None, alpha=True)[source]

Blend data and background data together multiplied by their weights.

Parameters:
  • data – Data array

  • weight – Weight for data

  • bg – Data array for background

  • alpha – Boolean. If True, weight will be multiplied by a normalization factor \(\alpha\).

Returns:

Data, weight. Their lengths both equal len(data)+len(bg).

grad_hessp_batch(p, data, mcdata, weight, mc_weight)[source]

self.nll_grad() is replaced by this one.

\[- \frac{\partial \ln L}{\partial \theta_k } = -\sum_{x_i \in data } w_i \frac{\partial}{\partial \theta_k} \ln f(x_i;\theta_k) + (\sum w_j ) \left( \frac{ \partial }{\partial \theta_k} \sum_{x_i \in mc} f(x_i;\theta_k) \right) \frac{1}{ \sum_{x_i \in mc} f(x_i;\theta_k) }\]
Parameters:
  • data

  • mcdata

  • weight

  • mc_weight

Returns:

mix_data_bakcground(data, bg)[source]
nll(data, mcdata, weight: Tensor = 1.0, batch=None, bg=None, mc_weight=1.0)[source]

Calculate NLL.

\[-\ln L = -\sum_{x_i \in data } w_i \ln f(x_i;\theta_k) + (\sum w_j ) \ln \sum_{x_i \in mc } f(x_i;\theta_k)\]
Parameters:
  • data – Data array

  • mcdata – MCdata array

  • weight – Weight of data.

  • batch – The length of array to calculate as a vector at a time. How to fold the data array may depend on the GPU computability.

  • bg – Background data array. It can be set to None if there is no such thing.

Returns:

Real number. The value of NLL.
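The NLL formula above, in a plain numpy sketch (the actual implementation is batched TensorFlow; the helper name is hypothetical):

```python
import numpy as np

def unbinned_nll(f_data, f_mc, w=None, mc_w=None):
    # -ln L = -sum_i w_i ln f(x_i) + (sum_i w_i) ln sum_mc w_mc f(x),
    # the MC sum serving as the normalization integral
    w = np.ones_like(f_data) if w is None else w
    mc_w = np.ones_like(f_mc) if mc_w is None else mc_w
    return -np.sum(w * np.log(f_data)) + np.sum(w) * np.log(np.sum(mc_w * f_mc))

# flat amplitude: data and MC terms cancel exactly
nll = unbinned_nll(np.array([2.0, 2.0]), np.array([1.0, 1.0]))
```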

nll_grad(data, mcdata, weight=1.0, batch=65000, bg=None, mc_weight=1.0)[source]

Calculate NLL and its gradients.

\[- \frac{\partial \ln L}{\partial \theta_k } = -\sum_{x_i \in data } w_i \frac{\partial}{\partial \theta_k} \ln f(x_i;\theta_k) + (\sum w_j ) \left( \frac{ \partial }{\partial \theta_k} \sum_{x_i \in mc} f(x_i;\theta_k) \right) \frac{1}{ \sum_{x_i \in mc} f(x_i;\theta_k) }\]

The parameters are the same as for self.nll(), but it will return the gradients as well.

Return NLL:

Real number. The value of NLL.

Return gradients:

List of real numbers. The gradients for each variable.

nll_grad_batch(data, mcdata, weight, mc_weight)[source]

batch version of self.nll_grad()

\[- \frac{\partial \ln L}{\partial \theta_k } = -\sum_{x_i \in data } w_i \frac{\partial}{\partial \theta_k} \ln f(x_i;\theta_k) + (\sum w_j ) \left( \frac{ \partial }{\partial \theta_k} \sum_{x_i \in mc} f(x_i;\theta_k) \right) \frac{1}{ \sum_{x_i \in mc} f(x_i;\theta_k) }\]
Parameters:
  • data

  • mcdata

  • weight

  • mc_weight

Returns:

nll_grad_hessian(data, mcdata, weight=1.0, batch=24000, bg=None, mc_weight=1.0)[source]

The parameters are the same as for self.nll(), but it will return the Hessian as well.

Return NLL:

Real number. The value of NLL.

Return gradients:

List of real numbers. The gradients for each variable.

Return Hessian:

2-D Array of real numbers. The Hessian matrix of the variables.

set_params(var)[source]

It has interface to Amplitude.set_params().

sum_log_integral_grad_batch(mcdata, ndata)[source]
sum_nll_grad_bacth(data)[source]
sum_resolution(w)[source]
class Model_new(amp, w_bkg=1.0, w_inmc=0, float_wmc=False)[source]

Bases: Model

This class implements methods to calculate NLL as well as its derivatives for an amplitude model. It may include data for both signal and background.

Parameters:
  • amp – AllAmplitude object. The amplitude model.

  • w_bkg – Real number. The weight of background.

get_weight_data(data, weight=1.0, bg=None, inmc=None, alpha=True)[source]

Blend data and background data together multiplied by their weights.

Parameters:
  • data – Data array

  • weight – Weight for data

  • bg – Data array for background

  • alpha – Boolean. If True, weight will be multiplied by a normalization factor \(\alpha\).

Returns:

Data, weight. Their lengths both equal len(data)+len(bg).

nll(data, mcdata, weight: Tensor = 1.0, batch=None, bg=None)[source]

Calculate NLL.

\[-\ln L = -\sum_{x_i \in data } w_i \ln f(x_i;\theta_k) + (\sum w_j ) \ln \sum_{x_i \in mc } f(x_i;\theta_k)\]
Parameters:
  • data – Data array

  • mcdata – MCdata array

  • weight – Weight of data.

  • batch – The length of array to calculate as a vector at a time. How to fold the data array may depend on the GPU computability.

  • bg – Background data array. It can be set to None if there is no such thing.

Returns:

Real number. The value of NLL.

nll_grad_batch(data, mcdata, weight, mc_weight)[source]

self.nll_grad_new

nll_grad_hessian(data, mcdata, weight, mc_weight)[source]

The parameters are the same as for self.nll(), but it will return the Hessian as well.

Return NLL:

Real number. The value of NLL.

Return gradients:

List of real numbers. The gradients for each variable.

Return Hessian:

2-D Array of real numbers. The Hessian matrix of the variables.

clip_log(x, _epsilon=1e-06)[source]

clip the argument of log, so that small values give a large negative number instead of infinity
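One simple way to implement such a clipped log (a sketch of the idea; the actual implementation may behave differently below the threshold, e.g. extrapolate linearly):

```python
import numpy as np

def clip_log(x, epsilon=1e-6):
    # log with the argument clipped from below: values under epsilon
    # give log(epsilon) instead of -inf
    return np.log(np.maximum(x, epsilon))

v = clip_log(np.array([0.0, 1.0]))  # finite even at x = 0
```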

get_shape(x)[source]
sum_grad_hessp(f, p, data, var, weight=1.0, trans=<function identity>, resolution_size=1, args=(), kwargs=None)[source]

The parameters are the same as for sum_gradient(), but this function will return the Hessian as well, which is the matrix of second-order derivatives.

Returns:

Real number NLL, list gradient, 2-D list hessian

sum_gradient(f, data, var, weight=1.0, trans=<function identity>, resolution_size=1, args=(), kwargs=None)[source]

NLL is the sum of trans(f(data)) * weight; gradient is the derivatives for each variable in var.

Parameters:
  • f – Function. The amplitude PDF.

  • data – Data array

  • var – List of strings. Names of the trainable variables in the PDF.

  • weight – Weight factor for each data point. It’s either a real number or an array of the same shape with data.

  • trans – Function. Transformation of data before multiplied by weight.

  • kwargs – Further arguments for f.

Returns:

Real number NLL, list gradient

sum_gradient_new(amp, data, mcdata, weight, mcweight, var, trans=<function log>, w_flatmc=<function <lambda>>, args=(), kwargs=None)[source]

NLL is the sum of trans(f(data)) * weight; gradient is the derivatives for each variable in var.

Parameters:
  • f – Function. The amplitude PDF.

  • data – Data array

  • var – List of strings. Names of the trainable variables in the PDF.

  • weight – Weight factor for each data point. It’s either a real number or an array of the same shape with data.

  • trans – Function. Transformation of data before multiplied by weight.

  • kwargs – Further arguments for f.

Returns:

Real number NLL, list gradient

sum_hessian(f, data, var, weight=1.0, trans=<function identity>, resolution_size=1, args=(), kwargs=None)[source]

The parameters are the same as for sum_gradient(), but this function will return the Hessian as well, which is the matrix of second-order derivatives.

Returns:

Real number NLL, list gradient, 2-D list hessian

sum_hessian_new(amp, data, mcdata, weight, mcweight, var, trans=<function log>, w_flatmc=<function <lambda>>, args=(), kwargs=None)[source]

The parameters are the same as those of sum_gradient(), but this function also returns the hessian, the matrix of second-order derivatives.

Returns:

Real number NLL, list gradient, 2-D list hessian

opt_int
class ModelCachedAmp(amp, w_bkg=1.0)[source]

Bases: Model

This class implements methods to calculate the NLL as well as its derivatives for an amplitude model with Cached Int. It may include data for both signal and background. Note that Cached Int will give wrong results when the floating parameters include a mass or width.

Parameters:
  • amp – AllAmplitude object. The amplitude model.

  • w_bkg – Real number. The weight of background.

grad_hessp_batch(p, data, mcdata, weight, mc_weight)[source]

This method replaces self.nll_grad().

\[- \frac{\partial \ln L}{\partial \theta_k } = -\sum_{x_i \in data } w_i \frac{\partial}{\partial \theta_k} \ln f(x_i;\theta_k) + (\sum w_j ) \left( \frac{ \partial }{\partial \theta_k} \sum_{x_i \in mc} f(x_i;\theta_k) \right) \frac{1}{ \sum_{x_i \in mc} f(x_i;\theta_k) }\]
Parameters:
  • data

  • mcdata

  • weight

  • mc_weight

Returns:

nll_grad_batch(data, mcdata, weight, mc_weight)[source]

This method replaces self.nll_grad().

\[- \frac{\partial \ln L}{\partial \theta_k } = -\sum_{x_i \in data } w_i \frac{\partial}{\partial \theta_k} \ln f(x_i;\theta_k) + (\sum w_j ) \left( \frac{ \partial }{\partial \theta_k} \sum_{x_i \in mc} f(x_i;\theta_k) \right) \frac{1}{ \sum_{x_i \in mc} f(x_i;\theta_k) }\]
Parameters:
  • data

  • mcdata

  • weight

  • mc_weight

Returns:

sum_log_integral_grad_batch(mcdata, ndata)[source]
sum_nll_grad_bacth(data)[source]
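
The weighted-NLL formula above can be sketched in plain Python for a hypothetical one-parameter toy pdf (the pdf f and all names here are illustrative, not the TFPWA API):

```python
import math

def nll_grad(theta, data, weight, mcdata):
    # Hypothetical toy pdf f(x; theta) = 1 + theta * x, with df/dtheta = x
    f = lambda x: 1.0 + theta * x
    df = lambda x: x
    sw = sum(weight)
    # data term: -sum_i w_i * (df/f)(x_i)
    grad = -sum(w * df(x) / f(x) for w, x in zip(weight, data))
    # normalization (MC) term: (sum w_j) * sum_mc df / sum_mc f
    grad += sw * sum(df(x) for x in mcdata) / sum(f(x) for x in mcdata)
    nll = -sum(w * math.log(f(x)) for w, x in zip(weight, data)) \
        + sw * math.log(sum(f(x) for x in mcdata) / len(mcdata))
    return nll, grad
```

A finite-difference check confirms that the returned gradient matches the derivative of the returned NLL.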
class ModelCachedInt(amp, w_bkg=1.0)[source]

Bases: Model

This class implements methods to calculate the NLL as well as its derivatives for an amplitude model with Cached Int. It may include data for both signal and background. Note that Cached Int will give wrong results when the floating parameters include a mass or width.

Parameters:
  • amp – AllAmplitude object. The amplitude model.

  • w_bkg – Real number. The weight of background.

build_cached_int(mcdata, mc_weight, batch=65000)[source]
get_cached_int(mc_id)[source]
nll_grad_batch(data, mcdata, weight, mc_weight)[source]

This method replaces self.nll_grad().

\[- \frac{\partial \ln L}{\partial \theta_k } = -\sum_{x_i \in data } w_i \frac{\partial}{\partial \theta_k} \ln f(x_i;\theta_k) + (\sum w_j ) \left( \frac{ \partial }{\partial \theta_k} \sum_{x_i \in mc} f(x_i;\theta_k) \right) \frac{1}{ \sum_{x_i \in mc} f(x_i;\theta_k) }\]
Parameters:
  • data

  • mcdata

  • weight

  • mc_weight

Returns:

nll_grad_hessian(data, mcdata, weight=1.0, batch=24000, bg=None, mc_weight=1.0)[source]

The parameters are the same as those of self.nll(), but it will return the Hessian as well.

Return NLL:

Real number. The value of NLL.

Return gradients:

List of real numbers. The gradients for each variable.

Return Hessian:

2-D Array of real numbers. The Hessian matrix of the variables.

sum_grad_hessp_data2(f, p, var, data, data2, weight=1.0, trans=<function identity>, resolution_size=1, args=(), kwargs=None)[source]

The parameters are the same as those of sum_gradient(), but this function also returns the hessian, the matrix of second-order derivatives.

Returns:

Real number NLL, list gradient, 2-D list hessian

sum_gradient(fs, var, weight=1.0, trans=<function identity>, args=(), kwargs=None)[source]

NLL is the sum of trans(f(data)) * weight; the gradient contains the derivatives with respect to each variable in var.

Parameters:
  • f – Function. The amplitude PDF.

  • var – List of strings. Names of the trainable variables in the PDF.

  • weight – Weight factor for each data point. It’s either a real number or an array of the same shape as data.

  • trans – Function. Transformation applied to f(data) before it is multiplied by weight.

  • kwargs – Further arguments for f.

Returns:

Real number NLL, list gradient

sum_gradient_data2(f, var, data, cached_data, weight=1.0, trans=<function identity>, args=(), kwargs=None)[source]

NLL is the sum of trans(f(data)) * weight; the gradient contains the derivatives with respect to each variable in var.

Parameters:
  • f – Function. The amplitude PDF.

  • var – List of strings. Names of the trainable variables in the PDF.

  • weight – Weight factor for each data point. It’s either a real number or an array of the same shape as data.

  • trans – Function. Transformation applied to f(data) before it is multiplied by weight.

  • kwargs – Further arguments for f.

Returns:

Real number NLL, list gradient

adaptive_bins

adaptive split data into bins.

class AdaptiveBound(base_data, bins, base_bound=None)[source]

Bases: object

adaptive bound cut for data value

static base_bound(data)[source]

base bound for the data

get_bool_mask(data)[source]

bool mask for splitting data

get_bound_patch(**kwargs)[source]
get_bounds()[source]

get split data bounds

get_bounds_data()[source]

get split data bounds, and the data after splitting

static loop_split_bound(datas, n, base_bound=None)[source]

loop for multi_split_bound, so n is list of list of int

static multi_split_bound(datas, n, base_bound=None)[source]

multi data for single_split_bound, so n is list of int

>>> data = np.array([[1.0, 2.0, 1.4, 3.1], [2.0, 1.0, 3.0, 1.0]])
>>> bound, _ = AdaptiveBound.multi_split_bound(data, [2, 1])
>>> [(i[0][0]+1e-6, i[1][0]+1e-6) for i in bound]
[(1.0..., 1.7...), (1.7..., 3.1...)]
plot_bound(ax, **kwargs)[source]
static single_split_bound(data, n=2, base_bound=None)[source]

split data in the order of data value

>>> data = np.array([1.0, 2.0, 1.4, 3.1])
>>> AdaptiveBound.single_split_bound(data)
[(1.0, 1.7...), (1.7..., 3.1...)]
split_data(data)[source]

split data, the shape is same as base_data

split_full_data(data, base_index=None)[source]

Split structured data. (TODO: because of the large IO, this method is slow.)
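
The equal-count splitting used by single_split_bound can be sketched in plain Python (a simplified stand-in, not the actual implementation):

```python
def single_split_bound(data, n=2):
    # Sort the values and cut at the midpoints between the neighbours of the
    # equal-count boundaries, giving n bins with (nearly) equal entries.
    s = sorted(data)
    cuts = [(s[len(s) * k // n - 1] + s[len(s) * k // n]) / 2 for k in range(1, n)]
    edges = [s[0]] + cuts + [s[-1]]
    return list(zip(edges[:-1], edges[1:]))
```

For the doctest input above, this reproduces the bounds [(1.0, 1.7), (1.7, 3.1)].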

adaptive_shape(m, bins, xmin, xmax)[source]
binning_shape_function(m, bins)[source]
cal_chi2(numbers, n_fp)[source]

angle

This module implements three classes: Vector3, LorentzVector, and EulerAngle.

class AlignmentAngle(alpha=0.0, beta=0.0, gamma=0.0)[source]

Bases: EulerAngle

static angle_px_px(p1, x1, p2, x2)[source]
class EulerAngle(alpha=0.0, beta=0.0, gamma=0.0)[source]

Bases: dict

This class provides methods for the Euler angle \((\alpha,\beta,\gamma)\)

static angle_zx_z_getx(z1, x1, z2)[source]

The Euler angle from coordinate 1 to coordinate 2. Only the z-axis is provided for coordinate 2, so \(\gamma\) is set to 0.

Parameters:
  • z1 – Vector3 z-axis of the initial coordinate

  • x1 – Vector3 x-axis of the initial coordinate

  • z2 – Vector3 z-axis of the final coordinate

Return euler_angle:

EulerAngle object with \(\gamma=0\).

Return x2:

Vector3 object, which is the x-axis of the final coordinate when \(\gamma=0\).

static angle_zx_zx(z1, x1, z2, x2)[source]

The Euler angle from coordinate 1 to coordinate 2 (right-hand coordinates).

Parameters:
  • z1 – Vector3 z-axis of the initial coordinate

  • x1 – Vector3 x-axis of the initial coordinate

  • z2 – Vector3 z-axis of the final coordinate

  • x2 – Vector3 x-axis of the final coordinate

Returns:

Euler Angle object
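
The \((\alpha,\beta,\gamma)\) notation suggests the z-y-z rotation convention; a plain-Python sketch of the corresponding rotation matrix (the convention here is assumed from the notation, not read from the implementation):

```python
import math

def euler_matrix(alpha, beta, gamma):
    # R = Rz(alpha) @ Ry(beta) @ Rz(gamma), the z-y-z Euler convention
    def rz(t):
        c, s = math.cos(t), math.sin(t)
        return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    def ry(t):
        c, s = math.cos(t), math.sin(t)
        return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]
    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]
    return matmul(rz(alpha), matmul(ry(beta), rz(gamma)))
```

For example, \(\beta=\pi/2\) with \(\alpha=\gamma=0\) rotates the z-axis onto the x-axis.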

static angle_zx_zzz_getx(z, x, zi)[source]

The Euler angle from coordinate 1 to coordinate 2. The z-axis of coordinate 2 is the normal vector of a plane.

Parameters:
  • z – Vector3 z-axis of the initial coordinate

  • x – Vector3 x-axis of the initial coordinate

  • zi – list of Vector3 of the plane points.

Return euler_angle:

EulerAngle object.

Return x2:

List of Vector3 objects: the x-axis of the final coordinate for each vector in zi.

class LorentzVector[source]

Bases: Tensor

This class provides methods for Lorentz vectors (T, X, Y, Z).

Dot(other)[source]
M()[source]

The invariant mass

M2()[source]

The invariant mass squared

beta()[source]
boost(p)[source]

Boost this Lorentz vector into the frame indicated by the 3-d vector p.

boost_matrix()[source]
boost_vector()[source]

\(\beta=(X,Y,Z)/T\)

Returns:

3-d vector \(\beta\)

static from_p4(p_0, p_1, p_2, p_3)[source]

If p_0 is a real number, it will be broadcast to the same shape as p_1.

gamma()[source]
get_T()[source]
get_X()[source]
get_Y()[source]
get_Z()[source]
get_e()[source]

The energy component \(T\).

get_metric()[source]

The metric is (1,-1,-1,-1) by default

neg()[source]

The negative vector

omega()[source]
rest_vector(other)[source]

Boost another Lorentz vector into the rest frame of \(\beta\).

vect()[source]

It returns the 3-d vector (X,Y,Z).
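
The boost operation above follows the standard Lorentz transformation; a minimal plain-Python sketch for a (T, X, Y, Z) list (illustrative, not the Tensor-based API):

```python
import math

def boost(p4, beta):
    # Boost the four-vector p4 = [T, X, Y, Z] by the velocity 3-vector beta
    b2 = sum(b * b for b in beta)
    gamma = 1.0 / math.sqrt(1.0 - b2)
    t, v = p4[0], p4[1:]
    bp = sum(b * x for b, x in zip(beta, v))  # beta . p
    k = (gamma - 1.0) / b2 if b2 > 0 else 0.0
    # p' = p + [(gamma - 1)(beta . p)/beta^2 + gamma T] beta,  T' = gamma (T + beta . p)
    vp = [x + (k * bp + gamma * t) * b for x, b in zip(v, beta)]
    return [gamma * (t + bp)] + vp
```

Boosting a particle at rest preserves its invariant mass, which makes a quick sanity check.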

class SU2M(x)[source]

Bases: dict

static Boost_z(omega)[source]
static Boost_z_from_p(p)[source]
static Rotation_y(beta)[source]
static Rotation_z(alpha)[source]
get_euler_angle()[source]
inv()[source]
class Vector3[source]

Bases: Tensor

This class provides methods for 3-d vectors (X,Y,Z)

angle_from(x, y)[source]

The angle from the x-axis, given the x and y axes that define a 3-d coordinate system.

x  ->  self
|      /
| ang /
|    /
|   /
|  /
| /
|/
-------- y
Parameters:
  • x – A Vector3 instance as x-axis

  • y – A Vector3 instance as y-axis. It should be perpendicular to the x-axis.

cos_theta(other)[source]

cos theta of included angle

cross(other)[source]

Cross product with another Vector3 instance

cross_unit(other)[source]

The unit vector of the cross product with another Vector3 object. It is computed via tf.linalg.normalize().

dot(other)[source]

Dot product with another Vector3 object

get_X()[source]
get_Y()[source]
get_Z()[source]
norm()[source]
norm2()[source]

The norm square

unit()[source]

The unit vector of itself. It is computed via tf.linalg.normalize().
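
As an illustration of cos_theta above, the included angle between two 3-vectors reduces to a normalized dot product (plain-Python sketch):

```python
import math

def cos_theta(a, b):
    # cosine of the included angle between 3-vectors a and b
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))
```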

kine_max(s12, m0, m1, m2, m3)[source]

max s23 for s12 in p0 -> p1 p2 p3

kine_min(s12, m0, m1, m2, m3)[source]

min s23 for s12 in p0 -> p1 p2 p3

kine_min_max(s12, m0, m1, m2, m3)[source]

min max s23 for s12 in p0 -> p1 p2 p3
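
These bounds are the standard Dalitz-plot boundary; a hedged sketch using the usual two-step decomposition in the (1, 2) rest frame (function and variable names are illustrative):

```python
import math

def kine_min_max(s12, m0, m1, m2, m3):
    # Energies of particles 2 and 3 in the (1, 2) rest frame
    rs = math.sqrt(s12)
    e2 = (s12 - m1**2 + m2**2) / (2 * rs)
    e3 = (m0**2 - s12 - m3**2) / (2 * rs)
    p2 = math.sqrt(max(e2**2 - m2**2, 0.0))
    p3 = math.sqrt(max(e3**2 - m3**2, 0.0))
    # s23 extremes: momenta parallel (max) or antiparallel (min)
    s23_min = (e2 + e3) ** 2 - (p2 + p3) ** 2
    s23_max = (e2 + e3) ** 2 - (p2 - p3) ** 2
    return s23_min, s23_max
```

The returned range always sits inside the absolute limits \(((m_2+m_3)^2, (m_0-m_1)^2)\).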

applications

This module provides functions that implement a user-friendly interface to the functions and methods in other modules. It acts as a synthesis of the other modules, organized by physical purpose. In general, users only need to import functions from this module to perform their physical analysis, instead of going into each module. There are some example files where you can see how it is used.

cal_hesse_correct(fcn, params={}, corr_params={}, force_pos=True)[source]
cal_hesse_error(fcn, params={}, check_posi_def=True, force_pos=True, save_npy=True)[source]

This function calculates the errors of all trainable variables. The errors are given by the square root of the diagonal of the inverse Hessian matrix.

Parameters:

model – Model.

Return hesse_error:

List of errors.

Return inv_he:

The inverse Hessian matrix.

cal_significance(nll1, nll2, ndf)[source]

This function calculates the statistical significance.

Parameters:
  • nll1 – Float. NLL of the first PDF.

  • nll2 – Float. NLL of the second PDF.

  • ndf – The difference of the degrees of freedom of the two PDF.

Returns:

Float. The statistical significance
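
For one degree of freedom, Wilks' theorem gives a simple closed form (a sketch for ndf = 1 only; the actual function handles general ndf):

```python
import math

def significance_ndf1(nll1, nll2):
    # sigma = sqrt(2 * |Delta NLL|) for ndf = 1 (Wilks' theorem)
    return math.sqrt(2.0 * abs(nll1 - nll2))
```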

compare_result(value1, value2, error1, error2=None, figname=None, yrange=None, periodic_vars=None)[source]

Compare two groups of fitting results. If only one error is provided, the figure is \(\frac{\mu_1-\mu_2}{\sigma_1}\); if both errors are provided, the figure is \(\frac{\mu_1-\mu_2}{\sqrt{\sigma_1^2+\sigma_2^2}}\).

Parameters:
  • value1 – Dictionary

  • value2 – Dictionary

  • error1 – Dictionary

  • error2 – Dictionary. By default it’s None.

  • figname – String. The output file

  • yrange – Float. If it’s not given, there is no y-axis limit in the figure.

  • periodic_vars – List of strings. The periodic variables.

Returns:

Dictionary of quality figure of each variable.
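
The per-variable quality figure can be sketched as a pull computation mirroring the two formulas above (a hypothetical helper, not the plotting function itself):

```python
def compare_pull(value1, value2, error1, error2=None):
    # (mu1 - mu2) / sigma1, or / sqrt(sigma1^2 + sigma2^2) if both errors given
    out = {}
    for k in value1:
        diff = value1[k] - value2[k]
        if error2 is None:
            sigma = error1[k]
        else:
            sigma = (error1[k] ** 2 + error2[k] ** 2) ** 0.5
        out[k] = diff / sigma
    return out
```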

corr_coef_matrix(err_mtx)[source]

This function obtains the correlation coefficient matrix of all trainable variables from an error matrix or a *.npy file.

Parameters:

err_mtx – Array or string (name of the npy file).

Returns:

Numpy 2-d array. The correlation coefficient matrix.

fit(Use='scipy', **kwargs)[source]

Fit the amplitude model using scipy, iminuit or pymultinest. It imports fit_scipy, fit_minuit, fit_multinest from module tf_pwa.fit.

Parameters:
  • Use – String. If it’s "scipy", it will call fit_scipy; if it’s "minuit", it will call fit_minuit; if it’s "multinest", it will call fit_multinest.

  • kwargs – The arguments will be passed to the three functions above.

For fit_scipy

Parameters:
  • fcn – FCN object to be minimized.

  • method – String. Options in scipy.optimize. For now, it provides interfaces to methods such as “BFGS”, “L-BFGS-B”, and “basinhopping”.

  • bounds_dict – Dictionary of boundary constraints on the variables.

  • kwargs – Other arguments passed on to scipy.optimize functions.

Returns:

FitResult object, List of NLLs, List of point arrays.

For fit_minuit

Parameters:
  • fcn – FCN object to be minimized.

  • bounds_dict – Dictionary of boundary constraints on the variables.

  • hesse – Boolean. Whether to call hesse() after migrad(). It’s True by default.

  • minos – Boolean. Whether to call minos() after hesse(). It’s False by default.

Returns:

Minuit object

For fit_multinest (WIP)

Parameters:

fcn – FCN object to be minimized.

fit_fractions(amp, mcdata, inv_he=None, params=None, batch=25000, res=None, method='old')[source]

This function calculates the fit fractions of the resonances as well as their coherent pairs. It imports cal_fitfractions and cal_fitfractions_no_grad from module tf_pwa.fitfractions.

\[FF_{i} = \frac{\int |A_i|^2 d\Omega }{ \int |\sum_{i}A_i|^2 d\Omega }\approx \frac{\sum |A_i|^2 }{\sum|\sum_{i} A_{i}|^2}\]

interference terms:

\[FF_{i,j} = \frac{\int 2Re(A_i A_j*) d\Omega }{ \int |\sum_{i}A_i|^2 d\Omega } = \frac{\int |A_i +A_j|^2 d\Omega }{ \int |\sum_{i}A_i|^2 d\Omega } - FF_{i} - FF_{j}\]

hessians:

\[\frac{\partial }{\partial \theta_i }\frac{f(\theta_i)}{g(\theta_i)} = \frac{\partial f(\theta_i)}{\partial \theta_i} \frac{1}{g(\theta_i)} - \frac{\partial g(\theta_i)}{\partial \theta_i} \frac{f(\theta_i)}{g^2(\theta_i)}\]
Parameters:
  • amp – Amplitude object.

  • mcdata – MCdata array.

  • inv_he – The inverse of Hessian matrix. If it’s not given, the errors will not be calculated.

Return frac:

Dictionary of fit fractions for each resonance.

Return err_frac:

Dictionary of their errors. If inv_he is None, it will be a dictionary of None.

force_pos_def(h)[source]

Due to precision loss, some eigenvalues of the Hessian matrix can become small or negative. This function forces the matrix to be positive definite through its eigendecomposition:

dot(H, v)[:,i] = e[i] v[:,i]
dot(inv(v), dot(H, v)) = diag(e)
H = dot(v, dot(diag(e), inv(v)))

force_pos_def_minuit2(inv_he)[source]

force positive defined of error matrix

gen_data(amp, Ndata, mcfile, Nbg=0, wbg=0, Poisson_fluc=False, bgfile=None, genfile=None, particles=None)[source]

This function is used to generate toy data according to an amplitude model.

Parameters:
  • amp – AmplitudeModel object.

  • particles – List of final particles

  • Ndata – Integer. Number of data

  • mcfile – String. The MC sample file used to generate signal data.

  • Nbg – Integer. Number of background. By default it’s 0.

  • wbg – Float. Weight of background. By default it’s 0.

  • Poisson_fluc – Boolean. If it’s True, the number of data will be drawn from a Poisson distribution around the given value.

  • bgfile – String. The background sample file used to generate a certain number of background data.

  • genfile – String. The file to store the generated toy.

Returns:

tensorflow.Tensor. The generated toy data.

gen_mc(mother, daughters, number, outfile=None)[source]

This function generates phase-space MC data (without considering the effect of detector performance). It imports PhaseSpaceGenerator from module tf_pwa.phasespace.

Parameters:
  • mother – Float. The invariant mass of the mother particle.

  • daughters – List of float. The invariant masses of the daughter particles.

  • number – Integer. The number of MC data generated.

  • outfile – String. The file to store the generated MC.

Returns:

Numpy array. The generated MC data.
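
For the simplest case of a two-body decay, phase-space generation reduces to drawing isotropic directions at a fixed break-up momentum; a standalone sketch (not the PhaseSpaceGenerator API):

```python
import math
import random

def gen_two_body(m0, m1, m2, number, seed=0):
    # |p| from the Kallen function; directions uniform on the sphere
    rng = random.Random(seed)
    p = math.sqrt((m0**2 - (m1 + m2)**2) * (m0**2 - (m1 - m2)**2)) / (2 * m0)
    e1, e2 = math.sqrt(p * p + m1 * m1), math.sqrt(p * p + m2 * m2)
    events = []
    for _ in range(number):
        cos_t = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        px = p * sin_t * math.cos(phi)
        py = p * sin_t * math.sin(phi)
        pz = p * cos_t
        # back-to-back four-momenta (E, px, py, pz) in the mother rest frame
        events.append(([e1, px, py, pz], [e2, -px, -py, -pz]))
    return events
```

Energy and momentum conservation hold exactly by construction, which makes a quick check.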

likelihood_profile(m, var_names, bins=20, minos=True)[source]

Calculate the likelihood profile for a variable.

Parameters:
  • m – Minuit object

  • var_names – Either a string or a list of strings

  • bins – Integer

  • minos – Boolean. If it’s False, the function will call Minuit.profile() to derive the 1-d scan of var_names; if it’s True, the function will call Minuit.mnprofile() to derive the likelihood profile, which is much more time-consuming.

Returns:

Dictionary indexed by var_names. It contains the return of either Minuit.mnprofile() or Minuit.profile().

num_hess_inv_3point(fcn, params={}, _epsilon=0.0005)[source]

This function calculates the errors of all trainable variables. The errors are given by the square root of the diagonal of the inverse Hessian matrix.

Parameters:

model – Model.

Return hesse_error:

List of errors.

Return inv_he:

The inverse Hessian matrix.

plot_pull(data, name, nbins=20, norm=False, value=None, error=None)[source]

This function is used to plot the pull for a data sample.

Parameters:
  • data – List

  • name – String. Name of the sample

  • nbins – Integer. Number of bins in the histogram

  • norm – Boolean. Whether to normalize the histogram

  • value – Float. Mean value in normalization

  • error – Float or list. Sigma value(s) in normalization

Returns:

The fitted mu, sigma, as well as their errors

breit_wigner

This module provides functions to describe the lineshapes of the intermediate particles, namely generalized Breit-Wigner function. Users can also define new lineshape using the function wrapper regist_lineshape().

BW(m, m0, g0, *args)[source]

Breit-Wigner function

\[BW(m) = \frac{1}{m_0^2 - m^2 - i m_0 \Gamma_0 }\]
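
The constant-width form can be written directly as a complex amplitude (a plain-Python sketch of the formula above):

```python
def bw_amplitude(m, m0, g0):
    # BW(m) = 1 / (m0^2 - m^2 - i * m0 * Gamma0)
    return 1.0 / complex(m0 * m0 - m * m, -m0 * g0)
```

At \(m = m_0\) the magnitude peaks at \(1/(m_0\Gamma_0)\).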
BWR(m, m0, g0, q, q0, L, d)[source]

Relativistic Breit-Wigner function (with running width). It’s also set as the default lineshape.

\[BW(m) = \frac{1}{m_0^2 - m^2 - i m_0 \Gamma(m)}\]
BWR2(m, m0, g0, q2, q02, L, d)[source]

Relativistic Breit-Wigner function (with running width). Allow complex \(\Gamma\).

\[BW(m) = \frac{1}{m_0^2 - m^2 - i m_0 \Gamma(m)}\]
BWR_normal(m, m0, g0, q2, q02, L, d)[source]

Relativistic Breit-Wigner function (with running width) with a normal factor.

\[BW(m) = \frac{\sqrt{m_0 \Gamma(m)}}{m_0^2 - m^2 - i m_0 \Gamma(m)}\]
Bprime(L, q, q0, d)[source]

Blatt-Weisskopf barrier factors, e.g. the first three orders:

  • \(L=0\): \(B_0'(q,q_0,d)=1\)

  • \(L=1\): \(B_1'(q,q_0,d)=\sqrt{\frac{(q_0d)^2+1}{(qd)^2+1}}\)

  • \(L=2\): \(B_2'(q,q_0,d)=\sqrt{\frac{(q_0d)^4+3(q_0d)^2+9}{(qd)^4+3(qd)^2+9}}\)

\(d\) is 3.0 by default.

Bprime_num(L, q, d)[source]

The numerator (as well as the denominator) inside the square root in the barrier factor

Bprime_polynomial(l, z)[source]

It stores the Blatt-Weisskopf polynomial up to the fifth order (\(L=5\))

Parameters:
  • l – The order

  • z – The variable in the polynomial

Returns:

The calculated value

Bprime_q2(L, q2, q02, d)[source]

Blatt-Weisskopf barrier factors.

GS(m, m0, g0, q, q0, L, d, c_daug2Mass=0.13957039, c_daug3Mass=0.1349768)[source]
Gamma(m, gamma0, q, q0, L, m0, d)[source]

Running width in the RBW

\[\Gamma(m) = \Gamma_0 \left(\frac{q}{q_0}\right)^{2L+1}\frac{m_0}{m} B_{L}'^2(q,q_0,d)\]
Gamma2(m, gamma0, q2, q02, L, m0, d)[source]

Running width in the RBW

\[\Gamma(m) = \Gamma_0 \left(\frac{q}{q_0}\right)^{2L+1}\frac{m_0}{m} B_{L}'^2(q,q_0,d)\]
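
The running width combines the break-up momentum with the barrier factor; a sketch using the first three Blatt-Weisskopf orders listed above (names here are illustrative):

```python
import math

def bprime(L, q, q0, d=3.0):
    # Blatt-Weisskopf barrier factor, first three orders
    z, z0 = (q * d) ** 2, (q0 * d) ** 2
    if L == 0:
        return 1.0
    if L == 1:
        return math.sqrt((z0 + 1.0) / (z + 1.0))
    if L == 2:
        return math.sqrt((z0**2 + 3.0 * z0 + 9.0) / (z**2 + 3.0 * z + 9.0))
    raise NotImplementedError("only L <= 2 in this sketch")

def running_width(m, gamma0, q, q0, L, m0, d=3.0):
    # Gamma(m) = Gamma0 * (q/q0)^(2L+1) * (m0/m) * B'_L(q, q0, d)^2
    return gamma0 * (q / q0) ** (2 * L + 1) * (m0 / m) * bprime(L, q, q0, d) ** 2
```

At the nominal point (m = m0, q = q0) the running width reduces to \(\Gamma_0\) for every L.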
barrier_factor(l, q, q0, d=3.0, axis=0)[source]

Barrier factor multiplied with \(q^L\), which is used as a combination in the amplitude expressions. The values are cached for \(L\) ranging from 0 to l.

barrier_factor2(l, q, q0, d=3.0, axis=-1)[source]

Same as barrier_factor(), but with a different default axis (axis=-1).

dFun(s, daug2Mass, daug3Mass)[source]
dh_dsFun(s, daug2Mass, daug3Mass)[source]
fsFun(s, m2, gam, daug2Mass, daug3Mass)[source]
get_bprime_coeff(l)[source]

The coefficients of the polynomial in the Bprime function.

\[|\theta_{l}(jw)|^2 = \sum_{i=0}^{l} c_i w^{2 i}\]
hFun(s, daug2Mass, daug3Mass)[source]
one(*args)[source]

A constant function that always returns 1

regist_lineshape(name=None)[source]

A decorator used to register various Breit-Wigner functions as lineshapes

Parameters:

name – String name of the BW function

Returns:

A function used in a wrapper

reverse_bessel_polynomials(n, x)[source]

Reverse Bessel polynomials.

\[\theta_{n}(x) = \sum_{k=0}^{n} \frac{(n+k)!}{(n-k)!k!} \frac{x^{n-k}}{2^k}\]
to_complex(i)[source]
twoBodyCMmom(m_0, m_1, m_2)[source]

relative momentum for 0 -> 1 + 2

cal_angle

This module provides functions which are useful when calculating the angular variables.

The full data structure provided is

{
    "particle": {
        A: {"p": ..., "m": ...},
        (C, D): {"p": ..., "m": ...},
        ...
    },

    "decay":{
        [A->(C, D)+B, (C, D)->C+D]:
        {
            A->(C, D)+B: {
                (C, D): {
                    "ang": {
                        "alpha":[...],
                        "beta": [...],
                        "gamma": [...]
                    },
                    "z": [[x1,y1,z1],...],
                    "x": [[x2,y2,z2],...]
                },
                B: {...}
            },

            (C, D)->C+D: {
                C: {
                    ...,
                    "aligned_angle": {
                    "alpha": [...],
                    "beta": [...],
                    "gamma": [...]
                }
            },
                D: {...}
            },
            A->(B, D)+C: {...},
            (B, D)->B+D: {...}
        },
    ...
    }
}

Inner nodes are named as tuple of particles.

class CalAngleData[source]

Bases: dict

get_angle(decay, p)[source]

Get the helicity angle of the decay that produces particle p.

get_decay()[source]
get_mass(name)[source]
get_momentum(name)[source]
get_weight()[source]
mass_hist(name, bins='sqrt', **kwargs)[source]
savetxt(file_name, order=None, cp_trans=False, save_charge=False)[source]
Getp(M_0, M_1, M_2)[source]

Consider a two-body decay \(M_0\rightarrow M_1M_2\). In the rest frame of \(M_0\), the momentum of \(M_1\) and \(M_2\) are definite.

Parameters:
  • M_0 – The invariant mass of \(M_0\)

  • M_1 – The invariant mass of \(M_1\)

  • M_2 – The invariant mass of \(M_2\)

Returns:

the momentum of \(M_1\) (or \(M_2\))

Getp2(M_0, M_1, M_2)[source]

Consider a two-body decay \(M_0\rightarrow M_1M_2\). In the rest frame of \(M_0\), the momentum of \(M_1\) and \(M_2\) are definite.

Parameters:
  • M_0 – The invariant mass of \(M_0\)

  • M_1 – The invariant mass of \(M_1\)

  • M_2 – The invariant mass of \(M_2\)

Returns:

the momentum of \(M_1\) (or \(M_2\))

add_mass(data: dict, _decay_chain: DecayChain | None = None) dict[source]
Add the mass array of each particle, computed from the data momentum.

{top:{p:momentum},inner:{p:..},outs:{p:..}} => {top:{p:momentum,m:mass},…}

add_relative_momentum(data: dict)[source]

Add the rest-frame momentum scalar |q|, computed from the data momentum.

from {"particle": {A: {"m": ...}, ...}, "decay": {A->B+C: {...}, ...}}

to {"particle": {A: {"m": ...}, ...}, "decay": {[A->B+C, ...]: {A->B+C: {..., "|q|": ...}, ...}, ...}}

add_weight(data: dict, weight: float = 1.0) dict[source]
Add weights to the data.

{…} => {…, “weight”: weights}

aligned_angle_ref_rule1(decay_group, decay_chain_struct, decay_data, data)[source]
aligned_angle_ref_rule2(decay_group, decay_chain_struct, decay_data, data)[source]
cal_angle(data, decay_group: DecayGroup, using_topology=True, random_z=True, r_boost=True, final_rest=True, align_ref=None, only_left_angle=False)

Calculate helicity angle for particle momentum, add aligned angle.

Params data:

dict as {particle: {“p”:…}}

Returns:

Dictionary of data

cal_angle_from_momentum(p, decs: DecayGroup, using_topology=True, center_mass=False, r_boost=True, random_z=False, batch=65000, align_ref=None, only_left_angle=False) CalAngleData[source]

Transform 4-momentum data in files for the amplitude model automatically via DecayGroup.

Parameters:
  • p – 4-momentum data

  • decs – DecayGroup

Returns:

Dictionary of data

cal_angle_from_momentum_base(p, decs: DecayGroup, using_topology=True, center_mass=False, r_boost=True, random_z=False, batch=65000, align_ref=None, only_left_angle=False) CalAngleData[source]

Transform 4-momentum data in files for the amplitude model automatically via DecayGroup.

Parameters:
  • p – 4-momentum data

  • decs – DecayGroup

Returns:

Dictionary of data

cal_angle_from_momentum_id_swap(p, decs: DecayGroup, using_topology=True, center_mass=False, r_boost=True, random_z=False, batch=65000, align_ref=None, only_left_angle=False) CalAngleData[source]
cal_angle_from_momentum_single(p, decs: DecayGroup, using_topology=True, center_mass=False, r_boost=True, random_z=True, align_ref=None, only_left_angle=False) CalAngleData[source]

Transform 4-momentum data in files for the amplitude model automatically via DecayGroup.

Parameters:
  • p – 4-momentum data

  • decs – DecayGroup

Returns:

Dictionary of data

cal_angle_from_particle(data, decay_group: DecayGroup, using_topology=True, random_z=True, r_boost=True, final_rest=True, align_ref=None, only_left_angle=False)[source]

Calculate helicity angle for particle momentum, add aligned angle.

Params data:

dict as {particle: {“p”:…}}

Returns:

Dictionary of data

cal_chain_boost(data, decay_chain: DecayChain) dict[source]

calculate chain boost for a decay chain

cal_helicity_angle(data: dict, decay_chain: DecayChain, base_z=array([0., 0., 1.]), base_x=array([1., 0., 0.])) dict[source]

Calculate helicity angle for A -> B + C: \(\theta_{B}^{A}, \phi_{B}^{A}\) from momentum.

from {A:{p:momentum},B:{p:...},C:{p:...}}

to {A->B+C:{B:{"ang":{"alpha":...,"beta":...,"gamma":...},"x":...,"z"},...}}

cal_single_boost(data, decay_chain: DecayChain) dict[source]
cp_swap_p(p4, finals, id_particles, cp_particles)[source]
get_chain_data(data, decay_chain=None)[source]

get all independent data for a decay chain

get_key_content(dic, key_path)[source]

get key content. E.g. get_key_content(data, ‘/particle/(B, C)/m’)

>>> data = {"particle": {"(B, C)": {"p": 0.1, "m": 1.0},"B":1.0}}
>>> get_key_content(data, '/particle/(B, C)/m')
1.0
get_keys(dic, key_path='')[source]

get_keys of nested dictionary

>>> a = {"a": 1, "b": {"c": 2}}
>>> get_keys(a)
['/a', '/b/c']
get_relative_momentum(data: dict, decay_chain: DecayChain)[source]

Add the rest-frame momentum scalar |q|, computed from the data momentum.

from {"particle": {A: {"m": ...}, ...}, "decay": {A->B+C: {...}, ...}}

to {"particle": {A: {"m": ...}, ...}, "decay": {A->B+C: {..., "|q|": ...}, ...}}

identical_particles_swap(id_particles)[source]
identical_particles_swap_p(p4, id_particles)[source]
infer_momentum(data, decay_chain: DecayChain) dict[source]
Infer the momenta of all particles in the decay chain from the momenta of the outer (final-state) particles.

{outs:{p:momentum}} => {top:{p:momentum},inner:{p:..},outs:{p:..}}

parity_trans(p, charges)[source]
prepare_data_from_dat_file(fnames)[source]

[deprecated] angle for amplitude.py

prepare_data_from_dat_file4(fnames)[source]

[deprecated] angle for amplitude4.py

prepare_data_from_decay(fnames, decs, particles=None, dtype=None, charges=None, **kwargs)[source]

Transform 4-momentum data in files for the amplitude model automatically via DecayGroup.

Parameters:
  • fnames – File name(s).

  • decs – DecayGroup

  • particles – List of Particle. The final particles.

  • dtype – Data type.

Returns:

Dictionary

struct_momentum(p, center_mass=True) dict[source]
Restructure momentum as a dict.

{outs:momentum} => {outs:{p:momentum}}

cg

This module provides the function cg_coef() to calculate the Clebsch-Gordan coefficients \(\langle j_1m_1j_2m_2|JM\rangle\).

This function uses SymPy if it is installed; otherwise it falls back to the input file tf_pwa/cg_table.json.

cg_coef(jb, jc, mb, mc, ja, ma)[source]

It returns the CG coefficient \(\langle j_bm_bj_cm_c|j_am_a\rangle\), as in a decay from particle a to b and c. It will either call sympy.physics.quantum.cg() or get_cg_coef().

get_cg_coef(j1, j2, m1, m2, j, m)[source]

If SymPy is not installed, cg_coef() will fall back to this [deprecated] function and derive the CG coefficient by searching tf_pwa/cg_table.json.

In fact, tf_pwa/cg_table.json only stores some of the coefficients, the others will be obtained by this function using some symmetry properties of the CG table.

Note

tf_pwa/cg_table.json only contains the cases where \(j_1,j_2\leqslant4\), but this should be enough for most cases in PWA.
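
For reference, the coefficient itself can be computed from Racah's closed form; a self-contained sketch (independent of both SymPy and the JSON table, with half-integer spins passed as floats):

```python
import math

def cg(j1, m1, j2, m2, J, M):
    # <j1 m1 j2 m2 | J M> via Racah's formula. All factorial arguments are
    # integers whenever the selection rules are satisfied.
    if abs(m1 + m2 - M) > 1e-9:
        return 0.0
    f = lambda n: math.factorial(int(round(n)))
    pref = math.sqrt((2 * J + 1) * f(j1 + j2 - J) * f(j1 - j2 + J)
                     * f(-j1 + j2 + J) / f(j1 + j2 + J + 1))
    pref *= math.sqrt(f(J + M) * f(J - M) * f(j1 - m1) * f(j1 + m1)
                      * f(j2 - m2) * f(j2 + m2))
    k_min = int(round(max(0.0, j2 - J - m1, j1 - J + m2)))
    k_max = int(round(min(j1 + j2 - J, j1 - m1, j2 + m2)))
    s = sum((-1) ** k / (f(k) * f(j1 + j2 - J - k) * f(j1 - m1 - k)
                         * f(j2 + m2 - k) * f(J - j2 + m1 + k) * f(J - j1 - m2 + k))
            for k in range(k_min, k_max + 1))
    return pref * s
```

For example, \(\langle \tfrac{1}{2}\tfrac{1}{2}\tfrac{1}{2}{-\tfrac{1}{2}}|00\rangle = 1/\sqrt{2}\) and \(\langle 1\,0\,1\,0|2\,0\rangle = \sqrt{2/3}\).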

config

class ConfigManager[source]

Bases: dict

create_config(default=None)[source]
get_config(name, default=<tf_pwa.config._Default object>)

get a configuration.

regist_config(name, var=None)

register a configuration.

set_config(name, var)

set a configuration.

temp_config(name, var)[source]
using_amplitude(var)
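
The register/get/temp pattern can be mimicked with a plain dict plus a context manager (a sketch of the behaviour, not the tf_pwa.config implementation):

```python
from contextlib import contextmanager

class SimpleConfig(dict):
    def regist_config(self, name, var=None):
        self[name] = var

    def get_config(self, name, default=None):
        return self.get(name, default)

    def set_config(self, name, var):
        self[name] = var

    @contextmanager
    def temp_config(self, name, var):
        # temporarily override a value, restoring the old one on exit
        old = self.get(name)
        self[name] = var
        try:
            yield var
        finally:
            self[name] = old
```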

cov_ten_ir

FL(s1, s2, s3, S, L)[source]
FS(s1, s2, s3, S)[source]
F_Sigma(s1, s2, s3, S, L)[source]
MassiveTransAngle(p1, p2)[source]
MasslessTransAngle(p1, p2)[source]
NumSL(*args, **kwargs)[source]
NumSL0(s1, s2, s3)[source]
NumSL1(s1, s2, s3)[source]
NumSL2(s1, s2, s3)[source]
NumSL3(s1, s2, s3)[source]
PWFA(p1, m1_zero, s1, p2, m2_zero, s2, s, S, L)[source]
SCombLS(s1, s2, s3, i)[source]

Give a linearly independent and complete set of LS combinations: i=0 for the case where all three particles are massive; i=1 for the case where particle 2 is massless; i=2 for the case where particles 2 and 3 are massless; i=3 for the case where all three initial- and final-state particles are massless. The output is a set whose elements are two-element arrays {S, L}.

WFunc1(s1, s2, s3, S, L)[source]
WFunc2(s1, s2, s3, S, L)[source]
amp0ls(s1, lens1, s2, lens2, s, lens, theta, phi, S, L)[source]
cg_in_amp0ls(s1, lens1, s2, lens2, s, lens, S, L)[source]
delta_idx_in_amp0ls(s1, lens1, s2, lens2, s, lens, l)[source]
force_int(f)[source]
ls_selector_weight(decay, all_ls)[source]
normal_factor(L)[source]
sphericalHarmonic(l, theta, phi)[source]
wigerDx(j, alpha, beta, gamma)[source]
xyzToangle(pxyz)[source]

data

Module for describing data processing.

All data structures are described as nested combinations of dict or list of ndarray. Data processing is a translation from one data structure to another data structure or to a typical ndarray. Data caching can be implemented based on the dynamic features of list and dict.

The full data structure is

{
"particle":{
    "A":{"p":...,"m":...}
    ...
},
"decay":[
    {
    "A->R1+B": {
        "R1": {
        "ang":  {
            "alpha":[...],
            "beta": [...],
            "gamma": [...]
        },
        "z": [[x1,y1,z1],...],
        "x": [[x2,y2,z2],...]
        },
        "B" : {...}
    },
    "R->C+D": {
        "C": {
        ...,
        "aligned_angle":{
            "alpha":[...],
            "beta":[...],
            "gamma":[...]
        }
        },
        "D": {...}
    },
    },
    {
    "A->R2+C": {...},
    "R2->B+D": {...}
    },
    ...
],
"weight": [...]
}
class EvalLazy(f)[source]

Bases: object

class HeavyCall(f)[source]

Bases: object

class LazyCall(f, x, *args, **kwargs)[source]

Bases: object

as_dataset(batch=65000)[source]
batch(batch, axis=0)[source]
copy()[source]
create_new(f, x, *args, **kwargs)[source]
eval()[source]
get(index, value=None)[source]
get_weight()[source]
merge(*other, axis=0)[source]
set_cached_file(cached_file, name)[source]
class LazyFile(x, *args, **kwargs)[source]

Bases: LazyCall

as_dataset(batch=65000)[source]
create_new(f, x, *args, **kwargs)[source]
eval()[source]
batch_call(function, data, batch=10000)[source]
batch_call_numpy(function, data, batch=10000)[source]
batch_sum(function, data, batch=10000)[source]
check_nan(data, no_raise=False)[source]

check if there is nan in data

data_cut(data, expr, var_map=None)[source]

cut data with boolean expression

Parameters:
  • data – data need to cut

  • expr – cut expression

  • var_map – variable map between parameters in expr and data, [option]

Returns:

Data after the cut.

data_generator(data, fun=<function _data_split>, args=(), kwargs=None, MAX_ITER=1000)[source]

Data generator: call fun to each data as a generator. The extra arguments will be passed to fun.

data_index(data, key, no_raise=False)[source]

Indexing data for key or a list of keys.

data_map(data, fun, args=(), kwargs=None)[source]

Apply fun for each data. It returns the same structure.
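
The structure-preserving map can be sketched as a short recursion over dict/list leaves (a simplified stand-in for the actual function):

```python
def data_map(data, fun):
    # Recursively apply fun to every leaf, keeping the nested structure
    if isinstance(data, dict):
        return {k: data_map(v, fun) for k, v in data.items()}
    if isinstance(data, (list, tuple)):
        return type(data)(data_map(v, fun) for v in data)
    return fun(data)
```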

data_mask(data, select)[source]

This function uses a boolean mask to select data.

Parameters:
  • data – data to select

  • select – 1-d boolean array for selection

Returns:

data after selection
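The selection can be sketched as a recursive masking over the nested structure (illustration only; the real function also handles TensorFlow tensors):

```python
import numpy as np

def data_mask(data, select):
    """Recursively apply a 1-d boolean mask to every array in a
    nested dict/list structure (a sketch of the documented behavior)."""
    if isinstance(data, dict):
        return {k: data_mask(v, select) for k, v in data.items()}
    if isinstance(data, (list, tuple)):
        return type(data)(data_mask(v, select) for v in data)
    return np.asarray(data)[select]

data = {"a": np.array([1.0, 2.0, 3.0]), "b": {"c": np.array([4.0, 5.0, 6.0])}}
kept = data_mask(data, np.array([True, False, True]))
assert kept["a"].tolist() == [1.0, 3.0]
assert kept["b"]["c"].tolist() == [4.0, 6.0]
```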

data_merge(*data, axis=0)[source]

This function merges data with the same structure.

data_replace(data, key, value)[source]
data_shape(data, axis=0, all_list=False)[source]

Get data size.

Parameters:
  • data – Data array

  • axis – Integer. ???

  • all_list – Boolean. ???

Returns:

data_split(data, batch_size, axis=0)[source]

Split data into batches of size batch_size along axis.

Parameters:
  • data – structured data

  • batch_size – Integer, data size for each split data

  • axis – Integer, axis for split, [option]

Returns:

a generator for split data

>>> data = {"a": [np.array([1.0, 2.0]), np.array([3.0, 4.0])], "b": {"c": np.array([5.0, 6.0])}, "d": [], "e": {}}
>>> for i, data_i in enumerate(data_split(data, 1)):
...     print(i, data_to_numpy(data_i))
...
0 {'a': [array([1.]), array([3.])], 'b': {'c': array([5.])}, 'd': [], 'e': {}}
1 {'a': [array([2.]), array([4.])], 'b': {'c': array([6.])}, 'd': [], 'e': {}}
data_strip(data, keys)[source]
data_struct(data)[source]

get the structure of data, keys and shape

data_to_numpy(dat)[source]

Convert Tensor data to numpy.ndarray.

data_to_tensor(dat)[source]

convert data to tensorflow.Tensor.

flatten_dict_data(data, fun=<built-in method format of str object>)[source]

Flatten data as dict with structure named as fun.

load_dat_file(fnames, particles, dtype=None, split=None, order=None, _force_list=False, mmap_mode=None)[source]

Load *.dat file(s) of 4-momenta of the final particles.

Parameters:
  • fnames – String or list of strings. File names.

  • particles – List of Particle. Final particles.

  • dtype – Data type.

  • split – sizes of each split dat file

  • order – transpose order

Returns:

Dictionary of data indexed by Particle.

load_data(file_name, **kwargs)[source]

Load data file from save_data. The arguments will be passed to numpy.load().

save_data(file_name, obj, **kwargs)[source]

Save structured data to files. The arguments will be passed to numpy.save().

save_dataz(file_name, obj, **kwargs)[source]

Save compressed structured data to files. The arguments will be passed to numpy.save().

set_random_seed(seed)[source]

set random seed for random, numpy and tensorflow

split_generator(data, batch_size, axis=0)

Split data into batches of size batch_size along axis.

Parameters:
  • data – structured data

  • batch_size – Integer, data size for each split data

  • axis – Integer, axis for split, [option]

Returns:

a generator for split data

>>> data = {"a": [np.array([1.0, 2.0]), np.array([3.0, 4.0])], "b": {"c": np.array([5.0, 6.0])}, "d": [], "e": {}}
>>> for i, data_i in enumerate(data_split(data, 1)):
...     print(i, data_to_numpy(data_i))
...
0 {'a': [array([1.]), array([3.])], 'b': {'c': array([5.])}, 'd': [], 'e': {}}
1 {'a': [array([2.]), array([4.])], 'b': {'c': array([6.])}, 'd': [], 'e': {}}

dec_parser

module for parsing decay card *.dec file

do_command(cmd, params, lines)[source]

execute a command from the registered commands

get_decay(words, lines)[source]

parse the decay command

get_particles(words, _lines)[source]

parse the particles command

get_words(lines)[source]

get all words in the lines

load_dec(s)[source]

load *.dec string

load_dec_file(f)[source]

load *.dec file

process_decay_card(lines)[source]

process all the lines as a generator

regist_command(name=None)[source]

register a command function for command calls

remove_comment(words)[source]

remove comment string which starts with ‘#’.

sigle_decay(s)[source]

process a single decay line

split_lines(s)[source]

split the string into lines

dfun

This module provides functions to calculate the Wigner D-function.

\(D^{j}_{m_1,m_2}(\alpha,\beta,\gamma)=e^{-im_1\alpha}d^{j}_{m_1,m_2}(\beta)e^{-im_2\gamma}\), where the expression of the Wigner d-function is

\[d^{j}_{m_1,m_2}(\beta) = \sum_{l=0}^{2j} w_{l}^{(j,m_1,m_2)}\sin^{l}(\frac{\beta}{2}) \cos^{2j-l}(\frac{\beta}{2}),\]

where the weight \(w_{l}^{(j,m_1,m_2)}\) in each term satisfies

\[w^{(j,m_1,m_2)}_{l} = (-1)^{m_1-m_2+k}\frac{\sqrt{(j+m_1)!(j-m_1)!(j+m_2)!(j-m_2)!}}{(j-m_1-k)!(j+m_2-k)!(m_1-m_2+k)!k!}\]

when \(k=\frac{l+m_2-m_1}{2} \in [\max(0,m_2-m_1),\min(j-m_1,j+m_2)]\), and otherwise \(w^{(j,m_1,m_2)}_{l} = 0\).
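The weight formula can be checked numerically. The sketch below evaluates \(d^{j}_{m_1,m_2}(\beta)\) directly from it, using the doubled-integer \(2j\) convention of this module; it is an illustration of the formula, not the vectorized implementation in small_d_matrix():

```python
import math

import numpy as np

def small_d(j2, m1_2, m2_2, beta):
    """Evaluate d^j_{m1,m2}(beta) directly from the weight formula.

    Arguments are doubled (j2 = 2j, m1_2 = 2*m1, m2_2 = 2*m2) to match
    the integer-(2j) convention used throughout this module."""
    j, m1, m2 = j2 / 2, m1_2 / 2, m2_2 / 2
    total = 0.0
    for l in range(j2 + 1):
        k = (l + m2 - m1) / 2
        if k != int(k):
            continue  # w_l vanishes when k is not an integer
        k = int(k)
        if k < max(0, m2 - m1) or k > min(j - m1, j + m2):
            continue  # outside the allowed range, w_l = 0
        w = (-1) ** int(round(m1 - m2 + k)) * math.sqrt(
            math.factorial(int(j + m1)) * math.factorial(int(j - m1))
            * math.factorial(int(j + m2)) * math.factorial(int(j - m2))
        ) / (
            math.factorial(int(j - m1 - k)) * math.factorial(int(j + m2 - k))
            * math.factorial(int(m1 - m2 + k)) * math.factorial(k)
        )
        total = total + w * np.sin(beta / 2) ** l * np.cos(beta / 2) ** (j2 - l)
    return total

# spot checks against closed forms:
# d^{1/2}_{1/2,1/2}(beta) = cos(beta/2),  d^{1}_{0,0}(beta) = cos(beta)
assert abs(small_d(1, 1, 1, 0.6) - math.cos(0.3)) < 1e-9
assert abs(small_d(2, 0, 0, 0.6) - math.cos(0.6)) < 1e-9
```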

D_matrix_conj(alpha, beta, gamma, j)[source]

The conjugated D-matrix element with indices (\(m_1,m_2\)) is

\[D^{j}_{m_1,m_2}(\alpha, \beta, \gamma)^\star = e^{i m_1 \alpha} d^{j}_{m_1,m_2}(\beta) e^{i m_2 \gamma}\]
Parameters:
  • alpha – Array

  • beta – Array

  • gamma – Array

  • j – Integer \(2j\) in the formula

Returns:

Array of the conjugated D-matrices. Same shape as alpha, beta, and gamma

Dfun_delta(d, ja, la, lb, lc=(0,))[source]

The decay from particle a to b and c requires \(|l_b-l_c|\leqslant j\)

\(D_{m_a,m_b-m_c} = \delta[(m_1,m_2)\to(m_a,m_b,m_c)] D_{m_1,m_2}\)

Dfun_delta_v2(d, ja, la, lb, lc=(0,))[source]

The decay from particle a to b and c requires \(|l_b-l_c|\leqslant j\)

\(D_{m_a,m_b-m_c} = \delta[(m_1,m_2)\to(m_a,m_b,m_c)] D_{m_1,m_2}\)

delta_D_index(j, la, lb, lc)[source]
delta_D_trans(j, la, lb, lc)[source]

The decay from particle a to b and c requires \(|l_b-l_c|\leqslant j\)

(ja,ja) -> (ja,jb,jc)???

exp_i(theta, mi)[source]

\(e^{im\theta}\)

Parameters:
  • theta – Array \(\theta\) in the formula

  • mi – Integer or half-integer \(m\) in the formula

Returns:

Array of tf.complex. Same length as theta

get_D_matrix_for_angle(angle, j, cached=True)[source]

Interface to D_matrix_conj()

Parameters:
  • angle – Dict of angle data {“alpha”,”beta”,”gamma”}

  • j – Integer \(2j\) in the formula

  • cached – Haven’t been used???

Returns:

Array of the conjugated D-matrices. Same length as the angle data

get_D_matrix_lambda(angle, ja, la, lb, lc=None)[source]

Get the D-matrix element

Parameters:
  • angle – Dict of angle data {“alpha”,”beta”,”gamma”}

  • ja

  • la

  • lb

  • lc

Returns:

small_d_matrix(theta, j)[source]

The matrix element of \(d^{j}(\theta)\) is \(d^{j}_{m_1,m_2}(\theta) = \sum_{l=0}^{2j} w_{l}^{(j,m_1,m_2)}\sin^{l}(\frac{\theta}{2}) \cos^{2j-l}(\frac{\theta}{2})\)

Parameters:
  • theta – Array \(\theta\) in the formula

  • j – Integer \(2j\) in the formula???

Returns:

The d-matrices array. Same length as theta

small_d_weight(j)[source]

For a certain j, the weight coefficient with index (\(m_1,m_2,l\)) is \(w^{(j,m_1,m_2)}_{l} = (-1)^{m_1-m_2+k}\frac{\sqrt{(j+m_1)!(j-m_1)!(j+m_2)!(j-m_2)!}}{(j-m_1-k)!(j+m_2-k)!(m_1-m_2+k)!k!}\), and \(l\) is an integer ranging from 0 to \(2j\).

Parameters:

j – Integer \(2j\) in the formula???

Returns:

Of the shape (j +1, j +1, j +1). The indices correspond to (\(l,m_1,m_2\))

einsum

class Einsum(expr, shapes)[source]

Bases: object

einsum(expr, *args, **kwargs)[source]
ordered_indices(expr, shapes)[source]

find a better order to reduce transpose.

remove_size1(expr, *args, extra=None)[source]

remove order independent indices (size 1)

replace_ellipsis(expr, shapes)[source]
replace_none_in_shape(x, num=-1)[source]
symbol_generate(base_map)[source]
tensor_einsum_reduce_sum(expr, *args, order)[source]

“abe,bcf->acef” =reshape=> “ab1e1,1bc1f->acef” =product=> “abcef->acef” =reduce_sum=> “acef”
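The reshape/product/reduce_sum strategy can be checked against numpy.einsum directly. This is an independent illustration of the idea, not the module's code:

```python
import numpy as np

rng = np.random.default_rng(0)
a_arr = rng.random((2, 3, 4))  # indices a, b, e
b_arr = rng.random((3, 5, 6))  # indices b, c, f

# direct contraction "abe,bcf->acef"
direct = np.einsum("abe,bcf->acef", a_arr, b_arr)

# same contraction via reshape -> broadcasted product -> reduce_sum,
# mirroring the "ab1e1,1bc1f -> abcef -> acef" strategy described above
x = a_arr.reshape(2, 3, 1, 4, 1)  # a, b, 1, e, 1
y = b_arr.reshape(1, 3, 5, 1, 6)  # 1, b, c, 1, f
manual = (x * y).sum(axis=1)      # sum over b -> a, c, e, f

assert manual.shape == (2, 5, 4, 6)
assert np.allclose(direct, manual)
```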

err_num

class NumberError(value, error=1.0)[source]

Bases: object

basic class for propagation of error

apply(fun, grad=None, dx=1e-05)[source]
property error
exp()[source]
log()[source]
property value
cal_err(fun, *args, grad=None, dx=1e-05, kwargs=None)[source]
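The idea behind NumberError.apply and cal_err, first-order error propagation with an optional finite-difference gradient, can be sketched as follows (an illustration of the concept, not the tf_pwa implementation):

```python
import math

def cal_err(fun, value, error, grad=None, dx=1e-05):
    """First-order error propagation: sigma_f ~= |f'(x)| * sigma_x.

    When no analytic gradient is supplied, a central finite
    difference with step dx is used, matching the documented default."""
    if grad is None:
        grad = lambda x: (fun(x + dx) - fun(x - dx)) / (2 * dx)
    return fun(value), abs(grad(value)) * error

# propagate the error of x = 1.0 +/- 0.1 through exp(x)
val, err = cal_err(math.exp, 1.0, 0.1)
```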

experimental

fit

class FitResult(params, fcn, min_nll, ndf=0, success=True, hess_inv=None)[source]

Bases: object

save_as(file_name, save_hess=False)[source]
set_error(error)[source]
exception LargeNumberError[source]

Bases: ValueError

except_result(fcn, ndf)[source]
fit_minuit(fcn, bounds_dict={}, hesse=True, minos=False, **kwargs)[source]
fit_minuit_v1(fcn, bounds_dict={}, hesse=True, minos=False, **kwargs)[source]
Parameters:
  • fcn

  • bounds_dict

  • hesse

  • minos

Returns:

fit_minuit_v2(fcn, bounds_dict={}, hesse=True, minos=False, **kwargs)[source]
Parameters:
  • fcn

  • bounds_dict

  • hesse

  • minos

Returns:

fit_multinest(fcn)[source]
fit_newton_cg(fcn, method='Newton-CG', use_hessp=False, check_hess=False, gtol=0.0001)[source]
fit_root_fitter(fcn)[source]
fit_scipy(fcn, method='BFGS', bounds_dict={}, check_grad=False, improve=False, maxiter=None, jac=True, callback=None, standard_complex=True, grad_scale=1.0, gtol=0.001)[source]
Parameters:
  • fcn

  • method

  • bounds_dict

  • kwargs

Returns:

fit_improve

class Cached_FG(f_g, grad_scale=1.0)[source]

Bases: object

fun(x)[source]
grad(x)[source]
exception LineSearchWarning[source]

Bases: RuntimeWarning

class Seq(size=5)[source]

Bases: object

add(x)[source]
arg_max()[source]
get_max()[source]
fmin_bfgs_f(f_g, x0, B0=None, M=2, gtol=1e-05, Delta=10.0, maxiter=None, callback=None, norm_ord=inf, **_kwargs)[source]

test BFGS with nonmonotone line search

line_search_nonmonote(f, myfprime, xk, pk, gfk=None, old_fval=None, fk=None, old_old_fval=None, args=(), c1=0.5, maxiter=10)[source]
line_search_wolfe2(f, myfprime, xk, pk, gfk=None, fk=None, old_fval=None, old_old_fval=None, args=(), c1=0.0001, c2=0.9, amax=None, extra_condition=None, maxiter=10)[source]

Find alpha that satisfies strong Wolfe conditions.

Parameters:
  • f (callable f(x,*args)) – Objective function.

  • myfprime (callable f'(x,*args)) – Objective function gradient.

  • xk (ndarray) – Starting point.

  • pk (ndarray) – Search direction.

  • gfk (ndarray, optional) – Gradient value for x=xk (xk being the current parameter estimate). Will be recomputed if omitted.

  • old_fval (float, optional) – Function value for x=xk. Will be recomputed if omitted.

  • old_old_fval (float, optional) – Function value for the point preceding x=xk

  • args (tuple, optional) – Additional arguments passed to objective function.

  • c1 (float, optional) – Parameter for Armijo condition rule.

  • c2 (float, optional) – Parameter for curvature condition rule.

  • amax (float, optional) – Maximum step size

  • extra_condition (callable, optional) – A callable of the form extra_condition(alpha, x, f, g) returning a boolean. Arguments are the proposed step alpha and the corresponding x, f and g values. The line search accepts the value of alpha only if this callable returns True. If the callable returns False for the step length, the algorithm will continue with new iterates. The callable is only called for iterates satisfying the strong Wolfe conditions.

  • maxiter (int, optional) – Maximum number of iterations to perform

Returns:

  • alpha (float or None) – Alpha for which x_new = x0 + alpha * pk, or None if the line search algorithm did not converge.

  • fc (int) – Number of function evaluations made.

  • gc (int) – Number of gradient evaluations made.

  • new_fval (float or None) – New function value f(x_new)=f(x0+alpha*pk), or None if the line search algorithm did not converge.

  • old_fval (float) – Old function value f(x0).

  • new_slope (float or None) – The local slope along the search direction at the new value <myfprime(x_new), pk>, or None if the line search algorithm did not converge.

Notes

Uses the line search algorithm to enforce strong Wolfe conditions. See Wright and Nocedal, ‘Numerical Optimization’, 1999, pg. 59-60.

For the zoom phase it uses an algorithm by […].

minimize(fun, x0, args=(), method=None, jac=None, hess=None, hessp=None, bounds=None, constraints=(), tol=None, callback=None, options=None)[source]
scalar_search_wolfe2(phi, derphi, phi0=None, old_phi0=None, derphi0=None, c1=0.0001, c2=0.9, amax=None, extra_condition=None, maxiter=10)[source]

Find alpha that satisfies strong Wolfe conditions.

alpha > 0 is assumed to be a descent direction.

Parameters:
  • phi (callable phi(alpha)) – Objective scalar function.

  • derphi (callable phi'(alpha)) – Objective function derivative. Returns a scalar.

  • phi0 (float, optional) – Value of phi at 0

  • old_phi0 (float, optional) – Value of phi at previous point

  • derphi0 (float, optional) – Value of derphi at 0

  • c1 (float, optional) – Parameter for Armijo condition rule.

  • c2 (float, optional) – Parameter for curvature condition rule.

  • amax (float, optional) – Maximum step size

  • extra_condition (callable, optional) – A callable of the form extra_condition(alpha, phi_value) returning a boolean. The line search accepts the value of alpha only if this callable returns True. If the callable returns False for the step length, the algorithm will continue with new iterates. The callable is only called for iterates satisfying the strong Wolfe conditions.

  • maxiter (int, optional) – Maximum number of iterations to perform

Returns:

  • alpha_star (float or None) – Best alpha, or None if the line search algorithm did not converge.

  • phi_star (float) – phi at alpha_star

  • phi0 (float) – phi at 0

  • derphi_star (float or None) – derphi at alpha_star, or None if the line search algorithm did not converge.

Notes

Uses the line search algorithm to enforce strong Wolfe conditions. See Wright and Nocedal, ‘Numerical Optimization’, 1999, pg. 59-60.

For the zoom phase it uses an algorithm by […].

fitfractions

class FitFractions(amp, res)[source]

Bases: object

append_int(mcdata, *args, weight=None, no_grad=False, **kwargs)[source]
get_frac(error_matrix=None, sum_diag=True)[source]
get_frac_diag_sum(error_matrix=None)[source]
get_frac_grad(sum_diag=True)[source]
init_res_table()[source]
integral(mcdata, *args, batch=None, no_grad=False, **kwargs)[source]
cal_fitfractions(amp, mcdata, res=None, batch=None, args=(), kwargs=None)[source]

definition:

\[FF_{i} = \frac{\int |A_i|^2 d\Omega }{ \int |\sum_{i}A_i|^2 d\Omega } \approx \frac{\sum |A_i|^2 }{\sum|\sum_{i} A_{i}|^2}\]

interference fitfraction:

\[FF_{i,j} = \frac{\int 2\mathrm{Re}(A_i A_j^{*}) d\Omega }{ \int |\sum_{i}A_i|^2 d\Omega } = \frac{\int |A_i +A_j|^2 d\Omega }{ \int |\sum_{i}A_i|^2 d\Omega } - FF_{i} - FF_{j}\]

gradients (for error transfer):

\[\frac{\partial }{\partial \theta_i }\frac{f(\theta_i)}{g(\theta_i)} = \frac{\partial f(\theta_i)}{\partial \theta_i} \frac{1}{g(\theta_i)} - \frac{\partial g(\theta_i)}{\partial \theta_i} \frac{f(\theta_i)}{g^2(\theta_i)}\]
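A toy numpy illustration of these definitions, with made-up complex amplitude arrays in place of tf_pwa amplitude objects:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# toy complex amplitudes A_1, A_2 evaluated on n phase-space points
a1 = rng.normal(size=n) + 1j * rng.normal(size=n)
a2 = 0.5 * (rng.normal(size=n) + 1j * rng.normal(size=n))

total = np.sum(np.abs(a1 + a2) ** 2)
ff1 = np.sum(np.abs(a1) ** 2) / total
ff2 = np.sum(np.abs(a2) ** 2) / total
# interference fit fraction 2 Re(A_1 A_2^*) / |sum A_i|^2
ff12 = np.sum(2 * np.real(a1 * np.conj(a2))) / total

# diagonal fit fractions plus interference terms sum to 1 by construction
assert np.isclose(ff1 + ff2 + ff12, 1.0)
```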
cal_fitfractions_no_grad(amp, mcdata, res=None, batch=None, args=(), kwargs=None)[source]

calculate fit fractions without gradients.

eval_integral(f, data, var, weight=None, args=(), no_grad=False, kwargs=None)[source]
nll_grad(f, var, args=(), kwargs=None, options=None)[source]
sum_gradient(amp, data, var, weight=1.0, func=<function <lambda>>, grad=True, args=(), kwargs=None)[source]
sum_no_gradient(amp, data, var, weight=1.0, func=<function <lambda>>, *, grad=False, args=(), kwargs=None)

formula

BWR_LS_dom(m, m0, g0, thetas, ls, m1, m2, d=3.0, fix_bug1=False)[source]
BWR_coupling_dom(m, m0, g0, l, m1, m2, d=3.0)[source]
BWR_dom(m, m0, g0, l, m1, m2, d=3.0)[source]
BW_dom(m, m0, g0)[source]
Bprime_polynomial(l, z)[source]
create_complex_root_sympy_tfop(f, var, x, x0, epsilon=1e-12, prec=50)[source]
get_relative_p(ma, mb, mc)[source]
get_relative_p2(ma, mb, mc)[source]

function

nll_funciton(f, data, phsp)[source]

negative log likelihood for minimization

gpu_info

get_gpu_free_memory(i=0)[source]
get_gpu_info(s)[source]
get_gpu_total_memory(i=0)[source]
get_gpu_used_memory(i=0)[source]

histogram

class Hist1D(binning, count, error=None)[source]

Bases: object

property bin_center
property bin_width
chi2()[source]
draw(ax=matplotlib.pyplot, **kwargs)[source]
draw_bar(ax=matplotlib.pyplot, **kwargs)[source]
draw_error(ax=matplotlib.pyplot, fmt='none', **kwargs)[source]
draw_hist(ax=matplotlib.pyplot, **kwargs)[source]
draw_kde(ax=matplotlib.pyplot, kind='gauss', bin_scale=1.0, **kwargs)[source]
draw_line(ax=matplotlib.pyplot, num=1000, kind='UnivariateSpline', **kwargs)[source]
draw_pull(ax=matplotlib.pyplot, **kwargs)[source]
get_bin_weight()[source]
get_count()[source]
static histogram(m, *args, weights=None, mask_error=inf, **kwargs)[source]
ndf()[source]
scale_to(other)[source]
class WeightedData(m, *args, weights=None, **kwargs)[source]

Bases: Hist1D

draw_kde(ax=matplotlib.pyplot, kind='gauss', bin_scale=1.0, **kwargs)[source]
scale_to(other)[source]
cauchy(x)[source]
epanechnikov(x)[source]
gauss(x)[source]
interp_hist(binning, y, num=1000, kind='UnivariateSpline')[source]

interpolate data from a histogram into a line

plot_hist(binning, count, ax=matplotlib.pyplot, **kwargs)[source]
uniform(x)[source]
weighted_kde(m, w, bw, kind='gauss')[source]

main

regist_subcommand(name=None, arg_fun=None, args=None)[source]

params_trans

class ParamsTrans(vm, err_matrix)[source]

Bases: object

get_error(vals, keep=False)[source]
get_error_matrix(vals, keep=False)[source]
get_grad(val, keep=False)[source]
mask_params(params)[source]
trans()[source]

particle

This module implements classes to describe particles and decays.

class BaseDecay(core, outs, name=None, disable=False, p_break=False, c_break=True, curve_style=None, **kwargs)[source]

Bases: object

Base Decay object

Parameters:
  • core – Particle. The mother particle

  • outs – List of Particle. The daughter particles

  • name – String. Name of the decay

  • disable – Boolean. If it’s True???

as_config()[source]
get_id()[source]
property name
class BaseParticle(name, J=0, P=-1, C=None, spins=None, mass=None, width=None, id_=None, disable=False, **kwargs)[source]

Bases: object

Base Particle object. Name is “name[:id]”.

Parameters:
  • name – String. Name of the particle

  • J – Integer or half-integer. The total spin

  • P – 1 or -1. The parity

  • spins – List. The spin quantum numbers. If it’s not provided, spins will be tuple(range(-J, J + 1)).

  • mass – Real variable

  • width – Real variable

add_creator(d)[source]

Add a decay reaction where the particle is created.

Parameters:

d – BaseDecay object

add_decay(d)[source]
Parameters:

d – BaseDecay object

as_config()[source]
chain_decay()[source]

return all decay chains that self decays into

get_resonances()[source]

return all resonances that self decays into

property name
remove_decay(d)[source]
Parameters:

d – BaseDecay object

set_name(name, id_=None)[source]
class Decay(core, outs, name=None, disable=False, p_break=False, c_break=True, curve_style=None, **kwargs)[source]

Bases: BaseDecay

General Decay object

barrier_factor(q, q0)[source]

The cached barrier factors with \(d=3.0\) for all \(l\). For barrier factor, refer to tf_pwa.breit_wigner.Bprime(L, q, q0, d)

Parameters:
  • q – Data array

  • q0 – Real number

Returns:

1-d array for every data point

generate_params(name=None, _ls=True)[source]

Generate the name of the variable for every (l,s) pair. In PWA, the variable is usually expressed as \(g_{ls}\).

Parameters:
  • name – String. It is the name of the decay by default

  • _ls

    ???

Returns:

List of strings

get_cg_matrix()[source]

The matrix indexed by \([(l,s),(\lambda_b,\lambda_c)]\). The matrix element is

\[\sqrt{\frac{ 2 l + 1 }{ 2 j_a + 1 }} \langle j_b, j_c, \lambda_b, - \lambda_c | s, \lambda_b - \lambda_c \rangle \langle l, s, 0, \lambda_b - \lambda_c | j_a, \lambda_b - \lambda_c \rangle\]

This is actually the pre-factor of \(g_{ls}\) in the amplitude formula.

Returns:

2-d array of real numbers

get_l_list()[source]

List of l in self.get_ls_list()

get_ls_list()[source]

It has an interface to tf_pwa.particle.GetA2BC_LS_list(ja, jb, jc, pa, pb, pc).

Returns:

List of (l,s) pairs

get_min_l()[source]

The minimal l in the LS coupling

class DecayChain(chain)[source]

Bases: object

A decay chain. E.g. \(A\rightarrow BC, B\rightarrow DE\)

Parameters:

chain

???

depth_first(node_first=True)[source]

depth first travel for decay

static from_particles(top, finals)[source]

Build possible decay chain Topology. E.g. a -> [b,c,d] => [[a->rb,r->cd],[a->rc,r->bd],[a->rd,r->bc]]

Parameters:
  • top – Particle

  • finals – List of Particle

Returns:

DecayChain

static from_sorted_table(decay_dict)[source]

Create decay chain from a topology independent structure. E.g. {a:[b,c,d],r:[c,d],b:[b],c:[c],d:[d]} => [a->rb,r->cd]

Parameters:

decay_dict – Dictionary

Returns:

DecayChain

get_all_particles()[source]

get all particles in the decay chains

get_id()[source]

return identity of the decay

get_particle_decay(particle)[source]

get particle decay in the chain

sorted_table()[source]

A topology independent structure. E.g. [a->rb,r->cd] => {a:[b,c,d],r:[c,d],b:[b],c:[c],d:[d]}

Returns:

Dictionary indexed by Particle

sorted_table_layers()[source]

Get the layers of the decay chain as a sorted table, i.e. the lists of particles with the same number of final particles, so the first item is always None. E.g. [a->rb,r->cd] => [None, [(b, [b]), (c, [c]), (d, [d])], [(r, [c, d])], [(a, [b, c, d])]]

Returns:

List of dictionary

standard_topology()[source]

Standard topology structure of the decay chain, in which every inner particle is replaced by a tuple of the corresponding final particles. For example, [A->R+C, R->B+D] becomes [A->(B, D)+C, (B, D)->B+D].

topology_id(identical=True)[source]

topology identity

Parameters:

identical – allow identical particles in finals

Returns:

topology_map(other=None)[source]

Mapping relations of the same topology decay E.g. [A->R+B,R->C+D],[A->Z+B,Z->C+D] => {A:A,B:B,C:C,D:D,R:Z,A->R+B:A->Z+B,R->C+D:Z->C+D}

topology_same(other, identical=True)[source]

whether self and other have the same topology

Parameters:
  • other – other decay chains

  • identical – using identical particles

Returns:

class DecayGroup(chains)[source]

Bases: object

A group of two-body decays.

Parameters:

chains – List of DecayChain

as_config()[source]
get_chain_from_particle(names)[source]

get the first decay chain that contains all particles in names

get_chains_map(chains=None)[source]
Parameters:

chains

Returns:

get_decay_chain(id_)[source]

get decay chain from identity string

get_particle(name)[source]

get particle by name

topology_structure(identical=False, standard=True)[source]
Parameters:
  • identical

  • standard

Returns:

GetA2BC_LS_list(ja, jb, jc, pa=None, pb=None, pc=None, p_break=False, ca=None)[source]

The \(L-S\) coupling for the decay \(A\rightarrow BC\), where \(L\) is the orbital angular momentum of \(B\) and \(C\), and \(S\) is the superposition of their spins. It's required that \(|J_B-J_C|\leq S \leq J_B+J_C\) and \(|L-S|\leq J_A \leq L+S\). The conservation of P parity also requires that \(L\) satisfies \(P_A = P_B P_C (-1)^{L}\).

Parameters:
  • ja – J of particle A

  • jb – J of particle B

  • jc – J of particle C

  • pa – P of particle A

  • pb – P of particle B

  • pc – P of particle C

  • p_break – allow P parity violation

  • ca – enable C parity selection, \(c=(-1)^{l+s}\)

Returns:

List of \((l,s)\) pairs.
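The selection rules can be illustrated with a small pure-Python enumeration for integer spins (a sketch only; the real GetA2BC_LS_list also handles half-integer spins and the p_break and ca options):

```python
def ls_list(ja, jb, jc, pa, pb, pc):
    """Enumerate (l, s) pairs for A -> B C with integer spins,
    following the quoted selection rules:
    |jb-jc| <= s <= jb+jc, |l-s| <= ja <= l+s, and P conservation."""
    out = []
    for s in range(abs(jb - jc), jb + jc + 1):
        for l in range(abs(ja - s), ja + s + 1):
            if pa == pb * pc * (-1) ** l:  # P-parity conservation
                out.append((l, s))
    return out

# e.g. a 1^- particle decaying to two 0^- particles: only l = 1, s = 0
assert ls_list(1, 0, 0, -1, -1, -1) == [(1, 0)]
```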

class ParticleList(initlist=None)[source]

Bases: UserList

List for Particle

cp_charge_group(finals, id_p, cp)[source]
cross_combine(x)[source]

Combine every two elements of a list, as well as give every single one of them.??? Can be put to utils.py

Parameters:

x

Returns:

simple_cache_fun(f)[source]
Parameters:

f

Returns:

split_len(dicts)[source]

Split a dictionary of lists by their length. E.g. {“b”:[2],”c”:[1,3],”d”:[1]} => [None, [(‘b’, [2]), (‘d’, [1])], [(‘c’, [1, 3])]]

Put to utils.py or _split_len if not used anymore???

Parameters:

dicts – Dictionary

Returns:

List of dictionary

split_particle_type(decays)[source]
split_particle_type_list(decays)[source]

Separate initial particle, intermediate particles, final particles in a decay chain.

Parameters:

decays – DecayChain

Returns:

Set of initial Particle, set of intermediate Particle, set of final Particle

phasespace

class ChainGenerator(m0, mi)[source]

Bases: object

struct = (m0, [m1, m2, m3]) represents the decay m0 -> [m1, m2, m3], where m0 is a float and each mi is either a float or a nested struct.

cal_max_weight()[source]
generate(N)[source]
get_gen(idx_gen)[source]
class PhaseSpaceGenerator(m0, mass)[source]

Bases: object

Phase Space Generator for n-body decay

cal_max_weight()[source]
flatten_mass(ms, importances=True)[source]

sampling from mass with weight

generate(n_iter: int, force=True, flatten=True, importances=True) list[source]

generate n_iter events

Parameters:
  • n_iter – number of events

  • force – switch for cutting generated data to required size

  • flatten – switch for sampling with weights

Returns:

daughters 4-momentum, list of ndarray with shape (n_iter, 4)

generate_mass(n_iter)[source]

generate possible inner mass.

generate_momentum(mass, n_iter=None)[source]

generate random momenta from the masses, and boost them into the same rest frame

generate_momentum_i(m0, m1, m2, n_iter, p_list=[])[source]

Generate momenta with \(|p|\) determined by m0, m1, m2 in the m0 rest frame.

Parameters:

p_list – extra list of momenta that need to be boosted

get_mass_range()[source]
get_weight(ms, importances=True)[source]

calculate weight of mass

\[w = \frac{1}{w_{max}} \frac{1}{M}\prod_{i=0}^{n-2} q(M_i,M_{i+1},m_{i+1})\]
mass_importances(mass)[source]

generate possible inner mass.

set_decay(m0, mass)[source]

set decay mass, calculate max weight

\[w_{max} = \frac{1}{M}\prod_{i=0}^{n-2} q(max(M_i),min(M_{i+1}),m_{i+1})\]
\[max(M_i) = M_0 - \sum_{j=1}^{i} (m_j)\]
\[min(M_i) = \sum_{j=i}^{n} (m_j)\]
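The factor \(q(M, m_a, m_b)\) in these formulas is the standard two-body breakup momentum, presumably what get_p(M, ma, mb) below computes. A sketch:

```python
import math

def get_p(M, ma, mb):
    """Breakup momentum of the two-body decay M -> ma + mb in the
    mother rest frame: q = sqrt((M^2-(ma+mb)^2)(M^2-(ma-mb)^2)) / (2M).
    Clipped to zero below threshold."""
    s = (M * M - (ma + mb) ** 2) * (M * M - (ma - mb) ** 2)
    return math.sqrt(max(s, 0.0)) / (2 * M)

q = get_p(1.0, 0.3, 0.2)  # momentum for a 1.0 -> 0.3 + 0.2 decay
```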
class UniformGenerator(a, b)[source]

Bases: object

generate(N)[source]
generate_phsp(m0, mi, N=1000)[source]

General method to generate the decay chain phase space.

>>> (a, b), c = generate_phsp(1.0, ((0.3, (0.1, 0.1)), 0.2), N=10)
>>> assert np.allclose(LorentzVector.M(a + b + c), 1.0)

get_p(M, ma, mb)[source]

root_io

load_Ttree(tree)[source]

load TTree as dict

load_root_data(fnames)[source]

load root file as dict

save_dict_to_root(dic, file_name, tree_name=None)[source]

This function stores data arrays in the form of a dictionary into a root file. It provides a convenient interface to uproot.

Parameters:
  • dic – Dictionary of data

  • file_name – String

  • tree_name – String. By default it’s “tree”.

significance

erfc_inverse(x)[source]

erfc^{-1}(x) = -1/sqrt(2) * normal_quantile(0.5 * x)

normal_quantile(p)[source]

Computes quantiles for standard normal distribution N(0, 1) at probability p

prob(chi_2, ndf)[source]

Computation of the probability for a certain Chi-squared (chi2) and number of degrees of freedom (ndf).

Calculations are based on the incomplete gamma function P(a,x), where a=ndf/2 and x=chi2/2.

P(a,x) represents the probability that the observed Chi-squared for a correct model should be less than the value chi2.

The returned probability corresponds to 1-P(a,x), which denotes the probability that an observed Chi-squared exceeds the value chi2 by chance, even for a correct model.

significance(l1: float, l2: float, ndf: int) float[source]

computation of significance for log-likelihood fit values l1 and l2 with number of degrees of freedom (ndf).
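For a single extra degree of freedom, Wilks' theorem reduces this to \(\sqrt{2|\Delta(-\ln L)|}\). The sketch below covers only that ndf = 1 case; the documented function handles general ndf through the chi-squared probability above:

```python
import math

def significance(l1, l2, ndf=1):
    """Significance of a likelihood difference for ndf = 1 only
    (a sketch; the real function supports general ndf)."""
    if ndf != 1:
        raise ValueError("this sketch only covers ndf = 1")
    return math.sqrt(2.0 * abs(l1 - l2))

# a drop of 12.5 in the negative log-likelihood corresponds to 5 sigma
assert abs(significance(0.0, -12.5) - 5.0) < 1e-12
```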

tensorflow_wrapper

class Module[source]

Bases: object

numpy_cross(a, b)[source]
regist_function(name, var=None, base_mod=<tf_pwa.tensorflow_wrapper.Module object>)[source]
set_gpu_mem_growth()[source]

transform

class BaseTransform(x: list | str, **kwargs)[source]

Bases: object

call(x: Tensor) Tensor[source]
inverse(y: Tensor) Tensor[source]
read(x: dict) Tensor[source]
class LinearTrans(x: list | str, k: float = 1.0, b: float = 0.0, **kwargs)[source]

Bases: BaseTransform

call(x) Tensor[source]
inverse(x: Tensor) Tensor[source]
create_trans(item: dict) BaseTransform[source]

utils

This module provides some functions that may be useful in other modules.

class AttrDict[source]

Bases: dict

array_split(data, batch=None)[source]

Split a data array. batch is the number of data in a row.

check_positive_definite(m)[source]

check if matrix m is positive definite

>>> check_positive_definite([[1.0,0.0],[0.0, 0.1]])
True
>>> check_positive_definite([[1.0,0.0],[1.0,-0.1]])
eigvalues:  [-0.1  1. ]
False
create_dir(name)[source]
create_test_config(model_name, params={}, plot_params={})[source]
deep_iter(base, deep=1)[source]
deep_ordered_iter(base, deep=1)[source]
deep_ordered_range(size, deep=1, start=0)[source]
error_print(x, err=None, dig=None)[source]

It returns a format string “value +/- error”. The precision is modified according to err

Parameters:
  • x – Value

  • err – Error

Returns:

String

fit_normal(data, weights=None)[source]

Fit the data distribution with a Gaussian distribution, through minimizing the negative log likelihood function

\[- \ln L = \frac{1}{2}\sum w_i \frac{(\mu - x_i )^2}{\sigma^2} + (\sum w_i) \ln (\sqrt{2\pi} \sigma )\]

the fit result can be solved as

\[\frac{\partial (-\ln L)}{\partial \mu} = 0 \Rightarrow \bar{\mu} = \frac{\sum w_i x_i}{\sum w_i}\]
\[\frac{\partial (-\ln L)}{\partial \sigma} = 0 \Rightarrow \bar{\sigma} = \sqrt{\frac{\sum w_i (\bar{\mu} - x_i)^2}{\sum w_i}}\]

From hessian

\[\frac{\partial^2 (-\ln L)}{\partial \mu^2} = \frac{\sum w_i}{\sigma^2}\]
\[\frac{\partial^2 (-\ln L)}{\partial \sigma^2} = \frac{3\sum w_i (\mu - x_i)^2}{\sigma^4} - \frac{\sum w_i}{\sigma^2}\]

the error matrix can be written as [[ \(\bar{\sigma}^2/N\) , 0], [0, \(\bar{\sigma}^2/(2N)\) ]] .
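The closed-form solution can be sketched in numpy (an illustration of the formulas above, not the tf_pwa implementation; the error formulas assume the Hessian-based result quoted in the text):

```python
import numpy as np

def fit_normal(data, weights=None):
    """Weighted maximum-likelihood Gaussian fit in closed form:
    mu = sum(w x)/sum(w), sigma = sqrt(sum(w (mu-x)^2)/sum(w)),
    with errors sigma/sqrt(N) and sigma/sqrt(2N) from the Hessian."""
    data = np.asarray(data, dtype=float)
    w = np.ones_like(data) if weights is None else np.asarray(weights, float)
    n = np.sum(w)
    mu = np.sum(w * data) / n
    sigma = np.sqrt(np.sum(w * (data - mu) ** 2) / n)
    return (mu, sigma), (sigma / np.sqrt(n), sigma / np.sqrt(2 * n))

(mu, sigma), (mu_err, sigma_err) = fit_normal([1.0, 2.0, 3.0])
```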

flatten_dict_data(data, fun=<built-in method format of str object>)[source]

Flatten nested dictionary data into one layer dictionary.

Returns:

Dictionary

flatten_np_data(data)
is_complex(x)[source]

If x is of type complex, it returns True.

load_config_file(name)[source]

Load config file such as Resonances.yml.

Parameters:

name – File name. Either yml file or json file.

Returns:

Dictionary read from the file.

plot_particle_model(model_name, params={}, plot_params={}, axis=None, special_points=None)[source]
pprint(dicts)[source]

Print dictionary using json format.

print_dic(dic)[source]

Another way to print dictionary.

save_frac_csv(file_name, fit_frac)[source]
std_periodic_var(p, mid=0.0, pi=3.141592653589793)[source]

Transform a periodic variable into its range.

>>> std_periodic_var(math.pi)
-3.1415...
>>> std_periodic_var(2*math.pi + 0.01)
0.0...
Parameters:
  • p – Value

  • mid – The middle value

  • pi – Half-range

Returns:

The transformed value

std_polar(rho, phi)[source]

To standardize a polar variable. By standard form, it means \(\rho>0, -\pi<\phi<\pi\).

Parameters:
  • rho – Real number

  • phi – Real number

Returns:

rho, phi

time_print(f)[source]

It provides a wrapper to print the time cost on a process.

tuple_table(fit_frac, ignore_items=['sum_diag'])[source]

variable

This module implements classes and methods to manage the variables in fitting.

class Bound(a=None, b=None, func=None)[source]

Bases: object

This class provides methods to implement the boundary constraint for a variable. It depends on SymPy. The boundary-transforming function can transform a variable x defined in the real domain to a variable y defined in a limited range (a,b). y should be the physical parameter, while x is the one used in fitting.

Parameters:
  • a – Real number. The lower boundary

  • b – Real number. The upper boundary

  • func – String. The boundary-transforming function. By default, if neither a nor b is None, func is “(b-a)*(sin(x)+1)/2+a”; else if only a is None, func is “b+1-sqrt(x**2+1)”; else if only b is None, func is “a-1+sqrt(x**2+1)”; else func is “x”.

a, b, func can be referred to as self.lower, self.upper, self.func.
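The default finite-range transform quoted above can be illustrated without SymPy (the function names here are illustrative, not the tf_pwa API):

```python
import math

def x2y(x, a, b):
    # map the unbounded fit variable x into the physical range (a, b):
    # y = (b-a)*(sin(x)+1)/2 + a
    return (b - a) * (math.sin(x) + 1) / 2 + a

def y2x(y, a, b):
    # inverse map; clamp y into [a, b] first, as Bound.get_y2x does
    y = min(max(y, a), b)
    return math.asin(2 * (y - a) / (b - a) - 1)
```

Any real x lands inside [a, b], and x2y(y2x(y, a, b), a, b) recovers y exactly.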

get_d2ydx2(val)[source]

To calculate the second-order derivative \(\frac{d^2y}{dx^2}\).

Parameters:

val – Real number x

Returns:

Real number \(\frac{d^2y}{dx^2}\)

get_dydx(val)[source]

To calculate the derivative \(\frac{dy}{dx}\).

Parameters:

val – Real number x

Returns:

Real number \(\frac{dy}{dx}\)

get_func()[source]

Initialize the function string into sympy objects.

Returns:

sympy objects f, df, inv, which are the function, its derivative and its inverse function.

get_x2y(val)[source]

To derive y from x

Parameters:

val – Real number x

Returns:

Real number y

get_y2x(val)[source]

To derive x from y. y will be set to a if y<a, and y will be set to b if y>b.

Parameters:

val – Real number y

Returns:

Real number x

class SumVar(value, grad, var, hess=None)[source]

Bases: object

from_call(var, *args, **kwargs)[source]
from_call_with_hess(var, *args, **kwargs)[source]
class Variable(name, shape=None, cplx=False, vm=None, overwrite=True, is_cp=False, **kwargs)[source]

Bases: object

This class has interface to VarsManager. It is convenient for users to define a group of real variables, since it may be more perceptually intuitive to define them together.

By calling the instance of this class, it returns the value of this variable. The type is tf.Tensor.

Parameters:
  • name – The name of the variable group

  • shape – The shape of the group. E.g. for a 4*3*2 matrix, shape is [4,3,2]. By default, shape is [] for a real variable.

  • cplx – Boolean. Whether the variable (or the variables) are complex or not.

  • vm – VarsManager. It is by default the one automatically defined in the global scope by the program.

  • overwrite – Boolean. If it’s True, the program will not throw a warning when overwriting a variable with the same name.

  • kwargs – Other arguments that may be used when calling self.real_var() or self.cplx_var()

cplx_cpvar(polar=True, fix=False, fix_vals=(1.0, 0.0, 0.0, 0.0), value=0.0)[source]

It implements the interface to VarsManager.add_complex_var(), and also supports variables with a non-empty shape.

Parameters:
  • polar – Boolean. Whether the variable is defined in polar coordinate or in Cartesian coordinate.

  • fix – Boolean. Whether the variable is fixed. It’s enabled only if self.shape is None.

  • fix_vals – Length-4 tuple. The value of the fixed complex variable is fix_vals[0]+fix_vals[1]j.

cplx_var(polar=None, fix=False, fix_vals=(1.0, 0.0))[source]

It implements the interface to VarsManager.add_complex_var(), and also supports variables with a non-empty shape.

Parameters:
  • polar – Boolean. Whether the variable is defined in polar coordinate or in Cartesian coordinate.

  • fix – Boolean. Whether the variable is fixed. It’s enabled only if self.shape is None.

  • fix_vals – Length-2 tuple. The value of the fixed complex variable is fix_vals[0]+fix_vals[1]j.

factor_names()[source]
fixed(value=None)[source]

Fix this Variable. Note only non-shape real Variable supports this method.

Parameters:

value – Real number. The fixed value

freed()[source]

Set free this Variable. Note only non-shape Variable supports this method.

init_name_list()[source]
is_fixed()[source]
r_shareto(Var)[source]

Share the radial (magnitude) component with another Variable of the same shape. Only a complex Variable supports this method.

Parameters:

Var – Variable.

real_var(value=None, range_=None, fix=False)[source]

It implements the interface to VarsManager.add_real_var(), and also supports variables with a non-empty shape.

Parameters:
  • value – Real number. The value of all real components.

  • range_ – Length-2 array. The range of all real components (the initial value is sampled uniformly in it).

  • fix – Boolean. Whether the variable is fixed.

rename(new_name)[source]

Rename this Variable.

sameas(Var)[source]

Set the Variable to be the same with another Variable of the same shape.

Parameters:

Var – Variable.

set_bound(bound, func=None, overwrite=False)[source]

Set boundary for this Variable. Note only non-shape real Variable supports this method.

Parameters:
  • bound – Length-2 tuple.

  • func – String. Refer to class tf_pwa.variable.Bound.

  • overwrite – Boolean. If it’s True, the program will not throw a warning when overwriting a variable with the same name.

set_fix_idx(fix_idx=None, fix_vals=None, free_idx=None)[source]
Parameters:
  • fix_idx – Integer or list of integers. Which complex component in the innermost layer of the variable is fixed. E.g. If self.shape==[2,3,4] and fix_idx==[1,2], then Variable()[i][j][1] and Variable()[i][j][2] will be the fixed value.

  • fix_vals – Float or length-2 float list for complex variable. The fixed value.

  • free_idx – Integer or list of integers. Which complex component in the innermost layer of the variable is set free. E.g. If self.shape==[2,3,4] and free_idx==[0], then Variable()[i][j][0] will be set free.

set_phi(phi, index=None)[source]
set_rho(rho, index=None)[source]
set_same_ratio()[source]
set_value(value, index=None)[source]
property value

Ndarray of self.shape.

property variables

Names of the real variables contained in this Variable instance.

Returns:

List of string.

class VarsManager(name='', dtype=tf.float64, multi_gpu=None)[source]

Bases: object

This class provides methods to operate the variables in fitting. Every variable is a 1-d tf.Variable of dtype (tf.float64 by default).

All variables are stored in a dictionary self.variables. The indices of the dictionary are the variables’ names, so the name property in tf.Variable does not matter. All methods intended to change the variables operate on self.variables directly.

Besides, all trainable variables’ names will be stored in a list self.trainable_vars.

add_cartesiancp_var(name, polar=None, trainable=True, fix_vals=(1.0, 0.0, 0.0, 0.0))[source]

Add a complex variable. Two real variables named name+’r’ and name+’i’ will be added into self.variables. The initial values will be given automatically according to its form of coordinate.

Parameters:
  • name – The name of the complex variable.

  • polar – Boolean. If it’s True, name+’r’ and name+’i’ are defined in polar coordinate; otherwise they are defined in Cartesian coordinate.

  • trainable – Boolean. If it’s True, real variables name+’r’ and name+’i’ will be trainable.

  • fix_vals – Length-4 array. If trainable=False, the fixed values for name+’r’ and name+’i’ are fix_vals[0], fix_vals[1] respectively.

add_complex_var(name, polar=None, trainable=True, fix_vals=(1.0, 0.0))[source]

Add a complex variable. Two real variables named name+’r’ and name+’i’ will be added into self.variables. The initial values will be given automatically according to its form of coordinate.

Parameters:
  • name – The name of the complex variable.

  • polar – Boolean. If it’s True, name+’r’ and name+’i’ are defined in polar coordinate; otherwise they are defined in Cartesian coordinate.

  • trainable – Boolean. If it’s True, real variables name+’r’ and name+’i’ will be trainable.

  • fix_vals – Length-2 array. If trainable=False, the fixed values for name+’r’ and name+’i’ are fix_vals[0], fix_vals[1] respectively.

add_real_var(name, value=None, range_=None, trainable=True)[source]

Add a real variable named name into self.variables. If value and range_ are not provided, the initial value is set to be a uniform random number between 0 and 1.

Parameters:
  • name – The name of the variable, the index of this variable in self.variables

  • value – The initial value.

  • range_ – Length-2 array. It is ignored if value is given. Otherwise the initial value is set to be a uniform random number between range_[0] and range_[1].

  • trainable – Boolean. If it’s True, the variable is trainable while fitting.

batch_sum_var(fun, data, batch=65000)[source]
error_trans(err_matrix)[source]
get(name, val_in_fit=True)[source]

Get a real variable. If val_in_fit is True, this is the variable used in fitting, not considering its boundary transformation.

Parameters:

name – String

Returns:

tf.Variable

get_all_dic(trainable_only=False)[source]

Get a dictionary of all variables.

Parameters:

trainable_only – Boolean. If it’s True, the dictionary only contains the trainable variables.

Returns:

Dictionary

get_all_val(val_in_fit=False)[source]

Get the values of all trainable variables.

Parameters:

val_in_fit – Boolean. If it’s True, the values will be the ones that are actually used in fitting (thus may not be the physical values because of the boundary transformation).

Returns:

List of real numbers.

mask_params(params)[source]
minimize(fcn, jac=True, method='BFGS', mini_kwargs={})[source]

Minimize a given function.

minimize_error(fcn, fit_result)[source]
read(name)[source]
refresh_vars(init_val=None, bound_dic=None)[source]

Refresh all trainable variables

remove_bound()[source]

Remove a boundary for a variable

remove_var(name)[source]

Remove a variable from self.variables. More specifically, two variables (name+’r’ and name+’i’) will be removed if it’s complex.

Parameters:

name – The name of the variable

rename_var(name, new_name, cplx=False)[source]

Rename a variable.

Parameters:
  • name – Name of the variable

  • new_name – New name

  • cplx – Boolean. Users should indicate if this variable is complex or not.

rp2xy(name)[source]

Transform a complex variable into Cartesian coordinate.

Parameters:

name – String

rp2xy_all(name_list=None)[source]

If name_list is not provided, this method will transform all complex variables into Cartesian coordinate.

Parameters:

name_list – List of names of complex variables

set(name, value, val_in_fit=True)[source]

Set value for a real variable. If val_in_fit is True, this is the variable used in fitting, not considering its boundary transformation.

Parameters:
  • name – String

  • value – Real number

set_all(vals, val_in_fit=False)[source]

Set values for some variables.

Parameters:

vals – It can be either a dictionary or a list of real numbers. If it’s a list, the values correspond to all trainable variables in order.

set_bound(bound_dic, func=None, overwrite=False)[source]

Set boundary for the trainable variables. The variables will be constrained in their ranges while fitting.

Parameters:
  • bound_dic – Dictionary. E.g. {“name1”:(-1.0,1.0), “name2”:(None,1.0)}. In this example, None means it has no lower limit.

  • func – String. Users can provide a string to describe the transforming function. For details, refer to class tf_pwa.variable.Bound.

  • overwrite – Boolean. If it’s True, the program will not throw a warning when overwriting a variable with the same name.

set_fix(name, value=None, unfix=False)[source]

Fix or unfix a variable (change its trainability).

Parameters:
  • name – The name of the variable

  • value – The fixed value. It is ignored if unfix=True.

  • unfix – Boolean. If it’s True, the variable will become trainable rather than be fixed.

set_same(name_list, cplx=False)[source]

Set some variables to be the same.

Parameters:
  • name_list – List of strings. Name of the variables.

  • cplx – Boolean. Whether the variables are complex or real.

set_share_r(name_list)[source]

Constrain some complex variables to share their radial (magnitude) variable while their phase variables remain different. Users can set this type of constraint using this method.

Parameters:

name_list – List of strings. Note the strings should be the names of the complex variables rather than of their radial parts.

set_trans_var(xvals)[source]

\(y = y(x)\)

Parameters:

xvals – List of real numbers. The values of x used in fitting.

Returns:

standard_complex()[source]
std_polar(name)[source]

Transform a complex variable into standard polar coordinate, which means its radial part is positive and its phase part is between \(-\pi\) and \(\pi\).

Parameters:

name – String

std_polar_all()[source]

Transform all complex variables into standard polar coordinate.

temp_params(params)[source]
property trainable_variables

List of tf.Variable. It is similar to tf.keras.Model.trainable_variables.

trans_error_matrix(hess_inv, xvals)[source]

Bound trans for error matrix \(F(x)=F(y(x))\), \(V_y = y' V_x y'\)

Returns:

trans_f_grad_hess(f)[source]

\(F(x)=F(y(x))\), \(G(x)=\frac{dF}{dx}=\frac{dF}{dy}\frac{dy}{dx}\)

Parameters:

fcn_grad – The return of class tf_pwa.model???

Returns:

trans_fcn_grad(fcn_grad)[source]

\(F(x)=F(y(x))\), \(G(x)=\frac{dF}{dx}=\frac{dF}{dy}\frac{dy}{dx}\)

Parameters:

fcn_grad – The return of class tf_pwa.model???

Returns:

trans_grad_hessp(f)[source]

\(F(x)=F(y(x))\), \(G(x)=\frac{dF}{dx}=\frac{dF}{dy}\frac{dy}{dx}\)

Parameters:

fcn_grad – The return of class tf_pwa.model???

Returns:

trans_params(polar)[source]

Transform all complex variables into either polar coordinate or Cartesian coordinate.

Parameters:

polar – Boolean

xy2rp(name)[source]

Transform a complex variable into polar coordinate.

Parameters:

name – String

xy2rp_all(name_list=None)[source]

If name_list is not provided, this method will transform all complex variables into polar coordinate.

Parameters:

name_list – List of names of complex variables

combineVM(vm1, vm2, name='', same_list=None)[source]

This function combines two VarsManager objects into one. (WIP)

Parameters:
  • name – The name of this combined VarsManager

  • same_list – To make some variables in the two VarsManager to be the same. E.g. if same_list = ["var",["var1","var2"]], then “var” in vm1 and vm2 will be the same, and “var1” in vm1 and “var2” in vm2 will be the same.

deep_stack(dic, deep=1)[source]

version

vis

class DotGenerator(top)[source]

Bases: object

static dot_chain(chains, has_label=True)[source]
dot_default_edge = '    "{}" -> "{}";\n'
dot_default_node = '    "{}" [shape=none];\n'
dot_head = '\ndigraph {\n    rankdir=LR;\n    node [shape=point];\n    edge [arrowhead=none, labelfloat=true];\n'
dot_label_edge = '    "{}" -> "{}" [label="{}"];\n'
dot_ranksame = '    {{ rank=same {} }};\n'
dot_tail = '}\n'
get_dot_source()[source]
draw_decay_struct(decay_chain, show=False, **kwargs)[source]
get_decay_layout(decay_chain)[source]
get_layout(decay_chain, xs, ys)[source]
get_node_layout(decay_chain)[source]
plot_decay_struct(decay_chain, ax=matplotlib.pyplot)[source]
reorder_final_particle(decay_chain, ys)[source]
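As an illustration, the template strings listed above under DotGenerator can be assembled by hand into a Graphviz source for a toy decay A → B C (the particle names and edge labels are placeholders):

```python
# template strings copied from DotGenerator above
dot_head = (
    "\ndigraph {\n    rankdir=LR;\n    node [shape=point];\n"
    "    edge [arrowhead=none, labelfloat=true];\n"
)
dot_default_node = '    "{}" [shape=none];\n'
dot_label_edge = '    "{}" -> "{}" [label="{}"];\n'
dot_tail = "}\n"

source = dot_head
for particle in ("A", "B", "C"):
    source += dot_default_node.format(particle)
source += dot_label_edge.format("A", "B", "l=0")
source += dot_label_edge.format("A", "C", "l=0")
source += dot_tail
print(source)
```

The resulting string can be rendered with the graphviz dot command or any DOT viewer.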

weight_smear

dirichlet_smear(weight, **kwargs)[source]
gamma_smear(weight, **kwargs)[source]
get_weight_smear(name)[source]
poisson_smear(weight, **kwargs)[source]
register_weight_smear(name)[source]
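The typical use of these smear functions is bootstrap-style resampling of event weights for fit-stability studies; a plausible sketch of a Poisson smear (an assumption about the scheme, not the library's exact code) is:

```python
import numpy as np

def poisson_smear_sketch(weight, lam=1.0, seed=None):
    # multiply each event weight by an independent Poisson(lam) draw,
    # the usual bootstrap resampling of weighted events
    rng = np.random.default_rng(seed)
    weight = np.asarray(weight, dtype=float)
    return weight * rng.poisson(lam, size=weight.shape)
```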

See also

Available Resonances Model

  1. "default", "BWR" (Particle)

    \[R(m) = \frac{1}{m_0^2 - m^2 - i m_0 \Gamma(m)}\]

    (Source code, png, hires.png, pdf)

    _images/particle_model-1.png
  2. "x" (ParticleX)

Simple particle model for mass (used in expr).

\[R(m) = m\]

(Source code, png, hires.png, pdf)

_images/particle_model-2.png
  3. "BWR2" (ParticleBWR2)

    \[R(m) = \frac{1}{m_0^2 - m^2 - i m_0 \Gamma(m)}\]

    The difference between BWR and BWR2 is the behavior when the mass is below the threshold ( \(m_0 = 0.1 < 0.1 + 0.1 = m_1 + m_2\)).

    (Source code, png, hires.png, pdf)

    _images/particle_model-3.png
  4. "BWR_below" (ParticleBWRBelowThreshold)

    \[R(m) = \frac{1}{m_0^2 - m^2 - i m_0 \Gamma(m)}\]
  5. "BWR_coupling" (ParticleBWRCoupling)

    Force \(q_0=1/d\) to avoid the below-threshold condition for the BWR model, and remove other constant parts; then \(\Gamma_0\) is the coupling parameter.

    \[R(m) = \frac{1}{m_0^2 - m^2 - i m_0 \Gamma_0 \frac{q}{m} q^{2l} B_L'^2(q, 1/d, d)}\]

    (Source code, png, hires.png, pdf)

    _images/particle_model-4.png
  6. "BWR_normal" (ParticleBWR_normal)

    \[R(m) = \frac{\sqrt{m_0 \Gamma(m)}}{m_0^2 - m^2 - i m_0 \Gamma(m)}\]
  7. "GS_rho" (ParticleGS)

    Gounaris G.J., Sakurai J.J., Phys. Rev. Lett., 21 (1968), pp. 244-247

    c_daug2Mass: mass for daughter particle 2 (\(\pi^{+}\)) 0.13957039

    c_daug3Mass: mass for daughter particle 3 (\(\pi^{0}\)) 0.1349768

    \[R(m) = \frac{1 + D \Gamma_0 / m_0}{(m_0^2 -m^2) + f(m) - i m_0 \Gamma(m)}\]
    \[f(m) = \Gamma_0 \frac{m_0 ^2 }{q_0^3} \left[q^2 [h(m)-h(m_0)] + (m_0^2 - m^2) q_0^2 \frac{d h}{d m}|_{m0} \right]\]
    \[h(m) = \frac{2}{\pi} \frac{q}{m} \ln \left(\frac{m+2q}{2m_{\pi}} \right)\]
    \[\frac{d h}{d m}|_{m0} = h(m_0) [(8q_0^2)^{-1} - (2m_0^2)^{-1}] + (2\pi m_0^2)^{-1}\]
    \[D = \frac{f(0)}{\Gamma_0 m_0} = \frac{3}{\pi}\frac{m_\pi^2}{q_0^2} \ln \left(\frac{m_0 + 2q_0}{2 m_\pi }\right) + \frac{m_0}{2\pi q_0} - \frac{m_\pi^2 m_0}{\pi q_0^3}\]
  8. "BW" (ParticleBW)

    \[R(m) = \frac{1}{m_0^2 - m^2 - i m_0 \Gamma_0}\]

    (Source code, png, hires.png, pdf)

    _images/particle_model-5.png
  9. "LASS" (ParticleLass)

    \[R(m) = \frac{m}{q \cot \delta_B - i q} + e^{2i \delta_B}\frac{m_0 \Gamma_0 \frac{m_0}{q_0}} {(m_0^2 - m^2) - i m_0\Gamma_0 \frac{q}{m}\frac{m_0}{q_0}}\]
    \[\cot \delta_B = \frac{1}{a q} + \frac{1}{2} r q\]
    \[e^{2i\delta_B} = \cos 2 \delta_B + i \sin 2\delta_B = \frac{\cot^2\delta_B - 1}{\cot^2 \delta_B + 1} + i \frac{2 \cot \delta_B }{\cot^2 \delta_B + 1}\]
  10. "one" (ParticleOne)

    \[R(m) = 1\]

    (Source code, png, hires.png, pdf)

    _images/particle_model-6.png
  11. "exp" (ParticleExp)

    \[R(m) = e^{-|a| m}\]
  12. "exp_com" (ParticleExpCom)

    \[R(m) = e^{-(a+ib) m^2}\]

    lineshape when \(a=1.0, b=10.\)

    (Source code, png, hires.png, pdf)

    _images/particle_model-7.png
  13. "Flatte" (ParticleFlatte)

    Flatte like formula

\[R(m) = \frac{1}{m_0^2 - m^2 + i m_0 (\sum_{i} g_i \frac{q_i}{m})}\]
\[\begin{split}q_i = \begin{cases} \frac{\sqrt{(m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2)}}{2m} & (m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2) >= 0 \\ \frac{i\sqrt{|(m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2)|}}{2m} & (m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2) < 0 \\ \end{cases}\end{split}\]

(Source code, png, hires.png, pdf)

_images/particle_model-8.png

Required input arguments mass_list: [[m11, m12], [m21, m22]] for \(m_{i,1}, m_{i,2}\).

  14. "FlatteC" (ParticleFlatteC)

    Flatte like formula

\[R(m) = \frac{1}{m_0^2 - m^2 - i m_0 (\sum_{i} g_i \frac{q_i}{m})}\]
\[\begin{split}q_i = \begin{cases} \frac{\sqrt{(m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2)}}{2m} & (m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2) >= 0 \\ \frac{i\sqrt{|(m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2)|}}{2m} & (m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2) < 0 \\ \end{cases}\end{split}\]

Required input arguments mass_list: [[m11, m12], [m21, m22]] for \(m_{i,1}, m_{i,2}\).

  15. "FlatteGen" (ParticleFlateGen)

    More General Flatte like formula

\[R(m) = \frac{1}{m_0^2 - m^2 - i m_0 [\sum_{i} g_i \frac{q_i}{m} \times \frac{m_0}{|q_{i0}|} \times \frac{|q_i|^{2l_i}}{|q_{i0}|^{2l_i}} B_{l_i}'^2(|q_i|,|q_{i0}|,d)]}\]
\[\begin{split}q_i = \begin{cases} \frac{\sqrt{(m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2)}}{2m} & (m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2) >= 0 \\ \frac{i\sqrt{|(m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2)|}}{2m} & (m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2) < 0 \\ \end{cases}\end{split}\]

Required input arguments mass_list: [[m11, m12], [m21, m22]] for \(m_{i,1}, m_{i,2}\), and additional arguments l_list: [l1, l2] for \(l_i\).

has_bprime=False to remove \(B_{l_i}'^2(|q_i|,|q_{i0}|,d)\).

cut_phsp=True to set \(q_i = 0\) when \((m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2) < 0\)

The plot use parameters \(m_0=0.7, m_{0,1}=m_{0,2}=0.1, m_{1,1}=m_{1,2}=0.3, g_0=0.3,g_1=0.2,l_0=0,l_1=1\).

no_m0=True to set \(i m_0 \rightarrow i\) in the width part.

no_q0=True to remove \(\frac{m_0}{|q_{i0}|}\) and set others \(q_{i0}=1\).

  16. "Flatte2" (ParticleFlate2)

    General Flatte like formula.

\[R(m) = \frac{1}{m_0^2 - m^2 - i m_0 [\sum_{i} \color{red}{g_i^2}\color{black} \frac{q_i}{m} \times \frac{m_0}{|q_{i0}|} \times \frac{|q_i|^{2l_i}}{|q_{i0}|^{2l_i}} B_{l_i}'^2(|q_i|,|q_{i0}|,d)]}\]
\[\begin{split}q_i = \begin{cases} \frac{\sqrt{(m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2)}}{2m} & (m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2) >= 0 \\ \frac{i\sqrt{|(m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2)|}}{2m} & (m^2-(m_{i,1}+m_{i,2})^2)(m^2-(m_{i,1}-m_{i,2})^2) < 0 \\ \end{cases}\end{split}\]

It has the same options as FlatteGen.

  17. "KMatrixSingleChannel" (KmatrixSingleChannelParticle)

    K matrix model for single channel multi pole.

    \[K = \sum_{i} \frac{m_i \Gamma_i(m)}{m_i^2 - m^2 }\]
    \[P = \sum_{i} \frac{\beta_i m_0 \Gamma_0 }{ m_i^2 - m^2}\]

    the barrier factor is included in gls

    \[R(m) = (1-iK)^{-1} P\]

    Required mass_list: [pole1, pole2] and width_list: [width1, width2].

    (Source code, png, hires.png, pdf)

    _images/particle_model-13.png
  18. "KMatrixSplitLS" (KmatrixSplitLSParticle)

    K matrix model for single channel multi pole and the same channel with different (l, s) coupling.

    \[K_{a,b} = \sum_{i} \frac{m_i \sqrt{\Gamma_{a,i}(m)\Gamma_{b,i}(m)}}{m_i^2 - m^2 }\]
    \[P_{b} = \sum_{i} \frac{\beta_i m_0 \Gamma_{b,i0} }{ m_i^2 - m^2}\]

    the barrier factor is included in gls

    \[R(m) = (1-iK)^{-1} P\]
  19. "KmatrixSimple" (KmatrixSimple)

    simple Kmatrix formula.

    K-matrix

    \[K_{i,j} = \sum_{a} \frac{g_{i,a} g_{j,a}}{m_a^2 - m^2+i\epsilon}\]

    P-vector

    \[P_{i} = \sum_{a} \frac{\beta_{a} g_{i,a}}{m_a^2 - m^2 +i\epsilon} + f_{bkg,i}\]

    total amplitude

    \[R(m) = n (1 - K i \rho n^2)^{-1} P\]

    barrier factor

    \[n_{ii} = q_i^l B'_l(q_i, 1/d, d)\]

    phase space factor

    \[\rho_{ii} = q_i/m\]

    \(q_i\) is 0 when below threshold

  20. "BWR_LS" (ParticleBWRLS)

    Breit Wigner with split ls running width

    \[R_i (m) = \frac{g_i}{m_0^2 - m^2 - im_0 \Gamma_0 \frac{\rho}{\rho_0} (\sum_{i} g_i^2)}\]

    where \(\rho = 2q/m\), and the partial width factor is

    \[g_i = \gamma_i \frac{q^l}{q_0^l} B_{l_i}'(q,q_0,d)\]

    and keep normalize as

    \[\sum_{i} \gamma_i^2 = 1.\]

    The normalization is implemented via (\(\cos \theta_0, \sin\theta_0 \cos \theta_1, \cdots, \prod_i \sin\theta_i\)).

  21. "BWR_LS2" (ParticleBWRLS2)

    Breit Wigner with split ls running width, where each one uses its own l,

    \[R_i (m) = \frac{1}{m_0^2 - m^2 - im_0 \Gamma_0 \frac{\rho}{\rho_0} (g_i^2)}\]

    where \(\rho = 2q/m\), and the partial width factor is

    \[g_i = \gamma_i \frac{q^l}{q_0^l} B_{l_i}'(q,q_0,d)\]
  22. "MultiBWR" (ParticleMultiBWR)

    Combine Multi BWR into one particle

    (Source code, png, hires.png, pdf)

    _images/particle_model-14.png
  23. "MultiBW" (ParticleMultiBW)

    Combine Multi BW into one particle

  24. "linear_npy" (InterpLinearNpy)

    Linear interpolation model from a npy file with array of [mi, re(ai), im(ai)]. Required file: path_of_file.npy, for the path of npy file.

    The example is exp(5 I m).

    (Source code, png, hires.png, pdf)

    _images/particle_model-15.png
  25. "linear_txt" (InterpLinearTxt)

Linear interpolation model from a txt file with array of [mi, re(ai), im(ai)].

Required file: path_of_file.txt, for the path of txt file.

The example is exp(5 I m).

(Source code, png, hires.png, pdf)

_images/particle_model-16.png
  26. "interp" (Interp)

linear interpolation for real number

  27. "interp_c" (Interp)

linear interpolation for complex number

  28. "spline_c" (Interp1DSpline)

Spline interpolation function for model independent resonance

  29. "interp1d3" (Interp1D3)

Piecewise third order interpolation

  30. "interp_lagrange" (Interp1DLang)

Lagrange interpolation

  31. "interp_hist" (InterpHist)

Interpolation for each bins as constant

  32. "hist_idx" (InterpHistIdx)

Interpolation for each bins as constant

use

min_m: 0.19
max_m: 0.91
interp_N: 8
with_bound: True

for mass range [0.19, 0.91] and 7 bins

The first and last are fixed to zero unless set with_bound: True.

This is an example of \(k\exp (i k)\) for point k.

(Source code, png, hires.png, pdf)

_images/particle_model-17.png
  33. "spline_c_idx" (Interp1DSplineIdx)

Spline function in index way.

use

min_m: 0.19
max_m: 0.91
interp_N: 8
with_bound: True

for mass range [0.19, 0.91] and 8 interpolation points

The first and last are fixed to zero unless set with_bound: True.

This is an example of \(k\exp (i k)\) for point k.

(Source code, png, hires.png, pdf)

_images/particle_model-18.png
  34. "sppchip" (InterpSPPCHIP)

    Shape-Preserving Piecewise Cubic Hermite Interpolation Polynomial. It is monotonic in each interval.

    (Source code, png, hires.png, pdf)

    _images/particle_model-19.png
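Many of the lineshapes above are simple to evaluate directly. For instance the constant-width "BW" model \(R(m) = 1/(m_0^2 - m^2 - i m_0 \Gamma_0)\) can be sketched in numpy (an illustration, not the tf_pwa implementation):

```python
import numpy as np

def breit_wigner(m, m0, gamma0):
    # constant-width Breit-Wigner: R(m) = 1 / (m0^2 - m^2 - i m0 Gamma0)
    m = np.asarray(m, dtype=complex)
    return 1.0 / (m0**2 - m**2 - 1j * m0 * gamma0)

m = np.linspace(0.5, 1.5, 201)
r = breit_wigner(m, m0=1.0, gamma0=0.1)
# |R| peaks at m = m0 for the constant-width form
peak = float(m[np.argmax(np.abs(r))])
```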

Available Decay Model

2-body decays

  1. "gls-bf", "default" (HelicityDecay)

default decay model

The total amplitude is

\[A = H_{\lambda_{B},\lambda_{C}}^{A \rightarrow B+C} D^{J_A*}_{\lambda_{A}, \lambda_{B}-\lambda_{C}} (\varphi,\theta,0)\]

The helicity coupling is

\[H_{\lambda_{B},\lambda_{C}}^{A \rightarrow B+C} = \sum_{ls} g_{ls} \sqrt{\frac{2l+1}{2 J_{A}+1}} \langle l 0; s \delta|J_{A} \delta\rangle \langle J_{B} \lambda_{B} ;J_{C} -\lambda_{C} | s \delta \rangle q^{l} B_{l}'(q, q_0, d)\]

The fit parameters are \(g_{ls}\)

There are several options:

(1). has_bprime=False will remove the \(B_{l}'(q, q_0, d)\) part.

(2). has_barrier_factor=False will remove the \(q^{l} B_{l}'(q, q_0, d)\) part.

(3). barrier_factor_norm=True will replace \(q^l\) with \((q/q_{0})^l\)

(4). below_threshold=True will replace the mass used to calculate \(q_0\) with

\[m_0^{eff} = m^{min} + \frac{m^{max} - m^{min}}{2}\left(1+\tanh \frac{m_0 - \frac{m^{max} + m^{min}}{2}}{m^{max} - m^{min}}\right)\]

(5). l_list=[l1, l2] and ls_list=[[l1, s1], [l2, s2]] options give the list of all possible LS used in the decay.

(6). no_q0=True will set the \(q_0=1\).

  2. "helicity_full" (HelicityDecayNP)

    Full helicity amplitude

    \[A = H_{m_1, m_2} D_{m_0, m_1-m_2}^{J_0 *}(\varphi, \theta,0)\]

    The fit parameters are \(H_{m_1, m_2}\).

  3. "helicity_parity" (HelicityDecayP)

    \[H_{-m_1, -m_2} = P_0 P_1 P_2 (-1)^{J_1 + J_2 - J_0} H_{m_1, m_2}\]
  4. "gls-cpv" (HelicityDecayCPV)

    decay model for CPV
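The below_threshold effective mass used by option (4) of the default decay model can be written out directly (a direct transcription of the formula above):

```python
import math

def m0_eff(m0, m_min, m_max):
    # smooth mapping of m0 into (m_min, m_max) via a tanh profile
    mid = (m_max + m_min) / 2
    half = (m_max - m_min) / 2
    return m_min + half * (1 + math.tanh((m0 - mid) / (m_max - m_min)))
```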

Tensorflow and Cudatoolkit Version

  1. Why are there two separate conda requirements files?
    • requirements-min.txt limits the tensorflow version up to 2.2. Beyond this version, conda will install the wrong dependency versions, in particular cudatoolkit versions and sometimes python3.

    • tensorflow_2_6_requirements.txt manually selects the correct python and cudatoolkit versions to match the tensorflow-2.6.0 build on conda-forge.

  2. Should I use the latest tensorflow version?
    • We highly recommend Ampere card users (RTX 30 series, for example) to install their conda environments with tensorflow_2_6_requirements.txt, which uses cudatoolkit version 11.2.

  3. Why should Ampere users use cudatoolkit version > 11.0?
    • To avoid a few minutes of overhead due to JIT compilation.

    • cudatoolkit versions < 11.0 do not have pre-compiled CUDA binaries for the Ampere architecture. So older cudatoolkit versions have to JIT compile the PTX code every time tensorflow uses the GPU, hence the overhead.

    • See this explanation about old CUDA versions and JIT compile.

  4. Will you update the tensorflow_2_X_requirements.txt file regularly to the latest available version on `conda`?
    • We do not guarantee any regular updates on tensorflow_2_X_requirements.txt.

    • We will update this should a particular build become unavailable on conda, or should a new release of GPUs require a tensorflow and cudatoolkit update. Please notify us if this is the case.

FAQ

1. Precision Loss

message: Desired error not necessarily achieved due to precision loss.

Check the jac value:

1.1 If all absolute values are small, it is acceptable because of the limited precision.

1.2 If some absolute values are large, there are bad parameters or problems in the model.

1.3 Avoid negative weights.

2. NaN value in fit

message: NaN result encountered.

2.1 Check the data.

There is a script (scripts/check_nan.py) to check it.

2.1.1 No strange values in the data (nan, inf, …).

2.1.2 The data order should be \(E, p_x, p_y, p_z\); \(E\) is the first.

2.1.3 The mass should be valid, \(E^2 - p_x^2 - p_y^2 - p_z^2 > 0\), and for any combination of final particles, \(m_{ab} > m_a + m_b\).

2.1.4 Avoid 0 in weights.
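The checks in 2.1.1–2.1.3 can be scripted with numpy (a standalone sketch; the helper shipped with TFPWA is scripts/check_nan.py):

```python
import numpy as np

def check_momenta(p4):
    # p4: array of shape (n_events, n_particles, 4), ordered as (E, px, py, pz)
    assert np.all(np.isfinite(p4)), "nan/inf found in data"
    e = p4[..., 0]
    p2 = np.sum(p4[..., 1:] ** 2, axis=-1)
    m2 = e**2 - p2
    assert np.all(m2 > 0), "invalid mass: E^2 - |p|^2 <= 0"
    return np.sqrt(m2)
```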

2.2 Check the model.

2.2.1 The resonance masses should be valid, for example within the mass range (m1+m2, m0-m3); values outside the threshold require special options.

3. NaN value when getting params error.

numpy.linalg.LinAlgError: Array must not contain infs or NaN.

3.1 Similar to sec. 2.2.

3.2 Bad fit parameters: a too-wide or too-narrow width, parameters at the boundary, and so on.

3.3 Bad gradients: no gradient, or incorrect gradients for some fit parameters.

4. Singular Matrix when getting params error

numpy.linalg.LinAlgError: Singular matrix

4.1 Some free parameters are not used in the model.

4.2 numpy was used for the calculation of variables. The calculation has to be done in get_amp with TensorFlow.

...
  def init_params(self):
     self.a = self.add_var("a")
  def get_amp(self, data, *args, **kwargs):
     # avoid using numpy on variables like this:
     a = np.sin(self.a())
     # use tensorflow instead:
     a = tf.sin(self.a())

5. Out of memory (OOM)

5.1 GPU

tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape ... device:GPU:0 by allocator GPU_0_bfc [Op:...]

5.1.1 Reduce the batch size, e.g. config.fit(batch=65000) and config.get_params_error(batch=13000) in fit.py.

5.1.2 Use options for large data sizes, such as lazy call:

# config.yml
data:
   lazy_call: True

5.1.3 Try to use a smaller data sample, or simpler cases (fewer final particles).

5.1.4 Some special models (such as interpolation models) require large memory; try other models.

5.2 CPU

killed

5.2.1 Try to allocate more memory. There should be some options for the job.

5.2.2 Similar to sec. 5.1.

6. Bad config.yml

6.1 yaml parse error

yaml.parser.ParserError: while parsing ..

Check the yaml file (see https://yaml.org): the indentation, special characters ,:}], unicode, and so on.

6.2 Decay chain

AssertionError: not only one top particle

The decay chain should be complete. All items in decay should combine into a chain from the initial particle to the final particles.

6.3 Decay chain 2

RuntimeError: not decay chain aviable, check you config.yml

6.3.1 Similar to sec. 6.2.

6.3.2 Check the information in the removed decay chains to see the reason why those decays are not available.

ls not aviable means no possible LS combination is allowed. Check the spin and parity. To allow parity violation, add p_break: True to the decay.

For starters

Take the decay in config_sample.yml for example: we will generate a toy sample, and then use it to conduct an analysis.

First, we use the class ConfigLoader to load the configuration file, and thus building the amplitude expression.

from tf_pwa.config_loader import ConfigLoader
config = ConfigLoader("config_sample.yml")
amp = config.get_amplitude()

We can also use a json file to set the parameters in the amplitude formula config.set_params("parameters.json"). Otherwise, the parameters are set randomly.

Next, we can use functions in tf_pwa.applications to directly generate data samples. We need to provide the masses of A and B, C, D to generate the PhaseSpace MC, and then use it to generate the toy data.

from tf_pwa.applications import gen_mc, gen_data
PHSP = gen_mc(mother=4.6, daughters=[2.00698, 2.01028, 0.13957], number=100000, outfile="PHSP.dat")
data = gen_data(amp, Ndata=5000, mcfile="PHSP.dat", genfile="data.dat")

Now that we have updated data.dat and PHSP.dat, we’d better load the configuration file again, and then fit the data.

config = ConfigLoader("config_sample.yml")
fit_result = config.fit()

Fitting is the major part of an analysis, and it could also be the most time-consuming part. For this example (the complexity of the amplitude expression matters a lot), the time for fitting is about xxx (running on xxxGPU). Then we can step further to complete the analysis, like calculating the fit fractions.

errors = config.get_params_error(fit_result)
fit_result.save_as("final_parameters.json")
fit_frac, err_frac = config.cal_fitfractions()

We can use error_print in tf_pwa.utils to print the fitting parameters as well as the fit fractions.

from tf_pwa.utils import error_print
print("########## fitting parameters:")
for key, value in config.get_params().items():
   print(key, error_print(value, errors.get(key, None)))
print("########## fit fractions:")
for i in fit_frac:
   print(i, " : " + error_print(fit_frac[i], err_frac.get(i, None)))

If the plotting options are also provided in the config_sample.yml, we can also plot the distributions of variables indicated in the configuration file.

config.plot_partial_wave(fit_result, prefix="figure/")

The figures will be saved under path figure. Here are the three invariant mass pairs for example.

(three pictures here)

We can do a lot more using tf_pwa. For more examples, please see path tutorials.

Indices and tables