sasmodels package¶
Subpackages¶
Submodules¶
sasmodels.alignment module¶
GPU data alignment.
Some web sites say that maximizing performance for OpenCL code requires aligning data on certain memory boundaries. The following functions provide this service:
align_data()
aligns an existing array, returning a new array of the
correct alignment.
align_empty()
creates an empty array of the correct alignment.
Set alignment to the boundary attribute of the GPU environment.
Note: This code is unused. So far, tests have not demonstrated any improvement from forcing correct alignment. The tests should be repeated with arrays forced away from the target boundaries to decide whether it is really required.
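The behaviour of these two helpers can be sketched in pure NumPy (an illustration of the over-allocate-and-slice idea, not the sasmodels implementation):

```python
import numpy as np

def align_empty(shape, dtype, alignment=128):
    """Return an empty array whose data pointer lies on an *alignment*-byte boundary."""
    dtype = np.dtype(dtype)
    nbytes = int(np.prod(shape)) * dtype.itemsize
    # Over-allocate by one alignment unit, then slice to a view that
    # starts exactly on the boundary.
    buf = np.empty(nbytes + alignment, dtype=np.uint8)
    offset = -buf.ctypes.data % alignment
    return buf[offset:offset + nbytes].view(dtype).reshape(shape)

def align_data(x, dtype, alignment=128):
    """Return an aligned copy of *x*, converted to *dtype*."""
    out = align_empty(x.shape, dtype, alignment)
    out[...] = x
    return out
```

The returned array is a view into the over-allocated buffer, so the buffer stays alive as long as the view does.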
- sasmodels.alignment.align_data(x, dtype, alignment=128)¶
Return a copy of an array on the alignment boundary.
- sasmodels.alignment.align_empty(shape, dtype, alignment=128)¶
Return an empty array aligned on the alignment boundary.
sasmodels.bumps_model module¶
Wrap sasmodels for direct use by bumps.
Model
is a wrapper for the sasmodels kernel which defines a
bumps Parameter box for each kernel parameter. Model accepts keyword
arguments to set the initial value for each parameter.
Experiment
combines the Model function with a data file loaded by
the sasview data loader. Experiment takes a cutoff parameter controlling
how far the polydispersity integral extends.
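The parameter-boxing behaviour of Model can be illustrated with a toy stand-in (the TinyModel and TinyParameter names are hypothetical, not the sasmodels or bumps API):

```python
class TinyParameter:
    """Minimal stand-in for a bumps Parameter: a named, fittable value."""
    def __init__(self, name, value):
        self.name = name
        self.value = value

class TinyModel:
    """Sketch of how Model wraps each kernel parameter in a parameter box,
    with keyword arguments overriding the kernel defaults."""
    def __init__(self, defaults, **kwargs):
        self._pars = {
            name: TinyParameter(name, kwargs.get(name, default))
            for name, default in defaults.items()
        }
    def parameters(self):
        return dict(self._pars)
```

With defaults {'radius': 50.0, 'sld': 1.0}, TinyModel(defaults, radius=120.0) yields a radius parameter initialized to 120.0 and an sld parameter left at its default.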
- class sasmodels.bumps_model.Any(*args, **kwargs)¶
Bases:
object
Special type indicating an unconstrained type.
Any is compatible with every type.
Any assumed to have all methods.
All values assumed to be instances of Any.
Note that all the above statements are true from the point of view of static type checkers. At runtime, Any should not be used with instance checks.
- __module__ = 'typing'¶
- static __new__(cls, *args, **kwargs)¶
- sasmodels.bumps_model.BumpsParameter¶
alias of
Parameter
- class sasmodels.bumps_model.Data1D(x: ndarray | None = None, y: ndarray | None = None, dx: ndarray | None = None, dy: ndarray | None = None)¶
Bases:
object
1D data object.
Note that this definition matches the attributes from sasview, with some generic 1D data vectors and some SAS specific definitions. Some refactoring to allow consistent naming conventions between 1D, 2D and SESANS data would be helpful.
Attributes
x, dx: \(q\) vector and gaussian resolution
y, dy: \(I(q)\) vector and measurement uncertainty
mask: values to include in plotting/analysis
dxl: slit widths for slit smeared data, with dx ignored
qmin, qmax: range of \(q\) values in x
filename: label for the data line
_xaxis, _xunit: label and units for the x axis
_yaxis, _yunit: label and units for the y axis
- __init__(x: ndarray | None = None, y: ndarray | None = None, dx: ndarray | None = None, dy: ndarray | None = None) None ¶
- __module__ = 'sasmodels.data'¶
- xaxis(label: str, unit: str) None ¶
set the x axis label and unit
- yaxis(label: str, unit: str) None ¶
set the y axis label and unit
- class sasmodels.bumps_model.Data2D(x: ndarray | None = None, y: ndarray | None = None, z: ndarray | None = None, dx: ndarray | None = None, dy: ndarray | None = None, dz: ndarray | None = None)¶
Bases:
object
2D data object.
Note that this definition matches the attributes from sasview. Some refactoring to allow consistent naming conventions between 1D, 2D and SESANS data would be helpful.
Attributes
qx_data, dqx_data: \(q_x\) matrix and gaussian resolution
qy_data, dqy_data: \(q_y\) matrix and gaussian resolution
data, err_data: \(I(q)\) matrix and measurement uncertainty
mask: values to exclude from plotting/analysis
qmin, qmax: range of \(q\) values in x
filename: label for the data line
_xaxis, _xunit: label and units for the x axis
_yaxis, _yunit: label and units for the y axis
_zaxis, _zunit: label and units for the z axis
Q_unit, I_unit: units for Q and intensity
x_bins, y_bins: grid steps in x and y directions
- __init__(x: ndarray | None = None, y: ndarray | None = None, z: ndarray | None = None, dx: ndarray | None = None, dy: ndarray | None = None, dz: ndarray | None = None) None ¶
- __module__ = 'sasmodels.data'¶
- xaxis(label: str, unit: str) None ¶
set the x axis label and unit
- yaxis(label: str, unit: str) None ¶
set the y axis label and unit
- zaxis(label: str, unit: str) None ¶
set the z axis label and unit
- class sasmodels.bumps_model.DataMixin¶
Bases:
object
DataMixin captures the common aspects of evaluating a SAS model for a particular data set, including calculating Iq and evaluating the resolution function. It is used in particular by
DirectModel
, which evaluates a SAS model taking parameters as keyword arguments to the calculator method, and by bumps_model.Experiment
, which wraps the model and data for use with the Bumps fitting engine. It is not currently used by sasview_model.SasviewModel
since this will require a number of changes to SasView before we can do it.
_interpret_data initializes the data structures necessary to manage the calculations. This sets attributes in the child class such as data_type and resolution.
_calc_theory evaluates the model at the given control values.
_set_data sets the intensity data in the data object, possibly with random noise added. This is useful for simulating a dataset with the results from _calc_theory.
- __module__ = 'sasmodels.direct_model'¶
- _calc_theory(pars: Mapping[str, float], cutoff: float = 0.0) ndarray ¶
- _interpret_data(data: Data1D | Data2D | SesansData, model: KernelModel) None ¶
- _set_data(Iq: ndarray, noise: float | None = None) None ¶
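The three-step flow can be caricatured as follows (a toy mixin with a trivially flat model; names and behaviour are illustrative only, not the sasmodels code):

```python
import numpy as np

class ToyDataMixin:
    """Caricature of the DataMixin flow: interpret, calculate, store."""
    def _interpret_data(self, x, y, dy):
        # Set up the structures needed to manage the calculation.
        self.data_type = "Iq"
        self._x, self._y, self._dy = x, y, dy
    def _calc_theory(self, pars):
        # Evaluate a (trivially flat) model at the given control values.
        return pars["background"] * np.ones_like(self._x)
    def _set_data(self, Iq, noise=None):
        # Store the theory back as data, optionally with simulated noise,
        # which is how a dataset can be simulated from _calc_theory.
        if noise is not None:
            Iq = Iq + np.random.normal(scale=noise, size=Iq.shape)
        self._y = Iq
```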
- class sasmodels.bumps_model.Experiment(data: Data1D | Data2D, model: Model, cutoff: float = 1e-05, name: str | None = None, extra_pars: Dict[str, Parameter] | None = None)¶
Bases:
DataMixin
Bumps wrapper for a SAS experiment.
data is a
data.Data1D
, data.Data2D
or data.SesansData
object. Use data.empty_data1D()
or data.empty_data2D()
to define \(q, \Delta q\) calculation points for displaying the SANS curve when there is no measured data.
model is a
Model
object.
cutoff is the integration cutoff, which avoids computing the SAS model where the polydispersity weight is low.
The resulting model can be used directly in a Bumps FitProblem call.
- __init__(data: Data1D | Data2D, model: Model, cutoff: float = 1e-05, name: str | None = None, extra_pars: Dict[str, Parameter] | None = None) None ¶
- __module__ = 'sasmodels.bumps_model'¶
- _cache: Dict[str, ndarray] = None¶
- nllf() float ¶
Return the negative log likelihood of seeing data given the model parameters, up to a normalizing constant which depends on the data uncertainty.
- numpoints() float ¶
Return the number of data points
- parameters() Dict[str, Parameter] ¶
Return a dictionary of parameters
- plot(view: str = None) None ¶
Plot the data and residuals.
- residuals() ndarray ¶
Return theory minus data normalized by uncertainty.
- property resolution: None | Resolution¶
resolution.Resolution
applied to the data, if any.
- save(basename: str) None ¶
Save the model parameters and data into a file.
Not implemented, except for SESANS fits.
- simulate_data(noise: float = None) None ¶
Generate simulated data.
- theory() ndarray ¶
Return the theory corresponding to the model parameters.
This method uses lazy evaluation, and requires model.update() to be called when the parameters have changed.
- update() None ¶
Call when model parameters have changed and theory needs to be recalculated.
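For Gaussian measurement uncertainties, the relationship between residuals() and nllf() can be sketched in pure NumPy (an illustration, not the Bumps implementation):

```python
import numpy as np

def residuals(theory, data, dy):
    # Theory minus data, normalized by the measurement uncertainty.
    return (theory - data) / dy

def nllf(theory, data, dy):
    # Negative log likelihood up to a normalizing constant:
    # half the sum of squared normalized residuals.
    r = residuals(theory, data, dy)
    return 0.5 * np.sum(r**2)
```

When theory matches data exactly the nllf is zero; each one-sigma deviation adds 0.5.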
- class sasmodels.bumps_model.KernelModel¶
Bases:
object
Model definition for the compute engine.
- __module__ = 'sasmodels.kernel'¶
- dtype: dtype = None¶
- make_kernel(q_vectors: List[ndarray]) Kernel ¶
Instantiate a kernel for evaluating the model at q_vectors.
- release() None ¶
Free resources associated with the kernel.
- class sasmodels.bumps_model.Model(model: KernelModel, **kwargs: Dict[str, float | Parameter])¶
Bases:
object
Bumps wrapper for a SAS model.
model is a runnable module as returned from
core.load_model()
.
cutoff is the polydispersity weight cutoff.
Any additional key=value pairs are model dependent parameters.
- __init__(model: KernelModel, **kwargs: Dict[str, float | Parameter]) None ¶
- __module__ = 'sasmodels.bumps_model'¶
- parameters() Dict[str, Parameter] ¶
Return a dictionary of parameters objects for the parameters, excluding polydispersity distribution type.
- state() Dict[str, Parameter | str] ¶
Return a dictionary of current values for all the parameters, including polydispersity distribution type.
- class sasmodels.bumps_model.ModelInfo¶
Bases:
object
Interpret the model definition file, categorizing the parameters.
The module can be loaded with a normal python import statement if you know which module you need, or with __import__('sasmodels.model.'+name) if the name is in a string.
The structure should be mostly static, other than the delayed definition of Iq, Iqac and Iqabc if they need to be defined.
- Imagnetic: None | str | Callable[[...], ndarray] = None¶
Returns I(qx, qy, a, b, …). The interface follows
Iq
.
- Iq: None | str | Callable[[...], ndarray] = None¶
Returns I(q, a, b, …) for parameters a, b, etc. defined by the parameter table. Iq can be defined as a python function or as a C function. If it is defined in C, then set Iq to the body of the C function, including the return statement. This function takes values for q and each of the parameters as separate double values (which may be converted to float or long double by sasmodels). All source code files listed in
source
will be loaded before the Iq function is defined. If Iq is not present, then sources should define static double Iq(double q, double a, double b, …) which will return I(q, a, b, …). Multiplicity parameters are sent as pointers to doubles. Constants in floating point expressions should include the decimal point. See generate
for more details. If have_Fq is True, then Iq should return an interleaved array of \([\sum F(q_1), \sum F^2(q_1), \ldots, \sum F(q_n), \sum F^2(q_n)]\).
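A python-defined Iq might look like the following (a hypothetical Guinier-style model, not one shipped with sasmodels; the Iq.vectorized flag follows the sasmodels convention for python kernels that accept a vector of q values):

```python
import numpy as np

def Iq(q, rg):
    """I(q) for an illustrative Guinier form with radius of gyration rg."""
    return np.exp(-(q * rg)**2 / 3.0)
Iq.vectorized = True  # operates on a whole vector of q values at once
```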
- Iqabc: None | str | Callable[[...], ndarray] = None¶
Returns I(qa, qb, qc, a, b, …). The interface follows
Iq
.
- Iqac: None | str | Callable[[...], ndarray] = None¶
Returns I(qab, qc, a, b, …). The interface follows
Iq
.
- Iqxy: None | str | Callable[[...], ndarray] = None¶
Returns I(qx, qy, a, b, …). The interface follows
Iq
.
- __init__() None ¶
- __module__ = 'sasmodels.modelinfo'¶
- base: ParameterTable = None¶
For reparameterized systems, base is the base parameter table. For normal systems it is simply a copy of parameters.
- basefile: str | None = None¶
Base file is usually filename, but not when a model has been reparameterized, in which case it is the file containing the original model definition. This is needed to signal an additional dependency for the model time stamp, and so that the compiler reports correct file for syntax errors.
- c_code: str | None = None¶
inline source code, added after all elements of source
- category: str | None = None¶
Location of the model description in the documentation. This takes the form of “section” or “section:subsection”. So for example, porod uses category=”shape-independent” so it is in the Shape-Independent Functions section whereas capped_cylinder uses: category=”shape:cylinder”, which puts it in the Cylinder Functions section.
- composition: Tuple[str, List[ModelInfo]] | None = None¶
Composition is None if this is an independent model, or a tuple with the composition type (‘product’ or ‘mixture’) and a list of
ModelInfo
blocks for the composed objects. This allows us to rebuild a complete mixture or product model from the info block. composition is not given in the model definition file, but instead arises when the model is constructed using names such as sphere*hardsphere or cylinder+sphere.
- description: str = None¶
Long description of the model.
- docs: str = None¶
Doc string from the top of the model file. This should be formatted using ReStructuredText format, with latex markup in “.. math” environments, or in dollar signs. This will be automatically extracted to a .rst file by
generate.make_doc()
, then converted to HTML or PDF by Sphinx.
- filename: str | None = None¶
Full path to the file defining the kernel, if any.
- form_volume: None | str | Callable[[ndarray], float] = None¶
Returns the form volume for python-based models. Form volume is needed for volume normalization in the polydispersity integral. If no parameters are volume parameters, then form volume is not needed. For C-based models (with
source
defined, or with Iq
defined using a string containing C code), form_volume must also be C code, either defined as a string or in the sources.
- get_hidden_parameters(control: int) Set[str] ¶
Returns the set of hidden parameters for the model. control is the value of the control parameter. Note that multiplicity models have an implicit control parameter, which is the parameter that controls the multiplicity.
- have_Fq = False¶
True if the model defines an Fq function with signature
void Fq(double q, double *F1, double *F2, ...)
- hidden: Callable[[int], Set[str]] | None = None¶
Different variants require different parameters. In order to show just the parameters needed for the variant selected, you should provide a function hidden(control) -> set([‘a’, ‘b’, …]) indicating which parameters need to be hidden. For multiplicity models, you need to use the complete name of the parameter, including its number. So for example, if variant “a” uses only sld1 and sld2, then sld3, sld4 and sld5 of multiplicity parameter sld[5] should be in the hidden set.
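Following the sld[5] example above, such a function might look like this (the control value for variant "a" is illustrative):

```python
def hidden(control):
    """Hide the unused slds of multiplicity parameter sld[5] when the
    variant selected by *control* uses only sld1 and sld2."""
    if control == 0:  # hypothetical variant "a"
        return {"sld3", "sld4", "sld5"}
    return set()
```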
- id: str = None¶
Id of the kernel used to load it from the filesystem.
- lineno: Dict[str, int] = None¶
Line numbers for symbols defining C code
- name: str = None¶
Display name of the model. It defaults to the model id, but with the parts capitalized, so for example core_shell defaults to “Core Shell”.
- opencl: bool = None¶
True if the model can be run as an opencl model. If for some reason the model cannot be run in opencl (e.g., because the model passes functions by reference), then set this to false.
- parameters: ParameterTable = None¶
Model parameter table. Parameters are defined using a list of parameter definitions, each of which contains the parameter name, units, default value, limits, type and description. See
Parameter
for details on the individual parameters. The parameters are gathered into a ParameterTable
, which provides various views into the parameter list.
- profile: Callable[[ndarray], None] | None = None¶
Returns a model profile curve x, y. If profile is defined, this curve will appear in response to the Show button in SasView. Use
profile_axes
to set the axis labels. Note that y values will be scaled by 1e6 before plotting.
- profile_axes: Tuple[str, str] = None¶
Axis labels for the
profile
plot. The default is [‘x’, ‘y’]. Only the x component is used for now.
- radius_effective: None | Callable[[int, ndarray], float] = None¶
Computes the effective radius of the shape given the volume parameters. Only needed for models defined in python that can be used for the monodisperse approximation for non-dilute solutions (P@S). The first argument is the integer effective radius mode, with default 0.
- radius_effective_modes: List[str] = None¶
List of options for computing the effective radius of the shape, or None if the model is not usable as a form factor model.
- random: Callable[[], Dict[str, float]] | None = None¶
Returns a random parameter set for the model
- sesans: Callable[[ndarray], ndarray] | None = None¶
Returns sesans(z, a, b, …) for models which can directly compute the SESANS correlation function. Note: not currently implemented.
- shell_volume: None | str | Callable[[ndarray], float] = None¶
Returns the shell volume for python-based models. Form volume and shell volume are needed for volume normalization in the polydispersity integral and structure interactions for hollow shapes. If no parameters are volume parameters, then shell volume is not needed. For C-based models (with
source
defined, or with Iq
defined using a string containing C code), shell_volume must also be C code, either defined as a string or in the sources.
- single: bool = None¶
True if the model can be computed accurately with single precision. This is True by default, but models such as bcc_paracrystal set it to False because they require double precision calculations.
- source: List[str] = None¶
List of C source files used to define the model. The source files should define the Iq function, and possibly Iqac or Iqabc if the model defines orientation parameters. Files containing the most basic functions must appear first in the list, followed by the files that use those functions.
- structure_factor: bool = None¶
True if the model is a structure factor used to model the interaction between form factor models. This will default to False if it is not provided in the file.
- tests: List[Tuple[Mapping[str, float | List[float]], str | float | List[float] | Tuple[float, float] | List[Tuple[float, float]], float | List[float]]] = None¶
The set of tests that must pass. The format of the tests is described in
model_test
.
- title: str = None¶
Short description of the model.
- translation: str | None = None¶
Parameter translation code converting the parameter table from the caller to the base table used to evaluate the model.
- valid: str = None¶
Expression which evaluates to True if the input parameters are valid and the model can be computed, or False otherwise. Invalid parameter sets will not be included in the weighted \(I(Q)\) calculation or its volume normalization. Use C syntax for the expressions, with || for or, && for and, and ! for not. Any non-magnetic parameter can be used.
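An illustrative valid expression as it might appear in a model definition file (hypothetical parameter names):

```python
# C-syntax constraint: radius and length must be positive, and the
# cap radius must not be smaller than the cylinder radius.
valid = "radius > 0 && length > 0 && !(radius_cap < radius)"
```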
- class sasmodels.bumps_model.Reference(obj, attr, **kw)¶
Bases:
Parameter
Create an adaptor so that a model attribute can be treated as if it were a parameter. This allows only direct access, wherein the storage for the parameter value is provided by the underlying model.
Indirect access, wherein the storage is provided by the parameter, cannot be supported since the parameter has no way to detect that the model is asking for the value of the attribute. This means that model attributes cannot be assigned to parameter expressions without some trigger to update the values of the attributes in the model.
- __init__(obj, attr, **kw)¶
- __module__ = 'bumps.parameter'¶
- to_dict()¶
Return a dict representation of the object.
- property value¶
- class sasmodels.bumps_model.Resolution¶
Bases:
object
Abstract base class defining a 1D resolution function.
q is the set of q values at which the data is measured.
q_calc is the set of q values at which the theory needs to be evaluated. This may extend and interpolate the q values.
apply is the method to call with I(q_calc) to compute the resolution smeared theory I(q).
- __module__ = 'sasmodels.resolution'¶
- __weakref__¶
list of weak references to the object
- apply(theory)¶
Smear theory by the resolution function, returning Iq.
- q: ndarray = None¶
- q_calc: ndarray = None¶
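The Resolution contract above (q, q_calc, apply) can be illustrated with a minimal sketch. MatrixResolution and its weight matrix are hypothetical illustrations, not part of sasmodels, which provides concrete subclasses such as pinhole and slit resolution:

```python
import numpy as np

class MatrixResolution:
    """Hypothetical Resolution subclass: smearing expressed as a
    weight matrix mapping I(q_calc) onto the measured q points."""
    def __init__(self, q, q_calc, weights):
        self.q = np.asarray(q)              # measured q values
        self.q_calc = np.asarray(q_calc)    # theory evaluation points
        self.weights = np.asarray(weights)  # shape (len(q), len(q_calc))

    def apply(self, theory):
        # Smear I(q_calc) to produce the resolution-broadened I(q).
        return self.weights @ np.asarray(theory)
```

With an identity weight matrix, apply() returns the theory unchanged, which is the no-resolution limit.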
- sasmodels.bumps_model.create_parameters(model_info: ModelInfo, **kwargs: float | str | Parameter) Tuple[Dict[str, Parameter], Dict[str, str]] ¶
Generate Bumps parameters from the model info.
model_info is returned from
generate.model_info()
on the model definition module. Any additional key=value pairs are initial values for the parameters to the model. Uninitialized parameters will use the model default value. The value can be a float, a bumps parameter, or, in the case of the distribution type parameter, a string.
Returns a dictionary of {name: BumpsParameter} containing the bumps parameters for each model parameter, and a dictionary of {name: str} containing the polydispersity distribution types.
- sasmodels.bumps_model.plot_theory(data: Data1D | Data2D | SesansData, theory: ndarray | None, resid: ndarray | None = None, view: str | None = None, use_data: bool = True, limits: Tuple[float, float] | None = None, Iq_calc: ndarray | None = None) None ¶
Plot theory calculation.
data is needed to define the graph properties such as labels and units, and to define the data mask.
theory is a matrix of the same shape as the data.
view is one of 'log', 'linear' or 'normed'.
use_data is True if the data should be plotted as well as the theory.
limits sets the intensity limits on the plot; if None then the limits are inferred from the data. If (-inf, inf) then use auto limits.
Iq_calc contains the raw theory values without resolution smearing.
sasmodels.compare module¶
Program to compare models using different compute engines.
This program lets you compare results between OpenCL and DLL versions of the code and between precisions (half, fast, single, double, quad), where fast precision is single precision using native functions for trig, etc., and may not be completely IEEE 754 compliant. This lets you make sure that the model calculations are stable, or determine whether you need to tag the model as double precision only.
Run using “./sascomp -h” in the sasmodels root to see the command line options. To run from an installed version of sasmodels, use “python -m sasmodels.compare -h”.
Note that there is no way within sasmodels to select between an OpenCL CPU device and a GPU device, but you can do so by setting the SAS_OPENCL environment variable. Start a python interpreter and enter:
    import pyopencl as cl
    cl.create_some_context()
This will prompt you to select from the available OpenCL devices and tell you which string to use for the SAS_OPENCL variable. On Windows you will need to remove the quotes.
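For scripted runs, the device string can be set before sasmodels is imported. The "0:1" value below is a placeholder for whatever string create_some_context() reports on your machine:

```python
import os

# Hypothetical device string; substitute the value reported by
# pyopencl.create_some_context() on your system.
os.environ["SAS_OPENCL"] = "0:1"
```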
- class sasmodels.compare.Calculator(*args, **kwargs)¶
Bases:
Protocol
Kernel calculator takes par=value keyword arguments.
- __abstractmethods__ = frozenset({})¶
- __call__(**par: float) ndarray ¶
Call self as a function.
- __dict__ = mappingproxy({'__module__': 'sasmodels.compare', '__doc__': 'Kernel calculator takes *par=value* keyword arguments.', '__call__': <function Calculator.__call__>, '__dict__': <attribute '__dict__' of 'Calculator' objects>, '__weakref__': <attribute '__weakref__' of 'Calculator' objects>, '__parameters__': (), '_is_protocol': True, '__subclasshook__': <function Protocol.__init_subclass__.<locals>._proto_hook>, '__init__': <function _no_init_or_replace_init>, '__abstractmethods__': frozenset(), '_abc_impl': <_abc._abc_data object>, '__annotations__': {}})¶
- __doc__ = 'Kernel calculator takes *par=value* keyword arguments.'¶
- __init__(*args, **kwargs)¶
- __module__ = 'sasmodels.compare'¶
- __parameters__ = ()¶
- __subclasshook__()¶
Abstract classes can override this to customize issubclass().
This is invoked early on by abc.ABCMeta.__subclasscheck__(). It should return True, False or NotImplemented. If it returns NotImplemented, the normal algorithm is used. Otherwise, it overrides the normal algorithm (and the outcome is cached).
- __weakref__¶
list of weak references to the object
- _abc_impl = <_abc._abc_data object>¶
- _is_protocol = True¶
- class sasmodels.compare.Explore(opts: Dict[str, Any])¶
Bases:
object
Bumps wrapper for a SAS model comparison.
The resulting object can be used as a Bumps fit problem so that parameters can be adjusted in the GUI, with plots updated on the fly.
- __dict__ = mappingproxy({'__module__': 'sasmodels.compare', '__doc__': '\n Bumps wrapper for a SAS model comparison.\n\n The resulting object can be used as a Bumps fit problem so that\n parameters can be adjusted in the GUI, with plots updated on the fly.\n ', '__init__': <function Explore.__init__>, 'revert_values': <function Explore.revert_values>, 'model_update': <function Explore.model_update>, 'numpoints': <function Explore.numpoints>, 'parameters': <function Explore.parameters>, 'nllf': <function Explore.nllf>, 'plot': <function Explore.plot>, '__dict__': <attribute '__dict__' of 'Explore' objects>, '__weakref__': <attribute '__weakref__' of 'Explore' objects>, '__annotations__': {}})¶
- __doc__ = '\n Bumps wrapper for a SAS model comparison.\n\n The resulting object can be used as a Bumps fit problem so that\n parameters can be adjusted in the GUI, with plots updated on the fly.\n '¶
- __module__ = 'sasmodels.compare'¶
- __weakref__¶
list of weak references to the object
- model_update() None ¶
Respond to signal that model parameters have been changed.
- nllf() float ¶
Return cost.
- numpoints() int ¶
Return the number of points.
- plot(view: str = 'log') None ¶
Plot the data and residuals.
- revert_values() None ¶
Restore starting values of the parameters.
- sasmodels.compare.MATH = {'acos': <built-in function acos>, 'acosh': <built-in function acosh>, 'asin': <built-in function asin>, 'asinh': <built-in function asinh>, 'atan': <built-in function atan>, 'atan2': <built-in function atan2>, 'atanh': <built-in function atanh>, 'cbrt': <built-in function cbrt>, 'ceil': <built-in function ceil>, 'comb': <built-in function comb>, 'copysign': <built-in function copysign>, 'cos': <built-in function cos>, 'cosh': <built-in function cosh>, 'degrees': <built-in function degrees>, 'dist': <built-in function dist>, 'e': 2.718281828459045, 'erf': <built-in function erf>, 'erfc': <built-in function erfc>, 'exp': <built-in function exp>, 'exp2': <built-in function exp2>, 'expm1': <built-in function expm1>, 'fabs': <built-in function fabs>, 'factorial': <built-in function factorial>, 'floor': <built-in function floor>, 'fmod': <built-in function fmod>, 'frexp': <built-in function frexp>, 'fsum': <built-in function fsum>, 'gamma': <built-in function gamma>, 'gcd': <built-in function gcd>, 'hypot': <built-in function hypot>, 'inf': inf, 'isclose': <built-in function isclose>, 'isfinite': <built-in function isfinite>, 'isinf': <built-in function isinf>, 'isnan': <built-in function isnan>, 'isqrt': <built-in function isqrt>, 'lcm': <built-in function lcm>, 'ldexp': <built-in function ldexp>, 'lgamma': <built-in function lgamma>, 'log': <built-in function log>, 'log10': <built-in function log10>, 'log1p': <built-in function log1p>, 'log2': <built-in function log2>, 'modf': <built-in function modf>, 'nan': nan, 'nextafter': <built-in function nextafter>, 'perm': <built-in function perm>, 'pi': 3.141592653589793, 'pow': <built-in function pow>, 'prod': <built-in function prod>, 'radians': <built-in function radians>, 'remainder': <built-in function remainder>, 'sin': <built-in function sin>, 'sinh': <built-in function sinh>, 'sqrt': <built-in function sqrt>, 'tan': <built-in function tan>, 'tanh': <built-in function tanh>, 'tau': 
6.283185307179586, 'trunc': <built-in function trunc>, 'ulp': <built-in function ulp>}¶
List of math functions for use in evaluating parameters.
- sasmodels.compare._format_par(name: str, value: float = 0.0, pd: float = 0.0, n: int = 0, nsigma: float = 3.0, pdtype: str = 'gaussian', relative_pd: bool = False, M0: float = 0.0, mphi: float = 0.0, mtheta: float = 0.0) str ¶
- sasmodels.compare._print_stats(label: str, err: np.ma.ndarray) None ¶
- sasmodels.compare._random_pd(model_info: ModelInfo, pars: Dict[str, float], is2d: bool) None ¶
Generate a random dispersity distribution for the model.
    1% no shape dispersity
    85% single shape parameter
    13% two shape parameters
    1% three shape parameters
If oriented, then put dispersity in theta, add phi and psi dispersity with 10% probability for each.
- sasmodels.compare._randomize_one(model_info: ModelInfo, name: str, value: float) float ¶
Randomize a single parameter.
- sasmodels.compare._show_invalid(data: Data, theory: np.ma.ndarray) None ¶
Display a list of the non-finite values in theory.
- sasmodels.compare._swap_pars(pars: ModelInfo, a: str, b: str) None ¶
Swap polydispersity and magnetism when swapping parameters.
Assume the parameters are of the same basic type (volume, sld, or other), so that if, for example, radius_pd is in pars but radius_bell_pd is not, then after the swap radius_bell_pd will be the old radius_pd and radius_pd will be removed.
- sasmodels.compare.build_math_context() Dict[str, Callable] ¶
Build a dictionary of functions from the math module.
- sasmodels.compare.columnize(items: List[str], indent: str = '', width: int = None) str ¶
Format a list of strings into columns.
Returns a string with carriage returns ready for printing.
- sasmodels.compare.compare(opts: Dict[str, Any], limits: Tuple[float, float] | None = None, maxdim: float | None = None) Tuple[float, float] ¶
Perform a comparison using options from the command line.
limits are the display limits on the graph, either to set the y-axis for 1D or to set the colormap scale for 2D. If None, then they are inferred from the data and returned. When exploring using Bumps, the limits are set when the model is initially called, and maintained as the values are adjusted, making it easier to see the effects of the parameters.
maxdim DEPRECATED Use opts[‘maxdim’] instead.
- sasmodels.compare.constrain_pars(model_info: ModelInfo, pars: Mapping[str, float]) None ¶
Restrict parameters to valid values.
This includes model specific code for models such as capped_cylinder which need to support within model constraints (cap radius more than cylinder radius in this case).
Warning: this updates the pars dictionary in place.
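A sketch of what such an in-place constraint might look like. The parameter names and the clamping rule are illustrative, not the actual capped_cylinder logic:

```python
def constrain_pars_sketch(pars):
    """Clamp a hypothetical cap radius to be at least the cylinder
    radius, modifying *pars* in place as the real function does."""
    if pars.get("radius_cap", 0.0) < pars.get("radius", 0.0):
        pars["radius_cap"] = pars["radius"]

pars = {"radius": 20.0, "radius_cap": 5.0}
constrain_pars_sketch(pars)   # pars is updated in place, not copied
```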
- sasmodels.compare.delay(dt)¶
Return a date-time delta as a number of seconds.
- sasmodels.compare.get_pars(model_info: ModelInfo) Mapping[str, float] ¶
Extract default parameters from the model definition.
- sasmodels.compare.isnumber(s: str) bool ¶
Return True if string contains an int or float
- sasmodels.compare.limit_dimensions(model_info: ModelInfo, pars: Mapping[str, float], maxdim: float) None ¶
Limit parameters with units of Ang to maxdim.
- sasmodels.compare.main(*argv: str) None ¶
Main program.
- sasmodels.compare.make_data(opts: Dict[str, Any]) Tuple[Data, np.ndarray] ¶
Generate an empty dataset, used with the model to set Q points and resolution.
opts contains the options, with ‘qmax’, ‘nq’, ‘sesans’, ‘res’, ‘accuracy’, ‘is2d’ and ‘view’ parsed from the command line.
- sasmodels.compare.make_engine(model_info: ModelInfo, data: Data1D | Data2D | SesansData, dtype: str, cutoff: float, ngauss: int = 0) Calculator ¶
Generate the appropriate calculation engine for the given datatype.
Datatypes with ‘!’ appended are evaluated using external C DLLs rather than OpenCL.
- sasmodels.compare.parameter_range(p: str, v: float) Tuple[float, float] ¶
Choose a parameter range based on parameter name and initial value.
- sasmodels.compare.parlist(model_info: ModelInfo, pars: Mapping[str, float], is2d: bool) str ¶
Format the parameter list for printing.
- sasmodels.compare.parse_pars(opts: Dict[str, Any], maxdim: float = None) Tuple[Dict[str, float], Dict[str, float]] ¶
Generate parameter sets for base and comparison models.
Returns a pair of parameter dictionaries.
The default parameter values come from the model, or a randomized model if a seed value is given. Next, evaluate any parameter expressions, constraining the value of the parameter within and between models.
Note: When generating random parameters, the seed must already be set with a call to np.random.seed(opts[‘seed’]).
opts controls the parameter generation:
    opts = {
        'info': (model_info 1, model_info 2),
        'seed': -1,            # if seed>=0 then randomize parameters
        'mono': False,         # force monodisperse random parameters
        'magnetic': False,     # force nonmagnetic random parameters
        'maxdim': np.inf,      # limit particle size to maxdim for random pars
        'values': ['par=expr', ...],  # override parameter values in model
        'show_pars': False,    # show parameter values
        'is2d': False,         # show values for orientation parameters
    }
The values of par=expr are evaluated approximately as:
    import numpy as np
    from math import *
    from parameter_set import *

    parameter_set.par = eval(expr)
That is, you can use arbitrary python math expressions including the functions defined in the math library and the numpy library. You can also use the existing parameter values, which will either be the model defaults or the randomly generated values if seed is non-negative.
To compare different values of the same parameter, use par=expr,expr. The first parameter set will have the values from the first expression and the second parameter set will have the values from the second expression. Note that the second expression is evaluated using the values from the first expression, which allows things like:
length=2*radius,length+3
which will compare length to length+3 when length is set to 2*radius.
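The sequential evaluation can be mimicked with plain eval. The parameter names and dictionaries here are illustrative, not the parse_pars internals:

```python
# Approximate the handling of "length=2*radius,length+3": the second
# expression is evaluated using the values produced by the first.
pars = {"radius": 10.0, "length": 50.0}

pars1 = dict(pars)
pars1["length"] = eval("2*radius", {}, dict(pars))    # length -> 20.0
pars2 = dict(pars1)
pars2["length"] = eval("length+3", {}, dict(pars1))   # length -> 23.0
```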
maxdim DEPRECATED Use opts[‘maxdim’] instead.
- sasmodels.compare.plot_models(opts: Dict[str, Any], result: Dict[str, Any], limits: Tuple[float, float] | None = None, setnum: int = 0) Tuple[float, float] ¶
Plot the results from
run_models()
.
- sasmodels.compare.plot_profile(model_info: ModelInfo, label: List[Tuple[float, ndarray, ndarray]] = 'base', **args: float) None ¶
Plot the profile returned by the model profile method.
model_info defines model parameters, etc.
label is the legend label for the plotted line.
args are parameter=value pairs for the model profile function.
- sasmodels.compare.print_models(kind=None)¶
Print the list of available models in columns.
- class sasmodels.compare.push_seed(seed: int | None = None)¶
Bases:
object
Set the seed value for the random number generator.
When used in a with statement, the random number generator state is restored after the with statement is complete.
- Parameters:
- seedint or array_like, optional
Seed for RandomState
- Example:
Seed can be used directly to set the seed:
    >>> from numpy.random import randint
    >>> push_seed(24)
    <...push_seed object at...>
    >>> print(randint(0,1000000,3))
    [242082    899 211136]
Seed can also be used in a with statement, which sets the random number generator state for the enclosed computations and restores it to the previous state on completion:
    >>> with push_seed(24):
    ...    print(randint(0,1000000,3))
    [242082    899 211136]
Using nested contexts, we can demonstrate that state is indeed restored after the block completes:
    >>> with push_seed(24):
    ...    print(randint(0,1000000))
    ...    with push_seed(24):
    ...        print(randint(0,1000000,3))
    ...    print(randint(0,1000000))
    242082
    [242082    899 211136]
    899
The restore step is protected against exceptions in the block:
    >>> with push_seed(24):
    ...    print(randint(0,1000000))
    ...    try:
    ...        with push_seed(24):
    ...            print(randint(0,1000000,3))
    ...        raise Exception()
    ...    except Exception:
    ...        print("Exception raised")
    ...    print(randint(0,1000000))
    242082
    [242082    899 211136]
    Exception raised
    899
- __dict__ = mappingproxy({'__module__': 'sasmodels.compare', '__doc__': '\n Set the seed value for the random number generator.\n\n When used in a with statement, the random number generator state is\n restored after the with statement is complete.\n\n :Parameters:\n\n *seed* : int or array_like, optional\n Seed for RandomState\n\n :Example:\n\n Seed can be used directly to set the seed::\n\n >>> from numpy.random import randint\n >>> push_seed(24)\n <...push_seed object at...>\n >>> print(randint(0,1000000,3))\n [242082 899 211136]\n\n Seed can also be used in a with statement, which sets the random\n number generator state for the enclosed computations and restores\n it to the previous state on completion::\n\n >>> with push_seed(24):\n ... print(randint(0,1000000,3))\n [242082 899 211136]\n\n Using nested contexts, we can demonstrate that state is indeed\n restored after the block completes::\n\n >>> with push_seed(24):\n ... print(randint(0,1000000))\n ... with push_seed(24):\n ... print(randint(0,1000000,3))\n ... print(randint(0,1000000))\n 242082\n [242082 899 211136]\n 899\n\n The restore step is protected against exceptions in the block::\n\n >>> with push_seed(24):\n ... print(randint(0,1000000))\n ... try:\n ... with push_seed(24):\n ... print(randint(0,1000000,3))\n ... raise Exception()\n ... except Exception:\n ... print("Exception raised")\n ... print(randint(0,1000000))\n 242082\n [242082 899 211136]\n Exception raised\n 899\n ', '__init__': <function push_seed.__init__>, '__enter__': <function push_seed.__enter__>, '__exit__': <function push_seed.__exit__>, '__dict__': <attribute '__dict__' of 'push_seed' objects>, '__weakref__': <attribute '__weakref__' of 'push_seed' objects>, '__annotations__': {}})¶
- __doc__ = '\n Set the seed value for the random number generator.\n\n When used in a with statement, the random number generator state is\n restored after the with statement is complete.\n\n :Parameters:\n\n *seed* : int or array_like, optional\n Seed for RandomState\n\n :Example:\n\n Seed can be used directly to set the seed::\n\n >>> from numpy.random import randint\n >>> push_seed(24)\n <...push_seed object at...>\n >>> print(randint(0,1000000,3))\n [242082 899 211136]\n\n Seed can also be used in a with statement, which sets the random\n number generator state for the enclosed computations and restores\n it to the previous state on completion::\n\n >>> with push_seed(24):\n ... print(randint(0,1000000,3))\n [242082 899 211136]\n\n Using nested contexts, we can demonstrate that state is indeed\n restored after the block completes::\n\n >>> with push_seed(24):\n ... print(randint(0,1000000))\n ... with push_seed(24):\n ... print(randint(0,1000000,3))\n ... print(randint(0,1000000))\n 242082\n [242082 899 211136]\n 899\n\n The restore step is protected against exceptions in the block::\n\n >>> with push_seed(24):\n ... print(randint(0,1000000))\n ... try:\n ... with push_seed(24):\n ... print(randint(0,1000000,3))\n ... raise Exception()\n ... except Exception:\n ... print("Exception raised")\n ... print(randint(0,1000000))\n 242082\n [242082 899 211136]\n Exception raised\n 899\n '¶
- __enter__() None ¶
- __init__(seed: int | None = None) None ¶
- __module__ = 'sasmodels.compare'¶
- __weakref__¶
list of weak references to the object
- sasmodels.compare.randomize_pars(model_info: ModelInfo, pars: Mapping[str, float], maxdim: float = inf, is2d: bool = False) Mapping[str, float] ¶
Generate random values for all of the parameters.
Valid ranges for the random number generator are guessed from the name of the parameter; this will not account for constraints such as cap radius greater than cylinder radius in the capped_cylinder model, so
constrain_pars()
needs to be called afterward.
- sasmodels.compare.run_models(opts: Dict[str, Any], verbose: bool = False) Dict[str, Any] ¶
Process a parameter set, return calculation results and times.
- sasmodels.compare.set_beam_stop(data: Data, radius: float, outer: float = None) None ¶
Add a beam stop of the given radius. If outer, make an annulus.
- sasmodels.compare.set_spherical_integration_parameters(opts: Dict[str, Any], steps: int) None ¶
Set integration parameters for spherical integration over the entire surface in theta-phi coordinates.
- sasmodels.compare.suppress_magnetism(pars: Mapping[str, float]) Mapping[str, float] ¶
Completely eliminate magnetism from the model to test models more quickly.
- sasmodels.compare.suppress_pd(pars: Mapping[str, float]) Mapping[str, float] ¶
Completely eliminate polydispersity from the model to test models more quickly.
- sasmodels.compare.tic() Callable[[], float] ¶
Timer function.
Use “toc=tic()” to start the clock and “toc()” to measure a time interval.
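The toc=tic() idiom can be sketched in a few lines; this illustrates the pattern, not the sasmodels implementation:

```python
import time

def tic():
    """Start the clock; the returned callable reports elapsed seconds."""
    t0 = time.perf_counter()
    return lambda: time.perf_counter() - t0

toc = tic()
# ... calculation under test ...
elapsed = toc()   # seconds since tic() was called
```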
- sasmodels.compare.time_calculation(calculator: Calculator, pars: Mapping[str, float], evals: int = 1)¶
Compute the average calculation time over N evaluations.
An additional call is generated without polydispersity in order to initialize the calculation engine, and make the average more stable.
sasmodels.compare_many module¶
Program to compare results from many random parameter sets for a given model.
The result is a comma separated value (CSV) table that can be redirected from standard output into a file and loaded into a spreadsheet.
The models are compared for each parameter set, and if the difference is greater than expected for that precision, the parameter set is labeled as bad and written to the output, along with the random seed used to generate that parameter value. This seed can be used with compare to reload and display the details of the model.
- sasmodels.compare_many.calc_stats(target: ndarray, value: ndarray, index: Any) Tuple[float, float, float, float] ¶
Calculate statistics between the target value and the computed value.
target and value are the vectors being compared, with the difference normalized by target to get the relative error. Only the elements listed in index are used, though index may be an empty slice defined by slice(None, None).
Returns:
maxrel the maximum relative difference
rel95 the relative difference with the 5% biggest differences ignored
maxabs the maximum absolute difference for the 5% biggest differences
maxval the maximum value for the 5% biggest differences
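A plausible reading of these statistics in numpy form; the exact handling of the 5% cut is an assumption, not taken from the source:

```python
import numpy as np

def calc_stats_sketch(target, value, index=slice(None)):
    """Sketch of (maxrel, rel95, maxabs, maxval); the tie-breaking for
    the 5% largest differences is assumed, not the sasmodels code."""
    target = np.asarray(target, dtype=float)[index]
    value = np.asarray(value, dtype=float)[index]
    resid = np.abs(value - target)
    relerr = resid / np.abs(target)
    # relative error with the 5% biggest differences ignored
    cut = int(np.ceil(len(relerr) * 0.95)) - 1
    rel95 = np.sort(relerr)[cut]
    worst = relerr >= rel95            # the 5% biggest differences
    return (np.max(relerr), rel95,
            np.max(resid[worst]), np.max(np.abs(target[worst])))
```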
- sasmodels.compare_many.compare_instance(name: str, data: Data, index: Any, N: int = 1, mono: bool = True, cutoff: float = 1e-05, base: str = 'single', comp: str = 'double') None ¶
Compare the model under different calculation engines.
name is the name of the model.
data is the data object giving \(q, \Delta q\) calculation points.
index is the active set of points.
N is the number of comparisons to make.
cutoff is the polydispersity weight cutoff to make the calculation a little bit faster.
base and comp are the names of the calculation engines to compare.
- sasmodels.compare_many.main(argv: List[str]) None ¶
Main program.
- sasmodels.compare_many.print_column_headers(pars: Dict[str, float], parts: List[str]) None ¶
Generate column headers for the differences and for the parameters, and print them to standard output.
- sasmodels.compare_many.print_help() None ¶
Print the usage string, the option descriptions and the list of available models.
- sasmodels.compare_many.print_usage() None ¶
Print the command usage string.
sasmodels.conversion_table module¶
Parameter conversion table
CONVERSION_TABLE gives the old model name and a dictionary of old parameter
names for each parameter in sasmodels. This is used by convert
to
determine the equivalent parameter set when comparing a sasmodels model to
the models defined in previous versions of SasView and sasmodels. This is now
versioned based on the version number of SasView.
When any sasmodels parameter or model name is changed, this must be modified to account for that.
Usage:
    <old_Sasview_version> : {
        <new_model_name> : [
            <old_model_name>,
            {
                <new_param_name_1> : <old_param_name_1>,
                ...
                <new_param_name_n> : <old_param_name_n>
            }
        ]
    }
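The structure might look like the following concrete entry; the version key, model names and parameter names here are hypothetical illustrations, not entries from the actual table:

```python
# Hypothetical CONVERSION_TABLE entry following the structure above.
CONVERSION_TABLE = {
    (3, 1, 2): {
        "ellipsoid": [
            "EllipsoidModel",
            {"radius_polar": "radius_a", "radius_equatorial": "radius_b"},
        ],
    },
}

# Look up the old model name and old-to-new parameter mapping.
old_name, par_map = CONVERSION_TABLE[(3, 1, 2)]["ellipsoid"]
```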
Any future parameter and model name changes can and should be given in this table for future compatibility.
sasmodels.convert module¶
Convert models to and from sasview.
- sasmodels.convert._check_one(name, seed=None)¶
Generate a random set of parameters for name, and check that they can be converted back to SasView 3.x and forward again to sasmodels. Raises an error if the parameters are changed.
- sasmodels.convert._conversion_target(model_name, version=(3, 1, 2))¶
Find the sasmodel name which translates into the sasview name.
Note: CoreShellEllipsoidModel translates into core_shell_ellipsoid:1. This is necessary since there is only one variant in sasmodels for the two variants in sasview.
- sasmodels.convert._convert_pars(pars, mapping)¶
Rename the parameters and any associated polydispersity attributes.
- sasmodels.convert._dot_pd_to_underscore_pd(par)¶
- sasmodels.convert._get_translation_table(model_info, version=(3, 1, 2))¶
- sasmodels.convert._hand_convert(name, oldpars, version=(3, 1, 2))¶
- sasmodels.convert._hand_convert_3_1_2_to_4_1(name, oldpars)¶
- sasmodels.convert._is_sld(model_info, par)¶
Return True if parameter is a magnetic magnitude or SLD parameter.
- sasmodels.convert._pd_to_underscores(pars)¶
- sasmodels.convert._remove_pd(pars, key, name)¶
Remove polydispersity from the parameter list.
Note: operates in place
- sasmodels.convert._rename_magnetic_angles(pars)¶
Change name of magnetic angle.
- sasmodels.convert._rename_magnetic_pars(pars)¶
Change from M0:par to par_M0, etc.
- sasmodels.convert._rescale(par, scale)¶
- sasmodels.convert._rescale_sld(model_info, pars, scale)¶
Rescale all sld parameters in the new model definition by scale so the numbers are nicer. Relies on the fact that all sld parameters in the new model definition end with sld. For backward conversion use scale=1e-6. For forward conversion use scale=1e6.
- sasmodels.convert._revert_pars(pars, mapping)¶
Rename the parameters and any associated polydispersity attributes.
- sasmodels.convert._trim_vectors(model_info, pars, oldpars)¶
- sasmodels.convert.constrain_new_to_old(model_info, pars)¶
Restrict parameter values to those that will match sasview.
- sasmodels.convert.convert_model(name, pars, use_underscore=False, model_version=(3, 1, 2))¶
Convert model from old style parameter names to new style.
- sasmodels.convert.revert_name(model_info)¶
Translate model name back to the name used in SasView 3.x
- sasmodels.convert.revert_pars(model_info, pars)¶
Convert model from new style parameter names to old style.
- sasmodels.convert.test_backward_forward()¶
Test conversion of model parameters from 4.x to 3.x and back.
sasmodels.core module¶
Core model handling routines.
- class sasmodels.core.Any(*args, **kwargs)¶
Bases:
object
Special type indicating an unconstrained type.
Any is compatible with every type.
Any assumed to have all methods.
All values assumed to be instances of Any.
Note that all the above statements are true from the point of view of static type checkers. At runtime, Any should not be used with instance checks.
- __annotations__ = {}¶
- __dict__ = mappingproxy({'__module__': 'typing', '__doc__': 'Special type indicating an unconstrained type.\n\n - Any is compatible with every type.\n - Any assumed to have all methods.\n - All values assumed to be instances of Any.\n\n Note that all the above statements are true from the point of view of\n static type checkers. At runtime, Any should not be used with instance\n checks.\n ', '__new__': <staticmethod(<function Any.__new__>)>, '__dict__': <attribute '__dict__' of 'Any' objects>, '__weakref__': <attribute '__weakref__' of 'Any' objects>, '__annotations__': {}})¶
- __doc__ = 'Special type indicating an unconstrained type.\n\n - Any is compatible with every type.\n - Any assumed to have all methods.\n - All values assumed to be instances of Any.\n\n Note that all the above statements are true from the point of view of\n static type checkers. At runtime, Any should not be used with instance\n checks.\n '¶
- __module__ = 'typing'¶
- static __new__(cls, *args, **kwargs)¶
- __weakref__¶
list of weak references to the object
- class sasmodels.core.KernelModel¶
Bases:
object
Model definition for the compute engine.
- __annotations__ = {'dtype': 'np.dtype', 'info': 'ModelInfo'}¶
- __dict__ = mappingproxy({'__module__': 'sasmodels.kernel', '__doc__': '\n Model definition for the compute engine.\n ', 'info': None, 'dtype': None, 'make_kernel': <function KernelModel.make_kernel>, 'release': <function KernelModel.release>, '__dict__': <attribute '__dict__' of 'KernelModel' objects>, '__weakref__': <attribute '__weakref__' of 'KernelModel' objects>, '__annotations__': {'info': 'ModelInfo', 'dtype': 'np.dtype'}})¶
- __doc__ = '\n Model definition for the compute engine.\n '¶
- __module__ = 'sasmodels.kernel'¶
- __weakref__¶
list of weak references to the object
- dtype: dtype = None¶
- make_kernel(q_vectors: List[ndarray]) Kernel ¶
Instantiate a kernel for evaluating the model at q_vectors.
- release() None ¶
Free resources associated with the kernel.
- class sasmodels.core.ModelInfo¶
Bases:
object
Interpret the model definition file, categorizing the parameters.
The module can be loaded with a normal python import statement if you know which module you need, or with __import__(‘sasmodels.model.’+name) if the name is in a string.
The structure should be mostly static, other than the delayed definition of Iq, Iqac and Iqabc if they need to be defined.
- Imagnetic: None | str | Callable[[...], ndarray] = None¶
Returns I(qx, qy, a, b, …). The interface follows
Iq
.
- Iq: None | str | Callable[[...], ndarray] = None¶
Returns I(q, a, b, …) for parameters a, b, etc. defined by the parameter table. Iq can be defined as a python function, or as a C function. If it is defined in C, then set Iq to the body of the C function, including the return statement. This function takes values for q and each of the parameters as separate double values (which may be converted to float or long double by sasmodels). All source code files listed in
source
will be loaded before the Iq function is defined. If Iq is not present, then sources should define static double Iq(double q, double a, double b, …) which will return I(q, a, b, …). Multiplicity parameters are sent as pointers to doubles. Constants in floating point expressions should include the decimal point. Seegenerate
for more details. If have_Fq is True, then Iq should return an interleaved array of \([\sum F(q_1), \sum F^2(q_1), \ldots, \sum F(q_n), \sum F^2(q_n)]\).
- Iqabc: None | str | Callable[[...], ndarray] = None¶
Returns I(qa, qb, qc, a, b, …). The interface follows
Iq
.
- Iqac: None | str | Callable[[...], ndarray] = None¶
Returns I(qab, qc, a, b, …). The interface follows
Iq
.
- Iqxy: None | str | Callable[[...], ndarray] = None¶
Returns I(qx, qy, a, b, …). The interface follows
Iq
.
- __annotations__ = {'Imagnetic': 'Union[None, str, Callable[[...], np.ndarray]]', 'Iq': 'Union[None, str, Callable[[...], np.ndarray]]', 'Iqabc': 'Union[None, str, Callable[[...], np.ndarray]]', 'Iqac': 'Union[None, str, Callable[[...], np.ndarray]]', 'Iqxy': 'Union[None, str, Callable[[...], np.ndarray]]', 'base': 'ParameterTable', 'basefile': 'Optional[str]', 'c_code': 'Optional[str]', 'category': 'Optional[str]', 'composition': 'Optional[Tuple[str, List[ModelInfo]]]', 'description': 'str', 'docs': 'str', 'filename': 'Optional[str]', 'form_volume': 'Union[None, str, Callable[[np.ndarray], float]]', 'hidden': 'Optional[Callable[[int], Set[str]]]', 'id': 'str', 'lineno': 'Dict[str, int]', 'name': 'str', 'opencl': 'bool', 'parameters': 'ParameterTable', 'profile': 'Optional[Callable[[np.ndarray], None]]', 'profile_axes': 'Tuple[str, str]', 'radius_effective': 'Union[None, Callable[[int, np.ndarray], float]]', 'radius_effective_modes': 'List[str]', 'random': 'Optional[Callable[[], Dict[str, float]]]', 'sesans': 'Optional[Callable[[np.ndarray], np.ndarray]]', 'shell_volume': 'Union[None, str, Callable[[np.ndarray], float]]', 'single': 'bool', 'source': 'List[str]', 'structure_factor': 'bool', 'tests': 'List[TestCondition]', 'title': 'str', 'translation': 'Optional[str]', 'valid': 'str'}¶
- __dict__ = mappingproxy({'__module__': 'sasmodels.modelinfo', '__doc__': "\n Interpret the model definition file, categorizing the parameters.\n\n The module can be loaded with a normal python import statement if you\n know which module you need, or with __import__('sasmodels.model.'+name)\n if the name is in a string.\n\n The structure should be mostly static, other than the delayed definition\n of *Iq*, *Iqac* and *Iqabc* if they need to be defined.\n ", 'filename': None, 'basefile': None, 'id': None, 'name': None, 'title': None, 'description': None, 'parameters': None, 'base': None, 'translation': None, 'composition': None, 'hidden': None, 'docs': None, 'category': None, 'single': None, 'opencl': None, 'structure_factor': None, 'have_Fq': False, 'radius_effective_modes': None, 'source': None, 'c_code': None, 'valid': None, 'form_volume': None, 'shell_volume': None, 'radius_effective': None, 'Iq': None, 'Iqxy': None, 'Iqac': None, 'Iqabc': None, 'Imagnetic': None, 'profile': None, 'profile_axes': None, 'sesans': None, 'random': None, 'lineno': None, 'tests': None, '__init__': <function ModelInfo.__init__>, 'get_hidden_parameters': <function ModelInfo.get_hidden_parameters>, '__dict__': <attribute '__dict__' of 'ModelInfo' objects>, '__weakref__': <attribute '__weakref__' of 'ModelInfo' objects>, '__annotations__': {'filename': 'Optional[str]', 'basefile': 'Optional[str]', 'id': 'str', 'name': 'str', 'title': 'str', 'description': 'str', 'parameters': 'ParameterTable', 'base': 'ParameterTable', 'translation': 'Optional[str]', 'composition': 'Optional[Tuple[str, List[ModelInfo]]]', 'hidden': 'Optional[Callable[[int], Set[str]]]', 'docs': 'str', 'category': 'Optional[str]', 'single': 'bool', 'opencl': 'bool', 'structure_factor': 'bool', 'radius_effective_modes': 'List[str]', 'source': 'List[str]', 'c_code': 'Optional[str]', 'valid': 'str', 'form_volume': 'Union[None, str, Callable[[np.ndarray], float]]', 'shell_volume': 'Union[None, str, Callable[[np.ndarray], 
float]]', 'radius_effective': 'Union[None, Callable[[int, np.ndarray], float]]', 'Iq': 'Union[None, str, Callable[[...], np.ndarray]]', 'Iqxy': 'Union[None, str, Callable[[...], np.ndarray]]', 'Iqac': 'Union[None, str, Callable[[...], np.ndarray]]', 'Iqabc': 'Union[None, str, Callable[[...], np.ndarray]]', 'Imagnetic': 'Union[None, str, Callable[[...], np.ndarray]]', 'profile': 'Optional[Callable[[np.ndarray], None]]', 'profile_axes': 'Tuple[str, str]', 'sesans': 'Optional[Callable[[np.ndarray], np.ndarray]]', 'random': 'Optional[Callable[[], Dict[str, float]]]', 'lineno': 'Dict[str, int]', 'tests': 'List[TestCondition]'}})¶
- __doc__ = "\n Interpret the model definition file, categorizing the parameters.\n\n The module can be loaded with a normal python import statement if you\n know which module you need, or with __import__('sasmodels.model.'+name)\n if the name is in a string.\n\n The structure should be mostly static, other than the delayed definition\n of *Iq*, *Iqac* and *Iqabc* if they need to be defined.\n "¶
- __init__() None ¶
- __module__ = 'sasmodels.modelinfo'¶
- __weakref__¶
list of weak references to the object
- base: ParameterTable = None¶
For reparameterized systems, base is the base parameter table. For normal systems it is simply a copy of parameters.
- basefile: str | None = None¶
Base file is usually filename, but not when a model has been reparameterized, in which case it is the file containing the original model definition. This is needed to signal an additional dependency for the model time stamp, and so that the compiler reports the correct file for syntax errors.
- c_code: str | None = None¶
inline source code, added after all elements of source
- category: str | None = None¶
Location of the model description in the documentation. This takes the form of “section” or “section:subsection”. So for example, porod uses category=”shape-independent” so it is in the Shape-Independent Functions section whereas capped_cylinder uses: category=”shape:cylinder”, which puts it in the Cylinder Functions section.
- composition: Tuple[str, List[ModelInfo]] | None = None¶
Composition is None if this is an independent model, or it is a tuple with composition type (‘product’ or ‘mixture’) and a list of
ModelInfo
blocks for the composed objects. This allows us to rebuild a complete mixture or product model from the info block. composition is not given in the model definition file, but instead arises when the model is constructed using names such as sphere*hardsphere or cylinder+sphere.
- description: str = None¶
Long description of the model.
- docs: str = None¶
Doc string from the top of the model file. This should be formatted using ReStructuredText format, with latex markup in “.. math” environments, or in dollar signs. This will be automatically extracted to a .rst file by
generate.make_doc()
, then converted to HTML or PDF by Sphinx.
- filename: str | None = None¶
Full path to the file defining the kernel, if any.
- form_volume: None | str | Callable[[ndarray], float] = None¶
Returns the form volume for python-based models. Form volume is needed for volume normalization in the polydispersity integral. If no parameters are volume parameters, then form volume is not needed. For C-based models, (with
source
defined, or with Iq
defined using a string containing C code), form_volume must also be C code, either defined as a string, or in the sources.
- get_hidden_parameters(control: int) Set[str] ¶
Returns the set of hidden parameters for the model. control is the value of the control parameter. Note that multiplicity models have an implicit control parameter, which is the parameter that controls the multiplicity.
- have_Fq = False¶
True if the model defines an Fq function with signature
void Fq(double q, double *F1, double *F2, ...)
- hidden: Callable[[int], Set[str]] | None = None¶
Different variants require different parameters. In order to show just the parameters needed for the variant selected, you should provide a function hidden(control) -> set([‘a’, ‘b’, …]) indicating which parameters need to be hidden. For multiplicity models, you need to use the complete name of the parameter, including its number. So for example, if variant “a” uses only sld1 and sld2, then sld3, sld4 and sld5 of multiplicity parameter sld[5] should be in the hidden set.
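As a sketch, a hidden callback for a hypothetical model in which the control parameter selects how many layers of a multiplicity parameter sld[5] are active might look like:

```python
# Hypothetical hidden() callback: hide the unused entries of a
# multiplicity parameter sld[5] when only `control` layers are active.
def hidden(control):
    # Layers control+1 .. 5 are unused for this variant, so hide them.
    return {f"sld{k}" for k in range(control + 1, 6)}
```

With control=2 this hides sld3, sld4 and sld5, matching the example in the text above.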
- id: str = None¶
Id of the kernel used to load it from the filesystem.
- lineno: Dict[str, int] = None¶
Line numbers for symbols defining C code
- name: str = None¶
Display name of the model, which defaults to the model id but with capitalization of the parts so for example core_shell defaults to “Core Shell”.
- opencl: bool = None¶
True if the model can be run as an opencl model. If for some reason the model cannot be run in opencl (e.g., because the model passes functions by reference), then set this to false.
- parameters: ParameterTable = None¶
Model parameter table. Parameters are defined using a list of parameter definitions, each of which is contains parameter name, units, default value, limits, type and description. See
Parameter
for details on the individual parameters. The parameters are gathered into aParameterTable
, which provides various views into the parameter list.
- profile: Callable[[ndarray], None] | None = None¶
Returns a model profile curve x, y. If profile is defined, this curve will appear in response to the Show button in SasView. Use
profile_axes
to set the axis labels. Note that y values will be scaled by 1e6 before plotting.
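A profile callback is just a function of the volume parameters that returns the (x, y) curve to display. A hedged sketch for a hypothetical core-shell SLD step profile (the parameter names here are illustrative, not tied to any particular sasmodels model):

```python
import numpy as np

# Hypothetical profile(): SLD step profile for a core-shell sphere.
# Returns (r, rho) arrays suitable for plotting rho vs. r.
def profile(radius, thickness, sld_core, sld_shell):
    r = np.array([0.0, radius, radius,
                  radius + thickness, radius + thickness])
    rho = np.array([sld_core, sld_core, sld_shell, sld_shell, 0.0])
    return r, rho
```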
- profile_axes: Tuple[str, str] = None¶
Axis labels for the
profile
plot. The default is [‘x’, ‘y’]. Only the x component is used for now.
- radius_effective: None | Callable[[int, ndarray], float] = None¶
Computes the effective radius of the shape given the volume parameters. Only needed for models defined in python that can be used in the monodisperse approximation for non-dilute solutions (P@S). The first argument is the integer effective radius mode, with default 0.
- radius_effective_modes: List[str] = None¶
List of options for computing the effective radius of the shape, or None if the model is not usable as a form factor model.
- random: Callable[[], Dict[str, float]] | None = None¶
Returns a random parameter set for the model
- sesans: Callable[[ndarray], ndarray] | None = None¶
Returns sesans(z, a, b, …) for models which can directly compute the SESANS correlation function. Note: not currently implemented.
- shell_volume: None | str | Callable[[ndarray], float] = None¶
Returns the shell volume for python-based models. Form volume and shell volume are needed for volume normalization in the polydispersity integral and structure interactions for hollow shapes. If no parameters are volume parameters, then shell volume is not needed. For C-based models, (with
source
defined, or with Iq
defined using a string containing C code), shell_volume must also be C code, either defined as a string, or in the sources.
- single: bool = None¶
True if the model can be computed accurately with single precision. This is True by default, but models such as bcc_paracrystal set it to False because they require double precision calculations.
- source: List[str] = None¶
List of C source files used to define the model. The source files should define the Iq function, and possibly Iqac or Iqabc if the model defines orientation parameters. Files containing the most basic functions must appear first in the list, followed by the files that use those functions.
- structure_factor: bool = None¶
True if the model is a structure factor used to model the interaction between form factor models. This will default to False if it is not provided in the file.
- tests: List[Tuple[Mapping[str, float | List[float]], str | float | List[float] | Tuple[float, float] | List[Tuple[float, float]], float | List[float]]] = None¶
The set of tests that must pass. The format of the tests is described in
model_test
.
- title: str = None¶
Short description of the model.
- translation: str | None = None¶
Parameter translation code to convert from parameters table from caller to the base table used to evaluate the model.
- valid: str = None¶
Expression which evaluates to True if the input parameters are valid and the model can be computed, or False otherwise. Invalid parameter sets will not be included in the weighted \(I(Q)\) calculation or its volume normalization. Use C syntax for the expressions, with || for or, && for and, and ! for not. Any non-magnetic parameter can be used.
- sasmodels.core._matches(name, kind)¶
- sasmodels.core.basename(p)¶
Returns the final component of a pathname
- sasmodels.core.build_model(model_info: ModelInfo, dtype: str = None, platform: str = 'ocl') KernelModel ¶
Prepare the model for the default execution platform.
This will return an OpenCL model, a DLL model or a python model depending on the model and the computing platform.
model_info is the model definition structure returned from
load_model_info()
. dtype indicates whether the model should use single or double precision for the calculation. Choices are ‘single’, ‘double’, ‘quad’, ‘half’, or ‘fast’. If dtype ends with ‘!’, then force the use of the DLL rather than OpenCL for the calculation.
platform should be “dll” to force the dll to be used for C models, otherwise it uses the default “ocl”.
- sasmodels.core.glob(pathname, *, root_dir=None, dir_fd=None, recursive=False, include_hidden=False)¶
Return a list of paths matching a pathname pattern.
The pattern may contain simple shell-style wildcards a la fnmatch. Unlike fnmatch, filenames starting with a dot are special cases that are not matched by ‘*’ and ‘?’ patterns by default.
If include_hidden is true, the patterns ‘*’, ‘?’, ‘**’ will match hidden directories.
If recursive is true, the pattern ‘**’ will match any files and zero or more directories and subdirectories.
- sasmodels.core.joinpath(path, *paths)¶
- sasmodels.core.list_models(kind: str = None) List[str] ¶
Return the list of available models on the model path.
kind can be one of the following:
all: all models
py: python models only
c: c models only
single: c models which support single precision
double: c models which require double precision
opencl: c models which run in opencl
dll: c models which do not run in opencl
1d: models without orientation
2d: models with orientation
magnetic: models supporting magnetic sld
nonmagnetic: models without magnetic parameters
For multiple conditions, combine with plus. For example, c+single+2d would return all oriented models implemented in C which can be computed accurately with single precision arithmetic.
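The compound kind strings can be handled by splitting on ‘+’ and requiring every condition to hold. A sketch of that logic (not the actual _matches implementation, and model_traits is a hypothetical mapping from condition name to boolean):

```python
# Sketch: check a model against a compound kind such as "c+single+2d".
def matches(model_traits, kind):
    # All '+'-separated conditions must be satisfied.
    return all(model_traits.get(cond, False) for cond in kind.split("+"))

traits = {"c": True, "single": True, "2d": False}
```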
- sasmodels.core.list_models_main() int ¶
Run list_models as a main program. See
list_models()
for the kinds of models that can be requested on the command line.
- sasmodels.core.load_model(model_name: str, dtype: str = None, platform: str = 'ocl') KernelModel ¶
Load model info and build model.
model_name is the name of the model, or perhaps a model expression such as sphere*hardsphere or sphere+cylinder.
dtype and platform are given by
build_model()
.
- sasmodels.core.load_model_info(model_string: str) ModelInfo ¶
Load a model definition given the model name.
model_string is the name of the model, or perhaps a model expression such as sphere*cylinder or sphere+cylinder. Use ‘@’ for a structure factor product, e.g. sphere@hardsphere. Custom models can be specified by prefixing the model name with ‘custom.’, e.g. ‘custom.MyModel+sphere’.
This returns a handle to the module defining the model. This can be used with functions in generate to build the docs or extract model info.
- sasmodels.core.merge_deps(old, new)¶
Merge two dependency lists. The lists are partially ordered, with all dependents coming after the items they depend on, but otherwise order doesn’t matter. The merged list preserves the partial ordering. So if old and new both include the item “c”, then all items that come before “c” in old and new will come before “c” in the result, and all items that come after “c” in old and new will come after “c” in the result.
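One way to merge while preserving the partial order is to walk new and, whenever an item also occurs in old, first emit the old items that precede it. A sketch under those assumptions (not the actual implementation):

```python
# Sketch of a partially-ordered dependency merge: items shared between
# the two lists keep all of their predecessors (from either list) in
# front of them in the merged result.
def merge_deps(old, new):
    result, old = [], list(old)
    for item in new:
        if item in old:
            k = old.index(item)
            # old items that must precede the shared item come first
            result.extend(old[:k])
            del old[:k + 1]
        if item not in result:
            result.append(item)
    result.extend(old)  # remaining old-only items keep their order
    return result
```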
- sasmodels.core.parse_dtype(model_info: ModelInfo, dtype: str = None, platform: str = None) Tuple[dtype, bool, str] ¶
Interpret dtype string, returning np.dtype, fast flag and platform.
Possible types include ‘half’, ‘single’, ‘double’ and ‘quad’. If the type is ‘fast’, then this is equivalent to dtype ‘single’ but using fast native functions rather than those with the precision level guaranteed by the OpenCL standard. ‘default’ will choose the appropriate default for the model and platform.
Platform preference can be specified (“ocl”, “cuda”, “dll”), with the default being OpenCL or CUDA if available, otherwise DLL. If the dtype name ends with ‘!’ then platform is forced to be DLL rather than GPU. The default platform is set by the environment variable SAS_OPENCL, SAS_OPENCL=driver:device for OpenCL, SAS_OPENCL=cuda:device for CUDA or SAS_OPENCL=none for DLL.
This routine ignores the preferences within the model definition. This is by design. It allows us to test models in single precision even when we have flagged them as requiring double precision so we can easily check the performance on different platforms without having to change the model definition.
- sasmodels.core.precompile_dlls(path: str, dtype: str = 'double') List[str] ¶
Precompile the dlls for all builtin models, returning a list of dll paths.
path is the directory in which to save the dlls. It will be created if it does not already exist.
This can be used when building the Windows distribution of sasmodels, which may be missing the OpenCL driver and the DLL compiler.
- sasmodels.core.reparameterize(base, parameters, translation, filename=None, title=None, insert_after=None, docs=None, name=None, source=None)¶
Reparameterize an existing model.
base is the original modelinfo. This cannot be a reparameterized model; only one level of reparameterization is supported.
parameters are the new parameter definitions that will be included in the model info.
translation is a string with each line containing var = expr. The variable var can be a new intermediate value, or it can be a parameter from the base model that will be replaced by the expression. The expression expr can be any C99 expression, including C-style if-expressions condition ? value1 : value2. Expressions can use any new or existing parameter that is not being replaced, including intermediate values that were previously defined. Parameters can only be assigned once, never updated. C99 math functions are available, as well as any functions defined in the base model or included in source (see below).
filename is the filename for the replacement model. This is usually __file__, giving the path to the model file, but it could also be a nominal filename for translations defined on-the-fly.
title is the model title, which defaults to base.title plus “ (reparameterized)”.
insert_after controls parameter placement. By default, the new parameters replace the old parameters in their original position. Instead, you can provide a dictionary {‘par’: ‘newpar1,newpar2’} indicating that new parameters named newpar1 and newpar2 should be included in the table after the existing parameter par, or at the beginning if par is the empty string.
docs contains the doc string for the translated model, which by default references the base model and gives the translation text.
name is the model name (default =
"constrained_" + base.name
). source is a list of any additional C source files that should be included to define functions and constants used in the translation expressions. These will be included after all sources for the base model. Sources will only be included once, even if they are listed in both places, so feel free to list all dependencies for the helper functions, such as “lib/polevl.c”.
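The var = expr lines of a translation string are straightforward to pull apart. A sketch of such parsing (a hypothetical helper, not part of the sasmodels API; it splits at the first ‘=’ so that C comparison operators inside expr survive):

```python
# Sketch: split a reparameterization translation string into
# (variable, C-expression) pairs, one per non-blank line.
def parse_translation(translation):
    pairs = []
    for line in translation.splitlines():
        line = line.strip()
        if not line:
            continue
        var, expr = line.split("=", 1)
        pairs.append((var.strip(), expr.strip()))
    return pairs
```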
- sasmodels.core.test_composite() None ¶
Check that model loading works.
- sasmodels.core.test_composite_order()¶
Check that mixture models produce the same result independent of order.
sasmodels.data module¶
SAS data representations.
Plotting functions for data sets:
plot_data()
plots the data file.
plot_theory()
plots a calculated result from the model.
Wrappers for the sasview data loader and data manipulations:
load_data()
loads a sasview data file.
set_beam_stop()
masks the beam stop from the data.
set_half()
selects the right or left half of the data, which can be useful for shear measurements which have not been properly corrected for path length and reflections.
set_top()
cuts the top part off the data.
Empty data sets for evaluating models without data:
empty_data1D()
creates an empty dataset, which is useful for plotting a theory function before the data is measured.
empty_data2D()
creates an empty 2D dataset.
Note that the empty datasets use a minimal representation of the SasView objects so that models can be run without SasView on the path. You could also use these for your own data loader.
- class sasmodels.data.Data1D(x: ndarray | None = None, y: ndarray | None = None, dx: ndarray | None = None, dy: ndarray | None = None)¶
Bases:
object
1D data object.
Note that this definition matches the attributes from sasview, with some generic 1D data vectors and some SAS specific definitions. Some refactoring to allow consistent naming conventions between 1D, 2D and SESANS data would be helpful.
Attributes
x, dx: \(q\) vector and gaussian resolution
y, dy: \(I(q)\) vector and measurement uncertainty
mask: values to include in plotting/analysis
dxl: slit widths for slit smeared data, with dx ignored
qmin, qmax: range of \(q\) values in x
filename: label for the data line
_xaxis, _xunit: label and units for the x axis
_yaxis, _yunit: label and units for the y axis
- __annotations__ = {}¶
- __dict__ = mappingproxy({'__module__': 'sasmodels.data', '__doc__': '\n 1D data object.\n\n Note that this definition matches the attributes from sasview, with\n some generic 1D data vectors and some SAS specific definitions. Some\n refactoring to allow consistent naming conventions between 1D, 2D and\n SESANS data would be helpful.\n\n **Attributes**\n\n *x*, *dx*: $q$ vector and gaussian resolution\n\n *y*, *dy*: $I(q)$ vector and measurement uncertainty\n\n *mask*: values to include in plotting/analysis\n\n *dxl*: slit widths for slit smeared data, with *dx* ignored\n\n *qmin*, *qmax*: range of $q$ values in *x*\n\n *filename*: label for the data line\n\n *_xaxis*, *_xunit*: label and units for the *x* axis\n\n *_yaxis*, *_yunit*: label and units for the *y* axis\n ', '__init__': <function Data1D.__init__>, 'xaxis': <function Data1D.xaxis>, 'yaxis': <function Data1D.yaxis>, '__dict__': <attribute '__dict__' of 'Data1D' objects>, '__weakref__': <attribute '__weakref__' of 'Data1D' objects>, '__annotations__': {}})¶
- __doc__ = '\n 1D data object.\n\n Note that this definition matches the attributes from sasview, with\n some generic 1D data vectors and some SAS specific definitions. Some\n refactoring to allow consistent naming conventions between 1D, 2D and\n SESANS data would be helpful.\n\n **Attributes**\n\n *x*, *dx*: $q$ vector and gaussian resolution\n\n *y*, *dy*: $I(q)$ vector and measurement uncertainty\n\n *mask*: values to include in plotting/analysis\n\n *dxl*: slit widths for slit smeared data, with *dx* ignored\n\n *qmin*, *qmax*: range of $q$ values in *x*\n\n *filename*: label for the data line\n\n *_xaxis*, *_xunit*: label and units for the *x* axis\n\n *_yaxis*, *_yunit*: label and units for the *y* axis\n '¶
- __init__(x: ndarray | None = None, y: ndarray | None = None, dx: ndarray | None = None, dy: ndarray | None = None) None ¶
- __module__ = 'sasmodels.data'¶
- __weakref__¶
list of weak references to the object
- xaxis(label: str, unit: str) None ¶
set the x axis label and unit
- yaxis(label: str, unit: str) None ¶
set the y axis label and unit
- class sasmodels.data.Data2D(x: ndarray | None = None, y: ndarray | None = None, z: ndarray | None = None, dx: ndarray | None = None, dy: ndarray | None = None, dz: ndarray | None = None)¶
Bases:
object
2D data object.
Note that this definition matches the attributes from sasview. Some refactoring to allow consistent naming conventions between 1D, 2D and SESANS data would be helpful.
Attributes
qx_data, dqx_data: \(q_x\) matrix and gaussian resolution
qy_data, dqy_data: \(q_y\) matrix and gaussian resolution
data, err_data: \(I(q)\) matrix and measurement uncertainty
mask: values to exclude from plotting/analysis
qmin, qmax: range of \(q\) values in x
filename: label for the data line
_xaxis, _xunit: label and units for the x axis
_yaxis, _yunit: label and units for the y axis
_zaxis, _zunit: label and units for the z axis
Q_unit, I_unit: units for Q and intensity
x_bins, y_bins: grid steps in x and y directions
- __annotations__ = {}¶
- __dict__ = mappingproxy({'__module__': 'sasmodels.data', '__doc__': '\n 2D data object.\n\n Note that this definition matches the attributes from sasview. Some\n refactoring to allow consistent naming conventions between 1D, 2D and\n SESANS data would be helpful.\n\n **Attributes**\n\n *qx_data*, *dqx_data*: $q_x$ matrix and gaussian resolution\n\n *qy_data*, *dqy_data*: $q_y$ matrix and gaussian resolution\n\n *data*, *err_data*: $I(q)$ matrix and measurement uncertainty\n\n *mask*: values to exclude from plotting/analysis\n\n *qmin*, *qmax*: range of $q$ values in *x*\n\n *filename*: label for the data line\n\n *_xaxis*, *_xunit*: label and units for the *x* axis\n\n *_yaxis*, *_yunit*: label and units for the *y* axis\n\n *_zaxis*, *_zunit*: label and units for the *y* axis\n\n *Q_unit*, *I_unit*: units for Q and intensity\n\n *x_bins*, *y_bins*: grid steps in *x* and *y* directions\n ', '__init__': <function Data2D.__init__>, 'xaxis': <function Data2D.xaxis>, 'yaxis': <function Data2D.yaxis>, 'zaxis': <function Data2D.zaxis>, '__dict__': <attribute '__dict__' of 'Data2D' objects>, '__weakref__': <attribute '__weakref__' of 'Data2D' objects>, '__annotations__': {}})¶
- __doc__ = '\n 2D data object.\n\n Note that this definition matches the attributes from sasview. Some\n refactoring to allow consistent naming conventions between 1D, 2D and\n SESANS data would be helpful.\n\n **Attributes**\n\n *qx_data*, *dqx_data*: $q_x$ matrix and gaussian resolution\n\n *qy_data*, *dqy_data*: $q_y$ matrix and gaussian resolution\n\n *data*, *err_data*: $I(q)$ matrix and measurement uncertainty\n\n *mask*: values to exclude from plotting/analysis\n\n *qmin*, *qmax*: range of $q$ values in *x*\n\n *filename*: label for the data line\n\n *_xaxis*, *_xunit*: label and units for the *x* axis\n\n *_yaxis*, *_yunit*: label and units for the *y* axis\n\n *_zaxis*, *_zunit*: label and units for the *y* axis\n\n *Q_unit*, *I_unit*: units for Q and intensity\n\n *x_bins*, *y_bins*: grid steps in *x* and *y* directions\n '¶
- __init__(x: ndarray | None = None, y: ndarray | None = None, z: ndarray | None = None, dx: ndarray | None = None, dy: ndarray | None = None, dz: ndarray | None = None) None ¶
- __module__ = 'sasmodels.data'¶
- __weakref__¶
list of weak references to the object
- xaxis(label: str, unit: str) None ¶
set the x axis label and unit
- yaxis(label: str, unit: str) None ¶
set the y axis label and unit
- zaxis(label: str, unit: str) None ¶
set the z axis label and unit
- class sasmodels.data.Detector(pixel_size: Tuple[float, float] = (None, None), distance: float = None)¶
Bases:
object
Detector attributes.
- __dict__ = mappingproxy({'__module__': 'sasmodels.data', '__doc__': '\n Detector attributes.\n ', '__init__': <function Detector.__init__>, '__dict__': <attribute '__dict__' of 'Detector' objects>, '__weakref__': <attribute '__weakref__' of 'Detector' objects>, '__annotations__': {}})¶
- __doc__ = '\n Detector attributes.\n '¶
- __init__(pixel_size: Tuple[float, float] = (None, None), distance: float = None) None ¶
- __module__ = 'sasmodels.data'¶
- __weakref__¶
list of weak references to the object
- class sasmodels.data.Sample¶
Bases:
object
Sample attributes.
- __dict__ = mappingproxy({'__module__': 'sasmodels.data', '__doc__': '\n Sample attributes.\n ', '__init__': <function Sample.__init__>, '__dict__': <attribute '__dict__' of 'Sample' objects>, '__weakref__': <attribute '__weakref__' of 'Sample' objects>, '__annotations__': {}})¶
- __doc__ = '\n Sample attributes.\n '¶
- __init__() None ¶
- __module__ = 'sasmodels.data'¶
- __weakref__¶
list of weak references to the object
- class sasmodels.data.SesansData(**kw)¶
Bases:
Data1D
SESANS data object.
This is just
Data1D
with a wavelength parameter. x is spin echo length and y is polarization (P/P0).
- __annotations__ = {}¶
- __doc__ = '\n SESANS data object.\n\n This is just :class:`Data1D` with a wavelength parameter.\n\n *x* is spin echo length and *y* is polarization (P/P0).\n '¶
- __init__(**kw)¶
- __module__ = 'sasmodels.data'¶
- isSesans = True¶
- class sasmodels.data.Source¶
Bases:
object
Beam attributes.
- __dict__ = mappingproxy({'__module__': 'sasmodels.data', '__doc__': '\n Beam attributes.\n ', '__init__': <function Source.__init__>, '__dict__': <attribute '__dict__' of 'Source' objects>, '__weakref__': <attribute '__weakref__' of 'Source' objects>, '__annotations__': {}})¶
- __doc__ = '\n Beam attributes.\n '¶
- __init__() None ¶
- __module__ = 'sasmodels.data'¶
- __weakref__¶
list of weak references to the object
- class sasmodels.data.Vector(x: float = None, y: float = None, z: float | None = None)¶
Bases:
object
3-space vector of x, y, z
- __dict__ = mappingproxy({'__module__': 'sasmodels.data', '__doc__': '\n 3-space vector of *x*, *y*, *z*\n ', '__init__': <function Vector.__init__>, '__dict__': <attribute '__dict__' of 'Vector' objects>, '__weakref__': <attribute '__weakref__' of 'Vector' objects>, '__annotations__': {}})¶
- __doc__ = '\n 3-space vector of *x*, *y*, *z*\n '¶
- __init__(x: float = None, y: float = None, z: float | None = None) None ¶
- __module__ = 'sasmodels.data'¶
- __weakref__¶
list of weak references to the object
- sasmodels.data._as_numpy(data)¶
- sasmodels.data._build_matrix(self, plottable)¶
Build a matrix for a 2D plot from a vector. Returns a matrix (image) with approximately square binning. Requires 1D array formats of self.data, self.qx_data and self.qy_data, corresponding to the z, x and y axis values respectively.
- sasmodels.data._fillup_pixels(image=None, weights=None)¶
Fill the z values of the empty cells of a 2D image matrix with the average over up to next-nearest-neighbor points.
- Parameters:
image – (2d matrix with some zi = None)
- Returns:
image (2d array)
- TODO:
Find better way to do for-loop below
- sasmodels.data._get_bins(self)¶
Get bins: set x_bins and y_bins on self as 1D index arrays with approximately square binning. Requires 1D array formats of self.qx_data and self.qy_data, corresponding to the x and y axis values respectively.
- sasmodels.data._plot_2d_signal(data: Data2D, signal: ndarray, vmin: float | None = None, vmax: float | None = None, view: str = None) Tuple[float, float] ¶
Plot the target value for the data. This could be the data itself, the theory calculation, or the residuals.
view can be ‘log’ for log scale data, or ‘linear’.
- sasmodels.data._plot_result1D(data: Data1D, theory: ndarray | None, resid: ndarray | None, view: str, use_data: bool, limits: Tuple[float, float] | None = None, Iq_calc: ndarray | None = None) None ¶
Plot the data and residuals for 1D data.
- sasmodels.data._plot_result2D(data: Data2D, theory: ndarray | None, resid: ndarray | None, view: str, use_data: bool, limits: Tuple[float, float] | None = None) None ¶
Plot the data and residuals for 2D data.
- sasmodels.data._plot_result_sesans(data: SesansData, theory: ndarray | None, resid: ndarray | None, view: str | None, use_data: bool, limits: Tuple[float, float] | None = None) None ¶
Plot SESANS results.
- sasmodels.data.demo() None ¶
Load and plot a SAS dataset.
- sasmodels.data.empty_data1D(q: ndarray, resolution: float = 0.0, L: float = 0.0, dL: float = 0.0) Data1D ¶
Create empty 1D data using the given q as the x value.
rms resolution \(\Delta q/q\) defaults to 0%. If wavelength L and rms wavelength divergence dL are defined, then resolution defines rms \(\Delta \theta/\theta\) for the lowest q, with \(\theta\) derived from \(q = 4\pi/\lambda \sin(\theta)\).
- sasmodels.data.empty_data2D(qx: ndarray, qy: ndarray | None = None, resolution: float = 0.0) Data2D ¶
Create empty 2D data using the given mesh.
If qy is missing, create a square mesh with qy=qx.
resolution dq/q defaults to 0%.
- sasmodels.data.empty_sesans(z, wavelength=None, zacceptance=None)¶
- sasmodels.data.load_data(filename: str, index: int = 0) Data1D | Data2D | SesansData ¶
Load data using a sasview loader.
- sasmodels.data.plot_data(data: Data1D | Data2D | SesansData, view: str = None, limits: Tuple[float, float] | None = None) None ¶
Plot data loaded by the sasview loader.
data is a sasview data object, either 1D, 2D or SESANS.
view is log, linear or normed.
limits sets the intensity limits on the plot; if None then the limits are inferred from the data.
- sasmodels.data.plot_theory(data: Data1D | Data2D | SesansData, theory: ndarray | None, resid: ndarray | None = None, view: str | None = None, use_data: bool = True, limits: Tuple[float, float] | None = None, Iq_calc: ndarray | None = None) None ¶
Plot theory calculation.
data is needed to define the graph properties such as labels and units, and to define the data mask.
theory is a matrix of the same shape as the data.
view is log, linear or normed.
use_data is True if the data should be plotted as well as the theory.
limits sets the intensity limits on the plot; if None then the limits are inferred from the data. If (-inf, inf) then use auto limits.
Iq_calc is the raw theory values without resolution smearing.
- sasmodels.data.protect(func: Callable) Callable ¶
Decorator to wrap calls in an exception trapper which prints the exception and continues. Keyboard interrupts are ignored.
- sasmodels.data.set_beam_stop(data: Data1D | Data2D | SesansData, radius: float, outer: float | None = None) None ¶
Add a beam stop of the given radius. If outer, make an annulus.
- sasmodels.data.set_half(data: Data1D | Data2D | SesansData, half: str) None ¶
Select half of the data, either “right” or “left”.
- sasmodels.data.set_top(data: Data1D | Data2D | SesansData, cutoff: float) None ¶
Chop the top off the data, above cutoff.
sasmodels.details module¶
Kernel Call Details¶
When calling sas computational kernels with polydispersity there are a
number of details that need to be sent to the caller. This includes the
list of polydisperse parameters, the number of points in the polydispersity
weight distribution, and which parameter is the “theta” parameter for
polar coordinate integration. The CallDetails
object maintains
this data. Use make_details()
to build a details object which
can be passed to one of the computational kernels.
- class sasmodels.details.CallDetails(model_info: ModelInfo)¶
Bases:
object
Manage the polydispersity information for the kernel call.
Conceptually, a polydispersity calculation is an integral over a mesh in n-D space where n is the number of polydisperse parameters. In order to keep the program responsive, and not crash the GPU, only a portion of the mesh is computed at a time. Meshes with a large number of points will therefore require many calls to the polydispersity loop. Restarting a nested loop in the middle requires that the indices of the individual mesh dimensions can be computed for the current loop location. This is handled by the pd_stride vector, with n//stride giving the loop index and n%stride giving the position in the sub loops.
One of the parameters may be the latitude. When integrating in polar coordinates, the circumference of a circle of latitude decreases from 2 pi r at the equator to 0 at the pole, and the weight associated with a range of latitude values needs to be scaled by this circumference. The scale factor must be updated each time the theta value changes. theta_par indicates which of the values in the parameter vector is the latitude parameter, or -1 if there is no latitude parameter in the model. In practice, the normalization term cancels if the latitude is not a polydisperse parameter.
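The stride arithmetic described for pd_stride can be sketched in plain Python. This is an illustration of the indexing scheme, not the sasmodels implementation; mesh_index is a hypothetical helper name.

```python
# Illustration of the pd_stride indexing scheme, not the sasmodels code.
def mesh_index(n, lengths):
    """Map flat evaluation index *n* to per-dimension mesh indices."""
    strides = []
    step = 1
    for length in lengths:
        strides.append(step)   # stride = product of faster-varying lengths
        step *= length
    # n//stride gives the loop index, n%stride the position in the sub loops
    return [(n // stride) % length for stride, length in zip(strides, lengths)]
```

With this scheme a nested loop can be restarted mid-mesh from nothing more than the flat counter, which is what allows the GPU work to be split into chunks.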
- property num_active¶
Number of active polydispersity loops
- property num_eval¶
Total size of the pd mesh
- property num_weights¶
Total length of all the weight vectors
- parts: List[CallDetails] = None¶
- property pd_length¶
Number of weights for each polydisperse parameter
- property pd_offset¶
Offsets for the individual weight vectors in the set of weights
- property pd_par¶
List of polydisperse parameters
- property pd_stride¶
Stride in the pd mesh for each pd dimension
- show(values=None)¶
Print the polydispersity call details to the console
- property theta_par¶
Location of the theta parameter in the parameter vector
- sasmodels.details.convert_magnetism(parameters: ParameterTable, values: Sequence[ndarray]) bool ¶
Convert magnetism values from polar to rectangular coordinates.
Returns True if any magnetism is present.
- sasmodels.details.correct_theta_weights(parameters: ParameterTable, dispersity: Sequence[ndarray], weights: Sequence[ndarray]) Sequence[ndarray] ¶
Deprecated: theta weights will be computed in the kernel wrapper if they are needed.
If there is a theta parameter, update the weights of that parameter so that the cosine weighting required for polar integration is preserved.
Avoid evaluation strictly at the pole, which would otherwise send the weight to zero. This is probably not a problem in practice (if dispersity is +/- 90, then you probably should be using a 1-D model of the circular average).
Note: scale and background parameters are not included in the tuples for dispersity and weights, so the index is parameters.theta_offset, not parameters.theta_offset+2.
Returns updated weights vectors
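A minimal numpy sketch of the cosine weighting described above, assuming theta is given in degrees; scale_theta_weights and the 1e-6 floor (used to avoid a zero weight exactly at the pole) are illustrative, not the library's actual implementation or value.

```python
import numpy as np

# Hypothetical sketch of the spherical correction: scale the theta weights
# by |cos(theta)|, floored at an illustrative 1e-6 so that evaluation
# exactly at the pole does not send the weight to zero.
def scale_theta_weights(theta_deg, weights):
    correction = np.maximum(np.abs(np.cos(np.radians(theta_deg))), 1e-6)
    return np.asarray(weights) * correction
```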
- sasmodels.details.dispersion_mesh(model_info: ModelInfo, mesh: List[Tuple[float, ndarray, ndarray]]) Tuple[List[ndarray], List[ndarray]] ¶
Create a mesh grid of dispersion parameters and weights.
mesh is a list of (value, dispersity, weights), where the values are the individual parameter values, and (dispersity, weights) is the distribution of parameter values.
Only the volume parameters should be included in this list. Orientation parameters do not affect the calculation of effective radius or volume ratio. This is convenient since it avoids the distinction between value and dispersity that is present in orientation parameters but not shape parameters.
Returns [p1, p2, …], w where pj is a vector of values for parameter j and w is a vector containing the product of the weights for each parameter set in the vector.
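The mesh expansion can be sketched with numpy's meshgrid. This is an illustration of the described semantics, not the library code; dispersion_mesh_sketch is a hypothetical name.

```python
import numpy as np

# Sketch of the described semantics (not the library code): expand
# per-parameter (value, dispersity, weights) triples into the full mesh,
# returning a flat vector of values per parameter and the weight products.
def dispersion_mesh_sketch(mesh):
    dispersity = [d for _, d, _ in mesh]
    weights = [w for _, _, w in mesh]
    pars = [p.flatten() for p in np.meshgrid(*dispersity, indexing='ij')]
    w = np.prod(np.meshgrid(*weights, indexing='ij'), axis=0).flatten()
    return pars, w
```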
- sasmodels.details.make_details(model_info: ModelInfo, length: ndarray, offset: ndarray, num_weights: int) CallDetails ¶
Return a CallDetails object for a polydisperse calculation of the model defined by model_info. Polydispersity is defined by the length of the polydispersity distribution for each parameter and the offset of the distribution in the polydispersity array. Monodisperse parameters should use a polydispersity length of one with weight 1.0. num_weights is the total length of the polydispersity array.
- sasmodels.details.make_kernel_args(kernel: Kernel, mesh: Tuple[List[np.ndarray], List[np.ndarray]]) Tuple[CallDetails, np.ndarray, bool] ¶
Converts (value, dispersity, weight) for each parameter into kernel pars.
Returns a CallDetails object indicating the polydispersity, a data object containing the different values, and the magnetic flag indicating whether any magnetic magnitudes are non-zero. Magnetic vectors (M0, phi, theta) are converted to rectangular coordinates (mx, my, mz).
sasmodels.direct_model module¶
Class interface to the model calculator.
Calling a model is somewhat non-trivial since the functions called depend on the data type. For 1D data the Iq kernel needs to be called, for 2D data the Iqxy kernel needs to be called, and for SESANS data the Iq kernel needs to be called followed by a Hankel transform. Before the kernel is called an appropriate q calculation vector needs to be constructed. This is not the simple q vector where you have measured the data since the resolution calculation will require values beyond the range of the measured data. After the calculation the resolution calculator must be called to return the predicted value for each measured data point.
DirectModel is a callable object that takes parameter=value keyword arguments and returns the appropriate theory values for the data.
DataMixin does the real work of interpreting the data and calling the model calculator. It is used by DirectModel, which uses direct parameter values, and by bumps_model.Experiment, which wraps the parameter values in boxes so that the user can set fitting ranges, etc. on the individual parameters and send the model to the Bumps optimizers.
- class sasmodels.direct_model.DataMixin¶
Bases:
object
DataMixin captures the common aspects of evaluating a SAS model for a particular data set, including calculating Iq and evaluating the resolution function. It is used in particular by DirectModel, which takes SAS model parameters as keyword arguments to the calculator method, and by bumps_model.Experiment, which wraps the model and data for use with the Bumps fitting engine. It is not currently used by sasview_model.SasviewModel since this will require a number of changes to SasView before we can do it.
_interpret_data initializes the data structures necessary to manage the calculations. This sets attributes in the child class such as data_type and resolution.
_calc_theory evaluates the model at the given control values.
_set_data sets the intensity data in the data object, possibly with random noise added. This is useful for simulating a dataset with the results from _calc_theory.
- _calc_theory(pars: Mapping[str, float], cutoff: float = 0.0) ndarray ¶
- _interpret_data(data: Data1D | Data2D | SesansData, model: KernelModel) None ¶
- _set_data(Iq: ndarray, noise: float | None = None) None ¶
- class sasmodels.direct_model.DirectModel(data: Data1D | Data2D | SesansData, model: KernelModel, cutoff: float = 1e-05)¶
Bases:
DataMixin
Create a calculator object for a model.
data is 1D SAS, 2D SAS or SESANS data
model is a model calculator returned from core.load_model()
cutoff is the polydispersity weight cutoff.
- __call__(**pars: float) ndarray ¶
Call self as a function.
- __init__(data: Data1D | Data2D | SesansData, model: KernelModel, cutoff: float = 1e-05) None ¶
- profile(**pars: float) None ¶
Generate a plottable profile.
- simulate_data(noise: float | None = None, **pars: float) None ¶
Generate simulated data for the model.
- sasmodels.direct_model.Gxi(model, xi, **pars)¶
Compute SESANS correlation G' = G(xi) - G(0) for model. See Iq() for details on model and parameters.
- sasmodels.direct_model.Iq(model, q, dq=None, ql=None, qw=None, **pars)¶
Compute I(q) for model. Resolution is dq for pinhole or ql and qw for slit geometry. Use 0 or None for infinite slits.
Model is the name of a builtin or custom model, or a model expression, such as sphere+sphere for a mixture of spheres of different radii, or sphere@hardsphere for concentrated solutions where the dilute approximation no longer applies.
Use additional keywords for model parameters, tagged with _pd, _pd_n, _pd_nsigma, _pd_type to set polydispersity parameters, or _M0, _mphi, _mtheta for magnetic parameters.
This is not intended for use when the same I(q) is evaluated many times with different parameter values. For that you should set up the model with model = build_model(load_model_info(model_name)), set up a data object to define q values and resolution, then use calculator = DirectModel(data, model) to set up a calculator, or problem = bumps.FitProblem(sasmodels.bumps_model.Experiment(data, model)) to define a fit problem for uses with the bumps optimizer. Data can be loaded using the sasdata package, or use one of the empty data generators from sasmodels.data.
Models are cached. Custom models will not be reloaded even if the underlying files have changed. If you are using this in a long running application then you will need to call sasmodels.direct_model._model_cache.clear() to reset the cache and force custom model reload.
- sasmodels.direct_model.Iqxy(model, qx, qy, dqx=None, dqy=None, **pars)¶
Compute I(qx, qy) for model. Resolution is dqx and dqy. See Iq() for details on model and parameters.
- sasmodels.direct_model._direct_calculate(model, data, pars)¶
- sasmodels.direct_model._make_sesans_transform(data)¶
- sasmodels.direct_model._pop_par_weights(parameter: Parameter, values: Dict[str, float], active: bool = True) Tuple[float, ndarray, ndarray] ¶
Generate the distribution for parameter name given the parameter values in pars.
Uses “name”, “name_pd”, “name_pd_type”, “name_pd_n”, “name_pd_sigma” from the pars dictionary for parameter value and parameter dispersion.
- sasmodels.direct_model.call_Fq(calculator: Kernel, pars: Mapping[str, float], cutoff: float = 0.0, mono: bool = False) ndarray ¶
Like call_kernel(), but returning F, F^2, R_eff, V_shell, V_form/V_shell. For solid objects V_shell is equal to V_form and the volume ratio is 1.
Use parameter radius_effective_mode to select the effective radius calculation to use amongst the radius_effective_modes list given in the model.
- sasmodels.direct_model.call_kernel(calculator: Kernel, pars: Mapping[str, float], cutoff: float = 0.0, mono: bool = False) ndarray ¶
Call kernel returned from model.make_kernel with parameters pars.
cutoff is the limiting value for the product of dispersion weights used to perform the multidimensional dispersion calculation more quickly at a slight cost to accuracy. The default value of cutoff=0 integrates over the entire dispersion cube. Using cutoff=1e-5 can be 50% faster, but with an error of about 1%, which is usually less than the measurement uncertainty.
mono is True if polydispersity should be set to none on all parameters.
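A toy illustration of how a cutoff on the dispersion weights trades accuracy for speed; weighted_average and its normalization are hypothetical simplifications, not the kernel's actual loop.

```python
# Toy illustration (hypothetical helper, not the kernel loop): skip mesh
# points whose dispersion weight falls below *cutoff*.
def weighted_average(f, values, weights, cutoff=0.0):
    total = norm = 0.0
    for v, w in zip(values, weights):
        if w <= cutoff:
            continue  # negligible contribution: skip the expensive f(v)
        total += w * f(v)
        norm += w
    return total / norm
```

Skipping the low-weight points avoids the expensive kernel evaluation at a small cost in accuracy, which is the trade-off the cutoff parameter controls.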
- sasmodels.direct_model.call_profile(model_info: ModelInfo, pars: Mapping[str, float] = None) Tuple[ndarray, ndarray, Tuple[str, str]] ¶
Returns the profile x, y, (xlabel, ylabel) representing the model.
- sasmodels.direct_model.get_mesh(model_info: ModelInfo, values: Dict[str, float], dim: str = '1d', mono: bool = False) List[Tuple[float, ndarray, ndarray]] ¶
Retrieve the dispersity mesh described by the parameter set.
Returns a list of (value, dispersity, weights) with one tuple for each parameter in the model call parameters. Inactive parameters return the default value with a weight of 1.0.
- sasmodels.direct_model.main() None ¶
Program to evaluate a particular model at a set of q values.
- sasmodels.direct_model.test_reparameterize() None ¶
Check simple reparameterized models will load and build
- sasmodels.direct_model.test_simple_interface()¶
sasmodels.exception module¶
Utility to add annotations to python exceptions.
- sasmodels.exception.annotate_exception(msg, exc=None)¶
Add an annotation to the current exception, which can then be forwarded to the caller using a bare “raise” statement to raise the annotated exception. If the exception exc is provided, then that exception is the one that is annotated, otherwise sys.exc_info is used.
Example:
>>> D = {}
>>> try:
...     print(D['hello'])
... except:
...     annotate_exception("while accessing 'D'")
...     raise
Traceback (most recent call last):
    ...
KeyError: "hello while accessing 'D'"
sasmodels.generate module¶
SAS model constructor.
Small angle scattering models are defined by a set of kernel functions:
Iq(q, p1, p2, …) returns the scattering at q for a form with particular dimensions averaged over all orientations.
Iqac(qab, qc, p1, p2, …) returns the scattering at qab, qc for a rotationally symmetric form with particular dimensions. qab, qc are determined from shape orientation and scattering angles. This call is used if the shape has orientation parameters theta and phi.
Iqabc(qa, qb, qc, p1, p2, …) returns the scattering at qa, qb, qc for a form with particular dimensions. qa, qb, qc are determined from shape orientation and scattering angles. This call is used if the shape has orientation parameters theta, phi and psi.
Iqxy(qx, qy, p1, p2, …) returns the scattering at qx, qy. Use this to create an arbitrary 2D theory function, needed for q-dependent background functions and for models with non-uniform magnetism.
form_volume(p1, p2, …) returns the volume of the form with particular dimension, or 1.0 if no volume normalization is required.
shell_volume(p1, p2, …) returns the volume of the shell for forms which are hollow.
radius_effective(mode, p1, p2, …) returns the effective radius of the form with particular dimensions. Mode determines the type of effective radius returned, with mode=1 for equivalent volume.
These functions are defined in a kernel module .py script and an associated set of .c files. The model constructor will use them to create models with polydispersity across volume and orientation parameters, and provide scale and background parameters for each model.
C code should be stylized C-99 functions written for OpenCL. All functions need prototype declarations even if they are defined before they are used. Although OpenCL supports #include preprocessor directives, the list of includes should be given as part of the metadata in the kernel module definition. The included files should be listed using a path relative to the kernel module, or as "lib/file.c" if the file is one of the standard includes provided with the sasmodels source. The includes need to be listed in order so that functions are defined before they are used.
Floating point values should be declared as double. For single precision calculations, double will be replaced by float. The single precision conversion will also tag floating point constants with “f” to make them single precision constants. When using integral values in floating point expressions, they should be expressed as floating point values by including a decimal point. This includes 0., 1. and 2.
OpenCL has a sincos function which can improve performance when both the sin and cos values are needed for a particular argument. Since this function does not exist in C99, all use of sincos should be replaced by the macro SINCOS(value, sn, cn) where sn and cn are previously declared double variables. When compiled for systems without OpenCL, SINCOS will be replaced by sin and cos calls. If value is an expression, it will appear twice in this case; whether or not it will be evaluated twice depends on the quality of the compiler.
The kernel module must set variables defining the kernel meta data:
id is an implicit variable formed from the filename. It will be a valid python identifier, and will be used as the reference into the html documentation, with ‘_’ replaced by ‘-‘.
name is the model name as displayed to the user. If it is missing, it will be constructed from the id.
title is a short description of the model, suitable for a tool tip, or a one line model summary in a table of models.
description is an extended description of the model to be displayed while the model parameters are being edited.
parameters is the list of parameters. Parameters in the kernel functions must appear in the same order as they appear in the parameters list. Two additional parameters, scale and background are added to the beginning of the parameter list. They will show up in the documentation as model parameters, but they are never sent to the kernel functions. Note that effect_radius and volfraction must occur first in structure factor calculations.
category is the default category for the model. The category is two level structure, with the form “group:section”, indicating where in the manual the model will be located. Models are alphabetical within their section.
source is the list of C-99 source files that must be joined to create the OpenCL kernel functions. The files defining the functions need to be listed before the files which use the functions.
form_volume, Iq, Iqac, Iqabc are strings containing the C source code for the body of the volume, Iq, and Iqac functions respectively. These can also be defined in the last source file.
Iq, Iqac, Iqabc can instead be python functions defining the kernel. If they are marked as Iq.vectorized = True then the kernel is passed the entire q vector at once; otherwise it is passed one q value at a time. The performance improvement from vectorization is significant.
valid is an expression that evaluates to True if the input parameters are valid (e.g., "bell_radius >= radius" for the barbell or capped cylinder models). The expression can call C functions, including those defined in your model file.
A modelinfo.ModelInfo structure is constructed from the kernel meta data and returned to the caller.
Valid inputs should be identified by the valid expression. Particularly with polydispersity, there are some sets of shape parameters which lead to nonsensical forms, such as a capped cylinder where the cap radius is smaller than the cylinder radius. The polydispersity calculation will ignore these points, effectively chopping the parameter weight distributions at the boundary of the infeasible region. The resulting scattering will be set to background, even for models with no polydispersity. If the valid expression misses some parameter combinations and they reach the kernel, the kernel should probably return NaN rather than zero. Even if the volume also evaluates to zero for these parameters, the distribution weights are still accumulated and the average volume calculation will be slightly off.
The doc string at the start of the kernel module will be used to construct the model documentation web pages. Embedded figures should appear in the subdirectory “img” beside the model definition, and tagged with the kernel module name to avoid collision with other models. Some file systems are case-sensitive, so only use lower case characters for file names and extensions.
Code follows the C99 standard with the following extensions and conditions:
M_PI_180 = pi/180
M_4PI_3 = 4pi/3
square(x) = x*x
cube(x) = x*x*x
sas_sinx_x(x) = sin(x)/x, with sin(0)/0 -> 1
all double precision constants must include the decimal point
all double declarations may be converted to half, float, or long double
FLOAT_SIZE is the number of bytes in the converted variables
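Python mirrors of some of the listed C helpers can be handy for checking kernel math outside OpenCL. These mirrors are illustrative; the C versions are supplied by the sasmodels source.

```python
import math

# Python equivalents of C helpers listed above (illustrative mirrors;
# the C versions ship with the sasmodels source).
M_PI_180 = math.pi / 180.0
M_4PI_3 = 4.0 * math.pi / 3.0

def square(x):
    return x * x

def cube(x):
    return x * x * x

def sas_sinx_x(x):
    # sin(x)/x with the removable singularity sin(0)/0 -> 1
    return 1.0 if x == 0.0 else math.sin(x) / x
```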
load_kernel_module() loads the model definition file and modelinfo.make_model_info() parses it. make_source() converts C-based model definitions to C source code, including the polydispersity integral. model_sources() returns the list of source files the model depends on, and ocl_timestamp() returns the latest time stamp amongst the source files (so you can check if the model needs to be rebuilt).
The function make_doc() extracts the doc string and adds the parameter table to the top. make_figure in sasmodels/doc/genmodel creates the default figure for the model. [These two sets of code should migrate into docs.py so docs can be updated in one place.]
- sasmodels.generate._add_source(source, code, path, lineno=1)¶
Add a file to the list of source code chunks, tagged with path and line.
- sasmodels.generate._build_translation(model_info, table_id='_v', var_prefix='_var_')¶
Interpret parameter translation block, if any.
model_info contains the parameter table and any translation definition for converting between model parameters and the base calculation model.
table_id is the internal label used for the parameter call table. It must be of the form “_table” matching whatever table variable name that appears in the macros such as CALL_VOLUME() and CALL_IQ().
var_prefix is a tag to attach to intermediate variables to avoid collision with variables used inside kernel_iq.
Returns:
subs = {name: expr, …} parameter substitution table for calling into the kernel function
translation = “#define TRANSLATION_VARS(_v) _var_name = expr ….”
validity = “#define VALID(_v) …”
The returned subs is used to generate the substitutions for the CALL_VOLUME etc. parameters via _call_pars(). The returned translation and validity macros need to be included inside the generated model files. They are the same for all variants (1D, 2D, magnetic), so they can be defined once alongside the parameter table, even though they are expanded independently in each variant.
- sasmodels.generate._build_translation_vars(table_id, variables)¶
Build TRANSLATION_VARS macro for C which builds intermediate values.
E.g.,
#define TRANSLATION_VARS(_v) \
    const double _temporary_Re = cbrt(_v.volume/_v.eccentricity/M_4PI_3)
- sasmodels.generate._build_validity_check(eq, table_id, subs)¶
Substitute parameter expressions into validity test, returning the VALID(_table) macro.
- sasmodels.generate._call_pars(pars: str, subs: List[Parameter]) List[str] ¶
Return a list of prefix+parameter from parameter items.
pars is the list of parameters from the base model.
subs contains the translation equations with references to parameters from the new parameter table. If there is no translation, then subs is just a list of references into the base table.
- sasmodels.generate._clean_source_filename(path: str) str ¶
Make the source filename into a canonical, relative form if possible
Remove the common start of the file path (if there is one), yielding the path relative to this file, such as:
./kernel_iq.c
./models/sphere.c
./models/lib/sas_J0.c
This is a format that the compiler/debugger understand for indicating included files with relative paths. Omitting the common parent to the paths means that the irrelevant detail of the temporary directory where the source was unpacked for compilation is not included in pre-compiled models.
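The path cleaning described above can be sketched as a batch operation; clean_source_filenames is a hypothetical helper (the library function itself operates on a single path), but the before/after shapes match the examples listed.

```python
import os

# Hypothetical batch helper sketching the behaviour described above:
# strip the common leading directory so compiler and debugger messages
# show short, stable, relative names.
def clean_source_filenames(paths):
    prefix = os.path.dirname(os.path.commonprefix(paths))
    return ["." + p[len(prefix):].replace(os.sep, "/") for p in paths]
```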
- sasmodels.generate._convert_section_titles_to_boldface(lines: Sequence[str]) Iterator[str] ¶
Do the actual work of identifying and converting section headings.
- sasmodels.generate._convert_type(source: str, type_name: str, constant_flag: str) str ¶
Replace ‘double’ with type_name in source, tagging floating point constants with constant_flag.
- sasmodels.generate._fix_tgmath_int(source: str) str ¶
Replace f(integer) with f(integer.) for sin, cos, pow, etc.
OS X OpenCL complains that it can't resolve the type generic calls to the standard math functions when they are called with integer constants, but this does not happen with the Windows Intel driver, for example. To avoid such platform-dependent failures, automatically promote integers to floats if we recognize them in the source.
The specific functions we look for are:
trigonometric: sin, asin, sinh, asinh, etc., and atan2
exponential: exp, exp2, exp10, expm1, log, log2, log10, logp1
power: pow, pown, powr, sqrt, rsqrt, rootn
special: erf, erfc, tgamma
float: fabs, fmin, fmax
Note that we don’t convert the second argument of dual argument functions: atan2, fmax, fmin, pow, powr. This could potentially be a problem for pow(x, 2), but that case seems to work without change.
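A toy regex in the spirit of this promotion, handling only single-argument calls; the function list is abbreviated and promote_int_args is a hypothetical name, not the library's implementation (which also handles the first argument of two-argument calls).

```python
import re

# Toy sketch (not the sasmodels regex): promote a lone integer argument
# of a type-generic math call to a float literal, e.g. sin(2) -> sin(2.).
_FN = r"\b(sin|cos|tan|exp|log|sqrt|erf|erfc|fabs)\(\s*(\d+)\s*\)"

def promote_int_args(source):
    return re.sub(_FN, r"\1(\2.)", source)
```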
- sasmodels.generate._gen_fn(model_info: ModelInfo, name: str, pars: List[Parameter]) str ¶
Generate a function given pars and body.
Returns the following string:
double fn(double a, double b, ...);
double fn(double a, double b, ...) { .... }
- sasmodels.generate._kernels(kernel: Dict[str, str], call_iq: str, clear_iq: str, call_iqxy: str, clear_iqxy: str, name: str) List[str] ¶
- sasmodels.generate._search(search_path: List[str], filename: str) str ¶
Find filename in search_path.
Raises ValueError if file does not exist.
- sasmodels.generate._split_translation(translation)¶
Process the translation string, which is a sequence of assignments.
Blanks and comments (c-style and python-style) are stripped.
Conditional expressions should use C syntax (! || && ? :) not python.
- sasmodels.generate._tag_float(source, constant_flag)¶
- sasmodels.generate.contains_Fq(source: List[str]) bool ¶
Return True if C source defines “void Fq(“.
- sasmodels.generate.contains_shell_volume(source: List[str]) bool ¶
Return True if C source defines “double shell_volume(“.
- sasmodels.generate.convert_section_titles_to_boldface(s: str) str ¶
Use explicit bold-face rather than section headings so that the table of contents is not polluted with section names from the model documentation.
Sections are identified as the title line followed by a line of punctuation at least as long as the title line.
- sasmodels.generate.convert_type(source: str, dtype: dtype) str ¶
Convert code from double precision to the desired type.
Floating point constants are tagged with ‘f’ for single precision or ‘L’ for long double precision.
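A minimal sketch of the single-precision constant tagging, as an assumed simplification (the real convert_type also replaces the double type name throughout the source and supports long double with ‘L’):

```python
import re

# Match a bare floating point literal (decimal point and/or exponent)
# that is not already part of an identifier or a tagged constant.
_FLOAT_CONST = re.compile(
    r"(?<![\w.])(\d+\.\d*(e[+-]?\d+)?|\d+e[+-]?\d+)(?![\w.])", re.I)

def tag_single(source):
    """Append 'f' to untagged floating point constants in C source."""
    return _FLOAT_CONST.sub(lambda m: m.group(0) + "f", source)
```

Integer constants are deliberately not matched, since tagging them would change integer arithmetic in the kernel.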
- sasmodels.generate.demo_time() None ¶
Show how long it takes to process a model.
- sasmodels.generate.dll_timestamp(model_info: ModelInfo) int ¶
Return a timestamp for the model corresponding to the most recently changed file or dependency.
- sasmodels.generate.find_xy_mode(source: List[str]) bool ¶
Return the xy mode as qa, qac, qabc or qxy.
Note this is not a C parser, and so can be easily confused by non-standard syntax. Also, it will incorrectly identify the following as having 2D models:
/* double Iqac(qab, qc, ...) { ... fill this in later ... } */
If you want to comment out the function, use // on the front of the line:
/* // double Iqac(qab, qc, ...) { ... fill this in later ... } */
- sasmodels.generate.format_units(units: str) str ¶
Convert units into ReStructured Text format.
- sasmodels.generate.get_data_path(external_dir, target_file)¶
Search for the target file in the installed application.
Search first in the location of the generate module in case we are running directly from the distribution. Search next to the python executable for Windows installs. Search in the ../Resources directory next to the executable for Mac OS X installs.
- sasmodels.generate.indent(s: str, depth: int) str ¶
Indent a string of text with depth additional spaces on each line.
- sasmodels.generate.kernel_name(model_info: ModelInfo, variant: str) str ¶
Name of the exported kernel symbol.
variant is “Iq”, “Iqxy” or “Imagnetic”.
- sasmodels.generate.load_kernel_module(model_name: str) ModuleType ¶
Return the kernel module named in model_name.
If the name ends in .py then load it as a custom model using
custom.__init__.load_custom_kernel_module()
, otherwise load it as a builtin from sasmodels.models.
- sasmodels.generate.load_template(filename: str) str ¶
Load template file from sasmodels resource directory.
- sasmodels.generate.main() None ¶
Program which prints the source produced by the model.
- sasmodels.generate.make_partable(pars: List[Parameter]) str ¶
Generate the parameter table to include in the sphinx documentation.
- sasmodels.generate.make_source(model_info: ModelInfo) Dict[str, str] ¶
Generate the OpenCL/ctypes kernel from the module info.
Uses source files found in the given search path. Returns None if this is a pure python model, with no C source components.
- sasmodels.generate.model_sources(model_info: ModelInfo) List[str] ¶
Return a list of the sources file paths for the module.
- sasmodels.generate.ocl_timestamp(model_info: ModelInfo) int ¶
Return a timestamp for the model corresponding to the most recently changed file or dependency.
Note that this does not look at the time stamps for the OpenCL header information since that need not trigger a recompile of the DLL.
- sasmodels.generate.read_text(f)¶
- sasmodels.generate.set_integration_size(info: ModelInfo, n: int) None ¶
Update the model definition, replacing the Gaussian integration with one of a different size.
Note: this really ought to be a method in modelinfo, but that leads to import loops.
- sasmodels.generate.tag_source(source: str) str ¶
Return a unique tag for the source code.
- sasmodels.generate.test_tag_float()¶
Check that floating point constants are identified and tagged with ‘f’
- sasmodels.generate.view_html(model_name: str) None ¶
Load the model definition and view its help.
sasmodels.gengauss module¶
Generate the Gauss-Legendre integration points and save them as a C file.
- sasmodels.gengauss.gengauss(n, path)¶
Save the Gauss-Legendre integration points for length n into file path.
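The table itself can be produced with numpy’s Gauss-Legendre routine. The C layout below is illustrative only; the actual array names and file format written by gengauss() are not shown here:

```python
import numpy as np

def gauss_table(n):
    """Return (points, weights) for n-point Gauss-Legendre on [-1, 1]."""
    z, w = np.polynomial.legendre.leggauss(n)
    return z, w

def write_gauss_c(n, path):
    """Write the table as C arrays (hypothetical layout, for illustration)."""
    z, w = gauss_table(n)
    lines = [
        "// %d-point Gauss-Legendre quadrature" % n,
        "const int GAUSS_N = %d;" % n,
        "const double GAUSS_Z[] = {%s};" % ", ".join("%.15e" % v for v in z),
        "const double GAUSS_W[] = {%s};" % ", ".join("%.15e" % v for v in w),
    ]
    with open(path, "w") as fp:
        fp.write("\n".join(lines) + "\n")
```

An n-point rule integrates polynomials up to degree 2n-1 exactly, which is why the weights sum to 2 (the length of the interval).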
sasmodels.guyou module¶
Convert between latitude-longitude and Guyou map coordinates.
- sasmodels.guyou._ellipticJi(u, v, m)¶
- sasmodels.guyou._ellipticJi_imag(v, m)¶
- sasmodels.guyou._ellipticJi_real(u, m)¶
- sasmodels.guyou.ellipticFi(phi, psi, m)¶
Returns F(phi+ipsi|m). See Abramowitz and Stegun, 17.4.11.
- sasmodels.guyou.ellipticJi(u, v, m)¶
Returns [sn, cn, dn](u + iv|m).
- sasmodels.guyou.guyou(lam, phi)¶
Transform from (latitude, longitude) to point (x, y)
- sasmodels.guyou.guyou_invert(x, y)¶
Transform from point (x, y) on plot to (latitude, longitude)
- sasmodels.guyou.main()¶
Show the Guyou transformation
- sasmodels.guyou.plot_grid()¶
Plot the latitude-longitude grid for Guyou transform
sasmodels.jitter module¶
Jitter Explorer¶
Application to explore orientation angle and angular dispersity.
From the command line:
# Show docs
python -m sasmodels.jitter --help
# Guyou projection jitter, uniform over 20 degree theta and 10 in phi
python -m sasmodels.jitter --projection=guyou --dist=uniform --jitter=20,10,0
From a jupyter cell:
import ipyvolume as ipv
from sasmodels import jitter
import importlib; importlib.reload(jitter)
jitter.set_plotter("ipv")
size = (10, 40, 100)
view = (20, 0, 0)
#size = (15, 15, 100)
#view = (60, 60, 0)
dview = (0, 0, 0)
#dview = (5, 5, 0)
#dview = (15, 180, 0)
#dview = (180, 15, 0)
projection = 'equirectangular'
#projection = 'azimuthal_equidistance'
#projection = 'guyou'
#projection = 'sinusoidal'
#projection = 'azimuthal_equal_area'
dist = 'uniform'
#dist = 'gaussian'
jitter.run(size=size, view=view, jitter=dview, dist=dist, projection=projection)
#filename = projection+('_theta' if dview[0] == 180 else '_phi' if dview[1] == 180 else '')
#ipv.savefig(filename+'.png')
- sasmodels.jitter.PLOT_ENGINE(calculator, draw_shape, size, view, jitter, dist, mesh, projection)¶
- class sasmodels.jitter.Quaternion(w, r)¶
Bases:
object
Quaternion(w, r) = w + ir[0] + jr[1] + kr[2]
Quaternion.from_angle_axis(theta, r) for a rotation of angle theta about an axis oriented toward the direction r. This defines a unit quaternion, normalizing \(r\) to the unit vector \(\hat r\), and setting quaternion \(Q = \cos \theta + \sin \theta \hat r\)
Quaternion objects can be multiplied, which applies a rotation about the given axis, allowing composition of rotations without risk of gimbal lock. The resulting quaternion is applied to a set of points using Q.rot(v).
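The construction above can be sketched in a few lines. This is a hedged illustration using the standard half-angle form for a rotation quaternion, not the sasmodels implementation itself:

```python
import numpy as np

def from_angle_axis(theta, r):
    """Unit quaternion (w, xyz) for rotation by theta degrees about axis r.

    Uses the standard half-angle construction for a rotation quaternion.
    """
    half = np.radians(theta) / 2
    r = np.asarray(r, dtype=float)
    r = r / np.linalg.norm(r)
    return np.cos(half), np.sin(half) * r

def rot(q, v):
    """Rotate vector v by unit quaternion q = (w, xyz)."""
    w, r = q
    v = np.asarray(v, dtype=float)
    # Expansion of q v q^-1 for a unit quaternion.
    return v + 2 * np.cross(r, np.cross(r, v) + w * v)
```

For example, rotating the x axis by 90 degrees about z carries it onto the y axis.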
- __dict__ = mappingproxy({'__module__': 'sasmodels.jitter', '__doc__': '\n Quaternion(w, r) = w + ir[0] + jr[1] + kr[2]\n\n Quaternion.from_angle_axis(theta, r) for a rotation of angle theta about\n an axis oriented toward the direction r. This defines a unit quaternion,\n normalizing $r$ to the unit vector $\\hat r$, and setting quaternion\n $Q = \\cos \\theta + \\sin \\theta \\hat r$\n\n Quaternion objects can be multiplied, which applies a rotation about the\n given axis, allowing composition of rotations without risk of gimbal lock.\n The resulting quaternion is applied to a set of points using *Q.rot(v)*.\n ', '__init__': <function Quaternion.__init__>, 'from_angle_axis': <staticmethod(<function Quaternion.from_angle_axis>)>, '__mul__': <function Quaternion.__mul__>, 'rot': <function Quaternion.rot>, 'conj': <function Quaternion.conj>, 'inv': <function Quaternion.inv>, 'norm': <function Quaternion.norm>, '__str__': <function Quaternion.__str__>, '__dict__': <attribute '__dict__' of 'Quaternion' objects>, '__weakref__': <attribute '__weakref__' of 'Quaternion' objects>, '__annotations__': {}})¶
- __doc__ = '\n Quaternion(w, r) = w + ir[0] + jr[1] + kr[2]\n\n Quaternion.from_angle_axis(theta, r) for a rotation of angle theta about\n an axis oriented toward the direction r. This defines a unit quaternion,\n normalizing $r$ to the unit vector $\\hat r$, and setting quaternion\n $Q = \\cos \\theta + \\sin \\theta \\hat r$\n\n Quaternion objects can be multiplied, which applies a rotation about the\n given axis, allowing composition of rotations without risk of gimbal lock.\n The resulting quaternion is applied to a set of points using *Q.rot(v)*.\n '¶
- __init__(w, r)¶
- __module__ = 'sasmodels.jitter'¶
- __mul__(other)¶
Multiply quaternions
- __str__()¶
Return str(self).
- __weakref__¶
list of weak references to the object
- conj()¶
Conjugate quaternion
- static from_angle_axis(theta, r)¶
Build quaternion as rotation theta about axis r
- inv()¶
Inverse quaternion
- norm()¶
Quaternion length
- rot(v)¶
Transform point v by quaternion
- sasmodels.jitter.R_to_xyz(R)¶
Return phi, theta, psi Tait-Bryan angles corresponding to the given rotation matrix.
Extracting Euler Angles from a Rotation Matrix Mike Day, Insomniac Games https://d3cw3dd2w32x2b.cloudfront.net/wp-content/uploads/2012/07/euler-angles1.pdf Based on: Shoemake’s “Euler Angle Conversion”, Graphics Gems IV, pp. 222-229
- sasmodels.jitter.Rx(angle)¶
Construct a matrix to rotate points about x by angle degrees.
- sasmodels.jitter.Ry(angle)¶
Construct a matrix to rotate points about y by angle degrees.
- sasmodels.jitter.Rz(angle)¶
Construct a matrix to rotate points about z by angle degrees.
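The three builders follow the standard right-handed rotation matrices, with angles in degrees. This sketch mirrors, but is not, the sasmodels code:

```python
import numpy as np

def Rx(angle):
    """Rotation matrix about the x axis, angle in degrees."""
    a = np.radians(angle)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(angle):
    """Rotation matrix about the y axis, angle in degrees."""
    a = np.radians(angle)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(angle):
    """Rotation matrix about the z axis, angle in degrees."""
    a = np.radians(angle)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
```

Points stored as 3 x n matrices are transformed by left-multiplication, e.g. Rz(psi) @ points.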
- sasmodels.jitter._build_sc()¶
- sasmodels.jitter._draw_crystal(axes, size, view, jitter, atoms=None)¶
- sasmodels.jitter._ipv_fix_color(kw)¶
- sasmodels.jitter._ipv_plot(calculator, draw_shape, size, view, jitter, dist, mesh, projection)¶
- sasmodels.jitter._ipv_set_transparency(kw, obj)¶
- sasmodels.jitter._mpl_plot(calculator, draw_shape, size, view, jitter, dist, mesh, projection)¶
- sasmodels.jitter.apply_jitter(jitter, points)¶
Apply the jitter transform to a set of points.
Points are stored in a 3 x n numpy matrix, not a numpy array or tuple.
- sasmodels.jitter.build_model(model_name, n=150, qmax=0.5, **pars)¶
Build a calculator for the given shape.
model_name is any sasmodels model. n and qmax define an n x n mesh on which to evaluate the model. The remaining parameters are stored in the returned calculator as calculator.pars. They are used by
draw_scattering()
to set the non-orientation parameters in the calculation. Returns a calculator function which takes a dictionary of parameters and produces Iqxy. The Iqxy value needs to be reshaped to an n x n matrix for plotting. See the
direct_model.DirectModel
class for details.
- sasmodels.jitter.clipped_range(data, portion=1.0, mode='central')¶
Determine range from data.
If portion is 1, use full range, otherwise use the center of the range or the top of the range, depending on whether mode is ‘central’ or ‘top’.
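One plausible reading of this behaviour, as a sketch (the actual implementation may differ, for example by clipping with percentiles rather than the raw min/max):

```python
import numpy as np

def clipped_range(data, portion=1.0, mode='central'):
    """Return (lo, hi) covering *portion* of the data range.

    mode='central' keeps the middle of the range; mode='top' keeps the top.
    This is an illustrative sketch, not the sasmodels implementation.
    """
    lo, hi = np.min(data), np.max(data)
    if portion == 1.0:
        return lo, hi
    width = (hi - lo) * portion
    if mode == 'central':
        mid = (hi + lo) / 2
        return mid - width / 2, mid + width / 2
    # mode == 'top'
    return hi - width, hi
```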
- sasmodels.jitter.draw_axes(axes, origin=(-1, -1, -1), length=(2, 2, 2))¶
Draw wireframe axes lines, with given origin and length
- sasmodels.jitter.draw_bcc(axes, size, view, jitter, steps=None, alpha=1)¶
Draw points for body-centered cubic paracrystal
- sasmodels.jitter.draw_beam(axes, view=(0, 0), alpha=0.5, steps=25)¶
Draw the beam going from source at (0, 0, 1) to detector at (0, 0, -1)
- sasmodels.jitter.draw_box(axes, size, view)¶
Draw a wireframe box at a particular view.
- sasmodels.jitter.draw_ellipsoid(axes, size, view, jitter, steps=25, alpha=1)¶
Draw an ellipsoid.
- sasmodels.jitter.draw_fcc(axes, size, view, jitter, steps=None, alpha=1)¶
Draw points for face-centered cubic paracrystal
- sasmodels.jitter.draw_jitter(axes, view, jitter, dist='gaussian', size=(0.1, 0.4, 1.0), draw_shape=<function draw_parallelepiped>, projection='equirectangular', alpha=0.8, views=None)¶
Represent jitter as a set of shapes at different orientations.
- sasmodels.jitter.draw_labels(axes, view, jitter, text)¶
Draw text at a particular location.
- sasmodels.jitter.draw_mesh(axes, view, jitter, radius=1.2, n=11, dist='gaussian', projection='equirectangular')¶
Draw the dispersion mesh showing the theta-phi orientations at which the model will be evaluated.
- sasmodels.jitter.draw_parallelepiped(axes, size, view, jitter, steps=None, color=(0.6, 1.0, 0.6), alpha=1)¶
Draw a parallelepiped surface, with view and jitter.
- sasmodels.jitter.draw_person_on_sphere(axes, view, height=0.5, radius=1.0)¶
Draw a person on the surface of a sphere.
view indicates (latitude, longitude, orientation)
- sasmodels.jitter.draw_sc(axes, size, view, jitter, steps=None, alpha=1)¶
Draw points for simple cubic paracrystal
- sasmodels.jitter.draw_scattering(calculator, axes, view, jitter, dist='gaussian')¶
Plot the scattering for the particular view.
calculator is returned from
build_model()
. axes are the 3D axes on which the data will be plotted. view and jitter are the current orientation and orientation dispersity. dist is one of the sasmodels weight distributions.
- sasmodels.jitter.draw_sphere(axes, radius=1.0, steps=25, center=(0, 0, 0), color='w', alpha=1.0)¶
Draw a sphere
- sasmodels.jitter.get_projection(projection)¶
Jitter projections (see <https://en.wikipedia.org/wiki/List_of_map_projections>):
- equirectangular (standard latitude-longitude mesh)
<https://en.wikipedia.org/wiki/Equirectangular_projection> Allows free movement in phi (around the equator), but theta is limited to +/- 90, and points are cos-weighted. Jitter in phi is uniform in weight along a line of latitude. With small theta and phi ranging over +/- 180 this forms a wobbling disk. With small phi and theta ranging over +/- 90 this forms a wedge like a slice of an orange.
- azimuthal_equidistance (Postel)
<https://en.wikipedia.org/wiki/Azimuthal_equidistant_projection> Preserves distance from center, and so is an excellent map for representing a bivariate gaussian on the surface. Theta and phi operate identically, cutting wedges from the antipode of the viewing angle. This unfortunately does not allow free movement in either theta or phi since the orthogonal wobble decreases to 0 as the body rotates through 180 degrees.
- sinusoidal (Sanson-Flamsteed, Mercator equal-area)
<https://en.wikipedia.org/wiki/Sinusoidal_projection> Preserves arc length with latitude, giving bad behaviour at theta near +/- 90. Theta and phi operate somewhat differently, so a system with a-b-c dtheta-dphi-dpsi will not give the same value as one with b-a-c dphi-dtheta-dpsi, as would be the case for azimuthal equidistance. Free movement using theta or phi uniform over +/- 180 will work, but not as well as equirectangular phi, with theta being slightly worse. Computationally it is much cheaper for wide theta-phi meshes since it excludes points which lie outside the sinusoid near theta +/- 90 rather than packing them close together as in equirectangular. Note that the poles will be slightly overweighted for theta > 90 with the circle from theta at 90+dt winding backwards around the pole, overlapping the circle from theta at 90-dt.
- Guyou (hemisphere-in-a-square) not weighted
<https://en.wikipedia.org/wiki/Guyou_hemisphere-in-a-square_projection> With tiling, allows rotation in phi or theta through +/- 180, with uniform spacing. Both theta and phi allow free rotation, with wobble in the orthogonal direction reasonably well behaved (though not as good as equirectangular phi). The forward/reverse transformations rely on elliptic integrals that are somewhat expensive, so the behaviour has to be very good to justify the cost and complexity. The weighting function for each point has not yet been computed. Note: run the module guyou.py directly and it will show the forward and reverse mappings.
- azimuthal_equal_area incomplete
<https://en.wikipedia.org/wiki/Lambert_azimuthal_equal-area_projection> Preserves the relative density of the surface patches. Not that useful and not completely implemented
- Gauss-Kreuger not implemented
<https://en.wikipedia.org/wiki/Transverse_Mercator_projection#Ellipsoidal_transverse_Mercator> Should allow free movement in theta, but phi is distorted.
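The cos-weighting mentioned for the equirectangular projection can be illustrated directly: weights follow the latitude and vanish beyond +/- 90 degrees. This is an illustrative sketch only, not the sasmodels weighting code:

```python
import numpy as np

def equirect_weights(theta_deg):
    """Weight for points at latitude theta on an equirectangular mesh.

    Lines of latitude shrink as cos(theta), so equally spaced mesh points
    near the poles must be down-weighted for a uniform sphere distribution.
    """
    w = np.cos(np.radians(theta_deg))
    return np.clip(w, 0.0, None)  # zero weight beyond +/- 90 degrees
```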
- sasmodels.jitter.ipv_axes()¶
Build a matplotlib style Axes interface for ipyvolume
- sasmodels.jitter.main()¶
Command line interface to the jitter viewer.
- sasmodels.jitter.make_image(z, kw)¶
Convert numpy array z into a PIL RGB image.
- sasmodels.jitter.make_vec(*args)¶
Turn all elements of args into numpy arrays
- sasmodels.jitter.map_colors(z, kw)¶
Process matplotlib-style colour arguments.
Pulls ‘cmap’, ‘alpha’, ‘vmin’, and ‘vmax’ from the kw dictionary, setting kw[‘color’] to an RGB array. These are ignored if ‘c’ or ‘color’ are set inside kw.
- sasmodels.jitter.orient_relative_to_beam(view, points)¶
Apply the view transform to a set of points.
Points are stored in a 3 x n numpy matrix, not a numpy array or tuple.
- sasmodels.jitter.orient_relative_to_beam_quaternion(view, points)¶
Apply the view transform to a set of points.
Points are stored in a 3 x n numpy matrix, not a numpy array or tuple.
This variant uses quaternions rather than rotation matrices for the computation. It works but it is not used because it doesn’t solve any problems. The challenge of mapping theta/phi/psi to SO(3) does not disappear by calculating the transform differently.
- sasmodels.jitter.run(model_name='parallelepiped', size=(10, 40, 100), view=(0, 0, 0), jitter=(0, 0, 0), dist='gaussian', mesh=30, projection='equirectangular')¶
Show an interactive orientation and jitter demo.
model_name is one of: sphere, ellipsoid, triaxial_ellipsoid, parallelepiped, cylinder, or sc/fcc/bcc_paracrystal
size gives the dimensions (a, b, c) of the shape.
view gives the initial view (theta, phi, psi) of the shape.
jitter gives the initial jitter (dtheta, dphi, dpsi) of the shape.
dist is the type of dispersion: gaussian, rectangle, or uniform.
mesh is the number of points in the dispersion mesh.
projection is the map projection to use for the mesh: equirectangular, sinusoidal, guyou, azimuthal_equidistance, or azimuthal_equal_area.
- sasmodels.jitter.select_calculator(model_name, n=150, size=(10, 40, 100))¶
Create a model calculator for the given shape.
model_name is one of sphere, cylinder, ellipsoid, triaxial_ellipsoid, parallelepiped or bcc_paracrystal. n is the number of points to use in the q range. qmax is chosen based on model parameters for the given model to show something interesting.
Returns calculator and tuple size (a,b,c) giving minor and major equatorial axes and polar axis respectively. See
build_model()
for details on the returned calculator.
- sasmodels.jitter.set_plotter(name)¶
Set the plotting engine to matplotlib or ipyvolume (equivalently, mpl or ipv).
- sasmodels.jitter.test_qrot()¶
Quaternion checks
- sasmodels.jitter.transform_xyz(view, jitter, x, y, z)¶
Send a set of (x,y,z) points through the jitter and view transforms.
sasmodels.kernel module¶
Execution kernel interface¶
KernelModel
defines the interface to all kernel models.
In particular, each model should provide a KernelModel.make_kernel()
call which returns an executable kernel, Kernel
, that operates
on the given set of q_vector inputs. On completion of the computation,
the kernel should be released, which also releases the inputs.
- class sasmodels.kernel.Kernel¶
Bases:
object
Instantiated model for the compute engine, applied to a particular q.
Subclasses should define __init__() to set up the kernel inputs, and _call_kernel() to evaluate the kernel:
def __init__(self, ...):
    ...
    self.q_input = <q-value class with nq attribute>
    self.info = <ModelInfo object>
    self.dim = <'1d' or '2d'>
    self.dtype = <kernel.dtype>
    size = 2*self.q_input.nq+4 if self.info.have_Fq else self.q_input.nq+4
    size = size + <extra padding if needed for kernel>
    self.result = np.empty(size, dtype=self.dtype)

def _call_kernel(self, call_details, values, cutoff, magnetic,
                 radius_effective_mode):
    # type: (CallDetails, np.ndarray, np.ndarray, float, bool, int) -> None
    ... # call <kernel>
    nq = self.q_input.nq
    if self.info.have_Fq:  # models that compute both F and F^2
        end = 2*nq if have_Fq else nq
        self.result[0:end:2] = F**2
        self.result[1:end:2] = F
    else:
        end = nq
        self.result[0:end] = Fsq
    self.result[end + 0] = total_weight
    self.result[end + 1] = form_volume
    self.result[end + 2] = shell_volume
    self.result[end + 3] = radius_effective
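The result buffer laid out above can be unpacked on the host side roughly as follows. This is a sketch of the layout only, not the actual sasmodels code:

```python
import numpy as np

def unpack_result(result, nq, have_Fq):
    """Split a kernel result buffer into its components.

    When have_Fq is set the first 2*nq slots interleave F^2 and F;
    otherwise the first nq slots hold F^2.  Four scalars follow.
    """
    if have_Fq:
        end = 2 * nq
        Fsq, F = result[0:end:2], result[1:end:2]
    else:
        end = nq
        Fsq, F = result[0:end], None
    total_weight, form_volume, shell_volume, radius_effective = result[end:end + 4]
    return F, Fsq, total_weight, form_volume, shell_volume, radius_effective
```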
- Fq(call_details: CallDetails, values: ndarray, cutoff: ndarray, magnetic: float, radius_effective_mode: bool = 0) ndarray ¶
Returns <F(q)>, <F(q)^2>, effective radius, shell volume and form:shell volume ratio. The <F(q)> term may be None if the form factor does not support direct computation of \(F(q)\).
\(P(q) = <F^2(q)>/<V>\) is used for structure factor calculations,
\[I(q) = \text{scale} \cdot P(q) \cdot S(q) + \text{background}\]For the beta approximation, this becomes
\[I(q) = \text{scale} P (1 + <F>^2/<F^2> (S - 1)) + \text{background} = \text{scale}/<V> (<F^2> + <F>^2 (S - 1)) + \text{background}\]\(<F(q)>\) and \(<F^2(q)>\) are averaged by polydispersity in shape and orientation, with each configuration \(x_k\) having form factor \(F(q, x_k)\), weight \(w_k\) and volume \(V_k\). The result is:
\[P(q)=\frac{\sum w_k F^2(q, x_k) / \sum w_k}{\sum w_k V_k / \sum w_k}\]The form factor itself is scaled by volume and contrast to compute the total scattering. This is then squared, and the volume weighted F^2 is then normalized by volume F. For a given density, the number of scattering centers is assumed to scale linearly with volume. Later scaling the resulting \(P(q)\) by the volume fraction of particles gives the total scattering on an absolute scale. Most models incorporate the volume fraction into the overall scale parameter. An exception is vesicle, which includes the volume fraction parameter in the model itself, scaling \(F\) by \(\surd V_f\) so that the math for the beta approximation works out.
By scaling \(P(q)\) by total weight \(\sum w_k\), there is no need to make sure that the polydispersity distributions normalize to one. In particular, any distribution values \(x_k\) outside the valid domain of \(F\) will not be included, and the distribution will be implicitly truncated. This is controlled by the parameter limits defined in the model (which truncate the distribution before calling the kernel) as well as any region excluded using the INVALID macro defined within the model itself.
The volume used in the polydispersity calculation is the form volume for solid objects or the shell volume for hollow objects. Shell volume should be used within \(F\) so that the normalizing scale represents the volume fraction of the shell rather than the entire form. This corresponds to the volume fraction of shell-forming material added to the solvent.
The calculation of \(S\) requires the effective radius and the volume fraction of the particles. The model can have several different ways to compute effective radius, with the radius_effective_mode parameter used to select amongst them. The volume fraction of particles should be determined from the total volume fraction of the form, not just the shell volume fraction. This makes a difference for hollow shapes, which need to scale the volume fraction by the returned volume ratio when computing \(S\). For solid objects, the shell volume is set to the form volume so this scale factor evaluates to one and so can be used for both hollow and solid shapes.
- Iq(call_details: CallDetails, values: ndarray, cutoff: ndarray, magnetic: float) ndarray ¶
Returns I(q) from the polydisperse average scattering.
\[I(q) = \text{scale} \cdot P(q) + \text{background}\]With the correct choice of model and contrast, setting scale to the volume fraction \(V_f\) of particles should match the measured absolute scattering. Some models (e.g., vesicle) have volume fraction built into the model, and do not need an additional scale.
- __call__(call_details: CallDetails, values: ndarray, cutoff: ndarray, magnetic: float) ndarray ¶
Returns I(q) from the polydisperse average scattering.
\[I(q) = \text{scale} \cdot P(q) + \text{background}\]With the correct choice of model and contrast, setting scale to the volume fraction \(V_f\) of particles should match the measured absolute scattering. Some models (e.g., vesicle) have volume fraction built into the model, and do not need an additional scale.
- __dict__ = mappingproxy({'__module__': 'sasmodels.kernel', '__doc__': "\n Instantiated model for the compute engine, applied to a particular *q*.\n\n Subclasses should define *__init__()* to set up the kernel inputs, and\n *_call_kernel()* to evaluate the kernel::\n\n def __init__(self, ...):\n ...\n self.q_input = <q-value class with nq attribute>\n self.info = <ModelInfo object>\n self.dim = <'1d' or '2d'>\n self.dtype = <kernel.dtype>\n size = 2*self.q_input.nq+4 if self.info.have_Fq else self.q_input.nq+4\n size = size + <extra padding if needed for kernel>\n self.result = np.empty(size, dtype=self.dtype)\n\n def _call_kernel(self, call_details, values, cutoff, magnetic,\n radius_effective_mode):\n # type: (CallDetails, np.ndarray, np.ndarray, float, bool, int) -> None\n ... # call <kernel>\n nq = self.q_input.nq\n if self.info.have_Fq: # models that compute both F and F^2\n end = 2*nq if have_Fq else nq\n self.result[0:end:2] = F**2\n self.result[1:end:2] = F\n else:\n end = nq\n self.result[0:end] = Fsq\n self.result[end + 0] = total_weight\n self.result[end + 1] = form_volume\n self.result[end + 2] = shell_volume\n self.result[end + 3] = radius_effective\n ", 'dim': None, 'info': None, 'dtype': None, 'q_input': None, 'result': None, 'Iq': <function Kernel.Iq>, '__call__': <function Kernel.Iq>, 'Fq': <function Kernel.Fq>, 'release': <function Kernel.release>, '_call_kernel': <function Kernel._call_kernel>, '__dict__': <attribute '__dict__' of 'Kernel' objects>, '__weakref__': <attribute '__weakref__' of 'Kernel' objects>, '__annotations__': {'dim': 'str', 'info': 'ModelInfo', 'dtype': 'np.dtype', 'q_input': 'Any', 'result': 'np.ndarray'}})¶
- __doc__ = "\n Instantiated model for the compute engine, applied to a particular *q*.\n\n Subclasses should define *__init__()* to set up the kernel inputs, and\n *_call_kernel()* to evaluate the kernel::\n\n def __init__(self, ...):\n ...\n self.q_input = <q-value class with nq attribute>\n self.info = <ModelInfo object>\n self.dim = <'1d' or '2d'>\n self.dtype = <kernel.dtype>\n size = 2*self.q_input.nq+4 if self.info.have_Fq else self.q_input.nq+4\n size = size + <extra padding if needed for kernel>\n self.result = np.empty(size, dtype=self.dtype)\n\n def _call_kernel(self, call_details, values, cutoff, magnetic,\n radius_effective_mode):\n # type: (CallDetails, np.ndarray, np.ndarray, float, bool, int) -> None\n ... # call <kernel>\n nq = self.q_input.nq\n if self.info.have_Fq: # models that compute both F and F^2\n end = 2*nq if have_Fq else nq\n self.result[0:end:2] = F**2\n self.result[1:end:2] = F\n else:\n end = nq\n self.result[0:end] = Fsq\n self.result[end + 0] = total_weight\n self.result[end + 1] = form_volume\n self.result[end + 2] = shell_volume\n self.result[end + 3] = radius_effective\n "¶
- __module__ = 'sasmodels.kernel'¶
- __weakref__¶
list of weak references to the object
- _call_kernel(call_details: CallDetails, values: ndarray, cutoff: ndarray, magnetic: float, radius_effective_mode: bool) None ¶
Call the kernel. Subclasses defining kernels for particular execution engines need to provide an implementation for this.
- dim: str = None¶
Kernel dimension, either “1d” or “2d”.
- dtype: dtype = None¶
Numerical precision for the computation.
- release() None ¶
Free resources associated with the kernel instance.
- result: ndarray = None¶
Place to hold result of _call_kernel() for subclass.
- class sasmodels.kernel.KernelModel¶
Bases:
object
Model definition for the compute engine.
- __annotations__ = {'dtype': 'np.dtype', 'info': 'ModelInfo'}¶
- __dict__ = mappingproxy({'__module__': 'sasmodels.kernel', '__doc__': '\n Model definition for the compute engine.\n ', 'info': None, 'dtype': None, 'make_kernel': <function KernelModel.make_kernel>, 'release': <function KernelModel.release>, '__dict__': <attribute '__dict__' of 'KernelModel' objects>, '__weakref__': <attribute '__weakref__' of 'KernelModel' objects>, '__annotations__': {'info': 'ModelInfo', 'dtype': 'np.dtype'}})¶
- __doc__ = '\n Model definition for the compute engine.\n '¶
- __module__ = 'sasmodels.kernel'¶
- __weakref__¶
list of weak references to the object
- dtype: dtype = None¶
- make_kernel(q_vectors: List[ndarray]) Kernel ¶
Instantiate a kernel for evaluating the model at q_vectors.
- release() None ¶
Free resources associated with the kernel.
sasmodels.kernelcl module¶
GPU driver for C kernels
TODO: docs are out of date
There should be a single GPU environment running on the system. This
environment is constructed on the first call to environment()
, and the
same environment is returned on each call.
After retrieving the environment, the next step is to create the kernel.
This is done with a call to GpuEnvironment.compile_program()
, which
returns the type of data used by the kernel.
Next a GpuInput
object should be created with the correct kind
of data. This data object can be used by multiple kernels, for example,
if the target model is a weighted sum of multiple kernels. The data
should include any extra evaluation points required to compute the proper
data smearing. This need not match the square grid for 2D data if there
is an index saying which q points are active.
Together the GpuInput, the program, and a device form a GpuKernel
.
This kernel is used during fitting, receiving new sets of parameters and
evaluating them. The output value is stored in an output buffer on the
devices, where it can be combined with other structure factors and form
factors and have instrumental resolution effects applied.
In order to use OpenCL for your models, you will need OpenCL drivers for your machine. These should be available from your graphics card vendor. Intel provides OpenCL drivers for CPUs as well as their integrated HD graphics chipsets. AMD also provides drivers for Intel CPUs, but as of this writing the performance is lacking compared to the Intel drivers. NVidia combines drivers for CUDA and OpenCL in one package. The result is a bit messy if you have multiple drivers installed. You can see which drivers are available by starting python and running:
import pyopencl as cl
cl.create_some_context(interactive=True)
Once you have done that, it will show the available drivers, from which you can select one. It will then tell you that you can use these drivers automatically by setting the SAS_OPENCL environment variable, which is equivalent to PYOPENCL_CTX but does not conflict with other pyopencl programs.
Some graphics cards have multiple devices on the same card. You cannot yet use both of them concurrently to evaluate models, but you can run the program twice using a different device for each session.
OpenCL kernels are compiled when needed by the device driver. Some drivers produce compiler output even when there is no error. You can see the output by setting PYOPENCL_COMPILER_OUTPUT=1. It should be harmless, albeit annoying.
- class sasmodels.kernelcl.GpuEnvironment¶
Bases:
object
GPU context for OpenCL, with possibly many devices and one queue per device.
- __dict__ = mappingproxy({'__module__': 'sasmodels.kernelcl', '__doc__': '\n GPU context for OpenCL, with possibly many devices and one queue per device.\n ', '__init__': <function GpuEnvironment.__init__>, 'has_type': <function GpuEnvironment.has_type>, 'compile_program': <function GpuEnvironment.compile_program>, '__dict__': <attribute '__dict__' of 'GpuEnvironment' objects>, '__weakref__': <attribute '__weakref__' of 'GpuEnvironment' objects>, '__annotations__': {}})¶
- __doc__ = '\n GPU context for OpenCL, with possibly many devices and one queue per device.\n '¶
- __init__() None ¶
- __module__ = 'sasmodels.kernelcl'¶
- __weakref__¶
list of weak references to the object
- compile_program(name: str, source: str, dtype: dtype, fast: bool, timestamp: float) Program ¶
Compile the program for the device in the given context.
- has_type(dtype: dtype) bool ¶
Return True if all devices support a given type.
- class sasmodels.kernelcl.GpuInput(q_vectors: List[ndarray], dtype: dtype = dtype('float32'))¶
Bases:
object
Make q data available to the gpu.
q_vectors is a list of q vectors, which will be [q] for 1-D data, and [qx, qy] for 2-D data. Internally, the vectors will be reallocated to get the best performance on OpenCL, which may involve shifting and stretching the array to better match the memory architecture. Additional points will be evaluated with q=1e-3.
dtype is the data type for the q vectors. The data type should be set to match that of the kernel, which is an attribute of
GpuModel
. Note that not all kernels support double precision, so even if the program was created for double precision, the GpuModel.dtype may be single precision. Call
release()
when complete. Even if not called directly, the buffer will be released when the data object is freed.- __del__() None ¶
- __dict__ = mappingproxy({'__module__': 'sasmodels.kernelcl', '__doc__': '\n Make q data available to the gpu.\n\n *q_vectors* is a list of q vectors, which will be *[q]* for 1-D data,\n and *[qx, qy]* for 2-D data. Internally, the vectors will be reallocated\n to get the best performance on OpenCL, which may involve shifting and\n stretching the array to better match the memory architecture. Additional\n points will be evaluated with *q=1e-3*.\n\n *dtype* is the data type for the q vectors. The data type should be\n set to match that of the kernel, which is an attribute of\n :class:`GpuModel`. Note that not all kernels support double\n precision, so even if the program was created for double precision,\n the *GpuModel.dtype* may be single precision.\n\n Call :meth:`release` when complete. Even if not called directly, the\n buffer will be released when the data object is freed.\n ', 'nq': 0, 'dtype': dtype('float32'), 'is_2d': False, 'q': None, 'q_b': None, '__init__': <function GpuInput.__init__>, 'release': <function GpuInput.release>, '__del__': <function GpuInput.__del__>, '__dict__': <attribute '__dict__' of 'GpuInput' objects>, '__weakref__': <attribute '__weakref__' of 'GpuInput' objects>, '__annotations__': {}})¶
- __doc__ = '\n Make q data available to the gpu.\n\n *q_vectors* is a list of q vectors, which will be *[q]* for 1-D data,\n and *[qx, qy]* for 2-D data. Internally, the vectors will be reallocated\n to get the best performance on OpenCL, which may involve shifting and\n stretching the array to better match the memory architecture. Additional\n points will be evaluated with *q=1e-3*.\n\n *dtype* is the data type for the q vectors. The data type should be\n set to match that of the kernel, which is an attribute of\n :class:`GpuModel`. Note that not all kernels support double\n precision, so even if the program was created for double precision,\n the *GpuModel.dtype* may be single precision.\n\n Call :meth:`release` when complete. Even if not called directly, the\n buffer will be released when the data object is freed.\n '¶
- __init__(q_vectors: List[ndarray], dtype: dtype = dtype('float32')) None ¶
- __module__ = 'sasmodels.kernelcl'¶
- __weakref__¶
list of weak references to the object
- dtype = dtype('float32')¶
- is_2d = False¶
- nq = 0¶
- q = None¶
- q_b = None¶
- release() None ¶
Free the buffer associated with the q value.
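The "shifting and stretching" that GpuInput performs can be pictured as padding the q vector up to a device-friendly length, with the extra tail points evaluated at q=1e-3. A sketch, assuming a padding boundary of 32 elements (the real boundary depends on the device):

```python
def pad_q(q, boundary=32, fill=1e-3):
    """Pad a q vector to a multiple of *boundary* elements.

    Sketch of the reallocation GpuInput is described as doing: extra
    points are filled with q=1e-3 so the padded tail is harmless.
    boundary=32 is an assumption for illustration.
    """
    remainder = len(q) % boundary
    if remainder:
        q = list(q) + [fill] * (boundary - remainder)
    return q
```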
- class sasmodels.kernelcl.GpuKernel(model: GpuModel, q_vectors: List[ndarray])¶
Bases:
Kernel
Callable SAS kernel.
model is the GpuModel object to call
The kernel is derived from
kernel.Kernel
, providing the _call_kernel() method to evaluate the kernel for a given set of parameters. Because of the need to move the q values to the GPU before evaluation, the kernel is instantiated for a particular set of q vectors, and can be called many times without transferring q each time. Call
release()
when done with the kernel instance.- __annotations__ = {}¶
- __del__() None ¶
- __doc__ = '\n Callable SAS kernel.\n\n *model* is the GpuModel object to call\n\n The kernel is derived from :class:`.kernel.Kernel`, providing the\n *_call_kernel()* method to evaluate the kernel for a given set of\n parameters. Because of the need to move the q values to the GPU before\n evaluation, the kernel is instantiated for a particular set of q vectors,\n and can be called many times without transfering q each time.\n\n Call :meth:`release` when done with the kernel instance.\n '¶
- __module__ = 'sasmodels.kernelcl'¶
- _call_kernel(call_details: CallDetails, values: ndarray, cutoff: float, magnetic: bool, radius_effective_mode: int) None ¶
Call the kernel. Subclasses defining kernels for particular execution engines need to provide an implementation for this.
- _result_b: Buffer = None¶
- dim: str = ''¶
Kernel dimensions (1d or 2d).
- dtype: dtype = None¶
Kernel precision.
- release() None ¶
Release resources associated with the kernel.
- result: ndarray = None¶
Calculation results, updated after each call to _call_kernel().
- class sasmodels.kernelcl.GpuModel(source: Dict[str, str], model_info: ModelInfo, dtype: dtype = dtype('float32'), fast: bool = False)¶
Bases:
KernelModel
GPU wrapper for a single model.
source and model_info are the model source and interface as returned from
generate.make_source()
andmodelinfo.make_model_info()
.dtype is the desired model precision. Any numpy dtype for single or double precision floats will do, such as ‘f’, ‘float32’ or ‘single’ for single and ‘d’, ‘float64’ or ‘double’ for double. Double precision is an optional extension which may not be available on all devices. Half precision (‘float16’,’half’) may be available on some devices. Fast precision (‘fast’) is a loose version of single precision, indicating that the compiler is allowed to take shortcuts.
- __annotations__ = {}¶
- __doc__ = "\n GPU wrapper for a single model.\n\n *source* and *model_info* are the model source and interface as returned\n from :func:`.generate.make_source` and :func:`.modelinfo.make_model_info`.\n\n *dtype* is the desired model precision. Any numpy dtype for single\n or double precision floats will do, such as 'f', 'float32' or 'single'\n for single and 'd', 'float64' or 'double' for double. Double precision\n is an optional extension which may not be available on all devices.\n Half precision ('float16','half') may be available on some devices.\n Fast precision ('fast') is a loose version of single precision, indicating\n that the compiler is allowed to take shortcuts.\n "¶
- __init__(source: Dict[str, str], model_info: ModelInfo, dtype: dtype = dtype('float32'), fast: bool = False) None ¶
- __module__ = 'sasmodels.kernelcl'¶
- _kernels: Dict[str, Kernel] = None¶
- _prepare_program() None ¶
- _program: Program = None¶
- dtype: dtype = None¶
- fast: bool = False¶
- get_function(name: str) Kernel ¶
Fetch the kernel from the environment by name, compiling it if it does not already exist.
- make_kernel(q_vectors: List[ndarray]) GpuKernel ¶
Instantiate a kernel for evaluating the model at q_vectors.
- source: str = ''¶
- sasmodels.kernelcl._create_some_context() Context ¶
Protected call to cl.create_some_context without interactivity.
Uses SAS_OPENCL or PYOPENCL_CTX if they are set in the environment, otherwise scans for the most appropriate device using
_get_default_context()
. Ignores SAS_OPENCL=OpenCL, which indicates that an OpenCL device should be used without specifying which one (and not a CUDA device, or no GPU).
- sasmodels.kernelcl._get_default_context() List[Context] ¶
Get an OpenCL context, preferring GPU over CPU, and preferring Intel drivers over AMD drivers.
- sasmodels.kernelcl.compile_model(context: Context, source: str, dtype: dtype, fast: bool = False) Program ¶
Build a model to run on the gpu.
Returns the compiled program and its type.
Raises an error if the desired precision is not available.
- sasmodels.kernelcl.environment() GpuEnvironment ¶
Returns a singleton
GpuEnvironment
.This provides an OpenCL context and one queue per device.
- sasmodels.kernelcl.fix_pyopencl_include() None ¶
Monkey patch pyopencl to allow spaces in include file path.
- sasmodels.kernelcl.get_warp(kernel: Kernel, queue: CommandQueue) int ¶
Return the size of an execution batch for kernel running on queue.
- sasmodels.kernelcl.has_type(device: Device, dtype: dtype) bool ¶
Return true if device supports the requested precision.
- sasmodels.kernelcl.quote_path(v: str) str ¶
Quote the path if it is not already quoted.
If v starts with ‘-’, then assume that it is a -I option or similar and do not quote it. This is fragile: -Ipath with space needs to be quoted.
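The behavior described for quote_path can be sketched as follows; this is an illustration of the documented rule, not the library's actual implementation:

```python
def quote_path_sketch(v):
    """Quote a path if it is not already quoted.

    Sketch of the documented rule: values starting with '-' are assumed
    to be compiler options (e.g. -I...) and are left alone, as are
    values already wrapped in double quotes.
    """
    if v.startswith("-") or v.startswith('"'):
        return v
    return '"%s"' % v
```

As the docstring warns, this is fragile: a -Ipath option whose path contains a space would still need quoting.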
- sasmodels.kernelcl.reset_environment() GpuEnvironment ¶
Return a new OpenCL context, such as after a change to SAS_OPENCL.
- sasmodels.kernelcl.use_opencl() bool ¶
Return True if OpenCL is the default computational engine
sasmodels.kernelcuda module¶
GPU driver for C kernels (with CUDA)
To select cuda, use SAS_OPENCL=cuda, or SAS_OPENCL=cuda:n for a particular device number. If no device number is specified, then look for CUDA_DEVICE=n or a file ~/.cuda-device containing n for the device number. Otherwise, try all available device numbers.
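The device lookup order above can be sketched as a small resolver. This is an illustration, not the library's actual code; *env* is a plain dict to keep the example self-contained, and the ~/.cuda-device file lookup is omitted:

```python
def pick_cuda_device(env):
    """Resolve the CUDA device number from environment-style settings.

    Follows the documented order: SAS_OPENCL=cuda:n wins, then
    CUDA_DEVICE=n; returns None when no device is pinned, so the
    caller can try each available device in turn.
    """
    sas_opencl = env.get("SAS_OPENCL", "")
    if sas_opencl.startswith("cuda:"):
        return int(sas_opencl.split(":", 1)[1])
    if "CUDA_DEVICE" in env:
        return int(env["CUDA_DEVICE"])
    return None
```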
TODO: docs are out of date
There should be a single GPU environment running on the system. This environment is constructed on the first call to environment(), and the same environment is returned on each call.
After retrieving the environment, the next step is to create the kernel. This is done with a call to GpuEnvironment.compile_program(), which returns the type of data used by the kernel.
Next a GpuInput object should be created with the correct kind of data. This data object can be used by multiple kernels, for example, if the target model is a weighted sum of multiple kernels. The data should include any extra evaluation points required to compute the proper data smearing. This need not match the square grid for 2D data if there is an index saying which q points are active.
Together the GpuInput, the program, and a device form a GpuKernel. This kernel is used during fitting, receiving new sets of parameters and evaluating them. The output value is stored in an output buffer on the device, where it can be combined with other structure factors and form factors and have instrumental resolution effects applied.
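The build-on-first-call behavior of environment() is a module-level singleton. A minimal sketch of the pattern, with a stand-in class in place of the real GpuEnvironment (whose construction is assumed to be expensive: driver init, context creation):

```python
_environment = None

class FakeEnvironment:
    """Stand-in for the GPU environment; illustration only."""
    def __init__(self):
        self.context = object()  # placeholder for the real device context

def environment_sketch():
    """Sketch of the environment() singleton described above: the GPU
    context is built on the first call and reused on every later call."""
    global _environment
    if _environment is None:
        _environment = FakeEnvironment()
    return _environment
```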
In order to use OpenCL for your models, you will need OpenCL drivers for your machine. These should be available from your graphics card vendor. Intel provides OpenCL drivers for its CPUs as well as its integrated HD graphics chipsets. AMD also provides drivers for Intel CPUs, but as of this writing their performance lags behind the Intel drivers. NVIDIA combines drivers for CUDA and OpenCL in one package. The result is a bit messy if you have multiple drivers installed. You can see which drivers are available by starting Python and running:
import pyopencl as cl
cl.create_some_context(interactive=True)
This will list the available drivers and let you select one. It will then tell you how to select that driver automatically by setting the SAS_OPENCL environment variable, which is equivalent to PYOPENCL_CTX but does not conflict with other pyopencl programs.
Some graphics cards have multiple devices on the same card. You cannot yet use both of them concurrently to evaluate models, but you can run the program twice using a different device for each session.
OpenCL kernels are compiled when needed by the device driver. Some drivers produce compiler output even when there is no error. You can see the output by setting PYOPENCL_COMPILER_OUTPUT=1. It should be harmless, albeit annoying.
- class sasmodels.kernelcuda.GpuEnvironment(devnum: int = None)¶
Bases:
object
GPU context for CUDA.
- __del__()¶
- __dict__ = mappingproxy({'__module__': 'sasmodels.kernelcuda', '__doc__': '\n GPU context for CUDA.\n ', 'context': None, '__init__': <function GpuEnvironment.__init__>, 'release': <function GpuEnvironment.release>, '__del__': <function GpuEnvironment.__del__>, 'has_type': <function GpuEnvironment.has_type>, 'compile_program': <function GpuEnvironment.compile_program>, '__dict__': <attribute '__dict__' of 'GpuEnvironment' objects>, '__weakref__': <attribute '__weakref__' of 'GpuEnvironment' objects>, '__annotations__': {'context': 'cuda.Context'}})¶
- __doc__ = '\n GPU context for CUDA.\n '¶
- __init__(devnum: int = None) None ¶
- __module__ = 'sasmodels.kernelcuda'¶
- __weakref__¶
list of weak references to the object
- compile_program(name: str, source: str, dtype: np.dtype, fast: bool, timestamp: float) SourceModule ¶
Compile the program for the device in the given context.
- context: cuda.Context = None¶
- has_type(dtype: dtype) bool ¶
Return True if all devices support a given type.
- release()¶
Free the CUDA device associated with this context.
- class sasmodels.kernelcuda.GpuInput(q_vectors: List[ndarray], dtype: dtype = dtype('float32'))¶
Bases:
object
Make q data available to the gpu.
q_vectors is a list of q vectors, which will be [q] for 1-D data, and [qx, qy] for 2-D data. Internally, the vectors will be reallocated to get the best performance on OpenCL, which may involve shifting and stretching the array to better match the memory architecture. Additional points will be evaluated with q=1e-3.
dtype is the data type for the q vectors. The data type should be set to match that of the kernel, which is an attribute of
GpuModel
. Note that not all kernels support double precision, so even if the program was created for double precision, the GpuModel.dtype may be single precision. Call
release()
when complete. Even if not called directly, the buffer will be released when the data object is freed.- __del__() None ¶
- __dict__ = mappingproxy({'__module__': 'sasmodels.kernelcuda', '__doc__': '\n Make q data available to the gpu.\n\n *q_vectors* is a list of q vectors, which will be *[q]* for 1-D data,\n and *[qx, qy]* for 2-D data. Internally, the vectors will be reallocated\n to get the best performance on OpenCL, which may involve shifting and\n stretching the array to better match the memory architecture. Additional\n points will be evaluated with *q=1e-3*.\n\n *dtype* is the data type for the q vectors. The data type should be\n set to match that of the kernel, which is an attribute of\n :class:`GpuModel`. Note that not all kernels support double\n precision, so even if the program was created for double precision,\n the *GpuModel.dtype* may be single precision.\n\n Call :meth:`release` when complete. Even if not called directly, the\n buffer will be released when the data object is freed.\n ', '__init__': <function GpuInput.__init__>, 'release': <function GpuInput.release>, '__del__': <function GpuInput.__del__>, '__dict__': <attribute '__dict__' of 'GpuInput' objects>, '__weakref__': <attribute '__weakref__' of 'GpuInput' objects>, '__annotations__': {}})¶
- __doc__ = '\n Make q data available to the gpu.\n\n *q_vectors* is a list of q vectors, which will be *[q]* for 1-D data,\n and *[qx, qy]* for 2-D data. Internally, the vectors will be reallocated\n to get the best performance on OpenCL, which may involve shifting and\n stretching the array to better match the memory architecture. Additional\n points will be evaluated with *q=1e-3*.\n\n *dtype* is the data type for the q vectors. The data type should be\n set to match that of the kernel, which is an attribute of\n :class:`GpuModel`. Note that not all kernels support double\n precision, so even if the program was created for double precision,\n the *GpuModel.dtype* may be single precision.\n\n Call :meth:`release` when complete. Even if not called directly, the\n buffer will be released when the data object is freed.\n '¶
- __init__(q_vectors: List[ndarray], dtype: dtype = dtype('float32')) None ¶
- __module__ = 'sasmodels.kernelcuda'¶
- __weakref__¶
list of weak references to the object
- release() None ¶
Free the buffer associated with the q value.
- class sasmodels.kernelcuda.GpuKernel(model: GpuModel, q_vectors: List[ndarray])¶
Bases:
Kernel
Callable SAS kernel.
model is the GpuModel object to call
The kernel is derived from
kernel.Kernel
, providing the _call_kernel() method to evaluate the kernel for a given set of parameters. Because of the need to move the q values to the GPU before evaluation, the kernel is instantiated for a particular set of q vectors, and can be called many times without transferring q each time. Call
release()
when done with the kernel instance.- __annotations__ = {}¶
- __del__() None ¶
- __doc__ = '\n Callable SAS kernel.\n\n *model* is the GpuModel object to call\n\n The kernel is derived from :class:`.kernel.Kernel`, providing the\n *_call_kernel()* method to evaluate the kernel for a given set of\n parameters. Because of the need to move the q values to the GPU before\n evaluation, the kernel is instantiated for a particular set of q vectors,\n and can be called many times without transfering q each time.\n\n Call :meth:`release` when done with the kernel instance.\n '¶
- __module__ = 'sasmodels.kernelcuda'¶
- _call_kernel(call_details: CallDetails, values: ndarray, cutoff: float, magnetic: bool, radius_effective_mode: int) None ¶
Call the kernel. Subclasses defining kernels for particular execution engines need to provide an implementation for this.
- dim: str = ''¶
Kernel dimensions (1d or 2d).
- dtype: dtype = None¶
Kernel precision.
- release() None ¶
Release resources associated with the kernel.
- result: ndarray = None¶
Calculation results, updated after each call to _call_kernel().
- class sasmodels.kernelcuda.GpuModel(source: Dict[str, str], model_info: ModelInfo, dtype: dtype = dtype('float32'), fast: bool = False)¶
Bases:
KernelModel
GPU wrapper for a single model.
source and model_info are the model source and interface as returned from
generate.make_source()
andmodelinfo.make_model_info()
.dtype is the desired model precision. Any numpy dtype for single or double precision floats will do, such as ‘f’, ‘float32’ or ‘single’ for single and ‘d’, ‘float64’ or ‘double’ for double. Double precision is an optional extension which may not be available on all devices. Half precision (‘float16’,’half’) may be available on some devices. Fast precision (‘fast’) is a loose version of single precision, indicating that the compiler is allowed to take shortcuts.
- __annotations__ = {}¶
- __doc__ = "\n GPU wrapper for a single model.\n\n *source* and *model_info* are the model source and interface as returned\n from :func:`.generate.make_source` and :func:`.modelinfo.make_model_info`.\n\n *dtype* is the desired model precision. Any numpy dtype for single\n or double precision floats will do, such as 'f', 'float32' or 'single'\n for single and 'd', 'float64' or 'double' for double. Double precision\n is an optional extension which may not be available on all devices.\n Half precision ('float16','half') may be available on some devices.\n Fast precision ('fast') is a loose version of single precision, indicating\n that the compiler is allowed to take shortcuts.\n "¶
- __init__(source: Dict[str, str], model_info: ModelInfo, dtype: dtype = dtype('float32'), fast: bool = False) None ¶
- __module__ = 'sasmodels.kernelcuda'¶
- _kernels: Dict[str, cuda.Function] = None¶
- _prepare_program() None ¶
- _program: SourceModule = None¶
- dtype: np.dtype = None¶
- fast: bool = False¶
- get_function(name: str) cuda.Function ¶
Fetch the kernel from the environment by name, compiling it if it does not already exist.
- make_kernel(q_vectors: List[ndarray]) GpuKernel ¶
Instantiate a kernel for evaluating the model at q_vectors.
- source: str = ''¶
- sasmodels.kernelcuda._add_device_tag(match: None) str ¶
Replace qualifiers with __device__ qualifiers if needed.
- sasmodels.kernelcuda.compile_model(source: str, dtype: np.dtype, fast: bool = False) SourceModule ¶
Build a model to run on the gpu.
Returns the compiled program and its type. The returned type will be float32 even if the desired type is float64 if any of the devices in the context do not support the cl_khr_fp64 extension.
- sasmodels.kernelcuda.environment() GpuEnvironment ¶
Returns a singleton
GpuEnvironment
.This provides an OpenCL context and one queue per device.
- sasmodels.kernelcuda.has_type(dtype: dtype) bool ¶
Return true if device supports the requested precision.
- sasmodels.kernelcuda.mark_device_functions(source: str) str ¶
Mark all function declarations as __device__ functions (except kernel).
- sasmodels.kernelcuda.partition(n)¶
Constructs block and grid arguments for n elements.
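A common way to build block and grid arguments for n elements is to fix a block size and take the ceiling of n over it. A sketch, assuming a block size of 32 (one warp); the real partition() chooses sizes appropriate to the device:

```python
def partition_sketch(n, block_size=32):
    """Sketch of partitioning *n* elements into CUDA block/grid args.

    block_size=32 is an assumption for illustration; returns the
    (block, grid) tuples in the shape pycuda kernels expect.
    """
    num_blocks = (n + block_size - 1) // block_size  # ceil(n / block_size)
    return (block_size, 1, 1), (num_blocks, 1)
```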
- sasmodels.kernelcuda.reset_environment() None ¶
Call to create a new CUDA context, such as after a change to SAS_OPENCL.
- sasmodels.kernelcuda.show_device_functions(source: str) str ¶
Show all discovered function declarations, but don’t change any.
- sasmodels.kernelcuda.sync()¶
- Overview:
Waits for operations in the current context to complete.
Note: Maybe context.synchronize() is sufficient.
- sasmodels.kernelcuda.use_cuda() bool ¶
Returns True if CUDA is the default compute engine.
sasmodels.kerneldll module¶
DLL driver for C kernels
If the environment variable SAS_OPENMP is set, then sasmodels will attempt to compile with OpenMP flags so that the model can use all available cores. This may or may not be supported by your compiler toolchain, depending on operating system and environment.
Windows does not provide a compiler with the operating system. Instead, we assume that TinyCC is installed and available. This can be done with a simple pip command if it is not already available:
pip install tinycc
If Microsoft Visual C++ is available (because VCINSTALLDIR is defined in the environment), then that will be used instead. Microsoft Visual C++ for Python is available from Microsoft.
If neither compiler is available, sasmodels will check for MinGW, the GNU compiler toolchain. This is available in packages such as Anaconda and PythonXY, or as a stand-alone install. This toolchain has had difficulties on some systems, and may or may not work for you.
You can control which compiler to use by setting SAS_COMPILER in the environment:
tinycc (Windows): use the TinyCC compiler shipped with SasView
msvc (Windows): use the Microsoft Visual C++ compiler
mingw (Windows): use the MinGW GNU cc compiler
unix (Linux): use the system cc compiler.
unix (Mac): use the clang compiler. You will need XCode installed, and the XCode command line tools. Mac comes with OpenCL drivers, so generally this will not be needed.
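The selection rules above can be sketched as a small chooser. This is an illustration only; the real logic in kerneldll also probes for MinGW, and *env* is a plain dict here to keep the example self-contained:

```python
import sys

def pick_compiler(env, platform=None):
    """Sketch of the SAS_COMPILER selection described above.

    An explicit SAS_COMPILER setting wins; otherwise fall back by
    platform: msvc when VCINSTALLDIR is set on Windows, else tinycc
    on Windows, else the unix cc/clang toolchain.
    """
    if "SAS_COMPILER" in env:
        return env["SAS_COMPILER"]
    platform = platform or sys.platform
    if platform == "win32":
        return "msvc" if "VCINSTALLDIR" in env else "tinycc"
    return "unix"
```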
Both msvc and mingw require that the compiler is available on your path. For msvc, this can be done by running vcvarsall.bat in a Windows terminal. Install locations are system dependent, such as:
C:\Program Files (x86)\Common Files\Microsoft\Visual C++ for Python\9.0\vcvarsall.bat
or maybe
C:\Users\yourname\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\vcvarsall.bat
OpenMP for msvc requires the Microsoft vcomp90.dll library, which doesn’t seem to be included with the compiler, nor does there appear to be a public download location. There may be one on your machine already in a location such as:
C:\Windows\winsxs\x86_microsoft.vc90.openmp*\vcomp90.dll
If you copy this to somewhere on your path, such as the python directory or the install directory for this application, then OpenMP should be supported.
For full control of the compiler, define a function compile_command(source,output) which takes the name of the source file and the name of the output file and returns a compile command that can be evaluated in the shell. For even more control, replace the entire compile(source,output) function.
The global attribute ALLOW_SINGLE_PRECISION_DLLS should be set to False if you wish to prevent single precision floating point evaluation for the compiled models; otherwise it defaults to True.
- class sasmodels.kerneldll.DllKernel(kernel: Callable[[], ndarray], model_info: ModelInfo, q_input: PyInput)¶
Bases:
Kernel
Callable SAS kernel.
kernel is the c function to call.
model_info is the module information
q_input is the DllInput q vectors at which the kernel should be evaluated.
The resulting call method takes the pars, a list of values for the fixed parameters to the kernel, and pd_pars, a list of (value, weight) vectors for the polydisperse parameters. cutoff determines the integration limits: any points with combined weight less than cutoff will not be calculated.
Call
release()
when done with the kernel instance.- __annotations__ = {}¶
- __del__() None ¶
- __doc__ = '\n Callable SAS kernel.\n\n *kernel* is the c function to call.\n\n *model_info* is the module information\n\n *q_input* is the DllInput q vectors at which the kernel should be\n evaluated.\n\n The resulting call method takes the *pars*, a list of values for\n the fixed parameters to the kernel, and *pd_pars*, a list of (value, weight)\n vectors for the polydisperse parameters. *cutoff* determines the\n integration limits: any points with combined weight less than *cutoff*\n will not be calculated.\n\n Call :meth:`release` when done with the kernel instance.\n '¶
- __module__ = 'sasmodels.kerneldll'¶
- _call_kernel(call_details, values, cutoff, magnetic, radius_effective_mode)¶
Call the kernel. Subclasses defining kernels for particular execution engines need to provide an implementation for this.
- release() None ¶
Release resources associated with the kernel.
- class sasmodels.kerneldll.DllModel(dllpath: str, model_info: ModelInfo, dtype: dtype = dtype('float32'))¶
Bases:
KernelModel
ctypes wrapper for a single model.
dllpath is the stored path to the dll.
model_info is the model definition returned from
modelinfo.make_model_info()
.dtype is the desired model precision. Any numpy dtype for single or double precision floats will do, such as ‘f’, ‘float32’ or ‘single’ for single and ‘d’, ‘float64’ or ‘double’ for double. Double precision is an optional extension which may not be available on all devices.
Call
release()
when done with the kernel.- __annotations__ = {}¶
- __doc__ = "\n ctypes wrapper for a single model.\n\n *dllpath* is the stored path to the dll.\n\n *model_info* is the model definition returned from\n :func:`.modelinfo.make_model_info`.\n\n *dtype* is the desired model precision. Any numpy dtype for single\n or double precision floats will do, such as 'f', 'float32' or 'single'\n for single and 'd', 'float64' or 'double' for double. Double precision\n is an optional extension which may not be available on all devices.\n\n Call :meth:`release` when done with the kernel.\n "¶
- __module__ = 'sasmodels.kerneldll'¶
- _load_dll() None ¶
- make_kernel(q_vectors: List[ndarray]) DllKernel ¶
Instantiate a kernel for evaluating the model at q_vectors.
- release() None ¶
Release any resources associated with the model.
- sasmodels.kerneldll.compile_command(source, output)¶
tinycc compiler command
- sasmodels.kerneldll.compile_model(source: str, output: str) None ¶
Compile source producing output.
Raises RuntimeError if the compile failed or the output wasn’t produced.
- sasmodels.kerneldll.decode(s)¶
- sasmodels.kerneldll.dll_name(model_file: str, dtype: dtype) str ¶
Name of the dll containing the model. This is the base file name without any path or extension, with a form such as ‘sas_sphere32’.
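The naming convention described for dll_name can be sketched as follows. The mapping from dtype to bit width is an assumption for illustration; the real function derives it from the numpy dtype:

```python
def dll_name_sketch(model_name, dtype_str):
    """Sketch of the documented naming scheme: the model base name plus
    the precision in bits, e.g. 'sas_sphere32' for a single-precision
    sphere model. Illustration only, not the library's code.
    """
    bits = {"float32": 32, "float64": 64, "float128": 128}[dtype_str]
    return "sas_%s%d" % (model_name, bits)
```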
- sasmodels.kerneldll.dll_path(model_file: str, dtype: dtype) str ¶
Complete path to the dll for the model. Note that the dll may not exist yet if it hasn’t been compiled.
- sasmodels.kerneldll.load_dll(source: str, model_info: ModelInfo, dtype: dtype = dtype('float64')) DllModel ¶
Create and load a dll corresponding to the source.
model_info is the info object returned from
modelinfo.make_model_info()
.source is returned from
generate.make_source()
, as make_source(model_info)[‘dll’]. See
make_dll()
for details on controlling the dll path and the allowed floating point precision.
- sasmodels.kerneldll.make_dll(source: str, model_info: ModelInfo, dtype: dtype = dtype('float64'), system: bool = False) str ¶
Returns the path to the compiled model defined by kernel_module.
If the model has not been compiled, or if the source file(s) are newer than the dll, then make_dll will compile the model before returning. This routine does not load the resulting dll.
dtype is a numpy floating point precision specifier indicating whether the model should be single, double or long double precision. The default is double precision, np.dtype(‘d’).
Set sasmodels.ALLOW_SINGLE_PRECISION_DLLS to False if single precision models are not allowed as DLLs.
Set sasmodels.kerneldll.SAS_DLL_PATH to the compiled dll output path. Alternatively, set the environment variable SAS_DLL_PATH. The default is in ~/.sasmodels/compiled_models.
system is a bool that controls whether these are the precompiled DLLs that would be shipped with a binary distribution.
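The rebuild condition described above (compile when the dll is missing or older than the source) amounts to a simple staleness check. A sketch, with *source_timestamp* standing in for the mtime of the newest source file:

```python
import os

def needs_rebuild(dll_path, source_timestamp):
    """Sketch of the staleness test make_dll is described as doing:
    recompile when the dll is missing or older than the newest source.
    Illustration only; the real function also manages the output path.
    """
    if not os.path.exists(dll_path):
        return True
    return os.path.getmtime(dll_path) < source_timestamp
```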
sasmodels.kernelpy module¶
Python driver for python kernels
Calls the kernel with a vector of \(q\) values for a single parameter set.
Polydispersity is supported by looping over different parameter sets and
summing the results. The interface to PyModel
matches those for
kernelcl.GpuModel
and kerneldll.DllModel
.
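The polydispersity loop described above can be sketched as a weighted sum over parameter sets, with the cutoff trimming negligible weights. The Gaussian-like kernel here is a toy placeholder, not a real SAS model:

```python
import math

def polydisperse_sum(q, radii_weights, cutoff=0.0):
    """Sketch of the polydispersity loop: evaluate a (toy) kernel for
    each parameter set and sum the weighted results, skipping sets
    whose weight falls below *cutoff*.
    """
    total, norm = 0.0, 0.0
    for radius, weight in radii_weights:
        if weight < cutoff:
            continue                     # outside the integration limits
        total += weight * math.exp(-(q * radius) ** 2)  # toy I(q; radius)
        norm += weight
    return total / norm if norm else 0.0
```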
- class sasmodels.kernelpy.PyInput(q_vectors, dtype)¶
Bases:
object
Make q data available to the gpu.
q_vectors is a list of q vectors, which will be [q] for 1-D data, and [qx, qy] for 2-D data. Internally, the vectors will be reallocated to get the best performance on OpenCL, which may involve shifting and stretching the array to better match the memory architecture. Additional points will be evaluated with q=1e-3.
dtype is the data type for the q vectors. The data type should be set to match that of the kernel, which is an attribute of
PyModel
. Note that not all kernels support double precision, so even if the program was created for double precision, the GpuProgram.dtype may be single precision. Call
release()
when complete. Even if not called directly, the buffer will be released when the data object is freed.- __dict__ = mappingproxy({'__module__': 'sasmodels.kernelpy', '__doc__': '\n Make q data available to the gpu.\n\n *q_vectors* is a list of q vectors, which will be *[q]* for 1-D data,\n and *[qx, qy]* for 2-D data. Internally, the vectors will be reallocated\n to get the best performance on OpenCL, which may involve shifting and\n stretching the array to better match the memory architecture. Additional\n points will be evaluated with *q=1e-3*.\n\n *dtype* is the data type for the q vectors. The data type should be\n set to match that of the kernel, which is an attribute of\n :class:`PyModel`. Note that not all kernels support double\n precision, so even if the program was created for double precision,\n the *GpuProgram.dtype* may be single precision.\n\n Call :meth:`release` when complete. Even if not called directly, the\n buffer will be released when the data object is freed.\n ', '__init__': <function PyInput.__init__>, 'release': <function PyInput.release>, '__dict__': <attribute '__dict__' of 'PyInput' objects>, '__weakref__': <attribute '__weakref__' of 'PyInput' objects>, '__annotations__': {}})¶
- __doc__ = '\n Make q data available to the gpu.\n\n *q_vectors* is a list of q vectors, which will be *[q]* for 1-D data,\n and *[qx, qy]* for 2-D data. Internally, the vectors will be reallocated\n to get the best performance on OpenCL, which may involve shifting and\n stretching the array to better match the memory architecture. Additional\n points will be evaluated with *q=1e-3*.\n\n *dtype* is the data type for the q vectors. The data type should be\n set to match that of the kernel, which is an attribute of\n :class:`PyModel`. Note that not all kernels support double\n precision, so even if the program was created for double precision,\n the *GpuProgram.dtype* may be single precision.\n\n Call :meth:`release` when complete. Even if not called directly, the\n buffer will be released when the data object is freed.\n '¶
- __init__(q_vectors, dtype)¶
- __module__ = 'sasmodels.kernelpy'¶
- __weakref__¶
list of weak references to the object
- release()¶
Free resources associated with the model inputs.
- class sasmodels.kernelpy.PyKernel(model_info: ModelInfo, q_input: List[ndarray])¶
Bases:
Kernel
Callable SAS kernel.
kernel is the kernel object to call.
model_info is the module information.
q_input is the PyInput q vectors at which the kernel should be evaluated.
The resulting call method takes the pars, a list of values for the fixed parameters to the kernel, and pd_pars, a list of (value,weight) vectors for the polydisperse parameters. cutoff determines the integration limits: any points with combined weight less than cutoff will not be calculated.
Call
release()
when done with the kernel instance.
- __annotations__ = {}¶
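The (value, weight) scheme and the cutoff behaviour described above can be sketched in plain Python. This is a simplified illustration of weighted averaging over polydisperse parameters, not the actual _loops implementation, which also handles volume normalization and call details.

```python
import itertools
import numpy as np

def weighted_iq(iq, pd_pars, cutoff=0.0):
    """Average *iq* over polydisperse parameters given as a list of
    (values, weights) vectors, skipping points whose combined weight
    falls below *cutoff*.  A sketch of the scheme, not the real loop."""
    values_list = [v for v, _ in pd_pars]
    weights_list = [w for _, w in pd_pars]
    total, norm = 0.0, 0.0
    # Loop over the outer product of all (value, weight) vectors.
    for vals, wts in zip(itertools.product(*values_list),
                         itertools.product(*weights_list)):
        w = np.prod(wts)
        if w < cutoff:
            continue  # skip negligible contributions
        total += w * iq(*vals)
        norm += w
    return total / norm if norm > 0 else 0.0
```

For example, with a single polydisperse radius taking values [1, 2] at equal weight, `weighted_iq(lambda r: r**2, [([1.0, 2.0], [0.5, 0.5])])` averages 1 and 4 to give 2.5.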
- __module__ = 'sasmodels.kernelpy'¶
- _call_kernel(call_details: CallDetails, values: ndarray, cutoff: ndarray, magnetic: float, radius_effective_mode: bool) → None ¶
Call the kernel. Subclasses defining kernels for particular execution engines need to provide an implementation for this.
- release() → None ¶
Free resources associated with the kernel.
- class sasmodels.kernelpy.PyModel(model_info)¶
Bases:
KernelModel
Wrapper for pure python models.
- __annotations__ = {}¶
- __init__(model_info)¶
- __module__ = 'sasmodels.kernelpy'¶
- make_kernel(q_vectors)¶
Instantiate the python kernel with input q_vectors
- release()¶
Free resources associated with the model.
- sasmodels.kernelpy._create_default_functions(model_info)¶
Autogenerate missing functions, such as Iqxy from Iq.
This only works for Iqxy when Iq is written in python.
make_source()
performs a similar role for Iq written in C. This also vectorizes any functions that are not already marked as vectorized.
- sasmodels.kernelpy._create_vector_Iq(model_info)¶
Define Iq as a vector function if it exists.
- sasmodels.kernelpy._create_vector_Iqxy(model_info)¶
Define Iqxy as a vector function if it exists, or default it from Iq().
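The defaulting rule is the standard isotropic reduction Iqxy(qx, qy) = Iq(sqrt(qx² + qy²)). A minimal sketch, leaving out the extra model parameters and vectorization flags that the real helper also handles:

```python
import numpy as np

def default_Iqxy(Iq):
    """Build a 2-D kernel from a 1-D one via |q| = sqrt(qx^2 + qy^2).
    A sketch of the defaulting rule only; the real code also threads
    through model parameters and vectorization markers."""
    def Iqxy(qx, qy, *args):
        return Iq(np.sqrt(qx**2 + qy**2), *args)
    return Iqxy

Iq = lambda q: 1.0 / (1.0 + q**2)   # toy 1-D kernel for illustration
Iqxy = default_Iqxy(Iq)
# Iqxy(3.0, 4.0) equals Iq(5.0) since |q| = sqrt(9 + 16) = 5
```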
- sasmodels.kernelpy._loops(parameters: ndarray, form: Callable[[], ndarray], form_volume: Callable[[], float], form_radius: Callable[[], float], nq: int, call_details: CallDetails, values: ndarray, cutoff: float) None ¶
sasmodels.list_pars module¶
List all parameters used along with the models which use them.
Usage:
python -m sasmodels.list_pars [-v]
If ‘-v’ is given, the models containing each parameter are listed in addition to the parameter name.
- sasmodels.list_pars.find_pars(kind=None)¶
Find all parameters in all models.
Returns the reference table {parameter: [model, model, …]}
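Building such a reverse index can be sketched with a hypothetical model → parameter-name mapping standing in for the real model table:

```python
# Hypothetical model -> parameter-name table for illustration only;
# the real find_pars walks the installed sasmodels model definitions.
models = {
    "sphere": ["radius", "sld"],
    "cylinder": ["radius", "length", "sld"],
}

# Invert it into the reference table {parameter: [model, model, ...]}.
partable = {}
for model, pars in sorted(models.items()):
    for p in pars:
        partable.setdefault(p, []).append(model)
# partable["radius"] lists every model using "radius"
```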
- sasmodels.list_pars.list_pars(names_only=True, kind=None)¶
Print all parameters in all models.
If names_only then only print the parameter name, not the models it occurs in.
- sasmodels.list_pars.main()¶
Program to list the parameters used across all models.