Stingray API

Library of Time Series Methods For Astronomical X-ray Data.

Base Class

Most stingray classes are subclasses of a single class, stingray.StingrayObject, which implements most of the I/O functionality and common behavior (e.g. strategies to combine data and make operations such as the sum, difference, or negation). This class is not intended to be instantiated directly, but rather to be used as a base class for other classes.

Any class wanting to inherit from stingray.StingrayObject should define a main_array_attr attribute, which defines the name of the attribute that will be used to store the "independent variable" main data array. For example, for all time series-like objects the main array is the time array, while for periodograms the main array is the frequency array. All arrays sharing the length (not the shape: they might be multi-dimensional!) of the main array are called "array attributes" and are accessible through the array_attrs method. When applying a mask or any other selection to a stingray.StingrayObject, all array attributes are filtered in the same way.

Some array-like attributes might have the same length by chance; in this case, the user or the developer should add them to the not_array_attr attribute. For example, stingray.StingrayTimeseries has gti among the not_array_attr entries, since it is an array but not related 1-to-1 to the main array, even if in some cases it might happen to have the same number of elements as the main array, which is time.

StingrayObject

class stingray.StingrayObject(*args, **kwargs)[source]

This base class defines some general-purpose utilities.

The main purpose is to have a consistent mechanism for:

  • round-tripping to and from Astropy Tables and other dataframes

  • round-tripping to files in different formats

The idea is that any object inheriting StingrayObject should, just by defining an attribute called main_array_attr, be able to perform the operations above, with no additional effort.

main_array_attr is, e.g. time for StingrayTimeseries and Lightcurve, freq for Crossspectrum, energy for VarEnergySpectrum, and so on. It is the array with which all other attributes are compared: if they are of the same shape, they get saved as columns of the table/dataframe, otherwise as metadata.

add(other, operated_attrs=None, error_attrs=None, error_operation=<function sqsum>, inplace=False)[source]

Add two StingrayObject instances.

Add the array values of two StingrayObject instances element by element, assuming the main array attributes of the instances match exactly.

All array attrs ending with _err are treated as error bars and propagated with the sum of squares.

GTIs are crossed, so that only common intervals are saved.

Parameters:
otherStingrayTimeseries object

A second time series object

Other Parameters:
operated_attrslist of str or None

Array attributes to be operated on. Defaults to all array attributes not ending in _err. The other array attributes will be discarded from the time series to avoid inconsistencies.

error_attrslist of str or None

Array attributes to be operated on with error_operation. Defaults to all array attributes ending with _err.

error_operationfunction

Function to be called to propagate the errors

inplacebool

If True, overwrite the current time series. Otherwise, return a new one.

apply_mask(mask: npt.ArrayLike, inplace: bool = False, filtered_attrs: list = None)[source]

Apply a mask to all array attributes of the object

Parameters:
maskarray of bool

The mask. Has to be of the same length as the main array attribute (e.g. self.time for a time series)

Returns:
ts_newStingrayObject object

The new object with the mask applied if inplace is False, otherwise the same object.

Other Parameters:
inplacebool

If True, overwrite the current object. Otherwise, return a new one.

filtered_attrslist of str or None

Array attributes to be filtered. Defaults to all array attributes if None. The other array attributes will be set to None. The main array attr is always included.

array_attrs() list[str][source]

List the names of the array attributes of the Stingray Object.

By array attributes, we mean the ones with the same size and shape as main_array_attr (e.g. time in EventList)

Returns:
attributeslist of str

List of array attributes.

data_attributes() list[str][source]

Clean up the list of attributes, only giving out those pointing to data.

List all the attributes that point directly to valid data. This method goes through all the attributes of the class, eliminating methods, properties, and attributes that are complicated to serialize such as other StingrayObject, or arrays of objects.

This function makes no distinction between array-like data and scalar data.

Returns:
data_attributeslist of str

List of attributes pointing to data that are not methods, properties, or other StingrayObject instances.

dict() dict[source]

Return a dictionary representation of the object.

classmethod from_astropy_table(ts: Table) Tso[source]

Create a StingrayObject from data in an Astropy Table.

The table MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the entries in the table's meta dictionary will form the new meta attributes of the object.

classmethod from_pandas(ts: DataFrame) Tso[source]

Create a StingrayObject from data in a pandas DataFrame.

The dataframe MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the entries in the dataframe's attrs dictionary will form the new meta attributes of the object.

Since pandas does not support n-D data, multi-dimensional arrays can be specified as <colname>_dimN_M_K etc.

See documentation of make_1d_arrays_into_nd for details.

classmethod from_xarray(ts: Dataset) Tso[source]

Create a StingrayObject from data in an xarray Dataset.

The dataset MUST contain at least a column named like the main_array_attr. The rest of columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.

get_meta_dict() dict[source]

Give a dictionary with all non-None meta attrs of the object.

internal_array_attrs() list[str][source]

List the names of the internal array attributes of the Stingray Object.

These are array attributes that can be set by properties, and are generally indicated by an underscore followed by the name of the property that links to it (E.g. _counts in Lightcurve). By array attributes, we mean the ones with the same size and shape as main_array_attr (e.g. time in EventList)

Returns:
attributeslist of str

List of internal array attributes.

meta_attrs() list[str][source]

List the names of the meta attributes of the Stingray Object.

By meta attributes, we mean the ones with a different size and shape than main_array_attr (e.g. mjdref in StingrayTimeseries)

Returns:
attributeslist of str

List of meta attributes.

pretty_print(func_to_apply=None, attrs_to_apply=[], attrs_to_discard=[]) str[source]

Return a pretty-printed string representation of the object.

This is useful for debugging, and for interactive use.

Other Parameters:
func_to_applyfunction

A function that modifies the attributes listed in attrs_to_apply. It must return the modified attributes and a label to be printed. If None, no function is applied.

attrs_to_applylist of str

Attributes to be modified by func_to_apply.

attrs_to_discardlist of str

Attributes to be discarded from the output.

classmethod read(filename: str, fmt: str = None) Tso[source]

Generic reader for StingrayObject

Currently supported formats are

  • pickle (not recommended for long-term storage)

  • any other formats compatible with the writers in astropy.table.Table (ascii.ecsv, hdf5, etc.)

Files that need the astropy.table.Table interface MUST contain at least a column named like the main_array_attr. The default ascii format is enhanced CSV (ECSV). Data formats supporting the serialization of metadata (such as ECSV and HDF5) can contain all attributes such as mission, gti, etc with no significant loss of information. Other file formats might lose part of the metadata, so must be used with care.

Note

Complex values can be dealt with out-of-the-box in some formats, like HDF5 or FITS, but not in others (e.g. all ASCII formats). With these formats, and in any case when fmt is None, complex values should be stored as two columns of real numbers, whose names are of the format <variablename>.real and <variablename>.imag.

Parameters:
filename: str

Path and file name for the file to be read.

fmt: str

Available options are ‘pickle’, ‘hea’, and any Table-supported format such as ‘hdf5’, ‘ascii.ecsv’, etc.

Returns:
obj: StingrayObject object

The object reconstructed from file

sub(other, operated_attrs=None, error_attrs=None, error_operation=<function sqsum>, inplace=False)[source]

Subtract all the array attrs of two StingrayObject instances element by element, assuming the main array attributes of the instances match exactly.

All array attrs ending with _err are treated as error bars and propagated with the sum of squares.

GTIs are crossed, so that only common intervals are saved.

Parameters:
otherStingrayTimeseries object

A second time series object

Other Parameters:
operated_attrslist of str or None

Array attributes to be operated on. Defaults to all array attributes not ending in _err. The other array attributes will be discarded from the time series to avoid inconsistencies.

error_attrslist of str or None

Array attributes to be operated on with error_operation. Defaults to all array attributes ending with _err.

error_operationfunction

Function to be called to propagate the errors

inplacebool

If True, overwrite the current time series. Otherwise, return a new one.

to_astropy_table(no_longdouble=False) Table[source]

Create an Astropy Table from a StingrayObject

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the meta dictionary.

Other Parameters:
no_longdoublebool

If True, reduce the precision of longdouble arrays to double precision. This needs to be done in some cases, e.g. when the table is to be saved in an architecture not supporting extended precision (e.g. ARM), but can also be useful when an extended precision is not needed.

to_pandas() DataFrame[source]

Create a pandas DataFrame from a StingrayObject.

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the ds.attrs dictionary.

Since pandas does not support n-D data, multi-dimensional arrays are converted into columns before the conversion, with names <colname>_dimN_M_K etc.

See documentation of make_nd_into_arrays for details.

to_xarray() Dataset[source]

Create an xarray Dataset from a StingrayObject.

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the ds.attrs dictionary.

write(filename: str, fmt: str | None = None) None[source]

Generic writer for StingrayObject

Currently supported formats are

  • pickle (not recommended for long-term storage)

  • any other formats compatible with the writers in astropy.table.Table (ascii.ecsv, hdf5, etc.)

Note

Complex values can be dealt with out-of-the-box in some formats, like HDF5 or FITS, but not in others (e.g. all ASCII formats). With these formats, and in any case when fmt is None, complex values will be stored as two columns of real numbers, whose names are of the format <variablename>.real and <variablename>.imag.

Parameters:
filename: str

Name and path of the file to save the object list to.

fmt: str

The file format to store the data in. Available options are pickle, hdf5, ascii, fits


Data Classes

These classes define basic functionality related to common data types and typical methods that apply to these data types, including basic read/write functionality. Currently implemented are stingray.StingrayTimeseries, stingray.Lightcurve and stingray.events.EventList.

All time series-like data classes inherit from stingray.StingrayTimeseries, which implements most of the common functionality. The main data array is stored in the time attribute. Good Time Intervals (GTIs) are stored in the gti attribute, which is a list of 2-tuples or 2-lists containing the start and stop times of each GTI. The gti attribute is not an array attribute, since it is not related 1-to-1 to the main array, even if in some cases it might happen to have the same number of elements as the main array. It is added by default to the not_array_attr attribute.

StingrayTimeseries

class stingray.StingrayTimeseries(time: TTime = None, array_attrs: dict = {}, mjdref: TTime = 0, notes: str = '', gti: npt.ArrayLike = None, high_precision: bool = False, ephem: str = None, timeref: str = None, timesys: str = None, skip_checks: bool = False, **other_kw)[source]

Basic class for time series data.

This can be events, binned light curves, unevenly sampled light curves, etc. The only requirement is that the data (which can be any quantity, related or not to an electromagnetic measurement) are associated with a time measurement. We make a distinction between the array attributes, which have the same length as the time array, and the meta attributes, which can be scalars or arrays of a different size. The array attributes can be multi-dimensional (e.g. a spectrum for each time bin), but their first dimension (array.shape[0]) must have the same length as the time array.

Array attributes are singled out automatically depending on their shape. All filtering operations (e.g. apply_gtis, rebin, etc.) are applied to array attributes only. For this reason, any attribute that should not be treated as an array attribute should be added to the not_array_attr list.

Parameters:
time: iterable

A list or array of time stamps

Other Parameters:
array_attrsdict

Array attributes to be set (e.g. {"flux": flux_array, "flux_err": flux_err_array}). In principle, they could be specified as simple keyword arguments. But this way, we will run a check on the length of the arrays, and raise an error if they are not of a shape compatible with the time array.

dt: float

The time resolution of the time series. Can be a scalar or an array attribute (useful for non-evenly sampled data or events from different instruments)

mjdreffloat

The MJD used as a reference for the time array.

gtis: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``

Good Time Intervals

high_precisionbool

Change the precision of self.time to float128. Useful while dealing with fast pulsars.

timerefstr

The time reference, as recorded in the FITS file (e.g. SOLARSYSTEM)

timesysstr

The time system, as recorded in the FITS file (e.g. TDB)

ephemstr

The JPL ephemeris used to barycenter the data, if any (e.g. DE430)

skip_checksbool

Skip checks on the time array. Useful when the user is reasonably sure that the input data are valid.

**other_kw

Used internally. Any other keyword arguments will be set as attributes of the object.

Attributes:
time: numpy.ndarray

The array of time stamps, in seconds from the reference MJD defined in mjdref

not_array_attr: list

List of attributes that are never to be considered as array attributes. For example, GTIs are not array attributes.

dt: float

The time resolution of the measurements. Can be a scalar or an array attribute (useful for non-evenly sampled data or events from different instruments). It can also be 0, which means that the time series is not evenly sampled and the effects of the time resolution are considered negligible for the analysis. This is sometimes the case for events from high-energy telescopes.

mjdreffloat

The MJD used as a reference for the time array.

gtis: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``

Good Time Intervals

high_precisionbool

Change the precision of self.time to float128. Useful while dealing with fast pulsars.

analyze_by_gti(func, fraction_step=1, **kwargs)[source]

Analyze the light curve with any function, on a GTI-by-GTI base.

Parameters:
funcfunction

Function accepting a StingrayTimeseries object as single argument, plus possible additional keyword arguments, and returning a number or a tuple - e.g., (result, error) where both result and error are numbers.

Returns:
start_timesarray

Lower time boundaries of all time segments.

stop_timesarray

Upper time boundaries of all segments.

resultarray of N elements

The result of func for each segment of the light curve

Other Parameters:
fraction_stepfloat

By default, segments do not overlap (fraction_step = 1). If fraction_step < 1, the start points of consecutive segments are fraction_step * segment_size apart, and consecutive segments overlap. For example, for fraction_step = 0.5 the window shifts by one half of segment_size.

kwargskeyword arguments

These additional keyword arguments, if present, will be passed to func

analyze_segments(func, segment_size, fraction_step=1, **kwargs)[source]

Analyze segments of the light curve with any function.

Intervals with less than one data point are skipped.

Parameters:
funcfunction

Function accepting a StingrayTimeseries object as single argument, plus possible additional keyword arguments, and returning a number or a tuple - e.g., (result, error) where both result and error are numbers.

segment_sizefloat

Length in seconds of the light curve segments. If None, the full GTIs are considered instead as segments.

Returns:
start_timesarray

Lower time boundaries of all time segments.

stop_timesarray

Upper time boundaries of all segments.

resultlist of N elements

The result of func for each segment of the light curve. If the function returns multiple outputs, they are returned as a list of arrays. If a given interval has not enough data for a calculation, None is returned.

Other Parameters:
fraction_stepfloat

If the step is not a full segment_size but less (e.g. a moving window), this indicates the ratio between the step and segment_size (e.g. 0.5 means that the window shifts by half of segment_size)

kwargskeyword arguments

These additional keyword arguments, if present, will be passed to func

Examples

>>> import numpy as np
>>> from stingray import StingrayTimeseries
>>> time = np.arange(0, 10, 0.1)
>>> counts = np.zeros_like(time) + 10
>>> ts = StingrayTimeseries(time, counts=counts, dt=0.1)
>>> # Define a function that calculates the mean
>>> mean_func = lambda ts: np.mean(ts.counts)
>>> # Calculate the mean in segments of 5 seconds
>>> start, stop, res = ts.analyze_segments(mean_func, 5)
>>> len(res) == 2
True
>>> np.allclose(res, 10)
True
apply_gtis(new_gti=None, inplace: bool = True)[source]

Apply Good Time Intervals (GTIs) to a time series. Filters all the array attributes, only keeping the bins that fall into GTIs.

Parameters:
inplacebool

If True, overwrite the current time series. Otherwise, return a new one.

change_mjdref(new_mjdref: float, inplace=False) StingrayTimeseries[source]

Change the MJD reference time (MJDREF) of the time series

The times of the time series will be shifted so that they are referred to this new MJDREF

Parameters:
new_mjdreffloat

New MJDREF

Returns:
new_tsStingrayTimeseries object

The new time series, referred to the new MJDREF

Other Parameters:
inplacebool

If True, overwrite the current time series. Otherwise, return a new one.

concatenate(other, check_gti=True)[source]

Concatenate two StingrayTimeseries objects.

This method concatenates two or more StingrayTimeseries objects along the time axis. GTIs are recalculated by merging all the GTIs together. GTIs should not overlap at any point.

Parameters:
otherStingrayTimeseries object or list of StingrayTimeseries objects

A second time series object, or a list of objects to be concatenated

Other Parameters:
check_gtibool

Check whether the GTIs overlap. Default: True. If this is True and the GTIs overlap, an error is raised.

estimate_segment_size(min_counts=None, min_samples=None, even_sampling=None)[source]

Estimate a reasonable segment length for segment-by-segment analysis.

The user has to specify a criterion based on a minimum number of counts (if the time series has a counts attribute) or a minimum number of time samples. At least one of min_counts and min_samples must be specified. In the special case of a time series with dt=0 (event list-like, where each time stamp corresponds to a single count), the two definitions are equivalent.

Returns:
segment_sizefloat

The length of the light curve chunks that satisfies the conditions

Other Parameters:
min_countsint

Minimum number of counts for each chunk. Optional (but needs min_samples if left unspecified). Only makes sense if the series has a counts attribute and it is evenly sampled.

min_samplesint

Minimum number of time bins. Optional (but needs min_counts if left unspecified).

even_samplingbool

Force the treatment of the data as evenly sampled or not. If None, the data are considered evenly sampled if self.dt is larger than zero and the median separation between subsequent times is within 1% of self.dt.

Examples

>>> import numpy as np
>>> from stingray import StingrayTimeseries
>>> time = np.arange(150)
>>> counts = np.zeros_like(time) + 3
>>> ts = StingrayTimeseries(time, counts=counts, dt=1)
>>> assert np.isclose(ts.estimate_segment_size(min_counts=10, min_samples=3), 4.0)
>>> assert np.isclose(ts.estimate_segment_size(min_counts=10, min_samples=5), 5.0)
>>> counts[2:4] = 1
>>> ts = StingrayTimeseries(time, counts=counts, dt=1)
>>> assert np.isclose(ts.estimate_segment_size(min_counts=3, min_samples=1), 3.0)
>>> # A slightly more complex example
>>> dt=0.2
>>> time = np.arange(0, 1000, dt)
>>> counts = np.random.poisson(100, size=len(time))
>>> ts = StingrayTimeseries(time, counts=counts, dt=dt)
>>> assert np.isclose(ts.estimate_segment_size(100, 2), 0.4)
>>> min_total_bins = 40
>>> assert np.isclose(ts.estimate_segment_size(100, 40), 8.0)
property exposure

Return the total exposure of the time series, i.e. the sum of the GTIs.

Returns:
total_exposurefloat

The total exposure of the time series, in seconds.

fill_bad_time_intervals(max_length=None, attrs_to_randomize=None, buffer_size=None, even_sampling=None, seed=None)[source]

Fill short bad time intervals with random data.

Warning

This method is only appropriate for very short bad time intervals. The simulated data are basically white noise, so they can alter the statistical properties of variable data. For very short gaps in the data, the effect of these small injections of white noise should be negligible. How short depends on the individual case; users are urged not to use this method as a black box, but to run simulations to measure its effect. For long bad time intervals, more advanced techniques should be used, such as Gaussian Processes, which are not currently available in Stingray for this use case. In particular, please verify that the values of max_length and buffer_size are adequate for your case.

To fill the gaps in all but the time points (i.e., flux measurements, energies), we take the buffer_size valid data points closest to the gap (by default, the largest value between 100 and the estimated number of samples in a max_length-long gap) and resample them randomly with the same empirical statistical distribution. So, if in the 100 points of the buffer the my_fancy_attr attribute takes the value 10 thirty times, 9 ten times, and 11 sixty times, the simulated data will contain on average 30% tens, 60% elevens, and 10% nines.

Times are treated differently depending on whether the time series is evenly sampled. If it is not, the times are simulated from a uniform distribution with the same count rate found in the buffer. Otherwise, times simply follow the same grid used inside GTIs. Whether the series is treated as evenly sampled is decided by the even_sampling parameter. If left to None, the time series is considered evenly sampled if self.dt is greater than zero and the median separation between subsequent times is within 1% of the time resolution.

Other Parameters:
max_lengthfloat

Maximum length of a bad time interval to be filled. If None, the criterion is bad time intervals shorter than 1/100th of the longest good time interval.

attrs_to_randomizelist of str, default None

List of array_attrs to randomize. If None, all array_attrs are randomized. It should not include time and _mask, which are treated separately.

buffer_sizeint, default 100

Number of good data points to use to calculate the mean and variance of the random data on each side of the bad time interval

even_samplingbool, default None

Force the treatment of the data as evenly sampled or not. If None, the data are considered evenly sampled if self.dt is larger than zero and the median separation between subsequent times is within 1% of self.dt.

seedint, default None

Random seed to use for the simulation. If None, a random seed is generated.

classmethod from_astropy_timeseries(ts: TimeSeries) StingrayTimeseries[source]

Create a StingrayTimeseries from data in an Astropy TimeSeries

The time series has to define at least a column called time; the rest of the columns will form the array attributes of the new time series, while the attributes in table.meta will form the new meta attributes of the time series.

Parameters:
tsastropy.timeseries.TimeSeries

A TimeSeries object with the array attributes as columns, and the meta attributes in the meta dictionary

Returns:
tsStingrayTimeseries

Timeseries object

join(*args, **kwargs)[source]

Join other StingrayTimeseries objects with the current one.

If both are empty, an empty StingrayTimeseries is returned.

Standard attributes such as pi and energy remain None if they are None in both. Otherwise, np.nan is used as a default value for the missing values. Arbitrary array attributes are created and joined using the same convention.

Multiple checks are done on the joined time series. If the time array of a series being joined is empty, that series is ignored. If the time resolution differs, the final time series will have the coarser time resolution. If the MJDREF differs, the time reference will be changed to that of the first time series.

Note: join is not equivalent to concatenate. concatenate is used to join multiple non-overlapping time series along the time axis, while join is more general, and can be used to join multiple time series with different strategies (see parameter strategy below).

Parameters:
other: StingrayTimeseries or list of StingrayTimeseries

The other StingrayTimeseries object to be joined with the current one. If other is a list, it is assumed to be a list of StingrayTimeseries objects, and they are all joined, one by one.

Returns:
ts_new: StingrayTimeseries object

The resulting StingrayTimeseries object.

Other Parameters:
strategy{“intersection”, “union”, “append”, “infer”, “none”}

Method to use to merge the GTIs. If “intersection”, the GTIs are merged using the intersection of the GTIs. If “union”, the GTIs are merged using the union of the GTIs. If “none”, a single GTI with the minimum and the maximum time stamps of all GTIs is returned. If “infer”, the strategy is decided based on the GTIs. If there are no overlaps, “union” is used, otherwise “intersection” is used. If “append”, the GTIs are simply appended but they must be mutually exclusive.

plot(attr, witherrors=False, labels=None, ax=None, title=None, marker='-', save=False, filename=None, plot_btis=True, axis_limits=None)[source]

Plot the time series using matplotlib.

Plot the time series object on a graph, with self.time on the x-axis and the chosen attribute (e.g. self.counts) on the y-axis, optionally with the corresponding error bars (e.g. self.counts_err).

Parameters:
attr: str

Attribute to plot.

Other Parameters:
witherrors: boolean, default False

Whether to plot the StingrayTimeseries with errorbars or not

labelsiterable, default None

A list or tuple with xlabel and ylabel as strings. E.g. if the attribute is 'counts', the list of labels could be ['Time (s)', 'Counts (s^-1)']

axmatplotlib.pyplot.axis object

Axis to be used for plotting. Defaults to creating a new one.

axis_limitslist, tuple, string, default None

Parameter to set axis properties of the matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for the matplotlib.pyplot.axis() method.

titlestr, default None

The title of the plot.

markerstr, default ‘-’

Line style and color of the plot. Line styles and colors are combined in a single format string, as in 'bo' for blue circles. See matplotlib.pyplot.plot for more options.

saveboolean, optional, default False

If True, save the figure with specified filename.

filenamestr

File name of the image to save. Only used if save is True.

plot_btisbool

Plot the bad time intervals as red areas on the plot

rebin(dt_new=None, f=None, method='sum')[source]

Rebin the time series to a new time resolution. While the new resolution need not be an integer multiple of the previous time resolution, be aware that if it is not, the last bin will be cut off by the fraction left over by the integer division.

Parameters:
dt_new: float

The new time resolution of the time series. Must be larger than the time resolution of the old time series!

method: {``sum`` | ``mean`` | ``average``}, optional, default ``sum``

This keyword argument sets whether the counts in the new bins should be summed or averaged.

Returns:
ts_new: StingrayTimeseries object

The StingrayTimeseries object with the new, binned time series.

Other Parameters:
f: float

The rebin factor. If specified, it substitutes dt_new with f * self.dt

shift(time_shift: float, inplace=False) StingrayTimeseries[source]

Shift the time and the GTIs by the same amount

Parameters:
time_shift: float

The time interval by which the time series will be shifted (in the same units as the time array in StingrayTimeseries).

Returns:
tsStingrayTimeseries object

The new time series shifted by time_shift

Other Parameters:
inplacebool

If True, overwrite the current time series. Otherwise, return a new one.

sort(reverse=False, inplace=False)[source]

Sort a StingrayTimeseries object by time.

A StingrayTimeseries can be sorted in either increasing or decreasing order using this method. The time array gets sorted and the counts array is changed accordingly.

Parameters:
reverseboolean, default False

If True then the object is sorted in reverse order.

inplacebool

If True, overwrite the current time series. Otherwise, return a new one.

Returns:
ts_new: StingrayTimeseries object

The StingrayTimeseries object with sorted time and counts arrays.

Examples

>>> import numpy as np
>>> from stingray import StingrayTimeseries
>>> time = [2, 1, 3]
>>> count = [200, 100, 300]
>>> ts = StingrayTimeseries(time, array_attrs={"counts": count}, dt=1, skip_checks=True)
>>> ts_new = ts.sort()
>>> ts_new.time
array([1, 2, 3])
>>> assert np.allclose(ts_new.counts, [100, 200, 300])
split_by_gti(gti=None, min_points=2)[source]

Split the current StingrayTimeseries object into a list of StingrayTimeseries objects, one for each continuous GTI segment as defined in the gti attribute.

Parameters:
min_pointsint, default 2

The minimum number of data points in each time series. Light curves with fewer data points will be ignored.

Returns:
list_of_tsslist

A list of StingrayTimeseries objects, one for each GTI segment

to_astropy_timeseries() TimeSeries[source]

Save the StingrayTimeseries to an Astropy timeseries.

Array attributes (time, pi, energy, etc.) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the meta dictionary.

Returns:
tsastropy.timeseries.TimeSeries

A TimeSeries object with the array attributes as columns, and the meta attributes in the meta dictionary

truncate(start=0, stop=None, method='index')[source]

Truncate a StingrayTimeseries object.

This method takes a start and a stop point (either as indices, or as times in the same unit as those in the time attribute), truncates all bins before start and after stop, and returns a new StingrayTimeseries object with the truncated time series.

Parameters:
start : int, default 0

Index (or time stamp) of the starting point of the truncation. If no value is set for the start point, then all points from the first element in the time array are taken into account.

stop : int, default None

Index (or time stamp) of the ending point (exclusive) of the truncation. If no value of stop is set, then all points up to and including the last point in the counts array are taken into account.

method : {index | time}, optional, default index

Type of the start and stop values. If set to index then the values are treated as indices of the counts array, or if set to time, the values are treated as actual time values.

Returns:
ts_new: StingrayTimeseries object

The StingrayTimeseries object with truncated time and arrays.

Examples

>>> time = [1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> count = [10, 20, 30, 40, 50, 60, 70, 80, 90]
>>> ts = StingrayTimeseries(time, array_attrs={"counts": count}, dt=1)
>>> ts_new = ts.truncate(start=2, stop=8)
>>> assert np.allclose(ts_new.counts, [30, 40, 50, 60, 70, 80])
>>> assert np.allclose(ts_new.time, [3, 4, 5, 6, 7, 8])
>>> # Truncation can also be done by time values
>>> ts_new = ts.truncate(start=6, method='time')
>>> assert np.allclose(ts_new.time, [6, 7, 8, 9])
>>> assert np.allclose(ts_new.counts, [60, 70, 80, 90])

Lightcurve

class stingray.Lightcurve(time=None, counts=None, err=None, input_counts=True, gti=None, err_dist='poisson', bg_counts=None, bg_ratio=None, frac_exp=None, mjdref=0, dt=None, skip_checks=False, low_memory=False, mission=None, instr=None, header=None, **other_kw)[source]

Make a light curve object from an array of time stamps and an array of counts.

Parameters:
time: Iterable, astropy.time.Time, or astropy.units.Quantity object

A list or array of time stamps for a light curve. Must be a type that can be cast to numpy.ndarray or a List of floats, or that has a value attribute that does (e.g. an astropy.units.Quantity or astropy.time.Time object).

counts: iterable, optional, default ``None``

A list or array of the counts in each bin corresponding to the bins defined in time (note: use input_counts=False to input the count rate, i.e. counts/second, otherwise use counts/bin).

err: iterable, optional, default ``None``

A list or array of the uncertainties in each bin corresponding to the bins defined in time (note: use input_counts=False to input the count rate, i.e. counts/second, otherwise use counts/bin). If None, we assume the data are Poisson-distributed and calculate the error from the average of the lower and upper 1-sigma confidence intervals for the Poisson distribution with mean equal to counts.

input_counts: bool, optional, default True

If True, the code assumes that the input data in counts is in units of counts/bin. If False, it assumes the data in counts is in counts/second.

gti: 2-d float array, default ``None``

[[gti0_0, gti0_1], [gti1_0, gti1_1], ...] Good Time Intervals. They are not applied to the data by default. They will be used by other methods to have an indication of the “safe” time intervals to use during analysis.

err_dist: str, optional, default ``poisson``

Statistical distribution used to calculate the uncertainties and other statistical values appropriately. If set to None, no assumptions are made and errors are kept equal to zero.

bg_counts: iterable, numpy.array or List of floats, optional, default ``None``

A list or array of background counts detected in the background extraction region in each bin corresponding to the bins defined in time.

bg_ratio: iterable, numpy.array or List of floats, optional, default ``None``

A list or array of source region area to background region area ratio in each bin. These are factors by which the bg_counts should be scaled to estimate background counts within the source aperture.

frac_exp: iterable, numpy.array or List of floats, optional, default ``None``

A list or array of fractional exposures in each bin.

mjdref: float

MJD reference (useful in most high-energy mission data)

dt: float or array of floats. Default median(diff(time))

Time resolution of the light curve. Can be an array of the same dimension as time specifying width of each bin.

skip_checks: bool

If True, the user specifies that data are already sorted and contain no infinite or nan points. Use at your own risk

low_memory: bool

If True, the lazily evaluated attributes (e.g., countrate and countrate_err if input_counts is True) will _not_ be stored in memory, but calculated every time they are requested.

mission : str

Mission that recorded the data (e.g. NICER)

instr : str

Instrument onboard the mission

header : str

The full header of the original FITS file, if relevant

**other_kw

Used internally. Any other keyword arguments will be ignored

Attributes:
time: numpy.ndarray

The array of midpoints of time bins.

bin_lo: numpy.ndarray

The array of lower time stamps of the time bins.

bin_hi: numpy.ndarray

The array of upper time stamps of the time bins.

counts: numpy.ndarray

The counts per bin corresponding to the bins in time.

counts_err: numpy.ndarray

The uncertainties corresponding to counts

bg_counts: numpy.ndarray

The background counts corresponding to the bins in time.

bg_ratio: numpy.ndarray

The ratio of source region area to background region area corresponding to each bin.

frac_exp: numpy.ndarray

The fractional exposures in each bin.

countrate: numpy.ndarray

The counts per second in each of the bins defined in time.

countrate_err: numpy.ndarray

The uncertainties corresponding to countrate

meanrate: float

The mean count rate of the light curve.

meancounts: float

The mean counts of the light curve.

n: int

The number of data points in the light curve.

dt: float or array of floats

The time resolution of the light curve.

mjdref: float

MJD reference date (mjdref + tstart / 86400 gives the MJD at the start of the observation)

tseg: float

The total duration of the light curve.

tstart: float

The start time of the light curve.

gti: 2-d float array

[[gti0_0, gti0_1], [gti1_0, gti1_1], ...] Good Time Intervals. They indicate the “safe” time intervals to be used during the analysis of the light curve.

err_dist: string

Statistic of the Lightcurve, it is used to calculate the uncertainties and other statistical values appropriately. It propagates to Spectrum classes.

mission : str

Mission that recorded the data (e.g. NICER)

instr : str

Instrument onboard the mission

detector_id : iterable

The detector that recorded each photon, if relevant (e.g. XMM, Chandra)

header : str

The full header of the original FITS file, if relevant

analyze_lc_chunks(segment_size, func, fraction_step=1, **kwargs)[source]

Analyze segments of the light curve with any function.

Deprecated since version 2.0: Use Lightcurve.analyze_segments(func, segment_size) instead.

Parameters:
segment_size : float

Length in seconds of the light curve segments

func : function

Function accepting a Lightcurve object as single argument, plus possible additional keyword arguments, and returning a number or a tuple - e.g., (result, error) where both result and error are numbers.

Returns:
start_times : array

Lower time boundaries of all time segments.

stop_times : array

Upper time boundaries of all segments.

result : array of N elements

The result of func for each segment of the light curve

Other Parameters:
fraction_step : float

By default, segments do not overlap (fraction_step = 1). If fraction_step < 1, then the start points of consecutive segments are fraction_step * segment_size apart, and consecutive segments overlap. For example, for fraction_step = 0.5, the window shifts by one half of segment_size.

kwargs : keyword arguments

Any additional keyword arguments are passed to func.

apply_gtis(inplace=True)[source]

Apply GTIs to a light curve. Filters the time, counts, countrate, counts_err and countrate_err arrays for all bins that fall into Good Time Intervals and recalculates mean countrate and the number of bins.

Parameters:
inplace : bool

If True, overwrite the current light curve. Otherwise, return a new one.

baseline(lam, p, niter=10, offset_correction=False)[source]

Calculate the baseline of the light curve, accounting for GTIs.

Parameters:
lam : float

The “smoothness” parameter. Larger values make the baseline stiffer. Typically 1e2 < lam < 1e9.

p : float

The “asymmetry” parameter. Smaller values make the baseline more “horizontal”. Typically 0.001 < p < 0.1, though this is not strict.

Returns:
baseline : numpy.ndarray

An array with the baseline of the light curve

Other Parameters:
offset_correction : bool, default False

By default, this method does not align to the running mean of the light curve, but stays below it. If offset_correction is True, an additional step shifts the baseline to the middle of the light-curve noise distribution.
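
The baseline is computed with an asymmetric least squares (ALS) smoother. Below is a minimal dense-matrix sketch of that algorithm, workable for short arrays (illustrative only; Stingray's implementation differs in details and accounts for GTIs):

```python
import numpy as np

def als_baseline(y, lam=1e5, p=0.01, niter=10):
    """Asymmetric least squares baseline (dense sketch for short arrays)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    # Second-difference operator: penalizes curvature of the baseline
    D = np.diff(np.eye(n), 2, axis=0)
    penalty = lam * (D.T @ D)
    w = np.ones(n)
    for _ in range(niter):
        # Weighted, smoothness-penalized least squares solve
        z = np.linalg.solve(np.diag(w) + penalty, w * y)
        # Asymmetric reweighting: points above the baseline get weight p,
        # points below get 1 - p, pushing the baseline under the data
        w = np.where(y > z, p, 1 - p)
    return z
```

For a constant input the baseline recovers the input exactly, since the curvature penalty vanishes there.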

bexvar()[source]

Finds posterior samples of Bayesian excess variance (bexvar) for the light curve. It requires source counts in counts and time intervals for each bin. If dt is an array, its elements are used as the time intervals for each bin; if dt is a float, all time intervals are assumed equal to dt.

Returns:
lc_bexvar : iterable, numpy.array of floats

An array of posterior samples of Bayesian excess variance (bexvar).

check_lightcurve()[source]

Make various checks on the lightcurve.

It can be slow, use it if you are not sure about your input data.

estimate_chunk_length(*args, **kwargs)[source]

Deprecated alias of estimate_segment_size.

estimate_segment_size(min_counts=100, min_samples=100, even_sampling=None)[source]

Estimate a reasonable segment length for segment-by-segment analysis.

The user has to specify a criterion based on a minimum number of counts (if the time series has a counts attribute) or a minimum number of time samples. At least one of min_counts and min_samples must be specified.

Returns:
segment_sizefloat

The length of the light curve chunks that satisfies the conditions

Other Parameters:
min_counts : int

Minimum number of counts for each chunk. Optional (but needs min_samples if left unspecified). Only makes sense if the series has a counts attribute and it is evenly sampled.

min_samples : int

Minimum number of time bins. Optional (but needs min_counts if left unspecified).

even_sampling : bool

Force the treatment of the data as evenly sampled or not. If None, the data are considered evenly sampled if self.dt is larger than zero and the median separation between subsequent times is within 1% of self.dt.

Examples

>>> import numpy as np
>>> time = np.arange(150)
>>> count = np.zeros_like(time) + 3
>>> lc = Lightcurve(time, count, dt=1)
>>> assert np.isclose(
...     lc.estimate_segment_size(min_counts=10, min_samples=3), 4)
>>> assert np.isclose(lc.estimate_segment_size(min_counts=10, min_samples=5), 5)
>>> count[2:4] = 1
>>> lc = Lightcurve(time, count, dt=1)
>>> assert np.isclose(lc.estimate_segment_size(min_counts=3, min_samples=1), 3)
>>> # A slightly more complex example
>>> dt=0.2
>>> time = np.arange(0, 1000, dt)
>>> counts = np.random.poisson(100, size=len(time))
>>> lc = Lightcurve(time, counts, dt=dt)
>>> assert np.isclose(lc.estimate_segment_size(100, 2), 0.4)
>>> min_total_bins = 40
>>> assert np.isclose(lc.estimate_segment_size(100, 40), 8.0)
static from_astropy_table(ts, **kwargs)[source]

Create a StingrayObject from data in an Astropy Table.

The table MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the attributes in the table's meta dictionary will form the new meta attributes of the object.

static from_astropy_timeseries(ts, **kwargs)[source]

Create a StingrayTimeseries from data in an Astropy TimeSeries

The timeseries has to define at least a column called time; the rest of the columns will form the array attributes of the new time series, while the attributes in table.meta will form the new meta attributes of the time series.

Parameters:
ts : astropy.timeseries.TimeSeries

A TimeSeries object with the array attributes as columns, and the meta attributes in the meta dictionary

Returns:
ts : StingrayTimeseries

Timeseries object

static from_lightkurve(lk, skip_checks=True)[source]

Creates a new Lightcurve from a lightkurve.LightCurve.

Parameters:
lk : lightkurve.LightCurve

A lightkurve LightCurve object

skip_checks: bool

If True, the user specifies that data are already sorted and contain no infinite or nan points. Use at your own risk.

join(other, skip_checks=False)[source]

Join two lightcurves into a single object.

The new Lightcurve object will contain time stamps from both the objects. The counts and countrate attributes in the resulting object will contain the union of the non-overlapping parts of the two individual objects, or the average in case of overlapping time arrays of both Lightcurve objects.

Good Time Intervals are also joined.

Note: Ideally, the time arrays of both lightcurves should not overlap.

Parameters:
otherLightcurve object

The other Lightcurve object to be joined with this one.

skip_checks: bool

If True, the user specifies that data are already sorted and contain no infinite or nan points. Use at your own risk.

Returns:
lc_newLightcurve object

The resulting Lightcurve object.

Examples

>>> time1 = [5, 10, 15]
>>> count1 = [300, 100, 400]
>>> time2 = [20, 25, 30]
>>> count2 = [600, 1200, 800]
>>> lc1 = Lightcurve(time1, count1, dt=5)
>>> lc2 = Lightcurve(time2, count2, dt=5)
>>> lc = lc1.join(lc2)
>>> lc.time
array([ 5, 10, 15, 20, 25, 30])
>>> assert np.allclose(lc.counts, [ 300,  100,  400,  600, 1200,  800])
static make_lightcurve(toa, dt, tseg=None, tstart=None, gti=None, mjdref=0, use_hist=False)[source]

Make a light curve out of photon arrival times, with a given time resolution dt. Note that dt should be larger than the native time resolution of the instrument that has taken the data.

Parameters:
toa: iterable

list of photon arrival times

dt: float

time resolution of the light curve (the bin width)

tseg: float, optional, default ``None``

The total duration of the light curve. If this is None, the total duration will be the interval between either the first and the last GTI boundary or, if gti is not set, the first and the last photon arrival times in toa.

Note: If tseg is not divisible by dt (i.e. if tseg/dt is not an integer number), then the last fractional bin will be dropped!

tstart: float, optional, default ``None``

The start time of the light curve. If this is None, either the first gti boundary or, if not available, the arrival time of the first photon will be used as the start time of the light curve.

gti: 2-d float array

[[gti0_0, gti0_1], [gti1_0, gti1_1], ...] Good Time Intervals

use_hist : bool

Use np.histogram instead of np.bincount. Might be advantageous for very short datasets.

Returns:
lc: Lightcurve object

A Lightcurve object with the binned light curve
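
The core binning step can be sketched with NumPy (a simplified stand-in that ignores GTIs):

```python
import numpy as np

def bin_events(toa, dt, tstart=None, tseg=None):
    """Bin photon arrival times into counts per bin (simplified sketch)."""
    toa = np.asarray(toa, dtype=float)
    if tstart is None:
        tstart = toa.min()
    if tseg is None:
        tseg = toa.max() - tstart
    nbins = int(tseg / dt)  # a trailing fractional bin is dropped
    edges = tstart + np.arange(nbins + 1) * dt
    counts, _ = np.histogram(toa, bins=edges)
    times = edges[:-1] + dt / 2  # bin midpoints, as in the time attribute
    return times, counts

times, counts = bin_events([0.1, 0.2, 1.5, 2.7, 2.9], dt=1.0, tstart=0, tseg=3)
# counts per 1-s bin: [2, 1, 2], at bin midpoints [0.5, 1.5, 2.5]
```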

plot(witherrors=False, labels=None, ax=None, title=None, marker='-', save=False, filename=None, axis_limits=None, axis=None, plot_btis=True)[source]

Plot the light curve using matplotlib.

Plot the light curve object on a graph self.time on x-axis and self.counts on y-axis with self.counts_err optionally as error bars.

Parameters:
witherrors: boolean, default False

Whether to plot the Lightcurve with errorbars or not

labels : iterable, default None

A list of tuple with xlabel and ylabel as strings.

axis_limits : list, tuple, string, default None

Parameter to set axis properties of the matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for the matplotlib.pyplot.axis() method.

axis : list, tuple, string, default None

Deprecated in favor of axis_limits, same functionality.

title : str, default None

The title of the plot.

marker : str, default '-'

Line style and color of the plot. Line styles and colors are combined in a single format string, as in 'bo' for blue circles. See matplotlib.pyplot.plot for more options.

save : boolean, optional, default False

If True, save the figure with specified filename.

filename : str

File name of the image to save. Depends on the boolean save.

ax : matplotlib.pyplot.axis object

Axis to be used for plotting. Defaults to creating a new one.

plot_btis : bool

Plot the bad time intervals as red areas on the plot

classmethod read(filename, fmt=None, format_=None, err_dist='gauss', skip_checks=False, **fits_kwargs)[source]

Read a Lightcurve object from file.

Currently supported formats are

  • pickle (not recommended for long-term storage)

  • hea : FITS Light curves from HEASARC-supported missions.

  • any other formats compatible with the writers in astropy.table.Table (ascii.ecsv, hdf5, etc.)

Files that need the astropy.table.Table interface MUST contain at least a time column and a counts or countrate column. The default ascii format is enhanced CSV (ECSV). Data formats supporting the serialization of metadata (such as ECSV and HDF5) can contain all lightcurve attributes such as dt, gti, etc with no significant loss of information. Other file formats might lose part of the metadata, so must be used with care.

Parameters:
filename: str

Path and file name for the file to be read.

fmt: str

Available options are ‘pickle’, ‘hea’, and any Table-supported format such as ‘hdf5’, ‘ascii.ecsv’, etc.

Returns:
lcLightcurve object
Other Parameters:
err_dist : str, default 'gauss'

Default error distribution if not specified in the file (e.g. for ASCII files). The default is ‘gauss’ just because it is likely that people using ASCII light curves will want to specify Gaussian error bars, if any.

skip_checks : bool

See Lightcurve documentation

**fits_kwargs : additional keyword arguments

Any other arguments to be passed to lcurve_from_fits (only relevant for hea/ogip formats)

rebin(dt_new=None, f=None, method='sum')[source]

Rebin the light curve to a new time resolution. While the new resolution need not be an integer multiple of the previous time resolution, be aware that if it is not, the last bin will be cut off by the fraction left over by the integer division.

Parameters:
dt_new: float

The new time resolution of the light curve. Must be larger than the time resolution of the old light curve!

method: {``sum`` | ``mean`` | ``average``}, optional, default ``sum``

This keyword argument sets whether the counts in the new bins should be summed or averaged.

Returns:
lc_new: Lightcurve object

The Lightcurve object with the new, binned light curve.

Other Parameters:
f: float

the rebin factor. If specified, it substitutes dt_new with f*self.dt
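
For an evenly sampled light curve and an integer rebin factor, the operation reduces to summing groups of consecutive bins. A NumPy sketch (illustrative only; the actual method also handles non-integer factors, count rates, and GTIs):

```python
import numpy as np

def rebin_sum(counts, factor):
    """Sum consecutive groups of `factor` bins; trailing leftovers are dropped."""
    counts = np.asarray(counts)
    n = (len(counts) // factor) * factor  # cut off the incomplete last group
    return counts[:n].reshape(-1, factor).sum(axis=1)

rebin_sum([1, 2, 3, 4, 5], 2)
# → array([3, 7]); the fifth bin is dropped
```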

sort(reverse=False, inplace=False)[source]

Sort a Lightcurve object by time.

A Lightcurve can be sorted in either increasing or decreasing order using this method. The time array gets sorted and the counts array is changed accordingly.

Parameters:
reverse : boolean, default False

If True then the object is sorted in reverse order.

inplace : bool

If True, overwrite the current light curve. Otherwise, return a new one.

Returns:
lc_new: Lightcurve object

The Lightcurve object with sorted time and counts arrays.

Examples

>>> time = [2, 1, 3]
>>> count = [200, 100, 300]
>>> lc = Lightcurve(time, count, dt=1, skip_checks=True)
>>> lc_new = lc.sort()
>>> lc_new.time
array([1, 2, 3])
>>> assert np.allclose(lc_new.counts, [100, 200, 300])
sort_counts(reverse=False, inplace=False)[source]

Sort a Lightcurve object in accordance with its counts array.

A Lightcurve can be sorted in either increasing or decreasing order using this method. The counts array gets sorted and the time array is changed accordingly.

Parameters:
reverse : boolean, default False

If True then the object is sorted in reverse order.

inplace : bool

If True, overwrite the current light curve. Otherwise, return a new one.

Returns:
lc_new: Lightcurve object

The Lightcurve object with sorted time and counts arrays.

Examples

>>> time = [1, 2, 3]
>>> count = [200, 100, 300]
>>> lc = Lightcurve(time, count, dt=1, skip_checks=True)
>>> lc_new = lc.sort_counts()
>>> lc_new.time
array([2, 1, 3])
>>> assert np.allclose(lc_new.counts, [100, 200, 300])
split(min_gap, min_points=1)[source]

For data with gaps, it can sometimes be useful to split the light curve into separate, evenly sampled objects along those data gaps. This method allows one to do this: it finds data gaps of a specified minimum size and produces a list of new Lightcurve objects, one for each contiguous segment.

Parameters:
min_gap : float

The length of a data gap, in the same units as the time attribute of the Lightcurve object. Any smaller gaps will be ignored, any larger gaps will be identified and used to split the light curve.

min_points : int, default 1

The minimum number of data points in each light curve. Light curves with fewer data points will be ignored.

Returns:
lc_split : iterable of Lightcurve objects

The list of all contiguous light curves

Examples

>>> time = np.array([1, 2, 3, 6, 7, 8, 11, 12, 13])
>>> counts = np.random.rand(time.shape[0])
>>> lc = Lightcurve(time, counts, dt=1, skip_checks=True)
>>> split_lc = lc.split(1.5)
to_astropy_table(**kwargs)[source]

Save the light curve to an astropy.table.Table object.

The time array and all the array attributes become columns. The meta attributes become metadata of the astropy.table.Table object.

Other Parameters:
no_longdouble : bool, default False

If True, the data are converted to double precision before being saved. This is useful, e.g., for saving to FITS files, which do not support long double precision.

to_astropy_timeseries(**kwargs)[source]

Save the light curve to an astropy.timeseries.TimeSeries object.

The time array and all the array attributes become columns. The meta attributes become metadata of the astropy.timeseries.TimeSeries object. The time array is saved as a TimeDelta object.

Other Parameters:
no_longdouble : bool, default False

If True, the data are converted to double precision before being saved. This is useful, e.g., for saving to FITS files, which do not support long double precision.

to_lightkurve()[source]

Returns a lightkurve.LightCurve object. This feature requires Lightkurve to be installed (e.g. pip install lightkurve). An ImportError will be raised if this package is not available.

Returns:
lightcurvelightkurve.LightCurve

A lightkurve LightCurve object.

truncate(start=0, stop=None, method='index')[source]

Truncate a Lightcurve object.

This method takes a start and a stop point (either as indices, or as times in the same unit as those in the time attribute), truncates all bins before start and after stop, and returns a new Lightcurve object with the truncated light curve.

Parameters:
start : int, default 0

Index (or time stamp) of the starting point of the truncation. If no value is set for the start point, then all points from the first element in the time array are taken into account.

stop : int, default None

Index (or time stamp) of the ending point (exclusive) of the truncation. If no value of stop is set, then all points up to and including the last point in the counts array are taken into account.

method : {index | time}, optional, default index

Type of the start and stop values. If set to index then the values are treated as indices of the counts array, or if set to time, the values are treated as actual time values.

Returns:
lc_new: Lightcurve object

The Lightcurve object with truncated time and counts arrays.

Examples

>>> time = [1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> count = [10, 20, 30, 40, 50, 60, 70, 80, 90]
>>> lc = Lightcurve(time, count, dt=1)
>>> lc_new = lc.truncate(start=2, stop=8)
>>> assert np.allclose(lc_new.counts, [30, 40, 50, 60, 70, 80])
>>> lc_new.time
array([3, 4, 5, 6, 7, 8])
>>> # Truncation can also be done by time values
>>> lc_new = lc.truncate(start=6, method='time')
>>> lc_new.time
array([6, 7, 8, 9])
>>> assert np.allclose(lc_new.counts, [60, 70, 80, 90])

EventList

class stingray.events.EventList(time=None, energy=None, ncounts=None, mjdref=0, dt=0, notes='', gti=None, pi=None, high_precision=False, mission=None, instr=None, header=None, detector_id=None, ephem=None, timeref=None, timesys=None, rmf_file=None, skip_checks=False, **other_kw)[source]

Basic class for event list data. Event lists generally correspond to individual events (e.g. photons) recorded by the detector, and their associated properties. For X-ray data where this type commonly occurs, events are time stamps of when a photon arrived in the detector, and (optionally) the photon energy associated with the event.

Parameters:
time: iterable

A list or array of time stamps

Other Parameters:
dt: float

The time resolution of the events. Only relevant when using events to produce light curves with similar bin time.

energy: iterable

A list or array of photon energy values in keV

mjdref : float

The MJD used as a reference for the time array.

ncounts: int

Number of desired data points in event list. Deprecated

gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``

Good Time Intervals

pi : integer, numpy.ndarray

PI channels

notes : str

Any useful annotations

high_precision : bool

Change the precision of self.time to float128. Useful while dealing with fast pulsars.

mission : str

Mission that recorded the data (e.g. NICER)

instr : str

Instrument onboard the mission

header : str

The full header of the original FITS file, if relevant

detector_id : iterable

The detector that recorded each photon (if the instrument has more than one, e.g. XMM/EPIC-pn)

timeref : str

The time reference, as recorded in the FITS file (e.g. SOLARSYSTEM)

timesys : str

The time system, as recorded in the FITS file (e.g. TDB)

ephem : str

The JPL ephemeris used to barycenter the data, if any (e.g. DE430)

rmf_file : str, default None

The file name of the RMF file to use for calibration.

skip_checks : bool, default False

Skip checks for the validity of the event list. Use with caution.

**other_kw

Used internally. Any other keyword arguments will be ignored

Attributes:
time: numpy.ndarray

The array of event arrival times, in seconds from the reference MJD defined in mjdref

energy: numpy.ndarray

The array of photon energy values

ncounts: int

The number of data points in the event list

dt: float

The time resolution of the events. Only relevant when using events to produce light curves with similar bin time.

mjdref : float

The MJD used as a reference for the time array.

gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``

Good Time Intervals

pi : integer, numpy.ndarray

PI channels

high_precision : bool

Change the precision of self.time to float128. Useful while dealing with fast pulsars.

mission : str

Mission that recorded the data (e.g. NICER)

instr : str

Instrument onboard the mission

detector_id : iterable

The detector that recorded each photon, if relevant (e.g. XMM, Chandra)

header : str

The full header of the original FITS file, if relevant

apply_deadtime(deadtime, inplace=False, **kwargs)[source]

Apply deadtime filter to this event list.

Additional arguments in kwargs are passed to get_deadtime_mask

Parameters:
deadtime : float

Value of dead time to apply to data

inplace : bool, default False

If True, apply the deadtime to the current event list. Otherwise, return a new event list.

Returns:
new_event_list : EventList object

Filtered event list. If inplace is True, this is the input object filtered for deadtime, otherwise it is a new object.

additional_output : object

Only returned if return_all is True. See get_deadtime_mask for more details.

Examples

>>> events = np.array([1, 1.05, 1.07, 1.08, 1.1, 2, 2.2, 3, 3.1, 3.2])
>>> events = EventList(events, gti=[[0, 3.3]])
>>> events.pi=np.array([1, 2, 2, 2, 2, 1, 1, 1, 2, 1])
>>> events.energy=np.array([1, 2, 2, 2, 2, 1, 1, 1, 2, 1])
>>> events.mjdref = 10
>>> filt_events, retval = events.apply_deadtime(0.11, inplace=False,
...                                             verbose=False,
...                                             return_all=True)
>>> assert filt_events is not events
>>> expected = np.array([1, 2, 2.2, 3, 3.2])
>>> assert np.allclose(filt_events.time, expected)
>>> assert np.allclose(filt_events.pi, 1)
>>> assert np.allclose(filt_events.energy, 1)
>>> assert not np.allclose(events.pi, 1)
>>> filt_events = events.apply_deadtime(0.11, inplace=True,
...                                     verbose=False)
>>> assert filt_events is events
convert_pi_to_energy(rmf_file)[source]

Calibrate the energy column of the event list.

Defines the energy attribute of the event list by converting the PI channels to energy using the provided RMF file.

Parameters:
rmf_file : str

The file name of the RMF file to use for calibration.

filter_energy_range(energy_range, inplace=False, use_pi=False)[source]

Filter the event list for a given energy range.

Parameters:
energy_range: [float, float]

Energy range in keV, or in PI channel (if use_pi is True)

Other Parameters:
inplace : bool, default False

Do the change in place (modify current event list). Otherwise, copy to a new event list.

use_pi : bool, default False

Use PI channel instead of energy in keV

Examples

>>> events = EventList(time=[0, 1, 2], energy=[0.3, 0.5, 2], pi=[3, 5, 20])
>>> e1 = events.filter_energy_range([0, 1])
>>> assert np.allclose(e1.time, [0, 1])
>>> assert np.allclose(events.time, [0, 1, 2])
>>> e2 = events.filter_energy_range([0, 10], use_pi=True, inplace=True)
>>> assert np.allclose(e2.time, [0, 1])
>>> assert np.allclose(events.time, [0, 1])
static from_lc(lc)[source]

Create an EventList from a stingray.Lightcurve object. Note that all events in a given time bin will have the same time stamp.

Bins with negative counts will be ignored.

Parameters:
lc: :class:`stingray.Lightcurve` object

Light curve to use for creation of the event list.

Returns:
ev: EventList object

The resulting list of photon arrival times generated from the light curve.
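
Conceptually, each bin contributes as many copies of its time stamp as it has counts. With NumPy this is a one-liner (a sketch of the idea, assuming integer counts; negative counts are clipped to zero here, mirroring the "ignored" behavior):

```python
import numpy as np

time = np.array([1.0, 2.0, 3.0])
counts = np.array([2, 0, 3])
# Repeat each bin's time stamp `counts` times; bins with negative
# counts would be clipped to zero repeats (i.e. ignored)
event_times = np.repeat(time, np.clip(counts, 0, None))
# → array([1., 1., 3., 3., 3.])
```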

get_color_evolution(energy_ranges, segment_size=None, use_pi=False)[source]

Compute the color in equal-length segments of the event list.

Parameters:
energy_ranges : 2x2 list

List of energy ranges to compute the color: [[en1_min, en1_max], [en2_min, en2_max]]

segment_size : float

Segment size in seconds. If None, the full GTIs are used as segments instead.

Returns:
color : array-like

Array of colors, computed in each segment as the ratio of the counts in the second energy range to the counts in the first energy range.

Other Parameters:
use_pi : bool, default False

Use PI channel instead of energy in keV
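
Within one segment, the color is simply the ratio of counts in the two bands. A NumPy sketch for a single segment (illustrative only; it ignores GTIs and error propagation):

```python
import numpy as np

def color_in_segment(energy, ranges):
    """Ratio of counts in the second band to counts in the first (sketch)."""
    (lo1, hi1), (lo2, hi2) = ranges
    n1 = np.count_nonzero((energy >= lo1) & (energy < hi1))
    n2 = np.count_nonzero((energy >= lo2) & (energy < hi2))
    return n2 / n1

energy = np.array([0.5, 0.8, 1.2, 3.0, 5.0, 7.0])
color_in_segment(energy, [[0.3, 2.0], [2.0, 10.0]])
# → 1.0 (three soft photons, three hard photons)
```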

get_energy_mask(energy_range, use_pi=False)[source]

Get a mask corresponding to events with a given energy range.

Parameters:
energy_range: [float, float]

Energy range in keV, or in PI channel (if use_pi is True)

Other Parameters:
use_pi : bool, default False

Use PI channel instead of energy in keV

get_intensity_evolution(energy_range, segment_size=None, use_pi=False)[source]

Compute the intensity in equal-length segments (or full GTIs) of the event list.

Parameters:
energy_range : [en1_min, en1_max]

Energy range to compute the intensity

segment_size : float

Segment size in seconds. If None, the full GTIs are used as segments instead.

Returns:
intensity : array-like

Array of intensities (in counts/s), computed in each segment.

Other Parameters:
use_pi : bool, default False

Use PI channel instead of energy in keV

join(other, strategy='infer')[source]

Join two EventList objects into one.

If both are empty, an empty EventList is returned.

GTIs are crossed if the event lists are over a common time interval, and appended otherwise.

Standard attributes such as pi and energy remain None if they are None in both. Otherwise, np.nan is used as a default value for the EventList where they were None. Arbitrary attributes (e.g., Stokes parameters in polarimetric data) are created and joined using the same convention.

Multiple checks are done on the event lists being joined. If the time array of an event list is empty, it is ignored. If the time resolutions differ, the final event list will have the coarser one. If the MJDREF values differ, the time reference will be changed to that of the first event list.

Parameters:
other : EventList object or list of EventList objects

The other EventList object to join with this one. If other is a list, it is assumed to be a list of EventList objects, and they are all joined one by one.

Returns:
ev_new : EventList object

The resulting EventList object.

Other Parameters:
strategy : {“intersection”, “union”, “append”, “infer”, “none”}

Method to use to merge the GTIs. If “intersection”, the GTIs are merged using the intersection of the GTIs. If “union”, the GTIs are merged using the union of the GTIs. If “none”, a single GTI with the minimum and the maximum time stamps of all GTIs is returned. If “infer”, the strategy is decided based on the GTIs. If there are no overlaps, “union” is used, otherwise “intersection” is used. If “append”, the GTIs are simply appended but they must be mutually exclusive.
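To make the “intersection” strategy concrete, here is a minimal NumPy sketch of crossing two GTI lists via naive pairwise overlap. This is an illustration only, not Stingray's actual GTI machinery:

```python
import numpy as np

def cross_gtis_naive(gti1, gti2):
    """Pairwise intersection of two GTI lists (the "intersection" strategy)."""
    out = []
    for s1, e1 in gti1:
        for s2, e2 in gti2:
            start, end = max(s1, s2), min(e1, e2)
            if start < end:  # keep only non-empty overlaps
                out.append([start, end])
    return np.asarray(out)

gti1 = [[0.0, 100.0], [150.0, 200.0]]
gti2 = [[50.0, 180.0]]
# cross_gtis_naive(gti1, gti2) -> [[50., 100.], [150., 180.]]
```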

property ncounts

Number of events in the event list.

classmethod read(filename, fmt=None, rmf_file=None, **kwargs)[source]

Read an EventList object from file.

Currently supported formats are

  • pickle (not recommended for long-term storage)

  • hea or ogip : FITS Event files from (well, some) HEASARC-supported missions.

  • any other formats compatible with the writers in astropy.table.Table (ascii.ecsv, hdf5, etc.)

Files that need the astropy.table.Table interface MUST contain at least a time column. Other recognized columns are energy and pi. The default ascii format is enhanced CSV (ECSV). Data formats supporting the serialization of metadata (such as ECSV and HDF5) can contain all event list attributes, such as mission, gti, etc., with no significant loss of information. Other file formats might lose part of the metadata, so they must be used with care.

Parameters:
filename: str

Path and file name for the file to be read.

fmt: str

Available options are ‘pickle’, ‘hea’, and any Table-supported format such as ‘hdf5’, ‘ascii.ecsv’, etc.

Returns:
ev: EventList object

The EventList object reconstructed from file

Other Parameters:
rmf_file : str, default None

The file name of the RMF file to use for energy calibration. Defaults to None, which implies no channel->energy conversion at this stage (or a default calibration applied to selected missions).

kwargs : dict

Any further keyword arguments to be passed to load_events_and_gtis for reading in event lists in OGIP/HEASOFT format.

simulate_energies(spectrum, use_spline=False)[source]

Assign (simulate) energies to event list from a spectrum.

Parameters:
spectrum: 2-d array or list [energies, spectrum]

Energies versus corresponding fluxes. The 2-d array or list must have energies across the first dimension and fluxes across the second one. If the dimension of the energies is the same as spectrum, they are interpreted as bin centers. If it is longer by one, they are interpreted as proper bin edges (similarly to the bins of np.histogram). Note that for non-uniformly binned spectra, it is advisable to pass the exact edges.

simulate_times(lc, use_spline=False, bin_time=None)[source]

Simulate times from an input light curve.

Randomly simulate photon arrival times for an EventList from a stingray.Lightcurve object, using the inverse CDF method.

.. note::

Preferably use model light curves containing no Poisson noise, as this method will intrinsically add Poisson noise to them.

Parameters:
lc: :class:`stingray.Lightcurve` object
Returns:
times : array-like

Simulated photon arrival times

Other Parameters:
use_spline : bool

Approximate the light curve with a spline to avoid binning effects

bin_time : float, default None

Ignored and deprecated, maintained for backwards compatibility.
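The inverse-CDF idea behind this method can be sketched with plain NumPy. This is a simplified illustration with a made-up model light curve, not Stingray's implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical noiseless model light curve: bin centers and counts per bin
t = np.arange(0.5, 100.5)                      # bin centers, dt = 1 s
counts = 50 + 40 * np.sin(2 * np.pi * t / 25)  # always positive here

# The normalized cumulative counts act as a CDF of arrival time
cdf = np.cumsum(counts)
cdf = cdf / cdf[-1]

# Draw a Poisson number of events and map uniform deviates through the
# inverse CDF to obtain simulated arrival times
n_events = rng.poisson(counts.sum())
times = np.sort(np.interp(rng.uniform(size=n_events), cdf, t))
```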

sort(inplace=False)[source]

Sort the event list in time.

Returns:
eventlist : EventList

The sorted event list. If inplace=True, it will be a shallow copy of self.

Other Parameters:
inplace : bool, default False

Sort in place. If False, return a new event list.

Examples

>>> events = EventList(time=[0, 2, 1], energy=[0.3, 2, 0.5], pi=[3, 20, 5],
...                    skip_checks=True)
>>> e1 = events.sort()
>>> assert np.allclose(e1.time, [0, 1, 2])
>>> assert np.allclose(e1.energy, [0.3, 0.5, 2])
>>> assert np.allclose(e1.pi, [3, 5, 20])

But the original event list has not been altered (inplace=False by default):

>>> assert np.allclose(events.time, [0, 2, 1])

Let’s do it in place instead:

>>> e2 = events.sort(inplace=True)
>>> assert np.allclose(e2.time, [0, 1, 2])

In this case, the original event list has been altered:

>>> assert np.allclose(events.time, [0, 1, 2])

to_binned_timeseries(dt, array_attrs=None)[source]

Convert the event list to a binned stingray.StingrayTimeseries object.

The result will be something similar to a light curve, but with arbitrary attributes corresponding to a weighted sum of each specified attribute of the event list.

E.g. if the event list has a q attribute, the final time series will have a q attribute, which is the sum of all q values in each time bin.

Parameters:
dt: float

Binning time of the light curve

Returns:
lc: stingray.StingrayTimeseries object
Other Parameters:
array_attrs: list of str

List of attributes to be converted to light curve arrays. If None, all array attributes will be converted.

to_lc(dt, tstart=None, tseg=None)[source]

Convert event list to a stingray.Lightcurve object.

Parameters:
dt: float

Binning time of the light curve

Returns:
lc: stingray.Lightcurve object
Other Parameters:
tstart : float

Start time of the light curve

tseg: float

Total duration of light curve
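Conceptually, this conversion is just a histogram of the arrival times. A minimal NumPy sketch with invented numbers (not the actual implementation):

```python
import numpy as np

# Hypothetical photon arrival times (s)
times = np.array([0.1, 0.3, 1.2, 1.4, 1.9, 3.5])
dt, tstart, tseg = 1.0, 0.0, 4.0

# Bin edges 0, 1, 2, 3, 4; counts per bin give the light curve
edges = np.arange(tstart, tstart + tseg + dt / 2, dt)
counts, _ = np.histogram(times, bins=edges)
# counts is [2, 3, 0, 1]
```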

to_lc_iter(dt, segment_size=None)[source]

Convert event list to a generator of Lightcurves.

Parameters:
dt: float

Binning time of the light curves

Returns:
lc_gen: generator

Generates one stingray.Lightcurve object for each GTI or segment

Other Parameters:
segment_size : float, default None

Optional segment size. If None, use the GTI boundaries

to_lc_list(dt, segment_size=None)[source]

Convert event list to a list of Lightcurves.

Parameters:
dt: float

Binning time of the light curves

Returns:
lc_list: List

List containing one stingray.Lightcurve object for each GTI or segment

Other Parameters:
segment_size : float, default None

Optional segment size. If None, use the GTI boundaries


Fourier Products

These classes implement commonly used Fourier analysis products, most importantly Crossspectrum and Powerspectrum, along with the variants for averaged cross/power spectra.

Crossspectrum

class stingray.Crossspectrum(data1=None, data2=None, norm='frac', gti=None, lc1=None, lc2=None, power_type='all', dt=None, fullspec=False, skip_checks=False, save_all=False)[source]
classical_significances(threshold=1, trial_correction=False)[source]

Compute the classical significances for the powers in the power spectrum, assuming an underlying noise distribution that follows a chi-square distribution with 2M degrees of freedom, where M is the number of powers averaged in each bin.

Note that this function will only produce correct results when the following underlying assumptions are fulfilled:

  1. The power spectrum is Leahy-normalized

  2. There is no source of variability in the data other than the periodic signal to be determined with this method. This is important! If there are other sources of (aperiodic) variability in the data, this method will not produce correct results, but instead produce a large number of spurious false positive detections!

  3. There are no significant instrumental effects changing the statistical distribution of the powers (e.g. pile-up or dead time)

By default, the method produces (index, p-value) pairs for all powers in the power spectrum, where index is the numerical index of the power in question. If a threshold is set, only powers with p-values below that threshold are reported, along with their respective indices. If trial_correction is set to True, the threshold will be corrected for the number of trials (frequencies) in the power spectrum before being used.

Parameters:
threshold : float, optional, default 1

The threshold to be used when reporting p-values of potentially significant powers. Must be between 0 and 1. Default is 1 (all p-values will be reported).

trial_correction : bool, optional, default False

A Boolean flag that sets whether the threshold will be corrected by the number of frequencies before being applied. This decreases the threshold (p-values need to be lower to count as significant). Default is False (report all powers), though for any application where threshold is set to something meaningful, this should also be applied!

Returns:
pvals : iterable

A list of (index, p-value) tuples for all powers that have p-values lower than the threshold specified in threshold.
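The underlying logic can be sketched with scipy.stats. The numbers and the simple Bonferroni-style trial correction below are illustrative assumptions, not the exact implementation:

```python
import numpy as np
from scipy import stats

# Hypothetical Leahy-normalized powers, each the average of m powers
powers = np.array([1.8, 2.3, 45.0, 2.1])
m = 1

# For pure noise, m * P follows a chi-square distribution with 2m d.o.f.,
# so the p-value of each power is the chi-square survival function
pvals = stats.chi2.sf(m * powers, 2 * m)

threshold = 0.01
corrected = threshold / powers.size  # naive trial correction
significant = [(i, p) for i, p in enumerate(pvals) if p < corrected]
# only the bin with power 45 survives
```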

coherence()[source]

Compute Coherence function of the cross spectrum.

Coherence is defined in Vaughan and Nowak, 1996 [1]. It is a Fourier frequency dependent measure of the linear correlation between time series measured simultaneously in two energy channels.

Returns:
coh : numpy.ndarray

Coherence function

References
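A numerical sketch of frequency-dependent coherence, averaging periodograms of two channels that share a common signal. The synthetic data and the simple estimator below are illustrative, not the Stingray implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_seg = 256, 64

cross, p1, p2 = 0.0, 0.0, 0.0
for _ in range(n_seg):
    common = rng.normal(size=n)              # shared variability
    x1 = common + 0.1 * rng.normal(size=n)   # channel 1 + independent noise
    x2 = common + 0.1 * rng.normal(size=n)   # channel 2 + independent noise
    f1, f2 = np.fft.rfft(x1), np.fft.rfft(x2)
    cross += f2 * np.conj(f1)                # accumulate cross spectrum
    p1 += np.abs(f1) ** 2                    # and both power spectra
    p2 += np.abs(f2) ** 2

# Coherence: |<C>|^2 / (<P1><P2>); close to 1 where the channels correlate
coh = np.abs(cross / n_seg) ** 2 / ((p1 / n_seg) * (p2 / n_seg))
```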

deadtime_correct(dead_time, rate, background_rate=0, limit_k=200, n_approx=None, paralyzable=False)[source]

Correct the power spectrum for dead time effects.

This correction is based on the formula given in Zhang et al. 2015, assuming a constant dead time for all events. For more advanced dead time corrections, see the FAD method from stingray.deadtime.fad

Parameters:
dead_time: float

The dead time of the detector.

rate : float

Detected source count rate

Returns:
spectrum: Crossspectrum or derivative.

The dead-time corrected spectrum.

Other Parameters:
background_rate : float, default 0

Detected background count rate. This is important to estimate when deadtime is given by the combination of the source counts and background counts (e.g. in an imaging X-ray detector).

paralyzable: bool, default False

If True, the dead time correction is done assuming a paralyzable dead time. If False, the correction is done assuming a non-paralyzable (more common) dead time.

limit_k : int, default 200

Limit to this value the number of terms in the inner loops of calculations. Check the plots returned by the check_B and check_A functions to test that this number is adequate.

n_approx : int, default None

Number of bins to calculate the model power spectrum. If None, it will use the size of the input frequency array. Relatively simple models (e.g., low count rates compared to dead time) can use a smaller number of bins to speed up the calculation, and the final power values will be interpolated.

static from_events(events1, events2, dt, segment_size=None, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True, gti=None)[source]

Calculate AveragedCrossspectrum from two event lists

Parameters:
events1 : stingray.EventList

Events from channel 1

events2 : stingray.EventList

Events from channel 2

dt : float

The time resolution of the intermediate light curves (sets the Nyquist frequency)

Other Parameters:
segment_size : float

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum

norm : str, default “none”

The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram

use_common_mean : bool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

fullspec : bool, default False

Return the full periodogram, including negative frequencies

silent : bool, default False

Silence the progress bars

power_type : str, default ‘all’

If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’, the real part

gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.

static from_lc_iterable(iter_lc1, iter_lc2, dt, segment_size, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True, gti=None)[source]

Calculate AveragedCrossspectrum from two light curves

Parameters:
iter_lc1 : iterable of stingray.Lightcurve objects or np.array

Light curves from channel 1. If arrays, use them as counts

iter_lc2 : iterable of stingray.Lightcurve objects or np.array

Light curves from channel 2. If arrays, use them as counts

dt : float

The time resolution of the light curves (sets the Nyquist frequency)

Other Parameters:
segment_size : float

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum

norm : str, default “none”

The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram

use_common_mean : bool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

fullspec : bool, default False

Return the full periodogram, including negative frequencies

silent : bool, default False

Silence the progress bars

power_type : str, default ‘all’

If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’, the real part

gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.

save_all : bool, default False

If True, save the cross spectrum of each segment in the cs_all attribute of the output Crossspectrum object.

static from_lightcurve(lc1, lc2, segment_size=None, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True, gti=None)[source]

Calculate AveragedCrossspectrum from two light curves

Parameters:
lc1 : stingray.Lightcurve

Light curve from channel 1

lc2 : stingray.Lightcurve

Light curve from channel 2

Other Parameters:
segment_size : float

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum

norm : str, default “none”

The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram

use_common_mean : bool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

fullspec : bool, default False

Return the full periodogram, including negative frequencies

silent : bool, default False

Silence the progress bars

power_type : str, default ‘all’

If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’, the real part

gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.

static from_stingray_timeseries(ts1, ts2, flux_attr, error_flux_attr=None, segment_size=None, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True, gti=None)[source]

Calculate AveragedCrossspectrum from two light curves

Parameters:
ts1 : stingray.Timeseries

Time series from channel 1

ts2 : stingray.Timeseries

Time series from channel 2

flux_attr : str

What attribute of the time series will be used.

Other Parameters:
error_flux_attr : str

What attribute of the time series will be used as error bar.

segment_size : float

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum

norm : str, default “none”

The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram

use_common_mean : bool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

fullspec : bool, default False

Return the full periodogram, including negative frequencies

silent : bool, default False

Silence the progress bars

power_type : str, default ‘all’

If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’, the real part

gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.

static from_time_array(times1, times2, dt, segment_size=None, gti=None, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True)[source]

Calculate AveragedCrossspectrum from two arrays of event times.

Parameters:
times1 : np.array

Event arrival times of channel 1

times2 : np.array

Event arrival times of channel 2

dt : float

The time resolution of the intermediate light curves (sets the Nyquist frequency)

Other Parameters:
segment_size : float

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum.

gti : [[gti0, gti1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.

norm : str, default “none”

The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram

use_common_mean : bool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

fullspec : bool, default False

Return the full periodogram, including negative frequencies

silent : bool, default False

Silence the progress bars

power_type : str, default ‘all’

If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’, the real part

initial_checks(data1=None, data2=None, norm='frac', gti=None, lc1=None, lc2=None, segment_size=None, power_type='real', dt=None, fullspec=False)[source]

Run initial checks on the input.

Returns True if checks are passed, False if they are not.

Raises various errors for different bad inputs

Examples

>>> times = np.arange(0, 10)
>>> counts = np.random.poisson(100, 10)
>>> lc1 = Lightcurve(times, counts, skip_checks=True)
>>> lc2 = Lightcurve(times, counts, skip_checks=True)
>>> ev1 = EventList(times)
>>> ev2 = EventList(times)
>>> c = Crossspectrum()
>>> ac = AveragedCrossspectrum()

If norm is not a string, raise a TypeError:

>>> Crossspectrum.initial_checks(c, norm=1)
Traceback (most recent call last):
...
TypeError: norm must be a string...

If norm is not one of the valid norms, raise a ValueError:

>>> Crossspectrum.initial_checks(c, norm="blabla")
Traceback (most recent call last):
...
ValueError: norm must be 'frac'...

If power_type is not one of the valid options, raise a ValueError:

>>> Crossspectrum.initial_checks(c, power_type="blabla")
Traceback (most recent call last):
...
ValueError: power_type not recognized!

If the user passes only one light curve, raise a ValueError

>>> Crossspectrum.initial_checks(c, data1=lc1, data2=None)
Traceback (most recent call last):
...
ValueError: You can't do a cross spectrum...

If the user passes an event list without dt, raise a ValueError

>>> Crossspectrum.initial_checks(c, data1=ev1, data2=ev2, dt=None)
Traceback (most recent call last):
...
ValueError: If using event lists, please specify...
phase_lag()[source]

Calculate the Fourier phase lag of the cross spectrum.

This is defined as the argument of the complex cross spectrum, and gives the delay at all frequencies, in cycles, of one input light curve with respect to the other.

plot(labels=None, axis=None, title=None, marker='-', save=False, filename=None, ax=None)[source]

Plot the amplitude of the cross spectrum vs. the frequency using matplotlib.

Parameters:
labels : iterable, default None

A list of tuples with xlabel and ylabel as strings.

axis : list, tuple, string, default None

Parameter to set axis properties of the matplotlib figure. For example, it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for the matplotlib.pyplot.axis() method.

title : str, default None

The title of the plot.

marker : str, default ‘-’

Line style and color of the plot. Line styles and colors are combined in a single format string, as in 'bo' for blue circles. See matplotlib.pyplot.plot for more options.

save : boolean, optional, default False

If True, save the figure with specified filename.

filename : str

File name of the image to save. Depends on the boolean save.

ax : matplotlib.Axes object

An axes object to fill with the cross correlation plot.

rebin(df=None, f=None, method='mean')[source]

Rebin the cross spectrum to a new frequency resolution df.

Parameters:
df: float

The new frequency resolution

Returns:
bin_cs : Crossspectrum (or one of its subclasses) object

The newly binned cross spectrum or power spectrum. Note: this object will be of the same type as the object that called this method. For example, if this method is called from AveragedPowerspectrum, it will return an object of class AveragedPowerspectrum, too.

Other Parameters:
f: float

The rebin factor. If specified, it substitutes df with f*self.df

rebin_log(f=0.01)[source]

Logarithmic rebin of the periodogram. The new frequency depends on the previous frequency modified by a factor f:

\[d\nu_j = d\nu_{j-1} (1+f)\]
Parameters:
f: float, optional, default ``0.01``

Parameter that steers the frequency resolution

Returns:
new_spec : Crossspectrum (or one of its subclasses) object

The newly binned cross spectrum or power spectrum. Note: this object will be of the same type as the object that called this method. For example, if this method is called from AveragedPowerspectrum, it will return an object of class AveragedPowerspectrum, too.
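The geometric growth of the bin widths can be sketched directly. This is a toy construction of the new bin edges with assumed starting values, not the actual rebinning code:

```python
import numpy as np

f = 0.01
df0 = 1 / 512.0        # assumed initial frequency resolution
fmin, fmax = df0, 0.5  # assumed frequency range

# Each new bin is wider than the previous one by a factor (1 + f)
edges, dnu = [fmin], df0
while edges[-1] < fmax:
    edges.append(edges[-1] + dnu)
    dnu *= 1 + f
edges = np.array(edges)

widths = np.diff(edges)
# widths[j] / widths[j-1] == 1 + f for every j
```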

time_lag()[source]

Calculate the Fourier time lag of the cross spectrum. The time lag is calculated by taking the phase lag \(\phi\) and dividing it by the angular frequency:

.. math::

    \tau = \frac{\phi}{2 \pi \nu}

where \(\nu\) is the center of the frequency bins.
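Numerically, with made-up cross-spectral values, this amounts to:

```python
import numpy as np

# Hypothetical mid-bin frequencies and complex cross-spectrum values
freq = np.array([0.1, 0.2, 0.4])
cross = np.array([1 + 1j, 0.5 + 0.5j, 1 - 1j])

phase_lag = np.angle(cross)                # phi, in radians
time_lag = phase_lag / (2 * np.pi * freq)  # tau = phi / (2 pi nu)
# time_lag is [1.25, 0.625, -0.3125] seconds
```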

to_norm(norm, inplace=False)[source]

Convert Cross spectrum to new normalization.

Parameters:
norm : str

The new normalization of the spectrum

Returns:
new_spec : object, same class as input

The new, normalized, spectrum.

Other Parameters:
inplace: bool, default False

If True, change the current instance. Otherwise, return a new one
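The standard relations between the common normalizations can be sketched as follows. The numbers are invented, and this is a simplified illustration of the textbook conversions, not Stingray's conversion code:

```python
import numpy as np

# Hypothetical unnormalized powers |FT|^2 and observation properties
unnorm = np.array([1.0e4, 2.0e4, 5.0e3])
n_ph = 1.0e4        # total number of photons
mean_rate = 100.0   # mean count rate (counts/s)

# Common conversions between the normalizations
leahy = 2 * unnorm / n_ph        # Leahy: Poisson noise level is ~2
frac = leahy / mean_rate         # fractional rms^2 per Hz
abs_rms = frac * mean_rate ** 2  # absolute rms^2 per Hz
# leahy is [2., 4., 1.]
```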

type = 'crossspectrum'

Make a cross spectrum from a (binned) light curve. You can also make an empty Crossspectrum object to populate with your own Fourier-transformed data (this can sometimes be useful when making binned power spectra). Stingray uses the scipy.fft standards for the sign of the Nyquist frequency.

Parameters:
data1: :class:`stingray.Lightcurve` or :class:`stingray.events.EventList`, optional, default ``None``

The dataset for the first channel/band of interest.

data2: :class:`stingray.Lightcurve` or :class:`stingray.events.EventList`, optional, default ``None``

The dataset for the second, or “reference”, band.

norm: {``frac``, ``abs``, ``leahy``, ``none``}, default ``frac``

The normalization of the (real part of the) cross spectrum.

power_type: string, optional, default ``real``

Parameter to choose among complete, real part and magnitude of the cross spectrum.

fullspec: boolean, optional, default ``False``

If False, keep only the positive frequencies; if True, keep all of them.

Other Parameters:
gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input Lightcurve GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the Lightcurve objects before making the cross spectrum.

lc1: :class:`stingray.Lightcurve` object OR iterable of :class:`stingray.Lightcurve` objects

For backwards compatibility only. Like data1, but no stingray.events.EventList objects allowed

lc2: :class:`stingray.Lightcurve` object OR iterable of :class:`stingray.Lightcurve` objects

For backwards compatibility only. Like data2, but no stingray.events.EventList objects allowed

dt: float

The time resolution of the light curve. Only needed when constructing light curves in the case where data1, data2 are EventList objects

skip_checks: bool

Skip initial checks, for speed or other reasons (you need to trust your inputs!)

Attributes:
freq: numpy.ndarray

The array of mid-bin frequencies that the Fourier transform samples

power: numpy.ndarray

The array of cross spectra (complex numbers)

power_err: numpy.ndarray

The uncertainties of power. An approximation for each bin is given by power_err = power / sqrt(m), where m is the number of powers averaged in each bin (by frequency binning, or by averaging more than one spectrum). Note that for a single realization (m=1) the error is equal to the power.

df: float

The frequency resolution

m: int

The number of averaged cross-spectra amplitudes in each bin.

n: int

The number of data points/time bins in one segment of the light curves.

k: array of int

The rebinning scheme, if the object has been rebinned; otherwise it is set to 1.

nphots1: float

The total number of photons in light curve 1

nphots2: float

The total number of photons in light curve 2


Coherence

Convenience function to compute the coherence between two stingray.Lightcurve objects.

stingray.coherence(lc1, lc2)[source]

Estimate coherence function of two light curves. For details on the definition of the coherence, see Vaughan and Nowak, 1996 [2].

Parameters:
lc1: :class:`stingray.Lightcurve` object

The first light curve data for the channel of interest.

lc2: :class:`stingray.Lightcurve` object

The light curve data for reference band

Returns:
coh : np.ndarray

The array of coherence versus frequency

References


Powerspectrum

class stingray.Powerspectrum(data=None, norm='frac', gti=None, dt=None, lc=None, skip_checks=False)[source]
_initialize_empty()[source]

Set all attributes to None.

_initialize_from_any_input(data, dt=None, segment_size=None, gti=None, norm='frac', silent=False, use_common_mean=True, save_all=False)[source]

Initialize the class, trying to understand the input types.

The input arguments are the same as __init__(). Based on the type of data, this method will call the appropriate powerspectrum_from_XXXX function, and initialize self with the correct attributes.

_normalize_crossspectrum(unnorm_power)

Normalize the real part of the cross spectrum to Leahy, absolute rms^2, fractional rms^2 normalization, or not at all.

Parameters:
unnorm_power: numpy.ndarray

The unnormalized cross spectrum.

Returns:
power: numpy.nd.array

The normalized co-spectrum (real part of the cross spectrum). For ‘none’ normalization, imaginary part is returned as well.

_operation_with_other_obj(other, operation, operated_attrs=None, error_attrs=None, error_operation=None, inplace=False)

Helper method to codify an operation of one time series with another (e.g. add, subtract). Takes into account the GTIs, and returns a new StingrayTimeseries object.

Parameters:
other : StingrayTimeseries object

A second time series object

operation : function

An operation between the StingrayTimeseries object calling this method, and other, operating on all the specified array attributes.

Returns:
ts_new : StingrayTimeseries object

The new time series calculated in operation

Other Parameters:
operated_attrs : list of str or None

Array attributes to be operated on. Defaults to all array attributes not ending in _err. The other array attributes will be discarded from the time series to avoid inconsistencies.

error_attrs : list of str or None

Array attributes to be operated on with error_operation. Defaults to all array attributes ending with _err.

error_operation : function

The function used for error propagation. Defaults to the sum of squares.

_rms_error(powers)[source]

Compute the error on the fractional rms amplitude using error propagation. Note: this uses the actual measured powers, which is not strictly correct. We should be using the underlying power spectrum, but in the absence of an estimate of that, this will have to do.

\[r = \sqrt{P}\]
\[\begin{split}\delta r = \\frac{1}{2 * \sqrt{P}} \delta P\end{split}\]
Parameters:
powers: iterable

The list of powers used to compute the fractional rms amplitude.

Returns:
delta_rms: float

The error on the fractional rms amplitude.

add(other, operated_attrs=None, error_attrs=None, error_operation=<function sqsum>, inplace=False)

Add two StingrayObject instances.

Add the array values of two StingrayObject instances element by element, assuming the main array attributes of the instances match exactly.

All array attrs ending with _err are treated as error bars and propagated with the sum of squares.

GTIs are crossed, so that only common intervals are saved.

Parameters:
other : StingrayTimeseries object

A second time series object

Other Parameters:
operated_attrs : list of str or None

Array attributes to be operated on. Defaults to all array attributes not ending in _err. The other array attributes will be discarded from the time series to avoid inconsistencies.

error_attrs : list of str or None

Array attributes to be operated on with error_operation. Defaults to all array attributes ending with _err.

error_operation : function

Function to be called to propagate the errors

inplace : bool

If True, overwrite the current time series. Otherwise, return a new one.

apply_mask(mask: npt.ArrayLike, inplace: bool = False, filtered_attrs: list = None)

Apply a mask to all array attributes of the object

Parameters:
mask : array of bool

The mask. Has to be of the same length as self.time

Returns:
ts_new : StingrayObject object

The new object with the mask applied if inplace is False, otherwise the same object.

Other Parameters:
inplace : bool

If True, overwrite the current object. Otherwise, return a new one.

filtered_attrs : list of str or None

Array attributes to be filtered. Defaults to all array attributes if None. The other array attributes will be set to None. The main array attr is always included.

array_attrs() list[str]

List the names of the array attributes of the Stingray Object.

By array attributes, we mean the ones with the same size and shape as main_array_attr (e.g. time in EventList)

Returns:
attributes : list of str

List of array attributes.

classical_significances(threshold=1, trial_correction=False)[source]

Compute the classical significances for the powers in the power spectrum, assuming an underlying noise distribution that follows a chi-square distribution with 2M degrees of freedom, where M is the number of powers averaged in each bin.

Note that this function will only produce correct results when the following underlying assumptions are fulfilled:

  1. The power spectrum is Leahy-normalized

  2. There is no source of variability in the data other than the periodic signal to be determined with this method. This is important! If there are other sources of (aperiodic) variability in the data, this method will not produce correct results, but instead produce a large number of spurious false positive detections!

  3. There are no significant instrumental effects changing the statistical distribution of the powers (e.g. pile-up or dead time)

By default, the method produces (index, p-value) pairs for all powers in the power spectrum, where index is the numerical index of the power in question. If a threshold is set, then only the powers with p-values below that threshold are returned, along with their respective indices. If trial_correction is set to True, then the threshold will be corrected for the number of trials (frequencies) in the power spectrum before being used.

Parameters:
threshold : float, optional, default 1

The threshold to be used when reporting p-values of potentially significant powers. Must be between 0 and 1. Default is 1 (all p-values will be reported).

trial_correction : bool, optional, default False

A Boolean flag that sets whether the threshold will be corrected by the number of frequencies before being applied. This decreases the threshold (p-values need to be lower to count as significant). Default is False (report all powers), though for any application where threshold is set to something meaningful, this correction should also be applied!

Returns:
pvals : iterable

A list of (index, p-value) tuples for all powers that have p-values lower than the threshold specified in threshold.
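For Leahy-normalized powers with m = 1, the chi-squared survival function with 2 degrees of freedom reduces to exp(-P/2), which makes the single-trial p-value easy to compute by hand. A sketch of the selection logic under those assumptions (not stingray's implementation; the Bonferroni-style trial correction shown here is one simple choice):

```python
import math

def single_trial_pvalue(power):
    # chi2 survival function with 2 dof; valid for Leahy norm with m = 1
    return math.exp(-power / 2)

def significant_indices(powers, threshold=1.0, trial_correction=False):
    # Optionally tighten the threshold for the number of frequencies searched
    thresh = threshold / len(powers) if trial_correction else threshold
    return [(i, single_trial_pvalue(p))
            for i, p in enumerate(powers)
            if single_trial_pvalue(p) < thresh]

powers = [2.0, 40.0, 3.0]
detections = significant_indices(powers, threshold=0.01)  # only the strong power survives
```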

coherence()

Compute Coherence function of the cross spectrum.

Coherence is defined in Vaughan and Nowak, 1996 [3]. It is a Fourier frequency dependent measure of the linear correlation between time series measured simultaneously in two energy channels.

Returns:
coh : numpy.ndarray

Coherence function

References

compute_rms(min_freq, max_freq, poisson_noise_level=None)[source]

Compute the fractional rms amplitude in the power spectrum between two frequencies.

Parameters:
min_freq: float

The lower frequency bound for the calculation.

max_freq: float

The upper frequency bound for the calculation.

Returns:
rms: float

The fractional rms amplitude contained between min_freq and max_freq.

rms_err: float

The error on the fractional rms amplitude.

Other Parameters:
poisson_noise_level : float, default is None

This is the Poisson noise level of the PDS, in the same normalization as the PDS. If poisson_noise_level is None, the Poisson noise is calculated in the ideal case, e.g. 2./<countrate> for fractional rms normalisation. Dead time and other instrumental effects can alter it. The user can fit the Poisson noise level outside this function, using the same normalisation as the PDS, and it will be subtracted from the powers here.
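In fractional-rms normalization, the squared rms is approximately the integral of the noise-subtracted power over the chosen band. A rough sketch of that idea (illustrative only; the actual method also propagates the uncertainty on the rms):

```python
import numpy as np

def frac_rms(freq, power, df, min_freq, max_freq, poisson_noise_level):
    # Integrate (power - noise) * df over the selected frequency band
    band = (freq >= min_freq) & (freq <= max_freq)
    variance = np.sum((power[band] - poisson_noise_level) * df)
    return np.sqrt(variance)

freq = np.array([1.0, 2.0, 3.0, 4.0])
power = np.array([2.5, 2.9, 2.1, 2.5])  # frac-normalized powers
rms = frac_rms(freq, power, df=1.0, min_freq=1.0, max_freq=4.0,
               poisson_noise_level=2.0)
```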

data_attributes() list[str]

Clean up the list of attributes, only giving out those pointing to data.

List all the attributes that point directly to valid data. This method goes through all the attributes of the class, eliminating methods, properties, and attributes that are complicated to serialize such as other StingrayObject, or arrays of objects.

This function does not distinguish between array-like data and scalar data.

Returns:
data_attributes : list of str

List of attributes pointing to data that are not methods, properties, or other StingrayObject instances.

deadtime_correct(dead_time, rate, background_rate=0, limit_k=200, n_approx=None, paralyzable=False)

Correct the power spectrum for dead time effects.

This correction is based on the formula given in Zhang et al. 2015, assuming a constant dead time for all events. For more advanced dead time corrections, see the FAD method from stingray.deadtime.fad

Parameters:
dead_time: float

The dead time of the detector.

rate : float

Detected source count rate

Returns:
spectrum: Crossspectrum or derivative.

The dead-time corrected spectrum.

Other Parameters:
background_rate : float, default 0

Detected background count rate. This is important to estimate when deadtime is given by the combination of the source counts and background counts (e.g. in an imaging X-ray detector).

paralyzable: bool, default False

If True, the dead time correction is done assuming a paralyzable dead time. If False, the correction is done assuming a non-paralyzable (more common) dead time.

limit_k : int, default 200

Limit to this value the number of terms in the inner loops of calculations. Check the plots returned by the check_B and check_A functions to test that this number is adequate.

n_approx : int, default None

Number of bins to calculate the model power spectrum. If None, it will use the size of the input frequency array. Relatively simple models (e.g., low count rates compared to dead time) can use a smaller number of bins to speed up the calculation, and the final power values will be interpolated.
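The distinction between the two regimes can be summarized by the detected count rate as a function of the incident rate and the dead time per event. These are the standard textbook formulas, shown for intuition only; the Zhang et al. 2015 correction applied by this method to the power spectrum is considerably more involved:

```python
import math

def detected_rate(incident_rate, dead_time, paralyzable=False):
    """Detected count rate for a detector with constant per-event dead time."""
    if paralyzable:
        # Paralyzable: any incoming event restarts the dead time window
        return incident_rate * math.exp(-incident_rate * dead_time)
    # Non-paralyzable (more common): dead time follows only *detected* events
    return incident_rate / (1 + incident_rate * dead_time)

# 1000 ct/s incident, 2.5 ms dead time
r_np = detected_rate(1000.0, 2.5e-3)
r_p = detected_rate(1000.0, 2.5e-3, paralyzable=True)
```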

dict() dict

Return a dictionary representation of the object.

classmethod from_astropy_table(ts: Table) Tso

Create a StingrayObject from data in an Astropy Table.

The table MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the entries in the table's meta dictionary will form the new meta attributes of the object.

static from_events(events, dt, segment_size=None, gti=None, norm='frac', silent=False, use_common_mean=True, save_all=False)[source]

Calculate an average power spectrum from an event list.

Parameters:
events : stingray.EventList

Event list to be analyzed.

dt : float

The time resolution of the intermediate light curves (sets the Nyquist frequency).

Other Parameters:
segment_size : float

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedPowerspectrum.

gti : ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``

Additional, optional Good Time intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that happens, assign the desired GTIs to the input object.

norm : str, default "frac"

The normalization of the periodogram. abs is absolute rms, frac is fractional rms, leahy is Leahy+83 normalization, and none is the unnormalized periodogram.

use_common_mean : bool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). By default, we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

silent : bool, default False

Silence the progress bars.

save_all : bool, default False

Save all intermediate PDSs used for the final average.

static from_lc_iterable(iter_lc, dt, segment_size=None, gti=None, norm='frac', silent=False, use_common_mean=True, save_all=False)[source]

Calculate the average power spectrum of an iterable collection of light curves.

Parameters:
iter_lc : iterable of stingray.Lightcurve objects or np.array

Light curves. If arrays, use them as counts.

dt : float

The time resolution of the light curves (sets the Nyquist frequency)

Other Parameters:
segment_size : float

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedPowerspectrum.

gti : ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``

Additional, optional Good Time intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that happens, assign the desired GTIs to the input object.

norm : str, default "frac"

The normalization of the periodogram. abs is absolute rms, frac is fractional rms, leahy is Leahy+83 normalization, and none is the unnormalized periodogram.

use_common_mean : bool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). By default, we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

silent : bool, default False

Silence the progress bars.

save_all : bool, default False

Save all intermediate PDSs used for the final average.

static from_lightcurve(lc, segment_size=None, gti=None, norm='frac', silent=False, use_common_mean=True, save_all=False)[source]

Calculate a power spectrum from a light curve.

Parameters:
lc : stingray.Lightcurve

Light curve to be analyzed.

Other Parameters:
segment_size : float

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedPowerspectrum.

gti : ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``

Additional, optional Good Time intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that happens, assign the desired GTIs to the input object.

norm : str, default "frac"

The normalization of the periodogram. abs is absolute rms, frac is fractional rms, leahy is Leahy+83 normalization, and none is the unnormalized periodogram.

use_common_mean : bool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). By default, we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

silent : bool, default False

Silence the progress bars.

save_all : bool, default False

Save all intermediate PDSs used for the final average.

classmethod from_pandas(ts: DataFrame) Tso

Create a StingrayObject from data in a pandas DataFrame.

The dataframe MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.

Since pandas does not support n-D data, multi-dimensional arrays can be specified as <colname>_dimN_M_K etc.

See documentation of make_1d_arrays_into_nd for details.

static from_stingray_timeseries(ts, flux_attr, error_flux_attr=None, segment_size=None, norm='none', silent=False, use_common_mean=True, gti=None, save_all=False)[source]

Calculate AveragedPowerspectrum from a time series.

Parameters:
ts : stingray.StingrayTimeseries

Input time series.

flux_attr : str

What attribute of the time series will be used.

Other Parameters:
error_flux_attr : str

What attribute of the time series will be used as error bar.

segment_size : float

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum.

norm : str, default "none"

The normalization of the periodogram. "abs" is absolute rms, "frac" is fractional rms, "leahy" is Leahy+83 normalization, and "none" is the unnormalized periodogram.

use_common_mean : bool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

silent : bool, default False

Silence the progress bars.

gti : [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you're getting errors regarding your GTIs, don't use this and only give GTIs to the input objects before making the cross spectrum.

save_all : bool, default False

Save all intermediate PDSs used for the final average.

static from_time_array(times, dt, segment_size=None, gti=None, norm='frac', silent=False, use_common_mean=True)[source]

Calculate an average power spectrum from an array of event times.

Parameters:
times : np.array

Event arrival times.

dt : float

The time resolution of the intermediate light curves (sets the Nyquist frequency).

Other Parameters:
segment_size : float

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedPowerspectrum.

gti : ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``

Additional, optional Good Time intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that happens, assign the desired GTIs to the input object.

norm : str, default "frac"

The normalization of the periodogram. abs is absolute rms, frac is fractional rms, leahy is Leahy+83 normalization, and none is the unnormalized periodogram.

use_common_mean : bool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). By default, we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

silent : bool, default False

Silence the progress bars.

classmethod from_xarray(ts: Dataset) Tso

Create a StingrayObject from data in an xarray Dataset.

The dataset MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.

get_meta_dict() dict

Return a dictionary with all non-None meta attributes of the object.

initial_checks(data1=None, data2=None, norm='frac', gti=None, lc1=None, lc2=None, segment_size=None, power_type='real', dt=None, fullspec=False)

Run initial checks on the input.

Returns True if checks are passed, False if they are not.

Raises various errors for different bad inputs

Examples

>>> times = np.arange(0, 10)
>>> counts = np.random.poisson(100, 10)
>>> lc1 = Lightcurve(times, counts, skip_checks=True)
>>> lc2 = Lightcurve(times, counts, skip_checks=True)
>>> ev1 = EventList(times)
>>> ev2 = EventList(times)
>>> c = Crossspectrum()
>>> ac = AveragedCrossspectrum()

If norm is not a string, raise a TypeError

>>> Crossspectrum.initial_checks(c, norm=1)
Traceback (most recent call last):
...
TypeError: norm must be a string...

If norm is not one of the valid norms, raise a ValueError

>>> Crossspectrum.initial_checks(c, norm="blabla")
Traceback (most recent call last):
...
ValueError: norm must be 'frac'...

If power_type is not one of the valid power types, raise a ValueError

>>> Crossspectrum.initial_checks(c, power_type="blabla")
Traceback (most recent call last):
...
ValueError: power_type not recognized!

If the user passes only one light curve, raise a ValueError

>>> Crossspectrum.initial_checks(c, data1=lc1, data2=None)
Traceback (most recent call last):
...
ValueError: You can't do a cross spectrum...

If the user passes an event list without dt, raise a ValueError

>>> Crossspectrum.initial_checks(c, data1=ev1, data2=ev2, dt=None)
Traceback (most recent call last):
...
ValueError: If using event lists, please specify...

internal_array_attrs() list[str]

List the names of the internal array attributes of the Stingray Object.

These are array attributes that can be set by properties, and are generally indicated by an underscore followed by the name of the property that links to it (E.g. _counts in Lightcurve). By array attributes, we mean the ones with the same size and shape as main_array_attr (e.g. time in EventList)

Returns:
attributes : list of str

List of internal array attributes.

meta_attrs() list[str]

List the names of the meta attributes of the Stingray Object.

By meta attributes, we mean the ones with a different size and shape than main_array_attr (e.g. mjdref in EventList)

Returns:
attributes : list of str

List of meta attributes.

modulation_upper_limit(fmin=None, fmax=None, c=0.95)[source]

Upper limit on a sinusoidal modulation.

To understand the meaning of this amplitude: if the modulation is described by:

\[p = \overline{p} \, (1 + a \sin(x))\]

this function returns a.

If it is instead a sum of sinusoidal harmonics,

\[p = \overline{p} \left(1 + \sum_l a_l \sin(l x)\right)\]

a is equivalent to \(\sqrt{\sum_l a_l^2}\).

See stingray.stats.power_upper_limit, stingray.stats.amplitude_upper_limit for more information.

The formula used to calculate the upper limit assumes the Leahy normalization. If the periodogram is in another normalization, we will internally convert it to Leahy before calculating the upper limit.

Parameters:
fmin: float

The minimum frequency to search (defaults to the first nonzero bin)

fmax: float

The maximum frequency to search (defaults to the Nyquist frequency)

Returns:
a: float

The modulation amplitude that could produce P>pmeas with 1 - c probability.

Other Parameters:
c: float

The confidence value for the upper limit (e.g. 0.95 = 95%)

Examples

>>> pds = Powerspectrum()
>>> pds.norm = "leahy"
>>> pds.freq = np.arange(0., 5.)
>>> # Note: this pds has 40 as maximum value between 2 and 5 Hz
>>> pds.power = np.array([100000, 1, 1, 40, 1])
>>> pds.m = 1
>>> pds.nphots = 30000
>>> assert np.isclose(
...     pds.modulation_upper_limit(fmin=2, fmax=5, c=0.99),
...     0.10164,
...     atol=0.0001)
phase_lag()

Calculate the Fourier phase lag of the cross spectrum.

This is defined as the argument of the complex cross spectrum, and gives the delay at all frequencies, in cycles, of one input light curve with respect to the other.

plot(labels=None, axis=None, title=None, marker='-', save=False, filename=None, ax=None)

Plot the amplitude of the cross spectrum vs. the frequency using matplotlib.

Parameters:
labels : iterable, default None

A list of tuples with xlabel and ylabel as strings.

axis : list, tuple, string, default None

Parameter to set axis properties of the matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for the ``matplotlib.pyplot.axis()`` method.

title : str, default None

The title of the plot.

marker : str, default '-'

Line style and color of the plot. Line styles and colors are combined in a single format string, as in 'bo' for blue circles. See matplotlib.pyplot.plot for more options.

save : boolean, optional, default False

If True, save the figure with specified filename.

filename : str

File name of the image to save. Depends on the boolean save.

ax : matplotlib.Axes object

An axes object to fill with the cross correlation plot.

pretty_print(func_to_apply=None, attrs_to_apply=[], attrs_to_discard=[]) str

Return a pretty-printed string representation of the object.

This is useful for debugging, and for interactive use.

Other Parameters:
func_to_apply : function

A function that modifies the attributes listed in attrs_to_apply. It must return the modified attributes and a label to be printed. If None, no function is applied.

attrs_to_apply : list of str

Attributes to be modified by func_to_apply.

attrs_to_discard : list of str

Attributes to be discarded from the output.

classmethod read(filename: str, fmt: str = None) Tso

Generic reader for :class:`StingrayObject`

Currently supported formats are

  • pickle (not recommended for long-term storage)

  • any other formats compatible with the writers in astropy.table.Table (ascii.ecsv, hdf5, etc.)

Files that need the astropy.table.Table interface MUST contain at least a column named like the main_array_attr. The default ascii format is enhanced CSV (ECSV). Data formats supporting the serialization of metadata (such as ECSV and HDF5) can contain all attributes such as mission, gti, etc with no significant loss of information. Other file formats might lose part of the metadata, so must be used with care.

.. note::

    Complex values can be dealt with out-of-the-box in some formats
    like HDF5 or FITS, but not in others (e.g. all ASCII formats).
    With these formats, and in any case when fmt is ``None``, complex
    values should be stored as two columns of real numbers, whose names
    are of the format <variablename>.real and <variablename>.imag

Parameters:
filename: str

Path and file name for the file to be read.

fmt: str

Available options are ‘pickle’, ‘hea’, and any Table-supported format such as ‘hdf5’, ‘ascii.ecsv’, etc.

Returns:
obj: StingrayObject object

The object reconstructed from file
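The complex-column convention mentioned in the note above can be sketched with plain NumPy: a complex array is split into two real columns whose names follow the <variablename>.real / <variablename>.imag pattern, and rejoined on read (the column names here are illustrative):

```python
import numpy as np

power = np.array([1 + 2j, 3 - 1j])

# On write: split the complex array into two real-valued, serializable columns
columns = {"power.real": power.real, "power.imag": power.imag}

# On read: recombine the two real columns into the original complex array
restored = columns["power.real"] + 1j * columns["power.imag"]
```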

rebin(df=None, f=None, method='mean')[source]

Rebin the power spectrum.

Parameters:
df: float

The new frequency resolution.

Returns:
bin_cs : Powerspectrum object

The newly binned power spectrum.

Other Parameters:
f: float

The rebin factor. If specified, it substitutes df with f*self.df, so f>1 is recommended.

rebin_log(f=0.01)

Logarithmic rebin of the periodogram. The new frequency depends on the previous frequency modified by a factor f:

\[d\nu_j = d\nu_{j-1} (1+f)\]
Parameters:
f: float, optional, default ``0.01``

parameter that steers the frequency resolution

Returns:
new_spec : Crossspectrum (or one of its subclasses) object

The newly binned cross spectrum or power spectrum. Note: this object will be of the same type as the object that called this method. For example, if this method is called on an AveragedPowerspectrum, it will return an object of class AveragedPowerspectrum.
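The geometric growth of the bin width, \(d\nu_j = d\nu_{j-1}(1+f)\), means the new bin edges can be generated iteratively. A minimal sketch of that edge construction (illustrative only, not stingray's implementation, which also averages powers and errors into the new bins):

```python
def log_rebin_edges(f_min, f_max, df0, f=0.01):
    # Each new bin is wider than the previous one by a factor (1 + f)
    edges, df = [f_min], df0
    while edges[-1] + df < f_max:
        edges.append(edges[-1] + df)
        df *= 1 + f
    edges.append(f_max)  # close the last (possibly partial) bin
    return edges

edges = log_rebin_edges(0.1, 10.0, 0.1, f=0.3)
```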

sub(other, operated_attrs=None, error_attrs=None, error_operation=<function sqsum>, inplace=False)

Subtract all the array attrs of two StingrayObject instances element by element, assuming the main array attributes of the instances match exactly.

All array attrs ending with _err are treated as error bars and propagated with the sum of squares.

GTIs are crossed, so that only common intervals are saved.

Parameters:
other : StingrayTimeseries object

A second time series object

Other Parameters:
operated_attrs : list of str or None

Array attributes to be operated on. Defaults to all array attributes not ending in _err. The other array attributes will be discarded from the time series to avoid inconsistencies.

error_attrs : list of str or None

Array attributes to be operated on with error_operation. Defaults to all array attributes ending with _err.

error_operation : function

Function to be called to propagate the errors.

inplace : bool

If True, overwrite the current time series. Otherwise, return a new one.

time_lag()

Calculate the Fourier time lag of the cross spectrum. The time lag is calculated by taking the phase lag \(\phi\) and

\[\tau = \frac{\phi}{2 \pi \nu}\]

where \(\nu\) is the center of the frequency bins.
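The conversion from phase lag to time lag can be sketched directly (bare NumPy arrays standing in for the cross spectrum's attributes):

```python
import numpy as np

freq = np.array([0.5, 1.0, 2.0])             # bin-center frequencies, Hz
phase_lag = np.array([np.pi, np.pi, np.pi])  # phase lags, radians

# tau = phi / (2 pi nu): the same phase lag means a smaller
# time delay at higher frequencies
time_lag = phase_lag / (2 * np.pi * freq)
```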

to_astropy_table(no_longdouble=False) Table

Create an Astropy Table from a StingrayObject

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the meta dictionary.

Other Parameters:
no_longdouble : bool

If True, reduce the precision of longdouble arrays to double precision. This needs to be done in some cases, e.g. when the table is to be saved in an architecture not supporting extended precision (e.g. ARM), but can also be useful when an extended precision is not needed.

to_norm(norm, inplace=False)

Convert Cross spectrum to new normalization.

Parameters:
norm : str

The new normalization of the spectrum

Returns:
new_spec : object, same class as input

The new, normalized, spectrum.

Other Parameters:
inplace: bool, default False

If True, change the current instance. Otherwise, return a new one

to_pandas() DataFrame

Create a pandas DataFrame from a StingrayObject.

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the ds.attrs dictionary.

Since pandas does not support n-D data, multi-dimensional arrays are converted into columns before the conversion, with names <colname>_dimN_M_K etc.

See documentation of make_nd_into_arrays for details.

to_xarray() Dataset

Create an xarray Dataset from a StingrayObject.

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the ds.attrs dictionary.

type = 'powerspectrum'

Make a Powerspectrum (also called periodogram) from a (binned) light curve. Periodograms can be normalized by either Leahy normalization, fractional rms normalization, absolute rms normalization, or not at all.

You can also make an empty Powerspectrum object to populate with your own fourier-transformed data (this can sometimes be useful when making binned power spectra).

Parameters:
data: :class:`stingray.Lightcurve` object, optional, default ``None``

The light curve data to be Fourier-transformed.

norm: {“leahy” | “frac” | “abs” | “none” }, optional, default “frac”

The normalization of the power spectrum to be used. Options are "leahy", "frac", "abs" and "none", default is "frac".

Other Parameters:
gti: 2-d float array

[[gti0_0, gti0_1], [gti1_0, gti1_1], ...] – Good Time intervals. This choice overrides the GTIs in the single light curves. Use with care, especially if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input object before making the power spectrum.

skip_checks: bool

Skip initial checks, for speed or other reasons (you need to trust your inputs!).

Attributes:
norm: {“leahy” | “frac” | “abs” | “none” }

The normalization of the power spectrum.

freq: numpy.ndarray

The array of mid-bin frequencies that the Fourier transform samples.

power: numpy.ndarray

The array of normalized squared absolute values of Fourier amplitudes.

power_err: numpy.ndarray

The uncertainties of power. An approximation for each bin is given by power_err = power / sqrt(m), where m is the number of powers averaged in each bin (by frequency binning, or by averaging power spectra of segments of a light curve). Note that for a single realization (m=1) the error is equal to the power.

unnorm_power: numpy.ndarray

The array of unnormalized powers

unnorm_power_err: numpy.ndarray

The uncertainties of unnorm_power.

df: float

The frequency resolution.

m: int

The number of averaged powers in each bin.

n: int

The number of data points in the light curve.

nphots: float

The total number of photons in the light curve.

write(filename: str, fmt: str | None = None) None

Generic writer for :class:`StingrayObject`

Currently supported formats are

  • pickle (not recommended for long-term storage)

  • any other formats compatible with the writers in astropy.table.Table (ascii.ecsv, hdf5, etc.)

.. note::

    Complex values can be dealt with out-of-the-box in some formats
    like HDF5 or FITS, but not in others (e.g. all ASCII formats).
    With these formats, and in any case when fmt is ``None``, complex
    values will be stored as two columns of real numbers, whose names
    are of the format <variablename>.real and <variablename>.imag

Parameters:
filename: str

Name and path of the file to save the object list to.

fmt: str

The file format to store the data in. Available options are pickle, hdf5, ascii, fits


AveragedCrossspectrum

class stingray.AveragedCrossspectrum(data1=None, data2=None, segment_size=None, norm='frac', gti=None, power_type='all', silent=False, lc1=None, lc2=None, dt=None, fullspec=False, save_all=False, use_common_mean=True, skip_checks=False)[source]
add(other, operated_attrs=None, error_attrs=None, error_operation=<function sqsum>, inplace=False)

Add two StingrayObject instances.

Add the array values of two StingrayObject instances element by element, assuming the main array attributes of the instances match exactly.

All array attrs ending with _err are treated as error bars and propagated with the sum of squares.

GTIs are crossed, so that only common intervals are saved.

Parameters:
other : StingrayTimeseries object

A second time series object

Other Parameters:
operated_attrs : list of str or None

Array attributes to be operated on. Defaults to all array attributes not ending in _err. The other array attributes will be discarded from the time series to avoid inconsistencies.

error_attrs : list of str or None

Array attributes to be operated on with error_operation. Defaults to all array attributes ending with _err.

error_operation : function

Function to be called to propagate the errors.

inplace : bool

If True, overwrite the current time series. Otherwise, return a new one.

apply_mask(mask: npt.ArrayLike, inplace: bool = False, filtered_attrs: list = None)

Apply a mask to all array attributes of the object

Parameters:
mask : array of bool

The mask. Has to be of the same length as self.time

Returns:
ts_new : StingrayObject object

The new object with the mask applied if inplace is False, otherwise the same object.

Other Parameters:
inplace : bool

If True, overwrite the current object. Otherwise, return a new one.

filtered_attrs : list of str or None

Array attributes to be filtered. Defaults to all array attributes if None. The other array attributes will be set to None. The main array attr is always included.

array_attrs() list[str]

List the names of the array attributes of the Stingray Object.

By array attributes, we mean the ones with the same size and shape as main_array_attr (e.g. time in EventList)

Returns:
attributes : list of str

List of array attributes.

classical_significances(threshold=1, trial_correction=False)

Compute the classical significances for the powers in the power spectrum, assuming an underlying noise distribution that follows a chi-square distribution with 2M degrees of freedom, where M is the number of powers averaged in each bin.

Note that this function will only produce correct results when the following underlying assumptions are fulfilled:

  1. The power spectrum is Leahy-normalized

  2. There is no source of variability in the data other than the periodic signal to be determined with this method. This is important! If there are other sources of (aperiodic) variability in the data, this method will not produce correct results, but instead produce a large number of spurious false positive detections!

  3. There are no significant instrumental effects changing the statistical distribution of the powers (e.g. pile-up or dead time)

By default, the method produces (index, p-value) pairs for all powers in the power spectrum, where index is the numerical index of the power in question. If a threshold is set, then only the powers with p-values below that threshold are returned, along with their respective indices. If trial_correction is set to True, then the threshold will be corrected for the number of trials (frequencies) in the power spectrum before being used.

Parameters:
threshold : float, optional, default 1

The threshold to be used when reporting p-values of potentially significant powers. Must be between 0 and 1. Default is 1 (all p-values will be reported).

trial_correction : bool, optional, default False

A Boolean flag that sets whether the threshold will be corrected by the number of frequencies before being applied. This decreases the threshold (p-values need to be lower to count as significant). Default is False (report all powers), though for any application where threshold is set to something meaningful, this correction should also be applied!

Returns:
pvals : iterable

A list of (index, p-value) tuples for all powers that have p-values lower than the threshold specified in threshold.

coherence()[source]

Averaged Coherence function.

Coherence is defined in Vaughan and Nowak, 1996 [4]. It is a Fourier frequency dependent measure of the linear correlation between time series measured simultaneously in two energy channels.

Compute the averaged coherence function of the cross spectrum by computing the coherence function of each segment and averaging them. The return value is a tuple whose first element is the coherence function and whose second element is the corresponding uncertainty.

Note: the uncertainty in the coherence function is strictly valid for Gaussian statistics only.

Returns:
(coh, uncertainty) : tuple of np.ndarray

Tuple comprising the coherence function and uncertainty.

References

data_attributes() list[str]

Clean up the list of attributes, only giving out those pointing to data.

List all the attributes that point directly to valid data. This method goes through all the attributes of the class, eliminating methods, properties, and attributes that are complicated to serialize, such as other StingrayObject instances or arrays of objects.

This function does not distinguish between array-like data and scalar data.

Returns:
data_attributes : list of str

List of attributes pointing to data that are not methods, properties, or other StingrayObject instances.

deadtime_correct(dead_time, rate, background_rate=0, limit_k=200, n_approx=None, paralyzable=False)

Correct the power spectrum for dead time effects.

This correction is based on the formula given in Zhang et al. 2015, assuming a constant dead time for all events. For more advanced dead time corrections, see the FAD method from stingray.deadtime.fad.

Parameters:
dead_time: float

The dead time of the detector.

rate : float

Detected source count rate

Returns:
spectrum: Crossspectrum or derivative.

The dead-time corrected spectrum.

Other Parameters:
background_rate : float, default 0

Detected background count rate. This is important to estimate when dead time is given by the combination of the source counts and background counts (e.g. in an imaging X-ray detector).

paralyzable : bool, default False

If True, the dead time correction is done assuming a paralyzable dead time. If False, the correction is done assuming a non-paralyzable (more common) dead time.

limit_k : int, default 200

Limit to this value the number of terms in the inner loops of calculations. Check the plots returned by the check_B and check_A functions to test that this number is adequate.

n_approx : int, default None

Number of bins to calculate the model power spectrum. If None, it will use the size of the input frequency array. Relatively simple models (e.g., low count rates compared to dead time) can use a smaller number of bins to speed up the calculation, and the final power values will be interpolated.

dict() dict

Return a dictionary representation of the object.

classmethod from_astropy_table(ts: Table) Tso

Create a StingrayObject from data in an Astropy Table.

The table MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the entries of the table's meta dictionary will form the new meta attributes of the object.

static from_events(events1, events2, dt, segment_size=None, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True, gti=None)

Calculate AveragedCrossspectrum from two event lists

Parameters:
events1 : stingray.EventList

Events from channel 1

events2 : stingray.EventList

Events from channel 2

dt : float

The time resolution of the intermediate light curves (sets the Nyquist frequency)

Other Parameters:
segment_size : float

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum

norm : str, default "none"

The normalization of the periodogram. "abs" is absolute rms, "frac" is fractional rms, "leahy" is Leahy+83 normalization, and "none" is the unnormalized periodogram

use_common_mean : bool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

fullspec : bool, default False

Return the full periodogram, including negative frequencies

silent : bool, default False

Silence the progress bars

power_type : str, default 'all'

If 'all', give complex powers. If 'abs', the absolute value; if 'real', the real part

gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.

static from_lc_iterable(iter_lc1, iter_lc2, dt, segment_size, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True, gti=None)

Calculate AveragedCrossspectrum from two light curves

Parameters:
iter_lc1 : iterable of stingray.Lightcurve objects or np.array

Light curves from channel 1. If arrays, use them as counts

iter_lc2 : iterable of stingray.Lightcurve objects or np.array

Light curves from channel 2. If arrays, use them as counts

dt : float

The time resolution of the light curves (sets the Nyquist frequency)

Other Parameters:
segment_size : float

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum

norm : str, default "none"

The normalization of the periodogram. "abs" is absolute rms, "frac" is fractional rms, "leahy" is Leahy+83 normalization, and "none" is the unnormalized periodogram

use_common_mean : bool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

fullspec : bool, default False

Return the full periodogram, including negative frequencies

silent : bool, default False

Silence the progress bars

power_type : str, default 'all'

If 'all', give complex powers. If 'abs', the absolute value; if 'real', the real part

gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.

save_all : bool, default False

If True, save the cross spectrum of each segment in the cs_all attribute of the output Crossspectrum object.

static from_lightcurve(lc1, lc2, segment_size=None, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True, gti=None)

Calculate AveragedCrossspectrum from two light curves

Parameters:
lc1 : stingray.Lightcurve

Light curve from channel 1

lc2 : stingray.Lightcurve

Light curve from channel 2

Other Parameters:
segment_size : float

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum

norm : str, default "none"

The normalization of the periodogram. "abs" is absolute rms, "frac" is fractional rms, "leahy" is Leahy+83 normalization, and "none" is the unnormalized periodogram

use_common_mean : bool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

fullspec : bool, default False

Return the full periodogram, including negative frequencies

silent : bool, default False

Silence the progress bars

power_type : str, default 'all'

If 'all', give complex powers. If 'abs', the absolute value; if 'real', the real part

gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.

classmethod from_pandas(ts: DataFrame) Tso

Create a StingrayObject from data in a pandas DataFrame.

The dataframe MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the entries of the dataframe's attrs dictionary will form the new meta attributes of the object.

Since pandas does not support n-D data, multi-dimensional arrays can be specified as <colname>_dimN_M_K etc.

See documentation of make_1d_arrays_into_nd for details.

static from_stingray_timeseries(ts1, ts2, flux_attr, error_flux_attr=None, segment_size=None, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True, gti=None)

Calculate AveragedCrossspectrum from two time series

Parameters:
ts1 : stingray.StingrayTimeseries

Time series from channel 1

ts2 : stingray.StingrayTimeseries

Time series from channel 2

flux_attr : str

What attribute of the time series will be used.

Other Parameters:
error_flux_attr : str

What attribute of the time series will be used as error bar.

segment_size : float

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum

norm : str, default "none"

The normalization of the periodogram. "abs" is absolute rms, "frac" is fractional rms, "leahy" is Leahy+83 normalization, and "none" is the unnormalized periodogram

use_common_mean : bool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

fullspec : bool, default False

Return the full periodogram, including negative frequencies

silent : bool, default False

Silence the progress bars

power_type : str, default 'all'

If 'all', give complex powers. If 'abs', the absolute value; if 'real', the real part

gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.

static from_time_array(times1, times2, dt, segment_size=None, gti=None, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True)

Calculate AveragedCrossspectrum from two arrays of event times.

Parameters:
times1 : np.array

Event arrival times of channel 1

times2 : np.array

Event arrival times of channel 2

dt : float

The time resolution of the intermediate light curves (sets the Nyquist frequency)

Other Parameters:
segment_size : float

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum.

gti : [[gti0, gti1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.

norm : str, default "none"

The normalization of the periodogram. "abs" is absolute rms, "frac" is fractional rms, "leahy" is Leahy+83 normalization, and "none" is the unnormalized periodogram

use_common_mean : bool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

fullspec : bool, default False

Return the full periodogram, including negative frequencies

silent : bool, default False

Silence the progress bars

power_type : str, default 'all'

If 'all', give complex powers. If 'abs', the absolute value; if 'real', the real part

If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’, the real part

classmethod from_xarray(ts: Dataset) Tso

Create a StingrayObject from data in an xarray Dataset.

The dataset MUST contain at least a column named like the main_array_attr. The rest of the data variables will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.

get_meta_dict() dict

Give a dictionary with all non-None meta attrs of the object.

initial_checks(data1, segment_size=None, **kwargs)[source]

Examples

>>> times = np.arange(0, 10)
>>> ev1 = EventList(times)
>>> ev2 = EventList(times)
>>> ac = AveragedCrossspectrum()

If AveragedCrossspectrum, you need segment_size:

>>> AveragedCrossspectrum.initial_checks(ac, data1=ev1, data2=ev2, dt=1)
Traceback (most recent call last):
...
ValueError: segment_size must be specified...

And it needs to be finite!

>>> AveragedCrossspectrum.initial_checks(ac, data1=ev1, data2=ev2, dt=1., segment_size=np.nan)
Traceback (most recent call last):
...
ValueError: segment_size must be finite!

internal_array_attrs() list[str]

List the names of the internal array attributes of the Stingray Object.

These are array attributes that can be set by properties, and are generally indicated by an underscore followed by the name of the property that links to it (e.g. _counts in Lightcurve). By array attributes, we mean the ones with the same size and shape as main_array_attr (e.g. time in EventList).

Returns:
attributes : list of str

List of internal array attributes.

meta_attrs() list[str]

List the names of the meta attributes of the Stingray Object.

By meta attributes, we mean the ones with a different size and shape than main_array_attr (e.g. mjdref in EventList).

Returns:
attributes : list of str

List of meta attributes.

phase_lag()[source]

Return the Fourier phase lag of the cross spectrum.

plot(labels=None, axis=None, title=None, marker='-', save=False, filename=None, ax=None)

Plot the amplitude of the cross spectrum vs. the frequency using matplotlib.

Parameters:
labels : iterable, default None

A list or tuple of strings with the xlabel and the ylabel.

axis : list, tuple, string, default None

Parameter to set axis properties of the matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for the matplotlib.pyplot.axis() method.

title : str, default None

The title of the plot.

marker : str, default '-'

Line style and color of the plot. Line styles and colors are combined in a single format string, as in 'bo' for blue circles. See matplotlib.pyplot.plot for more options.

save : boolean, optional, default False

If True, save the figure with specified filename.

filename : str

File name of the image to save. Depends on the boolean save.

ax : matplotlib.Axes object

An axes object to fill with the cross correlation plot.

pretty_print(func_to_apply=None, attrs_to_apply=[], attrs_to_discard=[]) str

Return a pretty-printed string representation of the object.

This is useful for debugging, and for interactive use.

Other Parameters:
func_to_apply : function

A function that modifies the attributes listed in attrs_to_apply. It must return the modified attributes and a label to be printed. If None, no function is applied.

attrs_to_apply : list of str

Attributes to be modified by func_to_apply.

attrs_to_discard : list of str

Attributes to be discarded from the output.

Attributes to be discarded from the output.

classmethod read(filename: str, fmt: str = None) Tso

Generic reader for :class:`StingrayObject`.

Currently supported formats are

  • pickle (not recommended for long-term storage)

  • any other formats compatible with the writers in astropy.table.Table (ascii.ecsv, hdf5, etc.)

Files that need the astropy.table.Table interface MUST contain at least a column named like the main_array_attr. The default ascii format is enhanced CSV (ECSV). Data formats supporting the serialization of metadata (such as ECSV and HDF5) can contain all attributes such as mission, gti, etc with no significant loss of information. Other file formats might lose part of the metadata, so must be used with care.

.. note::

    Complex values can be dealt with out-of-the-box in some formats like HDF5 or FITS, not in others (e.g. all ASCII formats). With these formats, and in any case when fmt is ``None``, complex values should be stored as two columns of real numbers, whose names are of the format <variablename>.real and <variablename>.imag.
Parameters:
filename: str

Path and file name for the file to be read.

fmt: str

Available options are ‘pickle’, ‘hea’, and any Table-supported format such as ‘hdf5’, ‘ascii.ecsv’, etc.

Returns:
obj: StingrayObject object

The object reconstructed from file

rebin(df=None, f=None, method='mean')

Rebin the cross spectrum to a new frequency resolution df.

Parameters:
df: float

The new frequency resolution

Returns:
bin_cs : Crossspectrum (or one of its subclasses) object

The newly binned cross spectrum or power spectrum. Note: this object will be of the same type as the object that called this method. For example, if this method is called from AveragedPowerspectrum, it will return an object of class AveragedPowerspectrum, too.

Other Parameters:
f: float

the rebin factor. If specified, it substitutes df with f*self.df

rebin_log(f=0.01)

Logarithmic rebin of the periodogram. The new frequency depends on the previous frequency modified by a factor f:

\[d\nu_j = d\nu_{j-1} (1+f)\]
Parameters:
f: float, optional, default ``0.01``

parameter that steers the frequency resolution

Returns:
new_spec : Crossspectrum (or one of its subclasses) object

The newly binned cross spectrum or power spectrum. Note: this object will be of the same type as the object that called this method. For example, if this method is called from AveragedPowerspectrum, it will return an object of class AveragedPowerspectrum, too.

sub(other, operated_attrs=None, error_attrs=None, error_operation=<function sqsum>, inplace=False)

Subtract all the array attrs of two StingrayObject instances element by element, assuming the main array attributes of the instances match exactly.

All array attrs ending with _err are treated as error bars and propagated with the sum of squares.

GTIs are crossed, so that only common intervals are saved.

Parameters:
other : StingrayTimeseries object

A second time series object

Other Parameters:
operated_attrs : list of str or None

Array attributes to be operated on. Defaults to all array attributes not ending in _err. The other array attributes will be discarded from the time series to avoid inconsistencies.

error_attrs : list of str or None

Array attributes to be operated on with error_operation. Defaults to all array attributes ending with _err.

error_operation : function

Function to be called to propagate the errors

inplace : bool

If True, overwrite the current time series. Otherwise, return a new one.

time_lag()[source]

Calculate time lag and uncertainty.

Equation from Bendat & Piersol, 2011 [bendat-2011]_.

Returns:
lag : np.ndarray

The time lag

lag_err : np.ndarray

The uncertainty in the time lag

to_astropy_table(no_longdouble=False) Table

Create an Astropy Table from a StingrayObject

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the meta dictionary.

Other Parameters:
no_longdouble : bool

If True, reduce the precision of longdouble arrays to double precision. This needs to be done in some cases, e.g. when the table is to be saved in an architecture not supporting extended precision (e.g. ARM), but can also be useful when an extended precision is not needed.

to_norm(norm, inplace=False)

Convert Cross spectrum to new normalization.

Parameters:
norm : str

The new normalization of the spectrum

Returns:
new_spec : object, same class as input

The new, normalized, spectrum.

Other Parameters:
inplace: bool, default False

If True, change the current instance. Otherwise, return a new one

to_pandas() DataFrame

Create a pandas DataFrame from a StingrayObject.

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the ds.attrs dictionary.

Since pandas does not support n-D data, multi-dimensional arrays are converted into columns before the conversion, with names <colname>_dimN_M_K etc.

See documentation of make_nd_into_arrays for details.

to_xarray() Dataset

Create an xarray Dataset from a StingrayObject.

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the ds.attrs dictionary.

type = 'crossspectrum'

Make an averaged cross spectrum from a light curve by segmenting two light curves, Fourier-transforming each segment and then averaging the resulting cross spectra.

Parameters:
data1: :class:`stingray.Lightcurve` object OR iterable of :class:`stingray.Lightcurve` objects OR :class:`stingray.EventList` object

A light curve from which to compute the cross spectrum. In some cases, this would be the light curve of the wavelength/energy/frequency band of interest.

data2: :class:`stingray.Lightcurve` object OR iterable of :class:`stingray.Lightcurve` objects OR :class:`stingray.EventList` object

A second light curve to use in the cross spectrum. In some cases, this would be the wavelength/energy/frequency reference band to compare the band of interest with.

segment_size: float

The size of each segment to average. Note that if the total duration of each Lightcurve object in lc1 or lc2 is not an integer multiple of segment_size, then any fraction left over at the end of the time series will be discarded; keeping it would introduce artifacts.

norm: {``frac``, ``abs``, ``leahy``, ``none``}, default ``none``

The normalization of the (real part of the) cross spectrum.

Other Parameters:
gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.

dt : float

The time resolution of the light curve. Only needed when constructing light curves in the case where data1 or data2 are of :class:`EventList`

power_type: string, optional, default ``all``

Parameter to choose among complete, real part and magnitude of the cross spectrum.

silent : bool, default False

Do not show a progress bar when generating an averaged cross spectrum. Useful for the batch execution of many spectra

lc1: :class:`stingray.Lightcurve` object OR iterable of :class:`stingray.Lightcurve` objects

For backwards compatibility only. Like data1, but no stingray.events.EventList objects allowed

lc2: :class:`stingray.Lightcurve` object OR iterable of :class:`stingray.Lightcurve` objects

For backwards compatibility only. Like data2, but no stingray.events.EventList objects allowed

fullspec: boolean, optional, default ``False``

If True, return the full array of frequencies, otherwise return just the positive frequencies.

save_allbool, default False

Save all intermediate PDSs used for the final average. Use with care. This is likely to fill up your RAM on medium-sized datasets, and to slow down the computation when rebinning.

skip_checks: bool

Skip initial checks, for speed or other reasons (you need to trust your inputs!)

use_common_mean: bool

Averaged cross spectra are normalized in two possible ways: one is by normalizing each of the single spectra that get averaged, the other is by normalizing after the averaging. If use_common_mean is selected, the spectrum will be normalized after the average.

gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.

Attributes:
freq: numpy.ndarray

The array of mid-bin frequencies that the Fourier transform samples.

power: numpy.ndarray

The array of cross spectra.

power_err: numpy.ndarray

The uncertainties of power. An approximation for each bin is given by power_err = power / sqrt(m), where m is the number of powers averaged in each bin (by frequency binning, or by averaging the power spectra of segments of a light curve). Note that for a single realization (m=1) the error is equal to the power.

df: float

The frequency resolution.

m: int

The number of averaged cross spectra.

n: int

The number of time bins per segment of light curve.

nphots1: float

The total number of photons in the first (interest) light curve.

nphots2: float

The total number of photons in the second (reference) light curve.

gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals.

write(filename: str, fmt: str | None = None) None

Generic writer for :class:`StingrayObject`.

Currently supported formats are

  • pickle (not recommended for long-term storage)

  • any other formats compatible with the writers in astropy.table.Table (ascii.ecsv, hdf5, etc.)

.. note::

    Complex values can be dealt with out-of-the-box in some formats like HDF5 or FITS, not in others (e.g. all ASCII formats). With these formats, and in any case when fmt is ``None``, complex values will be stored as two columns of real numbers, whose names are of the format <variablename>.real and <variablename>.imag.
Parameters:
filename: str

Name and path of the file to save the object list to.

fmt: str

The file format to store the data in. Available options are pickle, hdf5, ascii, fits


AveragedPowerspectrum

class stingray.AveragedPowerspectrum(data=None, segment_size=None, norm='frac', gti=None, silent=False, dt=None, lc=None, large_data=False, save_all=False, skip_checks=False, use_common_mean=True)[source]
add(other, operated_attrs=None, error_attrs=None, error_operation=<function sqsum>, inplace=False)

Add two StingrayObject instances.

Add the array values of two StingrayObject instances element by element, assuming the main array attributes of the instances match exactly.

All array attrs ending with _err are treated as error bars and propagated with the sum of squares.

GTIs are crossed, so that only common intervals are saved.

Parameters:
other : StingrayTimeseries object

A second time series object

Other Parameters:
operated_attrs : list of str or None

Array attributes to be operated on. Defaults to all array attributes not ending in _err. The other array attributes will be discarded from the time series to avoid inconsistencies.

error_attrs : list of str or None

Array attributes to be operated on with error_operation. Defaults to all array attributes ending with _err.

error_operation : function

Function to be called to propagate the errors

inplace : bool

If True, overwrite the current time series. Otherwise, return a new one.

apply_mask(mask: npt.ArrayLike, inplace: bool = False, filtered_attrs: list = None)

Apply a mask to all array attributes of the object

Parameters:
mask : array of bool

The mask. Has to be of the same length as self.time

Returns:
ts_new : StingrayObject object

The new object with the mask applied if inplace is False, otherwise the same object.

Other Parameters:
inplace : bool

If True, overwrite the current object. Otherwise, return a new one.

filtered_attrs : list of str or None

Array attributes to be filtered. Defaults to all array attributes if None. The other array attributes will be set to None. The main array attr is always included.

array_attrs() list[str]

List the names of the array attributes of the Stingray Object.

By array attributes, we mean the ones with the same size and shape as main_array_attr (e.g. time in EventList)

Returns:
attributes : list of str

List of array attributes.

classical_significances(threshold=1, trial_correction=False)

Compute the classical significances for the powers in the power spectrum, assuming an underlying noise distribution that follows a chi-square distribution with 2M degrees of freedom, where M is the number of powers averaged in each bin.

Note that this function will only produce correct results when the following underlying assumptions are fulfilled:

  1. The power spectrum is Leahy-normalized

  2. There is no source of variability in the data other than the periodic signal to be determined with this method. This is important! If there are other sources of (aperiodic) variability in the data, this method will not produce correct results, but instead produce a large number of spurious false positive detections!

  3. There are no significant instrumental effects changing the statistical distribution of the powers (e.g. pile-up or dead time)

By default, the method produces a list of (index, p-value) pairs for all powers in the power spectrum, where index is the numerical index of the power in question. If a threshold is set, then only powers with p-values below that threshold will be returned, along with their respective indices. If trial_correction is set to True, then the threshold will be corrected for the number of trials (frequencies) in the power spectrum before being used.

Parameters:
threshold : float, optional, default 1

The threshold to be used when reporting p-values of potentially significant powers. Must be between 0 and 1. Default is 1 (all p-values will be reported).

trial_correction : bool, optional, default False

A Boolean flag that sets whether the threshold will be corrected by the number of frequencies before being applied. This decreases the threshold (p-values need to be lower to count as significant). Default is False (report all powers), though for any application where threshold is set to something meaningful, this correction should also be applied!

Returns:
pvals : iterable

A list of (index, p-value) tuples for all powers that have p-values lower than the threshold specified in threshold.

coherence()

Averaged Coherence function.

Coherence is defined in Vaughan and Nowak, 1996 [5]. It is a Fourier frequency dependent measure of the linear correlation between time series measured simultaneously in two energy channels.

Compute the averaged coherence function of the cross spectrum by computing the coherence function of each segment and averaging them. The return value is a tuple whose first element is the coherence function and whose second element is the corresponding uncertainty.

Note: the uncertainty in the coherence function is strictly valid for Gaussian statistics only.

Returns:
(coh, uncertainty) : tuple of np.ndarray

Tuple comprising the coherence function and uncertainty.

References

compute_rms(min_freq, max_freq, poisson_noise_level=None)

Compute the fractional rms amplitude in the power spectrum between two frequencies.

Parameters:
min_freq: float

The lower frequency bound for the calculation.

max_freq: float

The upper frequency bound for the calculation.

Returns:
rms: float

The fractional rms amplitude contained between min_freq and max_freq.

rms_err: float

The error on the fractional rms amplitude.

Other Parameters:
poisson_noise_level : float, default None

This is the Poisson noise level of the PDS, with the same normalization as the PDS. If poisson_noise_level is None, the Poisson noise is calculated in the ideal case, e.g. 2./<countrate> for fractional rms normalisation. Dead time and other instrumental effects can alter it. The user can fit the Poisson noise level outside this function, using the same normalisation of the PDS, and it will get subtracted from the powers here.

data_attributes() list[str]

Clean up the list of attributes, only giving out those pointing to data.

List all the attributes that point directly to valid data. This method goes through all the attributes of the class, eliminating methods, properties, and attributes that are complicated to serialize, such as other StingrayObject instances or arrays of objects.

This function does not distinguish between array-like data and scalar data.

Returns:
data_attributes : list of str

List of attributes pointing to data that are not methods, properties, or other StingrayObject instances.

deadtime_correct(dead_time, rate, background_rate=0, limit_k=200, n_approx=None, paralyzable=False)

Correct the power spectrum for dead time effects.

This correction is based on the formula given in Zhang et al. 2015, assuming a constant dead time for all events. For more advanced dead time corrections, see the FAD method from stingray.deadtime.fad

Parameters:
dead_time: float

The dead time of the detector.

rate : float

Detected source count rate.

Returns:
spectrum: Crossspectrum or derivative.

The dead-time corrected spectrum.

Other Parameters:
background_rate : float, default 0

Detected background count rate. This is important to estimate when deadtime is given by the combination of the source counts and background counts (e.g. in an imaging X-ray detector).

paralyzable: bool, default False

If True, the dead time correction is done assuming a paralyzable dead time. If False, the correction is done assuming a non-paralyzable (more common) dead time.

limit_k : int, default 200

Limit to this value the number of terms in the inner loops of calculations. Check the plots returned by the check_B and check_A functions to test that this number is adequate.

n_approx : int, default None

Number of bins to calculate the model power spectrum. If None, it will use the size of the input frequency array. Relatively simple models (e.g., low count rates compared to dead time) can use a smaller number of bins to speed up the calculation, and the final power values will be interpolated.

dict() dict

Return a dictionary representation of the object.

classmethod from_astropy_table(ts: Table) Tso

Create a StingrayObject from data in an Astropy Table.

The table MUST contain at least a column named like the main_array_attr. The remaining columns will form the array attributes of the new object, while the attributes in the table’s meta dictionary will form the new meta attributes of the object.

static from_events(events, dt, segment_size=None, gti=None, norm='frac', silent=False, use_common_mean=True, save_all=False)

Calculate an average power spectrum from an event list.

Parameters:
events : stingray.EventList

Event list to be analyzed.

dt : float

The time resolution of the intermediate light curves (sets the Nyquist frequency).

Other Parameters:
segment_size : float

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedPowerspectrum.

gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``

Additional, optional Good Time intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that happens, assign the desired GTIs to the input object.

norm : str, default “frac”

The normalization of the periodogram. abs is absolute rms, frac is fractional rms, leahy is Leahy+83 normalization, and none is the unnormalized periodogram.

use_common_mean : bool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). By default, we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

silent : bool, default False

Silence the progress bars.

save_all : bool, default False

Save all intermediate PDSs used for the final average.
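The effect of the segment averaging described above can be sketched with plain numpy (a toy model, not Stingray's implementation): for pure white noise, Leahy-normalized powers follow a chi-squared distribution with 2 degrees of freedom (mean 2), and averaging m segments shrinks the scatter by roughly sqrt(m):

```python
import numpy as np

rng = np.random.default_rng(42)
n_freq, m = 1000, 64

# One row of simulated white-noise Leahy powers per segment.
single = rng.chisquare(2, size=(m, n_freq))

# Averaging the periodograms of the m segments.
averaged = single.mean(axis=0)

scatter_single = single[0].std()   # scatter of a single periodogram
scatter_avg = averaged.std()       # reduced by ~sqrt(m) = 8
```

This is also why power_err = power / sqrt(m) is a good approximation for averaged spectra.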

static from_lc_iterable(iter_lc, dt, segment_size=None, gti=None, norm='frac', silent=False, use_common_mean=True, save_all=False)

Calculate the average power spectrum of an iterable collection of light curves.

Parameters:
iter_lc : iterable of stingray.Lightcurve objects or np.array

Light curves. If arrays, use them as counts.

dt : float

The time resolution of the light curves (sets the Nyquist frequency).

Other Parameters:
segment_size : float

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedPowerspectrum.

gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``

Additional, optional Good Time intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that happens, assign the desired GTIs to the input object.

norm : str, default “frac”

The normalization of the periodogram. abs is absolute rms, frac is fractional rms, leahy is Leahy+83 normalization, and none is the unnormalized periodogram.

use_common_mean : bool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). By default, we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

silent : bool, default False

Silence the progress bars.

save_all : bool, default False

Save all intermediate PDSs used for the final average.

static from_lightcurve(lc, segment_size=None, gti=None, norm='frac', silent=False, use_common_mean=True, save_all=False)

Calculate a power spectrum from a light curve.

Parameters:
lc : stingray.Lightcurve

Light curve to be analyzed.

Other Parameters:
segment_size : float

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedPowerspectrum.

gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``

Additional, optional Good Time intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that happens, assign the desired GTIs to the input object.

norm : str, default “frac”

The normalization of the periodogram. abs is absolute rms, frac is fractional rms, leahy is Leahy+83 normalization, and none is the unnormalized periodogram.

use_common_mean : bool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). By default, we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

silent : bool, default False

Silence the progress bars.

save_all : bool, default False

Save all intermediate PDSs used for the final average.

classmethod from_pandas(ts: DataFrame) Tso

Create a StingrayObject from data in a pandas DataFrame.

The dataframe MUST contain at least a column named like the main_array_attr. The remaining columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.

Since pandas does not support n-D data, multi-dimensional arrays can be specified as <colname>_dimN_M_K etc.

See documentation of make_1d_arrays_into_nd for details.
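The flattening convention can be sketched as follows (a hypothetical illustration using a plain dict in place of a DataFrame; the exact column names are generated by make_nd_into_arrays / make_1d_arrays_into_nd, and the flux_dim_i_j naming below is only indicative):

```python
import numpy as np

# An (n, M, K) array attribute is stored as M*K 1-D columns of
# length n, whose names encode the extra dimensions; here n=4, M=3, K=2.
arr = np.arange(24.0).reshape(4, 3, 2)
columns = {
    f"flux_dim_{i}_{j}": arr[:, i, j] for i in range(3) for j in range(2)
}

# Round trip: rebuild the n-D array from the 1-D columns.
rebuilt = np.stack(
    [columns[f"flux_dim_{i}_{j}"] for i in range(3) for j in range(2)],
    axis=1,
).reshape(4, 3, 2)
```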

static from_stingray_timeseries(ts, flux_attr, error_flux_attr=None, segment_size=None, norm='none', silent=False, use_common_mean=True, gti=None, save_all=False)

Calculate AveragedPowerspectrum from a time series.

Parameters:
ts : stingray.StingrayTimeseries

Input time series.

flux_attr : str

What attribute of the time series will be used.

Other Parameters:
error_flux_attr : str

What attribute of the time series will be used as error bar.

segment_size : float

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedPowerspectrum.

norm : str, default “none”

The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram.

use_common_mean : bool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

silent : bool, default False

Silence the progress bars.

gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the GTIs of the input object. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input object before making the power spectrum.

save_all : bool, default False

Save all intermediate PDSs used for the final average.

static from_time_array(times, dt, segment_size=None, gti=None, norm='frac', silent=False, use_common_mean=True)

Calculate an average power spectrum from an array of event times.

Parameters:
times : np.array

Event arrival times.

dt : float

The time resolution of the intermediate light curves (sets the Nyquist frequency).

Other Parameters:
segment_size : float

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedPowerspectrum.

gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``

Additional, optional Good Time intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that happens, assign the desired GTIs to the input object.

norm : str, default “frac”

The normalization of the periodogram. abs is absolute rms, frac is fractional rms, leahy is Leahy+83 normalization, and none is the unnormalized periodogram.

use_common_mean : bool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). By default, we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

silent : bool, default False

Silence the progress bars.

classmethod from_xarray(ts: Dataset) Tso

Create a StingrayObject from data in an xarray Dataset.

The dataset MUST contain at least a column named like the main_array_attr. The remaining columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.

get_meta_dict() dict

Give a dictionary with all non-None meta attrs of the object.

initial_checks(*args, **kwargs)[source]

Examples

>>> times = np.arange(0, 10)
>>> ev1 = EventList(times)
>>> ev2 = EventList(times)
>>> ac = AveragedCrossspectrum()

If AveragedCrossspectrum, you need segment_size:

>>> AveragedCrossspectrum.initial_checks(ac, data1=ev1, data2=ev2, dt=1)
Traceback (most recent call last):
...
ValueError: segment_size must be specified...

And it needs to be finite!

>>> AveragedCrossspectrum.initial_checks(ac, data1=ev1, data2=ev2, dt=1., segment_size=np.nan)
Traceback (most recent call last):
...
ValueError: segment_size must be finite!

internal_array_attrs() list[str]

List the names of the internal array attributes of the Stingray Object.

These are array attributes that can be set by properties, and are generally indicated by an underscore followed by the name of the property that links to it (E.g. _counts in Lightcurve). By array attributes, we mean the ones with the same size and shape as main_array_attr (e.g. time in EventList)

Returns:
attributeslist of str

List of internal array attributes.

meta_attrs() list[str]

List the names of the meta attributes of the Stingray Object.

By meta attributes, we mean the ones with a different size and shape than main_array_attr (e.g. mjdref in EventList)

Returns:
attributeslist of str

List of meta attributes.

modulation_upper_limit(fmin=None, fmax=None, c=0.95)

Upper limit on a sinusoidal modulation.

To understand the meaning of this amplitude: if the modulation is described by

\[p = \overline{p} (1 + a \sin(x))\]

this function returns \(a\).

If it is instead a sum of sinusoidal harmonics,

\[p = \overline{p} \left(1 + \sum_l a_l \sin(lx)\right)\]

it returns \(\sqrt{\sum_l a_l^2}\).

See stingray.stats.power_upper_limit, stingray.stats.amplitude_upper_limit for more information.

The formula used to calculate the upper limit assumes the Leahy normalization. If the periodogram is in another normalization, we will internally convert it to Leahy before calculating the upper limit.

Parameters:
fmin: float

The minimum frequency to search (defaults to the first nonzero bin)

fmax: float

The maximum frequency to search (defaults to the Nyquist frequency)

Returns:
a: float

The modulation amplitude that could produce P > p_meas with probability 1 - c.

Other Parameters:
c: float

The confidence value for the upper limit (e.g. 0.95 = 95%)

Examples

>>> pds = Powerspectrum()
>>> pds.norm = "leahy"
>>> pds.freq = np.arange(0., 5.)
>>> # Note: this pds has 40 as maximum value between 2 and 5 Hz
>>> pds.power = np.array([100000, 1, 1, 40, 1])
>>> pds.m = 1
>>> pds.nphots = 30000
>>> assert np.isclose(
...     pds.modulation_upper_limit(fmin=2, fmax=5, c=0.99),
...     0.10164,
...     atol=0.0001)
phase_lag()

Return the Fourier phase lag of the cross spectrum.

plot(labels=None, axis=None, title=None, marker='-', save=False, filename=None, ax=None)

Plot the amplitude of the cross spectrum vs. the frequency using matplotlib.

Parameters:
labels : iterable, default None

A list of tuples with xlabel and ylabel as strings.

axis : list, tuple, string, default None

Parameter to set axis properties of the matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for the ``matplotlib.pyplot.axis()`` method.

titlestr, default None

The title of the plot.

marker : str, default ‘-’

Line style and color of the plot. Line styles and colors are combined in a single format string, as in 'bo' for blue circles. See matplotlib.pyplot.plot for more options.

save : boolean, optional, default False

If True, save the figure with specified filename.

filename : str

File name of the image to save. Depends on the boolean save.

ax : matplotlib.Axes object

An axes object to fill with the cross correlation plot.

An axes object to fill with the cross correlation plot.

pretty_print(func_to_apply=None, attrs_to_apply=[], attrs_to_discard=[]) str

Return a pretty-printed string representation of the object.

This is useful for debugging, and for interactive use.

Other Parameters:
func_to_apply : function

A function that modifies the attributes listed in attrs_to_apply. It must return the modified attributes and a label to be printed. If None, no function is applied.

attrs_to_apply : list of str

Attributes to be modified by func_to_apply.

attrs_to_discard : list of str

Attributes to be discarded from the output.

classmethod read(filename: str, fmt: str = None) Tso

Generic reader for :class:`StingrayObject`.

Currently supported formats are

  • pickle (not recommended for long-term storage)

  • any other formats compatible with the writers in astropy.table.Table (ascii.ecsv, hdf5, etc.)

Files that need the astropy.table.Table interface MUST contain at least a column named like the main_array_attr. The default ascii format is enhanced CSV (ECSV). Data formats supporting the serialization of metadata (such as ECSV and HDF5) can contain all attributes such as mission, gti, etc with no significant loss of information. Other file formats might lose part of the metadata, so must be used with care.

Note:

Complex values can be dealt with out-of-the-box in some formats, like HDF5 or FITS, but not in others (e.g. all ASCII formats). With these formats, and in any case when fmt is ``None``, complex values should be stored as two columns of real numbers, whose names are of the format <variablename>.real and <variablename>.imag.
Parameters:
filename: str

Path and file name for the file to be read.

fmt: str

Available options are ‘pickle’, ‘hea’, and any Table-supported format such as ‘hdf5’, ‘ascii.ecsv’, etc.

Returns:
obj: StingrayObject object

The object reconstructed from file
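The complex-column convention in the note above can be sketched with numpy (illustrative only; the column names follow the <variablename>.real / <variablename>.imag pattern described there):

```python
import numpy as np

# A complex column (e.g. cross powers) is split into two real
# columns for formats that cannot store complex values...
cross = np.array([1 + 2j, 3 - 1j, -0.5 + 0.25j])
columns = {"power.real": cross.real.copy(), "power.imag": cross.imag.copy()}

# ...and, on read, columns sharing a base name are merged back
# into a single complex array.
restored = columns["power.real"] + 1j * columns["power.imag"]
```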

rebin(df=None, f=None, method='mean')

Rebin the power spectrum.

Parameters:
df: float

The new frequency resolution.

Returns:
bin_cs : Powerspectrum object

The newly binned power spectrum.

Other Parameters:
f: float

The rebin factor. If specified, it substitutes df with f*self.df, so f>1 is recommended.

rebin_log(f=0.01)

Logarithmic rebin of the periodogram. The new frequency depends on the previous frequency modified by a factor f:

\[d\nu_j = d\nu_{j-1} (1+f)\]
Parameters:
f: float, optional, default ``0.01``

Parameter that steers the frequency resolution.

Returns:
new_spec : Crossspectrum (or one of its subclasses) object

The newly binned cross spectrum or power spectrum. Note: this object will be of the same type as the object that called this method. For example, if this method is called from AveragedPowerspectrum, it will return an object of class AveragedPowerspectrum.
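The geometric binning rule above, \(d\nu_j = d\nu_{j-1} (1+f)\), can be sketched in a few lines of numpy (a hypothetical helper with made-up start/stop frequencies; it reproduces only the bin edges, not the power averaging):

```python
import numpy as np

def log_rebin_edges(fmin, fmax, df0, f=0.01):
    """Bin edges where each bin is wider than the last by (1 + f)."""
    edges = [fmin]
    df = df0
    while edges[-1] + df < fmax:
        edges.append(edges[-1] + df)
        df *= 1 + f
    edges.append(fmax)  # final, possibly clipped, bin
    return np.array(edges)

edges = log_rebin_edges(fmin=0.01, fmax=10.0, df0=0.01, f=0.1)
widths = np.diff(edges)
```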

sub(other, operated_attrs=None, error_attrs=None, error_operation=<function sqsum>, inplace=False)

Subtract all the array attrs of two StingrayObject instances element by element, assuming the main array attributes of the instances match exactly.

All array attrs ending with _err are treated as error bars and propagated with the sum of squares.

GTIs are crossed, so that only common intervals are saved.

Parameters:
other : StingrayTimeseries object

A second time series object.

Other Parameters:
operated_attrs : list of str or None

Array attributes to be operated on. Defaults to all array attributes not ending in _err. The other array attributes will be discarded from the time series to avoid inconsistencies.

error_attrs : list of str or None

Array attributes to be operated on with error_operation. Defaults to all array attributes ending with _err.

error_operation : function

Function to be called to propagate the errors.

inplace : bool

If True, overwrite the current time series. Otherwise, return a new one.
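The default error propagation (sqsum, i.e. combination in quadrature) works like this plain-numpy sketch, with made-up counts and error bars:

```python
import numpy as np

counts1 = np.array([10.0, 12.0, 9.0])
counts1_err = np.array([3.0, 4.0, 3.0])
counts2 = np.array([4.0, 5.0, 4.0])
counts2_err = np.array([4.0, 3.0, 4.0])

# Values are subtracted element by element...
diff = counts1 - counts2

# ...while the *_err attributes are combined in quadrature.
diff_err = np.sqrt(counts1_err**2 + counts2_err**2)
```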

time_lag()

Calculate time lag and uncertainty.

Equation from Bendat & Piersol (2011).

Returns:
lag : np.ndarray

The time lag.

lag_err : np.ndarray

The uncertainty in the time lag.
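The relation between phase lags and time lags can be sketched as follows (illustrative numbers: at each Fourier frequency nu, a phase lag phi corresponds to a time lag phi / (2 pi nu)):

```python
import numpy as np

freq = np.array([0.1, 1.0, 10.0])          # Hz
phase_lag = np.full(3, 0.2 * np.pi)        # rad, same phase lag everywhere

# The same phase lag maps to very different time lags
# depending on frequency.
time_lag = phase_lag / (2 * np.pi * freq)  # seconds
```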

to_astropy_table(no_longdouble=False) Table

Create an Astropy Table from a StingrayObject

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the meta dictionary.

Other Parameters:
no_longdouble : bool

If True, reduce the precision of longdouble arrays to double precision. This needs to be done in some cases, e.g. when the table is to be saved in an architecture not supporting extended precision (e.g. ARM), but can also be useful when an extended precision is not needed.

to_norm(norm, inplace=False)

Convert Cross spectrum to new normalization.

Parameters:
norm : str

The new normalization of the spectrum.

Returns:
new_spec : object, same class as input

The new, normalized, spectrum.

Other Parameters:
inplace: bool, default False

If True, change the current instance. Otherwise, return a new one

to_pandas() DataFrame

Create a pandas DataFrame from a StingrayObject.

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the ds.attrs dictionary.

Since pandas does not support n-D data, multi-dimensional arrays are converted into columns before the conversion, with names <colname>_dimN_M_K etc.

See documentation of make_nd_into_arrays for details.

to_xarray() Dataset

Create an xarray Dataset from a StingrayObject.

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the ds.attrs dictionary.

type = 'powerspectrum'

Make an averaged periodogram from a light curve by segmenting the light curve, Fourier-transforming each segment and then averaging the resulting periodograms.

Parameters:
data: :class:`stingray.Lightcurve` object OR iterable of :class:`stingray.Lightcurve` objects OR :class:`stingray.EventList` object

The light curve data to be Fourier-transformed.

segment_size: float

The size of each segment to average. Note that if the total duration of each Lightcurve object in lc is not an integer multiple of the segment_size, then any fraction left-over at the end of the time series will be lost.

norm: {“leahy” | “frac” | “abs” | “none” }, optional, default “frac”

The normalization of the periodogram to be used.

Other Parameters:
gti: 2-d float array

[[gti0_0, gti0_1], [gti1_0, gti1_1], ...] – Good Time intervals. This choice overrides the GTIs in the single light curves. Use with care, especially if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input object before making the power spectrum.

silent : bool, default False

Do not show a progress bar when generating an averaged cross spectrum. Useful for the batch execution of many spectra.

dt: float

The time resolution of the light curve. Only needed when constructing light curves in the case where data is a :class:`stingray.EventList`.

save_all : bool, default False

Save all intermediate PDSs used for the final average. Use with care. This is likely to fill up your RAM on medium-sized datasets, and to slow down the computation when rebinning.

skip_checks: bool

Skip initial checks, for speed or other reasons (you need to trust your inputs!).

Attributes:
norm: {``leahy`` | ``frac`` | ``abs`` | ``none`` }

The normalization of the periodogram.

freq: numpy.ndarray

The array of mid-bin frequencies that the Fourier transform samples.

power: numpy.ndarray

The array of normalized powers

power_err: numpy.ndarray

The uncertainties of power. An approximation for each bin is given by power_err = power / sqrt(m), where m is the number of powers averaged in each bin (by frequency binning, or averaging power spectra of segments of a light curve). Note that for a single realization (m=1) the error is equal to the power.

unnorm_power: numpy.ndarray

The array of unnormalized powers

unnorm_power_err: numpy.ndarray

The uncertainties of unnorm_power.

df: float

The frequency resolution.

m: int

The number of averaged periodograms.

n: int

The number of data points in the light curve.

nphots: float

The total number of photons in the light curve.

cs_all: list of :class:`Powerspectrum` objects

The list of all periodograms used to calculate the average periodogram.

write(filename: str, fmt: str | None = None) None

Generic writer for :class:`StingrayObject`.

Currently supported formats are

  • pickle (not recommended for long-term storage)

  • any other formats compatible with the writers in astropy.table.Table (ascii.ecsv, hdf5, etc.)

Note:

Complex values can be dealt with out-of-the-box in some formats, like HDF5 or FITS, but not in others (e.g. all ASCII formats). With these formats, and in any case when fmt is ``None``, complex values will be stored as two columns of real numbers, whose names are of the format <variablename>.real and <variablename>.imag.
Parameters:
filename: str

Name and path of the file to save the object list to.

fmt: str

The file format to store the data in. Available options are pickle, hdf5, ascii, fits


Dynamical Powerspectrum

class stingray.DynamicalPowerspectrum(lc, segment_size, norm='frac', gti=None, sample_time=None)[source]
add(other, operated_attrs=None, error_attrs=None, error_operation=<function sqsum>, inplace=False)

Add two StingrayObject instances.

Add the array values of two StingrayObject instances element by element, assuming the main array attributes of the instances match exactly.

All array attrs ending with _err are treated as error bars and propagated with the sum of squares.

GTIs are crossed, so that only common intervals are saved.

Parameters:
other : StingrayTimeseries object

A second time series object.

Other Parameters:
operated_attrs : list of str or None

Array attributes to be operated on. Defaults to all array attributes not ending in _err. The other array attributes will be discarded from the time series to avoid inconsistencies.

error_attrs : list of str or None

Array attributes to be operated on with error_operation. Defaults to all array attributes ending with _err.

error_operation : function

Function to be called to propagate the errors.

inplace : bool

If True, overwrite the current time series. Otherwise, return a new one.

apply_mask(mask: npt.ArrayLike, inplace: bool = False, filtered_attrs: list = None)

Apply a mask to all array attributes of the object

Parameters:
mask : array of bool

The mask. Has to be of the same length as self.time.

Returns:
ts_new : StingrayObject object

The new object with the mask applied if inplace is False, otherwise the same object.

Other Parameters:
inplace : bool

If True, overwrite the current object. Otherwise, return a new one.

filtered_attrs : list of str or None

Array attributes to be filtered. Defaults to all array attributes if None. The other array attributes will be set to None. The main array attr is always included.

array_attrs() list[str]

List the names of the array attributes of the Stingray Object.

By array attributes, we mean the ones with the same size and shape as main_array_attr (e.g. time in EventList)

Returns:
attributeslist of str

List of array attributes.

classical_significances(threshold=1, trial_correction=False)

Compute the classical significances for the powers in the power spectrum, assuming an underlying noise distribution that follows a chi-square distribution with 2M degrees of freedom, where M is the number of powers averaged in each bin.

Note that this function will only produce correct results when the following underlying assumptions are fulfilled:

  1. The power spectrum is Leahy-normalized

  2. There is no source of variability in the data other than the periodic signal to be determined with this method. This is important! If there are other sources of (aperiodic) variability in the data, this method will not produce correct results, but instead produce a large number of spurious false positive detections!

  3. There are no significant instrumental effects changing the statistical distribution of the powers (e.g. pile-up or dead time)

By default, the method produces (index, p-value) pairs for all powers in the power spectrum, where index is the numerical index of the power in question. If a threshold is set, then only powers with p-values below that threshold are reported, with their respective indices. If trial_correction is set to True, then the threshold will be corrected for the number of trials (frequencies) in the power spectrum before being used.

Parameters:
threshold : float, optional, default 1

The threshold to be used when reporting p-values of potentially significant powers. Must be between 0 and 1. Default is 1 (all p-values will be reported).

trial_correction : bool, optional, default False

A Boolean flag that sets whether the threshold will be corrected by the number of frequencies before being applied. This decreases the threshold (p-values need to be lower to count as significant). Default is False (report all powers), though for any application where threshold is set to something meaningful, this should also be applied!

Returns:
pvals : iterable

A list of (index, p-value) tuples for all powers that have p-values lower than the threshold specified in threshold.
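As a sketch of the underlying statistics (not Stingray's implementation): for pure noise in the Leahy normalization, m times the averaged power follows a chi-squared distribution with 2m degrees of freedom, and for even degrees of freedom the survival function (the single-trial p-value) has a closed form:

```python
from math import exp, factorial

def chi2_sf_even_dof(x, dof):
    """Survival function P(X > x) of a chi-squared variable,
    valid for even degrees of freedom only."""
    m = dof // 2
    return exp(-x / 2) * sum((x / 2) ** k / factorial(k) for k in range(m))

# Single periodogram (m=1): P(power > p) = exp(-p/2) in Leahy norm.
pval = chi2_sf_even_dof(30.0, 2)
```

Larger powers always give smaller p-values, which is what makes the thresholding above meaningful.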

coherence()

Averaged Coherence function.

Coherence is defined in Vaughan and Nowak, 1996 [6]. It is a Fourier frequency dependent measure of the linear correlation between time series measured simultaneously in two energy channels.

Compute the averaged coherence function of the cross spectrum by computing the coherence function of each segment and averaging them. The return value is a tuple whose first element is the coherence function and whose second element is the corresponding uncertainty.

Note: the uncertainty in the coherence function is strictly valid for Gaussian statistics only.

Returns:
(coh, uncertainty) : tuple of np.ndarray

Tuple comprising the coherence function and uncertainty.

References

compute_rms(min_freq, max_freq, poisson_noise_level=0)

Compute the fractional rms amplitude in the power spectrum between two frequencies.

Parameters:
min_freq: float

The lower frequency bound for the calculation.

max_freq: float

The upper frequency bound for the calculation.

Returns:
rms: float

The fractional rms amplitude contained between min_freq and max_freq.

rms_err: float

The error on the fractional rms amplitude.

Other Parameters:
poisson_noise_level : float, default 0

This is the Poisson noise level of the PDS, in the same normalization as the PDS.

data_attributes() list[str]

Clean up the list of attributes, only giving out those pointing to data.

List all the attributes that point directly to valid data. This method goes through all the attributes of the class, eliminating methods, properties, and attributes that are complicated to serialize such as other StingrayObject, or arrays of objects.

This function does not distinguish between array-like and scalar data.

Returns:
data_attributes : list of str

List of attributes pointing to data that are not methods, properties, or other StingrayObject instances.

deadtime_correct(dead_time, rate, background_rate=0, limit_k=200, n_approx=None, paralyzable=False)

Correct the power spectrum for dead time effects.

This correction is based on the formula given in Zhang et al. 2015, assuming a constant dead time for all events. For more advanced dead time corrections, see the FAD method from stingray.deadtime.fad

Parameters:
dead_time: float

The dead time of the detector.

rate : float

Detected source count rate.

Returns:
spectrum: Crossspectrum or derivative.

The dead-time corrected spectrum.

Other Parameters:
background_rate : float, default 0

Detected background count rate. This is important to estimate when deadtime is given by the combination of the source counts and background counts (e.g. in an imaging X-ray detector).

paralyzable: bool, default False

If True, the dead time correction is done assuming a paralyzable dead time. If False, the correction is done assuming a non-paralyzable (more common) dead time.

limit_k : int, default 200

Limit to this value the number of terms in the inner loops of calculations. Check the plots returned by the check_B and check_A functions to test that this number is adequate.

n_approx : int, default None

Number of bins to calculate the model power spectrum. If None, it will use the size of the input frequency array. Relatively simple models (e.g., low count rates compared to dead time) can use a smaller number of bins to speed up the calculation, and the final power values will be interpolated.

dict() dict

Return a dictionary representation of the object.

classmethod from_astropy_table(ts: Table) Tso

Create a StingrayObject from data in an Astropy Table.

The table MUST contain at least a column named like the main_array_attr. The remaining columns will form the array attributes of the new object, while the attributes in the table’s meta dictionary will form the new meta attributes of the object.

static from_events(events1, events2, dt, segment_size=None, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True, gti=None)

Calculate AveragedCrossspectrum from two event lists

Parameters:
events1 : stingray.EventList

Events from channel 1.

events2 : stingray.EventList

Events from channel 2.

dt : float

The time resolution of the intermediate light curves (sets the Nyquist frequency).

Other Parameters:
segment_size : float

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum.

norm : str, default “none”

The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram.

use_common_mean : bool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

fullspec : bool, default False

Return the full periodogram, including negative frequencies.

silent : bool, default False

Silence the progress bars.

power_type : str, default ‘all’

If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’, the real part.

gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.
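Under the hood, this is roughly equivalent to binning both event lists into light curves, splitting them into segments, and averaging the cross spectra of all segments. A numpy-only sketch of the idea (illustrative, not the actual Stingray implementation; GTIs and normalization are ignored):

```python
import numpy as np

rng = np.random.default_rng(0)
tseg, dt, segment_size = 64.0, 1 / 16, 8.0

# Two independent (e.g. Poisson) event lists observed over [0, tseg)
times1 = np.sort(rng.uniform(0, tseg, 2000))
times2 = np.sort(rng.uniform(0, tseg, 2000))

# Bin the events into light curves with resolution dt
edges = np.arange(0, tseg + dt / 2, dt)
counts1, _ = np.histogram(times1, bins=edges)
counts2, _ = np.histogram(times2, bins=edges)

nbin = int(segment_size / dt)           # bins per segment
nseg = len(counts1) // nbin             # number of segments to average
freq = np.fft.rfftfreq(nbin, d=dt)[1:]  # positive frequencies only

# Average F1 * conj(F2) over all segments (unnormalized, i.e. norm="none")
cross = np.zeros(len(freq), dtype=complex)
for k in range(nseg):
    s = slice(k * nbin, (k + 1) * nbin)
    cross += np.fft.rfft(counts1[s])[1:] * np.conj(np.fft.rfft(counts2[s])[1:])
cross /= nseg
```

The averaged complex cross spectrum `cross` is what the normalization step then converts to "frac", "abs", or "leahy" units.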

static from_lc_iterable(iter_lc1, iter_lc2, dt, segment_size, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True, gti=None)

Calculate AveragedCrossspectrum from two light curves

Parameters:
iter_lc1iterable of stingray.Lightcurve objects or np.array

Light curves from channel 1. If arrays, use them as counts

iter_lc2iterable of stingray.Lightcurve objects or np.array

Light curves from channel 2. If arrays, use them as counts

dtfloat

The time resolution of the light curves (sets the Nyquist frequency)

Other Parameters:
segment_sizefloat

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum

normstr, default “none”

The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram

use_common_meanbool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

fullspecbool, default False

Return the full periodogram, including negative frequencies

silentbool, default False

Silence the progress bars

power_typestr, default ‘all’

If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’, the real part

gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.

save_allbool, default False

If True, save the cross spectrum of each segment in the cs_all attribute of the output Crossspectrum object.

static from_lightcurve(lc1, lc2, segment_size=None, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True, gti=None)

Calculate AveragedCrossspectrum from two light curves

Parameters:
lc1stingray.Lightcurve

Light curve from channel 1

lc2stingray.Lightcurve

Light curve from channel 2

Other Parameters:
segment_sizefloat

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum

normstr, default “none”

The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram

use_common_meanbool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

fullspecbool, default False

Return the full periodogram, including negative frequencies

silentbool, default False

Silence the progress bars

power_typestr, default ‘all’

If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’, the real part

gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.

classmethod from_pandas(ts: DataFrame) → Tso

Create a StingrayObject from data in a pandas DataFrame.

The dataframe MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the attributes in the dataframe attrs dictionary will form the new meta attributes of the object.

Since pandas does not support n-D data, multi-dimensional arrays can be specified as <colname>_dimN_M_K etc.

See documentation of make_1d_arrays_into_nd for details.
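The `<colname>_dimN_M_K` naming convention can be illustrated with a toy re-implementation of the flattening step (the function name mirrors the documented helper, but this sketch is not the actual Stingray code):

```python
import numpy as np

def make_nd_into_arrays(name, arr):
    """Flatten the trailing dimensions of `arr` into 1-D columns whose names
    follow the <colname>_dimN_M_K pattern (toy sketch, not the Stingray helper)."""
    if arr.ndim == 1:
        return {name: arr}
    cols = {}
    for idx in np.ndindex(arr.shape[1:]):
        # e.g. index (1, 2) of "energy" becomes the column "energy_dim1_2"
        label = name + "_dim" + "_".join(str(i) for i in idx)
        cols[label] = arr[(slice(None), *idx)]
    return cols

# A (4 time bins) x (3 channels) array becomes three 1-D columns
energies = np.arange(12.0).reshape(4, 3)
cols = make_nd_into_arrays("energy", energies)
# sorted(cols) -> ['energy_dim0', 'energy_dim1', 'energy_dim2']
```

make_1d_arrays_into_nd then does the inverse, parsing the indices out of the column names to rebuild the n-D array.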

static from_stingray_timeseries(ts1, ts2, flux_attr, error_flux_attr=None, segment_size=None, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True, gti=None)

Calculate AveragedCrossspectrum from two light curves

Parameters:
ts1stingray.Timeseries

Time series from channel 1

ts2stingray.Timeseries

Time series from channel 2

flux_attrstr

What attribute of the time series will be used.

Other Parameters:
error_flux_attrstr

What attribute of the time series will be used as error bar.

segment_sizefloat

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum

normstr, default “none”

The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram

use_common_meanbool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

fullspecbool, default False

Return the full periodogram, including negative frequencies

silentbool, default False

Silence the progress bars

power_typestr, default ‘all’

If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’, the real part

gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.

static from_time_array(times1, times2, dt, segment_size=None, gti=None, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True)

Calculate AveragedCrossspectrum from two arrays of event times.

Parameters:
times1np.array

Event arrival times of channel 1

times2np.array

Event arrival times of channel 2

dtfloat

The time resolution of the intermediate light curves (sets the Nyquist frequency)

Other Parameters:
segment_sizefloat

The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum.

gti[[gti0, gti1], …]

Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.

normstr, default “none”

The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram

use_common_meanbool, default True

The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.

fullspecbool, default False

Return the full periodogram, including negative frequencies

silentbool, default False

Silence the progress bars

power_typestr, default ‘all’

If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’, the real part

classmethod from_xarray(ts: Dataset) → Tso

Create a StingrayObject from data in an xarray Dataset.

The dataset MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.

get_meta_dict() → dict

Give a dictionary with all non-None meta attrs of the object.

initial_checks(data1, segment_size=None, **kwargs)

Examples

>>> times = np.arange(0, 10)
>>> ev1 = EventList(times)
>>> ev2 = EventList(times)
>>> ac = AveragedCrossspectrum()

If it is an AveragedCrossspectrum, you need segment_size:

>>> AveragedCrossspectrum.initial_checks(ac, data1=ev1, data2=ev2, dt=1)
Traceback (most recent call last):
...
ValueError: segment_size must be specified...

And it needs to be finite!

>>> AveragedCrossspectrum.initial_checks(ac, data1=ev1, data2=ev2, dt=1., segment_size=np.nan)
Traceback (most recent call last):
...
ValueError: segment_size must be finite!

internal_array_attrs() → list[str]

List the names of the internal array attributes of the Stingray Object.

These are array attributes that can be set by properties, and are generally indicated by an underscore followed by the name of the property that links to it (E.g. _counts in Lightcurve). By array attributes, we mean the ones with the same size and shape as main_array_attr (e.g. time in EventList)

Returns:
attributeslist of str

List of internal array attributes.

meta_attrs() → list[str]

List the names of the meta attributes of the Stingray Object.

By meta attributes, we mean the ones with a different size and shape than main_array_attr (e.g. mjdref in EventList)

Returns:
attributeslist of str

List of meta attributes.

phase_lag()

Return the Fourier phase lag of the cross spectrum.

plot(labels=None, axis=None, title=None, marker='-', save=False, filename=None, ax=None)

Plot the amplitude of the cross spectrum vs. the frequency using matplotlib.

Parameters:
labelsiterable, default None

A list of tuple with xlabel and ylabel as strings.

axislist, tuple, string, default None

Parameter to set axis properties of the matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for the matplotlib.pyplot.axis() method.

titlestr, default None

The title of the plot.

markerstr, default ‘-’

Line style and color of the plot. Line styles and colors are combined in a single format string, as in 'bo' for blue circles. See matplotlib.pyplot.plot for more options.

saveboolean, optional, default False

If True, save the figure with specified filename.

filenamestr

File name of the image to save. Depends on the boolean save.

axmatplotlib.Axes object

An axes object to fill with the cross correlation plot.

power_colors(freq_edges=[0.00390625, 0.03125, 0.25, 2.0, 16.0], freqs_to_exclude=None, poisson_power=None)[source]

Return the power colors of the dynamical power spectrum.

Parameters:
freq_edges: iterable

The edges of the frequency bins to be used for the power colors.

freqs_to_exclude1-d or 2-d iterable, optional, default None

The ranges of frequencies to exclude from the calculation of the power color. For example, the frequencies containing strong QPOs. A 1-d iterable should contain two values for the edges of a single range. (E.g. [0.1, 0.2]). [[0.1, 0.2], [3, 4]] will exclude the ranges 0.1-0.2 Hz and 3-4 Hz.

poisson_powerfloat or iterable, optional, default None

The Poisson noise level of the power spectrum. Defaults to the theoretical Poisson noise level for the current normalization (e.g. 2 for Leahy normalization). If iterable, it should have the same length as frequency (this might apply to the case of a power spectrum with a strong dead time distortion).

Returns:
pc0: np.ndarray
pc0_err: np.ndarray
pc1: np.ndarray
pc1_err: np.ndarray

The power colors for each spectrum and their respective errors
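Conceptually, each power color is a ratio of the variances (integrated powers) in two of the four frequency bands delimited by freq_edges. A toy numpy sketch under that assumption (the exact band pairing and error propagation used by Stingray should be checked against the implementation):

```python
import numpy as np

def band_power(freq, power, fmin, fmax, df):
    """Integrate the power over [fmin, fmax) with a simple rectangle rule."""
    mask = (freq >= fmin) & (freq < fmax)
    return np.sum(power[mask]) * df

# Toy flat (white-noise-like) power spectrum covering all four bands
df = 1 / 256
freq = np.arange(df, 16.0 + df / 2, df)
power = np.ones_like(freq)

freq_edges = [1 / 256, 1 / 32, 0.25, 2.0, 16.0]   # the default band edges
variances = [band_power(freq, power, lo, hi, df)
             for lo, hi in zip(freq_edges[:-1], freq_edges[1:])]

# One common convention: each color is the ratio of two band variances
pc0 = variances[0] / variances[2]
pc1 = variances[1] / variances[3]
```

For a flat spectrum the geometric spacing of the band edges makes both ratios equal, which is why a pure noise spectrum sits at a single point in the power-color plane.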

pretty_print(func_to_apply=None, attrs_to_apply=[], attrs_to_discard=[]) → str

Return a pretty-printed string representation of the object.

This is useful for debugging, and for interactive use.

Other Parameters:
func_to_applyfunction

A function that modifies the attributes listed in attrs_to_apply. It must return the modified attributes and a label to be printed. If None, no function is applied.

attrs_to_applylist of str

Attributes to be modified by func_to_apply.

attrs_to_discardlist of str

Attributes to be discarded from the output.

classmethod read(filename: str, fmt: str = None) → Tso

Generic reader for :class:`StingrayObject`

Currently supported formats are

  • pickle (not recommended for long-term storage)

  • any other formats compatible with the writers in astropy.table.Table (ascii.ecsv, hdf5, etc.)

Files that need the astropy.table.Table interface MUST contain at least a column named like the main_array_attr. The default ascii format is enhanced CSV (ECSV). Data formats supporting the serialization of metadata (such as ECSV and HDF5) can contain all attributes such as mission, gti, etc with no significant loss of information. Other file formats might lose part of the metadata, so must be used with care.

.. note::

Complex values can be dealt with out-of-the-box in some formats like HDF5 or FITS, but not in others (e.g. all ASCII formats). With these formats, and in any case when fmt is ``None``, complex values should be stored as two columns of real numbers, whose names are of the format <variablename>.real and <variablename>.imag

Parameters:
filename: str

Path and file name for the file to be read.

fmt: str

Available options are ‘pickle’, ‘hea’, and any Table-supported format such as ‘hdf5’, ‘ascii.ecsv’, etc.

Returns:
obj: StingrayObject object

The object reconstructed from file

rebin(df=None, f=None, method='mean')

Rebin the cross spectrum to a new frequency resolution df.

Parameters:
df: float

The new frequency resolution

Returns:
bin_cs = Crossspectrum (or one of its subclasses) object

The newly binned cross spectrum or power spectrum. Note: this object will be of the same type as the object that called this method. For example, if this method is called from AveragedPowerspectrum, it will return an object of class AveragedPowerspectrum, too.

Other Parameters:
f: float

the rebin factor. If specified, it substitutes df with f*self.df
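For an integer rebin factor, the operation amounts to averaging the powers in groups of consecutive bins. A minimal numpy sketch (illustrative, not the actual implementation, which also handles non-integer factors and error propagation):

```python
import numpy as np

def rebin_linear(freq, power, f):
    """Average `power` in groups of `f` bins, so the new resolution is f * df.
    Leftover bins at the end are dropped (toy sketch, integer `f` only)."""
    n = (len(freq) // f) * f
    new_freq = freq[:n].reshape(-1, f).mean(axis=1)
    new_power = power[:n].reshape(-1, f).mean(axis=1)
    return new_freq, new_power

freq = np.arange(1.0, 11.0)      # df = 1, ten bins
power = np.arange(10.0)
new_freq, new_power = rebin_linear(freq, power, 3)   # new df = 3; last bin dropped
# new_freq -> [2., 5., 8.], new_power -> [1., 4., 7.]
```

Averaging f bins reduces the scatter of each power estimate by roughly sqrt(f), at the cost of frequency resolution.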

rebin_by_n_intervals(n, method='average')

Rebin the Dynamic Power Spectrum to a new time resolution, by summing contiguous intervals.

This is different from :meth:`DynamicalPowerspectrum.rebin_time` in that it averages n consecutive intervals regardless of their distance in time; rebin_time will instead average intervals that are separated by at most a time dt_new.

Parameters:
n: int

The number of intervals to be combined into one.

method: {“sum” | “mean” | “average”}, optional, default “average”

This keyword argument sets whether the powers in the new bins should be summed or averaged.

Returns:
time_new: numpy.ndarray

Time axis with new rebinned time resolution.

dynspec_new: numpy.ndarray

New rebinned Dynamical Cross Spectrum.

rebin_frequency(df_new, method='average')

Rebin the Dynamic Power Spectrum to a new frequency resolution. Rebinning is an in-place operation, i.e. will replace the existing dyn_ps attribute.

While the new resolution does not need to be an integer multiple of the previous frequency resolution, be aware that if it is not, the last frequency bin will be cut off by the fraction left over by the integer division

Parameters:
df_new: float

The new frequency resolution of the dynamical power spectrum. Must be larger than the frequency resolution of the old dynamical power spectrum!

method: {“sum” | “mean” | “average”}, optional, default “average”

This keyword argument sets whether the powers in the new bins should be summed or averaged.

rebin_log(f=0.01)

Logarithmic rebin of the periodogram. Each new frequency bin width depends on the previous bin width, modified by a factor f:

\[d\nu_j = d\nu_{j-1} (1+f)\]
Parameters:
f: float, optional, default ``0.01``

parameter that steers the frequency resolution

Returns:
new_specCrossspectrum (or one of its subclasses) object

The newly binned cross spectrum or power spectrum. Note: this object will be of the same type as the object that called this method. For example, if this method is called from AveragedPowerspectrum, it will return an object of class AveragedPowerspectrum, too.
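The recurrence above means the bin widths grow geometrically. A short numpy sketch of how the new bin edges could be built (illustrative only):

```python
import numpy as np

f = 0.01          # the rebin_log steering parameter
df = 0.1          # original frequency resolution
fmin, fmax = 0.1, 10.0

# Build bin edges whose widths follow d(nu)_j = d(nu)_{j-1} * (1 + f)
edges = [fmin]
dnu = df
while edges[-1] + dnu < fmax:
    edges.append(edges[-1] + dnu)
    dnu *= 1 + f
edges = np.array(edges)

widths = np.diff(edges)   # each width is (1 + f) times the previous one
```

Near fmin the bins are nearly as fine as the original grid, while at high frequencies many original bins are averaged together, which is usually what one wants for a power-law-like spectrum.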

rebin_time(dt_new, method='average')

Rebin the Dynamic Power Spectrum to a new time resolution.

Note: this is not changing the time resolution of the input light curve! dt is the integration time of each line of the dynamical power spectrum (typically, an integer multiple of segment_size).

While the new resolution does not need to be an integer multiple of the previous time resolution, be aware that if it is not, the last time bin will be cut off by the fraction left over by the integer division

Parameters:
dt_new: float

The new time resolution of the dynamical power spectrum. Must be larger than the time resolution of the old dynamical power spectrum!

method: {“sum” | “mean” | “average”}, optional, default “average”

This keyword argument sets whether the powers in the new bins should be summed or averaged.

Returns:
time_new: numpy.ndarray

Time axis with new rebinned time resolution.

dynspec_new: numpy.ndarray

New rebinned Dynamical Cross Spectrum.

sub(other, operated_attrs=None, error_attrs=None, error_operation=<function sqsum>, inplace=False)

Subtract all the array attrs of two StingrayObject instances element by element, assuming the main array attributes of the instances match exactly.

All array attrs ending with _err are treated as error bars and propagated with the sum of squares.

GTIs are crossed, so that only common intervals are saved.

Parameters:
otherStingrayTimeseries object

A second time series object

Other Parameters:
operated_attrslist of str or None

Array attributes to be operated on. Defaults to all array attributes not ending in _err. The other array attributes will be discarded from the time series to avoid inconsistencies.

error_attrslist of str or None

Array attributes to be operated on with error_operation. Defaults to all array attributes ending with _err.

error_operationfunction

Function to be called to propagate the errors

inplacebool

If True, overwrite the current time series. Otherwise, return a new one.

time_lag()

Calculate time lag and uncertainty.

Equation from Bendat & Piersol, 2011 [bendat-2011]_.

Returns:
lagnp.ndarray

The time lag

lag_errnp.ndarray

The uncertainty in the time lag
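The time lag at each frequency follows from the cross-spectrum phase lag as tau(nu) = phi(nu) / (2 pi nu). A toy numpy check with two sinusoids offset by a known delay (not the Stingray internals, which also compute the uncertainties):

```python
import numpy as np

dt, n, delay = 0.01, 1000, 0.05
t = np.arange(n) * dt
nu0 = 2.0                                   # chosen to fall exactly on a Fourier bin
x = np.sin(2 * np.pi * nu0 * t)
y = np.sin(2 * np.pi * nu0 * (t - delay))   # y lags x by `delay` seconds

freq = np.fft.rfftfreq(n, d=dt)[1:]
cross = np.fft.rfft(x)[1:] * np.conj(np.fft.rfft(y)[1:])

phase_lag = np.angle(cross)
time_lag = phase_lag / (2 * np.pi * freq)

i = np.argmin(np.abs(freq - nu0))
# time_lag[i] recovers `delay` (0.05 s) at the signal frequency
```

At frequencies with no signal power the phase (and hence the lag) is noise-dominated, which is why the lag is only meaningful where the coherence is high.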

to_astropy_table(no_longdouble=False) → Table

Create an Astropy Table from a StingrayObject

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the meta dictionary.

Other Parameters:
no_longdoublebool

If True, reduce the precision of longdouble arrays to double precision. This needs to be done in some cases, e.g. when the table is to be saved in an architecture not supporting extended precision (e.g. ARM), but can also be useful when an extended precision is not needed.

to_norm(norm, inplace=False)

Convert Cross spectrum to new normalization.

Parameters:
normstr

The new normalization of the spectrum

Returns:
new_specobject, same class as input

The new, normalized, spectrum.

Other Parameters:
inplace: bool, default False

If True, change the current instance. Otherwise, return a new one

to_pandas() → DataFrame

Create a pandas DataFrame from a StingrayObject.

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the ds.attrs dictionary.

Since pandas does not support n-D data, multi-dimensional arrays are converted into columns before the conversion, with names <colname>_dimN_M_K etc.

See documentation of make_nd_into_arrays for details.

to_xarray() → Dataset

Create an xarray Dataset from a StingrayObject.

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the ds.attrs dictionary.

trace_maximum(min_freq=None, max_freq=None)

Return the indices of the maximum powers in each segment's Powerspectrum between the specified frequencies.

Parameters:
min_freq: float, default ``None``

The lower frequency bound.

max_freq: float, default ``None``

The upper frequency bound.

Returns:
max_positionsnp.array

The array of indices of the maximum power in each segment having frequency between min_freq and max_freq.

type = 'powerspectrum'

Create a dynamical power spectrum, also often called a spectrogram.

This class will divide a Lightcurve object into segments of length segment_size, create a power spectrum for each segment and store all powers in a matrix as a function of both time (using the mid-point of each segment) and frequency.

This is often used to trace changes in period of a (quasi-)periodic signal over time.

Parameters:
lcstingray.Lightcurve or stingray.EventList object

The time series or event list of which the dynamical power spectrum is to be calculated. If stingray.EventList, dt must be specified as well.

segment_sizefloat, default 1

Length of the segment of light curve, default value is 1 (in whatever units the time array in the Lightcurve object uses).

norm: {“leahy” | “frac” | “abs” | “none” }, optional, default “frac”

The normalization of the periodogram to be used.

Other Parameters:
gti: 2-d float array

[[gti0_0, gti0_1], [gti1_0, gti1_1], ...] – Good Time intervals. This choice overrides the GTIs in the single light curves. Use with care, especially if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input object before making the power spectrum.

sample_time: float

Compulsory for input stingray.EventList data. The time resolution of the lightcurve that is created internally from the input event lists. Drives the Nyquist frequency.

Attributes:
segment_size: float

The size of each segment to average. Note that if the total duration of each input object in lc is not an integer multiple of the segment_size, then any fraction left-over at the end of the time series will be lost.

dyn_psnp.ndarray

The matrix of normalized squared absolute values of Fourier amplitudes. The axes are given by the freq and time attributes.

norm: {``leahy`` | ``frac`` | ``abs`` | ``none``}

The normalization of the periodogram.

freq: numpy.ndarray

The array of mid-bin frequencies that the Fourier transform samples.

time: numpy.ndarray

The array of mid-point times of each interval used for the dynamical power spectrum.

df: float

The frequency resolution.

dt: float

The time resolution of the dynamical spectrum. It is not the time resolution of the input light curve. It is the integration time of each line of the dynamical power spectrum (typically, an integer multiple of segment_size).

m: int

The number of averaged cross spectra.
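The construction described above can be sketched with plain numpy (illustrative only; the real class also handles normalization, GTIs, and event-list input):

```python
import numpy as np

dt, segment_size, tseg = 1 / 32, 4.0, 64.0
t = np.arange(0, tseg, dt)
counts = 100 + 10 * np.sin(2 * np.pi * 2.0 * t)   # steady 2 Hz signal

nbin = int(segment_size / dt)                     # bins per segment
nseg = len(counts) // nbin
freq = np.fft.rfftfreq(nbin, d=dt)[1:]

# One (unnormalized) power spectrum per segment, stored as a (freq x time) matrix
dyn_ps = np.array([
    np.abs(np.fft.rfft(counts[k * nbin:(k + 1) * nbin])[1:]) ** 2
    for k in range(nseg)
]).T
time = (np.arange(nseg) + 0.5) * segment_size     # segment mid-points

peak_rows = dyn_ps.argmax(axis=0)                 # brightest row: the 2 Hz bin
```

A drifting periodic signal would show up as the peak moving across frequency rows as a function of the time column, which is exactly what trace_maximum extracts.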

write(filename: str, fmt: str | None = None) → None

Generic writer for :class:`StingrayObject`

Currently supported formats are

  • pickle (not recommended for long-term storage)

  • any other formats compatible with the writers in astropy.table.Table (ascii.ecsv, hdf5, etc.)

.. note::

Complex values can be dealt with out-of-the-box in some formats like HDF5 or FITS, but not in others (e.g. all ASCII formats). With these formats, and in any case when fmt is ``None``, complex values will be stored as two columns of real numbers, whose names are of the format <variablename>.real and <variablename>.imag

Parameters:
filename: str

Name and path of the file to save the object list to.

fmt: str

The file format to store the data in. Available options are pickle, hdf5, ascii, fits


CrossCorrelation

class stingray.CrossCorrelation(lc1=None, lc2=None, cross=None, mode='same', norm='none')[source]

Make a cross-correlation from light curves or a cross spectrum.

You can also make an empty CrossCorrelation object to populate with your own cross-correlation data.

Parameters:
lc1: :class:`stingray.Lightcurve` object, optional, default ``None``

The first light curve data for correlation calculations.

lc2: :class:`stingray.Lightcurve` object, optional, default ``None``

The second light curve data for the correlation calculations.

cross: :class:`stingray.Crossspectrum` object, default ``None``

The cross spectrum data for the correlation calculations.

mode: {``full``, ``valid``, ``same``}, optional, default ``same``

A string indicating the size of the correlation output. See the relevant scipy documentation [scipy-docs] for more details.

norm: {``none``, ``variance``}

if “variance”, the cross correlation is normalized so that perfect correlation gives 1, and perfect anticorrelation gives -1. See Gaskell & Peterson 1987, Gardner & Done 2017

Attributes:
lc1: :class:`stingray.Lightcurve`

The first light curve data for correlation calculations.

lc2: :class:`stingray.Lightcurve`

The second light curve data for the correlation calculations.

cross: :class:`stingray.Crossspectrum`

The cross spectrum data for the correlation calculations.

corr: numpy.ndarray

An array of correlation data calculated from two light curves

time_lags: numpy.ndarray

An array of all possible time lags against which each point in corr is calculated

dt: float

The time resolution of each light curve (used in time_lag calculations)

time_shift: float

Time lag that gives the maximum value of correlation between the two light curves. The correlation between the light curves is maximal when one of them is shifted by time_shift.

n: int

Number of points in self.corr (length of cross-correlation data)

auto: bool

An internal flag to indicate whether this is a cross-correlation or an auto-correlation.

norm: {``none``, ``variance``}

The normalization specified in input
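The core computation is a discrete cross-correlation of the two (mean-subtracted) light curves, with time_shift the lag that maximizes it. A numpy sketch with two flares offset by a known delay (sign conventions here follow numpy.correlate and may differ from Stingray's):

```python
import numpy as np

dt = 0.1
t = np.arange(0, 20, dt)
lc1 = np.exp(-((t - 5.0) ** 2) / 0.5)   # a flare peaking at t = 5 s
lc2 = np.exp(-((t - 7.0) ** 2) / 0.5)   # the same flare, delayed by 2 s

# Correlate the mean-subtracted light curves against all lags
corr = np.correlate(lc2 - lc2.mean(), lc1 - lc1.mean(), mode="same")
n = len(corr)
time_lags = (np.arange(n) - n // 2) * dt

time_shift = time_lags[np.argmax(corr)]  # lag of maximum correlation: +2 s here
```

Here a positive time_shift means lc2 lags lc1; mode="same" keeps the output the same length as the input, as in the class default.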

cal_timeshift(dt=1.0)[source]

Calculate the cross correlation against all possible time lags, both positive and negative.

This method uses scipy.signal.correlation_lags(), available in SciPy versions >= 1.6.1 ([scipy-docs-lag])

Parameters:
dt: float, optional, default ``1.0``

Time resolution of the light curve; it should be passed when the object is populated with correlation data and no information about the light curve can be extracted. Used to calculate time_lags.

Returns:
self.time_shift: float

Value of the time lag that gives maximum value of correlation between two light curves.

self.time_lags: numpy.ndarray

An array of time_lags calculated from correlation data

plot(labels=None, axis=None, title=None, marker='-', save=False, filename=None, ax=None)[source]

Plot the CrossCorrelation object using matplotlib, with self.time_lags on the x-axis and self.corr on the y-axis.

Parameters:
labelsiterable, default None

A list of tuple with xlabel and ylabel as strings.

axislist, tuple, string, default None

Parameter to set axis properties of matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for matplotlib.pyplot.axis() function.

titlestr, default None

The title of the plot.

markerstr, default -

Line style and color of the plot. Line styles and colors are combined in a single format string, as in 'bo' for blue circles. See matplotlib.pyplot.plot for more options.

saveboolean, optional (default=False)

If True, save the figure with specified filename.

filenamestr

File name of the image to save. Depends on the boolean save.

axmatplotlib.Axes object

An axes object to fill with the cross correlation plot.


AutoCorrelation

class stingray.AutoCorrelation(lc=None, mode='same')[source]

Make an auto-correlation from a light curve. You can also make an empty AutoCorrelation object to populate with your own auto-correlation data.

Parameters:
lc: :class:`stingray.Lightcurve` object, optional, default ``None``

The light curve data for correlation calculations.

mode: {``full``, ``valid``, ``same``}, optional, default ``same``

A string indicating the size of the correlation output. See the relevant scipy documentation [scipy-docs] for more details.

Attributes:
lc1, lc2: :class:`stingray.Lightcurve`

The light curve data for correlation calculations.

corr: numpy.ndarray

An array of correlation data calculated from lightcurve data

time_lags: numpy.ndarray

An array of all possible time lags against which each point in corr is calculated

dt: float

The time resolution of each lightcurve (used in time_lag calculations)

time_shift: float

Always zero: the maximum of the auto-correlation is always at zero lag.

n: int

Number of points in self.corr (length of the auto-correlation data)

cal_timeshift(dt=1.0)

Calculate the cross correlation against all possible time lags, both positive and negative.

This method uses scipy.signal.correlation_lags(), available in SciPy versions >= 1.6.1 ([scipy-docs-lag])

Parameters:
dt: float, optional, default ``1.0``

Time resolution of the light curve; it should be passed when the object is populated with correlation data and no information about the light curve can be extracted. Used to calculate time_lags.

Returns:
self.time_shift: float

Value of the time lag that gives maximum value of correlation between two light curves.

self.time_lags: numpy.ndarray

An array of time_lags calculated from correlation data

plot(labels=None, axis=None, title=None, marker='-', save=False, filename=None, ax=None)

Plot the CrossCorrelation object using matplotlib, with self.time_lags on the x-axis and self.corr on the y-axis.

Parameters:
labelsiterable, default None

A list of tuple with xlabel and ylabel as strings.

axislist, tuple, string, default None

Parameter to set axis properties of matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for matplotlib.pyplot.axis() function.

titlestr, default None

The title of the plot.

markerstr, default -

Line style and color of the plot. Line styles and colors are combined in a single format string, as in 'bo' for blue circles. See matplotlib.pyplot.plot for more options.

saveboolean, optional (default=False)

If True, save the figure with specified filename.

filenamestr

File name of the image to save. Depends on the boolean save.

axmatplotlib.Axes object

An axes object to fill with the cross correlation plot.


Dead-Time Corrections

stingray.deadtime.fad.FAD(data1, data2, segment_size, dt=None, norm='frac', plot=False, ax=None, smoothing_alg='gauss', smoothing_length=None, verbose=False, tolerance=0.05, strict=False, output_file=None, return_objects=False)[source]

Calculate Frequency Amplitude Difference-corrected (cross)power spectra.

Reference: Bachetti & Huppenkothen, 2018, ApJ, 853L, 21

The two input light curves must be strictly simultaneous, and recorded by two independent detectors with similar responses, so that the count rates are similar and the dead time is independent. The method does not apply to different energy channels of the same instrument, or to the signal observed by two instruments with very different responses. See the paper for caveats.

Parameters:
data1Lightcurve or EventList

Input data for channel 1

data2Lightcurve or EventList

Input data for channel 2. Must be strictly simultaneous to data1 and, if a light curve, have the same binning time. Also, it must be strictly independent, e.g. from a different detector. There must be no dead time cross-talk between the two time series.

segment_size: float

The final Fourier products are averaged over many segments of the input light curves. This is the length of each segment being averaged. Note that the light curve must be long enough to have at least 30 segments, as the result gets better as one averages more and more segments.

dtfloat

Time resolution of the light curves used to produce periodograms

norm: {``frac``, ``abs``, ``leahy``, ``none``}, default ``frac``

The normalization of the (real part of the) cross spectrum.

Returns:
resultsastropy.table.Table object or dict or str

The content of results depends on whether return_objects is True or False. If return_objects==False, results is a Table with the following columns:

  • pds1: the corrected PDS of lc1

  • pds2: the corrected PDS of lc2

  • cs: the corrected cospectrum

  • ptot: the corrected PDS of lc1 + lc2

If return_objects is True, results is a dict, with keys named like the columns listed above but with AveragedPowerspectrum or AveragedCrossspectrum objects instead of arrays.

Other Parameters:
plotbool, default False

Plot diagnostics: check if the smoothed Fourier difference scatter is a good approximation of the data scatter.

axmatplotlib.axes.Axes object

If not None and plot is True, use this axis object to produce the diagnostic plot. Otherwise, create a new figure.

smoothing_alg{‘gauss’, …}

Smoothing algorithm. For now, the only smoothing algorithm allowed is gauss, which applies a Gaussian Filter from scipy.

smoothing_lengthint, default segment_size * 3

Number of bins to smooth in gaussian window smoothing

verbose: bool, default False

Print out information on the outcome of the algorithm (recommended)

tolerancefloat, default 0.05

Accepted relative error on the FAD-corrected Fourier amplitude, to be used as success diagnostics. Should satisfy:

stdtheor = 2 / np.sqrt(n)
std = (average_corrected_fourier_diff / n).std()
np.abs((std - stdtheor) / stdtheor) < tolerance

strictbool, default False

Decide what to do if the condition on tolerance is not met. If True, raise a RuntimeError. If False, just throw a warning.

output_filestr, default None

Name of an output file (any extension automatically recognized by Astropy is fine)

stingray.deadtime.fad.calculate_FAD_correction(lc1, lc2, segment_size, norm='frac', gti=None, plot=False, ax=None, smoothing_alg='gauss', smoothing_length=None, verbose=False, tolerance=0.05, strict=False, output_file=None, return_objects=False)[source]

Calculate Frequency Amplitude Difference-corrected (cross)power spectra.

Reference: Bachetti & Huppenkothen, 2018, ApJ, 853L, 21

The two input light curves must be strictly simultaneous, and recorded by two independent detectors with similar responses, so that the count rates are similar and dead time is independent. The method does not apply to different energy channels of the same instrument, or to the signal observed by two instruments with very different responses. See the paper for caveats.

Parameters:
lc1: class:`stingray.lightcurve.Lightcurve`

Light curve from channel 1

lc2: class:`stingray.lightcurve.Lightcurve`

Light curve from channel 2. Must be strictly simultaneous to lc1 and have the same binning time. Also, it must be strictly independent, e.g. from a different detector. There must be no dead time cross-talk between the two light curves.

segment_size: float

The final Fourier products are averaged over many segments of the input light curves. This is the length of each segment being averaged. Note that the light curve must be long enough to have at least 30 segments, as the result gets better as one averages more and more segments.

norm: {``frac``, ``abs``, ``leahy``, ``none``}, default ``frac``

The normalization of the (real part of the) cross spectrum.

Returns:
resultsastropy.table.Table object or dict or str

The content of results depends on whether return_objects is True or False. If return_objects==False, results is a Table with the following columns:

  • pds1: the corrected PDS of lc1

  • pds2: the corrected PDS of lc2

  • cs: the corrected cospectrum

  • ptot: the corrected PDS of lc1 + lc2

If return_objects is True, results is a dict, with keys named like the columns listed above but with AveragedPowerspectrum or AveragedCrossspectrum objects instead of arrays.

Other Parameters:
plotbool, default False

Plot diagnostics: check if the smoothed Fourier difference scatter is a good approximation of the data scatter.

axmatplotlib.axes.Axes object

If not None and plot is True, use this axis object to produce the diagnostic plot. Otherwise, create a new figure.

smoothing_alg{‘gauss’, …}

Smoothing algorithm. For now, the only smoothing algorithm allowed is gauss, which applies a Gaussian Filter from scipy.

smoothing_lengthint, default segment_size * 3

Number of bins to smooth in gaussian window smoothing

verbose: bool, default False

Print out information on the outcome of the algorithm (recommended)

tolerancefloat, default 0.05

Accepted relative error on the FAD-corrected Fourier amplitude, to be used as success diagnostics. Should satisfy:

stdtheor = 2 / np.sqrt(n)
std = (average_corrected_fourier_diff / n).std()
np.abs((std - stdtheor) / stdtheor) < tolerance

strictbool, default False

Decide what to do if the condition on tolerance is not met. If True, raise a RuntimeError. If False, just throw a warning.

output_filestr, default None

Name of an output file (any extension automatically recognized by Astropy is fine)

stingray.deadtime.fad.get_periodograms_from_FAD_results(FAD_results, kind='ptot')[source]

Get Stingray periodograms from FAD results.

Parameters:
FAD_resultsastropy.table.Table object or str

Results from calculate_FAD_correction, either as a Table or an output file name

kindstr, one of [‘ptot’, ‘pds1’, ‘pds2’, ‘cs’]

Kind of periodogram to get (E.g., ‘ptot’ -> PDS from the sum of the two light curves, ‘cs’ -> cospectrum, etc.)

Returns:
resultsAveragedCrossspectrum or AveragedPowerspectrum object

The periodogram.

stingray.deadtime.model.check_A(rate, td, tb, max_k=100, save_to=None, linthresh=1e-06, rate_is_incident=True)[source]

Test that A is well-behaved.

This function produces a plot of \(A_k - r_0^2 t_b^2\) vs \(k\), to visually check that \(A_k \rightarrow r_0^2 t_b^2\) for \(k\rightarrow\infty\), as per Eq. 43 in Zhang+95.

With this function it is possible to determine how many inner loops k (limit_k in function pds_model_zhang) are necessary for a correct approximation of the dead time model.

Parameters:
ratefloat

Count rate, either incident or detected (use the rate_is_incident bool to specify)

tdfloat

Dead time

tbfloat

Bin time of the light curve

Other Parameters:
max_kint

Maximum k to plot

save_tostr, default None

If not None, save the plot to this file

linthreshfloat, default 0.000001

Linear threshold for the “symlog” scale of the plot

rate_is_incidentbool, default True

If True, the input rate is the incident count rate. If False, it is the detected one.

stingray.deadtime.model.check_B(rate, td, tb, max_k=100, save_to=None, linthresh=1e-06, rate_is_incident=True)[source]

Check that \(B\rightarrow 0\) for \(k\rightarrow \infty\).

This function produces a plot of \(B_k\) vs \(k\), to visually check that \(B_k \rightarrow 0\) for \(k\rightarrow\infty\), as per Eq. 43 in Zhang+95.

With this function it is possible to determine how many inner loops k (limit_k in function pds_model_zhang) are necessary for a correct approximation of the dead time model.

Parameters:
ratefloat

Count rate, either incident or detected (use the rate_is_incident bool to specify)

tdfloat

Dead time

tbfloat

Bin time of the light curve

Other Parameters:
max_kint

Maximum k to plot

save_tostr, default None

If not None, save the plot to this file

linthreshfloat, default 0.000001

Linear threshold for the “symlog” scale of the plot

rate_is_incidentbool, default True

If True, the input rate is the incident count rate. If False, it is the detected one.

stingray.deadtime.model.non_paralyzable_dead_time_model(freqs, dead_time, rate, bin_time=None, limit_k=200, background_rate=0.0, n_approx=None)[source]

Calculate the dead-time-modified power spectrum.

Parameters:
freqsarray of floats

Frequency array

dead_timefloat

Dead time

ratefloat

Detected source count rate

Returns:
powerarray of floats

Power spectrum

Other Parameters:
bin_timefloat

Bin time of the light curve

limit_kint, default 200

Limit the number of terms in the inner loops of the calculation to this value. Check the plots returned by the check_B and check_A functions to test that this number is adequate.

background_ratefloat, default 0

Detected background count rate. This is important to estimate when deadtime is given by the combination of the source counts and background counts (e.g. in an imaging X-ray detector).

n_approxint, default None

Number of bins to calculate the model power spectrum. If None, it will use the size of the input frequency array. Relatively simple models (e.g., low count rates compared to dead time) can use a smaller number of bins to speed up the calculation, and the final power values will be interpolated.

stingray.deadtime.model.pds_model_zhang(N, rate, td, tb, limit_k=60, rate_is_incident=True)[source]

Calculate the dead-time-modified power spectrum.

Parameters:
Nint

The number of spectral bins

ratefloat

Incident count rate

tdfloat

Dead time

tbfloat

Bin time of the light curve

Returns:
freqsarray of floats

Frequency array

powerarray of floats

Power spectrum

Other Parameters:
limit_kint

Limit the number of terms in the inner loops of the calculation to this value. Check the plots returned by the check_B and check_A functions to test that this number is adequate.

rate_is_incidentbool, default True

If True, the input rate is the incident count rate. If False, it is the detected count rate.

stingray.deadtime.model.r_det(td, r_i)[source]

Calculate detected countrate given dead time and incident countrate.

Parameters:
tdfloat

Dead time

r_ifloat

Incident countrate

stingray.deadtime.model.r_in(td, r_0)[source]

Calculate incident countrate given dead time and detected countrate.

Parameters:
tdfloat

Dead time

r_0float

Detected countrate


Higher-Order Fourier and Spectral Timing Products

These classes implement higher-order Fourier analysis products (e.g. Bispectrum) and Spectral Timing related methods taking advantage of both temporal and spectral information in modern data sets.

Bispectrum

class stingray.bispectrum.Bispectrum(lc, maxlag=None, window=None, scale='biased')[source]

Makes a Bispectrum object from a stingray.Lightcurve.

The bispectrum is a higher-order time series analysis method, calculated here by the indirect method: as the Fourier transform of the triple auto-correlation function, also known as the 3rd-order cumulant.

Parameters:
lcstingray.Lightcurve object

The light curve data for bispectrum calculation.

maxlagint, optional, default None

Maximum lag on both positive and negative sides of 3rd order cumulant (similar to lags in correlation). If None, the maximum lag is set to one-half the length of the light curve.

window{uniform, parzen, hamming, hanning, triangular, welch, blackman, flat-top}, optional, default ‘uniform’

Type of window function to apply to the data.

scale{biased, unbiased}, optional, default biased

Flag to decide biased or unbiased normalization for 3rd order cumulant function.

References

1) Maccarone, T. J., 2013, The biphase explained: understanding the asymmetries in coupled Fourier components of astronomical time series, MNRAS.

2) T. S. Rao, M. M. Gabr, An Introduction to Bispectral Analysis and Bilinear Time Series Models, Lecture Notes in Statistics, Volume 24, D. Brillinger, S. Fienberg, J. Gani, J. Hartigan, K. Krickeberg, Editors, Springer-Verlag, New York, NY, 1984.

3) MATLAB implementation of the bispectrum: https://www.mathworks.com/matlabcentral/fileexchange/60-bisp3cum

Examples

>>> from stingray.lightcurve import Lightcurve
>>> from stingray.bispectrum import Bispectrum
>>> lc = Lightcurve([1, 2, 3, 4, 5], [2, 3, 1, 1, 2])
>>> bs = Bispectrum(lc, maxlag=1)
>>> bs.lags
array([-1.,  0.,  1.])
>>> bs.freq
array([-0.5,  0.,  0.5])
>>> bs.cum3
array([[-0.2976,  0.1024,  0.1408],
    [ 0.1024,  0.144, -0.2976],
    [ 0.1408, -0.2976,  0.1024]])
>>> bs.bispec_mag
array([[ 1.26336794,  0.0032   ,  0.0032    ],
    [ 0.0032   ,  0.16     ,  0.0032    ],
    [ 0.0032   ,  0.0032   ,  1.26336794]])
>>> bs.bispec_phase
array([[ -9.65946229e-01,   2.25347190e-14,   3.46944695e-14],
    [  0.00000000e+00,   3.14159265e+00,   0.00000000e+00],
    [ -3.46944695e-14,  -2.25347190e-14,   9.65946229e-01]])
Attributes:
lcstingray.Lightcurve object

The light curve data to compute the Bispectrum.

fsfloat

Sampling frequencies

nint

Total Number of samples of light curve observations.

maxlagint

Maximum lag on both positive and negative sides of 3rd order cumulant (similar to lags in correlation)

signalnumpy.ndarray

Row vector of light curve counts for matrix operations

scale{biased, unbiased}

Flag to decide biased or unbiased normalization for 3rd order cumulant function.

lagsnumpy.ndarray

An array of time lags for which 3rd order cumulant is calculated

freqnumpy.ndarray

An array of freq values for Bispectrum.

cum3numpy.ndarray

A maxlag*2+1 x maxlag*2+1 matrix containing 3rd order cumulant data for different lags.

bispecnumpy.ndarray

A maxlag*2+1 x maxlag*2+1 matrix containing bispectrum data for different frequencies.

bispec_magnumpy.ndarray

Magnitude of the bispectrum

bispec_phasenumpy.ndarray

Phase of the bispectrum

plot_cum3(axis=None, save=False, filename=None)[source]

Plot the 3rd-order cumulant as a function of time lags using matplotlib: the lags attribute is shown on the x- and y-axes, and the cum3 attribute on the z-axis.

Parameters:
axislist, tuple, string, default None

Parameter to set axis properties of matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for matplotlib.pyplot.axis() method.

savebool, optional, default False

If True, save the figure with specified filename.

filenamestr

File name and path of the image to save. Depends on the boolean save.

Returns:
pltmatplotlib.pyplot object

Reference to plot, call show() to display it

plot_mag(axis=None, save=False, filename=None)[source]

Plot the magnitude of the bispectrum as a function of freq using matplotlib: the freq attribute is shown on the x- and y-axes, and the bispec_mag attribute on the z-axis.

Parameters:
axislist, tuple, string, default None

Parameter to set axis properties of matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for matplotlib.pyplot.axis() method.

savebool, optional, default False

If True, save the figure with specified filename and path.

filenamestr

File name and path of the image to save. Depends on the bool save.

Returns:
pltmatplotlib.pyplot object

Reference to plot, call show() to display it

plot_phase(axis=None, save=False, filename=None)[source]

Plot the phase of the bispectrum as a function of freq using matplotlib: the freq attribute is shown on the x- and y-axes, and the bispec_phase attribute on the z-axis.

Parameters:
axislist, tuple, string, default None

Parameter to set axis properties of matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for matplotlib.pyplot.axis() function.

savebool, optional, default False

If True, save the figure with specified filename and path.

filenamestr

File name and path of the image to save. Depends on the bool save.

Returns:
pltmatplotlib.pyplot object

Reference to plot, call show() to display it


Covariancespectrum

class stingray.Covariancespectrum(data, dt=None, band_interest=None, ref_band_interest=None, std=None)[source]

Compute a covariance spectrum for the data. The input data can be either event data or pre-made light curves. Event data can either be in the form of a numpy.ndarray with (time stamp, energy) pairs or a stingray.events.EventList object. If light curves are formed ahead of time, then a list of stingray.Lightcurve objects should be passed to the object, ideally one light curve for each band of interest.

For the case where the data is input as a list of stingray.Lightcurve objects, the reference band(s) should either be

  1. a single stingray.Lightcurve object,

  2. a list of stingray.Lightcurve objects with the reference band for each band of interest pre-made, or

  3. None, in which case reference bands will be formed by combining all light curves except for the band of interest.

In the case of event data, band_interest and ref_band_interest can be (multiple) pairs of energies, and the light curves for the bands of interest and reference bands will be produced dynamically.

Parameters:
data{numpy.ndarray | stingray.events.EventList object | list of stingray.Lightcurve objects}

data contains the time series data, either in the form of a 2-D array of (time stamp, energy) pairs for event data, or as a list of light curves. Note : The event list must be in sorted order with respect to the times of arrivals.

dtfloat

The time resolution of the stingray.Lightcurve formed from the energy bin. Only used if data is an event list.

band_interest{None, iterable of tuples}

If None, all possible energy values will be assumed to be of interest, and a covariance spectrum in the highest resolution will be produced. Note: if the input is a list of stingray.Lightcurve objects, then the user may supply their energy values here, for construction of a reference band.

ref_band_interest{None, tuple, stingray.Lightcurve, list of stingray.Lightcurve objects}

Defines the reference band to be used for comparison with the bands of interest. If None, all bands except the band of interest will be used for each band of interest, respectively. Alternatively, a tuple can be given for event list data, which will extract the reference band (always excluding the band of interest), or one may put in a single stingray.Lightcurve object to be used (the same for each band of interest) or a list of stingray.Lightcurve objects, one for each band of interest.

stdfloat or np.array or list of numbers

The term std is used to calculate the excess variance of a band. If std is None, the default Poisson case is assumed and the standard deviation is calculated as mean(lc)**0.5. If a single float is given, it is used directly as the standard deviation. If std is an iterable of numbers, their mean is used for the same purpose.

References

[1] Wilkinson, T. and Uttley, P. (2009), Accretion disc variability in the hard state of black hole X-ray binaries. Monthly Notices of the Royal Astronomical Society, 397: 666–676. doi: 10.1111/j.1365-2966.2009.15008.x

Examples

See the notebooks repository for detailed notebooks on the code.

Attributes:
unnorm_covarnp.ndarray

An array of arrays with mid point band_interest and their covariance. It is the array-form of the dictionary energy_covar. The covariance values are unnormalized.

covarnp.ndarray

Normalized covariance spectrum.

covar_errornp.ndarray

Errors of the normalized covariance spectrum.


AveragedCovariancespectrum

class stingray.AveragedCovariancespectrum(data, segment_size, dt=None, band_interest=None, ref_band_interest=None, std=None)[source]

Compute a covariance spectrum for the data, defined in Wilkinson & Uttley (2009), Equation 15.

Parameters:
data{numpy.ndarray | list of stingray.Lightcurve objects}

data contains the time series data, either in the form of a 2-D array of (time stamp, energy) pairs for event data, or as a list of stingray.Lightcurve objects. Note : The event list must be in sorted order with respect to the times of arrivals.

segment_sizefloat

The length of each segment in the averaged covariance spectrum. The number of segments will be calculated automatically using the total length of the data set and the segment_size defined here.

dtfloat

The time resolution of the stingray.Lightcurve formed from the energy bin. Only used if data is an event list.

band_interest{None, iterable of tuples}

If None, all possible energy values will be assumed to be of interest, and a covariance spectrum in the highest resolution will be produced. Note: if the input is a list of stingray.Lightcurve objects, then the user may supply their energy values here, for construction of a reference band.

ref_band_interest{None, tuple, stingray.Lightcurve, list of stingray.Lightcurve objects}

Defines the reference band to be used for comparison with the bands of interest. If None, all bands except the band of interest will be used for each band of interest, respectively. Alternatively, a tuple can be given for event list data, which will extract the reference band (always excluding the band of interest), or one may put in a single stingray.Lightcurve object to be used (the same for each band of interest) or a list of stingray.Lightcurve objects, one for each band of interest.

stdfloat or np.array or list of numbers

The term std is used to calculate the excess variance of a band. If std is None, the default Poisson case is assumed and the standard deviation is calculated as mean(lc)**0.5. If a single float is given, it is used directly as the standard deviation. If std is an iterable of numbers, their mean is used for the same purpose.

References

[1] Wilkinson, T. and Uttley, P. (2009), Accretion disc variability in the hard state of black hole X-ray binaries. Monthly Notices of the Royal Astronomical Society, 397: 666–676. doi: 10.1111/j.1365-2966.2009.15008.x

Attributes:
unnorm_covarnp.ndarray

An array of arrays with mid point band_interest and their covariance. It is the array-form of the dictionary energy_covar. The covariance values are unnormalized.

covarnp.ndarray

Normalized covariance spectrum.

covar_errornp.ndarray

Errors of the normalized covariance spectrum.


VarEnergySpectrum

Abstract base class for spectral timing products including both variability and spectral information.

class stingray.varenergyspectrum.VarEnergySpectrum(events, freq_interval, energy_spec, ref_band=None, bin_time=1, use_pi=False, segment_size=None, events2=None, return_complex=False)[source]
property energy

Give the centers of the energy intervals.

from_astropy_table(*args, **kwargs)[source]

Create a StingrayObject from data in an Astropy Table.

The table MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the entries in the table meta dictionary will form the new meta attributes of the object.

from_pandas(*args, **kwargs)[source]

Create a StingrayObject from data in a pandas DataFrame.

The dataframe MUST contain at least a column named like the main_array_attr. The rest of columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.

Since pandas does not support n-D data, multi-dimensional arrays can be specified as <colname>_dimN_M_K etc.

See documentation of make_1d_arrays_into_nd for details.

from_xarray(*args, **kwargs)[source]

Create a StingrayObject from data in an xarray Dataset.

The dataset MUST contain at least a column named like the main_array_attr. The rest of columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.

main_array_attr = 'energy'

Base class for variability-energy spectrum.

This class is only a base for the various variability spectra, and it’s not to be instantiated by itself.

Parameters:
eventsstingray.events.EventList object

event list

freq_interval[f0, f1], floats

the frequency range over which calculating the variability quantity

energy_speclist or tuple (emin, emax, N, type)

if a list is specified, this is interpreted as a list of bin edges; if a tuple is provided, this will encode the minimum and maximum energies, the number of intervals, and lin or log.

Other Parameters:
ref_band[emin, emax], floats; default None

minimum and maximum energy of the reference band. If None, the full band is used.

use_pibool, default False

Use channel instead of energy

events2stingray.events.EventList object

event list for the second channel, if not the same. Useful if the reference band has to be taken from another detector.

return_complex: bool, default False

In spectra that produce complex values, return the whole spectrum. Otherwise, the absolute value will be returned.

Attributes:
events1array-like

list of events used to produce the spectrum

events2array-like

if the spectrum requires it, second list of events

freq_intervalarray-like

interval of frequencies used to calculate the spectrum

energy_intervals[[e00, e01], [e10, e11], ...]

energy intervals used for the spectrum

spectrumarray-like

the spectral values, corresponding to each energy interval

spectrum_errorarray-like

the error bars corresponding to spectrum

energyarray-like

The centers of energy intervals
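The tuple form of energy_spec can be sketched as follows; the helper energy_edges below is hypothetical, written only to illustrate how (emin, emax, N, type) maps to bin edges:

```python
import numpy as np

def energy_edges(emin, emax, n, kind):
    # A tuple energy_spec = (emin, emax, N, "lin" | "log") corresponds
    # to N intervals delimited by these N + 1 edges:
    if kind == "lin":
        return np.linspace(emin, emax, n + 1)
    return np.logspace(np.log10(emin), np.log10(emax), n + 1)

print(energy_edges(0.3, 10.0, 5, "lin"))  # 5 linear intervals, 0.3-10 keV
```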


RmsEnergySpectrum

stingray.varenergyspectrum.RmsEnergySpectrum

alias of RmsSpectrum


LagEnergySpectrum

stingray.varenergyspectrum.LagEnergySpectrum

alias of LagSpectrum


ExcessVarianceSpectrum

class stingray.varenergyspectrum.ExcessVarianceSpectrum(events, freq_interval, energy_spec, bin_time=1, use_pi=False, segment_size=None, normalization='fvar')[source]

Calculate the Excess Variance spectrum.

For each energy interval, calculate the excess variance in the specified frequency range.

Parameters:
eventsstingray.events.EventList object

event list

freq_interval[f0, f1], list of float

the frequency range over which calculating the variability quantity

energy_speclist or tuple (emin, emax, N, type)

if a list is specified, this is interpreted as a list of bin edges; if a tuple is provided, this will encode the minimum and maximum energies, the number of intervals, and lin or log.

Other Parameters:
ref_band[emin, emax], floats; default None

minimum and maximum energy of the reference band. If None, the full band is used.

use_pibool, default False

Use channel instead of energy

Attributes:
events1array-like

list of events used to produce the spectrum

freq_intervalarray-like

interval of frequencies used to calculate the spectrum

energy_intervals[[e00, e01], [e10, e11], ...]

energy intervals used for the spectrum

spectrumarray-like

the spectral values, corresponding to each energy interval

spectrum_errorarray-like

the errorbars corresponding to spectrum
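The fvar normalization corresponds to the fractional rms amplitude: the square root of the excess variance (variance above the measurement noise) normalized by the mean. A small numpy illustration with synthetic counts, assuming Poisson errors (see Vaughan et al. 2003 for the standard definition):

```python
import numpy as np

counts = np.array([20., 5., 18., 6., 22., 4., 19., 7.])  # one band's light curve
errors = np.sqrt(counts)                                 # Poisson error bars

mean = counts.mean()
excess_var = counts.var(ddof=1) - (errors ** 2).mean()   # variance above noise
fvar = np.sqrt(excess_var) / mean                        # fractional rms
print(round(fvar, 3))  # -> 0.545
```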

add(other, operated_attrs=None, error_attrs=None, error_operation=<function sqsum>, inplace=False)

Add two StingrayObject instances.

Add the array values of two StingrayObject instances element by element, assuming the main array attributes of the instances match exactly.

All array attrs ending with _err are treated as error bars and propagated with the sum of squares.

GTIs are crossed, so that only common intervals are saved.

Parameters:
otherStingrayTimeseries object

A second time series object

Other Parameters:
operated_attrslist of str or None

Array attributes to be operated on. Defaults to all array attributes not ending in _err. The other array attributes will be discarded from the time series to avoid inconsistencies.

error_attrslist of str or None

Array attributes to be operated on with error_operation. Defaults to all array attributes ending with _err.

error_operationfunction

Function to be called to propagate the errors

inplacebool

If True, overwrite the current time series. Otherwise, return a new one.

apply_mask(mask: npt.ArrayLike, inplace: bool = False, filtered_attrs: list = None)

Apply a mask to all array attributes of the object

Parameters:
maskarray of bool

The mask. Has to be of the same length as self.time

Returns:
ts_newStingrayObject object

The new object with the mask applied if inplace is False, otherwise the same object.

Other Parameters:
inplacebool

If True, overwrite the current object. Otherwise, return a new one.

filtered_attrslist of str or None

Array attributes to be filtered. Defaults to all array attributes if None. The other array attributes will be set to None. The main array attr is always included.

array_attrs() list[str]

List the names of the array attributes of the Stingray Object.

By array attributes, we mean the ones with the same size and shape as main_array_attr (e.g. time in EventList)

Returns:
attributeslist of str

List of array attributes.

data_attributes() list[str]

Clean up the list of attributes, only giving out those pointing to data.

List all the attributes that point directly to valid data. This method goes through all the attributes of the class, eliminating methods, properties, and attributes that are complicated to serialize such as other StingrayObject, or arrays of objects.

This function does not make difference between array-like data and scalar data.

Returns:
data_attributeslist of str

List of attributes pointing to data that are not methods, properties, or other StingrayObject instances.

dict() dict

Return a dictionary representation of the object.

property energy

Give the centers of the energy intervals.

from_astropy_table(*args, **kwargs)

Create a StingrayObject from data in an Astropy Table.

The table MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the entries in the table meta dictionary will form the new meta attributes of the object.

from_pandas(*args, **kwargs)

Create a StingrayObject from data in a pandas DataFrame.

The dataframe MUST contain at least a column named like the main_array_attr. The rest of columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.

Since pandas does not support n-D data, multi-dimensional arrays can be specified as <colname>_dimN_M_K etc.

See documentation of make_1d_arrays_into_nd for details.

from_xarray(*args, **kwargs)

Create a StingrayObject from data in an xarray Dataset.

The dataset MUST contain at least a column named like the main_array_attr. The rest of columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.

get_meta_dict() dict

Give a dictionary with all non-None meta attrs of the object.

internal_array_attrs() list[str]

List the names of the internal array attributes of the Stingray Object.

These are array attributes that can be set by properties, and are generally indicated by an underscore followed by the name of the property that links to it (E.g. _counts in Lightcurve). By array attributes, we mean the ones with the same size and shape as main_array_attr (e.g. time in EventList)

Returns:
attributeslist of str

List of internal array attributes.

main_array_attr = 'energy'

Base class for variability-energy spectrum.

This class is only a base for the various variability spectra, and it’s not to be instantiated by itself.

Parameters:
eventsstingray.events.EventList object

event list

freq_interval[f0, f1], floats

the frequency range over which calculating the variability quantity

energy_speclist or tuple (emin, emax, N, type)

if a list is specified, this is interpreted as a list of bin edges; if a tuple is provided, this will encode the minimum and maximum energies, the number of intervals, and lin or log.

Other Parameters:
ref_band[emin, emax], floats; default None

minimum and maximum energy of the reference band. If None, the full band is used.

use_pibool, default False

Use channel instead of energy

events2stingray.events.EventList object

event list for the second channel, if not the same. Useful if the reference band has to be taken from another detector.

return_complex: bool, default False

In spectra that produce complex values, return the whole spectrum. Otherwise, the absolute value will be returned.

Attributes:
events1array-like

list of events used to produce the spectrum

events2array-like

if the spectrum requires it, second list of events

freq_intervalarray-like

interval of frequencies used to calculate the spectrum

energy_intervals[[e00, e01], [e10, e11], ...]

energy intervals used for the spectrum

spectrum : array-like

the spectral values, corresponding to each energy interval

spectrum_error : array-like

the error bars corresponding to spectrum

energy : array-like

The centers of energy intervals

meta_attrs() list[str]

List the names of the meta attributes of the Stingray Object.

By meta attributes, we mean the ones with a different size and shape from main_array_attr (e.g. mjdref in EventList)

Returns:
attributes : list of str

List of meta attributes.

pretty_print(func_to_apply=None, attrs_to_apply=[], attrs_to_discard=[]) str

Return a pretty-printed string representation of the object.

This is useful for debugging, and for interactive use.

Other Parameters:
func_to_apply : function

A function that modifies the attributes listed in attrs_to_apply. It must return the modified attributes and a label to be printed. If None, no function is applied.

attrs_to_apply : list of str

Attributes to be modified by func_to_apply.

attrs_to_discard : list of str

Attributes to be discarded from the output.

classmethod read(filename: str, fmt: str = None) Tso

Generic reader for :class:`StingrayObject`

Currently supported formats are

  • pickle (not recommended for long-term storage)

  • any other formats compatible with the writers in astropy.table.Table (ascii.ecsv, hdf5, etc.)

Files that need the astropy.table.Table interface MUST contain at least a column named like the main_array_attr. The default ascii format is enhanced CSV (ECSV). Data formats supporting the serialization of metadata (such as ECSV and HDF5) can contain all attributes such as mission, gti, etc with no significant loss of information. Other file formats might lose part of the metadata, so must be used with care.

.. note::

    Complex values can be dealt with out-of-the-box in some formats, like HDF5 or FITS, but not in others (e.g. all ASCII formats). With the latter formats, and in any case when fmt is ``None``, complex values should be stored as two columns of real numbers, whose names are of the format <variablename>.real and <variablename>.imag.
Parameters:
filename: str

Path and file name for the file to be read.

fmt: str

Available options are ‘pickle’, ‘hea’, and any Table-supported format such as ‘hdf5’, ‘ascii.ecsv’, etc.

Returns:
obj: StingrayObject object

The object reconstructed from file
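The <variablename>.real / <variablename>.imag convention for ASCII-unfriendly complex columns can be sketched with a pair of helpers (split_complex and join_complex are hypothetical names for illustration, not stingray API):

```python
import numpy as np

def split_complex(name, values):
    """Serialize one column: a complex column becomes two real columns."""
    values = np.asarray(values)
    if np.iscomplexobj(values):
        return {name + ".real": values.real, name + ".imag": values.imag}
    return {name: values}

def join_complex(columns):
    """Rebuild complex columns from matching <name>.real / <name>.imag pairs."""
    out = dict(columns)
    for name in [n for n in columns if n.endswith(".real")]:
        base = name[: -len(".real")]
        imag = base + ".imag"
        if imag in columns:
            out[base] = out.pop(name) + 1j * out.pop(imag)
    return out

cols = split_complex("cross", np.array([1 + 2j, 3 - 1j]))
restored = join_complex(cols)
```

This round-trips losslessly, which is why formats like ascii.ecsv can still carry complex-valued spectra.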

sub(other, operated_attrs=None, error_attrs=None, error_operation=<function sqsum>, inplace=False)

Subtract all the array attrs of two StingrayObject instances element by element, assuming the main array attributes of the instances match exactly.

All array attrs ending with _err are treated as error bars and propagated with the sum of squares.

GTIs are crossed, so that only common intervals are saved.

Parameters:
other : StingrayTimeseries object

A second time series object

Other Parameters:
operated_attrs : list of str or None

Array attributes to be operated on. Defaults to all array attributes not ending in _err. The other array attributes will be discarded from the time series to avoid inconsistencies.

error_attrs : list of str or None

Array attributes to be operated on with error_operation. Defaults to all array attributes ending with _err.

error_operation : function

Function to be called to propagate the errors

inplace : bool

If True, overwrite the current time series. Otherwise, return a new one.
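The element-wise subtraction with quadrature error propagation can be sketched in plain NumPy; sqsum here mirrors the default error_operation, and the arrays are made-up light-curve counts for illustration:

```python
import numpy as np

def sqsum(err1, err2):
    # Default error_operation: propagate error bars in quadrature.
    return np.sqrt(np.asarray(err1) ** 2 + np.asarray(err2) ** 2)

# Two matching time series (illustrative values); _err attrs carry error bars.
counts1 = np.array([10.0, 12.0, 9.0])
counts1_err = np.array([3.0, 3.5, 3.0])
counts2 = np.array([4.0, 5.0, 4.0])
counts2_err = np.array([2.0, 2.2, 2.0])

diff = counts1 - counts2
diff_err = sqsum(counts1_err, counts2_err)
```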

to_astropy_table(no_longdouble=False) Table

Create an Astropy Table from a StingrayObject

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the meta dictionary.

Other Parameters:
no_longdouble : bool

If True, reduce the precision of longdouble arrays to double precision. This needs to be done in some cases, e.g. when the table is to be saved in an architecture not supporting extended precision (e.g. ARM), but can also be useful when an extended precision is not needed.

to_pandas() DataFrame

Create a pandas DataFrame from a StingrayObject.

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the ds.attrs dictionary.

Since pandas does not support n-D data, multi-dimensional arrays are converted into columns before the conversion, with names <colname>_dimN_M_K etc.

See documentation of make_nd_into_arrays for details.

to_xarray() Dataset

Create an xarray Dataset from a StingrayObject.

Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the ds.attrs dictionary.

write(filename: str, fmt: str | None = None) None

Generic writer for :class:`StingrayObject`

Currently supported formats are

  • pickle (not recommended for long-term storage)

  • any other formats compatible with the writers in astropy.table.Table (ascii.ecsv, hdf5, etc.)

.. note::

    Complex values can be dealt with out-of-the-box in some formats, like HDF5 or FITS, but not in others (e.g. all ASCII formats). With the latter formats, and in any case when fmt is ``None``, complex values will be stored as two columns of real numbers, whose names are of the format <variablename>.real and <variablename>.imag.
Parameters:
filename: str

Name and path of the file to save the object list to.

fmt: str

The file format to store the data in. Available options are pickle, hdf5, ascii, fits


Utilities

Commonly used utility functionality, including Good Time Interval operations and input/output helper methods.

Statistical Functions

stingray.stats.a_from_pf(p)[source]

Fractional amplitude of modulation from pulsed fraction

If the pulsed profile is defined as p = mean * (1 + a * sin(phase)),

we define the “pulsed fraction” as pf = 2a/b, where b = 1 + a is the profile maximum (taking mean = 1) and a is the amplitude of the modulation.

Hence, a = pf / (2 - pf)

Examples

>>> a_from_pf(1)
1.0
>>> a_from_pf(0)
0.0
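Both conversions are simple closed forms (the formulas above); a standalone sketch that also checks the round trip:

```python
def a_from_pf(pf):
    # Invert pf = 2a / (1 + a): fractional amplitude from pulsed fraction.
    return pf / (2 - pf)

def pf_from_a(a):
    # Pulsed fraction from fractional amplitude.
    return 2 * a / (1 + a)

roundtrip = a_from_pf(pf_from_a(0.3))   # should recover 0.3
```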
stingray.stats.a_from_ssig(ssig, ncounts)[source]

Amplitude of a sinusoid corresponding to a given Z/PDS value

From Leahy et al. 1983, given a pulse profile p = lambda * (1 + a * sin(phase)), The theoretical value of Z^2_n is Ncounts / 2 * a^2

Note that if there are multiple sinusoidal components, one can use a = sqrt(sum(a_l^2)) (Bachetti+2021b)

Examples

>>> assert np.isclose(a_from_ssig(150, 30000), 0.1)
stingray.stats.amplitude_upper_limit(pmeas, counts, n=1, c=0.95, fft_corr=False, nyq_ratio=0)[source]

Upper limit on a sinusoidal modulation, given a measured power in the PDS/Z search.

This is Eq. 10 in Vaughan+94, equivalent to a_from_ssig, except that Vaughan+94 further corrects for the response inside an FFT bin and at frequencies close to Nyquist. These two corrections are enabled through fft_corr=True and by passing the correct \(f / f_{Nyq}\) of the FFT peak as nyq_ratio

To understand the meaning of this amplitude: if the modulation is described by:

\[p = \overline{p} (1 + a \sin(x))\]

this function returns a.

If it is a sum of sinusoidal harmonics instead,

\[p = \overline{p} \left(1 + \sum_l a_l \sin(lx)\right),\]

a is equivalent to \(\sqrt{\sum_l a_l^2}\).

See power_upper_limit

Parameters:
pmeas: float

The measured value of power

counts: int

The number of counts in the light curve used to calculate the spectrum

Returns:
a: float

The modulation amplitude that could produce P>pmeas with 1 - c probability

Other Parameters:
n: int

The number of summed powers to obtain pmeas. It can be multiple harmonics of the PDS, adjacent bins in a PDS summed to collect all the power in a QPO, or the n in Z^2_n

c: float

The confidence value for the probability (e.g. 0.95 = 95%)

fft_corr: bool

Apply a correction for the expected power concentrated in an FFT bin, which is about 0.773 on average (it’s 1 at the center of the bin, 2/pi at the bin edge).

nyq_ratio: float

Ratio of the frequency of this feature with respect to the Nyquist frequency. Important to know when dealing with FFTs, because the FFT response decays between 0 and f_Nyq similarly to the response inside a frequency bin: from 1 at 0 Hz to ~2/pi at f_Nyq

Examples

>>> aup = amplitude_upper_limit(40, 30000, 1, 0.99)
>>> aup_nyq = amplitude_upper_limit(40, 30000, 1, 0.99, nyq_ratio=1)
>>> assert np.isclose(aup_nyq, aup / (2 / np.pi))
>>> aup_corr = amplitude_upper_limit(40, 30000, 1, 0.99, fft_corr=True)
>>> assert np.isclose(aup_corr, aup / np.sqrt(0.773))
stingray.stats.classical_pvalue(power, nspec)[source]

Note: This is stingray’s original implementation of the probability distribution for the power spectrum. It is superseded by the implementation in pds_probability for practical purposes, but remains here for backwards compatibility and for its educational value as a clear, explicit implementation of the correct probability distribution.

Compute the probability of detecting the current power under the assumption that there is no periodic oscillation in the data.

This computes the single-trial p-value that the power was observed under the null hypothesis that there is no signal in the data.

Important: the underlying assumptions that make this calculation valid are:

  1. the powers in the power spectrum follow a chi-square distribution

  2. the power spectrum is normalized according to [Leahy 1983]_, such that the powers have a mean of 2 and a variance of 4

  3. there is only white noise in the light curve. That is, there is no aperiodic variability that would change the overall shape of the power spectrum.

Also note that the p-value is for a single trial, i.e. the power currently being tested. If more than one power or more than one power spectrum are being tested, the resulting p-value must be corrected for the number of trials (Bonferroni correction).

Mathematical formulation in [Groth 1975]_. Original implementation in IDL by Anna L. Watts.

Parameters:
power : float

The squared Fourier amplitude of a spectrum to be evaluated

nspec : int

The number of spectra or frequency bins averaged in power. This matters because averaging spectra or frequency bins increases the signal-to-noise ratio, i.e. makes the statistical distributions of the noise narrower, such that a smaller power might be very significant in averaged spectra even though it would not be in a single power spectrum.

Returns:
pval : float

The classical p-value of the observed power being consistent with the null hypothesis of white noise

References
.. [Leahy 1983] Leahy, D. A., et al. 1983, ApJ, 266, 160
.. [Groth 1975] Groth, E. J. 1975, ApJS, 29, 285

stingray.stats.equivalent_gaussian_Nsigma(p)[source]

Number of Gaussian sigmas corresponding to tail probability.

This function computes the value of the characteristic function of a standard Gaussian distribution for the tail probability equivalent to the provided p-value, and turns this value into units of standard deviations away from the Gaussian mean. This allows the user to make a statement about the signal such as “I detected this pulsation at 4.1 sigma”.

The example values below are obtained by brute-force integrating the Gaussian probability density function using the mpmath library between Nsigma and +inf.

Examples

>>> assert np.isclose(equivalent_gaussian_Nsigma(0.15865525393145707), 1,
...                   atol=0.01)
>>> assert np.isclose(equivalent_gaussian_Nsigma(0.0013498980316301035), 3,
...                   atol=0.01)
>>> assert np.isclose(equivalent_gaussian_Nsigma(9.865877e-10), 6,
...                   atol=0.01)
>>> assert np.isclose(equivalent_gaussian_Nsigma(6.22096e-16), 8,
...                   atol=0.01)
>>> assert np.isclose(equivalent_gaussian_Nsigma(3.0567e-138), 25, atol=0.1)
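A plain SciPy version of the same conversion is a one-liner (stingray uses extended-precision integration via mpmath for extremely small p-values; scipy.stats.norm is adequate down to p of roughly 1e-300):

```python
from scipy.stats import norm

def equivalent_gaussian_nsigma(p):
    # Number of Gaussian sigmas whose one-sided tail probability equals p.
    return norm.isf(p)

nsigma_3 = equivalent_gaussian_nsigma(0.0013498980316301035)  # ~3 sigma
```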
stingray.stats.fold_detection_level(nbin, epsilon=0.01, ntrial=1)[source]

Return the detection level for a folded profile.

See Leahy et al. (1983).

Parameters:
nbin : int

The number of bins in the profile

epsilon : float, default 0.01

The fractional probability that the signal has been produced by noise

Returns:
detlev : float

The epoch folding statistics corresponding to a probability epsilon * 100 % that the signal has been produced by noise

Other Parameters:
ntrial : int

The number of trials executed to find this profile
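Under the calibration of Leahy et al. (1983), the epoch folding statistic for pure noise follows a chi-square distribution with nbin - 1 degrees of freedom. A sketch of the detection level, assuming ntrial enters as a simple Bonferroni correction on epsilon (stingray's exact handling may differ):

```python
from scipy.stats import chi2

def fold_detection_level_sketch(nbin, epsilon=0.01, ntrial=1):
    # Invert the chi^2 survival function with nbin - 1 degrees of freedom,
    # after a Bonferroni correction for the number of trials.
    return chi2.isf(epsilon / ntrial, nbin - 1)

detlev = fold_detection_level_sketch(16)   # 16-bin profile, 1% false-alarm
```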

stingray.stats.fold_profile_logprobability(stat, nbin, ntrial=1)[source]

Calculate the probability of a certain folded profile, due to noise.

Parameters:
stat : float

The epoch folding statistics

nbin : int

The number of bins in the profile

Returns:
logp : float

The log-probability that the profile has been produced by noise

Other Parameters:
ntrial : int

The number of trials executed to find this profile

stingray.stats.fold_profile_probability(stat, nbin, ntrial=1)[source]

Calculate the probability of a certain folded profile, due to noise.

Parameters:
stat : float

The epoch folding statistics

nbin : int

The number of bins in the profile

Returns:
p : float

The probability that the profile has been produced by noise

Other Parameters:
ntrial : int

The number of trials executed to find this profile

stingray.stats.p_multitrial_from_single_trial(p1, n)[source]

Calculate a multi-trial p-value from a single-trial one.

Calling p the probability of a single success, the binomial distribution says that the probability of at least one success in n trials is

\[P(k\geq 1) = \sum_{k\geq 1} \binom{n}{k} p^k (1-p)^{(n-k)}\]

or more simply, using P(k ≥ 0) = 1

\[P(k\geq 1) = 1 - \binom{n}{0} (1-p)^n = 1 - (1-p)^n\]
Parameters:
p1 : float

The significance at which we reject the null hypothesis on each single trial.

n : int

The number of trials

Returns:
pn : float

The significance at which we reject the null hypothesis after multiple trials

stingray.stats.p_single_trial_from_p_multitrial(pn, n)[source]

Calculate the single-trial p-value from a total p-value

Let us say that we want to reject a null hypothesis at the pn level, after executing n different measurements. This might be the case because, e.g., we want to have a 1% probability of detecting a signal in an entire power spectrum, and we need to correct the detection level accordingly.

The typical procedure is dividing the initial probability (often called _epsilon_) by the number of trials. This is called the Bonferroni correction and it is often a good approximation, when pn is low: p1 = pn / n.

However, if pn is close to 1, this approximation gives incorrect results.

Here we calculate this probability by inverting the binomial problem. Given that (see p_multitrial_from_single_trial) the probability of getting at least one hit in n trials, for single-trial probability p, is

\[P (k \geq 1) = 1 - (1 - p)^n,\]

we obtain the single-trial probability from the multi-trial one as

\[p = 1 - (1 - P)^{(1/n)}\]

This is also known as Šidák correction.

Parameters:
pn : float

The significance at which we want to reject the null hypothesis after multiple trials

n : int

The number of trials

Returns:
p1 : float

The significance at which we reject the null hypothesis on each single trial.
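The Šidák inversion and the Bonferroni approximation can be compared directly; a standalone sketch of the formulas above:

```python
def p_multitrial(p1, n):
    # Forward direction: probability of at least one hit in n trials.
    return 1 - (1 - p1) ** n

def p_single_trial(pn, n):
    # Sidak inversion: exact single-trial probability for a multi-trial pn.
    return 1 - (1 - pn) ** (1 / n)

p_sidak = p_single_trial(0.01, 100)
p_bonferroni = 0.01 / 100          # good approximation for small pn
```

For small pn the two agree to better than a percent, but only the Šidák form stays correct as pn approaches 1.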

stingray.stats.pds_detection_level(epsilon=0.01, ntrial=1, n_summed_spectra=1, n_rebin=1)[source]

Detection level for a PDS.

Return the detection level (with probability 1 - epsilon) for a Power Density Spectrum of nbins bins, normalized a la Leahy (1983), based on the 2-dof \({\chi}^2\) statistics, corrected for rebinning (n_rebin) and multiple PDS averaging (n_summed_spectra)

Parameters:
epsilon : float

The single-trial probability value(s)

Other Parameters:
ntrial : int

The number of independent trials (the independent bins of the PDS)

n_summed_spectra : int

The number of power density spectra that have been averaged to obtain this power level

n_rebin : int

The number of power density bins that have been averaged to obtain this power level

Examples

>>> assert np.isclose(pds_detection_level(0.1), 4.6, atol=0.1)
>>> assert np.allclose(pds_detection_level(0.1, n_rebin=[1]), [4.6], atol=0.1)
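A sketch of this computation under the stated chi-square model, assuming the only effect of rebinning and averaging is to pool k = n_summed_spectra * n_rebin independent Leahy powers (scalar arguments only; the real function also accepts arrays):

```python
from scipy.stats import chi2

def pds_detection_level_sketch(epsilon=0.01, ntrial=1,
                               n_summed_spectra=1, n_rebin=1):
    # The average of k Leahy powers is distributed as chi^2_{2k} / k;
    # invert its survival function, Bonferroni-corrected for ntrial bins.
    k = n_summed_spectra * n_rebin
    return chi2.isf(epsilon / ntrial, 2 * k) / k

detlev = pds_detection_level_sketch(0.1)   # single, unbinned Leahy PDS
```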
stingray.stats.pds_probability(level, ntrial=1, n_summed_spectra=1, n_rebin=1)[source]

Give the probability of a given power level in PDS.

Return the probability of a certain power level in a Power Density Spectrum of nbins bins, normalized a la Leahy (1983), based on the 2-dof \({\chi}^2\) statistics, corrected for rebinning (n_rebin) and multiple PDS averaging (n_summed_spectra)

Parameters:
level : float or array of floats

The power level for which we are calculating the probability

Returns:
epsilon : float

The probability value(s)

Other Parameters:
ntrial : int

The number of independent trials (the independent bins of the PDS)

n_summed_spectra : int

The number of power density spectra that have been averaged to obtain this power level

n_rebin : int

The number of power density bins that have been averaged to obtain this power level

stingray.stats.pf_from_a(a)[source]

Pulsed fraction from fractional amplitude of modulation.

If the pulsed profile is defined as p = mean * (1 + a * sin(phase)),

we define the “pulsed fraction” as 2a/b, where b = 1 + a is the profile maximum (taking mean = 1) and a is the amplitude of the modulation.

Hence, pulsed fraction = 2a/(1+a)

Examples

>>> pf_from_a(1)
1.0
>>> pf_from_a(0)
0.0
stingray.stats.pf_from_ssig(ssig, ncounts)[source]

Estimate pulsed fraction for a sinusoid from a given Z or PDS power.

See a_from_ssig and pf_from_a for more details

Examples

>>> assert np.isclose(round(a_from_pf(pf_from_ssig(150, 30000)), 1), 0.1)
stingray.stats.pf_upper_limit(*args, **kwargs)[source]

Upper limit on pulsed fraction, given a measured power in the PDS/Z search.

See power_upper_limit and pf_from_ssig. All arguments are the same as amplitude_upper_limit

Parameters:
pmeas: float

The measured value of power

counts: int

The number of counts in the light curve used to calculate the spectrum

Returns:
pf: float

The pulsed fraction that could produce P>pmeas with 1 - c probability

Other Parameters:
n: int

The number of summed powers to obtain pmeas. It can be multiple harmonics of the PDS, adjacent bins in a PDS summed to collect all the power in a QPO, or the n in Z^2_n

c: float

The confidence value for the probability (e.g. 0.95 = 95%)

fft_corr: bool

Apply a correction for the expected power concentrated in an FFT bin, which is about 0.773 on average (it’s 1 at the center of the bin, 2/pi at the bin edge).

nyq_ratio: float

Ratio of the frequency of this feature with respect to the Nyquist frequency. Important to know when dealing with FFTs, because the FFT response decays between 0 and f_Nyq similarly to the response inside a frequency bin: from 1 at 0 Hz to ~2/pi at f_Nyq

Examples

>>> pfup = pf_upper_limit(40, 30000, 1, 0.99)
>>> assert np.isclose(pfup, 0.13, atol=0.01)
stingray.stats.phase_dispersion_detection_level(nsamples, nbin, epsilon=0.01, ntrial=1)[source]

Return the detection level for a phase dispersion minimization periodogram.

Parameters:
nsamples : int

The number of time bins in the light curve

nbin : int

The number of bins in the profile

epsilon : float, default 0.01

The fractional probability that the signal has been produced by noise

Returns:
detlev : float

The epoch folding statistics corresponding to a probability epsilon * 100 % that the signal has been produced by noise

Other Parameters:
ntrial : int

The number of trials executed to find this profile

stingray.stats.phase_dispersion_logprobability(stat, nsamples, nbin, ntrial=1)[source]

Calculate the log-probability of a peak in a phase dispersion minimization periodogram, due to noise.

Uses the beta-distribution from Schwarzenberg-Czerny (1997).

Parameters:
stat : float

The value of the PDM inverse peak

nsamples : int

The number of samples in the time series

nbin : int

The number of bins in the profile

Returns:
logp : float

The log-probability that the profile has been produced by noise

Other Parameters:
ntrial : int

The number of trials executed to find this profile

stingray.stats.phase_dispersion_probability(stat, nsamples, nbin, ntrial=1)[source]

Calculate the probability of a peak in a phase dispersion minimization periodogram, due to noise.

Uses the beta-distribution from Schwarzenberg-Czerny (1997).

Parameters:
stat : float

The value of the PDM inverse peak

nsamples : int

The number of samples in the time series

nbin : int

The number of bins in the profile

Returns:
p : float

The probability that the profile has been produced by noise

Other Parameters:
ntrial : int

The number of trials executed to find this profile

stingray.stats.power_confidence_limits(preal, n=1, c=0.95)[source]

Confidence limits on power, given a (theoretical) signal power.

This is to be used when we expect a given power (e.g. from the pulsed fraction measured in previous observations) and we want to know the range of values the measured power could take to a given confidence level. Adapted from Vaughan et al. 1994, noting that, after appropriate normalization of the spectral stats, the distribution of powers in the PDS and the Z^2_n searches is always described by a noncentral chi squared distribution.

Parameters:
preal: float

The theoretical signal-generated value of power

Returns:
pmeas: [float, float]

The upper and lower confidence interval (a, 1-a) on the measured power

Other Parameters:
n: int

The number of summed powers to obtain the result. It can be multiple harmonics of the PDS, adjacent bins in a PDS summed to collect all the power in a QPO, or the n in Z^2_n

c: float

The confidence level (e.g. 0.95=95%)

Examples

>>> cl = power_confidence_limits(150, c=0.84)
>>> assert np.allclose(cl, [127, 176], atol=1)
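Assuming the measured power follows a noncentral chi-square distribution with 2n degrees of freedom and noncentrality preal, as stated above, the limits can be sketched with scipy.stats.ncx2 (this may differ in numerical detail from stingray's implementation):

```python
from scipy.stats import ncx2

def power_confidence_limits_sketch(preal, n=1, c=0.95):
    # Quantiles of the noncentral chi^2 describing the measured power:
    # the (1-c) and c quantiles bracket the measurement with confidence c.
    return ncx2.ppf(1 - c, 2 * n, preal), ncx2.ppf(c, 2 * n, preal)

lo, hi = power_confidence_limits_sketch(150, c=0.84)
```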
stingray.stats.power_upper_limit(pmeas, n=1, c=0.95)[source]

Upper limit on signal power, given a measured power in the PDS/Z search.

Adapted from Vaughan et al. 1994, noting that, after appropriate normalization of the spectral stats, the distribution of powers in the PDS and the Z^2_n searches is always described by a noncentral chi squared distribution.

Note that Vaughan+94 gives p(pmeas | preal), while we are interested in p(preal | pmeas), which is not described by the NCX2 stat. Rather than integrating the CDF of this probability distribution, we start from a reasonable approximation and fit to find the preal that gives pmeas as a (e.g. 95%) confidence limit.

As Vaughan+94 shows, this power is always larger than the observed one. This is because we are looking for the maximum signal power that, combined with noise powers, would give the observed power. This involves the possibility that noise powers partially cancel out some signal power.

Parameters:
pmeas: float

The measured value of power

Returns:
psig: float

The signal power that could produce P>pmeas with 1 - c probability

Other Parameters:
n: int

The number of summed powers to obtain pmeas. It can be multiple harmonics of the PDS, adjacent bins in a PDS summed to collect all the power in a QPO, or the n in Z^2_n

c: float

The confidence value for the probability (e.g. 0.95 = 95%)

Examples

>>> pup = power_upper_limit(40, 1, 0.99)
>>> assert np.isclose(pup, 75, atol=2)
stingray.stats.ssig_from_a(a, ncounts)[source]

Theoretical power in the Z or PDS search for a sinusoid of amplitude a.

From Leahy et al. 1983, given a pulse profile p = lambda * (1 + a * sin(phase)), The theoretical value of Z^2_n is Ncounts / 2 * a^2

Note that if there are multiple sinusoidal components, one can use a = sqrt(sum(a_l^2)) (Bachetti+2021b)

Examples

>>> round(ssig_from_a(0.1, 30000), 1)
150.0
stingray.stats.ssig_from_pf(pf, ncounts)[source]

Theoretical power in the Z or PDS for a sinusoid of pulsed fraction pf.

See ssig_from_a and a_from_pf for more details

Examples

>>> assert round(ssig_from_pf(pf_from_a(0.1), 30000), 1) == 150.0
stingray.stats.z2_n_detection_level(n=2, epsilon=0.01, ntrial=1, n_summed_spectra=1)[source]

Return the detection level for the Z^2_n statistics.

See Buccheri et al. (1983), Bendat and Piersol (1971).

Parameters:
n : int, default 2

The n in $Z^2_n$ (number of harmonics, including the fundamental)

epsilon : float, default 0.01

The fractional probability that the signal has been produced by noise

Returns:
detlev : float

The epoch folding statistics corresponding to a probability epsilon * 100 % that the signal has been produced by noise

Other Parameters:
ntrial : int

The number of trials executed to find this profile

n_summed_spectra : int

Number of Z_2^n periodograms that are being averaged

stingray.stats.z2_n_logprobability(z2, n, ntrial=1, n_summed_spectra=1)[source]

Calculate the log-probability that a certain Z^2_n statistic is due to noise.

Parameters:
z2 : float

A Z^2_n statistics value

n : int, default 2

The n in $Z^2_n$ (number of harmonics, including the fundamental)

Returns:
logp : float

The log-probability that the Z^2_n value has been produced by noise

Other Parameters:
ntrial : int

The number of trials executed to find this profile

n_summed_spectra : int

Number of Z_2^n periodograms that were averaged to obtain z2

stingray.stats.z2_n_probability(z2, n, ntrial=1, n_summed_spectra=1)[source]

Calculate the probability that a certain Z^2_n statistic is due to noise.

Parameters:
z2 : float

A Z^2_n statistics value

n : int, default 2

The n in $Z^2_n$ (number of harmonics, including the fundamental)

Returns:
p : float

The probability that the Z^2_n value has been produced by noise

Other Parameters:
ntrial : int

The number of trials executed to find this profile

n_summed_spectra : int

Number of Z_2^n periodograms that were averaged to obtain z2

GTI Functionality

stingray.gti.append_gtis(gti0, gti1)[source]

Union of two non-overlapping GTIs.

If the two GTIs “touch”, this is tolerated and the touching GTIs are joined in a single one.

Parameters:
gti0: 2-d float array

List of GTIs of the form [[gti0_0, gti0_1], [gti1_0, gti1_1], ...].

gti1: 2-d float array

List of GTIs of the form [[gti0_0, gti0_1], [gti1_0, gti1_1], ...].

Returns:
gti: 2-d float array

The newly created GTI array.

Examples

>>> assert np.allclose(append_gtis([[0, 1]], [[2, 3]]), [[0, 1], [2, 3]])
>>> np.allclose(append_gtis([[0, 1], [4, 5]], [[2, 3]]),
...             [[0, 1], [2, 3], [4, 5]])
True
>>> assert np.allclose(append_gtis([[0, 1]], [[1, 3]]), [[0, 3]])
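A simplified sketch of this operation (assuming, as stated, that the inputs do not overlap but may touch):

```python
import numpy as np

def append_gtis_sketch(gti0, gti1):
    # Sort all intervals by start time, then join any pair that touches.
    allg = sorted(list(gti0) + list(gti1))
    merged = [list(allg[0])]
    for start, stop in allg[1:]:
        if start <= merged[-1][1]:        # touching GTIs are joined
            merged[-1][1] = max(merged[-1][1], stop)
        else:
            merged.append([start, stop])
    return np.asarray(merged)

joined = append_gtis_sketch([[0, 1]], [[1, 3]])   # touching -> single GTI
```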
stingray.gti.bin_intervals_from_gtis(gtis, segment_size, time, dt=None, fraction_step=1, epsilon=0.001)[source]

Compute start/stop times of equal time intervals, compatible with GTIs, and map them to the indices of an array of time stamps.

Used to start each FFT/PDS/cospectrum from the start of a GTI, and stop before the next gap in data (end of GTI). In this case, it is necessary to specify the time array containing the times of the light curve bins. Returns start and stop bins of the intervals to use for the PDS.

Parameters:
gtis : 2-d float array

List of GTIs of the form [[gti0_0, gti0_1], [gti1_0, gti1_1], ...].

segment_size : float

Length of each time segment.

time : array-like

Array of time stamps.

Returns:
spectrum_start_bins : array-like

List of starting bins in the original time array to use in spectral calculations.

spectrum_stop_bins : array-like

List of end bins to use in the spectral calculations.

Other Parameters:
dt : float, default median(diff(time))

Time resolution of the light curve.

epsilon : float, default 0.001

The tolerance, in fraction of dt, for the comparisons at the borders.

fraction_step : float

If the step is not a full segment_size but less (e.g. a moving window), this indicates the ratio between the step and segment_size (e.g. 0.5 means that the window shifts by half segment_size).

Examples

>>> time = np.arange(0.5, 13.5)
>>> gtis = [[0, 5], [6, 8], [9, 10]]
>>> segment_size = 2
>>> start_bins, stop_bins = bin_intervals_from_gtis(gtis,segment_size,time)
>>> assert np.allclose(start_bins, [0, 2, 6])
>>> assert np.allclose(stop_bins, [2, 4, 8])
>>> assert np.allclose(time[start_bins[0]:stop_bins[0]], [0.5, 1.5])
>>> assert np.allclose(time[start_bins[1]:stop_bins[1]], [2.5, 3.5])
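A simplified sketch of the segmentation logic, verified against the example above (the real function also supports fraction_step and border tolerances):

```python
import numpy as np

def segment_bins(gtis, segment_size, time, dt=None):
    # Walk through each GTI and carve out as many whole segments of
    # segment_size as fit, returning start/stop indices into `time`.
    time = np.asarray(time)
    if dt is None:
        dt = np.median(np.diff(time))
    nbin = int(segment_size / dt)                 # bins per segment
    starts, stops = [], []
    for g0, g1 in gtis:
        # First bin fully inside the GTI, and one past the last such bin.
        first = np.searchsorted(time - dt / 2, g0)
        last = np.searchsorted(time + dt / 2, g1, side="right")
        for s in range(first, last - nbin + 1, nbin):
            starts.append(s)
            stops.append(s + nbin)
    return np.array(starts), np.array(stops)

start_bins, stop_bins = segment_bins([[0, 5], [6, 8], [9, 10]], 2,
                                     np.arange(0.5, 13.5))
```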
stingray.gti.check_gtis(gti)[source]

Check if GTIs are well-behaved.

Check that:

  1. the shape of the GTI array is correct;

  2. no start > end

  3. no overlaps.

Parameters:
gti : list

A list of GTI (start, stop) pairs extracted from the FITS file.

Raises:
TypeError

If GTIs are of the wrong shape

ValueError

If GTIs have overlapping or displaced values

stingray.gti.check_separate(gti0, gti1)[source]

Check if two GTIs do not overlap.

Parameters:
gti0: 2-d float array

List of GTIs of form [[gti0_0, gti0_1], [gti1_0, gti1_1], ...].

gti1: 2-d float array

List of GTIs of form [[gti0_0, gti0_1], [gti1_0, gti1_1], ...].

Returns:
separate: bool

True if GTIs are mutually exclusive, False if not.

Examples

>>> gti0 = [[0, 10]]
>>> gti1 = [[20, 30]]
>>> assert check_separate(gti0, gti1)
>>> gti0 = [[0, 10]]
>>> gti1 = [[0, 10]]
>>> check_separate(gti0, gti1)
False
>>> gti0 = [[0, 10]]
>>> gti1 = [[10, 20]]
>>> assert check_separate(gti0, gti1)
>>> gti0 = [[0, 11]]
>>> gti1 = [[10, 20]]
>>> check_separate(gti0, gti1)
False
>>> gti0 = [[0, 11]]
>>> gti1 = [[10, 20]]
>>> check_separate(gti1, gti0)
False
>>> gti0 = [[0, 10], [30, 40]]
>>> gti1 = [[11, 28]]
>>> assert check_separate(gti0, gti1)
stingray.gti.create_gti_from_condition(time, condition, safe_interval=0, dt=None)[source]

Create a GTI list from a time array and a boolean mask (condition).

Parameters:
time : array-like

Array containing time stamps.

condition : array-like

An array of bools, of the same length as time. A possible condition can be, e.g., the result of lc > 0.

Returns:
gtis : [[gti0_0, gti0_1], [gti1_0, gti1_1], ...]

The newly created GTIs.

Other Parameters:
safe_interval : float or [float, float]

A safe interval to exclude at both ends (if single float) or the start and the end (if pair of values) of GTIs.

dt : float

The width (in sec) of each bin of the time array. Can be irregular.

stingray.gti.create_gti_mask(time, gtis, safe_interval=None, min_length=0, return_new_gtis=False, dt=None, epsilon=0.001)[source]

Create GTI mask.

Assumes that no overlaps are present between GTIs

Parameters:
time : numpy.ndarray

An array of time stamps

gtis : [[g0_0, g0_1], [g1_0, g1_1], ...], float array-like

The list of GTIs

Returns:
mask : bool array

A mask labelling all time stamps that are included in the GTIs versus those that are not.

new_gtis : Nx2 array

An array of new GTIs created by this function.

Other Parameters:
safe_interval : float or [float, float], default None

A safe interval to exclude at both ends (if single float) or the start and the end (if pair of values) of GTIs. If None, no safe interval is applied to data.

min_length : float

An optional minimum length for the GTIs to be applied. Only GTIs longer than min_length will be considered when creating the mask.

return_new_gtis : bool

If True, return the list of new GTIs (if min_length > 0)

dt : float

Time resolution of the data, i.e. the interval between time stamps.

epsilon : float

Fraction of dt that is tolerated at the borders of a GTI.
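A simplified sketch of the masking logic (treating time stamps as instants; the real function also handles safe_interval, min_length, and bin extents):

```python
import numpy as np

def gti_mask(time, gtis, epsilon=0.001, dt=None):
    # A time stamp is "good" if it falls inside any GTI, with a tolerance
    # of epsilon * dt at the borders.
    time = np.asarray(time)
    if dt is None:
        dt = np.median(np.diff(time))
    tol = epsilon * dt
    mask = np.zeros(time.size, dtype=bool)
    for start, stop in gtis:
        mask |= (time >= start - tol) & (time <= stop + tol)
    return mask

time = np.arange(0.5, 10.5)
mask = gti_mask(time, [[0, 3], [5, 7]])
```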

stingray.gti.create_gti_mask_complete(time, gtis, safe_interval=0, min_length=0, return_new_gtis=False, dt=None, epsilon=0.001)[source]