Stingray API¶
Library of Time Series Methods For Astronomical X-ray Data.
Data Classes¶
These classes define basic functionality related to common data types and typical methods
that apply to these data types, including basic read/write functionality. Currently
implemented are stingray.Lightcurve and stingray.events.EventList.
Lightcurve¶
 class stingray.Lightcurve(time, counts, err=None, input_counts=True, gti=None, err_dist='poisson', mjdref=0, dt=None, skip_checks=False, low_memory=False, mission=None, instr=None, header=None, **other_kw)[source]¶
Make a light curve object from an array of time stamps and an array of counts.
 Parameters
 time : iterable, astropy.time.Time, or astropy.units.Quantity object
    A list or array of time stamps for a light curve. Must be a type that can be cast to ``np.array`` or a ``List`` of floats, or that has a ``value`` attribute that does (e.g. an ``astropy.units.Quantity`` or ``astropy.time.Time`` object).
 counts : iterable, optional, default ``None``
    A list or array of the counts in each bin corresponding to the bins defined in ``time`` (note: use ``input_counts=False`` to input the count rate, i.e. counts/second; otherwise use counts/bin).
 err : iterable, optional, default ``None``
    A list or array of the uncertainties in each bin corresponding to the bins defined in ``time`` (note: use ``input_counts=False`` to input the count rate, i.e. counts/second; otherwise use counts/bin). If ``None``, we assume the data are Poisson-distributed and calculate the error from the average of the lower and upper 1-sigma confidence intervals for the Poisson distribution with mean equal to ``counts``.
 input_counts : bool, optional, default ``True``
    If True, the code assumes that the input data in ``counts`` are in units of counts/bin. If False, it assumes the data in ``counts`` are in counts/second.
 gti : 2-d float array, default ``None``
    ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``: Good Time Intervals. They are not applied to the data by default; other methods use them as an indication of the "safe" time intervals to use during analysis.
 err_dist : str, optional, default ``None``
    Statistical distribution used to calculate the uncertainties and other statistical values appropriately. The default makes no assumptions and keeps errors equal to zero.
 mjdref : float
    MJD reference (useful in most high-energy mission data)
 skip_checks : bool
    If True, the user specifies that data are already sorted and contain no infinite or nan points. Use at your own risk.
 low_memory : bool
    If True, all the lazily evaluated attributes (e.g., ``countrate`` and ``countrate_err`` if ``input_counts`` is True) will _not_ be stored in memory, but calculated every time they are requested.
 mission : str
    Mission that recorded the data (e.g. NICER)
 instr : str
    Instrument onboard the mission
 header : str
    The full header of the original FITS file, if relevant
 **other_kw
    Used internally. Any other keyword arguments will be ignored
 Attributes
 time : numpy.ndarray
    The array of midpoints of time bins.
 bin_lo : numpy.ndarray
    The array of lower time stamps of time bins.
 bin_hi : numpy.ndarray
    The array of upper time stamps of time bins.
 counts : numpy.ndarray
    The counts per bin corresponding to the bins in ``time``.
 counts_err : numpy.ndarray
    The uncertainties corresponding to ``counts``.
 countrate : numpy.ndarray
    The counts per second in each of the bins defined in ``time``.
 countrate_err : numpy.ndarray
    The uncertainties corresponding to ``countrate``.
 meanrate : float
    The mean count rate of the light curve.
 meancounts : float
    The mean counts of the light curve.
 n : int
    The number of data points in the light curve.
 dt : float
    The time resolution of the light curve.
 mjdref : float
    MJD reference date (``tstart`` / 86400 gives the date in MJD at the start of the observation).
 tseg : float
    The total duration of the light curve.
 tstart : float
    The start time of the light curve.
 gti : 2-d float array
    ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``: Good Time Intervals. They indicate the "safe" time intervals to be used during the analysis of the light curve.
 err_dist : string
    Statistic of the Lightcurve; it is used to calculate the uncertainties and other statistical values appropriately. It propagates to Spectrum classes.
 mission : str
    Mission that recorded the data (e.g. NICER)
 instr : str
    Instrument onboard the mission
 detector_id : iterable
    The detector that recorded each photon, if relevant (e.g. XMM, Chandra)
 header : str
    The full header of the original FITS file, if relevant
 analyze_lc_chunks(segment_size, func, fraction_step=1, **kwargs)[source]¶
Analyze segments of the light curve with any function.
 Parameters
 segment_size : float
    Length in seconds of the light curve segments
 func : function
    Function accepting a ``Lightcurve`` object as its single argument, plus possible additional keyword arguments, and returning a number or a tuple, e.g. ``(result, error)``, where both ``result`` and ``error`` are numbers.
 Returns
 start_times : array
    Lower time boundaries of all time segments.
 stop_times : array
    Upper time boundaries of all segments.
 result : array of N elements
    The result of ``func`` for each segment of the light curve.
 Other Parameters
 fraction_step : float
    If the step is not a full ``segment_size`` but less (e.g. a moving window), this indicates the ratio between the step and ``segment_size`` (e.g. 0.5 means that the window shifts by half a ``segment_size``).
 kwargs : keyword arguments
    Additional keyword arguments that, if present, will be passed to ``func``.
Examples
>>> import numpy as np
>>> time = np.arange(0, 10, 0.1)
>>> counts = np.zeros_like(time) + 10
>>> lc = Lightcurve(time, counts, dt=0.1)
>>> # Define a function that calculates the mean
>>> mean_func = lambda x: np.mean(x)
>>> # Calculate the mean in segments of 5 seconds
>>> start, stop, res = lc.analyze_lc_chunks(5, mean_func)
>>> len(res) == 2
True
>>> np.allclose(res, 10)
True
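The effect of ``fraction_step`` can be illustrated with plain numpy, independently of Stingray: segment start times advance by ``fraction_step * segment_size``. The ``segment_edges`` helper below is a hypothetical sketch, not the library's internal implementation.

```python
import numpy as np

def segment_edges(tstart, tstop, segment_size, fraction_step=1.0):
    """Return (start, stop) arrays of segment boundaries.

    With fraction_step < 1 the segments overlap, as in a moving window.
    """
    step = segment_size * fraction_step
    starts = np.arange(tstart, tstop - segment_size + 1e-9, step)
    return starts, starts + segment_size

# A 10-s light curve analyzed in 5-s segments:
starts, stops = segment_edges(0, 10, 5)        # two non-overlapping segments
half_starts, _ = segment_edges(0, 10, 5, 0.5)  # moving window shifted by 2.5 s
```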
 apply_gtis()[source]¶
Apply GTIs to a light curve. Filters the ``time``, ``counts``, ``countrate``, ``counts_err`` and ``countrate_err`` arrays for all bins that fall into Good Time Intervals, and recalculates the mean countrate and the number of bins.
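Conceptually, GTI filtering keeps only the bins whose time stamps fall inside a good interval. A minimal numpy sketch (the ``gti_mask`` helper is illustrative, not Stingray's actual code, which also accounts for bin edges):

```python
import numpy as np

def gti_mask(time, gti):
    """Boolean mask: True where a time stamp falls inside any GTI."""
    mask = np.zeros(len(time), dtype=bool)
    for start, stop in gti:
        mask |= (time >= start) & (time <= stop)
    return mask

time = np.arange(10, dtype=float)
counts = 10 * np.ones_like(time)
mask = gti_mask(time, [[0, 3], [6, 8]])
good_time, good_counts = time[mask], counts[mask]
```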
 baseline(lam, p, niter=10, offset_correction=False)[source]¶
Calculate the baseline of the light curve, accounting for GTIs.
 Parameters
 lam : float
    "Smoothness" parameter. Larger values make the baseline stiffer. Typically ``1e2 < lam < 1e9``.
 p : float
    "Asymmetry" parameter. Smaller values make the baseline more "horizontal". Typically ``0.001 < p < 0.1``, but not necessarily.
 Returns
 baseline : numpy.ndarray
    An array with the baseline of the light curve
 Other Parameters
 offset_correction : bool, default False
    By default, this method does not align to the running mean of the light curve, but stays below the light curve. If True, an additional step shifts the baseline to the middle of the light curve noise distribution.
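The ``lam``/``p`` parameterization matches asymmetric least-squares (ALS) smoothing (Eilers & Boelens). A dense-numpy sketch of that algorithm follows; Stingray's own implementation differs in details such as GTI handling and sparse solvers, so treat this as an illustration only.

```python
import numpy as np

def als_baseline(y, lam, p, niter=10):
    """Asymmetric least-squares baseline (dense solve; fine for short arrays)."""
    n = len(y)
    D = np.diff(np.eye(n), 2, axis=0)      # second-difference operator, (n-2) x n
    smooth = lam * D.T @ D                 # smoothness penalty, scaled by lam
    w = np.ones(n)
    for _ in range(niter):
        z = np.linalg.solve(np.diag(w) + smooth, w * y)
        w = np.where(y > z, p, 1 - p)      # down-weight points above the baseline
    return z

y = np.ones(50) * 5.0                      # a flat light curve
base = als_baseline(y, lam=1e4, p=0.01)    # baseline follows the flat level
```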
 check_lightcurve()[source]¶
Make various checks on the lightcurve.
It can be slow, use it if you are not sure about your input data.
 estimate_segment_size(min_total_counts=100, min_time_bins=100)[source]¶
Estimate a reasonable segment length for chunk-by-chunk analysis.
Choose a reasonable length for time segments, given a minimum number of total counts and a minimum number of time bins in each segment.
 Returns
 segment_size : float
    The length of the light curve chunks that satisfies the conditions
 Other Parameters
 min_total_counts : int
    Minimum number of counts for each chunk
 min_time_bins : int
    Minimum number of time bins
Examples
>>> import numpy as np
>>> time = np.arange(150)
>>> count = np.zeros_like(time) + 3
>>> lc = Lightcurve(time, count, dt=1)
>>> lc.estimate_segment_size(min_total_counts=10, min_time_bins=3)
4.0
>>> lc.estimate_segment_size(min_total_counts=10, min_time_bins=5)
5.0
>>> count[2:4] = 1
>>> lc = Lightcurve(time, count, dt=1)
>>> lc.estimate_segment_size(min_total_counts=3, min_time_bins=1)
3.0
>>> # A slightly more complex example
>>> dt = 0.2
>>> time = np.arange(0, 1000, dt)
>>> counts = np.random.poisson(100, size=len(time))
>>> lc = Lightcurve(time, counts, dt=dt)
>>> lc.estimate_segment_size(100, 2)
0.4
>>> min_total_bins = 40
>>> lc.estimate_segment_size(100, 40)
8.0
 static from_astropy_table(ts, **kwargs)[source]¶
Create a StingrayObject from data in an Astropy Table.
The table MUST contain at least a column named like the ``main_array_attr``. The rest of the columns will form the array attributes of the new object, while the attributes in ``ds.attrs`` will form the new meta attributes of the object.
It is strongly advisable to define such attributes and columns using the standard attributes of the wanted StingrayObject (e.g. ``time``, ``pi``, etc. for ``EventList``).
 static from_astropy_timeseries(ts, **kwargs)[source]¶
Create a ``StingrayTimeseries`` from data in an Astropy TimeSeries.
The timeseries has to define at least a column called ``time``; the rest of the columns will form the array attributes of the new event list, while the attributes in ``table.meta`` will form the new meta attributes of the event list.
It is strongly advisable to define such attributes and columns using the standard attributes of EventList: ``time``, ``pi``, ``energy``, ``gti``, etc.
 Parameters
 ts : astropy.timeseries.TimeSeries
    A TimeSeries object with the array attributes as columns, and the meta attributes in the ``meta`` dictionary
 Returns
 ts : StingrayTimeseries
    Timeseries object
 static from_lightkurve(lk, skip_checks=True)[source]¶
Creates a new ``Lightcurve`` from a ``lightkurve.LightCurve``.
 Parameters
 lk : lightkurve.LightCurve
    A lightkurve LightCurve object
 skip_checks : bool
    If True, the user specifies that data are already sorted and contain no infinite or nan points. Use at your own risk.
 join(other, skip_checks=False)[source]¶
Join two lightcurves into a single object.
The new ``Lightcurve`` object will contain time stamps from both objects. The ``counts`` and ``countrate`` attributes in the resulting object will contain the union of the non-overlapping parts of the two individual objects, or the average in case of overlapping ``time`` arrays of both ``Lightcurve`` objects.
Good Time Intervals are also joined.
Note: ideally, the ``time`` arrays of both lightcurves should not overlap.
 Parameters
 other : Lightcurve object
    The other ``Lightcurve`` object to be joined with this one.
 skip_checks : bool
    If True, the user specifies that data are already sorted and contain no infinite or nan points. Use at your own risk.
 Returns
 lc_new : Lightcurve object
    The resulting ``Lightcurve`` object.
Examples
>>> time1 = [5, 10, 15]
>>> count1 = [300, 100, 400]
>>> time2 = [20, 25, 30]
>>> count2 = [600, 1200, 800]
>>> lc1 = Lightcurve(time1, count1, dt=5)
>>> lc2 = Lightcurve(time2, count2, dt=5)
>>> lc = lc1.join(lc2)
>>> lc.time
array([ 5, 10, 15, 20, 25, 30])
>>> np.allclose(lc.counts, [ 300, 100, 400, 600, 1200, 800])
True
 static make_lightcurve(toa, dt, tseg=None, tstart=None, gti=None, mjdref=0, use_hist=False)[source]¶
Make a light curve out of photon arrival times, with a given time resolution ``dt``. Note that ``dt`` should be larger than the native time resolution of the instrument that took the data.
 Parameters
 toa : iterable
    List of photon arrival times
 dt : float
    Time resolution of the light curve (the bin width)
 tseg : float, optional, default ``None``
    The total duration of the light curve. If this is ``None``, the total duration of the light curve will be the interval between the arrival times of the first and the last photons in ``toa``.
    Note: if ``tseg`` is not divisible by ``dt`` (i.e. if ``tseg``/``dt`` is not an integer), the last fractional bin will be dropped!
 tstart : float, optional, default ``None``
    The start time of the light curve. If this is ``None``, the arrival time of the first photon will be used as the start time of the light curve.
 gti : 2-d float array
    ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``: Good Time Intervals
 use_hist : bool
    Use ``np.histogram`` instead of ``np.bincount``. Might be advantageous for very short datasets.
 Returns
 lc : Lightcurve object
    A ``Lightcurve`` object with the binned light curve
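The core of this binning can be sketched with ``np.histogram`` alone (illustrative only; the actual method also handles GTIs, ``tseg``, and the ``use_hist`` switch):

```python
import numpy as np

# Photon arrival times and the desired bin width
toa = np.array([0.1, 0.3, 0.35, 1.2, 1.4, 2.7])
dt = 1.0
tstart = 0.0

# Number of full bins needed to cover all arrival times
nbins = int((toa.max() - tstart) // dt) + 1

edges = tstart + dt * np.arange(nbins + 1)
counts, _ = np.histogram(toa, bins=edges)
time = edges[:-1] + dt / 2   # bin midpoints, as in Lightcurve.time
```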
 plot(witherrors=False, labels=None, axis=None, title=None, marker='', save=False, filename=None)[source]¶
Plot the light curve using ``matplotlib``.
Plot the light curve object on a graph: ``self.time`` on the x-axis and ``self.counts`` on the y-axis, with ``self.counts_err`` optionally as error bars.
 Parameters
 witherrors : boolean, default False
    Whether to plot the Lightcurve with error bars or not
 labels : iterable, default ``None``
    A list or tuple with ``xlabel`` and ``ylabel`` as strings.
 axis : list, tuple, string, default ``None``
    Parameter to set axis properties of the ``matplotlib`` figure. For example it can be a list like ``[xmin, xmax, ymin, ymax]`` or any other acceptable argument for the ``matplotlib.pyplot.axis()`` method.
 title : str, default ``None``
    The title of the plot.
 marker : str, default ''
    Line style and color of the plot. Line styles and colors are combined in a single format string, as in ``'bo'`` for blue circles. See ``matplotlib.pyplot.plot`` for more options.
 save : boolean, optional, default ``False``
    If ``True``, save the figure with the specified filename.
 filename : str
    File name of the image to save. Depends on the boolean ``save``.
 classmethod read(filename, fmt=None, format_=None, err_dist='gauss', skip_checks=False)[source]¶
Read a ``Lightcurve`` object from file.
Currently supported formats are:
- pickle (not recommended for long-term storage)
- hea : FITS light curves from HEASARC-supported missions
- any other format compatible with the writers in ``astropy.table.Table`` (ascii.ecsv, hdf5, etc.)
Files that need the ``astropy.table.Table`` interface MUST contain at least a ``time`` column and a ``counts`` or ``countrate`` column. The default ascii format is enhanced CSV (ECSV). Data formats supporting the serialization of metadata (such as ECSV and HDF5) can contain all light curve attributes such as ``dt``, ``gti``, etc., with no significant loss of information. Other file formats might lose part of the metadata, so they must be used with care.
 Parameters
 filename : str
    Path and file name for the file to be read.
 fmt : str
    Available options are 'pickle', 'hea', and any ``Table``-supported format such as 'hdf5', 'ascii.ecsv', etc.
 Returns
 lc : Lightcurve object
 Other Parameters
 err_dist : str, default='gauss'
    Default error distribution if not specified in the file (e.g. for ASCII files). The default is 'gauss' just because it is likely that people using ASCII light curves will want to specify Gaussian error bars, if any.
 skip_checks : bool
    See ``Lightcurve`` documentation
 rebin(dt_new=None, f=None, method='sum')[source]¶
Rebin the light curve to a new time resolution. While the new resolution need not be an integer multiple of the previous time resolution, be aware that if it is not, the last bin will be cut off by the fraction left over by the integer division.
 Parameters
 dt_new : float
    The new time resolution of the light curve. Must be larger than the time resolution of the old light curve!
 method : {``sum`` | ``mean`` | ``average``}, optional, default ``sum``
    This keyword argument sets whether the counts in the new bins should be summed or averaged.
 Returns
 lc_new : Lightcurve object
    The ``Lightcurve`` object with the new, binned light curve.
 Other Parameters
 f : float
    The rebin factor. If specified, it substitutes ``dt_new`` with ``f * self.dt``.
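For an integer rebin factor, rebinning with ``method='sum'`` amounts to summing groups of bins, with the trailing leftover discarded, as the note above warns. A numpy sketch (``rebin_counts`` is a hypothetical helper; the library also supports non-integer factors):

```python
import numpy as np

def rebin_counts(counts, dt_old, dt_new):
    """Sum counts into coarser bins; trailing leftover bins are discarded."""
    factor = int(dt_new // dt_old)
    n_new = len(counts) // factor
    return counts[: n_new * factor].reshape(n_new, factor).sum(axis=1)

counts = np.array([1, 2, 3, 4, 5])
coarse = rebin_counts(counts, dt_old=1.0, dt_new=2.0)  # the last bin (5) is dropped
```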
 sort(reverse=False)[source]¶
Sort a Lightcurve object by time.
A Lightcurve can be sorted in either increasing or decreasing order using this method. The time array gets sorted and the counts array is changed accordingly.
 Parameters
 reverse : boolean, default False
    If True then the object is sorted in reverse order.
 Returns
 lc_new : Lightcurve object
    The ``Lightcurve`` object with sorted time and counts arrays.
Examples
>>> time = [2, 1, 3]
>>> count = [200, 100, 300]
>>> lc = Lightcurve(time, count, dt=1)
>>> lc_new = lc.sort()
>>> lc_new.time
array([1, 2, 3])
>>> np.allclose(lc_new.counts, [100, 200, 300])
True
 sort_counts(reverse=False)[source]¶
Sort a ``Lightcurve`` object in accordance with its counts array.
A ``Lightcurve`` can be sorted in either increasing or decreasing order using this method. The counts array gets sorted and the time array is changed accordingly.
 Parameters
 reverse : boolean, default ``False``
    If ``True`` then the object is sorted in reverse order.
 Returns
 lc_new : Lightcurve object
    The ``Lightcurve`` object with sorted ``time`` and ``counts`` arrays.
Examples
>>> time = [1, 2, 3]
>>> count = [200, 100, 300]
>>> lc = Lightcurve(time, count, dt=1)
>>> lc_new = lc.sort_counts()
>>> lc_new.time
array([2, 1, 3])
>>> np.allclose(lc_new.counts, [100, 200, 300])
True
 split(min_gap, min_points=1)[source]¶
For data with gaps, it can sometimes be useful to split the light curve into separate, evenly sampled objects along those gaps. This method allows one to do this: it finds data gaps of a specified minimum size and produces a list of new ``Lightcurve`` objects, one for each contiguous segment.
 Parameters
 min_gap : float
    The length of a data gap, in the same units as the ``time`` attribute of the ``Lightcurve`` object. Any smaller gaps will be ignored; any larger gaps will be identified and used to split the light curve.
 min_points : int, default 1
    The minimum number of data points in each light curve. Light curves with fewer data points will be ignored.
 Returns
 lc_split : iterable of Lightcurve objects
    The list of all contiguous light curves
Examples
>>> time = np.array([1, 2, 3, 6, 7, 8, 11, 12, 13])
>>> counts = np.random.rand(time.shape[0])
>>> lc = Lightcurve(time, counts, dt=1, skip_checks=True)
>>> split_lc = lc.split(1.5)
 split_by_gti(gti=None, min_points=2)[source]¶
Split the current ``Lightcurve`` object into a list of ``Lightcurve`` objects, one for each continuous GTI segment as defined in the ``gti`` attribute.
 Parameters
 min_points : int, default 2
    The minimum number of data points in each light curve. Light curves with fewer data points will be ignored.
 Returns
 list_of_lcs : list
    A list of ``Lightcurve`` objects, one for each GTI segment
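Conceptually, each GTI selects a contiguous slice of the light curve, and segments with too few points are dropped. A numpy sketch (``split_by_intervals`` is a hypothetical helper, not the library routine):

```python
import numpy as np

def split_by_intervals(time, counts, gti, min_points=2):
    """Return (time, counts) pairs, one per GTI, skipping short segments."""
    pieces = []
    for start, stop in gti:
        mask = (time >= start) & (time <= stop)
        if mask.sum() >= min_points:
            pieces.append((time[mask], counts[mask]))
    return pieces

time = np.arange(10, dtype=float)
counts = np.ones_like(time)
pieces = split_by_intervals(time, counts, [[0, 3], [5, 5.5], [6, 9]])
# The middle GTI contains a single point and is dropped
```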
 to_astropy_table()[source]¶
Create an Astropy Table from a ``StingrayObject``.
Array attributes (e.g. ``time``, ``pi``, ``energy``, etc. for ``EventList``) are converted into columns, while meta attributes (``mjdref``, ``gti``, etc.) are saved into the ``meta`` dictionary.
 to_astropy_timeseries()[source]¶
Save the ``StingrayTimeseries`` to an ``Astropy`` timeseries.
Array attributes (``time``, ``pi``, ``energy``, etc.) are converted into columns, while meta attributes (``mjdref``, ``gti``, etc.) are saved into the ``meta`` dictionary.
 Returns
 ts : astropy.timeseries.TimeSeries
    A TimeSeries object with the array attributes as columns, and the meta attributes in the ``meta`` dictionary
 to_lightkurve()[source]¶
Returns a ``lightkurve.LightCurve`` object. This feature requires Lightkurve to be installed (e.g. ``pip install lightkurve``). An ``ImportError`` will be raised if the package is not available.
 Returns
 lightcurve : lightkurve.LightCurve
    A lightkurve LightCurve object.
 truncate(start=0, stop=None, method='index')[source]¶
Truncate a ``Lightcurve`` object.
This method takes a ``start`` and a ``stop`` point (either as indices, or as times in the same unit as those in the ``time`` attribute) and truncates all bins before ``start`` and after ``stop``, then returns a new ``Lightcurve`` object with the truncated light curve.
 Parameters
 start : int, default 0
    Index (or time stamp) of the starting point of the truncation. If no value is set for the start point, then all points from the first element in the ``time`` array are taken into account.
 stop : int, default ``None``
    Index (or time stamp) of the ending point (exclusive) of the truncation. If no value of stop is set, then all points up to and including the last point in the counts array are taken into account.
 method : {``index`` | ``time``}, optional, default ``index``
    Type of the start and stop values. If set to ``index``, the values are treated as indices of the counts array; if set to ``time``, the values are treated as actual time values.
 Returns
 lc_new : Lightcurve object
    The ``Lightcurve`` object with truncated time and counts arrays.
Examples
>>> time = [1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> count = [10, 20, 30, 40, 50, 60, 70, 80, 90]
>>> lc = Lightcurve(time, count, dt=1)
>>> lc_new = lc.truncate(start=2, stop=8)
>>> np.allclose(lc_new.counts, [30, 40, 50, 60, 70, 80])
True
>>> lc_new.time
array([3, 4, 5, 6, 7, 8])
>>> # Truncation can also be done by time values
>>> lc_new = lc.truncate(start=6, method='time')
>>> lc_new.time
array([6, 7, 8, 9])
>>> np.allclose(lc_new.counts, [60, 70, 80, 90])
True
EventList¶
 class stingray.events.EventList(time=None, energy=None, ncounts=None, mjdref=0, dt=0, notes='', gti=None, pi=None, high_precision=False, mission=None, instr=None, header=None, detector_id=None, ephem=None, timeref=None, timesys=None, **other_kw)[source]¶
Basic class for event list data. Event lists generally correspond to individual events (e.g. photons) recorded by the detector, and their associated properties. For X-ray data, where this type commonly occurs, events are time stamps of when a photon arrived at the detector, and (optionally) the photon energy associated with the event.
 Parameters
 time: iterable
A list or array of time stamps
 Other Parameters
 dt: float
The time resolution of the events. Only relevant when using events to produce light curves with similar bin time.
 energy : iterable
    A list or array of photon energy values in keV
 mjdref : float
    The MJD used as a reference for the time array.
 ncounts : int
    Number of desired data points in the event list.
 gti : ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``
    Good Time Intervals
 pi : integer, numpy.ndarray
    PI channels
 notes : str
    Any useful annotations
 high_precision : bool
    Change the precision of self.time to float128. Useful while dealing with fast pulsars.
 mission : str
    Mission that recorded the data (e.g. NICER)
 instr : str
    Instrument onboard the mission
 header : str
    The full header of the original FITS file, if relevant
 detector_id : iterable
    The detector that recorded each photon (if the instrument has more than one, e.g. XMM/EPIC-pn)
 timeref : str
    The time reference, as recorded in the FITS file (e.g. SOLARSYSTEM)
 timesys : str
    The time system, as recorded in the FITS file (e.g. TDB)
 ephem : str
    The JPL ephemeris used to barycenter the data, if any (e.g. DE430)
 **other_kw
    Used internally. Any other keyword arguments will be ignored
 Attributes
 time : numpy.ndarray
    The array of event arrival times, in seconds from the reference MJD defined in ``mjdref``
 energy : numpy.ndarray
    The array of photon energy values
 ncounts : int
    The number of data points in the event list
 dt : float
    The time resolution of the events. Only relevant when using events to produce light curves with similar bin time.
 mjdref : float
    The MJD used as a reference for the time array.
 gti : ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``
    Good Time Intervals
 pi : integer, numpy.ndarray
    PI channels
 high_precision : bool
    Change the precision of self.time to float128. Useful while dealing with fast pulsars.
 mission : str
    Mission that recorded the data (e.g. NICER)
 instr : str
    Instrument onboard the mission
 detector_id : iterable
    The detector that recorded each photon, if relevant (e.g. XMM, Chandra)
 header : str
    The full header of the original FITS file, if relevant
 apply_deadtime(deadtime, inplace=False, **kwargs)[source]¶
Apply a deadtime filter to this event list.
Additional arguments in ``kwargs`` are passed to ``get_deadtime_mask``.
 Parameters
 deadtime : float
    Value of dead time to apply to data
 inplace : bool, default False
    If True, apply the deadtime to the current event list. Otherwise, return a new event list.
 Returns
 new_event_list : EventList object
    Filtered event list. If ``inplace`` is True, this is the input object filtered for dead time; otherwise it is a new object.
 additional_output : object
    Only returned if ``return_all`` is True. See ``get_deadtime_mask`` for more details.
Examples
>>> events = np.array([1, 1.05, 1.07, 1.08, 1.1, 2, 2.2, 3, 3.1, 3.2])
>>> events = EventList(events)
>>> events.pi = np.array([1, 2, 2, 2, 2, 1, 1, 1, 2, 1])
>>> events.energy = np.array([1, 2, 2, 2, 2, 1, 1, 1, 2, 1])
>>> events.mjdref = 10
>>> filt_events, retval = events.apply_deadtime(0.11, inplace=False,
...                                             verbose=False,
...                                             return_all=True)
>>> filt_events is events
False
>>> expected = np.array([1, 2, 2.2, 3, 3.2])
>>> np.allclose(filt_events.time, expected)
True
>>> np.allclose(filt_events.pi, 1)
True
>>> np.allclose(filt_events.energy, 1)
True
>>> np.allclose(events.pi, 1)
False
>>> filt_events = events.apply_deadtime(0.11, inplace=True,
...                                     verbose=False)
>>> filt_events is events
True
 apply_mask(mask, inplace=False)[source]¶
Apply a mask to all array attributes of the event list
 Parameters
 mask : array of bool
    The mask. Has to be of the same length as ``self.time``.
 Other Parameters
 inplacebool
If True, overwrite the current event list. Otherwise, return a new one.
Examples
>>> evt = EventList(time=[0, 1, 2], mission="nustar")
>>> evt.bubuattr = [222, 111, 333]
>>> newev0 = evt.apply_mask([True, True, False], inplace=False)
>>> newev1 = evt.apply_mask([True, True, False], inplace=True)
>>> newev0.mission == "nustar"
True
>>> np.allclose(newev0.time, [0, 1])
True
>>> np.allclose(newev0.bubuattr, [222, 111])
True
>>> np.allclose(newev1.time, [0, 1])
True
>>> evt is newev1
True
 filter_energy_range(energy_range, inplace=False, use_pi=False)[source]¶
Filter the event list by a given energy range.
 Parameters
 energy_range : [float, float]
    Energy range in keV, or in PI channel (if ``use_pi`` is True)
 Other Parameters
 inplacebool, default False
Do the change in place (modify current event list). Otherwise, copy to a new event list.
 use_pibool, default False
Use PI channel instead of energy in keV
Examples
>>> events = EventList(time=[0, 1, 2], energy=[0.3, 0.5, 2], pi=[3, 5, 20])
>>> e1 = events.filter_energy_range([0, 1])
>>> np.allclose(e1.time, [0, 1])
True
>>> np.allclose(events.time, [0, 1, 2])
True
>>> e2 = events.filter_energy_range([0, 10], use_pi=True, inplace=True)
>>> np.allclose(e2.time, [0, 1])
True
>>> np.allclose(events.time, [0, 1])
True
 static from_lc(lc)[source]¶
Create an ``EventList`` from a ``stingray.Lightcurve`` object. Note that all events in a given time bin will have the same time stamp.
 Parameters
 lc : stingray.Lightcurve object
    Light curve to use for creation of the event list.
 Returns
 ev : EventList object
    The resulting list of photon arrival times generated from the light curve.
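Since all events in a given bin share that bin's time stamp, the conversion is essentially ``np.repeat`` of bin times by bin counts. An illustrative numpy sketch, not the library code:

```python
import numpy as np

time = np.array([0.5, 1.5, 2.5])        # bin midpoints
counts = np.array([2, 0, 3])            # counts per bin

# Each bin contributes `counts` copies of its time stamp
event_times = np.repeat(time, counts)
```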
 join(other)[source]¶
Join two ``EventList`` objects into one.
If both are empty, an empty ``EventList`` is returned.
GTIs are crossed if the event lists are over a common time interval, and appended otherwise.
``pi`` and ``pha`` remain ``None`` if they are ``None`` in both. Otherwise, 0 is used as a default value for the ``EventList`` where they were ``None``.
 classmethod read(filename, fmt=None, format_=None, **kwargs)[source]¶
Read an ``EventList`` object from file.
Currently supported formats are:
- pickle (not recommended for long-term storage)
- hea or ogip : FITS event files from (well, some) HEASARC-supported missions
- any other format compatible with the writers in ``astropy.table.Table`` (ascii.ecsv, hdf5, etc.)
Files that need the ``astropy.table.Table`` interface MUST contain at least a ``time`` column. Other recognized columns are ``energy`` and ``pi``. The default ascii format is enhanced CSV (ECSV). Data formats supporting the serialization of metadata (such as ECSV and HDF5) can contain all event list attributes such as ``mission``, ``gti``, etc., with no significant loss of information. Other file formats might lose part of the metadata, so they must be used with care.
 Parameters
 filename : str
    Path and file name for the file to be read.
 fmt : str
    Available options are 'pickle', 'hea', and any ``Table``-supported format such as 'hdf5', 'ascii.ecsv', etc.
 Returns
 ev : EventList object
 Other Parameters
 kwargs : dict
    Any further keyword arguments to be passed to ``load_events_and_gtis`` for reading event lists in OGIP/HEASOFT format
 simulate_energies(spectrum, use_spline=False)[source]¶
Assign (simulate) energies to the event list from a spectrum.
 Parameters
 spectrum : 2-d array or list ``[energies, spectrum]``
    Energies versus corresponding fluxes. The 2-d array or list must have energies across the first dimension and fluxes across the second one. If the dimension of the energies is the same as that of the spectrum, they are interpreted as bin centers. If it is longer by one, they are interpreted as proper bin edges (similarly to the bins of ``np.histogram``). Note that for non-uniformly binned spectra, it is advisable to pass the exact edges.
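For a bin-center spectrum, assigning energies reduces to weighted random sampling. A numpy sketch using ``np.random.choice`` (illustrative; the method additionally supports bin edges and spline interpolation via ``use_spline``):

```python
import numpy as np

rng = np.random.default_rng(42)

energies = np.array([1.0, 2.0, 3.0])   # bin centers, keV
fluxes = np.array([1.0, 8.0, 1.0])     # relative flux in each bin

# Normalize fluxes into sampling probabilities, then draw energies
prob = fluxes / fluxes.sum()
simulated = rng.choice(energies, size=10000, p=prob)

# Roughly 80% of events should land in the 2 keV bin
frac_2kev = np.mean(simulated == 2.0)
```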
 simulate_times(lc, use_spline=False, bin_time=None)[source]¶
Simulate times from an input light curve.
Randomly simulate photon arrival times for an ``EventList`` from a ``stingray.Lightcurve`` object, using the inverse CDF method.
Note: preferably use model light curves containing no Poisson noise, as this method will intrinsically add Poisson noise to them.
 Parameters
 lc : stingray.Lightcurve object
 Returns
 times : array-like
    Simulated photon arrival times
 Other Parameters
 use_spline : bool
    Approximate the light curve with a spline to avoid binning effects
 bin_time : float, default None
    Ignored and deprecated; maintained for backwards compatibility.
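The inverse-CDF method draws uniform deviates and maps them through the inverse of the cumulative light curve, so more events land where the rate is higher. A numpy sketch under the assumption of a smooth, noise-free model light curve:

```python
import numpy as np

rng = np.random.default_rng(0)

# A model light curve (rate per bin, no Poisson noise) and its bin times
time = np.linspace(0, 10, 101)
rate = 1.0 + 0.5 * np.sin(2 * np.pi * time / 10)

# Build the normalized cumulative distribution of the rate
cdf = np.cumsum(rate)
cdf = cdf / cdf[-1]

# Invert the CDF by interpolation for uniform random deviates
u = rng.uniform(0, 1, size=1000)
arrival_times = np.interp(u, cdf, time)
```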
 sort(inplace=False)[source]¶
Sort the event list in time.
 Returns
 eventlist : EventList
    The sorted event list. If ``inplace=True``, it will be a shallow copy of ``self``.
 Other Parameters
 inplace : bool, default False
    Sort in place. If False, return a new event list.
Examples
>>> events = EventList(time=[0, 2, 1], energy=[0.3, 2, 0.5], pi=[3, 20, 5])
>>> e1 = events.sort()
>>> np.allclose(e1.time, [0, 1, 2])
True
>>> np.allclose(e1.energy, [0.3, 0.5, 2])
True
>>> np.allclose(e1.pi, [3, 5, 20])
True
But the original event list has not been altered (``inplace=False`` by default):
>>> np.allclose(events.time, [0, 2, 1])
True
Let's do it in place instead:
>>> e2 = events.sort(inplace=True)
>>> np.allclose(e2.time, [0, 1, 2])
True
In this case, the original event list has been altered:
>>> np.allclose(events.time, [0, 1, 2])
True
 to_lc(dt, tstart=None, tseg=None)[source]¶
Convert the event list to a ``stingray.Lightcurve`` object.
 Parameters
 dt : float
    Binning time of the light curve
 Returns
 lc : stingray.Lightcurve object
 Other Parameters
 tstart : float
    Start time of the light curve
 tseg : float
    Total duration of the light curve
 to_lc_iter(dt, segment_size=None)[source]¶
Convert event list to a generator of Lightcurves.
 Parameters
 dt: float
Binning time of the light curves
 Returns
 lc_gen : generator
    Generates one ``stingray.Lightcurve`` object for each GTI or segment
 Other Parameters
 segment_size : float, default None
    Optional segment size. If None, use the GTI boundaries
 to_lc_list(dt, segment_size=None)[source]¶
Convert event list to a list of Lightcurves.
 Parameters
 dt: float
Binning time of the light curves
 Returns
 lc_list : list
    List containing one ``stingray.Lightcurve`` object for each GTI or segment
 Other Parameters
 segment_size : float, default None
    Optional segment size. If None, use the GTI boundaries
Fourier Products¶
These classes implement commonly used Fourier analysis products, most importantly ``Crossspectrum`` and ``Powerspectrum``, along with the variants for averaged cross/power spectra.
Crossspectrum¶
 class stingray.Crossspectrum(data1=None, data2=None, norm='frac', gti=None, lc1=None, lc2=None, power_type='real', dt=None, fullspec=False, skip_checks=False, legacy=False)[source]¶
 classical_significances(threshold=1, trial_correction=False)[source]¶
Compute the classical significances for the powers in the power spectrum, assuming an underlying noise distribution that follows a chisquare distributions with 2M degrees of freedom, where M is the number of powers averaged in each bin.
Note that this function will only produce correct results when the following underlying assumptions are fulfilled:
The power spectrum is Leahynormalized
There is no source of variability in the data other than the periodic signal to be determined with this method. This is important! If there are other sources of (aperiodic) variability in the data, this method will not produce correct results, but instead produce a large number of spurious false positive detections!
There are no significant instrumental effects changing the statistical distribution of the powers (e.g. pileup or dead time)
By default, the method produces (index, p-value) pairs for all powers in the power spectrum, where index is the numerical index of the power in question. If a threshold is set, then only powers with p-values below that threshold are reported, along with their respective indices. If trial_correction is set to True, the threshold will be corrected for the number of trials (frequencies) in the power spectrum before being used. Parameters
 threshold: float, optional, default 1
The threshold to be used when reporting p-values of potentially significant powers. Must be between 0 and 1. Default is 1 (all p-values will be reported).
 trial_correction: bool, optional, default False
A Boolean flag that sets whether the threshold will be corrected by the number of frequencies before being applied. This decreases the threshold (p-values need to be lower to count as significant). Default is False (report all powers), though for any application where threshold is set to something meaningful, this should also be applied!
 Returns
 pvals: iterable
A list of (index, p-value) tuples for all powers that have p-values lower than the threshold specified in threshold
.
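A hedged sketch of the statistics described above (an illustrative reimplementation, not the stingray method): for Leahy-normalized powers averaged over m spectra, noise powers follow a chi-square distribution with 2m degrees of freedom scaled by 1/m, so the single-trial p-value of a power P is the chi-square survival function at m * P.

```python
import numpy as np
from scipy.stats import chi2

# Illustrative version of the classical-significance logic. The function
# name and signature are hypothetical.
def classical_pvalues(powers, m=1, threshold=1.0, trial_correction=False):
    powers = np.asarray(powers)
    pvals = chi2.sf(m * powers, 2 * m)          # single-trial p-values
    thr = threshold / powers.size if trial_correction else threshold
    idx = np.where(pvals < thr)[0]
    return list(zip(idx, pvals[idx]))

# A Leahy power of 40 among noise-level powers (~2) is highly significant
detections = classical_pvalues([2.1, 1.8, 40.0, 2.5], m=1, threshold=0.01)
```

Note how trial_correction simply divides the threshold by the number of frequencies, implementing a Bonferroni-style correction.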
 coherence()[source]¶
Compute Coherence function of the cross spectrum.
Coherence is defined in Vaughan and Nowak, 1996 1. It is a Fourier frequency dependent measure of the linear correlation between time series measured simultaneously in two energy channels.
 Returns
 coh: numpy.ndarray
Coherence function
References
 static from_events(events1, events2, dt, segment_size=None, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True, gti=None)[source]¶
Calculate AveragedCrossspectrum from two event lists
 Parameters
 events1: stingray.EventList
Events from channel 1
 events2: stingray.EventList
Events from channel 2
 dt: float
The time resolution of the intermediate light curves (sets the Nyquist frequency)
 Other Parameters
 segment_size: float
The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum
 norm: str, default "frac"
The normalization of the periodogram. "abs" is absolute rms, "frac" is fractional rms, "leahy" is Leahy+83 normalization, and "none" is the unnormalized periodogram
 use_common_mean: bool, default True
The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.
 fullspec: bool, default False
Return the full periodogram, including negative frequencies
 silent: bool, default False
Silence the progress bars
 power_type: str, default 'all'
If 'all', give complex powers. If 'abs', the absolute value; if 'real', the real part
 gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.
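To make the averaged quantity concrete, here is a minimal numpy sketch (assuming evenly sampled counts; the helper name is hypothetical and this is not the stingray implementation) of what from_events ultimately averages: the per-segment cross spectrum lc1 x conj(lc2).

```python
import numpy as np

# Sketch of an averaged cross spectrum over contiguous segments.
def averaged_cross_spectrum(counts1, counts2, dt, segment_size):
    nbin = int(segment_size / dt)
    nseg = len(counts1) // nbin
    acc = 0
    for i in range(nseg):
        s = slice(i * nbin, (i + 1) * nbin)
        f1 = np.fft.rfft(counts1[s] - np.mean(counts1[s]))
        f2 = np.fft.rfft(counts2[s] - np.mean(counts2[s]))
        acc = acc + f1 * np.conj(f2)   # lc2 is the conjugated channel
    freq = np.fft.rfftfreq(nbin, d=dt)
    return freq, acc / nseg

rng = np.random.default_rng(0)
c1 = rng.poisson(100, 1024).astype(float)
c2 = rng.poisson(100, 1024).astype(float)
freq, cs = averaged_cross_spectrum(c1, c2, dt=0.5, segment_size=128.0)
```

The real method additionally applies GTIs, normalization, and error estimates; this sketch only shows the Fourier product being averaged.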
 static from_lc_iterable(iter_lc1, iter_lc2, dt, segment_size, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True, gti=None)[source]¶
Calculate AveragedCrossspectrum from two light curves
 Parameters
 iter_lc1: iterable of stingray.Lightcurve objects or np.array
Light curves from channel 1. If arrays, use them as counts
 iter_lc2: iterable of stingray.Lightcurve objects or np.array
Light curves from channel 2. If arrays, use them as counts
 dt: float
The time resolution of the light curves (sets the Nyquist frequency)
 Other Parameters
 segment_size: float
The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum
 norm: str, default "frac"
The normalization of the periodogram. "abs" is absolute rms, "frac" is fractional rms, "leahy" is Leahy+83 normalization, and "none" is the unnormalized periodogram
 use_common_mean: bool, default True
The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.
 fullspec: bool, default False
Return the full periodogram, including negative frequencies
 silent: bool, default False
Silence the progress bars
 power_type: str, default 'all'
If 'all', give complex powers. If 'abs', the absolute value; if 'real', the real part
 gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.
 static from_lightcurve(lc1, lc2, segment_size=None, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True, gti=None)[source]¶
Calculate AveragedCrossspectrum from two light curves
 Parameters
 lc1: stingray.Lightcurve
Light curve from channel 1
 lc2: stingray.Lightcurve
Light curve from channel 2
 Other Parameters
 segment_size: float
The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum
 norm: str, default "frac"
The normalization of the periodogram. "abs" is absolute rms, "frac" is fractional rms, "leahy" is Leahy+83 normalization, and "none" is the unnormalized periodogram
 use_common_mean: bool, default True
The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.
 fullspec: bool, default False
Return the full periodogram, including negative frequencies
 silent: bool, default False
Silence the progress bars
 power_type: str, default 'all'
If 'all', give complex powers. If 'abs', the absolute value; if 'real', the real part
 gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.
 static from_time_array(times1, times2, dt, segment_size=None, gti=None, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True)[source]¶
Calculate AveragedCrossspectrum from two arrays of event times.
 Parameters
 times1: np.array
Event arrival times of channel 1
 times2: np.array
Event arrival times of channel 2
 dt: float
The time resolution of the intermediate light curves (sets the Nyquist frequency)
 Other Parameters
 segment_size: float
The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum.
 gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you're getting errors regarding your GTIs, don't use this and only give GTIs to the input objects before making the cross spectrum.
 norm: str, default "frac"
The normalization of the periodogram. "abs" is absolute rms, "frac" is fractional rms, "leahy" is Leahy+83 normalization, and "none" is the unnormalized periodogram
 use_common_mean: bool, default True
The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.
 fullspec: bool, default False
Return the full periodogram, including negative frequencies
 silent: bool, default False
Silence the progress bars
 power_type: str, default 'all'
If 'all', give complex powers. If 'abs', the absolute value; if 'real', the real part
 initial_checks(data1=None, data2=None, norm='frac', gti=None, lc1=None, lc2=None, segment_size=None, power_type='real', dt=None, fullspec=False)[source]¶
Run initial checks on the input.
Returns True if checks are passed, False if they are not.
Raises various errors for different bad inputs
Examples
>>> times = np.arange(0, 10)
>>> counts = np.random.poisson(100, 10)
>>> lc1 = Lightcurve(times, counts, skip_checks=True)
>>> lc2 = Lightcurve(times, counts, skip_checks=True)
>>> ev1 = EventList(times)
>>> ev2 = EventList(times)
>>> c = Crossspectrum()
>>> ac = AveragedCrossspectrum()

If norm is not a string, raise a TypeError

>>> Crossspectrum.initial_checks(c, norm=1)
Traceback (most recent call last):
...
TypeError: norm must be a string...

If norm is not one of the valid norms, raise a ValueError

>>> Crossspectrum.initial_checks(c, norm="blabla")
Traceback (most recent call last):
...
ValueError: norm must be 'frac'...

If power_type is not one of the valid norms, raise a ValueError

>>> Crossspectrum.initial_checks(c, power_type="blabla")
Traceback (most recent call last):
...
ValueError: power_type not recognized!

If the user passes only one light curve, raise a ValueError

>>> Crossspectrum.initial_checks(c, data1=lc1, data2=None)
Traceback (most recent call last):
...
ValueError: You can't do a cross spectrum...

If the user passes an event list without dt, raise a ValueError

>>> Crossspectrum.initial_checks(c, data1=ev1, data2=ev2, dt=None)
Traceback (most recent call last):
...
ValueError: If using event lists, please specify...
 phase_lag()[source]¶
Calculate the Fourier phase lag of the cross spectrum.
This is defined as the argument of the complex cross spectrum, and gives the delay at all frequencies, in cycles, of one input light curve with respect to the other.
 plot(labels=None, axis=None, title=None, marker='', save=False, filename=None)[source]¶
Plot the amplitude of the cross spectrum vs. the frequency using
matplotlib
. Parameters
 labels: iterable, default None
A list of tuple with xlabel and ylabel as strings.
 axis: list, tuple, string, default None
Parameter to set axis properties of the matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for the matplotlib.pyplot.axis() method.
 title: str, default None
The title of the plot.
 marker: str, default ''
Line style and color of the plot. Line styles and colors are combined in a single format string, as in 'bo' for blue circles. See matplotlib.pyplot.plot for more options.
 save: boolean, optional, default False
If True, save the figure with specified filename.
 filename: str
File name of the image to save. Depends on the boolean save.
 rebin(df=None, f=None, method='mean')[source]¶
Rebin the cross spectrum to a new frequency resolution
df
. Parameters
 df: float
The new frequency resolution
 Returns
 bin_cs: Crossspectrum (or one of its subclasses) object
The newly binned cross spectrum or power spectrum. Note: this object will be of the same type as the object that called this method. For example, if this method is called from AveragedPowerspectrum, it will return an object of class AveragedPowerspectrum, too.
 Other Parameters
 f: float
the rebin factor. If specified, it substitutes df with
f*self.df
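A minimal numpy sketch of the linear rebinning described above (the helper name rebin_spectrum is hypothetical, not the stingray method): powers are averaged in groups of f consecutive bins, so the new frequency resolution is f * df.

```python
import numpy as np

# Illustrative linear rebin: group bins and average, dropping any
# incomplete trailing group.
def rebin_spectrum(freq, power, f):
    step = int(f)
    n = (len(freq) // step) * step
    new_freq = freq[:n].reshape(-1, step).mean(axis=1)
    new_power = power[:n].reshape(-1, step).mean(axis=1)
    return new_freq, new_power

freq = np.arange(1, 9, dtype=float)                    # df = 1
power = np.array([2., 4., 2., 4., 2., 4., 2., 4.])
nf, npw = rebin_spectrum(freq, power, f=2)             # new df = 2
```

The real method also propagates uncertainties and updates the m attribute; this sketch only shows the averaging.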
 rebin_log(f=0.01)[source]¶
Logarithmic rebin of the periodogram. The new frequency depends on the previous frequency modified by a factor f:
\[d\nu_j = d\nu_{j-1} (1+f)\] Parameters
 f: float, optional, default ``0.01``
parameter that steers the frequency resolution
 Returns
 new_spec: Crossspectrum (or one of its subclasses) object
The newly binned cross spectrum or power spectrum. Note: this object will be of the same type as the object that called this method. For example, if this method is called from AveragedPowerspectrum, it will return an object of class AveragedPowerspectrum, too.
 time_lag()[source]¶
Calculate the Fourier time lag of the cross spectrum. The time lag is calculated by taking the phase lag \(\phi\) and computing
\[\tau = \frac{\phi}{2 \pi \nu}\]
where \(\nu\) is the center of the frequency bins.
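A worked instance of the formula above: a pure time delay tau0 between two series produces phase lags phi = 2 pi nu tau0, so dividing back by 2 pi nu recovers tau0 at every frequency.

```python
import numpy as np

nu = np.array([0.5, 1.0, 2.0])     # bin-center frequencies, Hz
tau0 = 0.05                        # injected delay, seconds
phi = 2 * np.pi * nu * tau0        # phase lags, radians
tau = phi / (2 * np.pi * nu)       # recovered time lags, seconds
```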
 to_norm(norm, inplace=False)[source]¶
Convert Cross spectrum to new normalization.
 Parameters
 normstr
The new normalization of the spectrum
 Returns
 new_specobject, same class as input
The new, normalized, spectrum.
 Other Parameters
 inplace: bool, default False
If True, change the current instance. Otherwise, return a new one
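To illustrate what a normalization change amounts to, here is a hedged sketch of the standard relations between the norms (following the conventions of Leahy+83 and standard timing reviews; the helper convert_norm is hypothetical, not the stingray method): starting from Leahy powers, fractional-rms powers divide by the mean count rate, and absolute-rms powers multiply by it.

```python
import numpy as np

# Sketch of the textbook normalization relations:
#   P_frac = P_leahy / mean_rate
#   P_abs  = P_leahy * mean_rate  (= P_frac * mean_rate**2)
def convert_norm(power_leahy, mean_rate, norm):
    if norm == "leahy":
        return power_leahy
    if norm == "frac":
        return power_leahy / mean_rate
    if norm == "abs":
        return power_leahy * mean_rate
    raise ValueError("unknown norm")

p_leahy = np.array([2.0, 4.0])
p_frac = convert_norm(p_leahy, mean_rate=100.0, norm="frac")
p_abs = convert_norm(p_leahy, mean_rate=100.0, norm="abs")
```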
 type = 'crossspectrum'¶
Make a cross spectrum from a (binned) light curve. You can also make an empty
Crossspectrum
object to populate with your own Fouriertransformed data (this can sometimes be useful when making binned power spectra). Stingray uses the scipy.fft standards for the sign of the Nyquist frequency. Parameters
 data1: stingray.Lightcurve or stingray.events.EventList, optional, default None
The dataset for the first channel/band of interest.
 data2: stingray.Lightcurve or stingray.events.EventList, optional, default None
The dataset for the second, or "reference", band.
 norm: {``frac``, ``abs``, ``leahy``, ``none``}, default ``none``
The normalization of the (real part of the) cross spectrum.
 power_type: string, optional, default ``real``
Parameter to choose among complete, real part and magnitude of the cross spectrum.
 fullspec: boolean, optional, default ``False``
If False, keep only the positive frequencies; if True, keep all of them.
 Other Parameters
 gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input Lightcurve GTIs! If you're getting errors regarding your GTIs, don't use this and only give GTIs to the Lightcurve objects before making the cross spectrum.
 lc1: stingray.Lightcurve object OR iterable of stingray.Lightcurve objects
For backwards compatibility only. Like data1, but no stingray.events.EventList objects allowed
 lc2: stingray.Lightcurve object OR iterable of stingray.Lightcurve objects
For backwards compatibility only. Like data2, but no stingray.events.EventList objects allowed
 dt: float
The time resolution of the light curve. Only needed when constructing light curves in the case where data1, data2 are EventList objects
 skip_checks: bool
Skip initial checks, for speed or other reasons (you need to trust your inputs!)
 Attributes
 freq: numpy.ndarray
The array of midbin frequencies that the Fourier transform samples
 power: numpy.ndarray
The array of cross spectra (complex numbers)
 power_err: numpy.ndarray
The uncertainties of power. An approximation for each bin is given by power_err = power / sqrt(m), where m is the number of powers averaged in each bin (by frequency binning, or averaging more than one spectrum). Note that for a single realization (m=1) the error is equal to the power.
 df: float
The frequency resolution
 m: int
The number of averaged crossspectra amplitudes in each bin.
 n: int
The number of data points/time bins in one segment of the light curves.
 nphots1: float
The total number of photons in light curve 1
 nphots2: float
The total number of photons in light curve 2
 legacy: bool
Use the legacy machinery of AveragedCrossspectrum. This might be useful to compare with old results, and is also needed to use light curve lists as an input, to conserve the spectra of each segment, or to use the large_data option.
Coherence¶
Convenience function to compute the coherence between two stingray.Lightcurve
objects.
 stingray.coherence(lc1, lc2)[source]¶
Estimate coherence function of two light curves. For details on the definition of the coherence, see Vaughan and Nowak, 1996 2.
 Parameters
 lc1: stingray.Lightcurve object
The first light curve data for the channel of interest.
 lc2: stingray.Lightcurve object
The light curve data for the reference band.
 Returns
 coh: np.ndarray
The array of coherence versus frequency
References
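As an illustration of the definition (a sketch of the Vaughan & Nowak 1996 quantity, not the stingray call): the coherence is the squared modulus of the segment-averaged cross spectrum divided by the product of the averaged power spectra, gamma^2 = |<C>|^2 / (<P1><P2>).

```python
import numpy as np

# Illustrative coherence from segment-averaged Fourier products.
# The function name is hypothetical.
def coherence_from_segments(segs1, segs2):
    f1 = np.fft.rfft(segs1, axis=1)
    f2 = np.fft.rfft(segs2, axis=1)
    cross = np.mean(f1 * np.conj(f2), axis=0)
    p1 = np.mean(np.abs(f1) ** 2, axis=0)
    p2 = np.mean(np.abs(f2) ** 2, axis=0)
    return np.abs(cross) ** 2 / (p1 * p2)

rng = np.random.default_rng(1)
common = rng.normal(size=(8, 64))
coh = coherence_from_segments(common, common)   # identical inputs
```

Two identical inputs are perfectly linearly correlated, so the coherence is unity at every frequency; uncorrelated noise would drive it toward zero as more segments are averaged.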
Powerspectrum¶
 class stingray.Powerspectrum(data=None, norm='frac', gti=None, dt=None, lc=None, skip_checks=False, legacy=False)[source]¶
 _fourier_cross(lc1, lc2, fullspec=False)¶
Fourier transform the two light curves, then compute the cross spectrum. Computed as CS = lc1 x lc2* (where lc2 is the one that gets complexconjugated). The user has the option to either get just the positive frequencies or the full spectrum.
 Parameters
 lc1: stingray.Lightcurve object
One light curve to be Fourier transformed. This is the band of interest or channel of interest.
 lc2: stingray.Lightcurve object
Another light curve to be Fourier transformed. This is the reference band.
 fullspec: boolean. Default is False.
If True, return the whole array of frequencies, or only positive frequencies (False).
 Returns
 fr: numpy.ndarray
The squared absolute value of the Fourier amplitudes
 _initialize_from_any_input(data, dt=None, segment_size=None, gti=None, norm='frac', silent=False, use_common_mean=True)[source]¶
Initialize the class, trying to understand the input types.
The input arguments are the same as
__init__()
. Based on the type ofdata
, this method will call the appropriatepowerspectrum_from_XXXX
function, and initializeself
with the correct attributes.
 _make_auxil_pds(lc1, lc2)¶
Helper method to create the power spectrum of both light curves independently.
 Parameters
 lc1, lc2: stingray.Lightcurve objects
Two light curves used for computing the cross spectrum.
 _make_crossspectrum(lc1, lc2, fullspec=False)¶
Auxiliary method computing the normalized cross spectrum from two light curves. This includes checking for the presence of and applying Good Time Intervals, computing the unnormalized Fourier crossamplitude, and then renormalizing using the required normalization. Also computes an uncertainty estimate on the cross spectral powers.
 Parameters
 lc1, lc2: stingray.Lightcurve objects
Two light curves used for computing the cross spectrum.
 fullspec: boolean, default ``False``
Return full frequency array (True) or just positive frequencies (False)
 _normalize_crossspectrum(unnorm_power)¶
Normalize the real part of the cross spectrum to Leahy, absolute rms^2, fractional rms^2 normalization, or not at all.
 Parameters
 unnorm_power: numpy.ndarray
The unnormalized cross spectrum.
 Returns
 power: numpy.ndarray
The normalized cospectrum (real part of the cross spectrum). For ‘none’ normalization, imaginary part is returned as well.
 _rms_error(powers)[source]¶
Compute the error on the fractional rms amplitude using error propagation. Note: this uses the actual measured powers, which is not strictly correct. We should be using the underlying power spectrum, but in the absence of an estimate of that, this will have to do.
\[r = \sqrt{P}\]\[\delta r = \frac{1}{2 * \sqrt{P}} \delta P\] Parameters
 powers: iterable
The list of powers used to compute the fractional rms amplitude.
 Returns
 delta_rms: float
The error on the fractional rms amplitude.
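A worked instance of the error-propagation step stated above: with r = sqrt(P), the propagated uncertainty is delta_r = delta_P / (2 sqrt(P)). The numerical values below are purely illustrative.

```python
import numpy as np

P = 0.09          # measured power (illustrative value)
delta_P = 0.006   # its uncertainty (illustrative value)
r = np.sqrt(P)                       # fractional rms amplitude
delta_r = delta_P / (2 * np.sqrt(P)) # propagated error on r
```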
 array_attrs() list[str] ¶
List the names of the array attributes of the Stingray Object.
By array attributes, we mean the ones with the same size and shape as
main_array_attr
(e.g.time
inEventList
)
 classical_significances(threshold=1, trial_correction=False)[source]¶
Compute the classical significances for the powers in the power spectrum, assuming an underlying noise distribution that follows a chi-square distribution with 2M degrees of freedom, where M is the number of powers averaged in each bin.
Note that this function will only produce correct results when the following underlying assumptions are fulfilled:
The power spectrum is Leahynormalized
There is no source of variability in the data other than the periodic signal to be determined with this method. This is important! If there are other sources of (aperiodic) variability in the data, this method will not produce correct results, but instead produce a large number of spurious false positive detections!
There are no significant instrumental effects changing the statistical distribution of the powers (e.g. pileup or dead time)
By default, the method produces (index, p-value) pairs for all powers in the power spectrum, where index is the numerical index of the power in question. If a threshold is set, then only powers with p-values below that threshold are reported, along with their respective indices. If trial_correction is set to True, the threshold will be corrected for the number of trials (frequencies) in the power spectrum before being used. Parameters
 threshold: float, optional, default 1
The threshold to be used when reporting p-values of potentially significant powers. Must be between 0 and 1. Default is 1 (all p-values will be reported).
 trial_correction: bool, optional, default False
A Boolean flag that sets whether the threshold will be corrected by the number of frequencies before being applied. This decreases the threshold (p-values need to be lower to count as significant). Default is False (report all powers), though for any application where threshold is set to something meaningful, this should also be applied!
 Returns
 pvals: iterable
A list of (index, p-value) tuples for all powers that have p-values lower than the threshold specified in threshold
.
 coherence()¶
Compute Coherence function of the cross spectrum.
Coherence is defined in Vaughan and Nowak, 1996 3. It is a Fourier frequency dependent measure of the linear correlation between time series measured simultaneously in two energy channels.
 Returns
 coh: numpy.ndarray
Coherence function
References
 compute_rms(min_freq, max_freq, white_noise_offset=0.0)[source]¶
Compute the fractional rms amplitude in the power spectrum between two frequencies.
 Parameters
 min_freq: float
The lower frequency bound for the calculation.
 max_freq: float
The upper frequency bound for the calculation.
 Returns
 rms: float
The fractional rms amplitude contained between
min_freq
andmax_freq
. rms_err: float
The error on the fractional rms amplitude.
 Other Parameters
 white_noise_offset: float, default 0
This is the white noise level, in Leahy normalization. In the ideal case, this is 2. Dead time and other instrumental effects can alter it. The user can fit the white noise level outside this function and it will get subtracted from powers here.
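To make the computation concrete, here is a hedged numpy sketch of the fractional-rms integral (an illustrative reimplementation with hypothetical names and values, not the stingray method): in fractional-rms normalization, the rms over a band is the square root of the integrated power after white-noise subtraction.

```python
import numpy as np

# Illustrative fractional-rms integral over [fmin, fmax].
def frac_rms(freq, power_frac, df, fmin, fmax, noise_level=0.0):
    mask = (freq >= fmin) & (freq <= fmax)
    return np.sqrt(np.sum((power_frac[mask] - noise_level) * df))

freq = np.array([1., 2., 3., 4.])
power = np.array([0.04, 0.02, 0.02, 0.01])   # (rms/mean)^2 / Hz, illustrative
rms = frac_rms(freq, power, df=1.0, fmin=1.0, fmax=4.0)
```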
 classmethod from_astropy_table(ts: Table) Tso ¶
Create a Stingray Object object from data in an Astropy Table.
The table MUST contain at least a column named like the
main_array_attr
. The rest of columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.It is strongly advisable to define such attributes and columns using the standard attributes of the wanted StingrayObject (e.g.
time
,pi
, etc. forEventList
)
 static from_events(events, dt, segment_size=None, gti=None, norm='frac', silent=False, use_common_mean=True)[source]¶
Calculate an average power spectrum from an event list.
 Parameters
 events: stingray.EventList
Event list to be analyzed.
 dt: float
The time resolution of the intermediate light curves (sets the Nyquist frequency).
 Other Parameters
 segment_size: float
The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedPowerspectrum.
 gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``
Additional, optional Good Time intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that happens, assign the desired GTIs to the input object.
 norm: str, default "frac"
The normalization of the periodogram. abs is absolute rms, frac is fractional rms, leahy is Leahy+83 normalization, and none is the unnormalized periodogram.
 use_common_mean: bool, default True
The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). By default, we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.
 silent: bool, default False
Silence the progress bars.
 static from_lc_iterable(iter_lc, dt, segment_size=None, gti=None, norm='frac', silent=False, use_common_mean=True)[source]¶
Calculate the average power spectrum of an iterable collection of light curves.
 Parameters
 iter_lc: iterable of stingray.Lightcurve objects or np.array
Light curves. If arrays, use them as counts.
 dt: float
The time resolution of the light curves (sets the Nyquist frequency)
 Other Parameters
 segment_size: float
The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedPowerspectrum.
 gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``
Additional, optional Good Time intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that happens, assign the desired GTIs to the input object.
 norm: str, default "frac"
The normalization of the periodogram. abs is absolute rms, frac is fractional rms, leahy is Leahy+83 normalization, and none is the unnormalized periodogram.
 use_common_mean: bool, default True
The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). By default, we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.
 silent: bool, default False
Silence the progress bars.
 static from_lightcurve(lc, segment_size=None, gti=None, norm='frac', silent=False, use_common_mean=True)[source]¶
Calculate a power spectrum from a light curve.
 Parameters
 lc: stingray.Lightcurve
Light curve to be analyzed.
 Other Parameters
 segment_size: float
The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedPowerspectrum.
 gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``
Additional, optional Good Time intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that happens, assign the desired GTIs to the input object.
 norm: str, default "frac"
The normalization of the periodogram. abs is absolute rms, frac is fractional rms, leahy is Leahy+83 normalization, and none is the unnormalized periodogram.
 use_common_mean: bool, default True
The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). By default, we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.
 silent: bool, default False
Silence the progress bars.
 classmethod from_pandas(ts: DataFrame) Tso ¶
Create an
StingrayObject
object from data in a pandas DataFrame.The dataframe MUST contain at least a column named like the
main_array_attr
. The rest of columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.It is strongly advisable to define such attributes and columns using the standard attributes of the wanted StingrayObject (e.g.
time
,pi
, etc. forEventList
)
 static from_time_array(times, dt, segment_size=None, gti=None, norm='frac', silent=False, use_common_mean=True)[source]¶
Calculate an average power spectrum from an array of event times.
 Parameters
 times: np.array
Event arrival times.
 dt: float
The time resolution of the intermediate light curves (sets the Nyquist frequency).
 Other Parameters
 segment_size: float
The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedPowerspectrum.
 gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], …]``
Additional, optional Good Time intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that happens, assign the desired GTIs to the input object.
 norm: str, default "frac"
The normalization of the periodogram. abs is absolute rms, frac is fractional rms, leahy is Leahy+83 normalization, and none is the unnormalized periodogram.
 use_common_mean: bool, default True
The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). By default, we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.
 silent: bool, default False
Silence the progress bars.
 classmethod from_xarray(ts: Dataset) Tso ¶
Create a
StingrayObject
from data in an xarray Dataset.The dataset MUST contain at least a column named like the
main_array_attr
. The rest of columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.It is strongly advisable to define such attributes and columns using the standard attributes of the wanted StingrayObject (e.g.
time
,pi
, etc. forEventList
)
 initial_checks(data1=None, data2=None, norm='frac', gti=None, lc1=None, lc2=None, segment_size=None, power_type='real', dt=None, fullspec=False)¶
Run initial checks on the input.
Returns True if checks are passed, False if they are not.
Raises various errors for different bad inputs
Examples
>>> times = np.arange(0, 10)
>>> counts = np.random.poisson(100, 10)
>>> lc1 = Lightcurve(times, counts, skip_checks=True)
>>> lc2 = Lightcurve(times, counts, skip_checks=True)
>>> ev1 = EventList(times)
>>> ev2 = EventList(times)
>>> c = Crossspectrum()
>>> ac = AveragedCrossspectrum()

If norm is not a string, raise a TypeError

>>> Crossspectrum.initial_checks(c, norm=1)
Traceback (most recent call last):
...
TypeError: norm must be a string...

If norm is not one of the valid norms, raise a ValueError

>>> Crossspectrum.initial_checks(c, norm="blabla")
Traceback (most recent call last):
...
ValueError: norm must be 'frac'...

If power_type is not one of the valid norms, raise a ValueError

>>> Crossspectrum.initial_checks(c, power_type="blabla")
Traceback (most recent call last):
...
ValueError: power_type not recognized!

If the user passes only one light curve, raise a ValueError

>>> Crossspectrum.initial_checks(c, data1=lc1, data2=None)
Traceback (most recent call last):
...
ValueError: You can't do a cross spectrum...

If the user passes an event list without dt, raise a ValueError

>>> Crossspectrum.initial_checks(c, data1=ev1, data2=ev2, dt=None)
Traceback (most recent call last):
...
ValueError: If using event lists, please specify...
 meta_attrs() list[str] ¶
List the names of the meta attributes of the Stingray Object.
By meta attributes, we mean the ones with a different size and shape than
main_array_attr
(e.g.time
inEventList
)
 modulation_upper_limit(fmin=None, fmax=None, c=0.95)[source]¶
Upper limit on a sinusoidal modulation.
To understand the meaning of this amplitude: if the modulation is described by:
\[p = \overline{p} (1 + a \sin(x))\]
this function returns a.
If it is a sum of sinusoidal harmonics instead,
\[p = \overline{p} \left(1 + \sum_l a_l \sin(lx)\right)\]
a is equivalent to \(\sqrt{\sum_l a_l^2}\).
See
stingray.stats.power_upper_limit
,stingray.stats.amplitude_upper_limit
for more information. The formula used to calculate the upper limit assumes the Leahy normalization. If the periodogram is in another normalization, we will internally convert it to Leahy before calculating the upper limit.
 Parameters
 fmin: float
The minimum frequency to search (defaults to the first nonzero bin)
 fmax: float
The maximum frequency to search (defaults to the Nyquist frequency)
 Returns
 a: float
The modulation amplitude that could produce P>pmeas with 1  c probability.
 Other Parameters
 c: float
The confidence value for the upper limit (e.g. 0.95 = 95%)
Examples
>>> pds = Powerspectrum()
>>> pds.norm = "leahy"
>>> pds.freq = np.arange(0., 5.)
>>> # Note: this pds has 40 as maximum value between 2 and 5 Hz
>>> pds.power = np.array([100000, 1, 1, 40, 1])
>>> pds.m = 1
>>> pds.nphots = 30000
>>> pds.modulation_upper_limit(fmin=2, fmax=5, c=0.99)
0.1016...
 phase_lag()¶
Calculate the Fourier phase lag of the cross spectrum.
This is defined as the argument of the complex cross spectrum, and gives the delay at all frequencies, in cycles, of one input light curve with respect to the other.
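The definition above can be illustrated with plain numpy (a sketch of the underlying computation, not the Stingray implementation): two sinusoids with a known delay show a phase lag of \(2 \pi f_0 \tau\) at their frequency.

```python
import numpy as np

# The phase lag is the argument of the complex cross spectrum.
n, dt = 1024, 1 / 256          # 4 s of data
t = np.arange(n) * dt
f0, delay = 2.0, 0.05          # signal frequency (Hz) and delay (s)
x = np.sin(2 * np.pi * f0 * t)
y = np.sin(2 * np.pi * f0 * (t - delay))   # y lags x by `delay`

freqs = np.fft.rfftfreq(n, dt)
cross = np.fft.rfft(x) * np.conj(np.fft.rfft(y))
phase_lag = np.angle(cross)    # radians, at each frequency

k = np.argmin(np.abs(freqs - f0))
# phase_lag[k] is close to 2 * pi * f0 * delay ~ 0.628 rad
```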
 plot(labels=None, axis=None, title=None, marker='', save=False, filename=None)¶
Plot the amplitude of the cross spectrum vs. the frequency using
matplotlib
. Parameters
 labelsiterable, default
None
A list of tuples with
xlabel
andylabel
as strings. axislist, tuple, string, default
None
Parameter to set axis properties of the
matplotlib
figure. For example it can be a list like[xmin, xmax, ymin, ymax]
or any other acceptable argument for the``matplotlib.pyplot.axis()`` method. titlestr, default
None
The title of the plot.
 markerstr, default ‘’
Line style and color of the plot. Line styles and colors are combined in a single format string, as in
'bo'
for blue circles. Seematplotlib.pyplot.plot
for more options. saveboolean, optional, default
False
If
True
, save the figure with specified filename. filenamestr
File name of the image to save. Depends on the boolean
save
.
 classmethod read(filename: str, fmt: str = None, format_=None) Tso ¶
Generic reader for :class:`StingrayObject`
Currently supported formats are
pickle (not recommended for longterm storage)
any other formats compatible with the writers in
astropy.table.Table
(ascii.ecsv, hdf5, etc.)
Files that need the
astropy.table.Table
interface MUST contain at least a column named like themain_array_attr
. The default ascii format is enhanced CSV (ECSV). Data formats supporting the serialization of metadata (such as ECSV and HDF5) can contain all attributes such asmission
,gti
, etc. with no significant loss of information. Other file formats might lose part of the metadata, so they must be used with care.
.. note:: Complex values can be dealt with out of the box in some formats, like HDF5 or FITS, but not in others (e.g. all ASCII formats). With those formats, and in any case when ``fmt`` is ``None``, complex values should be stored as two columns of real numbers, whose names are of the form <variablename>.real and <variablename>.imag
 Parameters
 filename: str
Path and file name for the file to be read.
 fmt: str
Available options are ‘pickle’, ‘hea’, and any
Table
supported format such as ‘hdf5’, ‘ascii.ecsv’, etc.
 Returns
 obj:class:
StingrayObject
object The object reconstructed from file
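The ``<variablename>.real`` / ``<variablename>.imag`` convention mentioned in the note can be sketched with plain numpy (an illustration of the storage convention only, not Stingray's reader internals):

```python
import numpy as np

# Store a complex array as two real columns, then reconstruct it.
power = np.array([1 + 2j, 3 - 1j, 0.5 + 0.5j])
columns = {"power.real": power.real, "power.imag": power.imag}

restored = columns["power.real"] + 1j * columns["power.imag"]
# restored is identical to the original complex array
```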
 rebin(df=None, f=None, method='mean')[source]¶
Rebin the power spectrum.
 Parameters
 df: float
The new frequency resolution.
 Returns
 bin_cs =class:
Powerspectrum
object The newly binned power spectrum.
 Other Parameters
 f: float
The rebin factor. If specified, it substitutes
df
withf*self.df
, sof>1
is recommended.
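The effect of rebinning by an integer factor with ``method="mean"`` can be sketched with numpy (a simplified illustration; the real method also handles non-integer factors and propagates uncertainties):

```python
import numpy as np

# Average groups of `factor` consecutive powers; the frequency
# resolution becomes factor * df.
power = np.arange(12, dtype=float)
factor = 3
binned = power.reshape(-1, factor).mean(axis=1)
# binned -> [1., 4., 7., 10.]
```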
 rebin_log(f=0.01)¶
Logarithmic rebin of the periodogram. The new frequency depends on the previous frequency modified by a factor f:
\[d\nu_j = d\nu_{j-1} (1+f)\]
 Parameters
 f: float, optional, default ``0.01``
parameter that steers the frequency resolution
 Returns
 new_spec
Crossspectrum
(or one of its subclasses) object The newly binned cross spectrum or power spectrum. Note: this object will be of the same type as the object that called this method. For example, if this method is called from
AveragedPowerspectrum
, it will return an object of class AveragedPowerspectrum, too.
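The geometric growth of the bin widths can be sketched directly from the formula \(d\nu_j = d\nu_{j-1}(1+f)\) (a sketch of the edge construction only, not Stingray's full rebin_log):

```python
import numpy as np

# Build bin edges where each width grows by a factor (1 + f).
f, width = 0.01, 0.1
edges = [0.0]
while edges[-1] < 10.0:
    edges.append(edges[-1] + width)
    width *= 1 + f   # the next bin is 1% wider

widths = np.diff(edges)
# consecutive widths all differ by the factor (1 + f)
```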
 time_lag()¶
Calculate the Fourier time lag of the cross spectrum. The time lag is calculated by taking the phase lag \(\phi\) and computing

\[\tau = \frac{\phi}{2 \pi \nu}\]
where \(\nu\) is the center of the frequency bins.

to_astropy_table()
astropy.table.Table
¶ Create an Astropy Table from a
StingrayObject
Array attributes (e.g.
time
,pi
,energy
, etc. forEventList
) are converted into columns, while meta attributes (mjdref
,gti
, etc.) are saved into themeta
dictionary.
 to_norm(norm, inplace=False)¶
Convert Cross spectrum to new normalization.
 Parameters
 normstr
The new normalization of the spectrum
 Returns
 new_specobject, same class as input
The new, normalized, spectrum.
 Other Parameters
 inplace: bool, default False
If True, change the current instance. Otherwise, return a new one
 to_pandas() DataFrame ¶
Create a pandas
DataFrame
from aStingrayObject
.Array attributes (e.g.
time
,pi
,energy
, etc. forEventList
) are converted into columns, while meta attributes (mjdref
,gti
, etc.) are saved into theds.attrs
dictionary.
 to_xarray() Dataset ¶
Create an
xarray
Dataset from aStingrayObject
.Array attributes (e.g.
time
,pi
,energy
, etc. forEventList
) are converted into columns, while meta attributes (mjdref
,gti
, etc.) are saved into theds.attrs
dictionary.
 type = 'powerspectrum'¶
Make a
Powerspectrum
(also called periodogram) from a (binned) light curve. Periodograms can be normalized by either Leahy normalization, fractional rms normalization, absolute rms normalization, or not at all.You can also make an empty
Powerspectrum
object to populate with your own fouriertransformed data (this can sometimes be useful when making binned power spectra). Parameters
 data:class:
stingray.Lightcurve
object, optional, defaultNone
The light curve data to be Fouriertransformed.
 norm: {“leahy”  “frac”  “abs”  “none” }, optional, default “frac”
The normalization of the power spectrum to be used. Options are "leahy", "frac", "abs" and "none"; default is "frac".
 Other Parameters
 gti: 2d float array
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
– Good Time intervals. This choice overrides the GTIs in the single light curves. Use with care, especially if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input object before making the power spectrum. skip_checks: bool
Skip initial checks, for speed or other reasons (you need to trust your inputs!).
 Attributes
 norm: {“leahy”  “frac”  “abs”  “none” }
The normalization of the power spectrum.
 freq: numpy.ndarray
The array of midbin frequencies that the Fourier transform samples.
 power: numpy.ndarray
The array of normalized squared absolute values of Fourier amplitudes.
 power_err: numpy.ndarray
The uncertainties of
power
. An approximation for each bin given bypower_err= power/sqrt(m)
, where ``m`` is the number of powers averaged in each bin (by frequency binning, or by averaging power spectra of segments of a light curve). Note that for a single realization (
) the error is equal to the power. df: float
The frequency resolution.
 m: int
The number of averaged powers in each bin.
 n: int
The number of data points in the light curve.
 nphots: float
The total number of photons in the light curve.
 legacy: bool
Use the legacy machinery of
AveragedPowerspectrum
. This might be useful to compare with old results, and is also needed to use light curve lists as an input, to conserve the spectra of each segment, or to use the large_data option.
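The Leahy normalization mentioned among the options can be sketched with numpy (an illustration of the formula \(P_j = 2 |a_j|^2 / N_{\mathrm{ph}}\), not Stingray's normalization code):

```python
import numpy as np

# For pure Poisson noise, Leahy-normalized powers follow a chi-square
# distribution with 2 degrees of freedom, whose mean is 2.
rng = np.random.default_rng(42)
counts = rng.poisson(100, 4096)
nphots = counts.sum()
amplitudes = np.fft.rfft(counts)[1:]           # drop the zero-frequency bin
power = 2 * np.abs(amplitudes) ** 2 / nphots   # Leahy-normalized powers
# power.mean() is close to 2 for white noise
```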
 write(filename: str, fmt: Optional[str] = None, format_=None) None ¶
Generic writer for :class:`StingrayObject`
Currently supported formats are
pickle (not recommended for longterm storage)
any other formats compatible with the writers in
astropy.table.Table
(ascii.ecsv, hdf5, etc.)
.. note:: Complex values can be dealt with out of the box in some formats, like HDF5 or FITS, but not in others (e.g. all ASCII formats). With those formats, and in any case when ``fmt`` is ``None``, complex values will be stored as two columns of real numbers, whose names are of the form <variablename>.real and <variablename>.imag
 Parameters
 filename: str
Name and path of the file to save the object list to.
 fmt: str
The file format to store the data in. Available options are
pickle
,hdf5
,ascii
,fits
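The pickle path can be sketched with the standard library alone (plain ``pickle`` standing in for the writer; a dict stands in for a StingrayObject here):

```python
import os
import pickle
import tempfile

# A minimal sketch of what fmt="pickle" amounts to: serialize the object
# to disk and read an identical object back.
obj = {"time": [0.0, 1.0, 2.0], "mjdref": 55000.0}
path = os.path.join(tempfile.mkdtemp(), "spectrum.pickle")
with open(path, "wb") as f:
    pickle.dump(obj, f)
with open(path, "rb") as f:
    restored = pickle.load(f)
# restored compares equal to obj
```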
AveragedCrossspectrum¶
 class stingray.AveragedCrossspectrum(data1=None, data2=None, segment_size=None, norm='frac', gti=None, power_type='all', silent=False, lc1=None, lc2=None, dt=None, fullspec=False, large_data=False, save_all=False, use_common_mean=True, legacy=False, skip_checks=False)[source]¶
 array_attrs() list[str] ¶
List the names of the array attributes of the Stingray Object.
By array attributes, we mean the ones with the same size and shape as
main_array_attr
(e.g.time
inEventList
)
 classical_significances(threshold=1, trial_correction=False)¶
Compute the classical significances for the powers in the power spectrum, assuming an underlying noise distribution that follows a chi-square distribution with 2M degrees of freedom, where M is the number of powers averaged in each bin.
Note that this function will only produce correct results when the following underlying assumptions are fulfilled:
The power spectrum is Leahynormalized
There is no source of variability in the data other than the periodic signal to be determined with this method. This is important! If there are other sources of (aperiodic) variability in the data, this method will not produce correct results, but instead produce a large number of spurious false positive detections!
There are no significant instrumental effects changing the statistical distribution of the powers (e.g. pileup or dead time)
By default, the method produces
(index,pvalues)
for all powers in the power spectrum, where index is the numerical index of the power in question. If athreshold
is set, then only powers with p-values below that threshold are reported with their respective indices. If ``trial_correction``
is set toTrue
, then the threshold will be corrected for the number of trials (frequencies) in the power spectrum before being used. Parameters
 thresholdfloat, optional, default
1
The threshold to be used when reporting pvalues of potentially significant powers. Must be between 0 and 1. Default is
1
(all pvalues will be reported). trial_correctionbool, optional, default
False
A Boolean flag that sets whether the
threshold
will be corrected by the number of frequencies before being applied. This decreases thethreshold
(pvalues need to be lower to count as significant). Default isFalse
(report all powers), though for any application where ``threshold``
is set to something meaningful, this should also be applied!
 Returns
 pvalsiterable
A list of
(index, pvalue)
tuples for all powers that have pvalues lower than the threshold specified inthreshold
.
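The p-value step for Leahy-normalized powers with ``m = 1`` can be hand-rolled with numpy: noise powers follow a chi-square distribution with 2 degrees of freedom, whose survival function is \(e^{-P/2}\). (A sketch only; the real method also supports ``m > 1`` and trial correction.)

```python
import numpy as np

# Survival function of chi^2 with 2 dof gives the single-trial p-value.
powers = np.array([2.0, 35.0, 4.0, 1.5])
pvals = np.exp(-powers / 2)

threshold = 0.01
significant = [(i, p) for i, p in enumerate(pvals) if p < threshold]
# only index 1 (power 35.0) survives: its p-value is exp(-17.5) ~ 2.5e-8
```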
 coherence()[source]¶
Averaged Coherence function.
Coherence is defined in Vaughan and Nowak, 1996 4. It is a Fourier frequency dependent measure of the linear correlation between time series measured simultaneously in two energy channels.
Compute an averaged Coherence function of cross spectrum by computing coherence function of each segment and averaging them. The return type is a tuple with first element as the coherence function and the second element as the corresponding uncertainty associated with it.
Note: the uncertainty in the coherence function is strictly valid for Gaussian statistics only.
 Returns
 (coh, uncertainty)tuple of np.ndarray
Tuple comprising the coherence function and uncertainty.
References
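The segment-averaging recipe described above can be sketched with numpy (an illustration of the estimator, not Stingray's implementation): for two identical channels the coherence is 1 at all frequencies.

```python
import numpy as np

# Average cross and power spectra over segments, then form the coherence.
rng = np.random.default_rng(0)
n_seg, n_bins = 32, 256
segments = rng.normal(size=(n_seg, n_bins))

f1 = np.fft.rfft(segments, axis=1)
f2 = np.fft.rfft(segments, axis=1)          # identical second channel
cross = np.mean(f1 * np.conj(f2), axis=0)   # averaged cross spectrum
p1 = np.mean(np.abs(f1) ** 2, axis=0)
p2 = np.mean(np.abs(f2) ** 2, axis=0)

coherence = np.abs(cross) ** 2 / (p1 * p2)  # -> 1 everywhere
```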
 classmethod from_astropy_table(ts: Table) Tso ¶
Create a StingrayObject from data in an Astropy Table.
The table MUST contain at least a column named like the
main_array_attr
. The rest of columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.It is strongly advisable to define such attributes and columns using the standard attributes of the wanted StingrayObject (e.g.
time
,pi
, etc. forEventList
)
 static from_events(events1, events2, dt, segment_size=None, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True, gti=None)¶
Calculate AveragedCrossspectrum from two event lists
 Parameters
 events1
stingray.EventList
Events from channel 1
 events2
stingray.EventList
Events from channel 2
 dtfloat
The time resolution of the intermediate light curves (sets the Nyquist frequency)
 Other Parameters
 segment_sizefloat
The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum
 normstr, default “frac”
The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram
 use_common_meanbool, default True
The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set
use_common_mean
to False to calculate it on a persegment basis. fullspecbool, default False
Return the full periodogram, including negative frequencies
 silentbool, default False
Silence the progress bars
 power_typestr, default ‘all’
If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’, the real part
 gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.
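The role of ``dt`` can be sketched with numpy: event arrival times are binned into an intermediate light curve before the FFT (an illustration only, not Stingray's code):

```python
import numpy as np

# Bin event times into counts per bin of width dt.
events = np.array([0.1, 0.3, 0.35, 1.2, 1.9])
dt = 0.5
edges = np.arange(0.0, 2.0 + dt, dt)
counts, _ = np.histogram(events, bins=edges)
# counts -> [3, 0, 1, 1]; the Nyquist frequency is 1 / (2 * dt) = 1 Hz
```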
 static from_lc_iterable(iter_lc1, iter_lc2, dt, segment_size, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True, gti=None)¶
Calculate AveragedCrossspectrum from two light curves
 Parameters
 iter_lc1iterable of
stingray.Lightcurve
objects ornp.array
Light curves from channel 1. If arrays, use them as counts
 iter_lc2iterable of
stingray.Lightcurve
objects ornp.array
Light curves from channel 2. If arrays, use them as counts
 dtfloat
The time resolution of the light curves (sets the Nyquist frequency)
 Other Parameters
 segment_sizefloat
The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum
 normstr, default “frac”
The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram
 use_common_meanbool, default True
The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set
use_common_mean
to False to calculate it on a persegment basis. fullspecbool, default False
Return the full periodogram, including negative frequencies
 silentbool, default False
Silence the progress bars
 power_typestr, default ‘all’
If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’, the real part
 gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.
 static from_lightcurve(lc1, lc2, segment_size=None, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True, gti=None)¶
Calculate AveragedCrossspectrum from two light curves
 Parameters
 lc1
stingray.Lightcurve
Light curve from channel 1
 lc2
stingray.Lightcurve
Light curve from channel 2
 Other Parameters
 segment_sizefloat
The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedCrossspectrum
 normstr, default “frac”
The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram
 use_common_meanbool, default True
The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set
use_common_mean
to False to calculate it on a persegment basis. fullspecbool, default False
Return the full periodogram, including negative frequencies
 silentbool, default False
Silence the progress bars
 power_typestr, default ‘all’
If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’, the real part
 gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.
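How ``segment_size`` slices a light curve into equal segments, dropping the leftover tail, can be sketched with numpy (an illustration of the averaging setup, not Stingray's code):

```python
import numpy as np

# Slice a binned light curve into full segments of segment_size seconds.
dt, segment_size = 0.5, 2.0
counts = np.arange(9, dtype=float)      # 9 bins = 4.5 s of data
bins_per_seg = int(segment_size / dt)   # 4 bins per 2 s segment
n_seg = len(counts) // bins_per_seg     # 2 full segments; last 0.5 s is lost
segments = counts[: n_seg * bins_per_seg].reshape(n_seg, bins_per_seg)
```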
 classmethod from_pandas(ts: DataFrame) Tso ¶
Create a StingrayObject from data in a pandas DataFrame.
The dataframe MUST contain at least a column named like the
main_array_attr
. The rest of columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.It is strongly advisable to define such attributes and columns using the standard attributes of the wanted StingrayObject (e.g.
time
,pi
, etc. forEventList
)
 static from_time_array(times1, times2, dt, segment_size=None, gti=None, norm='none', power_type='all', silent=False, fullspec=False, use_common_mean=True)¶
Calculate AveragedCrossspectrum from two arrays of event times.
 Parameters
 times1
np.array
Event arrival times of channel 1
 times2
np.array
Event arrival times of channel 2
 dtfloat
The time resolution of the intermediate light curves (sets the Nyquist frequency)
 Other Parameters
 segment_sizefloat
The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for
AveragedCrossspectrum
. gti[[gti0, gti1], …]
Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.
 normstr, default “frac”
The normalization of the periodogram. “abs” is absolute rms, “frac” is fractional rms, “leahy” is Leahy+83 normalization, and “none” is the unnormalized periodogram
 use_common_meanbool, default True
The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). Here we assume the mean is calculated on the full light curve, but the user can set
use_common_mean
to False to calculate it on a persegment basis. fullspecbool, default False
Return the full periodogram, including negative frequencies
 silentbool, default False
Silence the progress bars
 power_typestr, default ‘all’
If ‘all’, give complex powers. If ‘abs’, the absolute value; if ‘real’, the real part
 classmethod from_xarray(ts: Dataset) Tso ¶
Create a
StingrayObject
from data in an xarray Dataset. The dataset MUST contain at least a column named like the
main_array_attr
. The rest of columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.It is strongly advisable to define such attributes and columns using the standard attributes of the wanted StingrayObject (e.g.
time
,pi
, etc. forEventList
)
 initial_checks(data1, segment_size=None, **kwargs)[source]¶
Examples
>>> times = np.arange(0, 10)
>>> ev1 = EventList(times)
>>> ev2 = EventList(times)
>>> ac = AveragedCrossspectrum()
If AveragedCrossspectrum, you need
segment_size
>>> AveragedCrossspectrum.initial_checks(ac, data1=ev1, data2=ev2, dt=1)
Traceback (most recent call last):
...
ValueError: segment_size must be specified...

And it needs to be finite!

>>> AveragedCrossspectrum.initial_checks(ac, data1=ev1, data2=ev2, dt=1., segment_size=np.nan)
Traceback (most recent call last):
...
ValueError: segment_size must be finite!
 meta_attrs() list[str] ¶
List the names of the meta attributes of the Stingray Object.
By meta attributes, we mean the ones with a different size and shape than ``main_array_attr`` (e.g. ``mjdref`` in ``EventList``).
 plot(labels=None, axis=None, title=None, marker='', save=False, filename=None)¶
Plot the amplitude of the cross spectrum vs. the frequency using
matplotlib
. Parameters
 labelsiterable, default
None
A list of tuples with
xlabel
andylabel
as strings. axislist, tuple, string, default
None
Parameter to set axis properties of the
matplotlib
figure. For example it can be a list like[xmin, xmax, ymin, ymax]
or any other acceptable argument for the``matplotlib.pyplot.axis()`` method. titlestr, default
None
The title of the plot.
 markerstr, default ‘’
Line style and color of the plot. Line styles and colors are combined in a single format string, as in
'bo'
for blue circles. Seematplotlib.pyplot.plot
for more options. saveboolean, optional, default
False
If
True
, save the figure with specified filename. filenamestr
File name of the image to save. Depends on the boolean
save
.
 classmethod read(filename: str, fmt: str = None, format_=None) Tso ¶
Generic reader for :class:`StingrayObject`
Currently supported formats are
pickle (not recommended for longterm storage)
any other formats compatible with the writers in
astropy.table.Table
(ascii.ecsv, hdf5, etc.)
Files that need the
astropy.table.Table
interface MUST contain at least a column named like themain_array_attr
. The default ascii format is enhanced CSV (ECSV). Data formats supporting the serialization of metadata (such as ECSV and HDF5) can contain all attributes such asmission
,gti
, etc. with no significant loss of information. Other file formats might lose part of the metadata, so they must be used with care.
.. note:: Complex values can be dealt with out of the box in some formats, like HDF5 or FITS, but not in others (e.g. all ASCII formats). With those formats, and in any case when ``fmt`` is ``None``, complex values should be stored as two columns of real numbers, whose names are of the form <variablename>.real and <variablename>.imag
 Parameters
 filename: str
Path and file name for the file to be read.
 fmt: str
Available options are ‘pickle’, ‘hea’, and any
Table
supported format such as ‘hdf5’, ‘ascii.ecsv’, etc.
 Returns
 obj:class:
StingrayObject
object The object reconstructed from file
 rebin(df=None, f=None, method='mean')¶
Rebin the cross spectrum to a new frequency resolution
df
. Parameters
 df: float
The new frequency resolution
 Returns
 bin_cs =class:
Crossspectrum
(or one of its subclasses) object The newly binned cross spectrum or power spectrum. Note: this object will be of the same type as the object that called this method. For example, if this method is called from
AveragedPowerspectrum
, it will return an object of classAveragedPowerspectrum
, too.
 Other Parameters
 f: float
the rebin factor. If specified, it substitutes df with
f*self.df
 rebin_log(f=0.01)¶
Logarithmic rebin of the periodogram. The new frequency depends on the previous frequency modified by a factor f:
\[d\nu_j = d\nu_{j-1} (1+f)\]
 Parameters
 f: float, optional, default ``0.01``
parameter that steers the frequency resolution
 Returns
 new_spec
Crossspectrum
(or one of its subclasses) object The newly binned cross spectrum or power spectrum. Note: this object will be of the same type as the object that called this method. For example, if this method is called from
AveragedPowerspectrum
, it will return an object of class AveragedPowerspectrum, too.
 time_lag()[source]¶
Calculate time lag and uncertainty.
Equation from Bendat & Piersol, 2011 [bendat2011]_.
 Returns
 lagnp.ndarray
The time lag
 lag_errnp.ndarray
The uncertainty in the time lag

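The conversion from phase lag to time lag, \(\tau = \phi / (2 \pi \nu)\), can be illustrated with a direct numpy evaluation of the formula:

```python
import numpy as np

# The same phase lag corresponds to a smaller time lag at higher frequency.
freq = np.array([0.5, 1.0, 2.0])      # Hz, bin-center frequencies
phi = np.array([np.pi / 4] * 3)       # constant phase lag of pi/4 rad
tau = phi / (2 * np.pi * freq)        # time lag in seconds
# tau -> [0.25, 0.125, 0.0625]
```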
to_astropy_table()
astropy.table.Table
¶ Create an Astropy Table from a
StingrayObject
Array attributes (e.g.
time
,pi
,energy
, etc. forEventList
) are converted into columns, while meta attributes (mjdref
,gti
, etc.) are saved into themeta
dictionary.
 to_norm(norm, inplace=False)¶
Convert Cross spectrum to new normalization.
 Parameters
 normstr
The new normalization of the spectrum
 Returns
 new_specobject, same class as input
The new, normalized, spectrum.
 Other Parameters
 inplace: bool, default False
If True, change the current instance. Otherwise, return a new one
 to_pandas() DataFrame ¶
Create a pandas
DataFrame
from aStingrayObject
.Array attributes (e.g.
time
,pi
,energy
, etc. forEventList
) are converted into columns, while meta attributes (mjdref
,gti
, etc.) are saved into theds.attrs
dictionary.
 to_xarray() Dataset ¶
Create an
xarray
Dataset from aStingrayObject
.Array attributes (e.g.
time
,pi
,energy
, etc. forEventList
) are converted into columns, while meta attributes (mjdref
,gti
, etc.) are saved into theds.attrs
dictionary.
 type = 'crossspectrum'¶
Make an averaged cross spectrum from two light curves by segmenting them, Fourier-transforming each segment, and averaging the resulting cross spectra.
 Parameters
 data1:class:
stingray.Lightcurve`object OR iterable of :class:`stingray.Lightcurve
objects ORstingray.EventList
object A light curve from which to compute the cross spectrum. In some cases, this would be the light curve of the wavelength/energy/frequency band of interest.
 data2:class:
stingray.Lightcurve`object OR iterable of :class:`stingray.Lightcurve
objects ORstingray.EventList
object A second light curve to use in the cross spectrum. In some cases, this would be the wavelength/energy/frequency reference band to compare the band of interest with.
 segment_size: float
The size of each segment to average. Note that if the total duration of each
Lightcurve
object inlc1
orlc2
is not an integer multiple of thesegment_size
, then any fraction left over at the end of the time series will be lost; keeping it would introduce artifacts. norm: {``frac``, ``abs``, ``leahy``, ``none``}, default ``none``
The normalization of the (real part of the) cross spectrum.
 Other Parameters
 gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.
 dtfloat
The time resolution of the light curve. Only needed when constructing light curves in the case where data1 or data2 are of :class:EventList
 power_type: string, optional, default ``all``
Parameter to choose among complete, real part and magnitude of the cross spectrum.
 silentbool, default False
Do not show a progress bar when generating an averaged cross spectrum. Useful for the batch execution of many spectra
 lc1:class:
stingray.Lightcurve`object OR iterable of :class:`stingray.Lightcurve
objects For backwards compatibility only. Like
data1
, but nostingray.events.EventList
objects allowed lc2:class:
stingray.Lightcurve`object OR iterable of :class:`stingray.Lightcurve
objects For backwards compatibility only. Like
data2
, but nostingray.events.EventList
objects allowed fullspec: boolean, optional, default ``False``
If True, return the full array of frequencies, otherwise return just the positive frequencies.
 large_databool, default False
Use only for input light curves larger than \(10^7\) data points. Uses zarr and dask for computation.
 save_allbool, default False
Save all intermediate PDSs used for the final average. Use with care. This is likely to fill up your RAM on mediumsized datasets, and to slow down the computation when rebinning.
 skip_checks: bool
Skip initial checks, for speed or other reasons (you need to trust your inputs!)
 use_common_mean: bool
Averaged cross spectra are normalized in two possible ways: one is by normalizing each of the single spectra that get averaged, the other is by normalizing after the averaging. If
use_common_mean
is selected, the spectrum will be normalized after the average. legacy: bool
Use the legacy machinery of
AveragedCrossspectrum
. This might be useful to compare with old results, and is also needed to use light curve lists as an input, to conserve the spectra of each segment, or to use the large_data option. gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
Good Time intervals. Defaults to the common GTIs from the two input objects. Could throw errors if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input objects before making the cross spectrum.
 Attributes
 freq: numpy.ndarray
The array of midbin frequencies that the Fourier transform samples.
 power: numpy.ndarray
The array of cross spectra.
 power_err: numpy.ndarray
The uncertainties of
power
. An approximation for each bin given bypower_err= power/sqrt(m)
, where ``m`` is the number of powers averaged in each bin (by frequency binning, or by averaging power spectra of segments of a light curve). Note that for a single realization (
) the error is equal to the power. df: float
The frequency resolution.
 m: int
The number of averaged cross spectra.
 n: int
The number of time bins per segment of light curve.
 nphots1: float
The total number of photons in the first (interest) light curve.
 nphots2: float
The total number of photons in the second (reference) light curve.
 gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
Good Time intervals.
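The whole pipeline described by the class — segment, FFT each segment, average the cross spectra — can be sketched end to end with numpy (an unnormalized illustration, not Stingray's implementation):

```python
import numpy as np

# Segment two binned light curves, FFT each mean-subtracted segment,
# and average the per-segment cross spectra.
rng = np.random.default_rng(1)
dt, seg_bins = 0.01, 128
lc1 = rng.poisson(100, 1024).astype(float)
lc2 = rng.poisson(100, 1024).astype(float)

n_seg = len(lc1) // seg_bins
s1 = lc1[: n_seg * seg_bins].reshape(n_seg, seg_bins)
s2 = lc2[: n_seg * seg_bins].reshape(n_seg, seg_bins)
f1 = np.fft.rfft(s1 - s1.mean(axis=1, keepdims=True), axis=1)
f2 = np.fft.rfft(s2 - s2.mean(axis=1, keepdims=True), axis=1)

cross = np.mean(f1 * np.conj(f2), axis=0)   # m = n_seg averaged spectra
freq = np.fft.rfftfreq(seg_bins, dt)        # up to the Nyquist frequency
```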
 write(filename: str, fmt: Optional[str] = None, format_=None) None ¶
Generic writer for :class:`StingrayObject`
Currently supported formats are
pickle (not recommended for longterm storage)
any other formats compatible with the writers in
astropy.table.Table
(ascii.ecsv, hdf5, etc.)
.. note:: Complex values can be dealt with out of the box in some formats, like HDF5 or FITS, but not in others (e.g. all ASCII formats). With those formats, and in any case when ``fmt`` is ``None``, complex values will be stored as two columns of real numbers, whose names are of the form <variablename>.real and <variablename>.imag
 Parameters
 filename: str
Name and path of the file to save the object list to.
 fmt: str
The file format to store the data in. Available options are
pickle
,hdf5
,ascii
,fits
AveragedPowerspectrum¶
 class stingray.AveragedPowerspectrum(data=None, segment_size=None, norm='frac', gti=None, silent=False, dt=None, lc=None, large_data=False, save_all=False, skip_checks=False, use_common_mean=True, legacy=False)[source]¶
 array_attrs() list[str] ¶
List the names of the array attributes of the Stingray Object.
By array attributes, we mean the ones with the same size and shape as
main_array_attr
(e.g.time
inEventList
)
 classical_significances(threshold=1, trial_correction=False)¶
Compute the classical significances for the powers in the power spectrum, assuming an underlying noise distribution that follows a chi-square distribution with 2M degrees of freedom, where M is the number of powers averaged in each bin.
Note that this function will only produce correct results when the following underlying assumptions are fulfilled:
The power spectrum is Leahynormalized
There is no source of variability in the data other than the periodic signal to be determined with this method. This is important! If there are other sources of (aperiodic) variability in the data, this method will not produce correct results, but instead produce a large number of spurious false positive detections!
There are no significant instrumental effects changing the statistical distribution of the powers (e.g. pileup or dead time)
By default, the method produces
(index,pvalues)
for all powers in the power spectrum, where index is the numerical index of the power in question. If athreshold
is set, then only powers with pvalues below that threshold with their respective indices. Iftrial_correction
is set toTrue
, then the threshold will be corrected for the number of trials (frequencies) in the power spectrum before being used. Parameters
 thresholdfloat, optional, default
1
The threshold to be used when reporting pvalues of potentially significant powers. Must be between 0 and 1. Default is
1
(all pvalues will be reported). trial_correctionbool, optional, default
False
A Boolean flag that sets whether the
threshold
will be corrected by the number of frequencies before being applied. This decreases thethreshold
(pvalues need to be lower to count as significant). Default isFalse
(report all powers) though for any application wherethreshold`
is set to something meaningful, this should also be applied!
 thresholdfloat, optional, default
 Returns
 pvalsiterable
A list of
(pvalue, index)
tuples for all powers that have pvalues lower than the threshold specified inthreshold
.
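As an illustrative sketch of the underlying statistic (not Stingray's internal code): under the noise hypothesis, m times an averaged Leahy power follows a chi-square distribution with 2m degrees of freedom, so a single-trial p-value can be computed with scipy:

```python
# Single-trial p-value of one averaged Leahy power under pure noise (sketch).
from scipy.stats import chi2

m = 1          # number of powers averaged in this bin
power = 40.0   # Leahy-normalized power

# Under the noise hypothesis, m * power ~ chi-square with 2m degrees of freedom.
pval = chi2.sf(m * power, 2 * m)   # survival function = 1 - CDF

# A simple Bonferroni-style trial correction over n_trials frequencies
# (illustrative; n_trials is a hypothetical number of searched bins):
n_trials = 1000
pval_corrected = min(pval * n_trials, 1.0)
```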
 coherence()¶
Averaged Coherence function.
Coherence is defined in Vaughan and Nowak, 1996. It is a Fourier frequency-dependent measure of the linear correlation between time series measured simultaneously in two energy channels.
Compute an averaged coherence function of the cross spectrum by computing the coherence function of each segment and averaging them. The return value is a tuple whose first element is the coherence function and whose second element is the corresponding uncertainty.
Note: the uncertainty in the coherence function is strictly valid only for Gaussian statistics.
 Returns
 (coh, uncertainty): tuple of np.ndarray
Tuple comprising the coherence function and its uncertainty.
 compute_rms(min_freq, max_freq, white_noise_offset=0.0)¶
Compute the fractional rms amplitude in the power spectrum between two frequencies.
 Parameters
 min_freq: float
The lower frequency bound for the calculation.
 max_freq: float
The upper frequency bound for the calculation.
 Returns
 rms: float
The fractional rms amplitude contained between min_freq and max_freq.
 rms_err: float
The error on the fractional rms amplitude.
 Other Parameters
 white_noise_offset: float, default 0
This is the white noise level, in Leahy normalization. In the ideal case, this is 2. Dead time and other instrumental effects can alter it. The user can fit the white noise level outside this function, and it will be subtracted from the powers here.
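The core of the computation can be sketched with plain numpy (a simplified sketch assuming a frac-normalized spectrum, where integrating the power over frequency gives the squared fractional rms; Stingray's implementation additionally propagates errors and handles the white noise offset):

```python
import numpy as np

# Hypothetical frac-normalized power spectrum: units of (rms/mean)^2 / Hz.
freq = np.linspace(0.1, 10.0, 100)
power = np.full_like(freq, 0.02)
df = freq[1] - freq[0]

# Integrate the power between min_freq = 1 Hz and max_freq = 5 Hz.
mask = (freq >= 1.0) & (freq <= 5.0)
rms = np.sqrt(np.sum(power[mask]) * df)
```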
 classmethod from_astropy_table(ts: Table) → Tso¶
Create a StingrayObject from data in an Astropy Table.
The table MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the attributes in the table metadata will form the new meta attributes of the object.
It is strongly advisable to define such attributes and columns using the standard attributes of the wanted StingrayObject (e.g. time, pi, etc. for EventList).
 static from_events(events, dt, segment_size=None, gti=None, norm='frac', silent=False, use_common_mean=True)¶
Calculate an average power spectrum from an event list.
 Parameters
 events: stingray.EventList
Event list to be analyzed.
 dt: float
The time resolution of the intermediate light curves (sets the Nyquist frequency).
 Other Parameters
 segment_size: float
The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedPowerspectrum.
 gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``
Additional, optional Good Time Intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that happens, assign the desired GTIs to the input object.
 norm: str, default "frac"
The normalization of the periodogram. abs is absolute rms, frac is fractional rms, leahy is Leahy+83 normalization, and none is the unnormalized periodogram.
 use_common_mean: bool, default True
The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). By default, we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.
 silent: bool, default False
Silence the progress bars.
 static from_lc_iterable(iter_lc, dt, segment_size=None, gti=None, norm='frac', silent=False, use_common_mean=True)¶
Calculate the average power spectrum of an iterable collection of light curves.
 Parameters
 iter_lc: iterable of stingray.Lightcurve objects or np.array
Light curves. If arrays, they are used as counts.
 dt: float
The time resolution of the light curves (sets the Nyquist frequency).
 Other Parameters
 segment_size: float
The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedPowerspectrum.
 gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``
Additional, optional Good Time Intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that happens, assign the desired GTIs to the input object.
 norm: str, default "frac"
The normalization of the periodogram. abs is absolute rms, frac is fractional rms, leahy is Leahy+83 normalization, and none is the unnormalized periodogram.
 use_common_mean: bool, default True
The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). By default, we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.
 silent: bool, default False
Silence the progress bars.
 static from_lightcurve(lc, segment_size=None, gti=None, norm='frac', silent=False, use_common_mean=True)¶
Calculate a power spectrum from a light curve.
 Parameters
 lc: stingray.Lightcurve
Light curve to be analyzed.
 dt: float
The time resolution of the intermediate light curves (sets the Nyquist frequency).
 Other Parameters
 segment_size: float
The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedPowerspectrum.
 gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``
Additional, optional Good Time Intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that happens, assign the desired GTIs to the input object.
 norm: str, default "frac"
The normalization of the periodogram. abs is absolute rms, frac is fractional rms, leahy is Leahy+83 normalization, and none is the unnormalized periodogram.
 use_common_mean: bool, default True
The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). By default, we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.
 silent: bool, default False
Silence the progress bars.
 classmethod from_pandas(ts: DataFrame) → Tso¶
Create a StingrayObject object from data in a pandas DataFrame.
The dataframe MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the attributes in df.attrs will form the new meta attributes of the object.
It is strongly advisable to define such attributes and columns using the standard attributes of the wanted StingrayObject (e.g. time, pi, etc. for EventList).
 static from_time_array(times, dt, segment_size=None, gti=None, norm='frac', silent=False, use_common_mean=True)¶
Calculate an average power spectrum from an array of event times.
 Parameters
 times: np.array
Event arrival times.
 dt: float
The time resolution of the intermediate light curves (sets the Nyquist frequency).
 Other Parameters
 segment_size: float
The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedPowerspectrum.
 gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``
Additional, optional Good Time Intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that happens, assign the desired GTIs to the input object.
 norm: str, default "frac"
The normalization of the periodogram. abs is absolute rms, frac is fractional rms, leahy is Leahy+83 normalization, and none is the unnormalized periodogram.
 use_common_mean: bool, default True
The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). By default, we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.
 silent: bool, default False
Silence the progress bars.
 classmethod from_xarray(ts: Dataset) → Tso¶
Create a StingrayObject from data in an xarray Dataset.
The dataset MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.
It is strongly advisable to define such attributes and columns using the standard attributes of the wanted StingrayObject (e.g. time, pi, etc. for EventList).
 initial_checks(*args, **kwargs)[source]¶
Examples
>>> times = np.arange(0, 10)
>>> ev1 = EventList(times)
>>> ev2 = EventList(times)
>>> ac = AveragedCrossspectrum()
If AveragedCrossspectrum, you need segment_size:
>>> AveragedCrossspectrum.initial_checks(ac, data1=ev1, data2=ev2, dt=1)
Traceback (most recent call last):
...
ValueError: segment_size must be specified...
And it needs to be finite!
>>> AveragedCrossspectrum.initial_checks(ac, data1=ev1, data2=ev2, dt=1., segment_size=np.nan)
Traceback (most recent call last):
...
ValueError: segment_size must be finite!
 meta_attrs() → list[str]¶
List the names of the meta attributes of the Stingray Object.
By meta attributes, we mean the ones with a different size and shape than main_array_attr (e.g. mjdref in EventList).
 modulation_upper_limit(fmin=None, fmax=None, c=0.95)¶
Upper limit on a sinusoidal modulation.
To understand the meaning of this amplitude: if the modulation is described by
.. math:: p = \overline{p} (1 + a \sin(x))
this function returns a.
If it is a sum of sinusoidal harmonics instead,
.. math:: p = \overline{p} \left(1 + \sum_l a_l \sin(lx)\right)
a is equivalent to \(\sqrt{\sum_l a_l^2}\).
See stingray.stats.power_upper_limit, stingray.stats.amplitude_upper_limit for more information.
The formula used to calculate the upper limit assumes the Leahy normalization. If the periodogram is in another normalization, we will internally convert it to Leahy before calculating the upper limit.
 Parameters
 fmin: float
The minimum frequency to search (defaults to the first nonzero bin)
 fmax: float
The maximum frequency to search (defaults to the Nyquist frequency)
 Returns
 a: float
The modulation amplitude that could produce P > pmeas with 1 - c probability.
 Other Parameters
 c: float
The confidence value for the upper limit (e.g. 0.95 = 95%)
Examples
>>> pds = Powerspectrum()
>>> pds.norm = "leahy"
>>> pds.freq = np.arange(0., 5.)
>>> # Note: this pds has 40 as maximum value between 2 and 5 Hz
>>> pds.power = np.array([100000, 1, 1, 40, 1])
>>> pds.m = 1
>>> pds.nphots = 30000
>>> pds.modulation_upper_limit(fmin=2, fmax=5, c=0.99)
0.1016...
 phase_lag()¶
Return the Fourier phase lag of the cross spectrum.
 plot(labels=None, axis=None, title=None, marker='', save=False, filename=None)¶
Plot the amplitude of the cross spectrum vs. the frequency using matplotlib.
 Parameters
 labels: iterable, default None
A list of tuples with xlabel and ylabel as strings.
 axis: list, tuple, string, default None
Parameter to set axis properties of the matplotlib figure. For example, it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for the matplotlib.pyplot.axis() method.
 title: str, default None
The title of the plot.
 marker: str, default ''
Line style and color of the plot. Line styles and colors are combined in a single format string, as in 'bo' for blue circles. See matplotlib.pyplot.plot for more options.
 save: boolean, optional, default False
If True, save the figure with the specified filename.
 filename: str
File name of the image to save. Depends on the boolean save.
 classmethod read(filename: str, fmt: str = None, format_=None) → Tso¶
Generic reader for :class:`StingrayObject`.
Currently supported formats are
pickle (not recommended for long-term storage)
any other formats compatible with the writers in
astropy.table.Table
(ascii.ecsv, hdf5, etc.)
Files that need the astropy.table.Table interface MUST contain at least a column named like the main_array_attr. The default ascii format is enhanced CSV (ECSV). Data formats supporting the serialization of metadata (such as ECSV and HDF5) can contain all attributes, such as mission, gti, etc., with no significant loss of information. Other file formats might lose part of the metadata, so they must be used with care.
.. note::
Complex values can be dealt with out of the box in some formats, like HDF5 or FITS, but not in others (e.g. all ASCII formats). With the latter formats, and in any case when fmt is ``None``, complex values should be stored as two columns of real numbers, whose names are of the format <variablename>.real and <variablename>.imag
 Parameters
 filename: str
Path and file name for the file to be read.
 fmt: str
Available options are 'pickle', 'hea', and any Table-supported format such as 'hdf5', 'ascii.ecsv', etc.
 Returns
 obj: :class:`StingrayObject` object
The object reconstructed from file
 rebin(df=None, f=None, method='mean')¶
Rebin the power spectrum.
 Parameters
 df: float
The new frequency resolution.
 Returns
 bin_cs: :class:`Powerspectrum` object
The newly binned power spectrum.
 Other Parameters
 f: float
The rebin factor. If specified, it substitutes df with f*self.df, so f > 1 is recommended.
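Linear rebinning amounts to averaging powers over groups of adjacent frequency bins; an illustrative numpy sketch for an integer factor (Stingray's rebin also handles non-integer factors and error propagation):

```python
import numpy as np

# Eight frequency bins with df = 1 Hz, rebinned by an integer factor f = 4.
freq = np.arange(1.0, 9.0)   # bin centers
power = np.arange(8.0)
f = 4

new_power = power.reshape(-1, f).mean(axis=1)  # mean power of each group
new_freq = freq.reshape(-1, f).mean(axis=1)    # new bin centers
```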
 rebin_log(f=0.01)¶
Logarithmic rebin of the periodogram. The new frequency depends on the previous frequency modified by a factor f:
\[d\nu_j = d\nu_{j-1} (1+f)\]
 Parameters
 f: float, optional, default ``0.01``
parameter that steers the frequency resolution
 Returns
 new_spec: :class:`Crossspectrum` (or one of its subclasses) object
The newly binned cross spectrum or power spectrum. Note: this object will be of the same type as the object that called this method. For example, if this method is called from AveragedPowerspectrum, it will return an object of class AveragedPowerspectrum.
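The geometric growth of the bin width follows directly from the formula above; an illustrative sketch of how the new bin edges could be built (not Stingray's internal code):

```python
import numpy as np

# Each new bin width is (1 + f) times the previous one.
f = 0.01
df = 0.1              # initial frequency resolution
edges = [0.1]         # lower edge of the first bin
while edges[-1] < 10.0:
    df *= 1 + f       # d(nu)_j = d(nu)_{j-1} * (1 + f)
    edges.append(edges[-1] + df)

widths = np.diff(edges)
```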
 time_lag()¶
Calculate time lag and uncertainty.
Equation from Bendat & Piersol, 2011.
 Returns
 lagnp.ndarray
The time lag
 lag_errnp.ndarray
The uncertainty in the time lag

 to_astropy_table() → astropy.table.Table¶
Create an Astropy Table from a StingrayObject.
Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the meta dictionary.
 to_norm(norm, inplace=False)¶
Convert the spectrum to a new normalization.
 Parameters
 normstr
The new normalization of the spectrum
 Returns
 new_specobject, same class as input
The new, normalized, spectrum.
 Other Parameters
 inplace: bool, default False
If True, change the current instance. Otherwise, return a new one.
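The standard conversions between the normalizations listed above can be sketched as follows (illustrative relations assuming a mean count rate `rate` in counts/s; a sketch of the underlying algebra, not Stingray's to_norm code):

```python
# Illustrative relations between Leahy, fractional-rms and absolute-rms norms.
rate = 100.0     # assumed mean count rate in counts/s
p_leahy = 2.0    # the Poisson white-noise level in Leahy normalization

p_frac = p_leahy / rate    # fractional rms: (rms/mean)^2 / Hz
p_abs = p_frac * rate**2   # absolute rms: (counts/s)^2 / Hz
```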
 to_pandas() → DataFrame¶
Create a pandas DataFrame from a StingrayObject.
Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the df.attrs dictionary.
 to_xarray() → Dataset¶
Create an xarray Dataset from a StingrayObject.
Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the ds.attrs dictionary.
 type = 'powerspectrum'¶
Make an averaged periodogram from a light curve by segmenting the light curve, Fourier-transforming each segment, and then averaging the resulting periodograms.
 Parameters
 data: :class:`stingray.Lightcurve` object OR iterable of :class:`stingray.Lightcurve` objects OR :class:`stingray.EventList` object
The light curve data to be Fourier-transformed.
 segment_size: float
The size of each segment to average. Note that if the total duration of each Lightcurve object in lc is not an integer multiple of the segment_size, then any fraction left over at the end of the time series will be lost.
 norm: {"leahy" | "frac" | "abs" | "none"}, optional, default "frac"
The normalization of the periodogram to be used.
 Other Parameters
 gti: 2d float array
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...] -- Good Time Intervals. This choice overrides the GTIs in the single light curves. Use with care, especially if these GTIs have overlaps with the input object GTIs! If you're getting errors regarding your GTIs, don't use this, and only give GTIs to the input object before making the power spectrum.
 silent: bool, default False
Do not show a progress bar when generating an averaged cross spectrum. Useful for the batch execution of many spectra.
 dt: float
The time resolution of the light curve. Only needed when constructing light curves in the case where data is of :class:`EventList`.
 large_data: bool, default False
Use only for data larger than 10**7 data points!! Uses zarr and dask for computation.
 save_all: bool, default False
Save all intermediate PDSs used for the final average. Use with care: this is likely to fill up your RAM on medium-sized datasets, and to slow down the computation when rebinning.
 skip_checks: bool
Skip initial checks, for speed or other reasons (you need to trust your inputs!).
 Attributes
 norm: {``leahy`` | ``frac`` | ``abs`` | ``none``}
The normalization of the periodogram.
 freq: numpy.ndarray
The array of mid-bin frequencies that the Fourier transform samples.
 power: numpy.ndarray
The array of normalized squared absolute values of Fourier amplitudes.
 power_err: numpy.ndarray
The uncertainties of power. An approximation for each bin is given by power_err = power / sqrt(m), where m is the number of powers averaged in each bin (by frequency binning, or by averaging power spectra of segments of a light curve). Note that for a single realization (m = 1) the error is equal to the power.
 df: float
The frequency resolution.
 m: int
The number of averaged periodograms.
 n: int
The number of data points in the light curve.
 nphots: float
The total number of photons in the light curve.
 legacy: bool
Use the legacy machinery of AveragedPowerspectrum. This might be useful to compare with old results, and is also needed to use light curve lists as an input, to conserve the spectra of each segment, or to use the large_data option.
 write(filename: str, fmt: Optional[str] = None, format_=None) → None¶
Generic writer for :class:`StingrayObject`.
Currently supported formats are
pickle (not recommended for long-term storage)
any other formats compatible with the writers in
astropy.table.Table
(ascii.ecsv, hdf5, etc.)
.. note::
Complex values can be dealt with out of the box in some formats, like HDF5 or FITS, but not in others (e.g. all ASCII formats). With the latter formats, and in any case when fmt is ``None``, complex values will be stored as two columns of real numbers, whose names are of the format <variablename>.real and <variablename>.imag
 Parameters
 filename: str
Name and path of the file to save the object list to.
 fmt: str
The file format to store the data in. Available options are pickle, hdf5, ascii, fits.
Dynamical Powerspectrum¶
 class stingray.DynamicalPowerspectrum(lc, segment_size, norm='frac', gti=None, dt=None)[source]¶
 array_attrs() → list[str]¶
List the names of the array attributes of the Stingray Object.
By array attributes, we mean the ones with the same size and shape as main_array_attr (e.g. time in EventList).
 classical_significances(threshold=1, trial_correction=False)¶
Compute the classical significances for the powers in the power spectrum, assuming an underlying noise distribution that follows a chi-square distribution with 2M degrees of freedom, where M is the number of powers averaged in each bin.
Note that this function will only produce correct results when the following underlying assumptions are fulfilled:
The power spectrum is Leahy-normalized
There is no source of variability in the data other than the periodic signal to be determined with this method. This is important! If there are other sources of (aperiodic) variability in the data, this method will not produce correct results, but instead produce a large number of spurious false positive detections!
There are no significant instrumental effects changing the statistical distribution of the powers (e.g. pile-up or dead time)
By default, the method produces (index, p-value) pairs for all powers in the power spectrum, where index is the numerical index of the power in question. If a threshold is set, then only powers with p-values below that threshold are reported, with their respective indices. If trial_correction is set to True, then the threshold will be corrected for the number of trials (frequencies) in the power spectrum before being used.
 Parameters
 threshold: float, optional, default 1
The threshold to be used when reporting p-values of potentially significant powers. Must be between 0 and 1. Default is 1 (all p-values will be reported).
 trial_correction: bool, optional, default False
A Boolean flag that sets whether the threshold will be corrected by the number of frequencies before being applied. This decreases the threshold (p-values need to be lower to count as significant). Default is False (report all powers), though for any application where threshold is set to something meaningful, this should also be applied!
 Returns
 pvals: iterable
A list of (p-value, index) tuples for all powers that have p-values lower than the threshold specified in threshold.
 coherence()¶
Averaged Coherence function.
Coherence is defined in Vaughan and Nowak, 1996. It is a Fourier frequency-dependent measure of the linear correlation between time series measured simultaneously in two energy channels.
Compute an averaged coherence function of the cross spectrum by computing the coherence function of each segment and averaging them. The return value is a tuple whose first element is the coherence function and whose second element is the corresponding uncertainty.
Note: the uncertainty in the coherence function is strictly valid only for Gaussian statistics.
 Returns
 (coh, uncertainty): tuple of np.ndarray
Tuple comprising the coherence function and its uncertainty.
 compute_rms(min_freq, max_freq, white_noise_offset=0.0)¶
Compute the fractional rms amplitude in the power spectrum between two frequencies.
 Parameters
 min_freq: float
The lower frequency bound for the calculation.
 max_freq: float
The upper frequency bound for the calculation.
 Returns
 rms: float
The fractional rms amplitude contained between min_freq and max_freq.
 rms_err: float
The error on the fractional rms amplitude.
 Other Parameters
 white_noise_offset: float, default 0
This is the white noise level, in Leahy normalization. In the ideal case, this is 2. Dead time and other instrumental effects can alter it. The user can fit the white noise level outside this function, and it will be subtracted from the powers here.
 classmethod from_astropy_table(ts: Table) → Tso¶
Create a StingrayObject from data in an Astropy Table.
The table MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the attributes in the table metadata will form the new meta attributes of the object.
It is strongly advisable to define such attributes and columns using the standard attributes of the wanted StingrayObject (e.g. time, pi, etc. for EventList).
 static from_events(events, dt, segment_size=None, gti=None, norm='frac', silent=False, use_common_mean=True)¶
Calculate an average power spectrum from an event list.
 Parameters
 events: stingray.EventList
Event list to be analyzed.
 dt: float
The time resolution of the intermediate light curves (sets the Nyquist frequency).
 Other Parameters
 segment_size: float
The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedPowerspectrum.
 gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``
Additional, optional Good Time Intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that happens, assign the desired GTIs to the input object.
 norm: str, default "frac"
The normalization of the periodogram. abs is absolute rms, frac is fractional rms, leahy is Leahy+83 normalization, and none is the unnormalized periodogram.
 use_common_mean: bool, default True
The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). By default, we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.
 silent: bool, default False
Silence the progress bars.
 static from_lc_iterable(iter_lc, dt, segment_size=None, gti=None, norm='frac', silent=False, use_common_mean=True)¶
Calculate the average power spectrum of an iterable collection of light curves.
 Parameters
 iter_lc: iterable of stingray.Lightcurve objects or np.array
Light curves. If arrays, they are used as counts.
 dt: float
The time resolution of the light curves (sets the Nyquist frequency).
 Other Parameters
 segment_size: float
The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedPowerspectrum.
 gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``
Additional, optional Good Time Intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that happens, assign the desired GTIs to the input object.
 norm: str, default "frac"
The normalization of the periodogram. abs is absolute rms, frac is fractional rms, leahy is Leahy+83 normalization, and none is the unnormalized periodogram.
 use_common_mean: bool, default True
The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). By default, we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.
 silent: bool, default False
Silence the progress bars.
 static from_lightcurve(lc, segment_size=None, gti=None, norm='frac', silent=False, use_common_mean=True)¶
Calculate a power spectrum from a light curve.
 Parameters
 lc: stingray.Lightcurve
Light curve to be analyzed.
 dt: float
The time resolution of the intermediate light curves (sets the Nyquist frequency).
 Other Parameters
 segment_size: float
The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedPowerspectrum.
 gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``
Additional, optional Good Time Intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that happens, assign the desired GTIs to the input object.
 norm: str, default "frac"
The normalization of the periodogram. abs is absolute rms, frac is fractional rms, leahy is Leahy+83 normalization, and none is the unnormalized periodogram.
 use_common_mean: bool, default True
The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). By default, we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.
 silent: bool, default False
Silence the progress bars.
 classmethod from_pandas(ts: DataFrame) → Tso¶
Create a StingrayObject object from data in a pandas DataFrame.
The dataframe MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the attributes in df.attrs will form the new meta attributes of the object.
It is strongly advisable to define such attributes and columns using the standard attributes of the wanted StingrayObject (e.g. time, pi, etc. for EventList).
 static from_time_array(times, dt, segment_size=None, gti=None, norm='frac', silent=False, use_common_mean=True)¶
Calculate an average power spectrum from an array of event times.
 Parameters
 times: np.array
Event arrival times.
 dt: float
The time resolution of the intermediate light curves (sets the Nyquist frequency).
 Other Parameters
 segment_size: float
The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for AveragedPowerspectrum.
 gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``
Additional, optional Good Time Intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that happens, assign the desired GTIs to the input object.
 norm: str, default "frac"
The normalization of the periodogram. abs is absolute rms, frac is fractional rms, leahy is Leahy+83 normalization, and none is the unnormalized periodogram.
 use_common_mean: bool, default True
The mean of the light curve can be estimated in each interval, or on the full light curve. This gives different results (Alston+2013). By default, we assume the mean is calculated on the full light curve, but the user can set use_common_mean to False to calculate it on a per-segment basis.
 silent: bool, default False
Silence the progress bars.
 classmethod from_xarray(ts: Dataset) → Tso¶
Create a StingrayObject from data in an xarray Dataset.
The dataset MUST contain at least a column named like the main_array_attr. The rest of the columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.
It is strongly advisable to define such attributes and columns using the standard attributes of the wanted StingrayObject (e.g. time, pi, etc. for EventList).
 initial_checks(*args, **kwargs)¶
Examples
>>> times = np.arange(0, 10)
>>> ev1 = EventList(times)
>>> ev2 = EventList(times)
>>> ac = AveragedCrossspectrum()
If AveragedCrossspectrum, you need segment_size:
>>> AveragedCrossspectrum.initial_checks(ac, data1=ev1, data2=ev2, dt=1)
Traceback (most recent call last):
...
ValueError: segment_size must be specified...
And it needs to be finite!
>>> AveragedCrossspectrum.initial_checks(ac, data1=ev1, data2=ev2, dt=1., segment_size=np.nan)
Traceback (most recent call last):
...
ValueError: segment_size must be finite!
 meta_attrs() → list[str]¶
List the names of the meta attributes of the Stingray Object.
By meta attributes, we mean the ones with a different size and shape than main_array_attr (e.g. mjdref in EventList).
 modulation_upper_limit(fmin=None, fmax=None, c=0.95)¶
Upper limit on a sinusoidal modulation.
To understand the meaning of this amplitude: if the modulation is described by
.. math:: p = \overline{p} (1 + a \sin(x))
this function returns a.
If it is a sum of sinusoidal harmonics instead,
.. math:: p = \overline{p} \left(1 + \sum_l a_l \sin(lx)\right)
a is equivalent to \(\sqrt{\sum_l a_l^2}\).
See stingray.stats.power_upper_limit, stingray.stats.amplitude_upper_limit for more information.
The formula used to calculate the upper limit assumes the Leahy normalization. If the periodogram is in another normalization, we will internally convert it to Leahy before calculating the upper limit.
 Parameters
 fmin: float
The minimum frequency to search (defaults to the first nonzero bin)
 fmax: float
The maximum frequency to search (defaults to the Nyquist frequency)
 Returns
 a: float
The modulation amplitude that could produce P > pmeas with probability 1 - c.
 Other Parameters
 c: float
The confidence value for the upper limit (e.g. 0.95 = 95%)
Examples
>>> pds = Powerspectrum()
>>> pds.norm = "leahy"
>>> pds.freq = np.arange(0., 5.)
>>> # Note: this pds has 40 as maximum value between 2 and 5 Hz
>>> pds.power = np.array([100000, 1, 1, 40, 1])
>>> pds.m = 1
>>> pds.nphots = 30000
>>> pds.modulation_upper_limit(fmin=2, fmax=5, c=0.99)
0.1016...
 phase_lag()¶
Return the Fourier phase lag of the cross spectrum.
 plot(labels=None, axis=None, title=None, marker='-', save=False, filename=None)¶
Plot the amplitude of the cross spectrum vs. the frequency using matplotlib.
 Parameters
 labels: iterable, default None
A list of tuples with xlabel and ylabel as strings.
 axis: list, tuple, string, default None
Parameter to set axis properties of the matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for the matplotlib.pyplot.axis() method.
 title: str, default None
The title of the plot.
 marker: str, default '-'
Line style and color of the plot. Line styles and colors are combined in a single format string, as in 'bo' for blue circles. See matplotlib.pyplot.plot for more options.
 save: boolean, optional, default False
If True, save the figure with the specified filename.
 filename: str
File name of the image to save. Used only if save is True.
 classmethod read(filename: str, fmt: str = None, format_=None) → Tso¶
Generic reader for StingrayObject.
Currently supported formats are:
pickle (not recommended for long-term storage)
any other format compatible with the writers in astropy.table.Table (ascii.ecsv, hdf5, etc.)
Files that need the astropy.table.Table interface MUST contain at least a column named like the main_array_attr. The default ascii format is enhanced CSV (ECSV). Data formats supporting the serialization of metadata (such as ECSV and HDF5) can contain all attributes, such as mission, gti, etc., with no significant loss of information. Other file formats might lose part of the metadata, so they must be used with care.
Note: complex values can be dealt with out-of-the-box in some formats, like HDF5 or FITS, but not in others (e.g. all ASCII formats). With the latter formats, and in any case when fmt is None, complex values should be stored as two columns of real numbers, whose names are of the format <variablename>.real and <variablename>.imag
 Parameters
 filename: str
Path and file name for the file to be read.
 fmt: str
Available options are ‘pickle’, ‘hea’, and any
Table
supported format such as ‘hdf5’, ‘ascii.ecsv’, etc.
 Returns
 obj: StingrayObject object
The object reconstructed from the file
 rebin(df=None, f=None, method='mean')¶
Rebin the power spectrum.
 Parameters
 df: float
The new frequency resolution.
 Returns
 bin_cs: Powerspectrum object
The newly binned power spectrum.
 Other Parameters
 f: float
The rebin factor. If specified, it substitutes df with f*self.df, so f > 1 is recommended.
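What rebinning by a factor f amounts to can be sketched in pure NumPy (an illustration with made-up numbers, not the Stingray implementation):

```python
import numpy as np

# Hypothetical periodogram: 100 bins with frequency resolution df
df = 0.1
freq = np.arange(1, 101) * df
power = np.ones_like(freq)

f = 4                      # rebin factor: new resolution is f * df
n_new = len(power) // f    # leftover bins at the end are dropped

# "mean" method: average f consecutive bins
power_new = power[: n_new * f].reshape(n_new, f).mean(axis=1)
freq_new = freq[: n_new * f].reshape(n_new, f).mean(axis=1)

print(len(power_new))  # 25 bins at the coarser resolution
```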
 rebin_frequency(df_new, method='sum')[source]¶
Rebin the Dynamic Power Spectrum to a new frequency resolution. Rebinning is an in-place operation, i.e. it will replace the existing dyn_ps attribute.
While the new resolution need not be an integer multiple of the previous frequency resolution, be aware that if it is not, the last bin will be cut off by the fraction left over by the integer division.
 Parameters
 df_new: float
The new frequency resolution of the dynamical power spectrum. Must be larger than the frequency resolution of the old dynamical power spectrum!
 method: {"sum" | "mean" | "average"}, optional, default "sum"
This keyword argument sets whether the counts in the new bins should be summed or averaged.
 rebin_log(f=0.01)¶
Logarithmic rebin of the periodogram. The new frequency depends on the previous frequency, modified by a factor f:
\[d\nu_j = d\nu_{j-1} (1+f)\]
 Parameters
 f: float, optional, default ``0.01``
Parameter that steers the frequency resolution
 Returns
 new_spec: Crossspectrum (or one of its subclasses) object
The newly binned cross spectrum or power spectrum. Note: this object will be of the same type as the object that called this method. For example, if this method is called from AveragedPowerspectrum, it will return an object of class AveragedPowerspectrum.
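The geometric growth of the bin widths implied by the formula above can be sketched in plain NumPy (arbitrary start values; not the Stingray code):

```python
import numpy as np

f = 0.1            # each bin is (1 + f) times wider than the previous one
df0 = 1.0          # width of the first bin (arbitrary)
fmin, fmax = 1.0, 100.0

edges = [fmin]
dnu = df0
while edges[-1] + dnu < fmax:
    edges.append(edges[-1] + dnu)
    dnu *= 1.0 + f              # d nu_j = d nu_{j-1} * (1 + f)
edges.append(fmax)              # last bin is truncated at fmax
edges = np.array(edges)

widths = np.diff(edges)
# All width ratios (except the truncated last bin) equal 1 + f
print(np.allclose(widths[1:-1] / widths[:-2], 1.0 + f))
```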
 rebin_time(dt_new, method='sum')[source]¶
Rebin the Dynamic Power Spectrum to a new time resolution. While the new resolution need not be an integer multiple of the previous time resolution, be aware that if it is not, the last bin will be cut off by the fraction left over by the integer division.
 Parameters
 dt_new: float
The new time resolution of the dynamical power spectrum. Must be larger than the time resolution of the old dynamical power spectrum!
 method: {"sum" | "mean" | "average"}, optional, default "sum"
This keyword argument sets whether the counts in the new bins should be summed or averaged.
 Returns
 time_new: numpy.ndarray
Time axis with new rebinned time resolution.
 dynspec_new: numpy.ndarray
New rebinned Dynamical Power Spectrum.
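A NumPy sketch of this operation on a made-up spectrogram matrix (illustrative only; the real method also handles non-integer resolution ratios):

```python
import numpy as np

# Hypothetical dynamical power spectrum: rows = frequencies, columns = time bins
n_freq, n_seg = 8, 10
dyn_ps = np.ones((n_freq, n_seg))
dt_old = 2.0
time = np.arange(n_seg) * dt_old + dt_old / 2   # mid-segment times

dt_new = 6.0
step = int(dt_new // dt_old)    # 3 old columns per new column
n_new = n_seg // step           # the leftover 10th column is cut off

# "sum" method: add the powers of the grouped time bins
dynspec_new = dyn_ps[:, : n_new * step].reshape(n_freq, n_new, step).sum(axis=2)
time_new = time[: n_new * step].reshape(n_new, step).mean(axis=1)

print(dynspec_new.shape)  # (8, 3)
```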
 time_lag()¶
Calculate time lag and uncertainty.
Equation from Bendat & Piersol, 2011 [bendat2011].
 Returns
 lag: np.ndarray
The time lag
 lag_err: np.ndarray
The uncertainty in the time lag

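The phase lag returned by phase_lag() relates to the time lag through the frequency of each bin; a minimal sketch of that conversion with illustrative values:

```python
import numpy as np

# A phase lag phi (radians) at frequency nu corresponds to a
# time lag tau = phi / (2 * pi * nu).
freq = np.array([0.5, 1.0, 2.0])       # Hz
phase_lag = np.full(3, np.pi / 2)      # a quarter-cycle lag at each frequency

time_lag = phase_lag / (2 * np.pi * freq)
print(time_lag)  # [0.5   0.25  0.125] seconds
```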
 to_astropy_table() → astropy.table.Table¶
Create an Astropy Table from a StingrayObject.
Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the meta dictionary.
 to_norm(norm, inplace=False)¶
Convert Cross spectrum to new normalization.
 Parameters
 normstr
The new normalization of the spectrum
 Returns
 new_specobject, same class as input
The new, normalized, spectrum.
 Other Parameters
 inplace: bool, default False
If True, change the current instance. Otherwise, return a new one
 to_pandas() → DataFrame¶
Create a pandas DataFrame from a StingrayObject.
Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the attrs dictionary of the DataFrame.
 to_xarray() → Dataset¶
Create an xarray Dataset from a StingrayObject.
Array attributes (e.g. time, pi, energy, etc. for EventList) are converted into columns, while meta attributes (mjdref, gti, etc.) are saved into the ds.attrs dictionary.
 trace_maximum(min_freq=None, max_freq=None)[source]¶
Return the indices of the maximum powers in each segment Powerspectrum between specified frequencies.
 Parameters
 min_freq: float, default ``None``
The lower frequency bound.
 max_freq: float, default ``None``
The upper frequency bound.
 Returns
 max_positions: np.array
The array of indices of the maximum power in each segment, having frequency between min_freq and max_freq.
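In essence this is a band-limited argmax over each column of the dynamical power spectrum; a NumPy sketch with a fabricated spectrogram:

```python
import numpy as np

# Fabricated spectrogram: one power spectrum per column (time segment)
freq = np.linspace(0.1, 10.0, 100)
rng = np.random.default_rng(0)
dyn_ps = rng.random((100, 5))
dyn_ps[30, :] = 10.0          # strong signal at freq[30] = 3.1 Hz

min_freq, max_freq = 1.0, 5.0
band = (freq >= min_freq) & (freq <= max_freq)
idx_band = np.where(band)[0]

# Index (into the full frequency array) of the maximum power per segment
max_positions = idx_band[np.argmax(dyn_ps[band, :], axis=0)]
print(max_positions)  # [30 30 30 30 30]
```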
 type = 'powerspectrum'¶
Create a dynamical power spectrum, also often called a spectrogram.
This class will divide a Lightcurve object into segments of length segment_size, create a power spectrum for each segment and store all powers in a matrix as a function of both time (using the midpoint of each segment) and frequency.
This is often used to trace changes in the period of a (quasi-)periodic signal over time.
 Parameters
 lc: stingray.Lightcurve or stingray.EventList object
The time series or event list of which the dynamical power spectrum is to be calculated.
 segment_size: float, default 1
Length of the segment of the light curve, default value is 1 (in whatever units the time array in the Lightcurve object uses).
 norm: {"leahy" | "frac" | "abs" | "none"}, optional, default "frac"
The normalization of the periodogram to be used.
 Other Parameters
 gti: 2d float array
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
– Good Time intervals. This choice overrides the GTIs in the single light curves. Use with care, especially if these GTIs have overlaps with the input object GTIs! If you’re getting errors regarding your GTIs, don’t use this and only give GTIs to the input object before making the power spectrum.
 Attributes
 segment_size: float
The size of each segment to average. Note that if the total duration of each input object in lc is not an integer multiple of the segment_size, then any fraction leftover at the end of the time series will be lost.
 dyn_ps: np.ndarray
The matrix of normalized squared absolute values of Fourier amplitudes. The axes are given by the freq and time attributes.
 norm: {``leahy`` | ``frac`` | ``abs`` | ``none``}
The normalization of the periodogram.
 freq: numpy.ndarray
The array of mid-bin frequencies that the Fourier transform samples.
 df: float
The frequency resolution.
 dt: float
The time resolution.
 write(filename: str, fmt: Optional[str] = None, format_=None) → None¶
Generic writer for StingrayObject.
Currently supported formats are:
pickle (not recommended for long-term storage)
any other format compatible with the writers in astropy.table.Table (ascii.ecsv, hdf5, etc.)
Note: complex values can be dealt with out-of-the-box in some formats, like HDF5 or FITS, but not in others (e.g. all ASCII formats). With the latter formats, and in any case when fmt is None, complex values will be stored as two columns of real numbers, whose names are of the format <variablename>.real and <variablename>.imag
 Parameters
 filename: str
Name and path of the file to save the object list to.
 fmt: str
The file format to store the data in. Available options are pickle, hdf5, ascii, fits
CrossCorrelation¶
 class stingray.CrossCorrelation(lc1=None, lc2=None, cross=None, mode='same', norm='none')[source]¶
Make a cross-correlation from light curves or a cross spectrum.
You can also make an empty Crosscorrelation object to populate with your own cross-correlation data.
 Parameters
 lc1: stingray.Lightcurve object, optional, default None
The first light curve data for correlation calculations.
 lc2: stingray.Lightcurve object, optional, default None
The second light curve data for the correlation calculations.
 cross: stingray.Crossspectrum object, default None
The cross spectrum data for the correlation calculations.
 mode: {``full``, ``valid``, ``same``}, optional, default ``same``
A string indicating the size of the correlation output. See the relevant scipy documentation [scipy-docs] for more details.
 norm: {``none``, ``variance``}
If "variance", the cross-correlation is normalized so that perfect correlation gives 1, and perfect anticorrelation gives -1. See Gaskell & Peterson 1987, Gardner & Done 2017.
References
 Attributes
 lc1: stingray.Lightcurve
The first light curve data for correlation calculations.
 lc2: stingray.Lightcurve
The second light curve data for the correlation calculations.
 cross: stingray.Crossspectrum
The cross spectrum data for the correlation calculations.
 corr: numpy.ndarray
An array of correlation data calculated from two light curves
 time_lags: numpy.ndarray
An array of all possible time lags against which each point in corr is calculated
 dt: float
The time resolution of each light curve (used in time_lag calculations)
 time_shift: float
Time lag that gives the maximum value of correlation between the two light curves. There will be maximum correlation between the light curves if one of them is shifted by time_shift.
 n: int
Number of points in self.corr (length of cross-correlation data)
 auto: bool
An internal flag to indicate whether this is a cross-correlation or an auto-correlation.
 norm: {``none``, ``variance``}
The normalization specified in input
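The quantities stored in corr, time_lags and time_shift can be sketched with np.correlate on two made-up, binned series (an illustration of the idea, not the class implementation):

```python
import numpy as np

# Second series is the first delayed by 3 bins
rng = np.random.default_rng(1)
x = rng.random(200)
y = np.roll(x, 3)

xc = x - x.mean()
yc = y - y.mean()

# mode="same" keeps the central len(x) lags, as in the class default
corr = np.correlate(yc, xc, mode="same")
# rough "variance" normalization: perfect correlation -> ~1
corr = corr / (len(x) * xc.std() * yc.std())

time_lags = np.arange(-len(x) // 2, len(x) - len(x) // 2)
time_shift = time_lags[np.argmax(corr)]
print(time_shift)  # 3: shifting by 3 bins lines the two series up
```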
 cal_timeshift(dt=1.0)[source]¶
Calculate the cross correlation against all possible time lags, both positive and negative.
This method uses scipy.signal.correlation_lags(), available in SciPy versions >= 1.6.1 ([scipy-docs-lag]).
 Parameters
 dt: float, optional, default ``1.0``
Time resolution of the light curve, should be passed when object is populated with correlation data and no information about light curve can be extracted. Used to calculate
time_lags
.
 Returns
 self.time_shift: float
Value of the time lag that gives maximum value of correlation between two light curves.
 self.time_lags: numpy.ndarray
An array of
time_lags
calculated from correlation data
References
 plot(labels=None, axis=None, title=None, marker='-', save=False, filename=None, ax=None)[source]¶
Plot the Crosscorrelation using matplotlib: self.time_lags on the x-axis and self.corr on the y-axis.
 Parameters
 labels: iterable, default None
A list of tuples with xlabel and ylabel as strings.
 axis: list, tuple, string, default None
Parameter to set axis properties of the matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for the matplotlib.pyplot.axis() function.
 title: str, default None
The title of the plot.
 marker: str, default '-'
Line style and color of the plot. Line styles and colors are combined in a single format string, as in 'bo' for blue circles. See matplotlib.pyplot.plot for more options.
 save: boolean, optional, default False
If True, save the figure with the specified filename.
 filename: str
File name of the image to save. Used only if save is True.
 ax: matplotlib.Axes object
An axes object to fill with the cross correlation plot.
AutoCorrelation¶
 class stingray.AutoCorrelation(lc=None, mode='same')[source]¶
Make an autocorrelation from a light curve. You can also make an empty Autocorrelation object to populate with your own autocorrelation data.
 Parameters
 lc: stingray.Lightcurve object, optional, default None
The light curve data for correlation calculations.
 mode: {``full``, ``valid``, ``same``}, optional, default ``same``
A string indicating the size of the correlation output. See the relevant scipy documentation [scipy-docs] for more details.
 Attributes
 lc1, lc2: stingray.Lightcurve
The light curve data for correlation calculations.
 corr: numpy.ndarray
An array of correlation data calculated from light curve data
 time_lags: numpy.ndarray
An array of all possible time lags against which each point in corr is calculated
 dt: float
The time resolution of each light curve (used in time_lag calculations)
 time_shift: float, zero
The maximum value of an auto-correlation is always at zero lag.
 n: int
Number of points in self.corr (length of auto-correlation data)
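A minimal NumPy sketch of why time_shift is always zero for an auto-correlation (made-up Poisson data):

```python
import numpy as np

rng = np.random.default_rng(2)
counts = rng.poisson(10, 500).astype(float)
c = counts - counts.mean()

# Auto-correlation of the series with itself, central 500 lags
acorr = np.correlate(c, c, mode="same")
time_lags = np.arange(-len(c) // 2, len(c) - len(c) // 2)

# The zero-lag term is sum(c**2), which no other lag can exceed
print(time_lags[np.argmax(acorr)])  # 0
```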
 cal_timeshift(dt=1.0)¶
Calculate the cross correlation against all possible time lags, both positive and negative.
This method uses scipy.signal.correlation_lags(), available in SciPy versions >= 1.6.1 ([scipy-docs-lag]).
 Parameters
 dt: float, optional, default ``1.0``
Time resolution of the light curve, should be passed when object is populated with correlation data and no information about light curve can be extracted. Used to calculate
time_lags
.
 Returns
 self.time_shift: float
Value of the time lag that gives maximum value of correlation between two light curves.
 self.time_lags: numpy.ndarray
An array of
time_lags
calculated from correlation data
References
 plot(labels=None, axis=None, title=None, marker='-', save=False, filename=None, ax=None)¶
Plot the Crosscorrelation using matplotlib: self.time_lags on the x-axis and self.corr on the y-axis.
 Parameters
 labels: iterable, default None
A list of tuples with xlabel and ylabel as strings.
 axis: list, tuple, string, default None
Parameter to set axis properties of the matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for the matplotlib.pyplot.axis() function.
 title: str, default None
The title of the plot.
 marker: str, default '-'
Line style and color of the plot. Line styles and colors are combined in a single format string, as in 'bo' for blue circles. See matplotlib.pyplot.plot for more options.
 save: boolean, optional, default False
If True, save the figure with the specified filename.
 filename: str
File name of the image to save. Used only if save is True.
 ax: matplotlib.Axes object
An axes object to fill with the cross correlation plot.
DeadTime Corrections¶
 stingray.deadtime.fad.FAD(data1, data2, segment_size, dt=None, norm='frac', plot=False, ax=None, smoothing_alg='gauss', smoothing_length=None, verbose=False, tolerance=0.05, strict=False, output_file=None, return_objects=False)[source]¶
Calculate Frequency Amplitude Difference-corrected (cross)power spectra.
Reference: Bachetti & Huppenkothen, 2018, ApJ, 853L, 21
The two input light curves must be strictly simultaneous, and recorded by two independent detectors with similar responses, so that the count rates are similar and dead time is independent. The method does not apply to different energy channels of the same instrument, or to the signal observed by two instruments with very different responses. See the paper for caveats.
 Parameters
 data1: Lightcurve or EventList
Input data for channel 1
 data2: Lightcurve or EventList
Input data for channel 2. Must be strictly simultaneous to data1 and, if a light curve, have the same binning time. Also, it must be strictly independent, e.g. from a different detector. There must be no dead time cross-talk between the two time series.
 segment_size: float
The final Fourier products are averaged over many segments of the input light curves. This is the length of each segment being averaged. Note that the light curve must be long enough to have at least 30 segments, as the result gets better as one averages more and more segments.
 dt: float
Time resolution of the light curves used to produce periodograms
 norm: {``frac``, ``abs``, ``leahy``, ``none``}, default ``none``
The normalization of the (real part of the) cross spectrum.
 Returns
 results: astropy.table.Table object or dict or str
The content of results depends on whether return_objects is True or False. If return_objects==False, results is a Table with the following columns:
pds1: the corrected PDS of lc1
pds2: the corrected PDS of lc2
cs: the corrected cospectrum
ptot: the corrected PDS of lc1 + lc2
If return_objects is True, results is a dict, with keys named like the columns listed above but with AveragedPowerspectrum or AveragedCrossspectrum objects instead of arrays.
 Other Parameters
 plotbool, default False
Plot diagnostics: check if the smoothed Fourier difference scatter is a good approximation of the data scatter.
 ax: matplotlib.axes.Axes object
If not None and plot is True, use this axis object to produce the diagnostic plot. Otherwise, create a new figure.
 smoothing_alg: {'gauss', ...}
Smoothing algorithm. For now, the only smoothing algorithm allowed is gauss, which applies a Gaussian filter from scipy.
 smoothing_length: int, default ``segment_size * 3``
Number of bins to smooth in Gaussian window smoothing
 verbose: bool, default False
Print out information on the outcome of the algorithm (recommended)
 tolerance: float, default 0.05
Accepted relative error on the FAD-corrected Fourier amplitude, to be used as success diagnostics. Should be
stdtheor = 2 / np.sqrt(n)
std = (average_corrected_fourier_diff / n).std()
np.abs((std - stdtheor) / stdtheor) < tolerance
 strict: bool, default False
Decide what to do if the condition on tolerance is not met. If True, raise a RuntimeError. If False, just throw a warning.
 output_file: str, default None
Name of an output file (any extension automatically recognized by Astropy is fine)
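The core idea, that the Fourier difference of two independent channels cancels the common source variability while retaining the independent noise, can be sketched with synthetic Poisson data (a conceptual illustration, not the FAD algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(3)
n, dt = 4096, 0.01
t = np.arange(n) * dt
model = 50 * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))  # common 3 Hz signal

# Two detectors see the same source with independent counting noise
lc1 = rng.poisson(model).astype(float)
lc2 = rng.poisson(model).astype(float)

f1 = np.fft.rfft(lc1 - lc1.mean())
f2 = np.fft.rfft(lc2 - lc2.mean())

total = f1 + f2
diff = f1 - f2    # the common 3 Hz variability cancels here

freq = np.fft.rfftfreq(n, d=dt)
i3 = np.argmin(np.abs(freq - 3.0))   # bin closest to the signal frequency

# The 3 Hz peak dominates the sum but is absent from the difference
print(np.abs(total[i3]) ** 2 > 100 * np.median(np.abs(total) ** 2))  # True
print(np.abs(diff[i3]) ** 2 < 100 * np.median(np.abs(diff) ** 2))    # True
```

In the real FAD method the smoothed difference spectrum is then used to renormalize the Fourier amplitudes, correcting the dead-time-distorted noise level.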
 stingray.deadtime.fad.calculate_FAD_correction(lc1, lc2, segment_size, norm='frac', gti=None, plot=False, ax=None, smoothing_alg='gauss', smoothing_length=None, verbose=False, tolerance=0.05, strict=False, output_file=None, return_objects=False)[source]¶
Calculate Frequency Amplitude Difference-corrected (cross)power spectra.
Reference: Bachetti & Huppenkothen, 2018, ApJ, 853L, 21
The two input light curves must be strictly simultaneous, and recorded by two independent detectors with similar responses, so that the count rates are similar and dead time is independent. The method does not apply to different energy channels of the same instrument, or to the signal observed by two instruments with very different responses. See the paper for caveats.
 Parameters
 lc1: stingray.lightcurve.Lightcurve
Light curve from channel 1
 lc2: stingray.lightcurve.Lightcurve
Light curve from channel 2. Must be strictly simultaneous to lc1 and have the same binning time. Also, it must be strictly independent, e.g. from a different detector. There must be no dead time cross-talk between the two light curves.
 segment_size: float
The final Fourier products are averaged over many segments of the input light curves. This is the length of each segment being averaged. Note that the light curve must be long enough to have at least 30 segments, as the result gets better as one averages more and more segments.
 norm: {``frac``, ``abs``, ``leahy``, ``none``}, default ``none``
The normalization of the (real part of the) cross spectrum.
 Returns
 results: astropy.table.Table object or dict or str
The content of results depends on whether return_objects is True or False. If return_objects==False, results is a Table with the following columns:
pds1: the corrected PDS of lc1
pds2: the corrected PDS of lc2
cs: the corrected cospectrum
ptot: the corrected PDS of lc1 + lc2
If return_objects is True, results is a dict, with keys named like the columns listed above but with AveragedPowerspectrum or AveragedCrossspectrum objects instead of arrays.
 Other Parameters
 plotbool, default False
Plot diagnostics: check if the smoothed Fourier difference scatter is a good approximation of the data scatter.
 ax: matplotlib.axes.Axes object
If not None and plot is True, use this axis object to produce the diagnostic plot. Otherwise, create a new figure.
 smoothing_alg: {'gauss', ...}
Smoothing algorithm. For now, the only smoothing algorithm allowed is gauss, which applies a Gaussian filter from scipy.
 smoothing_length: int, default ``segment_size * 3``
Number of bins to smooth in Gaussian window smoothing
 verbose: bool, default False
Print out information on the outcome of the algorithm (recommended)
 tolerance: float, default 0.05
Accepted relative error on the FAD-corrected Fourier amplitude, to be used as success diagnostics. Should be
stdtheor = 2 / np.sqrt(n)
std = (average_corrected_fourier_diff / n).std()
np.abs((std - stdtheor) / stdtheor) < tolerance
 strict: bool, default False
Decide what to do if the condition on tolerance is not met. If True, raise a RuntimeError. If False, just throw a warning.
 output_file: str, default None
Name of an output file (any extension automatically recognized by Astropy is fine)
 stingray.deadtime.fad.get_periodograms_from_FAD_results(FAD_results, kind='ptot')[source]¶
Get Stingray periodograms from FAD results.
 Parameters
 FAD_results: astropy.table.Table object or str
Results from calculate_FAD_correction, either as a Table or an output file name
 kind: str, one of ['ptot', 'pds1', 'pds2', 'cs']
Kind of periodogram to get (e.g., 'ptot' -> PDS from the sum of the two light curves, 'cs' -> cospectrum, etc.)
 Returns
 results: AveragedCrossspectrum or AveragedPowerspectrum object
The periodogram.
 stingray.deadtime.model.check_A(rate, td, tb, max_k=100, save_to=None)[source]¶
Test that A is well-behaved.
Check that A_k -> r0**2 * tb**2 for k -> infinity, as per Eq. 43 in Zhang+95.
 stingray.deadtime.model.check_B(rate, td, tb, max_k=100, save_to=None)[source]¶
Check that B -> 0 for k -> infinity.
 stingray.deadtime.model.factorial(n, exact=False)[source]¶
The factorial of a number or array of numbers.
The factorial of non-negative integer n is the product of all positive integers less than or equal to n:
n! = n * (n - 1) * (n - 2) * ... * 1
 Parameters
 n: int or array_like of ints
Input values. If n < 0, the return value is 0.
 exact: bool, optional
If True, calculate the answer exactly using long integer arithmetic. If False, result is approximated in floating point rapidly using the gamma function. Default is False.
 Returns
 nf: float or int or ndarray
Factorial of n, as integer or float depending on exact.
Notes
For arrays with exact=True, the factorial is computed only once, for the largest input, with each other result computed in the process. The output dtype is increased to int64 or object if necessary.
With exact=False the factorial is approximated using the gamma function:
\[n! = \Gamma(n+1)\]
Examples
>>> from scipy.special import factorial
>>> arr = np.array([3, 4, 5])
>>> factorial(arr, exact=False)
array([  6.,  24., 120.])
>>> factorial(arr, exact=True)
array([  6,  24, 120])
>>> factorial(5, exact=True)
120
 stingray.deadtime.model.pds_model_zhang(N, rate, td, tb, limit_k=60)[source]¶
Calculate the dead-time-modified power spectrum.
 Parameters
 Nint
The number of spectral bins
 ratefloat
Incident count rate
 tdfloat
Dead time
 tbfloat
Bin time of the light curve
 Returns
 freqsarray of floats
Frequency array
 powerarray of floats
Power spectrum
 Other Parameters
 limit_k: int, default 60
Limit the number of terms in the inner loops of the calculation to this value.
HigherOrder Fourier and Spectral Timing Products¶
These classes implement higher-order Fourier analysis products (e.g. Bispectrum) and
spectral-timing related methods taking advantage of both temporal and spectral information in
modern data sets.
Bispectrum¶
 class stingray.bispectrum.Bispectrum(lc, maxlag=None, window=None, scale='biased')[source]¶
Makes a Bispectrum object from a stingray.Lightcurve.
Bispectrum is a higher-order time series analysis method, calculated by the indirect method as the Fourier transform of the triple auto-correlation function, also called the 3rd order cumulant.
 Parameters
 lc: stingray.Lightcurve object
The light curve data for bispectrum calculation.
 maxlag: int, optional, default None
Maximum lag on both positive and negative sides of the 3rd order cumulant (similar to lags in correlation). If None, max lag is set to one-half of the length of the light curve.
 window: {``uniform``, ``parzen``, ``hamming``, ``hanning``, ``triangular``, ``welch``, ``blackman``, ``flattop``}, optional, default 'uniform'
Type of window function to apply to the data.
 scale: {``biased``, ``unbiased``}, optional, default ``biased``
Flag to decide biased or unbiased normalization for the 3rd order cumulant function.
References
1) "The biphase explained: understanding the asymmetries in coupled Fourier components of astronomical time series" by Thomas J. Maccarone, Department of Physics, Box 41051, Science Building, Texas Tech University, Lubbock TX 79409-1051; School of Physics and Astronomy, University of Southampton, SO16 4ES
2) T. S. Rao, M. M. Gabr, An Introduction to Bispectral Analysis and Bilinear Time Series Models, Lecture Notes in Statistics, Volume 24, D. Brillinger, S. Fienberg, J. Gani, J. Hartigan, K. Krickeberg, Editors, SpringerVerlag, New York, NY, 1984.
3) Matlab version of bispectrum under following link. https://www.mathworks.com/matlabcentral/fileexchange/60bisp3cum
Examples
>> from stingray.lightcurve import Lightcurve
>> from stingray.bispectrum import Bispectrum
>> lc = Lightcurve([1, 2, 3, 4, 5], [2, 3, 1, 1, 2])
>> bs = Bispectrum(lc, maxlag=1)
>> bs.lags
array([-1., 0., 1.])
>> bs.freq
array([-0.5, 0., 0.5])
>> bs.cum3
array([[-0.2976, 0.1024, 0.1408],
       [ 0.1024, 0.144, -0.2976],
       [ 0.1408, -0.2976, 0.1024]])
>> bs.bispec_mag
array([[1.26336794, 0.0032, 0.0032],
       [0.0032, 0.16, 0.0032],
       [0.0032, 0.0032, 1.26336794]])
>> bs.bispec_phase
array([[-9.65946229e-01,  2.25347190e-14,  3.46944695e-14],
       [ 0.00000000e+00,  3.14159265e+00,  0.00000000e+00],
       [-3.46944695e-14, -2.25347190e-14,  9.65946229e-01]])
 Attributes
 lc: stingray.Lightcurve object
The light curve data to compute the Bispectrum.
 fs: float
Sampling frequency
 n: int
Total number of samples of light curve observations.
 maxlag: int
Maximum lag on both positive and negative sides of the 3rd order cumulant (similar to lags in correlation)
 signal: numpy.ndarray
Row vector of light curve counts for matrix operations
 scale: {``biased``, ``unbiased``}
Flag to decide biased or unbiased normalization for the 3rd order cumulant function.
 lags: numpy.ndarray
An array of time lags for which the 3rd order cumulant is calculated
 freq: numpy.ndarray
An array of frequency values for the Bispectrum.
 cum3: numpy.ndarray
A ``maxlag*2+1 x maxlag*2+1`` matrix containing 3rd order cumulant data for different lags.
 bispec: numpy.ndarray
A ``maxlag*2+1 x maxlag*2+1`` matrix containing bispectrum data for different frequencies.
 bispec_mag: numpy.ndarray
Magnitude of the bispectrum
 bispec_phase: numpy.ndarray
Phase of the bispectrum
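The 3rd order cumulant underlying these attributes can be computed directly; this NumPy sketch of the biased estimator reproduces the cum3 values from the class example above (a loop-based form for clarity, not the vectorized implementation):

```python
import numpy as np

# Same counts as the class example: Lightcurve([1,2,3,4,5], [2,3,1,1,2])
x = np.array([2.0, 3.0, 1.0, 1.0, 2.0])
x = x - x.mean()
N = len(x)
maxlag = 1

# Biased estimator: C3(m, k) = (1/N) * sum_n x[n] * x[n+m] * x[n+k]
cum3 = np.zeros((2 * maxlag + 1, 2 * maxlag + 1))
for i, m in enumerate(range(-maxlag, maxlag + 1)):
    for j, k in enumerate(range(-maxlag, maxlag + 1)):
        for n_ in range(N):
            if 0 <= n_ + m < N and 0 <= n_ + k < N:
                cum3[i, j] += x[n_] * x[n_ + m] * x[n_ + k]
cum3 /= N   # "biased" scale: divide by N at every lag

# The bispectrum is the 2D Fourier transform of this matrix
bispec = np.fft.fft2(cum3)
print(round(cum3[1, 1], 4), round(cum3[0, 0], 4))  # 0.144 -0.2976
```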
 plot_cum3(axis=None, save=False, filename=None)[source]¶
Plot the 3rd order cumulant as a function of time lags using matplotlib: the lags attribute on the x- and y-axes and cum3 on the z-axis.
 Parameters
 axis: list, tuple, string, default None
Parameter to set axis properties of the matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for the matplotlib.pyplot.axis() method.
 save: bool, optional, default False
If True, save the figure with the specified filename.
 filename: str
File name and path of the image to save. Used only if save is True.
 Returns
 plt: matplotlib.pyplot object
Reference to the plot; call show() to display it
 plot_mag(axis=None, save=False, filename=None)[source]¶
Plot the magnitude of the bispectrum as a function of frequency using matplotlib: the freq attribute on the x- and y-axes and bispec_mag on the z-axis.
 Parameters
 axis: list, tuple, string, default None
Parameter to set axis properties of the matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for the matplotlib.pyplot.axis() method.
 save: bool, optional, default False
If True, save the figure with the specified filename and path.
 filename: str
File name and path of the image to save. Used only if save is True.
 Returns
 plt: matplotlib.pyplot object
Reference to the plot; call show() to display it
 plot_phase(axis=None, save=False, filename=None)[source]¶
Plot the phase of the bispectrum as a function of frequency using matplotlib: the freq attribute on the x- and y-axes and bispec_phase on the z-axis.
 Parameters
 axis: list, tuple, string, default None
Parameter to set axis properties of the matplotlib figure. For example it can be a list like [xmin, xmax, ymin, ymax] or any other acceptable argument for the matplotlib.pyplot.axis() function.
 save: bool, optional, default False
If True, save the figure with the specified filename and path.
 filename: str
File name and path of the image to save. Used only if save is True.
 Returns
 plt: matplotlib.pyplot object
Reference to the plot; call show() to display it
Covariancespectrum¶
 class stingray.Covariancespectrum(data, dt=None, band_interest=None, ref_band_interest=None, std=None)[source]¶
Compute a covariance spectrum for the data. The input data can be either event data or pre-made light curves. Event data can either be in the form of a numpy.ndarray with (time stamp, energy) pairs or a stingray.events.EventList object. If light curves are formed ahead of time, then a list of stingray.Lightcurve objects should be passed to the object, ideally one light curve for each band of interest.
For the case where the data is input as a list of stingray.Lightcurve objects, the reference band(s) should either be
a single stingray.Lightcurve object,
a list of stingray.Lightcurve objects with the reference band for each band of interest pre-made, or
None, in which case reference bands will be formed by combining all light curves except for the band of interest.
In the case of event data, band_interest and ref_band_interest can be (multiple) pairs of energies, and the light curves for the bands of interest and reference bands will be produced dynamically.
 Parameters
 data{
numpy.ndarray
stingray.events.EventList
object  list ofstingray.Lightcurve
objects} data
contains the time series data, either in the form of a 2D array of(time stamp, energy)
pairs for event data, or as a list of light curves. Note: the event list must be in sorted order with respect to the times of arrival. dtfloat
The time resolution of the
stingray.Lightcurve
formed from the energy bin. Only used ifdata
is an event list. band_interest{
None
, iterable of tuples} If
None
, all possible energy values will be assumed to be of interest, and a covariance spectrum in the highest resolution will be produced. Note: if the input is a list ofstingray.Lightcurve
objects, then the user may supply their energy values here, for construction of a reference band. ref_band_interest{
None
, tuple,stingray.Lightcurve
, list ofstingray.Lightcurve
objects} Defines the reference band to be used for comparison with the bands of interest. If
None
, all bands except the band of interest will be used for each band of interest, respectively. Alternatively, a tuple can be given for event list data, which will extract the reference band (always excluding the band of interest), or one may put in a singlestingray.Lightcurve
object to be used (the same for each band of interest) or a list ofstingray.Lightcurve
objects, one for each band of interest. stdfloat or np.array or list of numbers
The term
std
is used to calculate the excess variance of a band. Ifstd
is set toNone
, default Poisson case is taken and the std is calculated asmean(lc)**0.5
. If a single float is given, it is used directly as the standard deviation for each band; if an iterable of numbers is given, their mean is used for the same purpose.
References
[1] Wilkinson, T. and Uttley, P. (2009), Accretion disc variability in the hard state of black hole X-ray binaries. Monthly Notices of the Royal Astronomical Society, 397: 666–676. doi: 10.1111/j.1365-2966.2009.15008.x
Examples
See the notebooks repository for detailed notebooks on the code.
 Attributes
 unnorm_covarnp.ndarray
An array of arrays with mid point
band_interest
and their covariance. It is the array form of the dictionary energy_covar
. The covariance values are unnormalized. covarnp.ndarray
Normalized covariance spectrum.
 covar_errornp.ndarray
Errors of the normalized covariance spectrum.
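When the data are light curves and ref_band_interest is None, the reference band for each band of interest combines all the other bands. A minimal NumPy sketch of that combination rule (the band counts here are invented for illustration, not part of the Stingray API):

```python
import numpy as np

# Hypothetical counts per time bin for three energy bands (rows = bands,
# columns = time bins); invented numbers, for illustration only.
band_counts = np.array([
    [10, 12, 9, 11],   # band 0
    [20, 18, 22, 21],  # band 1
    [5, 7, 6, 4],      # band 2
])

# Default rule: the reference band for band i combines all other bands.
total = band_counts.sum(axis=0)
ref_bands = total - band_counts  # row i = reference band for band i

print(ref_bands[0])  # counts of bands 1 and 2 summed
```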
AveragedCovariancespectrum¶
 class stingray.AveragedCovariancespectrum(data, segment_size, dt=None, band_interest=None, ref_band_interest=None, std=None)[source]¶
Compute a covariance spectrum for the data, defined in [covar spectrum]_ Equation 15.
 Parameters
 data{numpy.ndarray  list of
stingray.Lightcurve
objects} data
contains the time series data, either in the form of a 2D array of(time stamp, energy)
pairs for event data, or as a list ofstingray.Lightcurve
objects. Note: the event list must be in sorted order with respect to the times of arrival. segment_sizefloat
The length of each segment in the averaged covariance spectrum. The number of segments will be calculated automatically using the total length of the data set and the segment_size defined here.
 dtfloat
The time resolution of the
stingray.Lightcurve
formed from the energy bin. Only used ifdata
is an event list. band_interest{
None
, iterable of tuples} If
None
, all possible energy values will be assumed to be of interest, and a covariance spectrum in the highest resolution will be produced. Note: if the input is a list ofstingray.Lightcurve
objects, then the user may supply their energy values here, for construction of a reference band. ref_band_interest{None, tuple,
stingray.Lightcurve
, list ofstingray.Lightcurve
objects} Defines the reference band to be used for comparison with the bands of interest. If None, all bands except the band of interest will be used for each band of interest, respectively. Alternatively, a tuple can be given for event list data, which will extract the reference band (always excluding the band of interest), or one may put in a single
stingray.Lightcurve
object to be used (the same for each band of interest) or a list ofstingray.Lightcurve
objects, one for each band of interest. stdfloat or np.array or list of numbers
The term
std
is used to calculate the excess variance of a band. Ifstd
is set toNone
, default Poisson case is taken and thestd
is calculated asmean(lc)**0.5
. If a single float is given, it is used directly as the standard deviation for each band; if an iterable of numbers is given, their mean is used for the same purpose.
References
 Attributes
 unnorm_covarnp.ndarray
An array of arrays with mid point band_interest and their covariance. It is the array form of the dictionary
energy_covar
. The covariance values are unnormalized. covarnp.ndarray
Normalized covariance spectrum.
 covar_errornp.ndarray
Errors of the normalized covariance spectrum.
VarEnergySpectrum¶
Abstract base class for spectral timing products including both variability and spectral information.
 class stingray.varenergyspectrum.VarEnergySpectrum(events, freq_interval, energy_spec, ref_band=None, bin_time=1, use_pi=False, segment_size=None, events2=None, return_complex=False)[source]¶
 property energy¶
Give the centers of the energy intervals.
 from_astropy_table(*args, **kwargs)[source]¶
Create a StingrayObject from data in an Astropy Table.
The table MUST contain at least a column named like the
main_array_attr
. The rest of columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.It is strongly advisable to define such attributes and columns using the standard attributes of the wanted StingrayObject (e.g.
time
,pi
, etc. forEventList
)
 from_pandas(*args, **kwargs)[source]¶
Create a
StingrayObject
object from data in a pandas DataFrame.The dataframe MUST contain at least a column named like the
main_array_attr
. The rest of columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.It is strongly advisable to define such attributes and columns using the standard attributes of the wanted StingrayObject (e.g.
time
,pi
, etc. forEventList
)
 from_xarray(*args, **kwargs)[source]¶
Create a
StingrayObject
from data in an xarray Dataset.The dataset MUST contain at least a column named like the
main_array_attr
. The rest of columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.It is strongly advisable to define such attributes and columns using the standard attributes of the wanted StingrayObject (e.g.
time
,pi
, etc. forEventList
)
 main_array_attr = 'energy'¶
Base class for variability-energy spectrum.
This class is only a base for the various variability spectra, and it’s not to be instantiated by itself.
 Parameters
 events
stingray.events.EventList
object event list
 freq_interval
[f0, f1]
, floats the frequency range over which to calculate the variability quantity
 energy_speclist or tuple
(emin, emax, N, type)
if a
list
is specified, this is interpreted as a list of bin edges; if a tuple
is provided, this encodes the minimum and maximum energies, the number of intervals, and lin
or log
(linear or logarithmic spacing).
 Other Parameters
 ref_band
[emin, emax]
, floats; default None
minimum and maximum energy of the reference band. If
None
, the full band is used. use_pibool, default
False
Use channel instead of energy
 events2
stingray.events.EventList
object event list for the second channel, if not the same. Useful if the reference band has to be taken from another detector.
 return_complex: bool, default False
In spectra that produce complex values, return the whole spectrum. Otherwise, the absolute value will be returned.
 Attributes
 events1arraylike
list of events used to produce the spectrum
 events2arraylike
if the spectrum requires it, second list of events
 freq_intervalarraylike
interval of frequencies used to calculate the spectrum
 energy_intervals
[[e00, e01], [e10, e11], ...]
energy intervals used for the spectrum
 spectrumarraylike
the spectral values, corresponding to each energy interval
 spectrum_errorarraylike
the error bars corresponding to spectrum
 energyarraylike
The centers of energy intervals
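The energy_spec parameter admits the two forms described above. A small sketch of how the tuple form could be expanded into bin edges (energy_spec_to_edges is a hypothetical helper, not part of the Stingray API; it assumes lin means linearly and log means logarithmically spaced intervals):

```python
import numpy as np

def energy_spec_to_edges(energy_spec):
    """Turn an energy_spec (list of edges, or (emin, emax, N, 'lin'/'log'))
    into an array of N + 1 bin edges. Illustrative helper, not Stingray API."""
    if isinstance(energy_spec, tuple):
        emin, emax, n_intervals, kind = energy_spec
        if kind == "lin":
            return np.linspace(emin, emax, n_intervals + 1)
        if kind == "log":
            return np.geomspace(emin, emax, n_intervals + 1)
        raise ValueError("kind must be 'lin' or 'log'")
    # A list is interpreted as the bin edges themselves.
    return np.asarray(energy_spec)

print(energy_spec_to_edges((0.3, 12.0, 4, "lin")))
print(energy_spec_to_edges([0.3, 1.0, 3.0, 12.0]))
```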
RmsEnergySpectrum¶
 stingray.varenergyspectrum.RmsEnergySpectrum¶
alias of
stingray.varenergyspectrum.RmsSpectrum
LagEnergySpectrum¶
 stingray.varenergyspectrum.LagEnergySpectrum¶
alias of
stingray.varenergyspectrum.LagSpectrum
ExcessVarianceSpectrum¶
 class stingray.varenergyspectrum.ExcessVarianceSpectrum(events, freq_interval, energy_spec, bin_time=1, use_pi=False, segment_size=None, normalization='fvar')[source]¶
Calculate the Excess Variance spectrum.
For each energy interval, calculate the excess variance in the specified frequency range.
 Parameters
 events
stingray.events.EventList
object event list
 freq_interval
[f0, f1]
, list of float the frequency range over which to calculate the variability quantity
 energy_speclist or tuple
(emin, emax, N, type)
if a list is specified, this is interpreted as a list of bin edges; if a tuple is provided, this will encode the minimum and maximum energies, the number of intervals, and
lin
orlog
.
 Other Parameters
 ref_band
[emin, emax]
, floats; defaultNone
minimum and maximum energy of the reference band. If
None
, the full band is used. use_pibool, default
False
Use channel instead of energy
 Attributes
 events1arraylike
list of events used to produce the spectrum
 freq_intervalarraylike
interval of frequencies used to calculate the spectrum
 energy_intervals
[[e00, e01], [e10, e11], ...]
energy intervals used for the spectrum
 spectrumarraylike
the spectral values, corresponding to each energy interval
 spectrum_errorarraylike
the error bars corresponding to spectrum
 array_attrs() list[str] ¶
List the names of the array attributes of the Stingray Object.
By array attributes, we mean the ones with the same size and shape as
main_array_attr
(e.g.time
inEventList
)
 property energy¶
Give the centers of the energy intervals.
 from_astropy_table(*args, **kwargs)¶
Create a StingrayObject from data in an Astropy Table.
The table MUST contain at least a column named like the
main_array_attr
. The rest of columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.It is strongly advisable to define such attributes and columns using the standard attributes of the wanted StingrayObject (e.g.
time
,pi
, etc. forEventList
)
 from_pandas(*args, **kwargs)¶
Create a
StingrayObject
object from data in a pandas DataFrame.The dataframe MUST contain at least a column named like the
main_array_attr
. The rest of columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.It is strongly advisable to define such attributes and columns using the standard attributes of the wanted StingrayObject (e.g.
time
,pi
, etc. forEventList
)
 from_xarray(*args, **kwargs)¶
Create a
StingrayObject
from data in an xarray Dataset.The dataset MUST contain at least a column named like the
main_array_attr
. The rest of columns will form the array attributes of the new object, while the attributes in ds.attrs will form the new meta attributes of the object.It is strongly advisable to define such attributes and columns using the standard attributes of the wanted StingrayObject (e.g.
time
,pi
, etc. forEventList
)
 meta_attrs() list[str] ¶
List the names of the meta attributes of the Stingray Object.
By meta attributes, we mean the ones with a different size and shape than
main_array_attr
(e.g.time
inEventList
)
 classmethod read(filename: str, fmt: str = None, format_=None) Tso ¶
Generic reader for :class:`StingrayObject`
Currently supported formats are
pickle (not recommended for longterm storage)
any other formats compatible with the writers in
astropy.table.Table
(ascii.ecsv, hdf5, etc.)
Files that need the
astropy.table.Table
interface MUST contain at least a column named like themain_array_attr
. The default ascii format is enhanced CSV (ECSV). Data formats supporting the serialization of metadata (such as ECSV and HDF5) can contain all attributes such as mission, gti
, etc. with no significant loss of information. Other file formats might lose part of the metadata, so they must be used with care.
Note: Complex values can be dealt with out-of-the-box in some formats like HDF5 or FITS, but not in others (e.g. all ASCII formats). With these formats, and in any case when fmt is None, complex values should be stored as two columns of real numbers, whose names are of the format <variablename>.real and <variablename>.imag
 Parameters
 filename: str
Path and file name for the file to be read.
 fmt: str
Available options are ‘pickle’, ‘hea’, and any
Table
supported format such as ‘hdf5’, ‘ascii.ecsv’, etc.
 Returns
 obj:class:
StingrayObject
object The object reconstructed from file
to_astropy_table()
astropy.table.Table
¶ Create an Astropy Table from a
StingrayObject
Array attributes (e.g.
time
,pi
,energy
, etc. forEventList
) are converted into columns, while meta attributes (mjdref
,gti
, etc.) are saved into themeta
dictionary.
 to_pandas() DataFrame ¶
Create a pandas
DataFrame
from aStingrayObject
.Array attributes (e.g.
time
,pi
,energy
, etc. forEventList
) are converted into columns, while meta attributes (mjdref
,gti
, etc.) are saved into theds.attrs
dictionary.
 to_xarray() Dataset ¶
Create an
xarray
Dataset from aStingrayObject
.Array attributes (e.g.
time
,pi
,energy
, etc. forEventList
) are converted into columns, while meta attributes (mjdref
,gti
, etc.) are saved into theds.attrs
dictionary.
 write(filename: str, fmt: Optional[str] = None, format_=None) None ¶
Generic writer for :class:`StingrayObject`
Currently supported formats are
pickle (not recommended for longterm storage)
any other formats compatible with the writers in
astropy.table.Table
(ascii.ecsv, hdf5, etc.)
Note: Complex values can be dealt with out-of-the-box in some formats like HDF5 or FITS, but not in others (e.g. all ASCII formats). With these formats, and in any case when fmt is None, complex values will be stored as two columns of real numbers, whose names are of the format <variablename>.real and <variablename>.imag
 Parameters
 filename: str
Name and path of the file to save the object list to.
 fmt: str
The file format to store the data in. Available options are
pickle
,hdf5
,ascii
,fits
Utilities¶
Commonly used utility functionality, including Good Time Interval operations and input/output helper methods.
Statistical Functions¶
 stingray.stats.a_from_pf(p)[source]¶
Fractional amplitude of modulation from pulsed fraction
If the pulsed profile is defined as p = mean * (1 + a * sin(phase)),
we define “pulsed fraction” as 2a/b, where b = mean + a is the maximum and a is the amplitude of the modulation.
Hence, a = pf / (2 - pf)
Examples
>>> a_from_pf(1)
1.0
>>> a_from_pf(0)
0.0
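The formulas above invert each other (see pf_from_a below); a quick stand-alone sketch re-implementing the two conversions (not calling Stingray) with a round-trip check:

```python
def a_from_pf(pf):
    # a = pf / (2 - pf), from the definition pf = 2a / (1 + a)
    return pf / (2 - pf)

def pf_from_a(a):
    # pulsed fraction = 2a / (1 + a)
    return 2 * a / (1 + a)

# The two conversions round-trip over the whole valid range.
for pf in (0.0, 0.1, 0.5, 1.0):
    assert abs(pf_from_a(a_from_pf(pf)) - pf) < 1e-12
```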
 stingray.stats.a_from_ssig(ssig, ncounts)[source]¶
Amplitude of a sinusoid corresponding to a given Z/PDS value
From Leahy et al. 1983, given a pulse profile p = lambda * (1 + a * sin(phase)), The theoretical value of Z^2_n is Ncounts / 2 * a^2
Note that if there are multiple sinusoidal components, one can use a = sqrt(sum(a_l^2)) (Bachetti+2021b)
Examples
>>> a_from_ssig(150, 30000)
0.1
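The Leahy et al. 1983 relation Z^2_n = Ncounts / 2 * a^2 inverts directly; a stand-alone sketch of both directions (re-implementing the formula, not calling Stingray):

```python
import math

def ssig_from_a_sketch(a, ncounts):
    # Theoretical Z^2_n (or PDS) power for a sinusoid of amplitude a
    return ncounts / 2 * a**2

def a_from_ssig_sketch(ssig, ncounts):
    # Invert the relation above
    return math.sqrt(2 * ssig / ncounts)

print(a_from_ssig_sketch(150, 30000))  # ≈ 0.1, as in the doctest above
```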
 stingray.stats.amplitude_upper_limit(pmeas, counts, n=1, c=0.95, fft_corr=False, nyq_ratio=0)[source]¶
Upper limit on a sinusoidal modulation, given a measured power in the PDS/Z search.
Eq. 10 in Vaughan+94 and
a_from_ssig
: they are equivalent but Vaughan+94 corrects further for the response inside an FFT bin and at frequencies close to Nyquist. These two corrections are added by using fft_corr=True and setting nyq_ratio to the correct \(f / f_{Nyq}\) of the FFT peak.
To understand the meaning of this amplitude: if the modulation is described by
\[p = \overline{p} (1 + a \sin x)\]
this function returns a.
If it is a sum of sinusoidal harmonics instead,
\[p = \overline{p} (1 + \sum_l a_l \sin(l x))\]
a is equivalent to \(\sqrt{\sum_l a_l^2}\).
 Parameters
 pmeas: float
The measured value of power
 counts: int
The number of counts in the light curve used to calculate the spectrum
 Returns
 a: float
The modulation amplitude that could produce P>pmeas with 1  c probability
 Other Parameters
 n: int
The number of summed powers to obtain pmeas. It can be multiple harmonics of the PDS, adjacent bins in a PDS summed to collect all the power in a QPO, or the n in Z^2_n
 c: float
The confidence value for the probability (e.g. 0.95 = 95%)
 fft_corr: bool
Apply a correction for the expected power concentrated in an FFT bin, which is about 0.773 on average (it’s 1 at the center of the bin, 2/pi at the bin edge).
 nyq_ratio: float
Ratio of the frequency of this feature with respect to the Nyquist frequency. Important to know when dealing with FFTs, because the FFT response decays between 0 and f_Nyq similarly to the response inside a frequency bin: from 1 at 0 Hz to ~2/pi at f_Nyq
Examples
>>> aup = amplitude_upper_limit(40, 30000, 1, 0.99)
>>> aup_nyq = amplitude_upper_limit(40, 30000, 1, 0.99, nyq_ratio=1)
>>> np.isclose(aup_nyq, aup / (2 / np.pi))
True
>>> aup_corr = amplitude_upper_limit(40, 30000, 1, 0.99, fft_corr=True)
>>> np.isclose(aup_corr, aup / np.sqrt(0.773))
True
 stingray.stats.classical_pvalue(power, nspec)[source]¶
Note: This is stingray’s original implementation of the probability distribution for the power spectrum. It is superseded by the implementation in pds_probability for practical purposes, but remains here for backwards compatibility and for its educational value as a clear, explicit implementation of the correct probability distribution.
Compute the probability of detecting the current power under the assumption that there is no periodic oscillation in the data.
This computes the single-trial p-value that the power was observed under the null hypothesis that there is no signal in the data.
Important: the underlying assumptions that make this calculation valid are:
the powers in the power spectrum follow a chisquare distribution
the power spectrum is normalized according to [Leahy 1983]_, such that the powers have a mean of 2 and a variance of 4
there is only white noise in the light curve. That is, there is no aperiodic variability that would change the overall shape of the power spectrum.
Also note that the p-value is for a single trial, i.e. the power currently being tested. If more than one power or more than one power spectrum are being tested, the resulting p-value must be corrected for the number of trials (Bonferroni correction).
Mathematical formulation in [Groth 1975]_. Original implementation in IDL by Anna L. Watts.
 Parameters
 powerfloat
The squared Fourier amplitude of a spectrum to be evaluated
 nspecint
The number of spectra or frequency bins averaged in
power
. This matters because averaging spectra or frequency bins increases the signaltonoise ratio, i.e. makes the statistical distributions of the noise narrower, such that a smaller power might be very significant in averaged spectra even though it would not be in a single power spectrum.
 Returns
 pvalfloat
The classical p-value of the observed power being consistent with the null hypothesis of white noise
References
 stingray.stats.equivalent_gaussian_Nsigma(p)[source]¶
Number of Gaussian sigmas corresponding to tail probability.
This function computes the value of the characteristic function of a standard Gaussian distribution for the tail probability equivalent to the provided p-value, and turns this value into units of standard deviations away from the Gaussian mean. This allows the user to make a statement about the signal such as “I detected this pulsation at 4.1 sigma”.
The example values below are obtained by bruteforce integrating the Gaussian probability density function using the mpmath library between Nsigma and +inf.
Examples
>>> np.isclose(equivalent_gaussian_Nsigma(0.15865525393145707), 1,
... atol=0.01)
True
>>> np.isclose(equivalent_gaussian_Nsigma(0.0013498980316301035), 3,
... atol=0.01)
True
>>> np.isclose(equivalent_gaussian_Nsigma(9.865877e-10), 6,
... atol=0.01)
True
>>> np.isclose(equivalent_gaussian_Nsigma(6.22096e-16), 8,
... atol=0.01)
True
>>> np.isclose(equivalent_gaussian_Nsigma(3.0567e-138), 25, atol=0.1)
True
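Conceptually this is the inverse survival function of the standard normal distribution; a sketch using scipy (assuming scipy is available; the helper name below is illustrative, not the Stingray implementation):

```python
from scipy.stats import norm

def equivalent_gaussian_nsigma_sketch(p):
    # Inverse survival function of the standard normal: the number of
    # sigmas whose one-sided tail probability equals p.  (The Stingray
    # implementation integrates with mpmath so that extremely small
    # p-values, which underflow double precision, are still handled.)
    return norm.isf(p)

print(equivalent_gaussian_nsigma_sketch(0.0013498980316301035))  # ≈ 3
```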
 stingray.stats.fold_detection_level(nbin, epsilon=0.01, ntrial=1)[source]¶
Return the detection level for a folded profile.
See Leahy et al. (1983).
 Parameters
 nbinint
The number of bins in the profile
 epsilonfloat, default 0.01
The fractional probability that the signal has been produced by noise
 Returns
 detlevfloat
The epoch folding statistics corresponding to a probability epsilon * 100 % that the signal has been produced by noise
 Other Parameters
 ntrialint
The number of trials executed to find this profile
 stingray.stats.fold_profile_logprobability(stat, nbin, ntrial=1)[source]¶
Calculate the probability of a certain folded profile, due to noise.
 Parameters
 statfloat
The epoch folding statistics
 nbinint
The number of bins in the profile
 Returns
 logpfloat
The logprobability that the profile has been produced by noise
 Other Parameters
 ntrialint
The number of trials executed to find this profile
 stingray.stats.fold_profile_probability(stat, nbin, ntrial=1)[source]¶
Calculate the probability of a certain folded profile, due to noise.
 Parameters
 statfloat
The epoch folding statistics
 nbinint
The number of bins in the profile
 Returns
 pfloat
The probability that the profile has been produced by noise
 Other Parameters
 ntrialint
The number of trials executed to find this profile
 stingray.stats.p_multitrial_from_single_trial(p1, n)[source]¶
Calculate a multi-trial p-value from a single-trial one.
Calling p the probability of a single success, the Binomial distribution says that the probability of at least one success in n trials is
\[P(k\geq 1) = \sum_{k\geq 1} \binom{n}{k} p^k (1-p)^{(n-k)}\]
or more simply, using P(k ≥ 0) = 1:
\[P(k\geq 1) = 1 - \binom{n}{0} (1-p)^n = 1 - (1-p)^n\]
 Parameters
 p1float
The significance at which we reject the null hypothesis on each single trial.
 nint
The number of trials
 Returns
 pnfloat
The significance at which we reject the null hypothesis after multiple trials
 stingray.stats.p_single_trial_from_p_multitrial(pn, n)[source]¶
Calculate the single-trial p-value from a total p-value
Let us say that we want to reject a null hypothesis at the
pn
level, after executingn
different measurements. This might be the case because, e.g., we want to have a 1% probability of detecting a signal in an entire power spectrum, and we need to correct the detection level accordingly.The typical procedure is dividing the initial probability (often called _epsilon_) by the number of trials. This is called the Bonferroni correction and it is often a good approximation, when
pn
is low:p1 = pn / n
.However, if
pn
is close to 1, this approximation gives incorrect results.Here we calculate this probability by inverting the Binomial problem. Given that (see
p_multitrial_from_single_trial
) the probability of getting at least one hit in n trials, given the single-trial probability p, is
\[P (k \geq 1) = 1 - (1 - p)^n,\]
we get the single-trial probability from the multi-trial one as
\[p = 1 - (1 - P)^{(1/n)}\]
This is also known as the Šidák correction.
 Parameters
 pnfloat
The significance at which we want to reject the null hypothesis after multiple trials
 nint
The number of trials
 Returns
 p1float
The significance at which we reject the null hypothesis on each single trial.
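The exact inversion and the Bonferroni approximation discussed above can be compared with a few lines of stand-alone Python (re-implementing the formulas, not calling Stingray):

```python
def p_single_from_multi(pn, n):
    # Exact (Sidak) inversion of pn = 1 - (1 - p1)**n
    return 1 - (1 - pn) ** (1 / n)

def p_multi_from_single(p1, n):
    return 1 - (1 - p1) ** n

n, pn = 100, 0.05
exact = p_single_from_multi(pn, n)
bonferroni = pn / n  # the common approximation

# The exact inversion round-trips...
assert abs(p_multi_from_single(exact, n) - pn) < 1e-12
# ...and agrees with Bonferroni to a few percent when pn is small.
print(exact, bonferroni)
```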
 stingray.stats.pds_detection_level(epsilon=0.01, ntrial=1, n_summed_spectra=1, n_rebin=1)[source]¶
Detection level for a PDS.
Return the detection level (with probability 1  epsilon) for a Power Density Spectrum of nbins bins, normalized a la Leahy (1983), based on the 2dof \({\chi}^2\) statistics, corrected for rebinning (n_rebin) and multiple PDS averaging (n_summed_spectra)
 Parameters
 epsilonfloat
The singletrial probability value(s)
 Other Parameters
 ntrialint
The number of independent trials (the independent bins of the PDS)
 n_summed_spectraint
The number of power density spectra that have been averaged to obtain this power level
 n_rebinint
The number of power density bins that have been averaged to obtain this power level
Examples
>>> np.isclose(pds_detection_level(0.1), 4.6, atol=0.1)
True
>>> np.allclose(pds_detection_level(0.1, n_rebin=[1]), [4.6], atol=0.1)
True
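For a single Leahy-normalized spectrum (no rebinning or averaging), the detection level reduces to the inverse survival function of a 2-dof chi-squared distribution, after correcting the probability for the number of trials. A sketch assuming scipy is available (simple_pds_detection_level is illustrative, not the Stingray implementation):

```python
from scipy.stats import chi2

def simple_pds_detection_level(epsilon, ntrial=1):
    # Turn the multi-trial probability into a single-trial one
    # (see p_single_trial_from_p_multitrial), then invert the
    # 2-dof chi-squared survival function of the Leahy powers.
    epsilon_single = 1 - (1 - epsilon) ** (1 / ntrial)
    return chi2.isf(epsilon_single, 2)

print(simple_pds_detection_level(0.1))  # ≈ 4.6
```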
 stingray.stats.pds_probability(level, ntrial=1, n_summed_spectra=1, n_rebin=1)[source]¶
Give the probability of a given power level in PDS.
Return the probability of a certain power level in a Power Density Spectrum of nbins bins, normalized a la Leahy (1983), based on the 2dof \({\chi}^2\) statistics, corrected for rebinning (n_rebin) and multiple PDS averaging (n_summed_spectra)
 Parameters
 levelfloat or array of floats
The power level for which we are calculating the probability
 Returns
 epsilonfloat
The probability value(s)
 Other Parameters
 ntrialint
The number of independent trials (the independent bins of the PDS)
 n_summed_spectraint
The number of power density spectra that have been averaged to obtain this power level
 n_rebinint
The number of power density bins that have been averaged to obtain this power level
 stingray.stats.pf_from_a(a)[source]¶
Pulsed fraction from fractional amplitude of modulation.
If the pulsed profile is defined as p = mean * (1 + a * sin(phase)),
we define “pulsed fraction” as 2a/b, where b = mean + a is the maximum and a is the amplitude of the modulation.
Hence, pulsed fraction = 2a/(1+a)
Examples
>>> pf_from_a(1)
1.0
>>> pf_from_a(0)
0.0
 stingray.stats.pf_from_ssig(ssig, ncounts)[source]¶
Estimate pulsed fraction for a sinusoid from a given Z or PDS power.
See
a_from_ssig
andpf_from_a
for more detailsExamples
>>> round(a_from_pf(pf_from_ssig(150, 30000)), 1)
0.1
 stingray.stats.pf_upper_limit(*args, **kwargs)[source]¶
Upper limit on pulsed fraction, given a measured power in the PDS/Z search.
See
power_upper_limit
andpf_from_ssig
. All arguments are the same asamplitude_upper_limit
 Parameters
 pmeas: float
The measured value of power
 counts: int
The number of counts in the light curve used to calculate the spectrum
 Returns
 pf: float
The pulsed fraction that could produce P>pmeas with 1  c probability
 Other Parameters
 n: int
The number of summed powers to obtain pmeas. It can be multiple harmonics of the PDS, adjacent bins in a PDS summed to collect all the power in a QPO, or the n in Z^2_n
 c: float
The confidence value for the probability (e.g. 0.95 = 95%)
 fft_corr: bool
Apply a correction for the expected power concentrated in an FFT bin, which is about 0.773 on average (it’s 1 at the center of the bin, 2/pi at the bin edge).
 nyq_ratio: float
Ratio of the frequency of this feature with respect to the Nyquist frequency. Important to know when dealing with FFTs, because the FFT response decays between 0 and f_Nyq similarly to the response inside a frequency bin: from 1 at 0 Hz to ~2/pi at f_Nyq
Examples
>>> pfup = pf_upper_limit(40, 30000, 1, 0.99)
>>> np.isclose(pfup, 0.13, atol=0.01)
True
 stingray.stats.power_confidence_limits(preal, n=1, c=0.95)[source]¶
Confidence limits on power, given a (theoretical) signal power.
This is to be used when we expect a given power (e.g. from the pulsed fraction measured in previous observations) and we want to know the range of values the measured power could take to a given confidence level. Adapted from Vaughan et al. 1994, noting that, after appropriate normalization of the spectral stats, the distribution of powers in the PDS and the Z^2_n searches is always described by a noncentral chi squared distribution.
 Parameters
 preal: float
The theoretical signalgenerated value of power
 Returns
 pmeas: [float, float]
The upper and lower confidence interval (a, 1-a) on the measured power
 Other Parameters
 n: int
The number of summed powers to obtain the result. It can be multiple harmonics of the PDS, adjacent bins in a PDS summed to collect all the power in a QPO, or the n in Z^2_n
 c: float
The confidence level (e.g. 0.95=95%)
Examples
>>> cl = power_confidence_limits(150, c=0.84)
>>> np.allclose(cl, [127, 176], atol=1)
True
 stingray.stats.power_upper_limit(pmeas, n=1, c=0.95)[source]¶
Upper limit on signal power, given a measured power in the PDS/Z search.
Adapted from Vaughan et al. 1994, noting that, after appropriate normalization of the spectral stats, the distribution of powers in the PDS and the Z^2_n searches is always described by a noncentral chi squared distribution.
Note that Vaughan+94 gives p(pmeas | preal), while we are interested in p(preal | pmeas), which is not described by the NCX2 stat. Rather than integrating the CDF of this probability distribution, we start from a reasonable approximation and fit to find the preal that gives pmeas as a (e.g. 95%) confidence limit.
As Vaughan+94 shows, this power is always larger than the observed one. This is because we are looking for the maximum signal power that, combined with noise powers, would give the observed power. This involves the possibility that noise powers partially cancel out some signal power.
 Parameters
 pmeas: float
The measured value of power
 Returns
 psig: float
The signal power that could produce P>pmeas with 1  c probability
 Other Parameters
 n: int
The number of summed powers to obtain pmeas. It can be multiple harmonics of the PDS, adjacent bins in a PDS summed to collect all the power in a QPO, or the n in Z^2_n
 c: float
The confidence value for the probability (e.g. 0.95 = 95%)
Examples
>>> pup = power_upper_limit(40, 1, 0.99)
>>> np.isclose(pup, 75, atol=2)
True
 stingray.stats.ssig_from_a(a, ncounts)[source]¶
Theoretical power in the Z or PDS search for a sinusoid of amplitude a.
From Leahy et al. 1983, given a pulse profile p = lambda * (1 + a * sin(phase)), The theoretical value of Z^2_n is Ncounts / 2 * a^2
Note that if there are multiple sinusoidal components, one can use a = sqrt(sum(a_l^2)) (Bachetti+2021b)
Examples
>>> round(ssig_from_a(0.1, 30000), 1)
150.0
 stingray.stats.ssig_from_pf(pf, ncounts)[source]¶
Theoretical power in the Z or PDS for a sinusoid of pulsed fraction pf.
See
ssig_from_a
anda_from_pf
for more detailsExamples
>>> round(ssig_from_pf(pf_from_a(0.1), 30000), 1)
150.0
 stingray.stats.z2_n_detection_level(n=2, epsilon=0.01, ntrial=1, n_summed_spectra=1)[source]¶
Return the detection level for the Z^2_n statistics.
See Buccheri et al. (1983), Bendat and Piersol (1971).
 Parameters
 nint, default 2
The
n
in $Z^2_n$ (number of harmonics, including the fundamental) epsilonfloat, default 0.01
The fractional probability that the signal has been produced by noise
 Returns
 detlevfloat
The epoch folding statistics corresponding to a probability epsilon * 100 % that the signal has been produced by noise
 Other Parameters
 ntrialint
The number of trials executed to find this profile
 n_summed_spectraint
Number of Z_2^n periodograms that are being averaged
 stingray.stats.z2_n_logprobability(z2, n, ntrial=1, n_summed_spectra=1)[source]¶
Calculate the probability of a certain folded profile, due to noise.
 Parameters
 z2float
A Z^2_n statistics value
 nint, default 2
The
n
in $Z^2_n$ (number of harmonics, including the fundamental)
 Returns
 pfloat
The probability that the Z^2_n value has been produced by noise
 Other Parameters
 ntrialint
The number of trials executed to find this profile
 n_summed_spectraint
Number of Z_2^n periodograms that were averaged to obtain z2
 stingray.stats.z2_n_probability(z2, n, ntrial=1, n_summed_spectra=1)[source]¶
Calculate the probability of a certain folded profile, due to noise.
 Parameters
 z2float
A Z^2_n statistics value
 nint, default 2
The
n
in $Z^2_n$ (number of harmonics, including the fundamental)
 Returns
 pfloat
The probability that the Z^2_n value has been produced by noise
 Other Parameters
 ntrialint
The number of trials executed to find this profile
 n_summed_spectraint
Number of Z_2^n periodograms that were averaged to obtain z2
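The corresponding noise probability can be sketched the same way with the chi-squared survival function; again an illustrative stand-in for the stingray implementation, with hypothetical names:

```python
from scipy.stats import chi2

def z2_n_prob_sketch(z2, n=2, ntrial=1):
    """Illustrative noise probability of a Z^2_n value: tail of a
    chi-squared distribution with 2n degrees of freedom."""
    p_single = chi2.sf(z2, 2 * n)
    # Probability of at least one false positive over ntrial trials
    return 1 - (1 - p_single) ** ntrial

p = z2_n_prob_sketch(13.28, n=2)  # close to 0.01
```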
GTI Functionality¶
 stingray.gti.append_gtis(gti0, gti1)[source]¶
Union of two non-overlapping GTIs.
If the two GTIs “touch”, this is tolerated and the touching GTIs are joined in a single one.
 Parameters
 gti0: 2d float array
List of GTIs of the form
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
. gti1: 2d float array
List of GTIs of the form
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
.
 Returns
 gti: 2d float array
The newly created GTI array.
Examples
>>> np.allclose(append_gtis([[0, 1]], [[2, 3]]), [[0, 1], [2, 3]])
True
>>> np.allclose(append_gtis([[0, 1], [4, 5]], [[2, 3]]),
...             [[0, 1], [2, 3], [4, 5]])
True
>>> np.allclose(append_gtis([[0, 1]], [[1, 3]]), [[0, 3]])
True
 stingray.gti.bin_intervals_from_gtis(gtis, segment_size, time, dt=None, fraction_step=1, epsilon=0.001)[source]¶
Compute start/stop times of equal time intervals, compatible with GTIs, and map them to the indices of an array of time stamps.
Used to start each FFT/PDS/cospectrum from the start of a GTI, and stop before the next gap in data (end of GTI). In this case, it is necessary to specify the time array containing the times of the light curve bins. Returns start and stop bins of the intervals to use for the PDS.
 Parameters
 gtis2d float array
List of GTIs of the form
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
. segment_sizefloat
Length of each time segment.
 timearraylike
Array of time stamps.
 Returns
 spectrum_start_binsarraylike
List of starting bins in the original time array to use in spectral calculations.
 spectrum_stop_binsarraylike
List of end bins to use in the spectral calculations.
 Other Parameters
 dtfloat, default median(diff(time))
Time resolution of the light curve.
 epsilonfloat, default 0.001
The tolerance, in fraction of dt, for the comparisons at the borders.
 fraction_stepfloat
If the step is not a full segment_size but less (e.g. a moving window), this indicates the ratio between the step and segment_size (e.g. 0.5 means that the window shifts by half segment_size).
Examples
>>> time = np.arange(0.5, 13.5)
>>> gtis = [[0, 5], [6, 8], [9, 10]]
>>> segment_size = 2
>>> start_bins, stop_bins = bin_intervals_from_gtis(gtis,segment_size,time)
>>> np.allclose(start_bins, [0, 2, 6])
True
>>> np.allclose(stop_bins, [2, 4, 8])
True
>>> np.allclose(time[start_bins[0]:stop_bins[0]], [0.5, 1.5])
True
>>> np.allclose(time[start_bins[1]:stop_bins[1]], [2.5, 3.5])
True
 stingray.gti.check_gtis(gti)[source]¶
Check if GTIs are well-behaved.
Check that:
the shape of the GTI array is correct;
no start > end
no overlaps.
 Parameters
 gtilist
A list of GTI
(start, stop)
pairs extracted from the FITS file.
 Raises
 TypeError
If GTIs are of the wrong shape
 ValueError
If GTIs have overlapping or displaced values
 stingray.gti.check_separate(gti0, gti1)[source]¶
Check if two GTIs do not overlap.
 Parameters
 gti0: 2d float array
List of GTIs of form
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
. gti1: 2d float array
List of GTIs of form
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
.
 Returns
 separate: bool
True
if GTIs are mutually exclusive,False
if not.
Examples
>>> gti0 = [[0, 10]]
>>> gti1 = [[20, 30]]
>>> check_separate(gti0, gti1)
True
>>> gti0 = [[0, 10]]
>>> gti1 = [[0, 10]]
>>> check_separate(gti0, gti1)
False
>>> gti0 = [[0, 10]]
>>> gti1 = [[10, 20]]
>>> check_separate(gti0, gti1)
True
>>> gti0 = [[0, 11]]
>>> gti1 = [[10, 20]]
>>> check_separate(gti0, gti1)
False
>>> gti0 = [[0, 11]]
>>> gti1 = [[10, 20]]
>>> check_separate(gti1, gti0)
False
>>> gti0 = [[0, 10], [30, 40]]
>>> gti1 = [[11, 28]]
>>> check_separate(gti0, gti1)
True
 stingray.gti.create_gti_from_condition(time, condition, safe_interval=0, dt=None)[source]¶
Create a GTI list from a time array and a boolean mask (condition).
 Parameters
 timearraylike
Array containing time stamps.
 conditionarraylike
An array of bools, of the same length of time. A possible condition can be, e.g., the result of
lc > 0
.
 Returns
 gtis
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
The newly created GTIs.
 Other Parameters
 safe_intervalfloat or
[float, float]
A safe interval to exclude at both ends (if single float) or the start and the end (if pair of values) of GTIs.
 dtfloat
The width (in sec) of each bin of the time array. Can be irregular.
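A minimal sketch of the idea, assuming a constant dt and no safe_interval; the helper name is hypothetical and the edge handling is simplified compared to the real function:

```python
import numpy as np

def gti_from_condition_sketch(time, condition, dt=1.0):
    """Turn a boolean mask over time bins into [[start, stop], ...] intervals.
    Bin edges are taken at time +/- dt/2 (constant dt assumed here)."""
    condition = np.asarray(condition, dtype=bool)
    time = np.asarray(time, dtype=float)
    # Locate the edges of contiguous True runs
    padded = np.concatenate([[False], condition, [False]]).astype(int)
    jumps = np.diff(padded)
    starts = np.where(jumps == 1)[0]       # first True bin of each run
    stops = np.where(jumps == -1)[0] - 1   # last True bin of each run
    return np.array([[time[i] - dt / 2, time[j] + dt / 2]
                     for i, j in zip(starts, stops)])

gtis = gti_from_condition_sketch([0, 1, 2, 3, 4],
                                 [False, True, True, False, True])
# -> [[0.5, 2.5], [3.5, 4.5]]
```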
 stingray.gti.create_gti_mask(time, gtis, safe_interval=None, min_length=0, return_new_gtis=False, dt=None, epsilon=0.001)[source]¶
Create GTI mask.
Assumes that no overlaps are present between GTIs
 Parameters
 timenumpy.ndarray
An array of time stamps
 gtis
[[g0_0, g0_1], [g1_0, g1_1], ...]
, float arraylike The list of GTIs
 Returns
 maskbool array
A mask labelling all time stamps that are included in the GTIs versus those that are not.
 new_gtis
Nx2
array An array of new GTIs created by this function.
 Other Parameters
 safe_intervalfloat or
[float, float]
, default None A safe interval to exclude at both ends (if single float) or the start and the end (if pair of values) of GTIs. If None, no safe interval is applied to data.
 min_lengthfloat
An optional minimum length for the GTIs to be applied. Only GTIs longer than min_length will be considered when creating the mask.
 return_new_gtisbool
If True, return the list of new GTIs (if min_length > 0).
 dtfloat
Time resolution of the data, i.e. the interval between time stamps.
 epsilonfloat
Fraction of
dt
that is tolerated at the borders of a GTI.
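The core of the masking logic can be sketched as follows, assuming uniform binning; the helper is illustrative and omits min_length, safe_interval, and the new-GTI bookkeeping:

```python
import numpy as np

def gti_mask_sketch(time, gtis, dt=None, epsilon=0.001):
    """Mark the time bins that fall entirely inside a GTI (uniform dt assumed)."""
    time = np.asarray(time, dtype=float)
    if dt is None:
        dt = np.median(np.diff(time))
    mask = np.zeros(time.size, dtype=bool)
    for g0, g1 in gtis:
        # A bin is good if both its edges are inside the GTI,
        # within a tolerance of epsilon * dt
        mask |= ((time - dt / 2 >= g0 - epsilon * dt)
                 & (time + dt / 2 <= g1 + epsilon * dt))
    return mask

mask = gti_mask_sketch(np.arange(0.5, 5), [[0, 2], [3, 5]])
# -> [True, True, False, True, True]
```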
 stingray.gti.create_gti_mask_complete(time, gtis, safe_interval=0, min_length=0, return_new_gtis=False, dt=None, epsilon=0.001)[source]¶
Create GTI mask, allowing for nonconstant
dt
.Assumes that no overlaps are present between GTIs.
 Parameters
 timenumpy.ndarray
An array of time stamps.
 gtis
[[g0_0, g0_1], [g1_0, g1_1], ...]
, float arraylike The list of GTIs.
 Returns
 maskbool array
A mask labelling all time stamps that are included in the GTIs versus those that are not.
 new_gtisNx2 array
An array of new GTIs created by this function.
 Other Parameters
 safe_intervalfloat or [float, float]
A safe interval to exclude at both ends (if single float) or the start and the end (if pair of values) of GTIs.
 min_lengthfloat
An optional minimum length for the GTIs to be applied. Only GTIs longer than min_length will be considered when creating the mask.
 return_new_gtisbool
If True, return the list of new GTIs (if min_length > 0).
 dtfloat
Time resolution of the data, i.e. the interval between time stamps.
 epsilonfloat
Fraction of
dt
that is tolerated at the borders of a GTI.
 stingray.gti.cross_gtis(gti_list)[source]¶
From multiple GTI lists, extract the common intervals EXACTLY.
 Parameters
 gti_listarraylike
List of GTI arrays, each one in the usual format
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
.
 Returns
 gti0: 2d float array
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
The newly created GTIs.
See also
cross_two_gtis
Extract the common intervals from two GTI lists EXACTLY
Examples
>>> gti1 = np.array([[1, 2]])
>>> gti2 = np.array([[1, 2]])
>>> newgti = cross_gtis([gti1, gti2])
>>> np.allclose(newgti, [[1, 2]])
True
>>> gti1 = np.array([[1, 4]])
>>> gti2 = np.array([[1, 2], [2, 4]])
>>> newgti = cross_gtis([gti1, gti2])
>>> np.allclose(newgti, [[1, 4]])
True
 stingray.gti.cross_two_gtis(gti0, gti1)[source]¶
Extract the common intervals from two GTI lists EXACTLY.
 Parameters
 gti0iterable of the form
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
 gti1iterable of the form
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
The two lists of GTIs to be crossed.
 Returns
 gtis
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
The newly created GTIs.
See also
cross_gtis
From multiple GTI lists, extract common intervals EXACTLY
Examples
>>> gti1 = np.array([[1, 2]])
>>> gti2 = np.array([[1, 2]])
>>> newgti = cross_two_gtis(gti1, gti2)
>>> np.allclose(newgti, [[1, 2]])
True
>>> gti1 = np.array([[1, 4]])
>>> gti2 = np.array([[1, 2], [2, 4]])
>>> newgti = cross_two_gtis(gti1, gti2)
>>> np.allclose(newgti, [[1, 4]])
True
 stingray.gti.generate_indices_of_gti_boundaries(times, gti, dt=0)[source]¶
Get the indices of events from different GTIs of the observation.
This is a generator, yielding the boundaries of each GTI and the corresponding indices in the time array.
 Parameters
 timesfloat
np.array
Array of times.
 gti[[gti00, gti01], [gti10, gti11], …]
Good time intervals.
 Yields
 g0: float
Start time of current GTI.
 g1: float
End time of current GTI.
 startidx: int
Start index of the current GTI in the time array.
 stopidx: int
End index of the current GTI in the time array. Note that this is larger by one, so that
time[startidx:stopidx]
returns the correct time interval.
 Other Parameters
 dtfloat
If times are uniformly binned, this is the binning time.
Examples
>>> times = [0.1, 0.2, 0.5, 0.8, 1.1]
>>> gtis = [[0, 0.55], [0.6, 2.1]]
>>> vals = generate_indices_of_gti_boundaries(times, gtis)
>>> v0 = next(vals)
>>> np.allclose(v0[:2], gtis[0])
True
>>> np.allclose(v0[2:], [0, 3])
True
 stingray.gti.generate_indices_of_segment_boundaries_binned(times, gti, segment_size, dt=None)[source]¶
Get the indices of binned times from different segments of the observation.
This is a generator, yielding the boundaries of each segment and the corresponding indices in the time array
 Parameters
 timesfloat
np.array
Array of times, uniformly sampled
 gti[[gti00, gti01], [gti10, gti11], …]
good time intervals
 segment_sizefloat
length of segments
 Yields
 t0: float
First time value, from the time array, in the current segment
 t1: float
Last time value, from the time array, in the current segment
 startidx: int
Start index of the current segment in the time array
 stopidx: int
End index of the current segment in the time array. Note that this is larger by one, so that
time[startidx:stopidx]
returns the correct time interval.
Examples
>>> times = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
>>> gtis = [[0.05, 0.55]]
>>> vals = generate_indices_of_segment_boundaries_binned(times, gtis, 0.5, dt=0.1)
>>> v0 = next(vals)
>>> np.allclose(v0[:2], [0.05, 0.55])
True
>>> np.allclose(v0[2:], [0, 5])
True
 stingray.gti.generate_indices_of_segment_boundaries_unbinned(times, gti, segment_size)[source]¶
Get the indices of events from different segments of the observation.
This is a generator, yielding the boundaries of each segment and the corresponding indices in the time array.
 Parameters
 timesfloat
np.array
Array of times.
 gti[[gti00, gti01], [gti10, gti11], …]
Good time intervals.
 segment_sizefloat
Length of segments.
 Yields
 t0: float
Start time of current segment.
 t1: float
End time of current segment.
 startidx: int
Start index of the current segment in the time array.
 stopidx: int
End index of the current segment in the time array. Note that this is larger by one, so that
time[startidx:stopidx]
returns the correct time interval.
Examples
>>> times = [0.1, 0.2, 0.5, 0.8, 1.1]
>>> gtis = [[0, 0.55], [0.6, 2.1]]
>>> vals = generate_indices_of_segment_boundaries_unbinned(times, gtis, 0.5)
>>> v0 = next(vals)
>>> np.allclose(v0[:2], [0, 0.5])
True
>>> # Note: 0.5 is not included in the interval
>>> np.allclose(v0[2:], [0, 2])
True
>>> v1 = next(vals)
>>> np.allclose(v1[:2], [0.6, 1.1])
True
>>> # Again: 1.1 is not included in the interval
>>> np.allclose(v1[2:], [3, 4])
True
 stingray.gti.get_btis(gtis, start_time=None, stop_time=None)[source]¶
From GTIs, obtain bad time intervals, i.e. the intervals not covered by the GTIs.
GTIs have to be well-behaved, in the sense that they have to pass check_gtis.
 Parameters
 gtisiterable
A list of GTIs.
 start_timefloat
Optional start time of the overall observation (e.g. can be earlier than the first time stamp in
gtis
). stop_timefloat
Optional stop time of the overall observation (e.g. can be later than the last time stamp in gtis).
 Returns
 btisnumpy.ndarray
A list of bad time intervals.
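For well-behaved (sorted, non-overlapping) GTIs the complement can be computed by interleaving the edges; an illustrative sketch under those assumptions, not the library implementation:

```python
import numpy as np

def btis_sketch(gtis, start_time=None, stop_time=None):
    """Complement of a sorted, non-overlapping GTI list over the observation."""
    gtis = np.asarray(gtis, dtype=float)
    if start_time is None:
        start_time = gtis[0, 0]
    if stop_time is None:
        stop_time = gtis[-1, 1]
    # Interleave the observation limits with the GTI edges: consecutive
    # pairs of the resulting sequence are exactly the gaps between GTIs
    edges = np.concatenate([[start_time], gtis.ravel(), [stop_time]])
    btis = edges.reshape(-1, 2)
    return btis[btis[:, 1] - btis[:, 0] > 0]  # drop zero-length intervals

btis = btis_sketch([[1, 2], [4, 5]], start_time=0, stop_time=6)
# -> [[0, 1], [2, 4], [5, 6]]
```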
 stingray.gti.get_gti_extensions_from_pattern(lchdulist, name_pattern='GTI')[source]¶
Gets the GTI extensions that match a given pattern.
 Parameters
 lchdulist: `:class:astropy.io.fits.HDUList` object
The full content of a FITS file.
 name_pattern: str
Pattern indicating all the GTI extensions.
 Returns
 ext_list: list
List of GTI extension numbers whose name matches the input pattern.
Examples
>>> from astropy.io import fits
>>> start = np.arange(0, 300, 100)
>>> stop = start + 50.
>>> s1 = fits.Column(name='START', array=start, format='D')
>>> s2 = fits.Column(name='STOP', array=stop, format='D')
>>> hdu1 = fits.TableHDU.from_columns([s1, s2], name='GTI005XX')
>>> hdu2 = fits.TableHDU.from_columns([s1, s2], name='GTI00501')
>>> lchdulist = fits.HDUList([hdu1])
>>> gtiextn = get_gti_extensions_from_pattern(
...     lchdulist, name_pattern='GTI005[0-9]+')
>>> np.allclose(gtiextn, [1])
True
 stingray.gti.get_gti_from_all_extensions(lchdulist, accepted_gtistrings=['GTI'], det_numbers=None)[source]¶
Intersect the GTIs from all the accepted extensions.
 Parameters
 lchdulist: `:class:astropy.io.fits.HDUList` object
The full content of a FITS file.
 accepted_gtistrings: list of str
Base strings of GTI extensions. For missions adding the detector number to GTI extensions like, e.g., XMM and Chandra, this function automatically adds the detector number and looks for all matching GTI extensions (e.g. “STDGTI” will also retrieve “STDGTI05”; “GTI0” will also retrieve “GTI00501”).
 Returns
 gti_list: [[gti00, gti01], [gti10, gti11], …]
List of good time intervals, as the intersection of all matching GTIs. If there are two matching extensions, with GTIs [[0, 50], [100, 200]] and [[40, 70]] respectively, this function will return [[40, 50]].
Examples
>>> from astropy.io import fits
>>> s1 = fits.Column(name='START', array=[0, 100, 200], format='D')
>>> s2 = fits.Column(name='STOP', array=[50, 150, 250], format='D')
>>> hdu1 = fits.TableHDU.from_columns([s1, s2], name='GTI00501')
>>> s1 = fits.Column(name='START', array=[200, 300], format='D')
>>> s2 = fits.Column(name='STOP', array=[250, 350], format='D')
>>> hdu2 = fits.TableHDU.from_columns([s1, s2], name='STDGTI05')
>>> lchdulist = fits.HDUList([hdu1, hdu2])
>>> gti = get_gti_from_all_extensions(
...     lchdulist, accepted_gtistrings=['GTI0', 'STDGTI'],
...     det_numbers=[5])
>>> np.allclose(gti, [[200, 250]])
True
 stingray.gti.get_gti_from_hdu(gtihdu)[source]¶
Get the GTIs from a given FITS extension.
 Parameters
 gtihdu: `:class:astropy.io.fits.TableHDU` object
The GTI HDU.
 Returns
 gti_list: [[gti00, gti01], [gti10, gti11], …]
List of good time intervals.
Examples
>>> from astropy.io import fits
>>> start = np.arange(0, 300, 100)
>>> stop = start + 50.
>>> s1 = fits.Column(name='START', array=start, format='D')
>>> s2 = fits.Column(name='STOP', array=stop, format='D')
>>> hdu1 = fits.TableHDU.from_columns([s1, s2], name='GTI00501')
>>> gti = get_gti_from_hdu(hdu1)
>>> np.allclose(gti, [[0, 50], [100, 150], [200, 250]])
True
 stingray.gti.get_gti_lengths(gti)[source]¶
Calculate the length of each Good Time Interval.
 Parameters
 gti[[gti00, gti01], [gti10, gti11], …]
The list of good time intervals.
 Returns
 lengths
np.ndarray
List of GTI lengths.
Examples
>>> gti = [[0, 1000], [1000, 1001], [3000, 3020]]
>>> np.allclose(get_gti_lengths(gti), [1000, 1, 20])
True
 stingray.gti.get_total_gti_length(gti, minlen=0)[source]¶
Calculate the total exposure during Good Time Intervals.
 Parameters
 gti[[gti00, gti01], [gti10, gti11], …]
The list of good time intervals.
 minlenfloat
Minimum GTI length to consider.
 Returns
 lengthfloat
The total exposure during GTIs.
Examples
>>> gti = [[0, 1000], [1000, 1001], [3000, 3020]]
>>> get_total_gti_length(gti)
1021
>>> get_total_gti_length(gti, minlen=5)
1020
 stingray.gti.gti_border_bins(gtis, time, dt=None, epsilon=0.001)[source]¶
Find the indices in a time array corresponding to the borders of GTIs.
GTIs shorter than the bin time are not returned.
 Parameters
 gtis2d float array
List of GTIs of the form
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
. timearraylike
Array of time stamps.
 Returns
 spectrum_start_binsarraylike
List of starting bins of each GTI
 spectrum_stop_binsarraylike
List of stop bins of each GTI. The elements corresponding to these bins should not be included.
 Other Parameters
 dtfloat, default median(diff(time))
Time resolution of the light curve.
 epsilonfloat, default 0.001
The tolerance, in fraction of dt, for the comparisons at the borders.
 fraction_stepfloat
If the step is not a full segment_size but less (e.g. a moving window), this indicates the ratio between the step and segment_size (e.g. 0.5 means that the window shifts by half segment_size).
Examples
>>> times = np.arange(0.5, 13.5)
>>> gti_border_bins([[16., 18.]], times)
Traceback (most recent call last):
    ...
ValueError: Invalid time interval for the given GTIs
>>> start_bins, stop_bins = gti_border_bins( ... [[0, 5], [6, 8]], times)
>>> np.allclose(start_bins, [0, 6])
True
>>> np.allclose(stop_bins, [5, 8])
True
>>> np.allclose(times[start_bins[0]:stop_bins[0]], [0.5, 1.5, 2.5, 3.5, 4.5])
True
>>> np.allclose(times[start_bins[1]:stop_bins[1]], [6.5, 7.5])
True
 stingray.gti.join_gtis(gti0, gti1)[source]¶
Union of two GTIs.
If GTIs are mutually exclusive, it calls append_gtis. Otherwise we put the extremes of partially overlapping GTIs on an ideal line and look at the number of opened and closed intervals. When the number of closed and opened intervals is the same, the full GTI is complete and we close it.
In practice, we assign to each opening time of a GTI the value +1, and the value -1 to each closing time; when the cumulative sum is zero, the GTI has ended. The timestamp after each closed GTI is the start of a new one.
 Parameters
 gti0: 2d float array
List of GTIs of the form
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
 gti1: 2d float array
List of GTIs of the form
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
 Returns
 gti: 2d float array
The newly created GTI
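The sweep described above can be sketched directly with numpy; this illustrative helper assumes well-behaved input GTIs within each list:

```python
import numpy as np

def join_gtis_sketch(gti0, gti1):
    """Union of two GTI lists via the +1/-1 cumulative-sum sweep."""
    gti = np.concatenate([np.asarray(gti0, float), np.asarray(gti1, float)])
    points = np.concatenate([gti[:, 0], gti[:, 1]])
    # +1 for each opening time, -1 for each closing time
    vals = np.concatenate([np.ones(len(gti)), -np.ones(len(gti))])
    # Sort by time; at equal times, put openings first so touching GTIs merge
    order = np.lexsort((-vals, points))
    points, vals = points[order], vals[order]
    cum = np.cumsum(vals)
    stop_idx = np.where(cum == 0)[0]   # a GTI closes when the sum returns to zero
    start_idx = np.concatenate([[0], stop_idx[:-1] + 1])
    return np.column_stack([points[start_idx], points[stop_idx]])

joined = join_gtis_sketch([[0, 5]], [[2, 7]])  # -> [[0, 7]]
```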
 stingray.gti.load_gtis(fits_file, gtistring=None)[source]¶
Load Good Time Intervals (GTIs) from HDU EVENTS of file fits_file. The file is expected to be in FITS format.
 Parameters
 fits_filestr
File name and path for the FITS file with the GTIs to be loaded.
 gtistringstr
If the name of the FITS extension with the GTIs is not
GTI
, the alternative name can be set with this parameter.
 Returns
 gti_listlist
A list of GTI
(start, stop)
pairs extracted from the FITS file.
 stingray.gti.time_intervals_from_gtis(gtis, segment_size, fraction_step=1, epsilon=1e05)[source]¶
Compute start/stop times of equal time intervals, compatible with GTIs.
Used to start each FFT/PDS/cospectrum from the start of a GTI, and stop before the next gap in data (end of GTI).
 Parameters
 gtis2d float array
List of GTIs of the form
[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
 segment_sizefloat
Length of the time segments
 fraction_stepfloat
If the step is not a full segment_size but less (e.g. a moving window), this indicates the ratio between the step and segment_size (e.g. 0.5 means that the window shifts by half segment_size).
 Returns
 spectrum_start_timesarraylike
List of starting times to use in the spectral calculations.
 spectrum_stop_timesarraylike
List of end times to use in the spectral calculations.
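A simplified sketch of the segmentation logic, assuming well-behaved GTIs; the helper name and the tolerance used at the GTI border are illustrative:

```python
import numpy as np

def time_intervals_sketch(gtis, segment_size, fraction_step=1):
    """Start/stop times of equal-length segments fitting inside each GTI."""
    start_times = []
    step = segment_size * fraction_step
    for g0, g1 in gtis:
        t = g0
        # Keep adding segments while they fit entirely inside the GTI
        while t + segment_size <= g1 + 1e-9:
            start_times.append(t)
            t += step
    start_times = np.asarray(start_times)
    return start_times, start_times + segment_size

starts, stops = time_intervals_sketch([[0, 5], [7, 10]], 2)
# starts -> [0, 2, 7], stops -> [2, 4, 9]
```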
I/O Functionality¶
 stingray.io.common_name(str1, str2, default='common')[source]¶
Strip two strings of the letters not in common.
Filenames must be of same length and only differ by a few letters.
 Parameters
 str1str
 str2str
 Returns
 common_strstr
A string containing the parts of the two names in common
 Other Parameters
 defaultstr
The string to return if
common_str
is empty
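One plausible reading of this behavior, sketched as a position-by-position comparison; the exact character handling of the real function may differ, and the cleanup step is hypothetical:

```python
def common_name_sketch(str1, str2, default='common'):
    """Illustrative version: keep the characters that are identical
    at the same position in two equal-length file names."""
    if len(str1) != len(str2):
        return default
    common = ''.join(c1 for c1, c2 in zip(str1, str2) if c1 == c2)
    common = common.strip('_-')  # hypothetical cleanup of leftover separators
    return common if common else default

name = common_name_sketch('obs_FPMA.evt', 'obs_FPMB.evt')
# -> 'obs_FPM.evt'
```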
 stingray.io.get_file_extension(fname)[source]¶
Get the extension from the file name.
If gzipped, add ‘.gz’ to extension.
Examples
>>> get_file_extension('ciao.tar')
'.tar'
>>> get_file_extension('ciao.tar.gz')
'.tar.gz'
>>> get_file_extension('ciao.evt.gz')
'.evt.gz'
>>> get_file_extension('ciao.a.tutti.evt.gz')
'.evt.gz'
 stingray.io.get_key_from_mission_info(info, key, default, inst=None, mode=None)[source]¶
Get the name of a header key or table column from the mission database.
Many entries in the mission database have default values that can be altered for specific instruments or observing modes. Here, if there is a definition for a given instrument and mode, we take that; otherwise we use the default.
 Parameters
 infodict
Nested dictionary containing all the information for a given mission. It can be nested, e.g. contain some info for a given instrument, and for each observing mode of that instrument.
 keystr
The key to read from the info dictionary
 defaultobject
The default value. It can be of any type, depending on the expected type for the entry.
 Returns
 retvalobject
The wanted entry from the info dictionary
 Other Parameters
 inststr
Instrument
 modestr
Observing mode
Examples
>>> info = {'ecol': 'PI', "A": {"ecol": "BLA"}, "C": {"M1": {"ecol": "X"}}}
>>> get_key_from_mission_info(info, "ecol", "BU", inst="A", mode=None)
'BLA'
>>> get_key_from_mission_info(info, "ecol", "BU", inst="B", mode=None)
'PI'
>>> get_key_from_mission_info(info, "ecol", "BU", inst="A", mode="M1")
'BLA'
>>> get_key_from_mission_info(info, "ecol", "BU", inst="C", mode="M1")
'X'
>>> get_key_from_mission_info(info, "ghghg", "BU", inst="C", mode="M1")
'BU'
 stingray.io.high_precision_keyword_read(hdr, keyword)[source]¶
Read FITS header keywords, also if split in two.
In the case where the keyword is split in two, like
MJDREF = MJDREFI + MJDREFF
in some missions, this function returns the summed value. Otherwise, the content of the single keyword
 Parameters
 hdrdict_like
The FITS header structure, or a dictionary
 keywordstr
The key to read in the header
 Returns
 valuelong double
The value of the key, or
None
if something went wrong
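The logic can be sketched as follows, assuming the split keywords are formed by appending I and F (as for MJDREFI/MJDREFF); the helper is illustrative and ignores the 8-character FITS keyword limit:

```python
import numpy as np

def hp_keyword_sketch(hdr, keyword):
    """Read a keyword; if absent, try the split INT + FRAC pair
    (e.g. MJDREF -> MJDREFI + MJDREFF) and return the long-double sum."""
    if keyword in hdr:
        return np.longdouble(hdr[keyword])
    key_int, key_frac = keyword + 'I', keyword + 'F'
    if key_int in hdr and key_frac in hdr:
        return np.longdouble(hdr[key_int]) + np.longdouble(hdr[key_frac])
    return None

mjdref = hp_keyword_sketch({'MJDREFI': 55197, 'MJDREFF': 0.00076601852},
                           'MJDREF')
```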
 stingray.io.lcurve_from_fits(fits_file, gtistring='GTI', timecolumn='TIME', ratecolumn=None, ratehdu=1, fracexp_limit=0.9, outfile=None, noclobber=False, outdir=None)[source]¶
Load a lightcurve from a fits file.
Note
FITS light curve handling is still under testing. Absolute times might be incorrect depending on the light curve format.
 Parameters
 fits_filestr
File name of the input light curve in FITS format
 Returns
 datadict
Dictionary containing all information needed to create a
stingray.Lightcurve
object
 Other Parameters
 gtistringstr
Name of the GTI extension in the FITS file
 timecolumnstr
Name of the column containing times in the FITS file
 ratecolumnstr
Name of the column containing rates in the FITS file
 ratehdustr or int
Name or index of the FITS extension containing the light curve
 fracexp_limitfloat
Minimum exposure fraction allowed
 noclobberbool
If True, do not overwrite existing files
 stingray.io.load_events_and_gtis(fits_file, additional_columns=None, gtistring=None, gti_file=None, hduname=None, column=None)[source]¶
Load event lists and GTIs from one or more files.
Loads event list from HDU EVENTS of file fits_file, with Good Time intervals. Optionally, returns additional columns of data from the same HDU of the events.
 Parameters
 fits_filestr
 Returns
 retvalsObject with the following attributes:
 ev_listarraylike
Event times in Mission Epoch Time
 gti_list: [[gti0_0, gti0_1], [gti1_0, gti1_1], …]
GTIs in Mission Epoch Time
 additional_data: dict
A dictionary, where each key is the one specified in additional_columns. The data are an array with the values of the specified column in the FITS file.
 t_startfloat
Start time in Mission Epoch Time
 t_stopfloat
Stop time in Mission Epoch Time
 pi_listarraylike
Raw Instrument energy channels
 cal_pi_listarraylike
Calibrated PI channels (those that can be easily converted to energy values, regardless of the instrument setup.)
 energy_listarraylike
Energy of each photon in keV (only for NuSTAR, NICER, XMM)
 instrstr
Name of the instrument (e.g. EPICpn or FPMA)
 missionstr
Name of the mission (e.g. XMM or NuSTAR)
 mjdreffloat
MJD reference time for the mission
 headerstr
Full header of the FITS file, for debugging purposes
 detector_idarraylike, int
Detector id for each photon (e.g. each of the CCDs composing XMM’s or Chandra’s instruments)
 Other Parameters
 additional_columns: list of str, optional
A list of keys corresponding to the additional columns to extract from the event HDU (ex.: [‘PI’, ‘X’])
 gtistringstr
Comma-separated list of accepted GTI extensions (default GTI,STDGTI), with or without appended integer number denoting the detector
 gti_filestr, default None
External GTI file
 hdunamestr or int, default 1
Name of the HDU containing the event list
 columnstr, default None
The column containing the time values. If None, we use the name specified in the mission database, and if there is nothing there, “TIME”
 return_limits: bool, optional
Return the TSTART and TSTOP keyword values
 stingray.io.mkdir_p(path)[source]¶
Safe mkdir function, found at [somkdir].
 Parameters
 pathstr
The absolute path to the directory to be created
Notes
 stingray.io.read_header_key(fits_file, key, hdu=1)[source]¶
Read the header key key from HDU hdu of the file
fits_file
. Parameters
 fits_file: str
The file name and absolute path to the event file.
 key: str
The keyword to be read
 Returns
 valueobject
The value stored under
key
infits_file
 Other Parameters
 hduint
Index of the HDU extension from which the header key is to be read.
 stingray.io.read_mission_info(mission=None)[source]¶
Search the relevant information about a mission in xselect.mdb.
 stingray.io.ref_mjd(fits_file, hdu=1)[source]¶
Read
MJDREFF
,MJDREFI
or, if failed,MJDREF
, from the FITS header. Parameters
 fits_filestr
The file name and absolute path to the event file.
 Returns
 mjdrefnumpy.longdouble
the reference MJD
 Other Parameters
 hduint
Index of the HDU extension from which the header key is to be read.
 stingray.io.rough_calibration(pis, mission)[source]¶
Make a rough conversion between PI channel and energy.
Only works for NICER, NuSTAR, and XMM.
 Parameters
 pis: float or array of floats
PI channels in data
 mission: str
Mission name
 Returns
 energiesfloat or array of floats
Energy values
Examples
>>> rough_calibration(0, 'nustar')
1.6
>>> # It's case-insensitive
>>> rough_calibration(1200, 'XMm')
1.2
>>> rough_calibration(10, 'asDf')
Traceback (most recent call last):
    ...
ValueError: Mission asdf not recognized
>>> rough_calibration(100, 'nicer')
1.0
 stingray.io.savefig(filename, **kwargs)[source]¶
Save a figure plotted by matplotlib.
Note: This function is supposed to be used after the plot function. Otherwise it will save a blank image with no plot.
 Parameters
 filenamestr
The name of the image file. The extension must be specified in the file name. For example, a filename with .png extension will give a rasterized image, while a .pdf extension will give a vectorized output.
 kwargskeyword arguments
Keyword arguments to be passed to the savefig function of matplotlib.pyplot. For example, use bbox_inches='tight' to remove the undesirable whitespace around the image.
 stingray.io.split_numbers(number, shift=0)[source]¶
Split high precision number(s) into doubles.
You can specify the number of shifts to move the decimal point.
 Parameters
 number: long double
The input high precision number which is to be split
 Returns
 number_I: double
First part of high precision number
 number_F: double
Second part of high precision number
 Other Parameters
 shift: integer
Move the cut by
shift
decimal points to the right (left if negative)
Examples
>>> n = 12.34
>>> i, f = split_numbers(n)
>>> i == 12
True
>>> np.isclose(f, 0.34)
True
>>> split_numbers(n, 2)
(12.34, 0.0)
>>> split_numbers(n, -1)
(10.0, 2.34)
Other Utility Functions¶
 stingray.utils.baseline_als(x, y, lam=None, p=None, niter=10, return_baseline=False, offset_correction=False)[source]¶
Baseline Correction with Asymmetric Least Squares Smoothing.
 Parameters
 xarraylike
the sample time/number/position
 yarraylike
the data series corresponding to
x
 lamfloat
the lambda parameter of the ALS method. This controls how much the baseline can adapt to local changes. A higher value corresponds to a stiffer baseline
 pfloat
the asymmetry parameter of the ALS method. This controls the overall slope tolerated for the baseline. A higher value corresponds to a higher possible slope
 Returns
 y_subtractedarraylike, same size as
y
The initial time series with the trend subtracted
 baselinearraylike, same size as
y
Fitted baseline. Only returned if return_baseline is
True
 Other Parameters
 niterint
The number of iterations to perform
 return_baselinebool
return the baseline?
 offset_correctionbool
also correct for an offset to align with the running mean of the scan
Examples
>>> x = np.arange(0, 10, 0.01)
>>> y = np.zeros_like(x) + 10
>>> ysub = baseline_als(x, y)
>>> np.all(ysub < 0.001)
True
 stingray.utils.contiguous_regions(condition)[source]¶
Find contiguous True regions of the boolean array condition.
Return a 2D array where the first column is the start index of the region and the second column is the end index, found on [socontiguous].
 Parameters
 conditionbool array
 Returns
 idx
[[i0_0, i0_1], [i1_0, i1_1], ...]
A list of integer couples, with the start and end of each True block in the original array
Notes
 stingray.utils.create_window(N, window_type='uniform')[source]¶
A method to create window functions commonly used in signal processing.
Windows supported are: uniform (rectangular), parzen, hamming, hanning, triangular, welch, blackmann, and flattop.
 Parameters
 Nint
Total number of data points in window. If negative, abs is taken.
 window_type{uniform, parzen, hamming, hanning, triangular, welch, blackmann, flattop}, optional, default uniform
Type of window to create.
 Returns
 window: numpy.ndarray
Window function of length
N
.
 stingray.utils.excess_variance(lc, normalization='fvar')[source]¶
Calculate the excess variance.
Vaughan et al. 2003, MNRAS 345, 1271 give three measurements of source intrinsic variance: if a light curve has a total variance of \(S^2\), and each point has an error bar \(\sigma_{err}\), the excess variance is defined as
\[\sigma_{XS} = S^2 - \overline{\sigma_{err}}^2;\]the normalized excess variance is the excess variance divided by the square of the mean intensity:
\[\sigma_{NXS} = \dfrac{\sigma_{XS}}{\overline{x}^2};\]the fractional mean square variability amplitude, or \(F_{var}\), is finally defined as
\[F_{var} = \sqrt{\dfrac{\sigma_{XS}}{\overline{x}^2}}\] Parameters
 lcaLightcurve object
 normalizationstr
If fvar, return the fractional mean square variability \(F_{var}\). If none, return the unnormalized excess variance \(\sigma_{XS}\). If norm_xs, return the normalized excess variance \(\sigma_{NXS}\).
 Returns
 var_xsfloat
 var_xs_errfloat
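The three definitions can be checked numerically on a simulated, purely Poissonian light curve, for which the excess variance should be consistent with zero; this is a from-scratch sketch of the formulas, not a call to the stingray function:

```python
import numpy as np

rng = np.random.default_rng(42)
counts = rng.poisson(100, size=1000).astype(float)  # stand-in light curve
err = np.sqrt(counts)                               # Poisson error bars

S2 = np.var(counts, ddof=1)                  # total variance of the light curve
sigma_xs = S2 - np.mean(err ** 2)            # excess variance
sigma_nxs = sigma_xs / np.mean(counts) ** 2  # normalized excess variance
fvar = np.sqrt(np.abs(sigma_nxs))            # fractional variability amplitude
```

For pure Poisson noise, sigma_xs fluctuates around zero, so sigma_nxs and fvar are both small.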
 stingray.utils.find_nearest(array, value)[source]¶
Return the array value that is closest to the input value (Abigail Stevens: Thanks StackOverflow!)
 Parameters
 arraynp.array of ints or floats
1D array of numbers to search through. Should already be sorted from low values to high values.
 valueint or float
The value you want to find the closest to in the array.
 Returns
 array[idx]int or float
The array value that is closest to the input value.
 idxint
The index of the array of the closest value.
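For a sorted array this can be sketched with a binary search; the helper is illustrative, and its tie-breaking may differ from the library version:

```python
import numpy as np

def find_nearest_sketch(array, value):
    """Binary-search version for a sorted array; returns (closest value, index)."""
    idx = np.searchsorted(array, value)
    idx = min(idx, len(array) - 1)
    # The closest element is either at idx or just before it
    if idx > 0 and abs(array[idx - 1] - value) <= abs(array[idx] - value):
        idx -= 1
    return array[idx], idx

val, idx = find_nearest_sketch(np.array([1.0, 2.0, 4.0, 8.0]), 2.9)
# -> (2.0, 1)
```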
 stingray.utils.genDataPath(dir_path)[source]¶
Generates data path to chunks.
 Parameters
 dir_path: string
Path to zarr datastore + Top level directory name for data
 Returns
 list
List of path’s to datastore
 Raises
 IOError
If directory does not exist
 stingray.utils.get_random_state(random_state=None)[source]¶
Return a Mersenne Twister pseudorandom number generator.
 Parameters
 seedinteger or numpy.random.RandomState, optional, default None
 Returns
 random_statemtrand.RandomState object
 stingray.utils.is_iterable(var)[source]¶
Test if a variable is an iterable.
 Parameters
 varobject
The variable to be tested for iterability
 Returns
 is_iterbool
Returns ``True`` if ``var`` is an ``Iterable``, ``False`` otherwise
 stingray.utils.is_string(s)[source]¶
Portable function to answer whether a variable is a string.
 Parameters
 sobject
An object that is potentially a string
 Returns
 isstringbool
A boolean decision on whether ``s`` is a string or not
 stingray.utils.look_for_array_in_array(array1, array2)[source]¶
Find a subset of values in an array.
 Parameters
 array1iterable
An array with values to be searched
 array2iterable
A second array which potentially contains a subset of values also contained in
array1
 Returns
 array3iterable
An array with the subset of values contained in both ``array1`` and ``array2``
 stingray.utils.nearest_power_of_two(x)[source]¶
Return the number nearest to ``x`` that is an integral power of two. Parameters
 xint, float
 Returns
 x_nearestint
The number closest to ``x`` that is an integral power of two.
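One plausible implementation compares the two powers of two that bracket ``x``. A sketch only; the tie-breaking behaviour of the actual function may differ:

```python
import numpy as np

def nearest_power_of_two_sketch(x):
    """Round a positive number x to the nearest integral power of two."""
    # The powers of two bracketing x from below and above
    low = 2 ** int(np.floor(np.log2(x)))
    high = 2 * low
    # Keep whichever bracket is closer (ties go up here)
    return low if x - low < high - x else high
```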
 stingray.utils.optimal_bin_time(fftlen, tbin)[source]¶
Slightly vary the bin time so that the number of bins is a power of two.
Given an FFT length and a proposed bin time, return a bin time slightly shorter than the original that will produce a power-of-two number of FFT bins.
 Parameters
 fftlenint
Number of positive frequencies in a proposed Fourier spectrum
 tbinfloat
The proposed time resolution of a light curve
 Returns
 resfloat
A time resolution that will produce a Fourier spectrum with ``fftlen`` frequencies and a power-of-two number of FFT bins
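The idea can be sketched as follows, under the assumption that ``fftlen`` is the segment length in the same units as ``tbin``; this is a sketch, not the exact stingray implementation:

```python
import numpy as np

def optimal_bin_time_sketch(fftlen, tbin):
    # Number of bins implied by the proposed bin time
    nbin = fftlen / tbin
    # Round the number of bins *up* to the next power of two,
    # so the resulting bin time is never longer than requested
    nbin_pow2 = 2 ** int(np.ceil(np.log2(nbin)))
    return fftlen / nbin_pow2
```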
 stingray.utils.order_list_of_arrays(data, order)[source]¶
Sort an array according to the specified order.
 Parameters
 dataiterable
 orderiterable
 Returns
 datalist or dict
 stingray.utils.poisson_symmetrical_errors(counts)[source]¶
Optimized version of frequentist symmetrical errors.
Uses a lookup table in order to limit the calls to poisson_conf_interval
 Parameters
 countsiterable
An array of Poissondistributed numbers
 Returns
 errnumpy.ndarray
An array of uncertainties associated with the Poisson counts in
counts
Examples
>>> from astropy.stats import poisson_conf_interval
>>> counts = np.random.randint(0, 1000, 100)
>>> # ---- Do it without the lookup table ----
>>> err_low, err_high = poisson_conf_interval(np.asarray(counts),
...     interval='frequentist-confidence', sigma=1)
>>> err_low -= np.asarray(counts)
>>> err_high -= np.asarray(counts)
>>> err = (np.absolute(err_low) + np.absolute(err_high)) / 2.0
>>> # Do it with this function
>>> err_thisfun = poisson_symmetrical_errors(counts)
>>> # Test that results are always the same
>>> assert np.allclose(err_thisfun, err)
 stingray.utils.rebin_data(x, y, dx_new, yerr=None, method='sum', dx=None)[source]¶
Rebin some data to an arbitrary new data resolution. Either sum the data points in the new bins or average them.
 Parameters
 x: iterable
The independent variable with some resolution, which can vary throughout the time series.
 y: iterable
The dependent variable to be binned
 dx_new: float
The new resolution of the independent variable ``x``
 Returns
 xbin: numpy.ndarray
The midpoints of the new bins in ``x``
 ybin: numpy.ndarray
The binned quantity ``y``
 ybin_err: numpy.ndarray
The uncertainties of the binned values of ``y``
 step_size: float
The size of the binning step
 Other Parameters
 yerr: iterable, optional
The uncertainties of ``y``, to be propagated during binning
 method: {``sum``  ``average``  ``mean``}, optional, default ``sum``
The method to be used in binning. Either sum the samples ``y`` in each new bin of ``x``, or take the arithmetic mean
 dx: float
The old resolution (otherwise, calculated from the difference between time bins)
Examples
>>> x = np.arange(0, 100, 0.01)
>>> y = np.ones(x.size)
>>> yerr = np.ones(x.size)
>>> xbin, ybin, ybinerr, step_size = rebin_data(
...     x, y, 4, yerr=yerr, method='sum', dx=0.01)
>>> np.allclose(ybin, 400)
True
>>> np.allclose(ybinerr, 20)
True
>>> xbin, ybin, ybinerr, step_size = rebin_data(
...     x, y, 4, yerr=yerr, method='mean')
>>> np.allclose(ybin, 1)
True
>>> np.allclose(ybinerr, 0.05)
True
 stingray.utils.rebin_data_log(x, y, f, y_err=None, dx=None)[source]¶
Logarithmic rebin of some data. Particularly useful for the power spectrum.
Each new bin width depends on the previous bin width, modified by a factor f:
\[d\nu_j = d\nu_{j-1} (1+f)\] Parameters
 x: iterable
The independent variable with some resolution ``dx_old = x[1] - x[0]``
 y: iterable
The dependent variable to be binned
 f: float
The factor of increase of each bin wrt the previous one.
 Returns
 xbin: numpy.ndarray
The midpoints of the new bins in ``x``
 ybin: numpy.ndarray
The binned quantity ``y``
 ybin_err: numpy.ndarray
The uncertainties of the binned values of ``y``
 step_size: float
The size of the binning step
 Other Parameters
 yerr: iterable, optional
The uncertainties of ``y`` to be propagated during binning
 method: {``sum``  ``average``  ``mean``}, optional, default ``sum``
The method to be used in binning. Either sum the samples ``y`` in each new bin of ``x`` or take the arithmetic mean
 dx: float, optional
The binning step of the initial ``x``
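The geometric growth of the bin width can be sketched by building the new bin edges directly. A sketch only; the actual function also propagates ``y`` and its uncertainties into the new bins:

```python
import numpy as np

def log_rebin_edges_sketch(xmin, xmax, dx0, f):
    """Bin edges where each bin is (1 + f) times wider than the last."""
    edges = [xmin]
    dx = dx0
    while edges[-1] + dx < xmax:
        edges.append(edges[-1] + dx)
        dx *= 1.0 + f   # d nu_j = d nu_{j-1} (1 + f)
    edges.append(xmax)  # close the last (possibly irregular) bin
    return np.array(edges)
```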
Modeling¶
This subpackage defines classes and functions related to parametric modelling of various types of data sets. Currently, most functionality is focused on modelling Fourier products (especially power spectra and averaged power spectra), but rudimentary functionality exists for modelling e.g. light curves.
LogLikelihood Classes¶
These classes define basic loglikelihoods for modelling time series and power spectra.
stingray.modeling.LogLikelihood
is an abstract base class, i.e. a template for creating
user-defined loglikelihoods and should not be instantiated itself. Based on this base class
are several definitions for a stingray.modeling.GaussianLogLikelihood
, appropriate for
data with normally distributed uncertainties, a stingray.modeling.PoissonLogLikelihood
appropriate for photon counting data, and a stingray.modeling.PSDLogLikelihood
appropriate for (averaged) power spectra.
 class stingray.modeling.LogLikelihood(x, y, model, **kwargs)[source]¶
Abstract Base Class defining the structure of a
LogLikelihood
object. This class cannot be called itself, since each statistical distribution has its own definition for the likelihood, which should occur in subclasses. Parameters
 xiterable
xcoordinate of the data. Could be multidimensional.
 yiterable
ycoordinate of the data. Could be multidimensional.
 modelan ``astropy.modeling.FittableModel`` instance
Your model
 kwargs
keyword arguments specific to the individual subclasses. For details, see the respective docstrings for each subclass
 class stingray.modeling.GaussianLogLikelihood(x, y, yerr, model)[source]¶
Likelihood for data with Gaussian uncertainties. Astronomers also call this likelihood ChiSquared, but be aware that this has nothing to do with the likelihood based on the Chisquare distribution, which is defined in ``PSDLogLikelihood`` in this module! Use this class whenever your data has Gaussian uncertainties.
 Parameters
 xiterable
xcoordinate of the data
 yiterable
ycoordinate of the data
 yerriterable
the uncertainty on the data, as standard deviation
 modelan
astropy.modeling.FittableModel
instance The model to use in the likelihood.
 Attributes
 xiterable
xcoordinate of the data
 yiterable
ycoordinate of the data
 yerriterable
the uncertainty on the data, as standard deviation
 modelan Astropy Model instance
The model to use in the likelihood.
 nparint
The number of free parameters in the model
 evaluate(pars, neg=False)[source]¶
Evaluate the Gaussian loglikelihood for a given set of parameters.
 Parameters
 parsnumpy.ndarray
An array of parameters at which to evaluate the model and subsequently the loglikelihood. Note that the length of this array must match the free parameters in
model
, i.e.npar
 negbool, optional, default ``False``
If ``True``, return the negative loglikelihood, i.e. ``-loglike``, rather than ``loglike``. This is useful e.g. for optimization routines, which generally minimize functions.
 Returns
 loglikefloat
The log(likelihood) value for the data and model.
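The Gaussian loglikelihood being evaluated has the standard form. A self-contained sketch with precomputed model values, rather than the class's actual ``evaluate``, which evaluates an astropy model at the given parameters:

```python
import numpy as np

def gaussian_loglike_sketch(y, mean, yerr, neg=False):
    """Gaussian log-likelihood of data y given model means and errors."""
    loglike = -0.5 * np.sum(
        np.log(2.0 * np.pi * yerr ** 2) + ((y - mean) / yerr) ** 2
    )
    # neg=True mirrors the convention used by the evaluate methods
    return -loglike if neg else loglike
```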
 class stingray.modeling.PoissonLogLikelihood(x, y, model)[source]¶
Likelihood for data with uncertainties following a Poisson distribution. This is useful e.g. for (binned) photon count data.
 Parameters
 xiterable
xcoordinate of the data
 yiterable
ycoordinate of the data
 modelan
astropy.modeling.FittableModel
instance The model to use in the likelihood.
 Attributes
 xiterable
xcoordinate of the data
 yiterable
ycoordinate of the data
 yerriterable
the uncertainty on the data, as standard deviation
 modelan
astropy.modeling.FittableModel
instance The model to use in the likelihood.
 nparint
The number of free parameters in the model
 evaluate(pars, neg=False)[source]¶
Evaluate the loglikelihood for a given set of parameters.
 Parameters
 parsnumpy.ndarray
An array of parameters at which to evaluate the model and subsequently the loglikelihood. Note that the length of this array must match the free parameters in
model
, i.e.npar
 negbool, optional, default ``False``
If ``True``, return the negative loglikelihood, i.e. ``-loglike``, rather than ``loglike``. This is useful e.g. for optimization routines, which generally minimize functions.
 Returns
 loglikefloat
The log(likelihood) value for the data and model.
 class stingray.modeling.PSDLogLikelihood(freq, power, model, m=1)[source]¶
A likelihood based on the Chisquare distribution, appropriate for modelling (averaged) power spectra. Note that this is not the same as the statistic astronomers commonly call ChiSquare, which is a fit statistic derived from the Gaussian loglikelihood, defined elsewhere in this module.
 Parameters
 freqiterable
Array with frequencies
 poweriterable
Array with (averaged/singular) powers corresponding to the frequencies in
freq
 modelan
astropy.modeling.FittableModel
instance The model to use in the likelihood.
 mint
1/2 of the degrees of freedom
 Attributes
 xiterable
xcoordinate of the data
 yiterable
ycoordinate of the data
 yerriterable
the uncertainty on the data, as standard deviation
 modelan
astropy.modeling.FittableModel
instance The model to use in the likelihood.
 nparint
The number of free parameters in the model
 evaluate(pars, neg=False)[source]¶
Evaluate the loglikelihood for a given set of parameters.
 Parameters
 parsnumpy.ndarray
An array of parameters at which to evaluate the model and subsequently the loglikelihood. Note that the length of this array must match the free parameters in
model
, i.e.npar
 negbool, optional, default ``False``
If ``True``, return the negative loglikelihood, i.e. ``-loglike``, rather than ``loglike``. This is useful e.g. for optimization routines, which generally minimize functions.
 Returns
 loglikefloat
The log(likelihood) value for the data and model.
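For a single (non-averaged, m = 1) periodogram, where the powers scatter exponentially about the model, the loglikelihood takes the Whittle form. A sketch under that assumption, with the model powers precomputed rather than evaluated from an astropy model:

```python
import numpy as np

def psd_loglike_sketch(power, model_power, neg=False):
    """Whittle log-likelihood for a single (m = 1) periodogram."""
    loglike = -np.sum(np.log(model_power) + power / model_power)
    return -loglike if neg else loglike
```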
 class stingray.modeling.LaplaceLogLikelihood(x, y, yerr, model)[source]¶
A Laplace likelihood for the cospectrum.
 Parameters
 xiterable
Array with independent variable
 yiterable
Array with dependent variable
 modelan
astropy.modeling.FittableModel
instance The model to use in the likelihood.
 yerriterable
Array with the uncertainties on
y
, in standard deviation
 Attributes
 xiterable
xcoordinate of the data
 yiterable
ycoordinate of the data
 yerriterable
the uncertainty on the data, as standard deviation
 modelan
astropy.modeling.FittableModel
instance The model to use in the likelihood.
 nparint
The number of free parameters in the model
 evaluate(pars, neg=False)[source]¶
Evaluate the loglikelihood for a given set of parameters.
 Parameters
 parsnumpy.ndarray
An array of parameters at which to evaluate the model and subsequently the loglikelihood. Note that the length of this array must match the free parameters in
model
, i.e.npar
 negbool, optional, default ``False``
If ``True``, return the negative loglikelihood, i.e. ``-loglike``, rather than ``loglike``. This is useful e.g. for optimization routines, which generally minimize functions.
 Returns
 loglikefloat
The log(likelihood) value for the data and model.
Posterior Classes¶
These classes define basic posteriors for parametric modelling of time series and power spectra, based on
the loglikelihood classes defined in LogLikelihood Classes. stingray.modeling.Posterior
is an
abstract base class laying out a basic template for defining posteriors. As with the loglikelihood classes
above, several posterior classes are defined for a variety of data types.
Note that priors are not predefined in these classes, since they are problem dependent and should be
set by the user. The convenience function stingray.modeling.set_logprior()
can be useful to help set
priors for these posterior classes.
 class stingray.modeling.Posterior(x, y, model, **kwargs)[source]¶
Define a
Posterior
object.The
Posterior
describes the Bayesian probability distribution of a set of parameters \(\theta\) given some observed data \(D\) and some prior assumptions \(I\).It is defined as
\[p(\theta | D, I) = p(D | \theta, I) p(\theta | I)/p(D | I)\]where \(p(D | \theta, I)\) describes the likelihood, i.e. the sampling distribution of the data and the (parametric) model, and \(p(\theta | I)\) describes the prior distribution, i.e. our information about the parameters \(\theta\) before we gathered the data. The marginal likelihood \(p(D | I)\) describes the probability of observing the data given the model assumptions, integrated over the space of all parameters.
 Parameters
 xiterable
The abscissa or independent variable of the data. This could in principle be a multidimensional array.
 yiterable
The ordinate or dependent variable of the data.
 model
astropy.modeling.models
instance The parametric model supposed to represent the data. For details see the
astropy.modeling
documentation kwargs
keyword arguments related to the subclasses of
Posterior
. For details, see the documentation of the individual subclasses
References
Sivia, D. S., and J. Skilling. “Data Analysis: A Bayesian Tutorial.” 2006.
Gelman, Andrew, et al. Bayesian data analysis. Vol. 2. Boca Raton, FL, USA: Chapman & Hall/CRC, 2014.
von Toussaint, Udo. “Bayesian inference in physics.” Reviews of Modern Physics 83.3 (2011): 943.
Hogg, David W. “Probability Calculus for inference”. arxiv: 1205.4446
 logposterior(t0, neg=False)[source]¶
Definition of the logposterior. Requires methods
loglikelihood
andlogprior
to both be defined.Note that
loglikelihood
is set in the subclass ofPosterior
appropriate for your problem at hand, as islogprior
. Parameters
 t0numpy.ndarray
An array of parameters at which to evaluate the model and subsequently the logposterior. Note that the length of this array must match the free parameters in
model
, i.e.npar
 negbool, optional, default ``False``
If ``True``, return the negative logposterior, i.e. ``-lpost``, rather than ``lpost``. This is useful e.g. for optimization routines, which generally minimize functions.
 Returns
 lpostfloat
The value of the logposterior for the given parameters
t0
 class stingray.modeling.GaussianPosterior(x, y, yerr, model, priors=None)[source]¶
A general class for two-dimensional data following a Gaussian sampling distribution.
 Parameters
 xnumpy.ndarray
independent variable
 ynumpy.ndarray
dependent variable
 yerrnumpy.ndarray
measurement uncertainties for y
 modelinstance of any subclass of
astropy.modeling.FittableModel
The model for the power spectrum.
 priorsdict of form
{"parameter name": function}
, optional A dictionary with the definitions for the prior probabilities. For each parameter in
model
, there must be a prior defined with a key of the exact same name as stored inmodel.param_names
. The item for each key is a function definition defining the prior (e.g. a lambda function or a ``scipy.stats.distribution.pdf``). If ``priors = None``
, then no prior is set. This means priors need to be added by hand using theset_logprior()
function defined in this module. Note that it is impossible to call aPosterior
object itself or theself.logposterior
method without defining a prior.
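The ``priors`` dictionary maps each parameter name to a function returning that parameter's prior probability. A hypothetical sketch of how such priors combine with a likelihood into a logposterior; the parameter names here are invented for illustration, and real code would use ``set_logprior()``:

```python
import numpy as np

# Hypothetical priors: uniform amplitude in [0, 10], uniform slope in [-5, 5]
priors = {
    "amplitude": lambda a: 1.0 / 10.0 if 0.0 <= a <= 10.0 else 0.0,
    "slope": lambda s: 1.0 / 10.0 if -5.0 <= s <= 5.0 else 0.0,
}

def logprior(pars):
    # Multiply the independent priors, then take the log
    p = priors["amplitude"](pars[0]) * priors["slope"](pars[1])
    return np.log(p) if p > 0 else -np.inf

def logposterior(pars, loglikelihood):
    # log p(theta | D) = log p(D | theta) + log p(theta) + const
    return loglikelihood(pars) + logprior(pars)
```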
 logposterior(t0, neg=False)¶
Definition of the logposterior. Requires methods
loglikelihood
andlogprior
to both be defined.Note that
loglikelihood
is set in the subclass ofPosterior
appropriate for your problem at hand, as islogprior
. Parameters
 t0numpy.ndarray
An array of parameters at which to evaluate the model and subsequently the logposterior. Note that the length of this array must match the free parameters in
model
, i.e.npar
 negbool, optional, default ``False``
If ``True``, return the negative logposterior, i.e. ``-lpost``, rather than ``lpost``. This is useful e.g. for optimization routines, which generally minimize functions.
 Returns
 lpostfloat
The value of the logposterior for the given parameters
t0
 class stingray.modeling.PoissonPosterior(x, y, model, priors=None)[source]¶
Posterior
for Poisson light curve data. Primary intended use is for modelling Xray light curves, but alternative uses are conceivable. Parameters
 xnumpy.ndarray
The independent variable (e.g. time stamps of a light curve)
 ynumpy.ndarray
The dependent variable (e.g. counts per bin of a light curve)
 modelinstance of any subclass of
astropy.modeling.FittableModel
The model for the power spectrum.
 priorsdict of form
{"parameter name": function}
, optional A dictionary with the definitions for the prior probabilities. For each parameter in
model
, there must be a prior defined with a key of the exact same name as stored inmodel.param_names
. The item for each key is a function definition defining the prior (e.g. a lambda function or a ``scipy.stats.distribution.pdf``). If ``priors = None``
, then no prior is set. This means priors need to be added by hand using theset_logprior()
function defined in this module. Note that it is impossible to call aPosterior
object itself or theself.logposterior
method without defining a prior.
 Attributes
 xnumpy.ndarray
The independent variable (list of frequencies) stored in ps.freq
 ynumpy.ndarray
The dependent variable (list of powers) stored in ps.power
 modelinstance of any subclass of
astropy.modeling.FittableModel
The model for the power spectrum.
 logposterior(t0, neg=False)¶
Definition of the logposterior. Requires methods
loglikelihood
andlogprior
to both be defined.Note that
loglikelihood
is set in the subclass ofPosterior
appropriate for your problem at hand, as islogprior
. Parameters
 t0numpy.ndarray
An array of parameters at which to evaluate the model and subsequently the logposterior. Note that the length of this array must match the free parameters in
model
, i.e.npar
 negbool, optional, default ``False``
If ``True``, return the negative logposterior, i.e. ``-lpost``, rather than ``lpost``. This is useful e.g. for optimization routines, which generally minimize functions.
 Returns
 lpostfloat
The value of the logposterior for the given parameters
t0
 class stingray.modeling.PSDPosterior(freq, power, model, priors=None, m=1)[source]¶
Posterior
distribution for power spectra. Uses an exponential distribution for the errors in the likelihood, or a \(\chi^2\) distribution with \(2M\) degrees of freedom, where \(M\) is the number of frequency bins or power spectra averaged in each bin. Parameters
 ps{``stingray.Powerspectrum``  ``stingray.AveragedPowerspectrum``} instance
The ``stingray.Powerspectrum`` object containing the data
 modelinstance of any subclass of
astropy.modeling.FittableModel
The model for the power spectrum.
 priorsdict of form
{"parameter name": function}
, optional A dictionary with the definitions for the prior probabilities. For each parameter in
model
, there must be a prior defined with a key of the exact same name as stored inmodel.param_names
. The item for each key is a function definition defining the prior (e.g. a lambda function or a ``scipy.stats.distribution.pdf``). If ``priors = None``, then no prior is set. This means priors need to be added by hand using the ``set_logprior()`` function defined in this module. Note that it is impossible to call a ``Posterior`` object itself or the ``self.logposterior`` method without defining a prior.
 mint, default ``1``
The number of averaged periodograms or frequency bins in ``ps``. Useful for binned/averaged periodograms, since the value of m will change the likelihood function!
 Attributes
 ps{``stingray.Powerspectrum``  ``stingray.AveragedPowerspectrum``} instance
The ``stingray.Powerspectrum`` object containing the data
 xnumpy.ndarray
The independent variable (list of frequencies) stored in ``ps.freq``
 ynumpy.ndarray
The dependent variable (list of powers) stored in
ps.power
 modelinstance of any subclass of
astropy.modeling.FittableModel
The model for the power spectrum.
 logposterior(t0, neg=False)¶
Definition of the logposterior. Requires methods
loglikelihood
andlogprior
to both be defined.Note that
loglikelihood
is set in the subclass ofPosterior
appropriate for your problem at hand, as islogprior
. Parameters
 t0numpy.ndarray
An array of parameters at which to evaluate the model and subsequently the logposterior. Note that the length of this array must match the free parameters in
model
, i.e.npar
 negbool, optional, default ``False``
If ``True``, return the negative logposterior, i.e. ``-lpost``, rather than ``lpost``. This is useful e.g. for optimization routines, which generally minimize functions.
 Returns
 lpostfloat
The value of the logposterior for the given parameters
t0
 class stingray.modeling.LaplacePosterior(x, y, yerr, model, priors=None)[source]¶
A general class for two-dimensional data following a Laplace sampling distribution.
 Parameters
 xnumpy.ndarray
independent variable
 ynumpy.ndarray
dependent variable
 yerrnumpy.ndarray
measurement uncertainties for y, in standard deviation
 modelinstance of any subclass of
astropy.modeling.FittableModel
The model for the power spectrum.
 priorsdict of form
{"parameter name": function}
, optional A dictionary with the definitions for the prior probabilities. For each parameter in
model
, there must be a prior defined with a key of the exact same name as stored inmodel.param_names
. The item for each key is a function definition defining the prior (e.g. a lambda function or a ``scipy.stats.distribution.pdf``). If ``priors = None``
, then no prior is set. This means priors need to be added by hand using theset_logprior()
function defined in this module. Note that it is impossible to call aPosterior
object itself or theself.logposterior
method without defining a prior.
 logposterior(t0, neg=False)¶
Definition of the logposterior. Requires methods
loglikelihood
andlogprior
to both be defined.Note that
loglikelihood
is set in the subclass ofPosterior
appropriate for your problem at hand, as islogprior
. Parameters
 t0numpy.ndarray
An array of parameters at which to evaluate the model and subsequently the logposterior. Note that the length of this array must match the free parameters in
model
, i.e.npar
 negbool, optional, default ``False``
If ``True``, return the negative logposterior, i.e. ``-lpost``, rather than ``lpost``. This is useful e.g. for optimization routines, which generally minimize functions.
 Returns
 lpostfloat
The value of the logposterior for the given parameters
t0
Parameter Estimation Classes¶
These classes implement functionality related to parameter estimation. They define basic ``fit`` and ``sample`` methods using ``scipy.optimize`` and ``emcee``, respectively, for optimization and Markov Chain Monte Carlo sampling. ``stingray.modeling.PSDParEst`` implements some more advanced functionality for modelling power spectra, including both frequentist and Bayesian searches for (quasi-)periodic signals.
 class stingray.modeling.ParameterEstimation(fitmethod='BFGS', max_post=True)[source]¶
Parameter estimation of two-dimensional data, either via optimization or MCMC. Note: optimization with bounds is not supported. If something like this is required, define (uniform) priors in the ParametricModel instances to be used below.
 Parameters
 fitmethodstring, optional, default ``BFGS``
Any of the strings allowed in ``scipy.optimize.minimize`` in the method keyword. Sets the fit method to be used.
 max_postbool, optional, default ``True``
If ``True``, then compute the Maximum-A-Posteriori estimate. If ``False``, compute a Maximum Likelihood estimate.
 calibrate_lrt(lpost1, t1, lpost2, t2, sample=None, neg=True, max_post=False, nsim=1000, niter=200, nwalkers=500, burnin=200, namestr='test', seed=None)[source]¶
Calibrate the outcome of a Likelihood Ratio Test via MCMC.
In order to compare models via likelihood ratio test, one generally aims to compute a pvalue for the null hypothesis (generally the simpler model). There are two special cases where the theoretical distribution used to compute that pvalue analytically given the observed likelihood ratio (a chisquare distribution) is not applicable:
the models are not nested (i.e. Model 1 is not a special, simpler case of Model 2),
the parameter values fixed in Model 2 to retrieve Model 1 are at the edges of parameter space (e.g. if one must set, say, an amplitude to zero in order to remove a component in the more complex model, and negative amplitudes are excluded a priori)
In these cases, the observed likelihood ratio must be calibrated via simulations of the simpler model (Model 1), using MCMC to take into account the uncertainty in the parameters. This function does exactly that: it computes the likelihood ratio for the observed data, and produces simulations to calibrate the likelihood ratio and compute a pvalue for observing the data under the assumption that Model 1 is true.
If ``max_post=True``, the code will use MCMC to sample the posterior of the parameters and simulate fake data from there. If ``max_post=False``, the code will use the covariance matrix derived from the fit to simulate data sets for comparison. Parameters
 lpost1object of a subclass of
Posterior
The
Posterior
object for model 1 t1iterable
The starting parameters for model 1
 lpost2object of a subclass of
Posterior
The
Posterior
object for model 2 t2iterable
The starting parameters for model 2
 negbool, optional, default
True
Boolean flag to decide whether to use the negative loglikelihood or logposterior
 max_post: bool, optional, default ``False``
If
True
, set the internal state to do the optimization with the loglikelihood rather than the logposterior.
 Returns
 pvaluefloat [0,1]
The pvalue for the observed likelihood ratio under the null hypothesis (Model 1)
 compute_lrt(lpost1, t1, lpost2, t2, neg=True, max_post=False)[source]¶
This function computes the Likelihood Ratio Test between two nested models.
 Parameters
 lpost1object of a subclass of
Posterior
The
Posterior
object for model 1 t1iterable
The starting parameters for model 1
 lpost2object of a subclass of
Posterior
The
Posterior
object for model 2 t2iterable
The starting parameters for model 2
 negbool, optional, default
True
Boolean flag to decide whether to use the negative loglikelihood or logposterior
 max_post: bool, optional, default ``False``
If
True
, set the internal state to do the optimization with the loglikelihood rather than the logposterior.
 Returns
 lrtfloat
The likelihood ratio for model 2 and model 1
 res1OptimizationResults object
Contains the result of fitting
lpost1
 res2OptimizationResults object
Contains the results of fitting
lpost2
 fit(lpost, t0, neg=True, scipy_optimize_options=None)[source]¶
Do either a Maximum-A-Posteriori (MAP) or Maximum Likelihood (ML) fit to the data.
MAP fits include priors, ML fits do not.
 Parameters
 lpost``Posterior`` (or subclass) instance
An instance of class ``Posterior`` or one of its subclasses that defines the function to be minimized (either in ``loglikelihood`` or ``logposterior``)
 t0{``list``  ``numpy.ndarray``}
List/array with the set of initial parameters
 negbool, optional, default ``True``
Boolean to be passed to ``lpost``, setting whether to use the negative posterior or the negative loglikelihood. Useful for optimization routines, which are generally defined as minimization routines.
 scipy_optimize_optionsdict, optional, default ``None``
A dictionary with options for ``scipy.optimize.minimize``, directly passed on as keyword arguments.
 Returns
 res``OptimizationResults`` object
An object containing useful summaries of the fitting procedure. For details, see the documentation of ``OptimizationResults``.
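The ``neg=True`` convention exists because optimizers minimize. A minimal sketch of the pattern with an invented toy likelihood and a grid search standing in for ``scipy.optimize.minimize``:

```python
import numpy as np

def loglike(par):
    """Toy log-likelihood, maximized at par = 2."""
    return -((par - 2.0) ** 2)

def neg_loglike(par):
    # What fit() effectively hands to the optimizer when neg=True
    return -loglike(par)

# Minimizing -loglike recovers the maximum-likelihood parameter
grid = np.linspace(-5, 5, 1001)
best = grid[np.argmin([neg_loglike(p) for p in grid])]
```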
 sample(lpost, t0, cov=None, nwalkers=500, niter=100, burnin=100, threads=1, print_results=True, plot=False, namestr='test', pool=False)[source]¶
Sample the
Posterior
distribution defined inlpost
using MCMC. Here we use theemcee
package, but other implementations could in principle be used. Parameters
 lpostinstance of a ``Posterior`` subclass
An instance of class ``Posterior`` or one of its subclasses that defines the function to be minimized (either in ``loglikelihood`` or ``logposterior``)
 t0iterable
list or array containing the starting parameters. Its length must match
lpost.model.npar
. nwalkersint, optional, default 500
The number of walkers (chains) to use during the MCMC procedure. The more walkers are used, the slower the estimation will be, but the better the final distribution is likely to be.
 niterint, optional, default 100
The number of iterations to run the MCMC chains for. The larger this number, the longer the estimation will take, but the higher the chance that the walkers have actually converged on the true posterior distribution.
 burninint, optional, default 100
The number of iterations to run the walkers before convergence is assumed to have occurred. This part of the chain will be discarded before sampling from what is then assumed to be the posterior distribution desired.
 threadsDEPRECATED int, optional, default 1
The number of threads for parallelization. Default is 1, i.e. no parallelization. With the change to the new emcee version 3, ``threads`` is deprecated; use the ``pool`` keyword argument instead. This parameter no longer has any effect.
 print_resultsbool, optional, default ``True``
Boolean flag setting whether the results of the MCMC run should be printed to standard output. Default: True
 plotbool, optional, default
False
Boolean flag setting whether summary plots of the MCMC chains should be produced. Default: False
 namestrstr, optional, default
test
Optional string for output file names for the plotting.
 poolbool, default False
If True, use pooling to parallelize the operation.
 Returns
 res``SamplingResults`` object
An object of class ``SamplingResults`` summarizing the results of the MCMC run.
 class stingray.modeling.PSDParEst(ps, fitmethod='BFGS', max_post=True)[source]¶
Parameter estimation for parametric modelling of power spectra.
This class contains functionality that allows parameter estimation and related tasks that involve fitting a parametric model to an (averaged) power spectrum.
 Parameters
 ps``stingray.Powerspectrum`` or ``stingray.AveragedPowerspectrum`` object
The power spectrum to be modelled
 fitmethodstr, optional, default ``BFGS``
A string allowed by ``scipy.optimize.minimize`` as a valid fitting method
 max_postbool, optional, default ``True``
If ``True``, do a Maximum-A-Posteriori (MAP) fit, i.e. fit with priors, otherwise do a Maximum Likelihood fit instead
 calibrate_highest_outlier(lpost, t0, sample=None, max_post=False, nsim=1000, niter=200, nwalkers=500, burnin=200, namestr='test', seed=None)[source]¶
Calibrate the highest outlier in a data set using MCMCsimulated power spectra.
In short, the procedure does a MAP fit to the data, computes the statistic
\[\max{(T_R = 2(\mathrm{data}/\mathrm{model}))}\]and then does an MCMC run using the data and the model, or generates parameter samples from the likelihood distribution using the derived covariance in a Maximum Likelihood fit. From the (posterior) samples, it generates fake power spectra. Each fake spectrum is fit in the same way as the data, and the highest data/model outlier extracted as for the data. The observed value of \(T_R\) can then be directly compared to the simulated distribution of \(T_R\) values in order to derive a pvalue of the null hypothesis that the observed \(T_R\) is compatible with being generated by noise.
 Parameters
 lpost``stingray.modeling.PSDPosterior`` object
An instance of class ``stingray.modeling.PSDPosterior`` that defines the function to be minimized (either in ``loglikelihood`` or ``logposterior``)
 t0{list  numpy.ndarray}
List/array with set of initial parameters
 sample
SamplingResults
instance, optional, defaultNone
If a sampler has already been run, the
SamplingResults
instance can be fed into this method here, otherwise this method will run a sampler automatically max_post: bool, optional, default ``False``
If
True
, do MAP fits on the power spectrum to find the highest data/model outlier Otherwise, do a Maximum Likelihood fit. IfTrue
, the simulated power spectra will be generated from an MCMC run, otherwise the method will employ the approximated covariance matrix for the parameters derived from the likelihood surface to generate samples from that likelihood function. nsimint, optional, default
1000
Number of fake power spectra to simulate from the posterior sample. Note that this number sets the resolution of the resulting pvalue. For
nsim=1000
, the highest resolution that can be achieved is \(10^{3}\). niterint, optional, default 200
If
sample
isNone
, this variable will be used to set the number of steps in the MCMC procedure after burnin. nwalkersint, optional, default 500
If
sample
isNone
, this variable will be used to set the number of MCMC chains run in parallel in the sampler. burninint, optional, default 200
If
sample
isNone
, this variable will be used to set the number of burnin steps to be discarded in the initial phase of the MCMC run namestrstr, optional, default
test
A string to be used for storing MCMC output and plots to disk
 seedint, optional, default
None
An optional number to seed the random number generator with, for reproducibility of the results obtained with this method.
 lpost
 Returns
 pval : float
The p-value that the highest data/model outlier is produced by random noise, calibrated using simulated power spectra from an MCMC run.
References
For more details on the procedure employed here, see
Vaughan, 2010: https://arxiv.org/abs/0910.2706
Huppenkothen et al., 2013: https://arxiv.org/abs/1212.1011
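The simulation loop described above can be sketched with plain numpy. Everything here is hypothetical (a flat model and chi-square distributed periodogram powers stand in for a real fitted power spectrum, and the helper names are invented for this sketch); the actual method refits every fake spectrum with the full model before extracting the outlier:

```python
import numpy as np

rng = np.random.default_rng(42)

nfreq = 500                     # number of frequencies in a hypothetical periodogram
model = np.ones(nfreq) * 2.0    # assumed best-fit model: a flat noise level

def fake_periodogram():
    # Leahy-like powers scatter around the model as (model/2) * chi^2 with 2 dof
    return model / 2.0 * rng.chisquare(2, size=nfreq)

def highest_outlier(data, model):
    # T_R = max(2 * data / model), the statistic being calibrated
    return np.max(2.0 * data / model)

# "observed" spectrum: pure noise here, so the p-value should not be small
obs = fake_periodogram()
t_obs = highest_outlier(obs, model)

# simulate the null distribution of T_R; nsim sets the p-value resolution (1/nsim)
nsim = 1000
t_sim = np.array([highest_outlier(fake_periodogram(), model) for _ in range(nsim)])

pval = np.mean(t_sim >= t_obs)
print(pval)
```

The p-value is simply the fraction of simulated \(T_R\) values that exceed the observed one, which is why nsim bounds the achievable resolution.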
 calibrate_lrt(lpost1, t1, lpost2, t2, sample=None, neg=True, max_post=False, nsim=1000, niter=200, nwalkers=500, burnin=200, namestr='test', seed=None)¶
Calibrate the outcome of a Likelihood Ratio Test via MCMC.
In order to compare models via a likelihood ratio test, one generally aims to compute a p-value for the null hypothesis (generally the simpler model). There are two special cases where the theoretical distribution used to compute that p-value analytically from the observed likelihood ratio (a chi-square distribution) is not applicable:
the models are not nested (i.e. Model 1 is not a special, simpler case of Model 2),
the parameter values fixed in Model 2 to retrieve Model 1 are at the edges of parameter space (e.g. if one must set, say, an amplitude to zero in order to remove a component in the more complex model, and negative amplitudes are excluded a priori).
In these cases, the observed likelihood ratio must be calibrated via simulations of the simpler model (Model 1), using MCMC to take into account the uncertainty in the parameters. This function does exactly that: it computes the likelihood ratio for the observed data, and produces simulations to calibrate the likelihood ratio and compute a p-value for observing the data under the assumption that Model 1 is true.
If max_post=True, the code will use MCMC to sample the posterior of the parameters and simulate fake data from there. If max_post=False, the code will use the covariance matrix derived from the fit to simulate data sets for comparison.
 Parameters
 lpost1 : object of a subclass of Posterior
The Posterior object for model 1.
 t1 : iterable
The starting parameters for model 1.
 lpost2 : object of a subclass of Posterior
The Posterior object for model 2.
 t2 : iterable
The starting parameters for model 2.
 neg : bool, optional, default True
Boolean flag to decide whether to use the negative loglikelihood or logposterior.
 max_post : bool, optional, default False
If True, set the internal state to do the optimization with the logposterior rather than the loglikelihood.
 Returns
 pvalue : float in [0, 1]
The p-value for the null hypothesis, i.e. the probability of observing a likelihood ratio at least as extreme as the observed one under the assumption that Model 1 is true.
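Stripped of the MCMC machinery, the calibration idea reduces to a parametric bootstrap, sketched here on hypothetical Gaussian data (the names loglike and lrt are invented for this sketch; the real method fits power-spectral models and propagates parameter uncertainty through MCMC samples):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def loglike(data, mu):
    # log-likelihood of a unit-variance Gaussian with mean mu
    return np.sum(stats.norm.logpdf(data, loc=mu, scale=1.0))

def lrt(data):
    # model 1 (null): mu = 0 fixed; model 2: mu free (ML estimate = sample mean)
    return 2.0 * (loglike(data, np.mean(data)) - loglike(data, 0.0))

# "observed" data, generated under the null model
obs = rng.normal(0.0, 1.0, size=100)
lrt_obs = lrt(obs)

# calibrate: simulate data sets from the null model and recompute the LRT each time
lrt_sim = np.array([lrt(rng.normal(0.0, 1.0, size=100)) for _ in range(1000)])
pval = np.mean(lrt_sim >= lrt_obs)
print(pval)
```

In the non-nested or edge-of-parameter-space cases described above, this simulated distribution replaces the chi-square reference distribution that would otherwise be used.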
 compute_lrt(lpost1, t1, lpost2, t2, neg=True, max_post=False)¶
This function computes the Likelihood Ratio Test between two nested models.
 Parameters
 lpost1 : object of a subclass of Posterior
The Posterior object for model 1.
 t1 : iterable
The starting parameters for model 1.
 lpost2 : object of a subclass of Posterior
The Posterior object for model 2.
 t2 : iterable
The starting parameters for model 2.
 neg : bool, optional, default True
Boolean flag to decide whether to use the negative loglikelihood or logposterior.
 max_post : bool, optional, default False
If True, set the internal state to do the optimization with the logposterior rather than the loglikelihood.
 Returns
 lrt : float
The likelihood ratio for model 2 and model 1.
 res1 : OptimizationResults object
Contains the result of fitting lpost1.
 res2 : OptimizationResults object
Contains the result of fitting lpost2.
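For two nested models, the returned likelihood ratio is just the difference in deviances of the two fits. A self-contained scipy illustration (the models and data here are made up for the sketch):

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.5, size=200)

def negloglike1(p):
    # model 1 (null): free mean, scale fixed to 1
    return -np.sum(stats.norm.logpdf(data, loc=p[0], scale=1.0))

def negloglike2(p):
    # model 2: free mean and free scale (parameterized as exp(p[1]) to stay positive)
    return -np.sum(stats.norm.logpdf(data, loc=p[0], scale=np.exp(p[1])))

res1 = optimize.minimize(negloglike1, [0.0], method="BFGS")
res2 = optimize.minimize(negloglike2, [0.0, 0.0], method="BFGS")

# LRT = deviance(model 1) - deviance(model 2) = 2 * (logL2 - logL1)
lrt = 2.0 * (res1.fun - res2.fun)
print(lrt)
```

Since the data were generated with scale 1.5, model 2 fits markedly better and the ratio comes out large; for nested models and well-behaved parameters it would follow a chi-square distribution under the null.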
 fit(lpost, t0, neg=True, scipy_optimize_options=None)[source]¶
Do either a Maximum-A-Posteriori (MAP) or Maximum Likelihood (ML) fit to the power spectrum.
MAP fits include priors, ML fits do not.
 Parameters
 lpost : stingray.modeling.PSDPosterior object
An instance of class stingray.modeling.PSDPosterior that defines the function to be minimized (either in loglikelihood or logposterior).
 t0 : {list | numpy.ndarray}
List/array with the set of initial parameters.
 neg : bool, optional, default True
Boolean to be passed to lpost, setting whether to use the negative posterior or the negative loglikelihood.
 scipy_optimize_options : dict, optional, default None
A dictionary with options for scipy.optimize.minimize, directly passed on as keyword arguments.
 Returns
 res : OptimizationResults object
An object containing useful summaries of the fitting procedure. For details, see the documentation of OptimizationResults.
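What fit does internally can be mimicked with scipy.optimize.minimize directly. This is a hedged sketch on a hypothetical Gaussian-mean likelihood (the real method wraps a PSDPosterior and packages everything into an OptimizationResults object):

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)
data = rng.normal(5.0, 1.0, size=500)

def neg_loglike(t):
    # negative log-likelihood: the quantity minimized when neg=True
    return -np.sum(stats.norm.logpdf(data, loc=t[0], scale=1.0))

t0 = [0.0]                                   # initial parameter guess
res = optimize.minimize(neg_loglike, t0, method="BFGS")

p_opt = res.x                                # best-fit parameters
err = np.sqrt(np.diag(res.hess_inv))         # rough 1-sigma errors from the inverse Hessian
print(p_opt, err)
```

The inverse-Hessian step mirrors how the covariance matrix and parameter errors in OptimizationResults are estimated after a successful fit.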
 plotfits(res1, res2=None, save_plot=False, namestr='test', log=False)[source]¶
Plotting method that allows the user to plot either one or two best-fit models together with the data.
Plots a power spectrum with the best-fit model, as well as the data/model residuals for each model.
 Parameters
 res1 : OptimizationResults object
Output of a successful fitting procedure.
 res2 : OptimizationResults object, optional, default None
Optional output of a second successful fitting procedure, e.g. with a competing model.
 save_plot : bool, optional, default False
If True, the resulting figure will be saved to a file.
 namestr : str, optional, default "test"
If save_plot is True, this string defines the path and file name for the output plot.
 log : bool, optional, default False
If True, plot the axes logarithmically.
 sample(lpost, t0, cov=None, nwalkers=500, niter=100, burnin=100, threads=1, print_results=True, plot=False, namestr='test')[source]¶
Sample the posterior distribution defined in lpost using MCMC. Here we use the emcee package, but other implementations could in principle be used.
 Parameters
 lpost : instance of a Posterior subclass
An instance of class Posterior or one of its subclasses that defines the function to be minimized (either in loglikelihood or logposterior).
 t0 : iterable
List or array containing the starting parameters. Its length must match lpost.model.npar.
 nwalkers : int, optional, default 500
The number of walkers (chains) to use during the MCMC procedure. The more walkers are used, the slower the estimation will be, but the better the final distribution is likely to be.
 niter : int, optional, default 100
The number of iterations to run the MCMC chains for. The larger this number, the longer the estimation will take, but the higher the chance that the walkers have actually converged on the true posterior distribution.
 burnin : int, optional, default 100
The number of iterations to run the walkers before convergence is assumed to have occurred. This part of the chain will be discarded before sampling from what is then assumed to be the posterior distribution desired.
 threads : int, optional, default 1
The number of threads for parallelization. The default is 1, i.e. no parallelization.
 print_results : bool, optional, default True
Boolean flag setting whether the results of the MCMC run should be printed to standard output.
 plot : bool, optional, default False
Boolean flag setting whether summary plots of the MCMC chains should be produced.
 namestr : str, optional, default "test"
Optional string for output file names for the plotting.
 Returns
 res : SamplingResults object
An object containing useful summaries of the sampling procedure. For details see the documentation of SamplingResults.
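The real method delegates to emcee's ensemble sampler; the walker/iteration/burn-in mechanics can be conveyed with a minimal single-chain random-walk Metropolis sketch in plain numpy (the target posterior here is an invented standard normal):

```python
import numpy as np

rng = np.random.default_rng(3)

def log_posterior(mu):
    # hypothetical target: standard normal posterior over one parameter
    return -0.5 * mu**2

niter, burnin = 5000, 500
chain = np.empty(niter + burnin)
chain[0] = 0.0                     # starting parameter, analogous to t0
logp = log_posterior(chain[0])

accepted = 0
for i in range(1, niter + burnin):
    prop = chain[i - 1] + rng.normal(0.0, 1.0)    # random-walk proposal
    logp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < logp_prop - logp:  # Metropolis acceptance rule
        chain[i], logp = prop, logp_prop
        accepted += 1
    else:
        chain[i] = chain[i - 1]

samples = chain[burnin:]           # discard burn-in before summarizing
print(samples.mean(), samples.std(), accepted / (niter + burnin - 1))
```

emcee runs many such chains (walkers) in parallel with a smarter proposal; the burn-in discard and acceptance-ratio diagnostics carry over directly.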
 simulate_highest_outlier(s_all, lpost, t0, max_post=True, seed=None)[source]¶
Simulate \(n\) power spectra from a model and then find the highest data/model outlier in each.
The data/model outlier is defined as
\[T_R = \max{(2(\mathrm{data}/\mathrm{model}))} .\]
 Parameters
 s_all : numpy.ndarray
A list of parameter values derived either from an approximation of the likelihood surface, or from an MCMC run. Has dimensions (n, ndim), where n is the number of simulated power spectra to generate, and ndim the number of model parameters.
 lpost : instance of a Posterior subclass
An instance of class Posterior or one of its subclasses that defines the function to be minimized (either in loglikelihood or logposterior).
 t0 : iterable
List or array containing the starting parameters. Its length must match lpost.model.npar.
 max_post : bool, optional, default True
If True, do MAP fits on the power spectrum to find the highest data/model outlier; otherwise, do a Maximum Likelihood fit. If True, the simulated power spectra will be generated from an MCMC run; otherwise the method will employ the covariance matrix approximated from the likelihood surface to generate samples from that likelihood function.
 seed : int, optional, default None
An optional number to seed the random number generator with, for reproducibility of the results obtained with this method.
 Returns
 max_y_all : numpy.ndarray
An array of maximum outliers for each simulated power spectrum.
 simulate_lrts(s_all, lpost1, t1, lpost2, t2, seed=None)[source]¶
Simulate likelihood ratios for two given models based on MCMC samples for the simpler model (i.e. the null hypothesis).
 Parameters
 s_all : numpy.ndarray of shape (nsamples, lpost1.npar)
An array with MCMC samples derived from the null hypothesis model in lpost1. Its second dimension must match the number of free parameters in lpost1.model.
 lpost1 : LogLikelihood or Posterior subclass object
Object containing the null hypothesis model.
 t1 : iterable of length lpost1.npar
A starting guess for fitting the model in lpost1.
 lpost2 : LogLikelihood or Posterior subclass object
Object containing the alternative hypothesis model.
 t2 : iterable of length lpost2.npar
A starting guess for fitting the model in lpost2.
 max_post : bool, optional, default True
If True, then lpost1 and lpost2 should be Posterior subclass objects; if False, then lpost1 and lpost2 should be LogLikelihood subclass objects.
 seed : int, optional, default None
A seed to initialize the numpy.random.RandomState object to be passed on to _generate_data. Useful for producing exactly reproducible results.
 Returns
 lrt_sim : numpy.ndarray
An array with the simulated likelihood ratios for the simulated data.
Auxiliary Classes¶
These are helper classes instantiated by stingray.modeling.ParameterEstimation and its subclasses to organize the results of model fitting and sampling in a more meaningful, easily accessible way.
 class stingray.modeling.OptimizationResults(lpost, res, neg=True, log=None)[source]¶
Helper class that will contain the results of the regression. Less fiddly than a dictionary.
 Parameters
 lpost : instance of Posterior or one of its subclasses
The object containing the function that is being optimized in the regression.
 res : instance of scipy.optimize.OptimizeResult
The object containing the results from an optimization run.
 neg : bool, optional, default True
A flag that sets whether the loglikelihood or negative loglikelihood is being used.
 log : a logging.getLogger() object, default None
You can pass a predefined object for logging; otherwise a new logger will be instantiated.
 Attributes
 result : float
The result of the optimization, i.e. the function value at the minimum found by the optimizer.
 p_opt : iterable
The list of parameters at the minimum found by the optimizer.
 model : astropy.models.Model instance
The parametric model fit to the data.
 cov : numpy.ndarray
The covariance matrix for the parameters; has shape (len(p_opt), len(p_opt)).
 err : numpy.ndarray
The standard deviation of the parameters, derived from the diagonal of cov. Has the same shape as p_opt.
 mfit : numpy.ndarray
The values of the model for all x.
 deviance : float
The deviance, calculated as -2*log(likelihood).
 aic : float
The Akaike Information Criterion, derived from the log(likelihood) and often used in model comparison between non-nested models; for more details, see 7.
 bic : float
The Bayesian Information Criterion, derived from the log(likelihood) and often used in model comparison between non-nested models; for more details, see 8.
 merit : float
The sum of squared differences between data and model, normalized by the model values.
 dof : int
The number of degrees of freedom in the problem, defined as the number of data points minus the number of parameters.
 sexp : int
2*(number of parameters)*(number of data points).
 ssd : float
sqrt(2*sexp), the expected sum of data-model residuals.
 sobs : float
The observed sum of data-model residuals.
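The deviance, AIC, and BIC attributes above follow standard definitions, which can be computed in a few lines of numpy (a hedged sketch on invented Gaussian data, with one free parameter fit by its ML estimate):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
data = rng.normal(0.0, 1.0, size=100)

# log-likelihood of a unit-variance Gaussian at its ML mean (one free parameter)
loglike = np.sum(stats.norm.logpdf(data, loc=data.mean(), scale=1.0))
npar, ndata = 1, data.size

deviance = -2.0 * loglike
aic = -2.0 * loglike + 2.0 * npar              # Akaike Information Criterion
bic = -2.0 * loglike + npar * np.log(ndata)    # Bayesian Information Criterion
print(deviance, aic, bic)
```

Because log(ndata) exceeds 2 for any reasonably sized data set, BIC penalizes extra parameters more heavily than AIC.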
 _compute_covariance(lpost, res)[source]¶
Compute the covariance of the parameters using the inverse of the Hessian, i.e. the second-order derivative of the loglikelihood. Also calculates an estimate of the standard deviation in the parameters, using the square root of the diagonal of the covariance matrix.
The Hessian is either estimated directly by the chosen method of fitting, or approximated using the statsmodels approx_hess function.
 Parameters
 lpost : instance of Posterior or one of its subclasses
The object containing the function that is being optimized in the regression.
 res : instance of scipy.optimize.OptimizeResult
The object containing the results from an optimization run.
 _compute_criteria(lpost)[source]¶
Compute various information criteria useful for model comparison in non-nested models.
Currently implemented are the Akaike Information Criterion 9 and the Bayesian Information Criterion 10.
 Parameters
 lpost : instance of Posterior or one of its subclasses
The object containing the function that is being optimized in the regression.
 _compute_model(lpost)[source]¶
Compute the values of the best-fit model for all x.
 Parameters
 lpost : instance of Posterior or one of its subclasses
The object containing the function that is being optimized in the regression.
 class stingray.modeling.SamplingResults(sampler, ci_min=5, ci_max=95, log=None)[source]¶
Helper class that will contain the results of the sampling in a handy format.
Less fiddly than a dictionary.
 Parameters
 sampler : emcee.EnsembleSampler object
The object containing the sampler that's done all the work.
 ci_min : float out of [0, 100]
The lower bound percentile for printing credible intervals on the parameters.
 ci_max : float out of [0, 100]
The upper bound percentile for printing credible intervals on the parameters.
 log : a logging.getLogger() object, default None
You can pass a predefined object for logging; otherwise a new logger will be instantiated.
 Attributes
 samples : numpy.ndarray
An array of samples from the MCMC run, including all chains flattened into one long (nwalkers*niter, ndim) array.
 nwalkers : int
The number of chains used in the MCMC procedure.
 niter : int
The number of MCMC iterations in each chain.
 ndim : int
The dimensionality of the problem, i.e. the number of parameters in the model.
 acceptance : float
The mean acceptance ratio, calculated over all chains.
 L : float
The product of the acceptance ratio and the number of samples.
 acor : float
The autocorrelation length for the chains; should be shorter than the chains themselves for independent sampling.
 rhat : float
The weighted average of the between-sequence variance and within-sequence variance; the Gelman-Rubin convergence statistic 11.
 mean : numpy.ndarray
An array of size ndim with the posterior means of the parameters derived from the MCMC chains.
 std : numpy.ndarray
An array of size ndim with the posterior standard deviations of the parameters derived from the MCMC chains.
 ci : numpy.ndarray
An array of shape (ndim, 2) containing the lower and upper bounds of the credible interval (the Bayesian equivalent of the confidence interval) for each parameter, using the bounds set by ci_min and ci_max.
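The rhat attribute can be reproduced from raw chains in a few lines of numpy (a sketch of the basic Gelman-Rubin formula; the chains here are idealized independent draws from one distribution, so rhat should come out very close to 1):

```python
import numpy as np

rng = np.random.default_rng(5)

# hypothetical converged run: nwalkers chains of niter draws from the same target
nwalkers, niter = 8, 2000
chains = rng.normal(0.0, 1.0, size=(nwalkers, niter))

chain_means = chains.mean(axis=1)
W = chains.var(axis=1, ddof=1).mean()            # within-chain variance
B = niter * chain_means.var(ddof=1)              # between-chain variance
var_hat = (niter - 1) / niter * W + B / niter    # pooled posterior-variance estimate
rhat = np.sqrt(var_hat / W)                      # Gelman-Rubin statistic
print(rhat)
```

Values substantially above 1 indicate that the chains disagree with each other and have likely not converged.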
 _check_convergence(sampler)[source]¶
Compute common statistics for convergence of the MCMC chains. While you can never be completely sure that your chains converged, these present reasonable heuristics to give an indication whether convergence is very far off or reasonably close.
Currently implemented are the autocorrelation time 12 and the Gelman-Rubin convergence criterion 13.
 Parameters
 sampler : an emcee.EnsembleSampler object
References