pipelines
Classes#
IngestPipeline #
Bases: Pipeline
Pipeline class designed to read in raw, unstandardized time series data and enhance its quality and usability by converting it into a standard format, embedding metadata, applying quality checks and controls, generating reference plots, and saving the data in an accessible format so it can be used later in scientific analyses or in higher-level tsdat Pipelines.
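The hooks documented below are called in a fixed order around retrieval, quality control, and storage. The following is a minimal, hypothetical sketch of that flow (NOT tsdat's real implementation) using plain dicts as stand-ins for `xr.Dataset`, purely to illustrate the template-method pattern that subclasses customize by overriding hooks:

```python
from typing import Any, Dict, List

# Hedged sketch of the ingest flow: retrieve -> customize -> qc -> finalize
# -> save -> plot. All class and method bodies here are illustrative
# assumptions, not tsdat internals.
class SketchIngestPipeline:
    def run(self, inputs: List[str]) -> Dict[str, Any]:
        dataset = self.retrieve(inputs)
        dataset = self.hook_customize_dataset(dataset)
        dataset = self.apply_quality_checks(dataset)
        dataset = self.hook_finalize_dataset(dataset)
        self.save(dataset)
        self.hook_plot_dataset(dataset)
        return dataset

    def retrieve(self, inputs: List[str]) -> Dict[str, Any]:
        # Stand-in for the retriever API reading raw input files.
        return {"source": inputs}

    def apply_quality_checks(self, dataset: Dict[str, Any]) -> Dict[str, Any]:
        dataset["qc_applied"] = True  # stand-in for configured QC checks
        return dataset

    def save(self, dataset: Dict[str, Any]) -> None:
        dataset["saved"] = True  # stand-in for writing to the storage area

    # Hooks default to no-ops; subclasses override them to customize behavior.
    def hook_customize_dataset(self, dataset: Dict[str, Any]) -> Dict[str, Any]:
        return dataset

    def hook_finalize_dataset(self, dataset: Dict[str, Any]) -> Dict[str, Any]:
        return dataset

    def hook_plot_dataset(self, dataset: Dict[str, Any]) -> None:
        pass
```

A subclass that overrides any of the three hooks slots its logic into the corresponding step without touching the rest of the pipeline.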
Attributes#
Functions#
get_ancillary_filepath #
Returns the path to where an ancillary file should be saved so that it can be synced to the storage area automatically.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `title` | `str` | The title to use for the plot filename. Should only contain alphanumeric and `_` characters. | *required* |
| `extension` | `str` | The file extension. Defaults to `"png"`. | `'png'` |
Returns:

| Name | Type | Description |
| --- | --- | --- |
| `Path` | `Path` | The ancillary filepath. |
Source code in tsdat/pipeline/pipelines.py
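To make the contract concrete, here is a hedged re-implementation sketch of how such a path might be assembled with `pathlib`. The directory layout, the `datastream` and `date` parameters, and the filename pattern are all assumptions for illustration, not tsdat's exact logic:

```python
from pathlib import Path

# Hypothetical sketch: build an ancillary filepath from a storage root, a
# datastream name, a date string, a plot title, and a file extension.
def ancillary_filepath(
    root: Path, datastream: str, date: str, title: str, extension: str = "png"
) -> Path:
    return root / datastream / f"{datastream}.{date}.{title}.{extension}"

path = ancillary_filepath(
    Path("storage/ancillary"),  # hypothetical storage root
    "sgp.buoy_z06.a1",          # hypothetical datastream
    "20240101.000000",
    "wind_speed",
)
```

Keeping the title restricted to alphanumeric and `_` characters ensures the resulting filename is portable across filesystems.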
hook_customize_dataset #
Code hook to customize the retrieved dataset prior to qc being applied.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `dataset` | `Dataset` | The output dataset structure returned by the retriever API. | *required* |
Returns:

| Type | Description |
| --- | --- |
| `Dataset` | `xr.Dataset`: The customized dataset. |
Source code in tsdat/pipeline/pipelines.py
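A typical override derives variables or attaches metadata before QC runs. The sketch below uses a plain dict as a stand-in for `xr.Dataset` so it is self-contained; in practice the class would subclass tsdat's `IngestPipeline`, and the attribute names here are hypothetical:

```python
from typing import Any, Dict

# Illustrative override pattern for hook_customize_dataset (dict stands in
# for xr.Dataset; "attrs" / "location_id" are assumed names for the example).
class CustomizeSketch:
    def hook_customize_dataset(self, dataset: Dict[str, Any]) -> Dict[str, Any]:
        attrs = dict(dataset.get("attrs", {}))
        attrs["location_id"] = "hypothetical_site"  # attach metadata pre-QC
        return {**dataset, "attrs": attrs}
```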
hook_finalize_dataset #
Code hook to finalize the dataset after qc is applied but before it is saved.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `dataset` | `Dataset` | The output dataset returned by the retriever API and modified by the `hook_customize_dataset` hook. | *required* |
Returns:

| Type | Description |
| --- | --- |
| `Dataset` | `xr.Dataset`: The finalized dataset, ready to be saved. |
Source code in tsdat/pipeline/pipelines.py
hook_plot_dataset #
Code hook to create plots for the data which runs after the dataset has been saved.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `dataset` | `Dataset` | The dataset to plot. | *required* |
Source code in tsdat/pipeline/pipelines.py
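Plot hooks typically save figures via `get_ancillary_filepath` so the outputs land in the storage area automatically. The sketch below shows that save-to-ancillary-path pattern without depending on matplotlib; the placeholder file write stands in for a `fig.savefig(filepath)` call, and the class is a stub rather than a real pipeline:

```python
import tempfile
from pathlib import Path
from typing import Any

# Hedged sketch of a plot hook; in a real IngestPipeline subclass,
# get_ancillary_filepath comes from the base class and a matplotlib figure
# would be written to the returned path.
class PlotSketch:
    def __init__(self, ancillary_dir: Path):
        self.ancillary_dir = ancillary_dir

    def get_ancillary_filepath(self, title: str, extension: str = "png") -> Path:
        return self.ancillary_dir / f"{title}.{extension}"

    def hook_plot_dataset(self, dataset: Any) -> Path:
        filepath = self.get_ancillary_filepath("wind_speed")
        filepath.write_bytes(b"")  # stands in for fig.savefig(filepath)
        return filepath

tmp = Path(tempfile.mkdtemp())
saved = PlotSketch(tmp).hook_plot_dataset(dataset=None)
```

Because this hook runs after the dataset has been saved, plotting failures do not prevent the data itself from reaching storage.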
run #
Runs the data pipeline on the provided inputs.
Source code in tsdat/pipeline/pipelines.py
TransformationPipeline #
Bases: IngestPipeline
Pipeline class designed to read in standardized time series data and enhance its quality and usability by combining multiple sources of data, using higher-level processing techniques, etc.
Attributes#
Classes#
Parameters #
Functions#
hook_customize_input_datasets #
```python
hook_customize_input_datasets(
    input_datasets: Dict[str, xr.Dataset], **kwargs: Any
) -> Dict[str, xr.Dataset]
```
Code hook to customize any input datasets prior to datastreams being combined and data converters being run.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `input_datasets` | `Dict[str, Dataset]` | The dictionary of input key (str) to input dataset. Note that for transformation pipelines, input keys != input filenames; rather, each input key is a combination of the datastream and date range used to pull the input data from the storage retriever. | *required* |
Returns:

| Type | Description |
| --- | --- |
| `Dict[str, Dataset]` | `Dict[str, xr.Dataset]`: The customized input datasets. |
Source code in tsdat/pipeline/pipelines.py
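Since the hook receives a mapping of input key to dataset, a common override pattern is to adjust only the entries from a particular datastream before the sources are combined. In the hedged sketch below, the `"datastream::date-range"` key format, the datastream name, and the dict stand-ins for `xr.Dataset` are all assumptions for illustration:

```python
from typing import Any, Dict

# Illustrative hook body: customize only the datasets pulled from one
# (hypothetical) datastream, leaving the others untouched.
def hook_customize_input_datasets(
    input_datasets: Dict[str, Any], **kwargs: Any
) -> Dict[str, Any]:
    customized: Dict[str, Any] = {}
    for key, ds in input_datasets.items():
        if key.startswith("sgp.buoy"):  # hypothetical datastream prefix
            ds = {**ds, "trimmed": True}  # e.g. drop known-bad time ranges
        customized[key] = ds
    return customized
```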
run #
Runs the data pipeline on the provided inputs.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `inputs` | `List[str]` | A 2-element list of start-date, end-date that the pipeline should process. | *required* |
Returns:

| Type | Description |
| --- | --- |
| `Dataset` | `xr.Dataset`: The processed dataset. |
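Unlike `IngestPipeline.run`, which takes input file paths, this `run` expects a 2-element `[start-date, end-date]` list. The sketch below validates such an input; the `"%Y%m%d.%H%M%S"` date format is an assumption for illustration, not a documented tsdat requirement:

```python
from datetime import datetime
from typing import List, Tuple

# Hedged sketch: check that run() inputs are a valid [start, end] pair
# before handing them to a transformation pipeline.
def validate_run_inputs(inputs: List[str]) -> Tuple[datetime, datetime]:
    if len(inputs) != 2:
        raise ValueError("inputs must be a 2-element [start-date, end-date] list")
    start, end = (datetime.strptime(s, "%Y%m%d.%H%M%S") for s in inputs)
    if start >= end:
        raise ValueError("start-date must precede end-date")
    return start, end

start, end = validate_run_inputs(["20230101.000000", "20230102.000000"])
```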