
Module custom


Custom data persistence: shared helpers and orchestration.

Centralizes the logic for appending the data_type column and metadata to Arrow batches (Parquet/Feather), along with custom-data write preparation, path construction, and decode logic, so the catalog delegates here instead of inlining custom-specific branching.

Functions

augment_batch_with_data_type_column
Appends a data_type column (JSON string per row) and type_name + optional metadata to the batch schema. Used by both the Parquet catalog and Feather writer for catalog-compatible output.
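As a rough, Arrow-free illustration (the function name below and the exact JSON payload are assumptions; the real helper operates on an arrow RecordBatch), the appended column is simply one JSON type string repeated once per row:

```rust
// Arrow-free sketch: augment_batch_with_data_type_column works on a
// RecordBatch, but the new column's contents are just one JSON string
// (identifying the custom data type) repeated for every row.
fn data_type_column(type_name: &str, num_rows: usize) -> Vec<String> {
    // The exact JSON layout here is an assumption for illustration.
    let json = format!("{{\"type\":\"{type_name}\"}}");
    vec![json; num_rows]
}

fn main() {
    let col = data_type_column("GreeksData", 3);
    assert_eq!(col.len(), 3);
    assert!(col.iter().all(|v| v == "{\"type\":\"GreeksData\"}"));
    println!("{col:?}");
}
```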
custom_data_path_components
Returns path components for custom data: ["data", "custom", type_name, ...identifier segments]. Used by the catalog to build full object-store paths via make_object_store_path_owned.
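A minimal sketch of the path-component construction (the signature and identifier handling are assumptions; joining into a full object-store path is left to make_object_store_path_owned):

```rust
use std::path::PathBuf;

// Hypothetical sketch: build the ["data", "custom", type_name, ...] segments
// for a custom data type plus its identifier segments.
fn custom_data_path_components(type_name: &str, identifiers: &[&str]) -> Vec<String> {
    let mut parts = vec!["data".to_string(), "custom".to_string(), type_name.to_string()];
    parts.extend(identifiers.iter().map(|s| s.to_string()));
    parts
}

fn main() {
    let parts = custom_data_path_components("greeks", &["AAPL.XNAS"]);
    assert_eq!(parts, vec!["data", "custom", "greeks", "AAPL.XNAS"]);
    // A local-filesystem join, for illustration only:
    let path: PathBuf = parts.iter().collect();
    println!("{}", path.display());
}
```

Keeping the helper segment-based rather than string-based lets the caller choose the separator and root appropriate to the object store.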
decode_batch_to_data
Decodes a RecordBatch to Data objects based on metadata.
decode_custom_batches_to_data
Decodes multiple RecordBatches (e.g. from custom data files) into a single Vec<Data>. Optionally replaces ts_init column with ts_event before decoding each batch.
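The optional timestamp swap can be modeled without Arrow; in this sketch a batch is a name-to-column map of u64 timestamps (an assumption standing in for RecordBatch columns), and ts_init is overwritten with ts_event's values before decoding:

```rust
use std::collections::HashMap;

// Arrow-free model of the pre-decode timestamp swap.
type Batch = HashMap<String, Vec<u64>>;

fn replace_ts_init_with_ts_event(batch: &mut Batch) {
    // If the batch carries a ts_event column, copy it over ts_init.
    if let Some(ts_event) = batch.get("ts_event").cloned() {
        batch.insert("ts_init".to_string(), ts_event);
    }
}

fn main() {
    let mut batch: Batch = HashMap::new();
    batch.insert("ts_event".to_string(), vec![1, 2, 3]);
    batch.insert("ts_init".to_string(), vec![5, 6, 7]);
    replace_ts_init_with_ts_event(&mut batch);
    assert_eq!(batch["ts_init"], vec![1, 2, 3]);
    println!("{:?}", batch["ts_init"]);
}
```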
prepare_custom_data_batch
Prepares a batch of custom data for writing: encodes to Arrow, augments with data_type column, and returns type identity and timestamp range so the catalog can build path and perform I/O.
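The timestamp-range step can be sketched in isolation (plain u64 nanosecond timestamps and the function name are assumptions): the min/max of the batch's ts_init values is what lets the catalog name and place the output file.

```rust
// Hypothetical sketch of the range computation inside the write preparation.
fn timestamp_range(ts_init: &[u64]) -> Option<(u64, u64)> {
    let min = *ts_init.iter().min()?;
    let max = *ts_init.iter().max()?;
    Some((min, max))
}

fn main() {
    assert_eq!(timestamp_range(&[30, 10, 20]), Some((10, 30)));
    assert_eq!(timestamp_range(&[]), None); // empty batch has no range
    println!("{:?}", timestamp_range(&[30, 10, 20]));
}
```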
schema_with_data_type_column
Builds a schema that adds the data_type column and type_name metadata to a base schema. Used when creating a Feather buffer for custom data (single type per writer).
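An Arrow-free model of this schema extension (the struct below and the Utf8 field type are assumptions standing in for arrow's Schema/Field): the base schema gains a data_type field plus a type_name metadata entry.

```rust
use std::collections::HashMap;

// Simplified stand-in for an Arrow schema: named fields plus metadata.
#[derive(Debug)]
struct Schema {
    fields: Vec<(String, String)>, // (field name, data type)
    metadata: HashMap<String, String>,
}

// Hypothetical sketch of schema_with_data_type_column.
fn schema_with_data_type_column(base: &Schema, type_name: &str) -> Schema {
    let mut fields = base.fields.clone();
    fields.push(("data_type".to_string(), "Utf8".to_string()));
    let mut metadata = base.metadata.clone();
    metadata.insert("type_name".to_string(), type_name.to_string());
    Schema { fields, metadata }
}

fn main() {
    let base = Schema {
        fields: vec![("value".to_string(), "Float64".to_string())],
        metadata: HashMap::new(),
    };
    let schema = schema_with_data_type_column(&base, "GreeksData");
    assert_eq!(schema.fields.last().unwrap().0, "data_type");
    assert_eq!(schema.metadata["type_name"], "GreeksData");
    println!("{schema:?}");
}
```

Because a Feather writer handles a single type, the type_name can live in schema-level metadata rather than per row.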