
node ¤

Node objects in the hierarchical tree.

Dataset ¤

Dataset(
    *,
    name: str,
    parent: Group | None,
    read_only: bool,
    shape: ShapeLike = (0,),
    dtype: DTypeLike = float,
    buffer: Buffer | None = None,
    offset: SupportsIndex = 0,
    strides: ShapeLike | None = None,
    order: Literal["K", "A", "C", "F"] | None = None,
    data: ArrayLike | None = None,
    **metadata: Any,
)

Bases: NDArrayOperatorsMixin, Sequence[Any]

A Dataset functions as a numpy ndarray with Metadata.

Warning

Do not instantiate directly. Create a new Dataset using the create_dataset method.

Parameters:

Name Type Description Default
name str

A name to associate with this Dataset.

required
parent Group | None

The parent Group to the Dataset.

required
read_only bool

Whether the Dataset is initialised in read-only mode.

required
shape ShapeLike

See numpy ndarray. Only used if data is None.

(0,)
dtype DTypeLike

See numpy ndarray. Only used if data is None or if data is not a numpy ndarray instance.

float
buffer Buffer | None

See numpy ndarray. Only used if data is None.

None
offset SupportsIndex

See numpy ndarray. Only used if data is None.

0
strides ShapeLike | None

See numpy ndarray. Only used if data is None.

None
order Literal['K', 'A', 'C', 'F'] | None

See numpy ndarray. Only used if data is None or if data is not a numpy ndarray instance.

None
data ArrayLike | None

If not None, either a numpy ndarray, which is used directly as the underlying data, or an array-like object that is passed to asarray (together with dtype and order) to create the underlying data.

None
metadata Any

All other keyword arguments are used as Metadata for this Dataset.

{}
Source code in src/msl/io/node.py
def __init__(  # noqa: PLR0913
    self,
    *,
    name: str,
    parent: Group | None,
    read_only: bool,
    shape: ShapeLike = (0,),
    dtype: DTypeLike = float,
    buffer: Buffer | None = None,
    offset: SupportsIndex = 0,
    strides: ShapeLike | None = None,
    order: Literal["K", "A", "C", "F"] | None = None,
    data: ArrayLike | None = None,
    **metadata: Any,
) -> None:
    """A *Dataset* functions as a numpy [ndarray][numpy.ndarray] with [Metadata][msl.io.metadata.Metadata].

    !!! warning
        Do not instantiate directly. Create a new [Dataset][msl.io.node.Dataset] using the
        [create_dataset][msl.io.node.Group.create_dataset] method.

    Args:
        name: A name to associate with this [Dataset][msl.io.node.Dataset].
        parent: The parent [Group][msl.io.node.Group] to the [Dataset][msl.io.node.Dataset].
        read_only: Whether the [Dataset][msl.io.node.Dataset] is initialised in read-only mode.
        shape: See numpy [ndarray][numpy.ndarray]. Only used if `data` is `None`.
        dtype: See numpy [ndarray][numpy.ndarray]. Only used if `data` is `None` or if
            `data` is not a numpy [ndarray][numpy.ndarray] instance.
        buffer: See numpy [ndarray][numpy.ndarray]. Only used if `data` is `None`.
        offset: See numpy [ndarray][numpy.ndarray]. Only used if `data` is `None`.
        strides: See numpy [ndarray][numpy.ndarray]. Only used if `data` is `None`.
        order: See numpy [ndarray][numpy.ndarray]. Only used if `data` is `None` or if
            `data` is not a numpy [ndarray][numpy.ndarray] instance.
        data: If not `None`, it must be either a numpy [ndarray][numpy.ndarray] or
            an array-like object which will be passed to [asarray][numpy.asarray],
            as well as `dtype` and `order`, to be used as the underlying data.
        metadata: All other keyword arguments are used as
            [Metadata][msl.io.metadata.Metadata] for this [Dataset][msl.io.node.Dataset].
    """
    name = _unix_name(name, parent)
    self._name: str = name
    self._parent: Group | None = parent
    self._metadata: Metadata = Metadata(read_only=read_only, node_name=name, **metadata)

    self._data: NDArray[Any]
    if data is None:
        self._data = np.ndarray(shape, dtype=dtype, buffer=buffer, offset=offset, strides=strides, order=order)
    elif isinstance(data, np.ndarray):
        self._data = data
    elif isinstance(data, Dataset):
        self._data = data.data
    else:
        self._data = np.asarray(data, dtype=dtype, order=order)

    self.read_only = read_only
    _notify_created(self, parent)
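
The branching on `data` in the constructor can be sketched with plain numpy, outside msl-io (`resolve_data` is a hypothetical helper, not part of the library, and the `Dataset`-input branch is omitted):

```python
import numpy as np

def resolve_data(data=None, shape=(0,), dtype=float, order=None):
    """Mimic how Dataset.__init__ chooses its underlying array (sketch only)."""
    if data is None:
        # no data given: allocate a new ndarray from shape/dtype
        return np.ndarray(shape, dtype=dtype, order=order)
    if isinstance(data, np.ndarray):
        # an existing ndarray is used as-is (dtype and order are ignored)
        return data
    # anything else array-like is converted, honouring dtype and order
    return np.asarray(data, dtype=dtype, order=order)

a = resolve_data(data=[[0, 1, 2], [3, 4, 5]], dtype=float)
print(a.dtype, a.shape)  # float64 (2, 3)
```

Note that because an existing ndarray is used as-is, mutating it afterwards also mutates the Dataset; pass a copy if that is not wanted.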

data property ¤

data: NDArray[Any]

ndarray — The data of the Dataset.

Tip

You do not have to call this attribute to access the underlying numpy ndarray. You can directly call any ndarray attribute from the Dataset instance.

For example,

>>> dataset
<Dataset '/my_data' shape=(4, 3) dtype='<f8' (0 metadata)>
>>> dataset.data
array([[ 0.,  1.,  2.],
       [ 3.,  4.,  5.],
       [ 6.,  7.,  8.],
       [ 9., 10., 11.]])
>>> dataset.size
12
>>> dataset.tolist()
[[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0], [9.0, 10.0, 11.0]]
>>> dataset.mean(axis=0)
array([4.5, 5.5, 6.5])
>>> dataset[::2]
array([[0., 1., 2.],
       [6., 7., 8.]])

metadata property ¤

metadata: Metadata

Metadata — The metadata for this Dataset.

name property ¤

name: str

str — The name of this Dataset.

parent property ¤

parent: Group | None

Group | None — The parent of this Dataset.

read_only property writable ¤

read_only: bool

bool — Whether the Dataset is in read-only mode.

This is equivalent to setting the WRITEABLE property in numpy.ndarray.setflags.
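
The equivalence with numpy's WRITEABLE flag can be shown on a standalone array (plain numpy, not the Dataset class itself):

```python
import numpy as np

data = np.arange(6.0)
data.setflags(write=False)   # what read_only = True does to the underlying array
assert not data.flags.writeable
try:
    data[0] = 99.0
except ValueError as exc:
    print(exc)  # assignment destination is read-only

data.setflags(write=True)    # read_only = False restores write access
data[0] = 99.0
```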

add_metadata ¤

add_metadata(**metadata: Any) -> None

Add metadata to the Dataset.

Parameters:

Name Type Description Default
metadata Any

Key-value pairs to add to the Metadata for this Dataset.

{}
Source code in src/msl/io/node.py
def add_metadata(self, **metadata: Any) -> None:
    """Add metadata to the [Dataset][msl.io.node.Dataset].

    Args:
        metadata: Key-value pairs to add to the [Metadata][msl.io.metadata.Metadata] for this
            [Dataset][msl.io.node.Dataset].
    """
    self._metadata.update(**metadata)

copy ¤

copy(*, read_only: bool | None = None) -> Dataset

Create a copy of this Dataset.

Parameters:

Name Type Description Default
read_only bool | None

Whether the copy should be created in read-only mode. If None, creates a copy using the mode for the Dataset that is being copied.

None

Returns:

Type Description
Dataset

A copy of this Dataset.

Source code in src/msl/io/node.py
def copy(self, *, read_only: bool | None = None) -> Dataset:
    """Create a copy of this [Dataset][msl.io.node.Dataset].

    Args:
        read_only: Whether the copy should be created in read-only mode. If `None`,
            creates a copy using the mode for the [Dataset][msl.io.node.Dataset] that is being copied.

    Returns:
        A copy of this [Dataset][msl.io.node.Dataset].
    """
    return Dataset(
        name=self._name,
        parent=self._parent,
        read_only=self.read_only if read_only is None else read_only,
        data=self._data.copy(),
        **self._metadata.copy(),
    )

DatasetLogging ¤

DatasetLogging(
    *,
    name: str,
    parent: Group | None,
    attributes: Sequence[str],
    level: str | int = NOTSET,
    logger: Logger | None = None,
    date_fmt: str | None = None,
    **kwargs: Any,
)

Bases: Dataset

A Dataset that handles logging records.

Warning

Do not instantiate directly. Create a new DatasetLogging using create_dataset_logging.

Parameters:

Name Type Description Default
name str

A name to associate with the Dataset.

required
parent Group | None

The parent to the DatasetLogging.

required
attributes Sequence[str]

The attribute names to include in the Dataset for each logging record.

required
level str | int

The logging level to use.

NOTSET
logger Logger | None

The Logger that this DatasetLogging instance will be associated with. If None, it is associated with the root Logger.

None
date_fmt str | None

The datetime format code to use to represent the asctime attribute (only if asctime is included as one of the attributes).

None
kwargs Any

All additional keyword arguments are passed to Dataset. By default, every logging record is appended to the Dataset, which guarantees that the size of the Dataset is equal to the number of logging records that were added to it. However, appending can decrease performance if many logging records are added often, because a copy of the data in the Dataset is created for each logging record that is added. You can improve performance by specifying an initial size for the Dataset with a shape or a size keyword argument; if the Dataset later needs to grow, additional memory proportional to its size is allocated automatically. If you do this, call remove_empty_rows before writing the DatasetLogging to a file, or before interacting with its data, to remove the extra empty rows that were created.

{}
Source code in src/msl/io/node.py
def __init__(  # noqa: PLR0913
    self,
    *,
    name: str,
    parent: Group | None,
    attributes: Sequence[str],
    level: str | int = logging.NOTSET,
    logger: logging.Logger | None = None,
    date_fmt: str | None = None,
    **kwargs: Any,
) -> None:
    """A [Dataset][msl.io.node.Dataset] that handles [logging][]{:target="_blank"} records.

    !!! warning
        Do not instantiate directly. Create a new [DatasetLogging][msl.io.node.DatasetLogging] using
        [create_dataset_logging][msl.io.node.Group.create_dataset_logging].

    Args:
        name: A name to associate with the [Dataset][msl.io.node.Dataset].
        parent: The parent to the `DatasetLogging`.
        attributes: The [attribute names][logrecord-attributes] to include in the
            [Dataset][msl.io.node.Dataset] for each [logging record][log-record].
        level: The [logging level][levels] to use.
        logger: The [Logger][logging.Logger] that this `DatasetLogging` instance
            will be associated with. If `None`, it is associated with the _root_ [Logger][logging.Logger].
        date_fmt: The [datetime][datetime.datetime] [format code][strftime-strptime-behavior]
            to use to represent the _asctime_ [attribute][logrecord-attributes] (only if
            _asctime_ is included as one of the `attributes`).
        kwargs: All additional keyword arguments are passed to [Dataset][msl.io.node.Dataset].
            The default behaviour is to append every [logging record][log-record]
            to the [Dataset][msl.io.node.Dataset]. This guarantees that the size of the
            [Dataset][msl.io.node.Dataset] is equal to the number of [logging records][log-record]
            that were added to it. However, this behaviour can decrease performance if many
            [logging records][log-record] are added often because a copy of the data in the
            [Dataset][msl.io.node.Dataset] is created for each [logging record][log-record]
            that is added. You can improve performance by specifying an initial size of the
            [Dataset][msl.io.node.Dataset] by including a `shape` or a `size` keyword argument.
            This will also automatically allocate more memory that is proportional to the size of the
            [Dataset][msl.io.node.Dataset], if the size of the
            [Dataset][msl.io.node.Dataset] needs to be increased. If you do this then you will
            want to call [remove_empty_rows][msl.io.node.DatasetLogging.remove_empty_rows] before
            writing `DatasetLogging` to a file or interacting with the data in `DatasetLogging` to
            remove the extra _empty_ rows that were created.
    """
    if not attributes:
        msg = "Must specify logging attributes"
        raise ValueError(msg)

    if not all(isinstance(a, str) for a in attributes):  # pyright: ignore[reportUnnecessaryIsInstance]
        msg = f"Must specify attribute names as strings, got: {attributes}"
        raise ValueError(msg)

    self._logger: logging.Logger | None = None
    self._attributes: tuple[str, ...] = tuple(attributes)
    self._uses_asctime: bool = "asctime" in attributes
    self._date_fmt: str = date_fmt or "%Y-%m-%dT%H:%M:%S.%f"

    _level: int = getattr(logging, level) if isinstance(level, str) else int(level)

    # these 3 keys in the metadata are used to distinguish a DatasetLogging
    # object from a regular Dataset object
    kwargs["logging_level"] = _level
    kwargs["logging_level_name"] = logging.getLevelName(_level)
    kwargs["logging_date_format"] = date_fmt

    self._auto_resize: bool = "size" in kwargs or "shape" in kwargs
    if self._auto_resize:
        if "size" in kwargs:
            kwargs["shape"] = (kwargs.pop("size"),)
        elif isinstance(kwargs["shape"], int):
            kwargs["shape"] = (kwargs["shape"],)

        shape = kwargs["shape"]
        if len(shape) != 1:
            msg = f"Invalid shape {shape}, the number of dimensions must be 1"
            raise ValueError(msg)
        if shape[0] < 0:
            msg = f"Invalid shape {shape}"
            raise ValueError(msg)

    self._dtype: np.dtype[np.object_ | np.void] = np.dtype([(a, object) for a in attributes])
    super().__init__(name=name, parent=parent, read_only=False, dtype=self._dtype, **kwargs)

    self._index: int | np.intp = np.count_nonzero(self._data)
    if self._auto_resize and self._data.shape < kwargs["shape"]:
        self._resize(new_allocated=kwargs["shape"][0])

    self._handler: logging.Handler = logging.Handler(level=_level)
    self._handler.set_name(self.name)
    self._handler.emit = self._emit  # type: ignore[method-assign]
    self.set_logger(logger or logging.getLogger())
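
The storage layout that the constructor builds — one object-dtype field per attribute, with optionally pre-allocated rows — can be sketched with plain numpy (a simplified stand-in, not the library's exact bookkeeping):

```python
import numpy as np

attributes = ("asctime", "levelname", "name", "message")
# one object-dtype field per logging attribute, as in DatasetLogging
dtype = np.dtype([(a, object) for a in attributes])

# pre-allocated storage (the `size`/`shape` keyword path);
# object fields of an empty array are initialised to None
data = np.empty(4, dtype=dtype)
data[0] = ("2024-01-01T00:00:00.000000", "INFO", "root", "started")
data[1] = ("2024-01-01T00:00:01.000000", "WARNING", "root", "low disk")

# counting the rows whose first field is not None finds the next free index
filled = int((data["asctime"] != None).sum())  # noqa: E711
print(filled)  # 2
```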

attributes property ¤

attributes: tuple[str, ...]

tuple[str, ...] — The attribute names that are logged.

date_fmt property ¤

date_fmt: str

str — The datetime format code used to represent the asctime attribute.

level property writable ¤

level: int

int — The logging level that is used.

logger property ¤

logger: Logger | None

Logger | None — The Logger for this DatasetLogging.

add_filter ¤

add_filter(log_filter: Filter) -> None

Add a logging filter.

Parameters:

Name Type Description Default
log_filter Filter

The logging Filter to add to the Handler.

required
Source code in src/msl/io/node.py
def add_filter(self, log_filter: logging.Filter) -> None:
    """Add a logging filter.

    Args:
        log_filter: The logging [Filter][logging.Filter] to add to the [Handler][logging.Handler]
    """
    self._handler.addFilter(log_filter)

remove_empty_rows ¤

remove_empty_rows() -> None

Remove empty rows from the Dataset.

If the DatasetLogging object was initialized with a shape or a size keyword argument then the size of the Dataset is always greater than or equal to the number of logging records that were added to it. Calling this method will remove the rows in the Dataset that were not from a logging record.

Source code in src/msl/io/node.py
def remove_empty_rows(self) -> None:
    """Remove empty rows from the [Dataset][msl.io.node.Dataset].

    If the [DatasetLogging][msl.io.node.DatasetLogging] object was initialized with a `shape` or a `size`
    keyword argument then the size of the [Dataset][msl.io.node.Dataset] is always greater than or equal to
    the number of [logging records][log-record] that were added to it. Calling this method will remove the
    rows in the [Dataset][msl.io.node.Dataset] that were not from a [logging record][log-record].
    """
    assert self._dtype.names is not None  # noqa: S101

    # don't use "is not None" since this does not work as expected
    self._data: NDArray[Any] = self._data[self._data[self._dtype.names[0]] != None]  # noqa: E711
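
The reason for `!= None` here is that `is not None` is an identity test on the array object as a whole and cannot be applied elementwise, whereas `!=` broadcasts into a boolean mask. A standalone numpy sketch:

```python
import numpy as np

dtype = np.dtype([("asctime", object), ("message", object)])
data = np.empty(5, dtype=dtype)     # object fields start out as None
data[0] = ("t0", "first record")
data[1] = ("t1", "second record")

# `data["asctime"] is not None` would just be True (the array exists);
# `!= None` compares each element and yields a mask over the rows instead
mask = data["asctime"] != None  # noqa: E711
trimmed = data[mask]
print(trimmed.shape)  # (2,)
```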

remove_filter ¤

remove_filter(log_filter: Filter) -> None

Remove a logging filter.

Parameters:

Name Type Description Default
log_filter Filter

The logging Filter to remove from the Handler.

required
Source code in src/msl/io/node.py
def remove_filter(self, log_filter: logging.Filter) -> None:
    """Remove a logging filter.

    Args:
        log_filter: The logging [Filter][logging.Filter] to remove from the [Handler][logging.Handler].
    """
    self._handler.removeFilter(log_filter)

remove_handler ¤

remove_handler() -> None

Remove this class's Handler from the associated Logger.

After calling this method, logging records are no longer added to the Dataset.

Source code in src/msl/io/node.py
def remove_handler(self) -> None:
    """Remove this class's [Handler][logging.Handler] from the associated [Logger][logging.Logger].

    After calling this method [logging records][log-record] are no longer added
    to the [Dataset][msl.io.node.Dataset].
    """
    if self._logger is not None:
        self._logger.removeHandler(self._handler)

set_logger ¤

set_logger(logger: Logger) -> None

Add this class's Handler to a Logger.

Parameters:

Name Type Description Default
logger Logger

The Logger to add this class's Handler to.

required
Source code in src/msl/io/node.py
def set_logger(self, logger: logging.Logger) -> None:
    """Add this class's [Handler][logging.Handler] to a [Logger][logging.Logger].

    Args:
        logger: The [Logger][logging.Logger] to add this class's [Handler][logging.Handler] to.
    """
    level = self._handler.level
    if logger.level == 0 or logger.level > level:
        logger.setLevel(level)

    self.remove_handler()
    logger.addHandler(self._handler)
    self._logger = logger
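
The mechanism that set_logger wires up — a Handler whose emit stores each record — can be sketched with the standard logging module alone (a simplified stand-in for DatasetLogging, with hypothetical names):

```python
import logging

records = []

# a minimal stand-in for DatasetLogging's handler: emit() stores each record
handler = logging.Handler(level=logging.INFO)
handler.emit = lambda record: records.append((record.levelname, record.getMessage()))

logger = logging.getLogger("sketch")
logger.propagate = False  # keep the sketch self-contained
# mirror set_logger: lower the logger's level if it would filter our records
if logger.level == 0 or logger.level > handler.level:
    logger.setLevel(handler.level)
logger.addHandler(handler)

logger.info("measurement started")
logger.debug("ignored: below the handler level")
logger.warning("sensor %d offline", 3)
print(records)  # [('INFO', 'measurement started'), ('WARNING', 'sensor 3 offline')]
```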

Group ¤

Group(
    *,
    name: str,
    parent: Group | None,
    read_only: bool,
    **metadata: Any,
)

Bases: FreezableMap['Dataset | Group']

A Group can contain sub-Groups and/or Datasets.

Warning

Do not instantiate directly. Create a new Group using create_group.

Parameters:

Name Type Description Default
name str

The name of this Group. Uses a naming convention analogous to UNIX file systems where each Group can be thought of as a directory and where every subdirectory is separated from its parent directory by the / character.

required
parent Group | None

The parent to this Group.

required
read_only bool

Whether the Group is initialised in read-only mode.

required
metadata Any

All additional keyword arguments are used to create the Metadata for this Group.

{}
Source code in src/msl/io/node.py
def __init__(self, *, name: str, parent: Group | None, read_only: bool, **metadata: Any) -> None:
    """A [Group][msl.io.node.Group] can contain sub-[Group][msl.io.node.Group]s and/or [Dataset][msl.io.node.Dataset]s.

    !!! warning
        Do not instantiate directly. Create a new [Group][msl.io.node.Group] using
        [create_group][msl.io.node.Group.create_group].

    Args:
        name: The name of this [Group][msl.io.node.Group]. Uses a naming convention analogous to UNIX
            file systems where each [Group][msl.io.node.Group] can be thought
            of as a directory and where every subdirectory is separated from its
            parent directory by the `/` character.
        parent: The parent to this [Group][msl.io.node.Group].
        read_only: Whether the [Group][msl.io.node.Group] is initialised in read-only mode.
        metadata: All additional keyword arguments are used to create the
            [Metadata][msl.io.metadata.Metadata] for this [Group][msl.io.node.Group].
    """  # noqa: E501
    name = _unix_name(name, parent)
    self._name: str = name
    self._parent: Group | None = parent
    self._metadata: Metadata = Metadata(read_only=read_only, node_name=name, **metadata)
    _notify_created(self, parent)
    super().__init__(read_only=read_only)

metadata property ¤

metadata: Metadata

Metadata — The metadata for this Group.

name property ¤

name: str

str — The name of this Group.

parent property ¤

parent: Group | None

Group | None — The parent of this Group.

read_only property writable ¤

read_only: bool

bool — Whether this Group is in read-only mode.

Setting this value will also update all sub-Groups and sub-Datasets to be in the same mode.

add_dataset ¤

add_dataset(name: str, dataset: Dataset) -> None

Add a Dataset.

Parameters:

Name Type Description Default
name str

The name of the new Dataset to add. Automatically creates the ancestor Groups if they do not exist.

required
dataset Dataset

The Dataset to add. The data and the Metadata in the Dataset are copied.

required
Source code in src/msl/io/node.py
def add_dataset(self, name: str, dataset: Dataset) -> None:
    """Add a [Dataset][msl.io.node.Dataset].

    Args:
        name: The name of the new [Dataset][msl.io.node.Dataset] to add. Automatically creates the ancestor
            [Group][msl.io.node.Group]s if they do not exist.
        dataset: The [Dataset][msl.io.node.Dataset] to add. The data and the
            [Metadata][msl.io.metadata.Metadata] in the [Dataset][msl.io.node.Dataset] are copied.
    """
    if not isinstance(dataset, Dataset):  # pyright: ignore[reportUnnecessaryIsInstance]
        msg = f"Must pass in a Dataset object, got {dataset!r}"  # type: ignore[unreachable] # pyright: ignore[reportUnreachable]
        raise TypeError(msg)

    name = "/" + name.strip("/")
    _ = self.create_dataset(name, read_only=dataset.read_only, data=dataset.data.copy(), **dataset.metadata.copy())

add_dataset_logging ¤

add_dataset_logging(
    name: str, dataset_logging: DatasetLogging
) -> None

Add a DatasetLogging.

Parameters:

Name Type Description Default
name str

The name of the new DatasetLogging to add. Automatically creates the ancestor Groups if they do not exist.

required
dataset_logging DatasetLogging

The DatasetLogging to add. The data and Metadata are copied.

required
Source code in src/msl/io/node.py
def add_dataset_logging(self, name: str, dataset_logging: DatasetLogging) -> None:
    """Add a [DatasetLogging][msl.io.node.DatasetLogging].

    Args:
        name: The name of the new [DatasetLogging][msl.io.node.DatasetLogging] to add.
            Automatically creates the ancestor [Group][msl.io.node.Group]s if they do not exist.
        dataset_logging: The [DatasetLogging][msl.io.node.DatasetLogging] to add. The
            data and [Metadata][msl.io.metadata.Metadata] are copied.
    """
    if not isinstance(dataset_logging, DatasetLogging):  # pyright: ignore[reportUnnecessaryIsInstance]
        msg = f"Must pass in a DatasetLogging object, got {dataset_logging!r}"  # type: ignore[unreachable] # pyright: ignore[reportUnreachable]
        raise TypeError(msg)

    name = "/" + name.strip("/")
    _ = self.create_dataset_logging(
        name,
        level=dataset_logging.level,
        attributes=dataset_logging.attributes,
        logger=dataset_logging.logger,
        date_fmt=dataset_logging.date_fmt,
        data=dataset_logging.data.copy(),
        **dataset_logging.metadata.copy(),
    )

add_group ¤

add_group(name: str, group: Group) -> None

Add a Group.

Parameters:

Name Type Description Default
name str

The name of the new Group to add. Automatically creates the ancestor Groups if they do not exist.

required
group Group

The Group to add. The Datasets and Metadata that are contained within the group will be copied.

required
Source code in src/msl/io/node.py
def add_group(self, name: str, group: Group) -> None:
    """Add a [Group][msl.io.node.Group].

    Args:
        name: The name of the new [Group][msl.io.node.Group] to add. Automatically creates the ancestor
            [Group][msl.io.node.Group]s if they do not exist.
        group: The [Group][msl.io.node.Group] to add. The [Dataset][msl.io.node.Dataset]s and
            [Metadata][msl.io.metadata.Metadata] that are contained within the
            `group` will be copied.
    """
    if not isinstance(group, Group):  # pyright: ignore[reportUnnecessaryIsInstance]
        msg = f"Must pass in a Group object, got {group!r}"  # type: ignore[unreachable] # pyright: ignore[reportUnreachable]
        raise TypeError(msg)

    name = "/" + name.strip("/")

    if not group:  # no sub-Groups or Datasets, only add the Metadata
        _ = self.create_group(name + group.name, **group.metadata.copy())
        return

    for key, node in group.items():
        n = name + key
        if isinstance(node, Group):
            _ = self.create_group(n, read_only=node.read_only, **node.metadata.copy())
        else:  # must be a Dataset
            _ = self.create_dataset(n, read_only=node.read_only, data=node.data.copy(), **node.metadata.copy())

add_metadata ¤

add_metadata(**metadata: Any) -> None

Add metadata to the Group.

Parameters:

Name Type Description Default
metadata Any

Key-value pairs to add to the Metadata for this Group.

{}
Source code in src/msl/io/node.py
def add_metadata(self, **metadata: Any) -> None:
    """Add metadata to the [Group][msl.io.node.Group].

    Args:
        metadata: Key-value pairs to add to the [Metadata][msl.io.metadata.Metadata] for this
            [Group][msl.io.node.Group].
    """
    self._metadata.update(**metadata)

ancestors ¤

ancestors() -> Iterator[Group]

Yield all ancestor (parent) Groups of this Group.

Yields:

Type Description
Group

The ancestors of this Group.

Source code in src/msl/io/node.py
def ancestors(self) -> Iterator[Group]:
    """Yield all ancestor (parent) [Group][msl.io.node.Group]s of this [Group][msl.io.node.Group].

    Yields:
        The ancestors of this [Group][msl.io.node.Group].
    """
    parent = self.parent
    while parent is not None:
        yield parent
        parent = parent.parent
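
The traversal is a plain walk up the parent links; a minimal stand-in class (hypothetical, not the msl-io Group) illustrates it:

```python
from typing import Iterator, Optional

class Node:
    """A minimal stand-in for Group's parent links."""
    def __init__(self, name: str, parent: Optional["Node"] = None) -> None:
        self.name, self.parent = name, parent

    def ancestors(self) -> Iterator["Node"]:
        # yield each parent until the root, whose parent is None
        parent = self.parent
        while parent is not None:
            yield parent
            parent = parent.parent

root = Node("/")
a = Node("/a", parent=root)
b = Node("/a/b", parent=a)
print([n.name for n in b.ancestors()])  # ['/a', '/']
```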

create_dataset ¤

create_dataset(
    name: str,
    *,
    read_only: bool | None = None,
    **kwargs: Any,
) -> Dataset

Create a new Dataset.

Parameters:

Name Type Description Default
name str

The name of the new Dataset. Automatically creates the ancestor Groups if they do not exist. See here for an example.

required
read_only bool | None

Whether to create the new Dataset in read-only mode. If None, uses the mode for this Group.

None
kwargs Any

All additional keyword arguments are passed to Dataset.

{}

Returns:

Type Description
Dataset

The new Dataset that was created.

Source code in src/msl/io/node.py
def create_dataset(self, name: str, *, read_only: bool | None = None, **kwargs: Any) -> Dataset:
    """Create a new [Dataset][msl.io.node.Dataset].

    Args:
        name: The name of the new [Dataset][msl.io.node.Dataset]. Automatically creates the ancestor
            [Group][msl.io.node.Group]s if they do not exist. See [here][automatic-group-creation]
            for an example.
        read_only: Whether to create the new [Dataset][msl.io.node.Dataset] in read-only mode.
            If `None`, uses the mode for this [Group][msl.io.node.Group].
        kwargs: All additional keyword arguments are passed to [Dataset][msl.io.node.Dataset].

    Returns:
        The new [Dataset][msl.io.node.Dataset] that was created.
    """
    read_only, kwargs = self._check(read_only=read_only, **kwargs)
    name, parent = self._create_ancestors(name, read_only=read_only)
    return Dataset(name=name, parent=parent, read_only=read_only, **kwargs)

create_dataset_logging ¤

create_dataset_logging(
    name: str,
    *,
    level: str | int = "INFO",
    attributes: Sequence[str] | None = None,
    logger: Logger | None = None,
    date_fmt: str | None = None,
    **kwargs: Any,
) -> DatasetLogging

Create a Dataset that handles logging records.

See here for an example.

Parameters:

Name Type Description Default
name str

A name to associate with the Dataset. Automatically creates the ancestor Groups if they do not exist.

required
level str | int

The logging level to use.

'INFO'
attributes Sequence[str] | None

The attribute names to include in the Dataset for each logging record. If None, uses asctime, levelname, name, and message.

None
logger Logger | None

The Logger that the DatasetLogging object will be associated with. If None, it is associated with the root Logger.

None
date_fmt str | None

The datetime format code used to represent the asctime attribute. If None, uses the ISO 8601 format "%Y-%m-%dT%H:%M:%S.%f".

None
kwargs Any

All additional keyword arguments are passed to Dataset. By default, every logging record is appended to the Dataset, which guarantees that the size of the Dataset is equal to the number of logging records that were added to it. However, appending can decrease performance if many logging records are added often, because a copy of the data in the Dataset is created for each logging record that is added. You can improve performance by specifying an initial size for the Dataset with a shape or a size keyword argument; if the Dataset later needs to grow, additional empty rows, proportional to the size of the Dataset, are created automatically. If you do this, call remove_empty_rows before writing the DatasetLogging to a file, or before interacting with its data, to remove the empty rows that were created.

{}

Returns:

Type Description
DatasetLogging

The DatasetLogging that was created.

Source code in src/msl/io/node.py
def create_dataset_logging(
    self,
    name: str,
    *,
    level: str | int = "INFO",
    attributes: Sequence[str] | None = None,
    logger: Logger | None = None,
    date_fmt: str | None = None,
    **kwargs: Any,
) -> DatasetLogging:
    """Create a [Dataset][msl.io.node.Dataset] that handles [logging][] records.

    !!! example "See [here][msl-io-dataset-logging] for an example."

    Args:
        name: A name to associate with the [Dataset][msl.io.node.Dataset].
            Automatically creates the ancestor [Group][msl.io.node.Group]s if they do not exist.
        level: The [logging level][levels] to use.
        attributes: The [attribute names][logrecord-attributes] to include in the
            [Dataset][msl.io.node.Dataset] for each [logging record][log-record].
            If `None`, uses _asctime_, _levelname_, _name_, and _message_.
        logger: The [Logger][logging.Logger] that the [DatasetLogging][msl.io.node.DatasetLogging] object
            will be associated with. If `None`, it is associated with the _root_ [Logger][logging.Logger].
        date_fmt: The [datetime][datetime.datetime] [format code][strftime-strptime-behavior]
            to use to represent the _asctime_ [attribute][logrecord-attributes] in.
            If `None`, uses the ISO 8601 format `"%Y-%m-%dT%H:%M:%S.%f"`.
        kwargs: All additional keyword arguments are passed to [Dataset][msl.io.node.Dataset].
            By default, every [logging record][log-record] is appended to the
            [Dataset][msl.io.node.Dataset], which guarantees that the size of the
            [Dataset][msl.io.node.Dataset] equals the number of
            [logging records][log-record] that were added to it. However, appending
            can decrease performance if many [logging records][log-record] are
            added often, because the data in the [Dataset][msl.io.node.Dataset] is
            copied for each [logging record][log-record] that is added. You can improve
            performance by specifying an initial size for the [Dataset][msl.io.node.Dataset]
            with a `shape` or a `size` keyword argument. When the
            [Dataset][msl.io.node.Dataset] then needs to grow, additional empty rows
            (proportional in number to the current size of the
            [Dataset][msl.io.node.Dataset]) are created automatically. If you do this,
            call [remove_empty_rows][msl.io.node.DatasetLogging.remove_empty_rows] before
            writing the [DatasetLogging][msl.io.node.DatasetLogging] to a file, or before
            interacting with its data, to remove the _empty_ rows that were created.

    Returns:
        The [DatasetLogging][msl.io.node.DatasetLogging] that was created.
    """
    read_only, metadata = self._check(read_only=False, **kwargs)
    name, parent = self._create_ancestors(name, read_only=read_only)
    if attributes is None:
        # if the default attribute names are changed then update the `attributes`
        # description in the docstring of create_dataset_logging() and require_dataset_logging()
        attributes = ["asctime", "levelname", "name", "message"]
    if date_fmt is None:
        # if the default date_fmt is changed then update the `date_fmt`
        # description in the docstring of create_dataset_logging() and require_dataset_logging()
        date_fmt = "%Y-%m-%dT%H:%M:%S.%f"
    return DatasetLogging(
        name=name, parent=parent, level=level, attributes=attributes, logger=logger, date_fmt=date_fmt, **metadata
    )
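The append-versus-preallocate trade-off described above can be sketched with the standard logging module alone. The `GrowingHandler` class below is a hypothetical stand-in, not part of msl-io: it preallocates a buffer of empty rows, grows it proportionally when full (as `create_dataset_logging` does when given a `size` or `shape`), and trims the unused rows in the spirit of `remove_empty_rows`:

```python
import logging


class GrowingHandler(logging.Handler):
    """Append each record to a preallocated buffer, growing it proportionally when full."""

    def __init__(self, size=4):
        super().__init__()
        self.rows = [None] * size  # preallocated "empty" rows
        self.count = 0  # number of rows actually filled

    def emit(self, record):
        if self.count == len(self.rows):
            # grow proportionally to the current size instead of copying on every record
            self.rows.extend([None] * len(self.rows))
        self.rows[self.count] = (record.levelname, record.getMessage())
        self.count += 1

    def remove_empty_rows(self):
        # analogous in spirit to DatasetLogging.remove_empty_rows()
        self.rows = self.rows[: self.count]


logger = logging.getLogger("growing-sketch")
logger.setLevel(logging.INFO)
logger.propagate = False
handler = GrowingHandler(size=4)
logger.addHandler(handler)

for i in range(6):
    logger.info("message %d", i)

handler.remove_empty_rows()  # trim the unused rows before inspecting the data
```

After six records the buffer has grown once (from 4 to 8 rows) rather than six times, and trimming leaves exactly six filled rows.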

create_group ¤

create_group(
    name: str,
    *,
    read_only: bool | None = None,
    **metadata: Any,
) -> Group

Create a new Group.

Parameters:

Name Type Description Default
name str

The name of the new Group. Automatically creates the ancestor Groups if they do not exist.

required
read_only bool | None

Whether to create the new Group in read-only mode. If None, uses the mode for this Group.

None
metadata Any

All additional keyword arguments are used to create the Metadata for the new Group.

{}

Returns:

Type Description
Group

The new Group that was created.

Source code in src/msl/io/node.py
def create_group(self, name: str, *, read_only: bool | None = None, **metadata: Any) -> Group:
    """Create a new [Group][msl.io.node.Group].

    Args:
        name: The name of the new [Group][msl.io.node.Group]. Automatically creates the ancestor
            [Group][msl.io.node.Group]s if they do not exist.
        read_only: Whether to create the new [Group][msl.io.node.Group] in read-only mode.
            If `None`, uses the mode for this [Group][msl.io.node.Group].
        metadata: All additional keyword arguments are used to create the [Metadata][msl.io.metadata.Metadata]
            for the new [Group][msl.io.node.Group].

    Returns:
        The new [Group][msl.io.node.Group] that was created.
    """
    read_only, metadata = self._check(read_only=read_only, **metadata)
    name, parent = self._create_ancestors(name, read_only=read_only)
    return Group(name=name, parent=parent, read_only=read_only, **metadata)
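The ancestor-creation behaviour means a single call with a nested name builds the whole chain of Groups. A minimal sketch of the idea, using a plain dict in place of msl-io's node classes (the helper name `create_group_path` is made up for illustration):

```python
def create_group_path(root, name):
    """Create (or reuse) every group along a '/'-separated path, returning the leaf."""
    node = root
    for part in name.strip("/").split("/"):
        node = node.setdefault(part, {})  # create the ancestor only if it does not exist
    return node


root = {}
leaf = create_group_path(root, "/a/b/c")  # creates /a, /a/b and /a/b/c in one call
leaf["value"] = 1
```

A second call with an existing prefix reuses the nodes already created rather than replacing them.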

datasets ¤

datasets(
    *,
    exclude: str | None = None,
    include: str | None = None,
    flags: int = 0,
) -> Iterator[Dataset]

Yield the Datasets in this Group.

Parameters:

Name Type Description Default
exclude str | None

A regular-expression pattern to use to exclude Datasets. The re.search function is used to compare the exclude pattern with the name of each Dataset. If there is a match, the Dataset is not yielded.

None
include str | None

A regular-expression pattern to use to include Datasets. The re.search function is used to compare the include pattern with the name of each Dataset. If there is a match, the Dataset is yielded.

None
flags int

Regular-expression flags that are passed to re.compile.

0

Yields:

Type Description
Dataset

The filtered Datasets based on the exclude and include patterns. The exclude pattern takes precedence over the include pattern if there is a conflict.

Source code in src/msl/io/node.py
def datasets(self, *, exclude: str | None = None, include: str | None = None, flags: int = 0) -> Iterator[Dataset]:
    """Yield the [Dataset][msl.io.node.Dataset]s in this [Group][msl.io.node.Group].

    Args:
        exclude: A regular-expression pattern to use to exclude [Dataset][msl.io.node.Dataset]s.
            The [re.search][] function is used to compare the `exclude` pattern
            with the [name][msl.io.node.Dataset.name] of each [Dataset][msl.io.node.Dataset]. If
            there is a match, the [Dataset][msl.io.node.Dataset] is not yielded.
        include: A regular-expression pattern to use to include [Dataset][msl.io.node.Dataset]s.
            The [re.search][] function is used to compare the `include` pattern
            with the [name][msl.io.node.Dataset.name] of each [Dataset][msl.io.node.Dataset]. If
            there is a match, the [Dataset][msl.io.node.Dataset] is yielded.
        flags: Regular-expression flags that are passed to [re.compile][].

    Yields:
        The filtered [Dataset][msl.io.node.Dataset]s based on the `exclude` and `include` patterns.
            The `exclude` pattern takes precedence over the `include` pattern if there is a conflict.
    """
    e = None if exclude is None else re.compile(exclude, flags=flags)
    i = None if include is None else re.compile(include, flags=flags)
    for obj in self._mapping.values():
        if isinstance(obj, Dataset):
            if e and e.search(obj.name):
                continue
            if i and not i.search(obj.name):
                continue
            yield obj
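The exclude-over-include precedence follows directly from the order of the checks in the loop above: the exclude pattern is tested first, so a name matching both is skipped. The same filtering can be reproduced with `re.search` on plain strings (the dataset names below are made up for illustration):

```python
import re


def filter_names(names, *, exclude=None, include=None, flags=0):
    """Yield the names that survive the exclude (checked first) and include patterns."""
    e = None if exclude is None else re.compile(exclude, flags=flags)
    i = None if include is None else re.compile(include, flags=flags)
    for name in names:
        if e and e.search(name):
            continue  # exclude takes precedence over include
        if i and not i.search(name):
            continue
        yield name


names = ["/voltage", "/temperature", "/temp_raw"]
kept = list(filter_names(names, include="temp", exclude="raw"))
```

Here `/temp_raw` matches both patterns, and the exclude wins, so only `/temperature` survives.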

descendants ¤

descendants() -> Iterator[Group]

Yield all descendant (children) Groups of this Group.

Yields:

Type Description
Group

The descendants of this Group.

Source code in src/msl/io/node.py
def descendants(self) -> Iterator[Group]:
    """Yield all descendant (children) [Group][msl.io.node.Group]s of this [Group][msl.io.node.Group].

    Yields:
        The descendants of this [Group][msl.io.node.Group].
    """
    for obj in self._mapping.values():
        if isinstance(obj, Group):
            yield obj

groups ¤

groups(
    *,
    exclude: str | None = None,
    include: str | None = None,
    flags: int = 0,
) -> Iterator[Group]

Yield the sub-Groups of this Group.

Parameters:

Name Type Description Default
exclude str | None

A regular-expression pattern to use to exclude sub-Groups. The re.search function is used to compare the exclude pattern with the name of each sub-Group. If there is a match, the sub-Group is not yielded.

None
include str | None

A regular-expression pattern to use to include sub-Groups. The re.search function is used to compare the include pattern with the name of each sub-Group. If there is a match, the sub-Group is yielded.

None
flags int

Regular-expression flags that are passed to re.compile.

0

Yields:

Type Description
Group

The filtered sub-Groups based on the exclude and include patterns. The exclude pattern takes precedence over the include pattern if there is a conflict.

Source code in src/msl/io/node.py
def groups(self, *, exclude: str | None = None, include: str | None = None, flags: int = 0) -> Iterator[Group]:
    """Yield the sub-[Group][msl.io.node.Group]s of this [Group][msl.io.node.Group].

    Args:
        exclude: A regular-expression pattern to use to exclude sub-[Group][msl.io.node.Group]s.
            The [re.search][] function is used to compare the `exclude` pattern with the
            [name][msl.io.node.Group.name] of each sub-[Group][msl.io.node.Group]. If there is a match,
            the sub-[Group][msl.io.node.Group] is not yielded.
        include: A regular-expression pattern to use to include sub-[Group][msl.io.node.Group]s.
            The [re.search][] function is used to compare the `include` pattern with the
            [name][msl.io.node.Group.name] of each sub-[Group][msl.io.node.Group]. If there is a match,
            the sub-[Group][msl.io.node.Group] is yielded.
        flags: Regular-expression flags that are passed to [re.compile][].

    Yields:
        The filtered sub-[Group][msl.io.node.Group]s based on the `exclude` and `include` patterns.
            The `exclude` pattern takes precedence over the `include` pattern if there is a conflict.
    """
    e = None if exclude is None else re.compile(exclude, flags=flags)
    i = None if include is None else re.compile(include, flags=flags)
    for obj in self._mapping.values():
        if isinstance(obj, Group):
            if e and e.search(obj.name):
                continue
            if i and not i.search(obj.name):
                continue
            yield obj

is_dataset staticmethod ¤

is_dataset(obj: object) -> bool

Check if an object is an instance of Dataset.

Parameters:

Name Type Description Default
obj object

The object to check.

required

Returns:

Type Description
bool

Whether obj is an instance of Dataset.

Source code in src/msl/io/node.py
@staticmethod
def is_dataset(obj: object) -> bool:
    """Check if an object is an instance of [Dataset][msl.io.node.Dataset].

    Args:
        obj: The object to check.

    Returns:
        Whether `obj` is an instance of [Dataset][msl.io.node.Dataset].
    """
    return isinstance(obj, Dataset)

is_dataset_logging staticmethod ¤

is_dataset_logging(obj: object) -> bool

Check if an object is an instance of DatasetLogging.

Parameters:

Name Type Description Default
obj object

The object to check.

required

Returns:

Type Description
bool

Whether obj is an instance of DatasetLogging.

Source code in src/msl/io/node.py
@staticmethod
def is_dataset_logging(obj: object) -> bool:
    """Check if an object is an instance of [DatasetLogging][msl.io.node.DatasetLogging].

    Args:
        obj: The object to check.

    Returns:
        Whether `obj` is an instance of [DatasetLogging][msl.io.node.DatasetLogging].
    """
    return isinstance(obj, DatasetLogging)

is_group staticmethod ¤

is_group(obj: object) -> bool

Check if an object is an instance of Group.

Parameters:

Name Type Description Default
obj object

The object to check.

required

Returns:

Type Description
bool

Whether obj is an instance of Group.

Source code in src/msl/io/node.py
@staticmethod
def is_group(obj: object) -> bool:
    """Check if an object is an instance of [Group][msl.io.node.Group].

    Args:
        obj: The object to check.

    Returns:
        Whether `obj` is an instance of [Group][msl.io.node.Group].
    """
    return isinstance(obj, Group)

remove ¤

remove(name: str) -> Dataset | Group | None

Remove a Group or a Dataset.

Parameters:

Name Type Description Default
name str

The name of the Group or Dataset to remove.

required

Returns:

Type Description
Dataset | Group | None

The Group or Dataset that was removed, or None if there was no Group or Dataset with the specified name.

Source code in src/msl/io/node.py
def remove(self, name: str) -> Dataset | Group | None:
    """Remove a [Group][msl.io.node.Group] or a [Dataset][msl.io.node.Dataset].

    Args:
        name: The name of the [Group][msl.io.node.Group] or [Dataset][msl.io.node.Dataset] to remove.

    Returns:
        The [Group][msl.io.node.Group] or [Dataset][msl.io.node.Dataset] that was removed or `None` if
            there was no [Group][msl.io.node.Group] or [Dataset][msl.io.node.Dataset] with the specified `name`.
    """
    name = "/" + name.strip("/")
    return self.pop(name, None)
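As the source shows, the name is first normalised to a single leading slash, so `"temp"`, `"/temp"` and `"temp/"` all address the same child. The normalisation rule, extracted on its own (the function name `normalize` is just for illustration):

```python
def normalize(name):
    # the same rule used by remove(), require_dataset() and require_group()
    return "/" + name.strip("/")
```

Note that `str.strip("/")` removes every leading and trailing slash, so repeated slashes at the ends are also collapsed.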

require_dataset ¤

require_dataset(
    name: str,
    *,
    read_only: bool | None = None,
    **kwargs: Any,
) -> Dataset

Require that a Dataset exists.

If the Dataset exists, it is returned; otherwise it is created and then returned.

Parameters:

Name Type Description Default
name str

The name of the required Dataset. Automatically creates the ancestor Groups if they do not exist.

required
read_only bool | None

Whether to create the required Dataset in read-only mode. If None, uses the mode for this Group.

None
kwargs Any

All additional keyword arguments are passed to Dataset.

{}

Returns:

Type Description
Dataset

The Dataset that was created or that already existed.

Source code in src/msl/io/node.py
def require_dataset(self, name: str, *, read_only: bool | None = None, **kwargs: Any) -> Dataset:
    """Require that a [Dataset][msl.io.node.Dataset] exists.

    If the [Dataset][msl.io.node.Dataset] exists it will be returned, otherwise it is created then returned.

    Args:
        name: The name of the required [Dataset][msl.io.node.Dataset]. Automatically creates the ancestor
            [Group][msl.io.node.Group]s if they do not exist.
        read_only: Whether to create the required [Dataset][msl.io.node.Dataset] in read-only mode.
            If `None`, uses the mode for this [Group][msl.io.node.Group].
        kwargs: All additional keyword arguments are passed to [Dataset][msl.io.node.Dataset].

    Returns:
        The [Dataset][msl.io.node.Dataset] that was created or that already existed.
    """
    name = "/" + name.strip("/")
    dataset_name = name if self.parent is None else self.name + name
    for dataset in self.datasets():
        if dataset.name == dataset_name:
            if read_only is not None:
                dataset.read_only = read_only
            if kwargs:  # only add the kwargs that should be Metadata
                for kw in ["shape", "dtype", "buffer", "offset", "strides", "order", "data"]:
                    kwargs.pop(kw, None)
            dataset.add_metadata(**kwargs)
            return dataset
    return self.create_dataset(name, read_only=read_only, **kwargs)
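The require pattern above is a get-or-create: look up an existing node by its normalised name, merge new metadata into it if found, and only otherwise create it. A generic stdlib sketch of the same pattern (the `Node` and `Registry` classes are hypothetical, not msl-io classes):

```python
class Node:
    """A named object carrying a metadata mapping."""

    def __init__(self, name, **metadata):
        self.name = name
        self.metadata = dict(metadata)


class Registry:
    """Get-or-create container keyed by normalised name."""

    def __init__(self):
        self._nodes = {}

    def require(self, name, **metadata):
        name = "/" + name.strip("/")  # normalise, as require_dataset() does
        node = self._nodes.get(name)
        if node is not None:
            node.metadata.update(metadata)  # merge, in the spirit of add_metadata()
            return node
        node = Node(name, **metadata)
        self._nodes[name] = node
        return node


r = Registry()
a = r.require("data", units="V")
b = r.require("/data/", temperature=20)  # same node, extra metadata merged in
```

Both calls return the same object because the names normalise to `/data`, mirroring how `require_dataset` returns the existing Dataset rather than creating a duplicate.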

require_dataset_logging ¤

require_dataset_logging(
    name: str,
    *,
    level: str | int = "INFO",
    attributes: Sequence[str] | None = None,
    logger: Logger | None = None,
    date_fmt: str | None = None,
    **kwargs: Any,
) -> DatasetLogging

Require that a Dataset exists for handling logging records.

If the DatasetLogging exists, it is returned; otherwise it is created and then returned.

Parameters:

Name Type Description Default
name str

A name to associate with the Dataset. Automatically creates the ancestor Groups if they do not exist.

required
level str | int

The logging level to use.

'INFO'
attributes Sequence[str] | None

The attribute names to include in the Dataset for each logging record. If None, uses asctime, levelname, name, and message.

None
logger Logger | None

The Logger that the DatasetLogging object will be associated with. If None, it is associated with the root Logger.

None
date_fmt str | None

The datetime format code used to represent the asctime attribute. If None, uses the ISO 8601 format "%Y-%m-%dT%H:%M:%S.%f".

None
kwargs Any

All additional keyword arguments are passed to Dataset. By default, every logging record is appended to the Dataset, which guarantees that the size of the Dataset equals the number of logging records that were added to it. However, appending can decrease performance if many logging records are added often, because the data in the Dataset is copied for each record that is added. You can improve performance by specifying an initial size for the Dataset with a shape or a size keyword argument. When the Dataset then needs to grow, additional empty rows (proportional in number to the current size of the Dataset) are created automatically. If you do this, call remove_empty_rows before writing the DatasetLogging to a file, or before interacting with its data, to remove the empty rows that were created.

{}

Returns:

Type Description
DatasetLogging

The DatasetLogging that was created or that already existed.

Source code in src/msl/io/node.py
def require_dataset_logging(
    self,
    name: str,
    *,
    level: str | int = "INFO",
    attributes: Sequence[str] | None = None,
    logger: Logger | None = None,
    date_fmt: str | None = None,
    **kwargs: Any,
) -> DatasetLogging:
    """Require that a [Dataset][msl.io.node.Dataset] exists for handling [logging][] records.

    If the [DatasetLogging][msl.io.node.DatasetLogging] exists it will be returned
    otherwise it is created and then returned.

    Args:
        name: A name to associate with the [Dataset][msl.io.node.Dataset].
            Automatically creates the ancestor [Group][msl.io.node.Group]s if they do not exist.
        level: The [logging level][levels] to use.
        attributes: The [attribute names][logrecord-attributes] to include in the
            [Dataset][msl.io.node.Dataset] for each [logging record][log-record].
            If `None`, uses _asctime_, _levelname_, _name_, and _message_.
        logger: The [Logger][logging.Logger] that the [DatasetLogging][msl.io.node.DatasetLogging] object
            will be associated with. If `None`, it is associated with the _root_ [Logger][logging.Logger].
        date_fmt: The [datetime][datetime.datetime] [format code][strftime-strptime-behavior]
            to use to represent the _asctime_ [attribute][logrecord-attributes] in.
            If `None`, uses the ISO 8601 format `"%Y-%m-%dT%H:%M:%S.%f"`.
        kwargs: All additional keyword arguments are passed to [Dataset][msl.io.node.Dataset].
            By default, every [logging record][log-record] is appended to the
            [Dataset][msl.io.node.Dataset], which guarantees that the size of the
            [Dataset][msl.io.node.Dataset] equals the number of
            [logging records][log-record] that were added to it. However, appending
            can decrease performance if many [logging records][log-record] are
            added often, because the data in the [Dataset][msl.io.node.Dataset] is
            copied for each [logging record][log-record] that is added. You can improve
            performance by specifying an initial size for the [Dataset][msl.io.node.Dataset]
            with a `shape` or a `size` keyword argument. When the
            [Dataset][msl.io.node.Dataset] then needs to grow, additional empty rows
            (proportional in number to the current size of the
            [Dataset][msl.io.node.Dataset]) are created automatically. If you do this,
            call [remove_empty_rows][msl.io.node.DatasetLogging.remove_empty_rows] before
            writing the [DatasetLogging][msl.io.node.DatasetLogging] to a file, or before
            interacting with its data, to remove the _empty_ rows that were created.

    Returns:
        The [DatasetLogging][msl.io.node.DatasetLogging] that was created or that already existed.
    """
    name = "/" + name.strip("/")
    dataset_name = name if self.parent is None else self.name + name
    for dataset in self.datasets():
        if dataset.name == dataset_name:
            if (
                ("logging_level" not in dataset.metadata)
                or ("logging_level_name" not in dataset.metadata)
                or ("logging_date_format" not in dataset.metadata)
            ):
                msg = "The required Dataset was found but it is not used for logging"
                raise ValueError(msg)

            if attributes and (dataset.dtype.names != tuple(attributes)):
                msg = (
                    f"The attribute names of the existing logging Dataset are "
                    f"{dataset.dtype.names} which does not equal {tuple(attributes)}"
                )
                raise ValueError(msg)

            if isinstance(dataset, DatasetLogging):
                return dataset

            # replace the existing Dataset with a new DatasetLogging object
            meta = dataset.metadata.copy()
            data = dataset.data.copy()

            # remove the existing Dataset from its descendants, itself and its ancestors
            groups = (*tuple(self.descendants()), self, *tuple(self.ancestors()))
            for group in groups:
                for dset in group.datasets():
                    if dset is dataset:
                        key = "/" + dset.name.lstrip(group.name)
                        del group._mapping[key]  # noqa: SLF001
                        break

            # temporarily make this Group not in read-only mode
            original_read_only_mode = bool(self._read_only)
            self._read_only: bool = False
            kwargs.update(meta)
            dset = self.create_dataset_logging(
                name,
                level=level,
                attributes=data.dtype.names,
                logger=logger,
                date_fmt=meta.logging_date_format,
                data=data,
                **kwargs,
            )
            self._read_only = original_read_only_mode
            return dset

    return self.create_dataset_logging(
        name, level=level, attributes=attributes, logger=logger, date_fmt=date_fmt, **kwargs
    )

require_group ¤

require_group(
    name: str,
    *,
    read_only: bool | None = None,
    **metadata: Any,
) -> Group

Require that a Group exists.

If the Group exists, it is returned; otherwise it is created and then returned.

Parameters:

Name Type Description Default
name str

The name of the Group to require. Automatically creates the ancestor Groups if they do not exist.

required
read_only bool | None

Whether to return the required Group in read-only mode. If None, uses the mode for this Group.

None
metadata Any

All additional keyword arguments are used as Metadata for the required Group.

{}

Returns:

Type Description
Group

The required Group that was created or that already existed.

Source code in src/msl/io/node.py
def require_group(self, name: str, *, read_only: bool | None = None, **metadata: Any) -> Group:
    """Require that a [Group][msl.io.node.Group] exists.

    If the [Group][msl.io.node.Group] exists it will be returned otherwise it is created then returned.

    Args:
        name: The name of the [Group][msl.io.node.Group] to require. Automatically creates the ancestor
            [Group][msl.io.node.Group]s if they do not exist.
        read_only: Whether to return the required [Group][msl.io.node.Group] in read-only mode.
            If `None`, uses the mode for this [Group][msl.io.node.Group].
        metadata: All additional keyword arguments are used as [Metadata][msl.io.metadata.Metadata]
            for the required [Group][msl.io.node.Group].

    Returns:
        The required [Group][msl.io.node.Group] that was created or that already existed.
    """
    name = "/" + name.strip("/")
    group_name = name if self.parent is None else self.name + name
    for group in self.groups():
        if group.name == group_name:
            if read_only is not None:
                group.read_only = read_only
            group.add_metadata(**metadata)
            return group
    return self.create_group(name, read_only=read_only, **metadata)