cellpy.readers.core#

This module contains several of the most important classes used in cellpy.

It also contains functions used by readers and utils, as well as the file version definitions.

Module Contents#

Classes#

BaseDbReader

Base class for database readers.

Data

Object to store data for a cell-test.

FileID

Class for storing information about the raw-data files.

InstrumentFactory

Factory for instrument loaders.

PickleProtocol

Context for using a specific pickle protocol.

Functions#

check64bit([current_system])

Checks if you are on a 64-bit platform.

collect_capacity_curves(cell[, direction, ...])

Create a list of pandas.DataFrames, one for each charge step.

convert_from_simple_unit_label_to_string_unit_label(k, v)

Convert from simple unit label to string unit label.

find_all_instruments(→ Dict[str, Tuple[str, pathlib.Path]])

Finds all the supported instruments.

generate_default_factory()

This function searches for all available instrument readers

group_by_interpolate(df[, x, y, group_by, ...])

Do a pandas.DataFrame.group_by and perform interpolation for all groups.

humanize_bytes(b[, precision])

Return a humanized string representation of a number of bytes (b).

identify_last_data_point(data)

Find the last data point and store it in the fid instance.

instrument_configurations(→ Dict[str, Any])

This function returns a dictionary with information about the available

interpolate_y_on_x(df[, x, y, new_x, dx, ...])

Interpolate a column based on another column.

pickle_protocol(level)

xldate_as_datetime(xldate[, datemode, option])

Converts a xls date stamp to a more sensible format.

Attributes#

HEADERS_NORMAL

HEADERS_STEP_TABLE

HEADERS_SUMMARY

LOADERS_NOT_READY_FOR_PROD

Q

ureg

class BaseDbReader[source]#

Base class for database readers.

abstract from_batch(batch_name: str, include_key: bool = False, include_individual_arguments: bool = False) → dict[source]#
abstract get_area(pk: int) → float[source]#
abstract get_args(pk: int) → dict[source]#
abstract get_by_column_label(pk: int, name: str) → Any[source]#
abstract get_cell_name(pk: int) → str[source]#
abstract get_cell_type(pk: int) → str[source]#
abstract get_comment(pk: int) → str[source]#
abstract get_experiment_type(pk: int) → str[source]#
abstract get_group(pk: int) → str[source]#
abstract get_instrument(pk: int) → str[source]#
abstract get_label(pk: int) → str[source]#
abstract get_loading(pk: int) → float[source]#
abstract get_mass(pk: int) → float[source]#
abstract get_nom_cap(pk: int) → float[source]#
abstract get_total_mass(pk: int) → float[source]#
abstract inspect_hd5f_fixed(pk: int) → int[source]#
abstract select_batch(batch: str) → List[int][source]#
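
A minimal sketch of how a project-specific database reader might subclass BaseDbReader. The class name and the records dictionary used here are illustrative assumptions; a concrete subclass must implement every abstract method listed above before it can be instantiated.

    from typing import List

    from cellpy.readers.core import BaseDbReader


    class DictJournalReader(BaseDbReader):
        """Hypothetical reader backed by a plain dictionary of records."""

        def __init__(self, records: dict):
            # records: {pk: {"cell_name": ..., "mass": ..., "batch": ...}}
            self.records = records

        def get_cell_name(self, pk: int) -> str:
            return self.records[pk]["cell_name"]

        def get_mass(self, pk: int) -> float:
            return float(self.records[pk]["mass"])

        def select_batch(self, batch: str) -> List[int]:
            return [pk for pk, rec in self.records.items() if rec.get("batch") == batch]

        # ... the remaining abstract methods (get_area, get_loading, get_nom_cap,
        # and so on) must be implemented in the same fashion.
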
class Data(**kwargs)[source]#

Object to store data for a cell-test.

This class is used for storing all the relevant data for a cell-test, i.e. all the data collected by the tester as stored in the raw-files, and user-provided metadata about the cell-test.

raw_data_files#

list of FileID objects.

Type:

list

raw#

raw data.

Type:

pandas.DataFrame

summary#

summary data.

Type:

pandas.DataFrame

steps#

step data.

Type:

pandas.DataFrame

meta_common#

common meta-data.

Type:

CellpyMetaCommon

meta_test_dependent#

test-dependent meta-data.

Type:

CellpyMetaIndividualTest

custom_info#

custom meta-data.

Type:

Any

raw_units#

dictionary with units for the raw data.

Type:

dict

raw_limits#

dictionary with limits for the raw data.

Type:

dict

loaded_from#

name of the file where the data was loaded from.

Type:

str

property active_electrode_area[source]#
property cell_name[source]#
property empty[source]#

Check if the data object is empty.

property has_data[source]#
property has_steps[source]#

Check if the step table exists.

property has_summary[source]#

Check if the summary table exists.

property mass[source]#
property material[source]#
property nom_cap[source]#
property raw_id[source]#
property start_datetime[source]#
property tot_mass[source]#
populate_defaults()[source]#

Populate the data object with default values.
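
A rough usage sketch, assuming a raw file loaded through the cellpy.get entry point (the filename is hypothetical):

    import cellpy

    # load a raw file (hypothetical filename)
    c = cellpy.get("20230101_cell01_cc_01.res")

    data = c.data                  # the Data object for the cell-test
    print(data.raw.head())         # raw data (pandas.DataFrame)
    print(data.summary.head())     # summary data (pandas.DataFrame)
    print(data.has_steps)          # True if the step table exists
    print(data.raw_data_files)     # list of FileID objects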

class FileID(filename: str | cellpy.internals.core.OtherPath = None, is_db: bool = False)[source]#

Class for storing information about the raw-data files.

This class is used for storing and handling raw-data file information. It is important to keep track of when the data was extracted from the raw-data files so that it is easy to know if the hdf5-files used for storing “treated” data are up-to-date.

name#

Filename of the raw-data file.

Type:

str

full_name#

Filename including path of the raw-data file.

Type:

str

size#

Size of the raw-data file.

Type:

float

last_modified#

Last time of modification of the raw-data file.

Type:

datetime

last_accessed#

last time of access of the raw-data file.

Type:

datetime

last_info_changed#

st_ctime of the raw-data file.

Type:

datetime

location#

Location of the raw-data file.

Type:

str

Initialize the FileID class.

property last_data_point[source]#

Get the last data point.

get_last()[source]#

Get last modification time of the file.

get_name()[source]#

Get the filename.

get_raw()[source]#

Get a list with information about the file.

The returned list contains name, size, last_modified and location.

get_size()[source]#

Get the size of the file.

populate(filename: str | cellpy.internals.core.OtherPath)[source]#

Finds the file-stats and populates the class with stat values.

Parameters:

filename (str, OtherPath) – name of the file.
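
A short usage sketch; the path is hypothetical and should point to an existing raw-data file:

    from cellpy.readers.core import FileID

    fid = FileID("data/raw/20230101_cell01_cc_01.res")

    print(fid.get_name())   # filename
    print(fid.get_size())   # file size
    print(fid.get_last())   # last modification time
    print(fid.get_raw())    # [name, size, last_modified, location]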

class InstrumentFactory[source]#

Factory for instrument loaders.

property builders[source]#
create(key: str | None, **kwargs)[source]#

Create the instrument loader module and initialize the loader class.

Parameters:
  • key – instrument id

  • **kwargs – sent to the initializer of the loader class.

Returns:

instance of loader class.

create_all(**kwargs)[source]#

Create all the instrument loader modules.

Parameters:

**kwargs – sent to the initializer of the loader class.

Returns:

dict of instances of loader classes.

get_registered_builder(key)[source]#
get_registered_builders()[source]#
get_registered_kwargs()[source]#
query(key: str, variable: str) → Any[source]#

Performs a get_params lookup for the instrument loader.

Parameters:
  • key – instrument id.

  • variable – the variable you want to lookup.

Returns:

The value of the variable if the loader's get_params method supports it.

register_builder(key: str, builder: Tuple[str, Any], **kwargs) → None[source]#

Register an instrument loader module.

Parameters:
  • key – instrument id

  • builder – (module_name, module_path)

  • **kwargs – stored in the factory (intended for future use: setting defaults for the builders so that .query can be used).

unregister_builder(key: str) → None[source]#

Unregister an instrument loader module.

Parameters:

key – instrument id

class PickleProtocol(level)[source]#

Context for using a specific pickle protocol.
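
A minimal sketch, assuming PickleProtocol is used as a context manager that temporarily pins the protocol used by the pickle module:

    import pickle

    from cellpy.readers.core import PickleProtocol

    payload = {"cycle": 1, "capacity_mAh": 1.23}

    # use protocol 4 inside the context (assumed behaviour), e.g. to keep
    # pickled data readable by older Python versions
    with PickleProtocol(4):
        blob = pickle.dumps(payload)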

check64bit(current_system='python')[source]#

Checks if you are on a 64-bit platform.

collect_capacity_curves(cell, direction='charge', trim_taper_steps=None, steps_to_skip=None, steptable=None, max_cycle_number=None, **kwargs)[source]#

Create a list of pandas.DataFrames, one for each charge step.

Each DataFrame is named by its cycle number.

Parameters:
  • cell (CellpyCell) – object

  • direction (str) – “charge” or “discharge” (defaults to “charge”).

  • trim_taper_steps (integer) – number of taper steps to skip (counted from the end, i.e. 1 means skip last step in each cycle).

  • steps_to_skip (list) – step numbers that should not be included.

  • steptable (pandas.DataFrame) – optional steptable.

  • max_cycle_number (int) – only select cycles up to this value.

Returns:

list of pandas.DataFrames, list of cycle numbers, minimum voltage value, maximum voltage value
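
A usage sketch, assuming a CellpyCell object obtained through cellpy.get (the filename is hypothetical):

    import cellpy

    from cellpy.readers.core import collect_capacity_curves

    c = cellpy.get("20230101_cell01_cc_01.res")  # hypothetical raw file

    curves, cycles, v_min, v_max = collect_capacity_curves(
        c, direction="discharge", max_cycle_number=10
    )

    for cycle, frame in zip(cycles, curves):
        print(cycle, len(frame))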

convert_from_simple_unit_label_to_string_unit_label(k, v)[source]#

Convert from simple unit label to string unit label.

find_all_instruments(name_contains: str | None = None) → Dict[str, Tuple[str, pathlib.Path]][source]#

Finds all the supported instruments.

generate_default_factory()[source]#

This function searches for all available instrument readers and registers them in an InstrumentFactory instance.

Returns:

InstrumentFactory
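
A usage sketch; the instrument id passed to create is only an example and should be one of the registered names:

    from cellpy.readers.core import generate_default_factory

    factory = generate_default_factory()

    # names of all registered instrument loaders
    print(factory.get_registered_builders())

    # create a loader instance for one instrument (example id)
    loader = factory.create("arbin_res")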

group_by_interpolate(df, x=None, y=None, group_by=None, number_of_points=100, tidy=False, individual_x_cols=False, header_name='Unit', dx=10.0, generate_new_x=True)[source]#

Do a pandas.DataFrame.group_by and perform interpolation for all groups.

This function is a wrapper around an internal interpolation function in cellpy (that uses scipy.interpolate.interp1d) that combines doing a group-by operation and interpolation.

Parameters:
  • df (pandas.DataFrame) – the dataframe to morph.

  • x (str) – the header for the x-value (defaults to the normal header step_time_txt). Note that the default group_by column is the cycle column, and each cycle normally consists of several steps, so you risk interpolating/merging several curves on top of each other (not good).

  • y (str) – the header for the y-value (defaults to normal header voltage_txt).

  • group_by (str) – the header to group by (defaults to normal header cycle_index_txt)

  • number_of_points (int) – if generating new x-column, how many values it should contain.

  • tidy (bool) – return the result in tidy (i.e. long) format.

  • individual_x_cols (bool) – return as xy xy xy … data.

  • header_name (str) – name for the second level of the columns (only applies for xy xy xy … data) (defaults to “Unit”).

  • dx (float) – if generating new x-column and number_of_points is None or zero, distance between the generated values.

  • generate_new_x (bool) –

    create a new x-column by using the x-min and x-max values from the original dataframe where the method is set by the number_of_points key-word:

    1. if number_of_points is not None (default is 100):

      new_x = np.linspace(x_max, x_min, number_of_points)
      
    2. else:

      new_x = np.arange(x_max, x_min, dx)
      

Returns:

pandas.DataFrame with interpolated x- and y-values. The returned dataframe is in tidy (long) format if tidy=True.
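
A small self-contained sketch using explicit column names instead of the cellpy defaults:

    import numpy as np
    import pandas as pd

    from cellpy.readers.core import group_by_interpolate

    # toy data: two groups ("cycles") with different x-grids
    df = pd.DataFrame(
        {
            "cycle": [1] * 5 + [2] * 5,
            "time": list(np.linspace(0, 10, 5)) + list(np.linspace(0, 12, 5)),
            "voltage": np.linspace(3.0, 4.2, 10),
        }
    )

    out = group_by_interpolate(
        df, x="time", y="voltage", group_by="cycle",
        number_of_points=50, tidy=True,
    )
    print(out.head())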

humanize_bytes(b, precision=1)[source]#

Return a humanized string representation of a number of bytes (b).
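
For example (the exact formatting of the returned string is an assumption):

    from cellpy.readers.core import humanize_bytes

    print(humanize_bytes(1_048_576))                 # roughly "1.0 MB"
    print(humanize_bytes(123_456_789, precision=2))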

identify_last_data_point(data)[source]#

Find the last data point and store it in the fid instance.

instrument_configurations(search_text: str = '') → Dict[str, Any][source]#

This function returns a dictionary with information about the available instrument loaders and their models.

Parameters:

search_text – string to search for in the instrument names.

Returns:

nested dictionary with information about the available instrument loaders and their models.

Return type:

dict
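
A usage sketch (the search string is only an example):

    from cellpy.readers.core import instrument_configurations

    configs = instrument_configurations()   # all available loaders
    print(list(configs))                    # loader names

    neware_only = instrument_configurations("neware")
    print(list(neware_only))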

interpolate_y_on_x(df, x=None, y=None, new_x=None, dx=10.0, number_of_points=None, direction=1, **kwargs)[source]#

Interpolate a column based on another column.

Parameters:
  • df – DataFrame with the (cycle) data.

  • x – Column name for the x-value (defaults to the step-time column).

  • y – Column name for the y-value (defaults to the voltage column).

  • new_x (numpy array or None) – Interpolate using these new x-values instead of generating x-values based on dx or number_of_points.

  • dx – step-value (defaults to 10.0)

  • number_of_points – number of points for interpolated values (use instead of dx and overrides dx if given).

  • direction (-1,1) – if direction is negative, then invert the x-values before interpolating.

  • **kwargs – arguments passed to scipy.interpolate.interp1d

Returns:

DataFrame with interpolated y-values based on given or generated x-values.
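
A small self-contained sketch with explicit column names:

    import numpy as np
    import pandas as pd

    from cellpy.readers.core import interpolate_y_on_x

    df = pd.DataFrame(
        {
            "step_time": np.linspace(0, 100, 11),
            "voltage": np.linspace(3.0, 4.2, 11),
        }
    )

    # interpolate voltage onto 5 evenly spaced step_time values
    out = interpolate_y_on_x(df, x="step_time", y="voltage", number_of_points=5)
    print(out)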

pickle_protocol(level)[source]#
xldate_as_datetime(xldate, datemode=0, option='to_datetime')[source]#

Converts an xls date stamp to a more sensible format.

Parameters:
  • xldate (str, int) – date stamp in Excel format.

  • datemode (int) – 0 for 1900-based, 1 for 1904-based.

  • option (str) – one of “to_datetime”, “to_float”, or “to_string”; determines the type of the return value.

Returns:

datetime (datetime object, float, or string).
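
For example (the numeric value corresponds to 2021-01-01 in the 1900-based Excel date system):

    from cellpy.readers.core import xldate_as_datetime

    print(xldate_as_datetime(44197, datemode=0, option="to_datetime"))
    print(xldate_as_datetime(44197, datemode=0, option="to_string"))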

HEADERS_NORMAL[source]#
HEADERS_STEP_TABLE[source]#
HEADERS_SUMMARY[source]#
LOADERS_NOT_READY_FOR_PROD = ['ext_nda_reader'][source]#
Q[source]#
ureg[source]#