Core Utilities

The core utilities provide fundamental functionality for configuration, class manipulation, function composition, and common operations.

config

Configuration management with the Setting class - a nested dictionary with dot notation access, lock/unlock mechanism, and environment-based configuration loading.

Configuration-related settings, following 12factor.net.

class Setting(*args, **kwargs)[source]

Bases: dict

Dict where d['foo'] can also be accessed as d.foo.

Automatically creates new sub-attributes of type Setting. This behavior can be turned off later by locking the instance.

Warning

Not copy safe.

Basic Usage:

>>> cfg = Setting()
>>> cfg.unlock() # locked after config.py load
>>> cfg.foo.bar = 1
>>> hasattr(cfg.foo, 'bar')
True
>>> cfg.foo.bar
1

Locking Behavior:

>>> cfg.lock()
>>> cfg.foo.bar = 2
Traceback (most recent call last):
 ...
ValueError: This Setting object is locked from editing
>>> cfg.foo.baz = 3
Traceback (most recent call last):
 ...
ValueError: This Setting object is locked from editing

Unlocking:

>>> cfg.unlock()
>>> cfg.foo.baz = 3
>>> cfg.foo.baz
3
__getattr__(name)[source]

Create sub-setting fields on the fly.

static lock()[source]
static unlock()[source]
class ConfigOptions[source]

Bases: ABC

Abstract base class for loading options from config.py.

classmethod from_config(setting, config=None)[source]
load_options(func=None, *, cls=<class 'libb.config.ConfigOptions'>)[source]

Wrapper that builds dataclass options from a config file.

Standard interface:

  • options: str | dict | ConfigOptions | None

  • config: config module that defines options in Settings format

  • kwargs: additional keyword arguments to pass to the function

Setup:

>>> from dataclasses import dataclass
>>> from libb import Setting, create_mock_module
>>> Setting.unlock()
>>> test = Setting()
>>> test.foo.ftp.host = 'foo'
>>> test.foo.ftp.user = 'bar'
>>> test.foo.ftp.pazz = 'baz'
>>> Setting.lock()
>>> create_mock_module('test_config', {'test': test})
>>> import test_config
>>> @dataclass
... class Options(ConfigOptions):
...     host: str = None
...     user: str = None
...     pazz: str = None

On a Function:

>>> @load_options(cls=Options)
... def testfunc(options=None, config=None, **kwargs):
...     return options.host, options.user, options.pazz
>>> testfunc('test.foo.ftp', config=test_config)
('foo', 'bar', 'baz')

As Simple Kwargs:

>>> testfunc(host='foo', user='bar', pazz='baz')
('foo', 'bar', 'baz')

On a Class:

>>> class Test:
...     @load_options(cls=Options)
...     def __init__(self, options, config, **kwargs):
...         self.host = options.host
...         self.user = options.user
...         self.pazz = options.pazz
>>> t = Test('test.foo.ftp', test_config)
>>> t.host, t.user, t.pazz
('foo', 'bar', 'baz')
configure_environment(module, **config_overrides)[source]

Configure environment settings at runtime.

Dynamically sets configuration values on Setting objects in the provided module. Keys should follow the pattern setting_attribute or setting_nested_attribute.

Parameters:
  • module – The module containing Setting objects to configure.

  • config_overrides (Any) – Configuration values to set with keys as dotted paths.

Return type:

None

Example:

>>> from libb import create_mock_module
>>> Setting.unlock()
>>> db = Setting()
>>> db.host = 'localhost'
>>> Setting.lock()
>>> create_mock_module('my_config', {'db': db})
>>> import my_config
>>> configure_environment(my_config, db_host='remotehost')
>>> my_config.db.host
'remotehost'
patch_library_config(library_name, config_name='config', **config_overrides)[source]

Patch a library’s config module directly in sys.modules.

Finds and patches the library’s config module before the library imports it. Works regardless of import order by patching the config module directly.

Parameters:
  • library_name (str) – Name of the library whose config should be patched.

  • config_name (str) – Name of the config module (default: ‘config’).

  • config_overrides (Any) – Configuration values to set with keys as dotted paths.

Return type:

None

Example:

>>> import sys
>>> from libb import create_mock_module
>>> Setting.unlock()
>>> api = Setting()
>>> api.key = 'oldkey'
>>> Setting.lock()
>>> create_mock_module('mylib', {})  # parent module
>>> create_mock_module('mylib.config', {'api': api})
>>> patch_library_config('mylib', api_key='newkey')
>>> sys.modules['mylib.config'].api.key
'newkey'
setting_unlocked(setting)[source]

Context manager to safely modify a setting with unlock/lock protection.

Parameters:

setting (Setting) – The Setting object to unlock/lock.

Example:

>>> cfg = Setting()
>>> cfg.lock()
>>> with setting_unlocked(cfg):
...     cfg.foo = 'bar'
>>> cfg.foo
'bar'
>>> cfg.baz = 'qux'
Traceback (most recent call last):
 ...
ValueError: This Setting object is locked from editing
get_tempdir()[source]

Get temporary directory setting from environment or system default.

Returns:

Setting object with dir attribute pointing to temp directory.

Return type:

Setting

Uses CONFIG_TMPDIR_DIR environment variable if set, otherwise falls back to system temp directory.

get_vendordir()[source]

Get vendor directory setting from environment or system default.

Returns:

Setting object with dir attribute pointing to vendor directory.

Return type:

Setting

Uses CONFIG_VENDOR_DIR environment variable if set, otherwise falls back to system temp directory.

get_outputdir()[source]

Get output directory setting from environment or system default.

Returns:

Setting object with dir attribute pointing to output directory.

Return type:

Setting

Uses CONFIG_OUTPUT_DIR environment variable if set, otherwise falls back to system temp directory.

get_localdir()[source]

Get local data directory setting using platform-appropriate location.

Returns:

Setting object with dir attribute pointing to local data directory.

Return type:

Setting

Uses platformdirs to determine the appropriate local data directory for the current operating system.
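
Illustrative usage (a sketch; the actual paths depend on the CONFIG_* environment variables and the platform):

tmpdir = get_tempdir().dir       # system temp dir unless CONFIG_TMPDIR_DIR is set
outputdir = get_outputdir().dir  # system temp dir unless CONFIG_OUTPUT_DIR is set
localdir = get_localdir().dir    # platform-specific local data dir via platformdirs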

classes

Class utilities including singleton enforcement, memoization, lazy properties, metaclass resolution, and dynamic instance extension.

attrs(*attrnames)[source]

Create property getters/setters for private attributes.

Automatically generates property accessors for attributes that follow the _name convention, allowing clean access to private attributes.

Parameters:

attrnames (str) – Names of attributes to create properties for (without underscore prefix).

Return type:

None

Basic Usage:

>>> class Foo:
...     _a = 1
...     _b = 2
...     _c = 3
...     _z = (_a, _b, _c)
...     attrs('a', 'b', 'c', 'z')
>>> f = Foo()
>>> f.a
1
>>> f.a+f.b==f.c
True

Setter Functionality:

>>> f.a = 2
>>> f.a==f._a==2
True

Lazy Definitions:

>>> len(f.z)==3
True
>>> sum(f.z)==6
True
>>> f.z[0]==f._z[0]==1
True
>>> f.z = (4, 5, 6,)
>>> sum(f.z)
15
>>> f.a==2
True
include(source, names=())[source]

Include dictionary items as class attributes during class declaration.

Injects dictionary key-value pairs into the calling class namespace, optionally filtering by specific names.

Parameters:
  • source (dict) – Dictionary containing attributes to include.

  • names (tuple) – Optional tuple of specific attribute names to include (includes all if empty).

Return type:

None

Include All Attributes:

>>> d = dict(x=10, y='foo')
>>> class Foo:
...     include(d)
>>> Foo.x
10
>>> Foo.y
'foo'

Include Specific Attributes:

>>> class Boo:
...     include(d, ('y',))
>>> hasattr(Boo, 'x')
False
>>> hasattr(Boo, 'y')
True
singleton(cls)[source]

Decorator that enforces singleton pattern on a class.

Ensures only one instance of the decorated class can exist. All calls to the class return the same instance.

Parameters:

cls (type) – The class to convert to a singleton.

Return type:

object

Returns:

The single instance of the class.

Basic Usage:

>>> @singleton
... class Foo:
...     _x = 100
...     _y = 'y'
...     attrs('x', 'y')
>>> F = Foo
>>> F() is F() is F
True
>>> id(F()) == id(F())
True

Shared State:

>>> f = F()
>>> f.x == F().x == f.x == 100
True
>>> F.x = 50
>>> f.x == F().x == F.x == 50
True

Deep Copy Behavior:

>>> import copy
>>> fc = copy.deepcopy(f)
>>> FC = copy.deepcopy(F)
>>> fc.y==f.y==F.y==FC.y=='y'
True
memoize(obj)[source]

Decorator that caches function results based on arguments.

Stores function call results in a cache dictionary attached to the function itself, avoiding redundant computations for repeated calls with the same arguments.

Parameters:

obj (Callable[[ParamSpec(P)], TypeVar(R)]) – The function to memoize.

Return type:

Callable[[ParamSpec(P)], TypeVar(R)]

Returns:

A wrapped function with caching behavior.

Basic Usage:

>>> def n_with_sum_k(n, k):
...     if n==0:
...         return 0
...     elif k==0:
...         return 1
...     else:
...         less_n = n_with_sum_k(n-1, k)
...         less_k = n_with_sum_k(n, k-1)
...         less_both = n_with_sum_k(n-1, k-1)
...         return less_n + less_k + less_both

Memoization Speeds Up Recursive Calls:

>>> n_with_sum_k_mz = memoize(n_with_sum_k)
>>> n_with_sum_k_mz(3, 5)
61
>>> n_with_sum_k_mz.cache
{((3, 5), ()): 61}
class classproperty(fget=None, fset=None, fdel=None, doc=None)[source]

Bases: property

Decorator that creates computed properties at the class level.

Similar to @property but works on classes rather than instances, allowing dynamic class-level attributes.

Basic Usage:

>>> class Foo:
...     include(dict(a=1, b=2))
...     @classproperty
...     def c(cls):
...         return cls.a+cls.b
>>> Foo.a
1
>>> Foo.b
2
>>> Foo.c
3

Dynamic Updates:

>>> Foo.a = 2
>>> Foo.c
4
delegate(deleg, attrs)[source]

Delegate attribute access to another object.

Creates properties that forward attribute access to a specified delegate object, enabling composition over inheritance.

Parameters:
  • deleg (str) – Name of the attribute containing the delegate object.

  • attrs (str or list[str]) – Single attribute name or list of attribute names to delegate.

Return type:

None

Delegate Simple Attributes:

>>> class X:
...     a = 1
>>> class Y:
...     x = X()
...     delegate('x', 'a')
>>> Y().a
1

Delegate Methods:

>>> class A:
...     def echo(self, x):
...         print(x)
>>> class B:
...     a = A()
...     delegate('a', ['echo'])
>>> B().echo('whoa!')
whoa!
lazy_property(fn)[source]

Decorator that makes a property lazy-evaluated.

Computes the property value only once on first access, then caches the result for subsequent accesses. Useful for expensive computations.

Parameters:

fn (Callable[[Any], TypeVar(R)]) – The property method to make lazy.

Return type:

property

Returns:

A lazy property descriptor.

Basic Lazy Evaluation:

>>> import time
>>> class Sloth:
...     def _slow_cool(self, n):
...         time.sleep(n)
...         return n**2
...     @lazy_property
...     def slow(self):
...         return True
...     @lazy_property
...     def cool(self):
...         return self._slow_cool(3)

Instantiation is Fast:

>>> x = time.time()
>>> s = Sloth()
>>> time.time()-x < 1
True
>>> time.time()-x < 1
True

First Access Triggers Computation:

>>> hasattr(s, '_lazy_slow')
False
>>> s.slow
True
>>> hasattr(s, '_lazy_slow')
True

Expensive Computation Happens Once:

>>> s.cool
9
>>> 3 < time.time()-x < 6
True
>>> s.cool
9
>>> 3 < time.time()-x < 6
True
class cachedstaticproperty(func)[source]

Bases: object

Decorator combining @property and @staticmethod with caching.

Creates a class-level property that is computed once on first access and cached for subsequent accesses.

Basic Usage (expensive computation runs only once):

>>> def somecalc():
...     print('Running somecalc...')
...     return 1
>>> class Foo:
...    @cachedstaticproperty
...    def somefunc():
...        return somecalc()
>>> Foo.somefunc
Running somecalc...
1
>>> Foo.somefunc
1
class staticandinstancemethod(f)[source]

Bases: object

Decorator allowing a method to work as both static and instance method.

When called on the class, self is None. When called on an instance, self is the instance.

Basic Usage (dual behavior):

>>> class Foo:
...     @staticandinstancemethod
...     def bar(self, x, y):
...         print(self is None and "static" or "instance")
>>> Foo.bar(1,2)
static
>>> Foo().bar(1,2)
instance
metadict = <WeakValueDictionary>

Cache for generated metaclasses to avoid redundant class creation.

makecls(*metas, **options)[source]

Class factory that resolves metaclass conflicts automatically.

When multiple inheritance involves conflicting metaclasses, this factory generates a compatible metaclass that inherits from all necessary metaclasses.

Parameters:
  • metas (type) – Explicit metaclasses to use.

  • options (Any) –

    Keyword options:

    • priority: If True, given metaclasses take precedence over base metaclasses.

Return type:

Callable[[str, tuple[type, ...], dict[str, Any]], type]

Returns:

A class factory function that creates classes with resolved metaclasses.

Metaclass Conflict Resolution:

>>> class M_A(type):
...     pass
>>> class M_B(type):
...     pass
>>> class A(metaclass=M_A):
...     pass
>>> class B(metaclass=M_B):
...     pass

Normal Inheritance Fails:

>>> class C(A,B):
...     pass
Traceback (most recent call last):
...
TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases

Using makecls Resolves the Conflict:

>>> class C(A,B,metaclass=makecls()):
...    pass
>>> (C, C.__class__)
(<class '....C'>, <class '...._M_AM_B'>)

Metaclass Caching:

>>> class D(A,B,metaclass=makecls()):
...    pass
>>> (D, D.__class__)
(<class '....D'>, <class '...._M_AM_B'>)
>>> C.__class__ is D.__class__
True
extend_instance(obj, cls, left=True)[source]

Dynamically extend an instance’s class hierarchy at runtime.

Modifies an object’s class to include additional base classes, effectively adding mixins or extending functionality after instantiation.

Parameters:
  • obj (object) – The instance to extend.

  • cls (type) – The class to mix into the instance’s hierarchy.

  • left (bool) – If True, adds cls with higher precedence; if False, lower precedence.

Return type:

None

Method Resolution Order Demonstration:

>>> from pprint import pprint
>>> class X:pass
>>> class Y: pass
>>> class Z:pass
>>> class A(X,Y):pass
>>> class B(A,Y,Z):pass
>>> class F(B): pass
>>> pprint(F.mro())
[<class '....F'>,
 <class '....B'>,
 <class '....A'>,
 <class '....X'>,
 <class '....Y'>,
 <class '....Z'>,
 <class 'object'>]

Left Precedence (higher priority):

>>> class F_L:
...     def __init__(self):
...         extend_instance(self, B, left=True)
>>> f_l = F_L()
>>> pprint(f_l.__class__.__mro__)
(<class '....F_L'>,
 <class '....B'>,
 <class '....A'>,
 <class '....X'>,
 <class '....Y'>,
 <class '....Z'>,
 <class '....F_L'>,
 <class 'object'>)

Right Precedence (lower priority):

>>> class F_R:
...     def __init__(self):
...         extend_instance(self, B, left=False)
>>> f_r = F_R()
>>> pprint(f_r.__class__.__mro__)
(<class '....F_R'>,
 <class '....F_R'>,
 <class '....B'>,
 <class '....A'>,
 <class '....X'>,
 <class '....Y'>,
 <class '....Z'>,
 <class 'object'>)
ultimate_type(typeobj)[source]

Find the ultimate non-object base class in an inheritance hierarchy.

Traverses the inheritance chain to find the most fundamental base class that isn’t ‘object’ itself. Useful for identifying the core type of subclassed objects.

Parameters:

typeobj (object | type | None) – An object, type, or None to analyze.

Returns:

The ultimate base type (excluding object).

Return type:

type

Finding Base Types:

>>> import datetime
>>> class DateFoo(datetime.date):
...     pass
>>> class DateBar(DateFoo):
...    pass
>>> d0 = datetime.date(2000, 1, 1)
>>> d1 = DateFoo(2000, 1, 1)
>>> d2 = DateBar(2000, 1, 1)
>>> ultimate_type(d0)
<class 'datetime.date'>
>>> ultimate_type(d1)
<class 'datetime.date'>
>>> ultimate_type(d1)
<class 'datetime.date'>
>>> ultimate_type(d1.__class__)
<class 'datetime.date'>
>>> ultimate_type(d2.__class__)
<class 'datetime.date'>

Special Cases:

>>> ultimate_type(None)
<class 'NoneType'>
>>> ultimate_type(object)
<class 'object'>
catch_exception(f=None, *, level=10)[source]

Decorator that catches and reports exceptions without re-raising.

Can be used with or without parameters to specify the logging level.

Parameters:
  • f (Callable[[ParamSpec(P)], TypeVar(R)] | None) – Function to wrap with exception handling (when used without parameters).

  • level (int) – Logging level for exception details (default: logging.DEBUG).

Return type:

Callable[[Callable[[ParamSpec(P)], TypeVar(R)]], Callable[[ParamSpec(P)], TypeVar(R)]] | Callable[[ParamSpec(P)], Optional[TypeVar(R)]]

Returns:

Wrapped function that prints exceptions instead of raising them.

Default Usage (DEBUG level):

>>> @catch_exception
... def divide(x, y):
...     return x / y
>>> divide(1, 0) is None
True

Specifying Log Level:

>>> import logging
>>> @catch_exception(level=logging.ERROR)
... def risky_operation():
...     raise ValueError("Something went wrong")
>>> risky_operation() is None
True
class ErrorCatcher(name, bases, dct)[source]

Bases: type

Metaclass that wraps all methods with exception catching.

Automatically applies exception handling to all callable attributes of a class, preventing exceptions from propagating. Can optionally specify the logging level for all wrapped methods via the _error_log_level class attribute.

Automatic Exception Handling:

>>> import logging
>>> logging.getLogger(__name__).setLevel(logging.CRITICAL)
>>> class Test(metaclass=ErrorCatcher):
...     def __init__(self, val):
...         self.val = val
...     def calc(self):
...         return self.val / 0
>>> t = Test(5)
>>> t.calc() is None
True

With Custom Log Level:

>>> class TestWithLevel(metaclass=ErrorCatcher):
...     _error_log_level = logging.ERROR
...     def risky(self):
...         raise RuntimeError("Oops")
>>> t2 = TestWithLevel()
>>> t2.risky() is None
True

func

Function decorators and composition utilities: compose, decompose, repeat, delay, suppresswarning, MultiMethod dispatch.

is_instance_method(func)[source]

Check if a function is an instance method.

Parameters:

func – Function to check.

Returns:

True if function is an instance method.

Return type:

bool

Example:

>>> class MyClass:
...     def my_method(self):
...         pass
>>> def my_function():
...     pass
>>> is_instance_method(MyClass.my_method)
True
>>> is_instance_method(my_function)
False
find_decorators(target)[source]

Find decorators applied to functions in a target module/class.

Parameters:

target – Module or class to inspect.

Returns:

Dictionary mapping function names to decorator AST representations.

Return type:

dict

Example:

>>> class Example:  
...     @staticmethod
...     def static_method():
...         pass
>>> decorators = find_decorators(Example)  
>>> 'static_method' in decorators  
True
compose(*functions)[source]

Return a function folding over a list of functions.

Each function argument must take a single parameter.

Parameters:

functions – Functions to compose.

Returns:

Composed function.

Example:

>>> f = lambda x: x+4
>>> g = lambda y: y/2
>>> h = lambda z: z*3
>>> fgh = compose(f, g, h)

Beware of order for non-commutative functions (first in, last out):

>>> fgh(2)==h(g(f(2)))
False
>>> fgh(2)==f(g(h(2)))
True
composable(decorators)[source]

Decorator that takes a list of decorators to be composed.

Useful when a list of decorators starts getting large and unruly.

Parameters:

decorators – List of decorators to compose.

Returns:

Composed decorator.

Setup:

>>> def m3(func):
...     def wrapped(n):
...         return func(n)*3.
...     return wrapped
>>> def d2(func):
...     def wrapped(n):
...         return func(n)/2.
...     return wrapped
>>> def p3(n):
...     return n+3.
>>> @m3
... @d2
... def plusthree(x):
...     return p3(x)
>>> @composable([d2, m3])
... def cplusthree(x):
...     return p3(x)

Note: composed decorators are not interchangeable with compose:

>>> func = compose(m3, d2, p3)(4)
>>> hasattr(func, '__call__')
True
>>> compose(lambda n: n*3., lambda n: n/2., p3)(4)
10.5

What they do allow is consolidating longer decorator chains:

>>> plusthree(4)
10.5
>>> cplusthree(4)
10.5
copydoc(fromfunc, sep='\n', basefirst=True)[source]

Decorator to copy the docstring of another function.

Parameters:
  • fromfunc – Function to copy docstring from.

  • sep (str) – Separator between docstrings.

  • basefirst (bool) – If True, base docstring comes first.

Returns:

Decorator function.

Example:

>>> class A():
...     def myfunction():
...         '''Documentation for A.'''
...         pass
>>> class B(A):
...     @copydoc(A.myfunction)
...     def myfunction():
...         '''Extra details for B.'''
...         pass
>>> class C(A):
...     @copydoc(A.myfunction, basefirst=False)
...     def myfunction():
...         '''Extra details for B.'''
...         pass

Do not activate doctests:

>>> class D():
...     def myfunction():
...         '''.>>> 2 + 2 = 5'''
...         pass
>>> class E(D):
...     @copydoc(D.myfunction)
...     def myfunction():
...         '''Extra details for E.'''
...         pass
>>> help(B.myfunction)
Help on function myfunction in module ...:

myfunction()
    Documentation for A.
    Extra details for B.

>>> help(C.myfunction)
Help on function myfunction in module ...:

myfunction()
    Extra details for B.
    Documentation for A.

>>> help(E.myfunction)
Help on function myfunction in module ...:

myfunction()
    .>>> 2 + 2 = 5 
    Extra details for E.
get_calling_function()[source]

Find the calling function in many common cases.

Returns:

The calling function object.

Raises:

AttributeError – If function cannot be found.

See also

See tests/test_func.py for usage examples.

repeat(x_times=2)[source]

Decorator to repeat a function multiple times.

Parameters:

x_times (int) – Number of times to repeat (default: 2).

Returns:

Decorator function.

Example:

>>> @repeat(3)
... def printme():
...    print('Foo')
...    return 'Bar'
>>> printme()
Foo
Foo
Foo
'Bar'
timing(func)[source]

Decorator to log function execution time.

Parameters:

func – Function to time.

Returns:

Wrapped function that logs execution time.

Example:

>>> @timing  
... def slow_function():
...     import time
...     time.sleep(0.01)
...     return 42
>>> result = slow_function()  
>>> result  
42
suppresswarning(func)[source]

Decorator to suppress warnings during function execution.

Parameters:

func – Function to wrap.

Returns:

Wrapped function that suppresses warnings.

Example:

>>> import warnings
>>> @suppresswarning
... def noisy_function():
...     warnings.warn("This warning is suppressed")
...     return "done"
>>> noisy_function()
'done'
class MultiMethod(name)[source]

Bases: object

Multimethod that supports args (no kwargs by design).

Use with the @multimethod decorator to register type-specific implementations.

register(types, function)[source]
multimethod(*types)[source]

Decorator for type-based method dispatch (multiple dispatch).

Register function overloads that dispatch based on argument types.

Parameters:

types – Type(s) to match for this overload.

Returns:

Decorator that registers the function with MultiMethod.

Example:

>>> @multimethod(int, int)
... def foo(a, b):
...     return a + b
>>> @multimethod(str, str)
... def foo(a, b):
...     return a + ' ' + b
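
Calling the overloaded name dispatches on argument types (a sketch, assuming the overloads registered above behave as described):

>>> foo(1, 2)
3
>>> foo('hello', 'world')
'hello world'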

iter

Iterator utilities extending itertools: chunking, partitioning, windowing, flattening, grouping, and sequence operations.

chunked(iterable, n, strict=False)[source]

Split iterable into chunks of length n. See more_itertools.chunked().
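
A minimal sketch, assuming the wrapper mirrors more_itertools.chunked and yields lists:

>>> list(chunked([1, 2, 3, 4, 5], 2))
[[1, 2], [3, 4], [5]]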

chunked_even(iterable, n)[source]

Split iterable into n chunks of roughly equal size. See more_itertools.chunked_even().

collapse(*args)

Recursively flatten nested lists/tuples into a single list.

Parameters:

args – Items to collapse (can be nested lists/tuples).

Returns:

Flattened list of items.

Examples

>>> collapse([['a', ['b', ('c', 'd')]], -2, -1, [0, 1]])
['a', 'b', 'c', 'd', -2, -1, 0, 1]
compact(iterable)[source]

Remove falsy values from an iterable (including None and 0).

Parameters:

iterable – Iterable to filter.

Returns:

Tuple of truthy values.

Return type:

tuple

Warning

This also removes zero!

Example:

>>> compact([0,2,3,4,None,5])
(2, 3, 4, 5)
grouper(iterable, n, incomplete='fill', fillvalue=None)[source]

Collect data into fixed-length chunks. See more_itertools.grouper().
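
A minimal sketch, assuming the wrapper mirrors more_itertools.grouper and pads incomplete groups with fillvalue:

>>> list(grouper('ABCDE', 3))
[('A', 'B', 'C'), ('D', 'E', None)]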

hashby(iterable, keyfunc)[source]

Create a dictionary from iterable using a key function.

Parameters:
  • iterable – Items to hash.

  • keyfunc – Function to extract key from each item.

Returns:

Dictionary mapping keys to items.

Return type:

dict

Example:

>>> items = [{'id': 1, 'name': 'a'}, {'id': 2, 'name': 'b'}]
>>> hashby(items, lambda x: x['id'])
{1: {'id': 1, 'name': 'a'}, 2: {'id': 2, 'name': 'b'}}
infinite_iterator(iterable)[source]

Create an iterator that cycles infinitely through items.

Parameters:

iterable – Sequence to cycle through.

Returns:

Generator that cycles forever.

Example:

>>> ii = infinite_iterator([1,2,3,4,5])
>>> [next(ii) for i in range(9)]
[1, 2, 3, 4, 5, 1, 2, 3, 4]
iscollection(obj)[source]

Check if object is a collection (iterable and not a string).

Parameters:

obj – Object to check.

Returns:

True if collection.

Return type:

bool

Example:

>>> iscollection(object())
False
>>> iscollection(range(10))
True
>>> iscollection('hello')
False
isiterable(obj)[source]

Check if object is iterable (excluding strings).

Parameters:

obj – Object to check.

Returns:

True if iterable and not a string.

Return type:

bool

Example:

>>> isiterable([])
True
>>> isiterable(tuple())
True
>>> isiterable(object())
False
>>> isiterable('foo')
False

Note: DataFrames and arrays are iterable:

>>> import pandas as pd
>>> isiterable(pd.DataFrame([['foo', 1]], columns=['key', 'val']))
True
>>> import numpy as np
>>> isiterable(np.array([1,2,3]))
True
issequence(obj)[source]

Check if object is a sequence (excluding strings).

Parameters:

obj – Object to check.

Returns:

True if sequence and not a string.

Return type:

bool

Example:

>>> issequence([])
True
>>> issequence(tuple())
True
>>> issequence('foo')
False
>>> issequence(object())
False

Note: DataFrames and arrays are NOT sequences:

>>> import pandas as pd
>>> issequence(pd.DataFrame([['foo', 1]], columns=['key', 'val']))
False
>>> import numpy as np
>>> issequence(np.array([1,2,3]))
False
partition(pred, iterable)[source]

Partition items into those where pred is False/True. See more_itertools.partition().
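
A minimal sketch, assuming the wrapper mirrors more_itertools.partition (falsey items first, truthy items second):

>>> evens, odds = partition(lambda x: x % 2, range(6))
>>> list(evens), list(odds)
([0, 2, 4], [1, 3, 5])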

peel(str_or_iter)[source]

Peel items from an iterator one by one, yielding (item, alias) if the element is a tuple, else (item, item).

>>> list(peel(["a", ("", "b"), "c"]))
[('a', 'a'), ('', 'b'), ('c', 'c')]
roundrobin(*iterables)[source]

Interleave items from multiple iterables. See more_itertools.roundrobin().
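
A minimal sketch, assuming the wrapper mirrors more_itertools.roundrobin:

>>> list(roundrobin('ABC', 'D', 'EF'))
['A', 'D', 'E', 'B', 'F', 'C']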

rpeel(str_or_iter)[source]

Peel items from an iterator one by one, yielding the alias if the element is a tuple, else the item.

>>> list(rpeel(["a", ("", "b"), "c"]))
['a', 'b', 'c']
unique(iterable, key=None)[source]

Remove duplicate elements while preserving order.

Unlike more_itertools.unique, this preserves the original insertion order rather than returning elements in sorted order. Internally uses more_itertools.unique_everseen(). Returns a list instead of a generator.

Parameters:
  • iterable – Iterable to deduplicate.

  • key – Optional function to compute uniqueness key.

Returns:

List of unique elements.

Return type:

list

Basic Usage:

>>> unique([9,0,2,1,0])
[9, 0, 2, 1]

With Key Function:

>>> unique(['Foo', 'foo', 'bar'], key=lambda s: s.lower())
['Foo', 'bar']

Unhashable Items (use hashing keys for better performance):

>>> unique(([1, 2],[2, 3],[1, 2]), key=tuple)
[[1, 2], [2, 3]]
>>> unique(({1,2,3},{4,5,6},{1,2,3}), key=frozenset)
[{1, 2, 3}, {4, 5, 6}]
>>> unique(({'a':1,'b':2},{'a':3,'b':4},{'a':1,'b':2}), key=lambda x: frozenset(x.items()))
[{'a': 1, 'b': 2}, {'a': 3, 'b': 4}]
unique_iter(iterable, key=None)

Yield unique elements, preserving order. See more_itertools.unique_everseen().

same_order(ref_list, comp)

Compare two lists and check whether the elements of ref_list appear in the same relative order in comp.

Parameters:
  • ref_list – Reference list of elements.

  • comp – Comparison list to check order against.

Returns:

True if all ref elements appear in comp in the same relative order.

Examples

>>> same_order(['x', 'y', 'z'], ['x', 'a', 'b', 'y', 'd', 'z'])
True
>>> same_order(['x', 'y', 'z'], ['x', 'z', 'y'])
False
coalesce(*args)[source]

Return first non-None value.

Example:

>>> coalesce(None, None, 1, 2)
1
>>> coalesce(None, None) is None
True
>>> coalesce(0, 1, 2)
0
getitem(sequence, index, default=None)[source]

Safe sequence indexing with default value.

>>> getitem([1, 2, 3], 1)
2
>>> getitem([1, 2, 3], 10) is None
True
>>> getitem([1, 2, 3], -1)
3
>>> getitem([1, 2, 3], -100) is None
True
backfill(values)

Back-fill a sorted array with the latest value.

Parameters:

values – List of values (may contain None).

Returns:

List with None values replaced by the most recent non-None value.

Examples

>>> backfill([None, None, 1, 2, 3, None, 4])
[1, 1, 1, 2, 3, 3, 4]
backfill_iterdict(iterdict)

Back-fill a sorted iterdict with the latest values.

Parameters:

iterdict – List of dicts with possibly None values.

Returns:

List of dicts with None values replaced by most recent non-None values per key.

Examples

>>> backfill_iterdict([{'a': 1, 'b': None}, {'a': 4, 'b': 2}, {'a': None, 'b': None}])
[{'a': 1, 'b': 2}, {'a': 4, 'b': 2}, {'a': 4, 'b': 2}]
align_iterdict(iterdict_a, iterdict_b, **kw)[source]

Given two lists of dicts (‘iterdicts’), each sorted on some attribute, build a single aligned list of dict pairs whose key values fall within a given tolerance; anything that cannot be aligned is DROPPED.

>>> list(zip(*align_iterdict(
... [{'a': 1}, {'a': 2}, {'a': 5}],
... [{'b': 5}],
... a='a',
... b='b',
... diff=lambda x, y: x - y,
... )))
[({'a': 5},), ({'b': 5},)]
>>> list(zip(*align_iterdict(
... [{'b': 5}],
... [{'a': 1}, {'a': 2}, {'a': 5}],
... a='b',
... b='a',
... diff=lambda x, y: x - y
... )))
[({'b': 5},), ({'a': 5},)]

text

Text processing: encoding fixes, camelCase conversion, fuzzy search, number parsing, truncation, base64 encoding, strtobool.

random_string(length)[source]

Generate a random alphanumeric string.

Parameters:

length (int) – Length of the string to generate.

Returns:

Random string of uppercase letters and digits.

Return type:

str
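
Example (output is random, so only its shape is checked):

>>> s = random_string(8)
>>> len(s)
8
>>> all(c.isupper() or c.isdigit() for c in s)
True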

fix_text(text)[source]

Use ftfy magic to fix text encoding issues.

Parameters:

text (str) – Text to fix.

Returns:

Fixed text.

Return type:

str

Example:

>>> fix_text('âœ” No problems')
'✔ No problems'
>>> print(fix_text("&macr;\\_(ã\x83\x84)_/&macr;"))
¯\_(ツ)_/¯
>>> fix_text('Broken text&hellip; it&#x2019;s flubberific!')
"Broken text… it's flubberific!"
>>> fix_text('LOUD NOISES')
'LOUD NOISES'
underscore_to_camelcase(s)

Convert underscore_delimited_text to camelCase.

Parameters:

s – Underscore-delimited string.

Returns:

camelCase string.

Examples

>>> underscore_to_camelcase('foo_bar_baz')
'fooBarBaz'
>>> underscore_to_camelcase('FOO_BAR')
'fooBar'
>>> underscore_to_camelcase('_foo_bar')
'fooBar'
uncamel(camel)

Convert camelCase to snake_case.

Parameters:

camel – CamelCase string.

Returns:

snake_case string.

Examples

>>> uncamel('CamelCase')
'camel_case'
>>> uncamel('CamelCamelCase')
'camel_camel_case'
>>> uncamel('getHTTPResponseCode')
'get_http_response_code'
strip_ascii(s)[source]

Remove non-ASCII characters from a string.

Parameters:

s (str) – Input string.

Returns:

String with only ASCII characters.

Return type:

str
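
Example (a sketch, assuming non-ASCII characters are simply dropped):

>>> strip_ascii('naïve café')
'nave caf'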

sanitize_vulgar_string(s)

Replace vulgar fractions with decimal equivalents.

Converts number and vulgar fraction combinations to number and decimal.

Parameters:

s – String containing vulgar fractions.

Returns:

String with fractions converted to decimals.

Examples

>>> sanitize_vulgar_string("Foo-Bar+Baz: 17s 4¾ 1 ⅛ 20 93¾ - 94⅛")
'Foo-Bar+Baz: 17s 4.75 1.125 20 93.75 - 94.125'
>>> sanitize_vulgar_string("⅓ cup")
'0.333333 cup'
round_digit_string(s, places=None)[source]

Round a numeric string to specified decimal places.

Parameters:
  • s (str) – Numeric string to round.

  • places (int) – Number of decimal places (None to preserve original).

Returns:

Rounded numeric string.

Return type:

str

Example:

>>> round_digit_string('7283.1234', 3)
'7283.123'
>>> round_digit_string('7283.1234', None)
'7283.1234'
>>> round_digit_string('7283', 3)
'7283'
parse_number(s, force=True)[source]

Extract number from string.

Handles various formats including commas, parentheses for negatives, and trailing characters.

Parameters:
  • s (str) – String to parse.

  • force (bool) – If True, return None on parse failure; if False, return original string.

Returns:

Parsed int or float, None, or original string (if force=False).

Example:

>>> parse_number('1,200m')
1200
>>> parse_number('100.0')
100.0
>>> parse_number('100')
100
>>> parse_number('0.002k')
0.002
>>> parse_number('-1')
-1
>>> parse_number('(1)')
-1
>>> parse_number('-100.0')
-100.0
>>> parse_number('(100.)')
-100.0
>>> parse_number('')
>>> parse_number('foo')
>>> parse_number('foo', force=False)
'foo'
truncate(s, width, suffix='...')[source]

Truncate a string to max width characters.

Adds suffix if the string was truncated. Tries to break on whitespace.

Parameters:
  • s (str) – String to truncate.

  • width (int) – Maximum width including suffix.

  • suffix (str) – Suffix to append when truncated.

Returns:

Truncated string.

Return type:

str

Raises:

AssertionError – If width is not longer than suffix.

Example:

>>> truncate('fubarbaz', 6)
'fub...'
>>> truncate('fubarbaz', 3)
Traceback (most recent call last):
    ...
AssertionError: Desired width must be longer than suffix
>>> truncate('fubarbaz', 3, suffix='..')
'f..'
rotate(s)[source]

Apply rot13-like translation to string.

Rotates characters including digits and punctuation.

Parameters:

s (str) – String to rotate.

Returns:

Rotated string.

Return type:

str

Example:

>>> rotate("foobarbaz")
';^^-,{-,E'
smart_base64(encoded_words)[source]

Decode base64 encoded words with intelligent charset handling.

Splits out encoded words per RFC 2047, Section 2 and handles common encoding issues like multiline subjects and charset mismatches.

Parameters:

encoded_words (str) – Base64 encoded string or plain text.

Returns:

Decoded string (or original if not encoded).

Return type:

str

Basic Usage:

>>> smart_base64('=?utf-8?B?U1RaOiBGNFExNSBwcmV2aWV3IOKAkyBUaGUgc3RhcnQgb2YgdGh'
...              'lIGNhc2ggcmV0dXJuIHN0b3J5PyBQYXRoIHRvICQyMDAgc3RvY2sgcHJpY2U/?=')
'STZ: F4Q15 preview – The start of the cash return story? Path to $200 stock price?'

Multiline Subjects (common email bug - base64 encoded per line):

>>> smart_base64('=?UTF-8?B?JDEwTU0rIENJVCBHUk9VUCBUUkFERVMgLSBDSVQgNScyMiAxMDLi'
...              'hZ0tMTAz4oWbICBNSw==?=\r\n\t=?UTF-8?B?VA==?=')
"$10MM+ CIT GROUP TRADES - CIT 5'22 102.625-103.125 MK T"

Charset Mismatch (UTF-8 header with Latin-1 content):

>>> smart_base64('=?UTF-8?B?TVMgZW5lcmd5OiByaWcgMTdzIDkxwr4vOTLihZsgMThzIDkzwr4v'
...              'OTTihZsgMjBzIDgywg==?=\r\n\t=?UTF-8?B?vS84Mw==?=')
'MS energy: rig 17s 91.75/92.125 18s 93.75/94.125 20s 82.5/83'

Unicode Characters:

>>> smart_base64('=?UTF-8?B?VGhpcyBpcyBhIGhvcnNleTog8J+Qjg==?=')
'This is a horsey: \U0001f40e'
>>> smart_base64('=?UTF-8?B?U0xBQiAxIOKFnDogIDEwOSAtIMK9IHYgNzYuMjU=?=')
'SLAB 1.375: 109 - 0.5 v 76.25'

Plain Text Passthrough:

>>> smart_base64('This is plain text')
'This is plain text'
strtobool(val)[source]

Convert a string representation of truth to boolean.

True values are ‘y’, ‘yes’, ‘t’, ‘true’, ‘on’, and ‘1’. False values are ‘n’, ‘no’, ‘f’, ‘false’, ‘off’, ‘0’, and empty string.

Parameters:

val – Value to convert (string, bool, or None).

Returns:

Boolean value.

Return type:

bool

Raises:

ValueError – If val is not a recognized truth value.
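
Example (a minimal sketch using the documented truth values):

>>> strtobool('yes')
True
>>> strtobool('off')
False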

fuzzy_search(search_term, items, case_sensitive=False)[source]

Search for term in a list of items using fuzzy matching.

Scores each item using Jaro-Winkler similarity and token set ratio, returning the highest score for each item tuple.

Parameters:
  • search_term (str) – Term to search for.

  • items – Iterable of tuples containing searchable strings.

  • case_sensitive (bool) – Whether to use case-sensitive matching.

Yields:

Tuples of (item, max_score).

Example:

>>> results = fuzzy_search("OCR", [("Omnicare", "OCR",), ("Ocra", "OKK"), ("GGG",)])
>>> (_,ocr_score), (_,okk_score), (_,ggg_score) = results
>>> '{:.4}'.format(ocr_score)
'1.0'
>>> '{:.4}'.format(okk_score)
'0.9417'
>>> '{:.4}'.format(ggg_score)
'0.0'
>>> x, y = list(zip(*fuzzy_search("Ramco-Gers",
...     [("RAMCO-GERSHENSON PROPERTIES", "RPT US Equity",),
...     ("Ramco Inc.", "RMM123FAKE")])))[1]
>>> '{:.4}'.format(x), '{:.4}'.format(y)
('0.8741', '0.6667')
is_numeric(txt)[source]

Check if value can be converted to a float.

Parameters:

txt – Value to check.

Returns:

True if value can be converted to float.

Return type:

bool

Warning

Complex types cannot be converted to float.

Example:

>>> is_numeric('a')
False
>>> is_numeric(1e4)
True
>>> is_numeric('1E2')
True
>>> is_numeric(complex(-1,0))
False

format

String formatting utilities for numbers, currency, time intervals, phone numbers, capitalization, and custom number formats.

class Percent(val)[source]

Bases: float

Float subclass that marks values for percentage formatting in display tables.

Example:

>>> p = Percent(0.25)
>>> float(p)
0.25
>>> p.pct
True
capitalize(s)[source]

Capitalize with special handling for known abbreviations.

Parameters:

s (str) – String to capitalize.

Returns:

Capitalized string or known abbreviation.

Return type:

str

Example:

>>> capitalize('goo')
'Goo'
>>> capitalize('mv')
'MV'
>>> capitalize('pct')
'%'
capwords(s)[source]

Capitalize words in a string, accommodating acronyms.

Parameters:

s (str) – String to capitalize.

Returns:

Capitalized string.

Return type:

str

Example:

>>> capwords("f.o.o")
'F.O.O'
>>> capwords("bar")
'Bar'
>>> capwords("foo bar")
'Foo Bar'
commafy(n)[source]

Add commas to a numeric value.

Parameters:

n – Number or string to add commas to.

Returns:

String with comma separators.

Return type:

str or None

Example:

>>> commafy(1)
'1'
>>> commafy(123)
'123'
>>> commafy(-123)
'-123'
>>> commafy(1234)
'1,234'
>>> commafy(1234567890)
'1,234,567,890'
>>> commafy(123.0)
'123.0'
>>> commafy(1234.5)
'1,234.5'
>>> commafy(1234.56789)
'1,234.56789'
>>> commafy(f'{-1234.5:.2f}')
'-1,234.50'
>>> commafy(None)
>>>
fmt(value, style)

Alias for format().

format(value, style)[source]

Format a numeric value with various formatting options.

Supports commas, parens for negative values, and special cases for zeros.

Parameters:
  • value – Numeric value to format.

  • style (str) – Format specification string.

Returns:

Formatted string.

Return type:

str

Style format: n[cpzZkKmMbBsS%#]/[kmb] (e.g., '2c', '0cpz', '1%', '1s')

  • n - number of decimals

  • c - use commas

  • p - wrap negative numbers in parenthesis

  • z - use a ‘ ’ for zero values

  • Z - use a ‘-’ for zero values

  • K/k - convert to thousands and add ‘K’ suffix

  • M/m - convert to millions and add ‘M’ suffix

  • B/b - convert to billions and add ‘B’ suffix

  • S/s - convert to shorter of KMB formats

  • % - scale by 100 and add a percent sign at the end (unless z/Z)

  • # - scale by 10000 and add ‘bps’ at the end

  • /x - divide the number by 1e3 (k), 1e6 (m), 1e9 (b) first (does not append the units like KMB do)

Example:

>>> format(1234.56, '2c')
'1,234.56'
>>> format(-100, '0cp')
'(100)'
>>> format(0, '2z')
''
>>> format(0.5, '1%')
'50.0%'
>>> format(1500000, '1M')
'1.5M'
format_phone(phone)[source]

Reformat phone numbers for display.

Parameters:

phone – Phone number as string or integer.

Returns:

Formatted phone number with dashes.

Return type:

str

Example:

>>> format_phone('6877995559')
'687-799-5559'
format_secondsdelta(seconds)[source]

Format seconds as human-readable time delta.

Parameters:

seconds (float) – Number of seconds.

Returns:

Human-readable time string.

Return type:

str

Example:

>>> format_secondsdelta(3661)
'1.0 hrs'
>>> format_secondsdelta(90)
'1.5 min'
format_timedelta(td)[source]

Format a timedelta as human-readable string.

Parameters:

td (timedelta) – Time delta to format.

Returns:

Human-readable string (e.g., ‘2 hrs’, ‘30 min’).

Return type:

str

Example:

>>> format_timedelta(datetime.timedelta(days=2))
'2 days'
>>> format_timedelta(datetime.timedelta(hours=3))
'3 hrs'
>>> format_timedelta(datetime.timedelta(seconds=45))
'45 sec'
format_timeinterval(start, end=None)[source]

Format a time interval as human-readable string.

Parameters:
  • start (datetime) – Start datetime.

  • end (datetime) – End datetime (defaults to now).

Returns:

Human-readable time interval string.

Return type:

str

Example:

>>> start = datetime.datetime(2020, 1, 1, 12, 0, 0)
>>> end = datetime.datetime(2020, 1, 1, 14, 30, 0)
>>> format_timeinterval(start, end)
'2.5 hrs'
splitcap(s, delim=None)[source]

Split and capitalize string by delimiter (or camelcase).

Parameters:
  • s (str) – String to split and capitalize.

  • delim (str) – Delimiter to split on (auto-detected if None).

Returns:

Title-cased string with spaces.

Return type:

str

Example:

>>> splitcap("foo_bar")
'Foo Bar'
>>> splitcap("fooBar")
'Foo Bar'
titlecase(s)[source]

Convert string to title case using python-titlecase library.

Parameters:

s (str) – String to convert.

Returns:

Title-cased string.

Return type:

str

Example:

>>> titlecase('the quick brown fox')
'The Quick Brown Fox'

path

Path operations: add to sys.path, get module directory, context manager for directory changes, script name extraction.

add_to_sys_path(path=None, relative_path=None)[source]

Add a path to the Python system search path.

Parameters:
  • path (str) – Base path, defaults to calling module’s directory.

  • relative_path (str) – Relative path to append to base path.

Example for Unit Tests:

add_to_sys_path('..')
import run_task
cd(path)[source]

Context manager to safely change working directory.

Restores original directory when context exits.

Parameters:

path – Directory to change to.

Example:

with cd("/some/folder"):
    run_command("some_command")
get_module_dir(module=None)[source]

Get the directory containing a module.

Parameters:

module – Module to get directory for, defaults to caller’s module.

Returns:

Directory path containing the module.

Return type:

Path

Example:

etcdir = get_module_dir() / '../../etc'
scriptname(task=None)[source]

Return name of script being run, without file extension.

Parameters:

task (str) – Script path, defaults to sys.argv[0].

Returns:

Script name without extension.

Return type:

str

Example:

>>> scriptname(__file__)
'path'
>>> scriptname() in sys.argv[0]
True
>>> scriptname()==sys.argv[0]
False

dicts

Dictionary utilities: inversion, key/value mapping, flattening, nested access, multikey sorting, comparison, tree operations.

ismapping(something)[source]

Check if something is a mapping (dict-like).

Parameters:

something – Object to check.

Returns:

True if the object is a mapping.

Return type:

bool

Example:

>>> ismapping(dict())
True
invert(dct)[source]

Invert a dictionary, swapping keys and values.

Parameters:

dct (dict) – Dictionary to invert.

Returns:

New dictionary with keys and values swapped.

Return type:

dict

Example:

>>> invert({'a': 1, 'b': 2})
{1: 'a', 2: 'b'}
mapkeys(func, dct)[source]

Apply a function to all keys in a dictionary.

Parameters:
  • func – Function to apply to each key.

  • dct (dict) – Dictionary to transform.

Returns:

New dictionary with transformed keys.

Return type:

dict

Example:

>>> mapkeys(str.upper, {'a': 1, 'b': 2})
{'A': 1, 'B': 2}
mapvals(func, dct)[source]

Apply a function to all values in a dictionary.

Parameters:
  • func – Function to apply to each value.

  • dct (dict) – Dictionary to transform.

Returns:

New dictionary with transformed values.

Return type:

dict

Example:

>>> mapvals(lambda x: x * 2, {'a': 1, 'b': 2})
{'a': 2, 'b': 4}
flatten(kv, prefix=None)[source]

Flatten a dictionary, recursively flattening nested dicts.

Unlike more_itertools.flatten, this operates on dictionaries rather than iterables. It recursively flattens nested dict keys by joining them with underscores (e.g., {'a': {'b': 1}} becomes ('a_b', 1)), whereas more_itertools.flatten removes one level of nesting from a list of lists.

Parameters:
  • kv (dict) – Dictionary to flatten.

  • prefix (list) – Internal prefix list for recursion (do not set manually).

Yields:

Tuples of (flattened_key, value).

Example:

>>> data = [
...     {'event': 'User Clicked', 'properties': {'user_id': '123', 'page_visited': 'contact_us'}},
...     {'event': 'User Clicked', 'properties': {'user_id': '456', 'page_visited': 'homepage'}},
...     {'event': 'User Clicked', 'properties': {'user_id': '789', 'page_visited': 'restaurant'}}
... ]
>>> from pandas import DataFrame
>>> df = DataFrame({k:v for k,v in flatten(kv)} for kv in data)
>>> list(df)
['event', 'properties_user_id', 'properties_page_visited']
>>> len(df)
3
unnest(d, keys=None)[source]

Recursively convert dict into list of tuples.

Parameters:
  • d (dict) – Dictionary to unnest.

  • keys (list) – Internal key accumulator (do not set manually).

Returns:

List of tuples representing paths to leaf values.

Return type:

list

Example:

>>> unnest({'a': {'b': 1}, 'c': 2})
[('a', 'b', 1), ('c', 2)]
replacekey(d, key, newval)[source]

Context manager for temporarily patching a dictionary value.

Parameters:
  • d (dict) – Dictionary to patch.

  • key – Key to temporarily replace.

  • newval – Temporary value to set.

Basic Usage:

>>> f = dict(x=13)
>>> with replacekey(f, 'x', 'pho'):
...     f['x']
'pho'
>>> f['x']
13

If the dict does not have the key set before, we return to that state:

>>> import os, sys
>>> rand_key = str(int.from_bytes(os.urandom(10), sys.byteorder))
>>> with replacekey(os.environ, rand_key, '22'):
...     os.environ[rand_key]=='22'
True
>>> rand_key in os.environ
False
replaceattr(obj, attrname, newval)[source]

Context manager for temporarily monkey patching an object attribute.

Parameters:
  • obj – Object to patch.

  • attrname (str) – Attribute name to temporarily replace.

  • newval – Temporary value to set.

Basic Usage:

>>> class Foo: pass
>>> f = Foo()
>>> f.x = 13
>>> with replaceattr(f, 'x', 'pho'):
...     f.x
'pho'
>>> f.x
13

If the obj did not have the attr set, we remove it:

>>> with replaceattr(f, 'y', 'boo'):
...     f.y=='boo'
True
>>> hasattr(f, 'y')
False
cmp(left, right)[source]

Python 2 style cmp function with null value handling.

Handles null values gracefully in sort comparisons.

Parameters:
  • left – First value to compare.

  • right – Second value to compare.

Returns:

-1 if left < right, 0 if equal, 1 if left > right.

Return type:

int

Example:

>>> cmp(None, 2)
-1
>>> cmp(2, None)
1
>>> cmp(-1, 2)
-1
>>> cmp(2, -1)
1
>>> cmp(1, 1)
0
multikeysort(items, columns, inplace=False)[source]

Sort list of dictionaries by list of keys.

Equivalent to SQL ORDER BY - use no prefix for ascending, - prefix for descending.

Parameters:
  • items (list) – List of dictionaries to sort.

  • columns – List of column names to sort by (prefix with - for descending).

  • inplace (bool) – If True, sort in place; otherwise return new sorted list.

Returns:

Sorted list if inplace=False, otherwise None.

Basic Usage:

>>> ds = [
...     {'category': 'c1', 'total': 96.0},
...     {'category': 'c2', 'total': 96.0},
...     {'category': 'c3', 'total': 80.0},
...     {'category': 'c4', 'total': None},
...     {'category': 'c5', 'total': 80.0},
... ]
>>> asc = multikeysort(ds, ['total', 'category'])
>>> total = [_['total'] for _ in asc]
>>> assert all([cmp(total[i], total[i+1]) in (0,-1,)
...             for i in range(len(total)-1)])

Missing Columns are Ignored:

>>> us = multikeysort(ds, ['missing',])
>>> assert us[0]['total'] == 96.0
>>> assert us[1]['total'] == 96.0
>>> assert us[2]['total'] == 80.0
>>> assert us[3]['total'] == None
>>> assert us[4]['total'] == 80.0

None Columns are Handled:

>>> us = multikeysort(ds, None)
>>> assert us[0]['total'] == 96.0
>>> assert us[1]['total'] == 96.0
>>> assert us[2]['total'] == 80.0
>>> assert us[3]['total'] == None
>>> assert us[4]['total'] == 80.0

Descending Order with Inplace:

>>> multikeysort(ds, ['-total', 'category'], inplace=True) # desc
>>> total = [_['total'] for _ in ds]
>>> assert all([cmp(total[i], total[i+1]) in (0, 1,)
...             for i in range(len(total)-1)])
map(func, *iterables)[source]

Simulate a Python 2-like map with longest iterable behavior.

Continues until the longest of the argument iterables is exhausted, extending the other arguments with None.

Parameters:
  • func – Function to apply (or None for tuple aggregation).

  • iterables – Iterables to map over.

Returns:

Iterator of mapped results.

Example:

>>> def foo(a, b):
...     if b is not None:
...         return a - b
...     return -a
>>> list(map(foo, range(5), [3,2,1]))
[-3, -1, 1, -3, -4]
get_attrs(klazz)[source]

Get class attributes (excluding methods and dunders).

Parameters:

klazz (type) – Class to inspect.

Returns:

List of (name, value) tuples for class attributes.

Return type:

list

Example:

>>> class MyClass(object):
...     a = '12'
...     b = '34'
...     def myfunc(self):
...         return self.a
>>> get_attrs(MyClass)
[('a', '12'), ('b', '34')]
trace_key(d, attrname)[source]

Trace dictionary key in nested dictionary.

Parameters:
  • d (dict) – Dictionary to search.

  • attrname (str) – Key name to find.

Returns:

List of paths (as lists) to the key.

Return type:

list[list]

Raises:

AttributeError – If key is not found.

Basic Usage:

>>> l=dict(a=dict(b=dict(c=dict(d=dict(e=dict(f=1))))))
>>> trace_key(l,'f')
[['a', 'b', 'c', 'd', 'e', 'f']]

Multiple Locations:

>>> l=dict(a=dict(b=dict(c=dict(d=dict(e=dict(f=1))))), f=2)
>>> trace_key(l,'f')
[['a', 'b', 'c', 'd', 'e', 'f'], ['f']]

With Missing Key:

>>> trace_key(l, 'g')
Traceback (most recent call last):
...
AttributeError: g
trace_value(d, attrname)[source]

Get values at all locations of a key in nested dictionary.

Parameters:
  • d (dict) – Dictionary to search.

  • attrname (str) – Key name to find.

Returns:

List of values found at each key location.

Return type:

list

Raises:

AttributeError – If key is not found.

Basic Usage:

>>> l=dict(a=dict(b=dict(c=dict(d=dict(e=dict(f=1))))))
>>> trace_value(l, 'f')
[1]

Multiple Locations:

>>> l=dict(a=dict(b=dict(c=dict(d=dict(e=dict(f=1))))), f=2)
>>> trace_value(l,'f')
[1, 2]

With Missing Key:

>>> trace_value(l, 'g')
Traceback (most recent call last):
...
AttributeError: g
add_branch(tree, vector, value)[source]

Insert a value into a dict at the path specified by vector.

Given a dict, a vector, and a value, insert the value into the dict at the tree leaf specified by the vector. Recursive!

Parameters:
  • tree (dict) – The data structure to insert the vector into.

  • vector (list) – A list of values representing the path to the leaf node.

  • value – The object to be inserted at the leaf.

Returns:

The dict with the value placed at the path specified.

Return type:

dict

Algorithm:
  • If we’re at the leaf, add it as key/value to the tree

  • Else: If the subtree doesn’t exist, create it.

  • Recurse with the subtree and the left shifted vector.

  • Return the tree.

Useful for parsing ini files with dot-delimited keys:

[app]
site1.ftp.host = hostname
site1.ftp.username = username
site1.database.hostname = db_host

Example 1:

>>> tree = {'a': 'apple'}
>>> vector = ['b', 'c', 'd']
>>> value = 'dog'
>>> tree = add_branch(tree, vector, value)
>>> unnest(tree)
[('a', 'apple'), ('b', 'c', 'd', 'dog')]

Example 2:

>>> vector2 = ['b', 'c', 'e']
>>> value2 = 'egg'
>>> tree = add_branch(tree, vector2, value2)
>>> unnest(tree)
[('a', 'apple'), ('b', 'c', 'd', 'dog'), ('b', 'c', 'e', 'egg')]
merge_dict(old, new, inplace=True)[source]

Recursively merge two dictionaries, including nested dictionaries and iterables.

This function performs a deep merge of new into old, handling nested dictionaries, iterables (like lists and tuples), and type mismatches gracefully.

Parameters:
  • old (dict) – The dictionary to merge into (will be modified if inplace=True).

  • new (dict) – The dictionary to merge from (remains unchanged).

  • inplace (bool) – If True, modifies old in place; if False, returns a new merged dict.

Return type:

dict[str, Any] | None

Returns:

If inplace=False, returns the merged dictionary. Otherwise, returns None.

Basic Nested Merge:

>>> l1 = {'a': {'b': 1, 'c': 2}, 'b': 2}
>>> l2 = {'a': {'a': 9}, 'c': 3}
>>> merge_dict(l1, l2, inplace=False)
{'a': {'b': 1, 'c': 2, 'a': 9}, 'b': 2, 'c': 3}
>>> l1=={'a': {'b': 1, 'c': 2}, 'b': 2}
True
>>> l2=={'a': {'a': 9}, 'c': 3}
True

Multilevel Merging:

>>> xx = {'a': {'b': 1, 'c': 2}, 'b': 2}
>>> nice = {'a': {'a': 9}, 'c': 3}
>>> merge_dict(xx, nice)
>>> 'a' in xx['a']
True
>>> 'c' in xx
True

Values Get Overwritten:

>>> warn = {'a': {'c': 9}, 'b': 3}
>>> merge_dict(xx, warn)
>>> xx['a']['c']
9
>>> xx['b']
3

Merges Iterables (preserving types when possible):

>>> l1 = {'a': {'c': [5, 2]}, 'b': 1}
>>> l2 = {'a': {'c': [1, 2]}, 'b': 3}
>>> merge_dict(l1, l2)
>>> len(l1['a']['c'])
4
>>> l1['b']
3

Handles Type Mismatches (converts to lists):

>>> l1 = {'a': {'c': [5, 2]}, 'b': 1}
>>> l3 = {'a': {'c': (1, 2,)}, 'b': 3}
>>> merge_dict(l1, l3)
>>> len(l1['a']['c'])
4
>>> isinstance(l1['a']['c'], list)
True

Handles None Values:

>>> l1 = {'a': {'c': None}, 'b': 1}
>>> l2 = {'a': {'c': [1, 2]}, 'b': 3}
>>> merge_dict(l1, l2)
>>> l1['a']['c']
[1, 2]

module

Module management: dynamic importing, module patching, class instantiation, virtual modules, package discovery.

class OverrideModuleGetattr(wrapped, override)[source]

Bases: object

Wrapper to override __getattr__ of a Python module.

Allows dynamic attribute access for modules, typically used for config.py settings. Looks up attributes in an override module before falling back to the wrapped module.

Parameters:
  • wrapped (ModuleType) – The original module to wrap.

  • override (ModuleType) – The override module to check first.

Config.py Example:

self = OverrideModuleGetattr(sys.modules[__name__], local_config)
sys.modules[__name__] = self

Usage Example:

>>> import sys
>>> from types import ModuleType
>>> from libb import Setting, create_mock_module
>>> create_mock_module('config', {'foo': Setting(bar=1)})
>>> original_config = sys.modules['config']

>>> override_config = ModuleType('override_config')
>>> override_config.foo = Setting(bar=2)

>>> wrapped_config = OverrideModuleGetattr('config', override_config)
>>> sys.modules['config'] = wrapped_config # important!

>>> import config
>>> assert config.foo.bar == 2

>>> sys.modules['config'] = original_config
>>> import config
>>> assert config.foo.bar == 1

>>> del sys.modules['config']  # cleanup
__getattr__(name)[source]

Get attribute, checking override module first then wrapped module.

__getitem__(name)[source]

Allow dynamic module lookups like config[‘bloomberg.data’].

get_module(modulename)[source]

Import a dotted module name and return the innermost module.

Handles the quirk where __import__('a.b.c') returns module a.

Parameters:

modulename (str) – Dotted module name to import.

Returns:

The imported module.

Return type:

ModuleType

Example:

>>> m = get_module('libb.module')
>>> m.__name__
'libb.module'
get_class(classname)[source]

Get a class by its fully qualified name.

If classname has a module prefix, imports that module first. Otherwise assumes the class is already in globals.

Parameters:

classname (str) – Class name, optionally with module prefix.

Returns:

The class object.

Return type:

type

Example:

>>> cls = get_class('libb.Setting')
>>> cls.__name__
'Setting'
get_subclasses(module, parentcls)[source]

Get all classes in a module that are subclasses of parentcls.

Parameters:
  • module (str | ModuleType) – Module name or module object.

  • parentcls (type) – Parent class to check inheritance against.

Returns:

List of subclasses found in the module.

Return type:

list[type]
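
A sketch built with create_mock_module (assumes get_subclasses discovers classes set as module attributes):

>>> class Animal: pass
>>> class Dog(Animal): pass
>>> class Cat(Animal): pass
>>> create_mock_module('zoo', {'Dog': Dog, 'Cat': Cat})
>>> import zoo
>>> sorted(c.__name__ for c in get_subclasses(zoo, Animal))
['Cat', 'Dog']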

get_function(funcname, module=None)[source]

Get a function by name from a module.

Parameters:
  • funcname (str) – Name of the function.

  • module (ModuleType | None) – Module to search, defaults to caller’s module.

Returns:

The function or None if not found.

Return type:

Callable or None
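
A minimal sketch, reusing get_module and the load_module function documented in this module:

>>> m = get_module('libb.module')
>>> get_function('load_module', m).__name__
'load_module'
>>> get_function('no_such_function', m) is None
True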

load_module(name, path)[source]

Load a module from a file path.

Parameters:
  • name (str) – Name to assign to the module.

  • path (str) – Absolute path to the module file.

Returns:

The loaded module.

Return type:

ModuleType

Example:

>>> import os
>>> m = load_module('module', os.path.abspath(__file__))
>>> type(m.load_module).__name__
'function'
>>> m.load_module.__name__
'load_module'
patch_load(module_name, funcs, releft='', reright='', repl='_', module_name_prefix='')[source]

Patch and load a module with regex substitutions.

Useful for replacing function names with test prefixes.

Parameters:
  • module_name (str) – Name of the module to load.

  • funcs (list) – List of function names to patch.

  • releft (str) – Left side of regex pattern.

  • reright (str) – Right side of regex pattern.

  • repl (str) – Replacement prefix (default: ‘_’).

  • module_name_prefix (str) – Prefix for the module name.

Returns:

The patched module.

Return type:

ModuleType

Usage:

mod = patch_load(<module_name>, <funcs>)
mod.<func_name>(<*params>)
patch_module(source_name, target_name)[source]

Replace a source module with a target module in sys.modules.

Useful when writing a module with the same name as a standard library module and needing to import the original.

Parameters:
  • source_name (str) – Original module name to replace.

  • target_name (str) – New name to assign to the module.

Returns:

The target module.

Return type:

ModuleType

Example:

>>> import sys
>>> original_sys = sys.modules['sys']

>>> _sys = patch_module('sys', '_sys')
>>> 'sys' in sys.modules
False
>>> '_sys' in sys.modules
True

>>> sys.modules['sys'] = original_sys  # Restore original sys module
create_instance(classname, *args, **kwargs)[source]

Create an instance of a class by its fully qualified name.

Parameters:
  • classname (str) – Fully qualified class name.

  • args (Any) – Positional arguments for the constructor.

  • kwargs (Any) – Keyword arguments for the constructor.

Return type:

Any

Returns:

Instance of the class.

Example:

>>> instance = create_instance('libb.Setting', foo=42)
>>> instance.foo
42
create_mock_module(modname, params=None)[source]

Create a mock module with specified attributes.

Useful for testing config settings without creating actual config files.

Parameters:
  • modname (str) – Name for the mock module.

  • params (dict) – Dictionary of attribute names to values.

Return type:

None

Basic Example:

>>> create_mock_module('foomod', {'x': {'foo': 1, 'bar': 2}})
>>> import foomod
>>> foomod.x
{'foo': 1, 'bar': 2}

Unittest Mock Example:

>>> from unittest.mock import Mock
>>> mock = Mock(name='foomod.x', return_value='bar')
>>> create_mock_module('foomod', {'x': mock})
>>> import foomod
>>> foomod.x.return_value
'bar'
class VirtualModule(modname, submodules)[source]

Bases: object

Virtual module with submodules sourced from other modules.

Use via create_virtual_module().

Parameters:
  • modname (str) – Name for the virtual module.

  • submodules (dict) – Mapping of submodule names to actual module names.

create_virtual_module(modname, submodules)[source]

Create a virtual module with submodules from other modules.

Parameters:
  • modname (str) – Name of the virtual module to create.

  • submodules (dict) – Mapping of submodule names to actual module names.

Return type:

None

Submodule Example:

>>> create_virtual_module('foo', {'libb': 'libb'})
>>> import foo
>>> foo.libb.Setting()
{}

Virtual Config Example:

>>> from libb import Setting
>>> create_mock_module('mock_config', {'ENVIRONMENT': 'prod', 'bar': Setting(baz=1)})
>>> import mock_config
>>> create_virtual_module('foo', {'config': 'mock_config'})
>>> import foo
>>> foo.config.ENVIRONMENT
'prod'
>>> foo.config.bar.baz
1
get_packages_in_module(*m)[source]

Get package info for modules, useful for pytest conftest loading.

Parameters:

m (ModuleType) – One or more modules to inspect.

Returns:

Iterable of ModuleInfo objects.

Return type:

Iterable[ModuleInfo]

Example:

>>> import libb
>>> _ = get_package_paths_in_module(libb)
>>> assert 'libb.module' in _
get_package_paths_in_module(*m)[source]

Get package paths within modules, useful for pytest conftest loading.

Parameters:

m (ModuleType) – One or more modules to inspect.

Returns:

Iterable of package path strings.

Return type:

Iterable[str]

Conftest.py Example:

pytest_plugins = [*get_package_paths_in_module(tests.fixtures)]
# Or multiple modules:
pytest_plugins = [*get_package_paths_in_module(tests.fixtures, tests.plugins)]
import_non_local(name, custom_name=None)[source]

Import a module using a custom name to avoid local name conflicts.

Useful when you have a local module with the same name as a standard library or third-party module.

Parameters:
  • name (str) – The original module name.

  • custom_name (str) – Custom name for the imported module.

Returns:

The imported module with the custom name.

Return type:

ModuleType

Raises:

ModuleNotFoundError – If the module cannot be found.

Example:

>>> create_mock_module('mock_calendar')
>>> import mock_calendar
>>> mock_calendar.isleap = lambda year: year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

>>> calendar = import_non_local('calendar', 'mock_calendar')
>>> 'mock_calendar' in sys.modules
True
>>> calendar.isleap(2020)
True

typedefs

Type definitions for file-like objects, IO streams, and common data types used across the library.

FileLike

Type alias for file-like objects (IO streams, BytesIO, FileIO, TextIOWrapper).

alias of IO[BytesIO] | BytesIO | FileIO | TextIOWrapper

Attachable

Type alias for attachable content (string, dict, file-like, or nested iterable).

alias of str | dict | IO[BytesIO] | BytesIO | FileIO | TextIOWrapper | Iterable[Iterable[Any]]

Dimension

Type alias for dimensions as (width, height) tuple.

alias of tuple[int, int]
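
Illustrative annotations (a sketch; read_stream, attach, and THUMBNAIL are hypothetical names, and the aliases are assumed importable from libb.typedefs):

from libb.typedefs import Attachable, Dimension, FileLike

def read_stream(f: FileLike) -> None:
    ...  # accepts BytesIO, FileIO, TextIOWrapper, or any IO stream

def attach(content: Attachable) -> None:
    ...  # accepts str, dict, file-like content, or nested iterables

THUMBNAIL: Dimension = (128, 128)  # (width, height)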