core.calculate._batch#
Copyright (c) Meta Platforms, Inc. and affiliates.
This source code is licensed under the MIT license found in the LICENSE file in the root directory of this source tree.
Classes#

| ExecutorProtocol | Base class for protocol classes. |
| InferenceBatcher | Batches incoming inference requests. |

Functions#

| _get_concurrency_backend | Get a backend to run ASE calculations concurrently. |
Module Contents#
- class core.calculate._batch.ExecutorProtocol#
Bases: Protocol

Base class for protocol classes.
Protocol classes are defined as:
    class Proto(Protocol):
        def meth(self) -> int: ...
Such classes are primarily used with static type checkers that recognize structural subtyping (static duck-typing).
For example:
    class C:
        def meth(self) -> int:
            return 0

    def func(x: Proto) -> int:
        return x.meth()

    func(C())  # Passes static type check
See PEP 544 for details. Protocol classes decorated with @typing.runtime_checkable act as simple-minded runtime protocols that check only the presence of given attributes, ignoring their type signatures. Protocol classes can be generic, they are defined as:
    class GenProto[T](Protocol):
        def meth(self) -> T: ...
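The `@typing.runtime_checkable` behavior described above can be seen directly with `isinstance`; `HasMeth` below is an illustrative protocol, not part of this module:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class HasMeth(Protocol):
    def meth(self) -> int: ...

class C:
    def meth(self) -> int:
        return 0

# isinstance only checks that the attribute exists; it ignores the signature.
print(isinstance(C(), HasMeth))  # True
```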
- submit(fn, *args, **kwargs)#
- map(fn, *iterables, **kwargs)#
- shutdown(wait: bool = True)#
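The three members above mirror the surface of `concurrent.futures.Executor`, so a stdlib `ThreadPoolExecutor` satisfies a protocol of this shape purely structurally. A minimal sketch (the `ExecutorProtocol` defined here is a local stand-in for illustration, not the fairchem class):

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Protocol

class ExecutorProtocol(Protocol):
    def submit(self, fn, *args, **kwargs): ...
    def map(self, fn, *iterables): ...
    def shutdown(self, wait: bool = True): ...

def run_square(executor: ExecutorProtocol, x: int) -> int:
    # Structural typing: any object with submit/map/shutdown is accepted,
    # no inheritance from ExecutorProtocol required.
    future = executor.submit(lambda v: v * v, x)
    return future.result()

pool = ThreadPoolExecutor(max_workers=2)
print(run_square(pool, 7))  # 49
pool.shutdown(wait=True)
```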
- core.calculate._batch._get_concurrency_backend(backend: Literal['threads'], options: dict) → ExecutorProtocol#
Get a backend to run ASE calculations concurrently.
- class core.calculate._batch.InferenceBatcher(predict_unit: fairchem.core.units.mlip_unit.predict.MLIPPredictUnit, max_batch_size: int = 16, batch_wait_timeout_s: float = 0.1, num_replicas: int = 1, concurrency_backend: Literal['threads'] = 'threads', concurrency_backend_options: dict | None = None, ray_actor_options: dict | None = None)#
Batches incoming inference requests.
- predict_unit#
- max_batch_size#
- batch_wait_timeout_s#
- num_replicas#
- predict_server_handle#
- executor: ExecutorProtocol#
- __enter__()#
- __exit__(exc_type, exc_val, exc_tb)#
- property batch_predict_unit#
- shutdown(wait: bool = True)#
Shut down the executor.
- __del__()#
Cleanup on deletion.
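The batching behavior governed by max_batch_size and batch_wait_timeout_s can be illustrated with a toy, self-contained sketch: requests accumulate until the batch is full or the wait timeout expires, then one batched prediction serves them all. `MiniBatcher` and its helpers are illustrative only and do not reflect fairchem's actual internals:

```python
import queue
import threading

class MiniBatcher:
    """Toy version of a batching loop: gathers requests from a queue
    until the batch is full or the wait timeout expires."""

    def __init__(self, predict_fn, max_batch_size=16, batch_wait_timeout_s=0.1):
        self.predict_fn = predict_fn
        self.max_batch_size = max_batch_size
        self.batch_wait_timeout_s = batch_wait_timeout_s
        self._queue = queue.Queue()

    def submit(self, item):
        # Each request carries an Event and a slot for its result.
        done = threading.Event()
        slot = {}
        self._queue.put((item, done, slot))
        return done, slot

    def _collect_batch(self):
        # Block for the first item, then drain up to max_batch_size
        # items total, waiting at most batch_wait_timeout_s for each.
        batch = [self._queue.get()]
        while len(batch) < self.max_batch_size:
            try:
                batch.append(self._queue.get(timeout=self.batch_wait_timeout_s))
            except queue.Empty:
                break
        return batch

    def run_once(self):
        batch = self._collect_batch()
        inputs = [item for item, _, _ in batch]
        outputs = self.predict_fn(inputs)  # one batched call for all requests
        for (_, done, slot), out in zip(batch, outputs):
            slot["result"] = out
            done.set()

batcher = MiniBatcher(lambda xs: [x * 2 for x in xs], max_batch_size=4)
handles = [batcher.submit(i) for i in range(3)]
batcher.run_once()
print([slot["result"] for _, slot in handles])  # [0, 2, 4]
```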