I perform evaluations using a MultiRunner and one or more goal setups.

Construct me with a list avList of AV objects, parents and children, known and unknown, for all setups.

Then, when you can wait on a Deferred, call my setup method with the following dicts, each keyed by setup ID with one entry per setup:

  1. nameLists with lists of vector names,
  2. analyzers with sequences that define the analysis to be done, each beginning with either a subclass of an Ngspice Analyzer or an instance of a Python analyzer,
  3. Xs with evaluation goals,
  4. netlists of text for Ngspice Analyzers or None for Python Analyzers,
  5. indies with lists of the vector names in each list of nameLists that are independent.
Parameters

transforms: You can provide a dict of sequences that each contain a callable and the names of vectors whose values it will accept as arguments. Before an evaluation that includes a vector name with an entry in transforms, each transforming callable for such a name takes the setup ID followed by vector values resulting from a call to my MultiRunner and produces a vector value for the name under which it was keyed in transforms.

logValues: You can provide a set of vector names that are to be evaluated in log space rather than linear.

N_cores: The number of analyzers that my MultiRunner maintains for each setup.

verbose: Set True to include info about Ngspice failures in the output. Otherwise, failures are only indicated by a "#" character.

spew: Set True to include warnings from Ngspice in the output. This can make things pretty messy.

See Also: setup for further details. For a TCP-based counterpart, see WireEvaluator.
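The transforms mechanism described above can be illustrated with a small pure-Python sketch. The callable and vector names here (power, v_out, i_out) and the setup ID are invented for illustration only; they are not part of the pingspice API:

```python
# Hypothetical sketch of how a transforms entry might be applied.
# The vector names 'power', 'v_out', 'i_out' and setup ID 211 are
# invented for illustration; they are not part of the pingspice API.

def power(ID, v_out, i_out):
    """Compute an elementwise product vector from two result vectors."""
    return [v * i for v, i in zip(v_out, i_out)]

# A transforms dict maps a new vector name to a sequence whose first
# item is the callable and whose remaining items name its argument vectors.
transforms = {'power': [power, 'v_out', 'i_out']}

# After a simulation run, values for the named argument vectors are
# looked up and the callable is invoked with the setup ID first.
V = {'v_out': [1.0, 2.0, 3.0], 'i_out': [0.5, 0.5, 0.5]}
name = 'power'
func, argNames = transforms[name][0], transforms[name][1:]
V[name] = func(211, *[V[k] for k in argNames])
```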
Method __init__ Constructor
Method shutdown Shuts down my MultiRunner and then my local ThreadQueue.
Method setup Sets up evaluation for each of my setups.
Method __iter__ I iterate over setup IDs that I started with and have not been told to skip.
Method __len__ My length is the number of eval specs I am actively managing.
Method skip Call this to make me skip the supplied setup ID or IDs, even if I have evalSpecs for them.
Method evaluate Runs an analysis with the supplied alterableValues for all setup IDs.
Method evaluateSSE Gets the SSE for the supplied alterableValues, possibly only for a single specified setup ID.
Method evaluateAll Calls evaluateSSE for each list of alterable values in the supplied list-of-lists.
Method _gotV Called by evaluate and evaluateSSE to add simulation results in V to the supplied ErrorTabulator et, for the specified setup ID.
def __init__(self, cfg, avList, transforms={}, logValues=set(), N_cores=None):

Constructor

@defer.inlineCallbacks
def shutdown(self):

Shuts down my MultiRunner and then my local ThreadQueue.

Returns a Deferred that fires when I'm done shutting down.

@defer.inlineCallbacks
def setup(self, nameLists, analyzers, Xs, netlists, indies, weights={}, skipIDs=[]):

Sets up evaluation for each of my setups.

Parameter Notes

Each sequence in analyzers begins with either a subclass of sim.Analyzer or an object constructed from psim.Analyzer, and is followed by one or more args used in the construction or setup, respectively, of that analyzer.

Each entry in Xs is either a 2-D Numpy array of expected goal values for the points in the vectors for its set up, or a callable (TODO: Describe). Each row of that array represents one point in the N-dimensional simulation result space for that setup, where N is the length of the vector name list for that setup.

Post-simulation evaluation time will scale somewhat with the number of rows in each X, so pick your points judiciously.

There almost always will be at least one (and usually one) independent vector. However, it is not an error to supply an empty list for any entry of indies.
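The nearest-point goal evaluation sketched in these notes can be approximated in a few lines of pure Python. This is only an invented illustration of the idea (the actual implementation is vectorized with Numpy and applies weighting):

```python
# Rough sketch of the nearest-point SSE idea: for each goal point,
# find the simulated point closest in the independent dimensions,
# then accumulate its squared error. Names are invented; the real
# implementation uses Numpy arrays and weighting.

def sse(goals, points, indies):
    """Sum of squared errors between each goal point and the simulated
    point whose independent-variable components are closest to it."""
    total = 0.0
    for g in goals:
        # Closest simulated point in the independent dimensions only
        best = min(points, key=lambda p: sum(
            (p[i] - g[i])**2 for i in indies))
        # Accumulate squared error over all dimensions
        total += sum((b - x)**2 for b, x in zip(best, g))
    return total

# One independent vector (column 0); each goal row is one [x, y] point
goals = [[0.0, 1.0], [1.0, 2.0]]
points = [[0.0, 1.5], [1.0, 2.0], [2.0, 3.0]]
result = sse(goals, points, indies=[0])
```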

Call Parameters

Call with the following dicts keyed by setup ID with one entry for each setup:

Parameters

nameLists: Lists of vector names.

analyzers: Sequences that define the analysis to be done, each beginning with either a subclass of an Ngspice Analyzer or an instance of a Python analyzer, followed by args for the constructor or setup method of the analyzer, as the case may be.

Xs: Evaluation goals.

netlists: Text for Ngspice Analyzers (entries may be omitted or set to None for Python Analyzers).

indies: Lists of the vector names in each list of nameLists that are independent.

weights: You can provide a dict (keyed by parameter name, for all IDs) to give non-unity weight to values in the evaluations.

skipIDs: An optional list or set of setup IDs to skip, in addition to any I've already been told to skip.
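The dicts passed to setup all share the same setup-ID keys. A minimal sketch of their shapes follows; the setup ID (101), vector names, netlist text, and goal values are all invented for illustration, and the analyzers dict is omitted because it requires real analyzer classes:

```python
# Hypothetical shapes for the dicts passed to setup, all keyed by
# setup ID. The ID (101), names, and values are invented examples.
ID = 101
nameLists = {ID: ['time', 'v_out']}
indies    = {ID: ['time']}          # 'time' is the independent vector
netlists  = {ID: '* netlist text for an Ngspice analyzer\n.end\n'}
Xs        = {ID: [
    # Each row is one goal point in the 2-D (time, v_out) result space,
    # with one column per entry of the nameLists list for this setup
    [0.0, 0.0],
    [1e-3, 3.3],
]}

# Every dict must have an entry for every setup ID
for d in (nameLists, indies, netlists, Xs):
    assert set(d) == {ID}
```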
def __iter__(self):

I iterate over setup IDs that I started with and have not been told to skip.

No particular order.

def __len__(self):

My length is the number of eval specs I am actively managing.

If I've been told (via skip) to skip one or more setup IDs even if I started with and have evalSpecs for them, my length (and iterations) will omit those.

def skip(self, *IDs):

Call this to make me skip the supplied setup ID or IDs, even if I have evalSpecs for them.

def _gotV(self, V, et, ID):

Called by evaluate and evaluateSSE to add simulation results in V to the supplied ErrorTabulator et, for the specified setup ID.

Returns a Deferred that fires when the results have been added, unless there was an Ngspice error.

In that case, et has its bogus flag set to prevent it from wasting time completing other tabulations. Then, if the configuration cfg calls for me to stop on errors, I call shutdown and return the Deferred from that. Otherwise, I then just return immediately.

Yes, it is a real mess trying to get this unwieldy asynchronous beast to shut down properly.

@defer.inlineCallbacks
def evaluate(self, alterableValues, study=False, xSSE=None):

Runs an analysis with the supplied alterableValues for all setup IDs.

Returns a Deferred that fires with an instance of ErrorTabulator containing info about the analysis, most importantly SSE, the sum of squared errors between the closest simulated point to each goal point previously established by a call to setup.

If one or more independent variables have been defined (almost always the case), the SSE is the sum of evaluations of each simulated point whose independent variable components are closest to each goal point defined by the independent variables. Otherwise, each evaluation is for the simulated point closest to each goal point. Weighting is used in either case.

If the analysis fails or a needed vector is not present as an analysis's result, the "returned" SSE is set to None. If pingspice is shutting down, the SSE is set to -1, which alerts ade that it needs to abort.

Parameters

study: Set True to have the netlists of each setup written for study, even without an error and even if it means overwriting an existing netlist file.

xSSE: Set to the SSE value of a target individual that the evaluation has to beat in order to be considered a successful challenge. If the SSE reaches that value, evaluation will be aborted because it won't change any outcomes, and the (partial) SSE won't be reported or used in any way.
@defer.inlineCallbacks
def evaluateSSE(self, alterableValues, ID=None, xSSE=None):

Gets the SSE for the supplied alterableValues, possibly only for a single specified setup ID.

Returns a Deferred that fires with the SSE.

If there was an error at any point during the evaluation(s), the SSE is set to infinite, unless I am configured to stop on error. In that case, the SSE is set to -1, which alerts upstream stuff that it's time to abort everything.

Parameters

ID: Set to a setup ID if evaluation is only desired for a single setup rather than all of them.

xSSE: Set to a target SSE. Evaluations are allowed to quit early if they accumulate an SSE greater than this. With this set, the returned SSE is only accurate to the extent that it exceeds xSSE and thus a challenge based on the evaluation would fail.
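The xSSE early-abort behavior amounts to stopping accumulation once the running total can no longer win a challenge. A small invented sketch of the idea (names and structure are illustrative, not the pingspice implementation):

```python
# Sketch of the xSSE early-abort idea. Squared errors accumulate per
# goal point; once the running total reaches the target SSE, further
# work cannot change the outcome, so accumulation stops early.
def accumulate_sse(squared_errors, xSSE=None):
    total = 0.0
    for se in squared_errors:
        total += se
        if xSSE is not None and total >= xSSE:
            # Challenge already failed; the partial total is only
            # meaningful in that it exceeds xSSE.
            return total
    return total

full = accumulate_sse([1.0, 2.0, 3.0])
partial = accumulate_sse([1.0, 2.0, 3.0], xSSE=2.5)
```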
def evaluateAll(self, alterableValueLists):

Calls evaluateSSE for each list of alterable values in the supplied list-of-lists.

Returns a Deferred that fires with a list of sum-of-squared-errors, one for each sub-list.

If there is just one alterable value, you can supply a single list of those values.
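The list-of-lists argument and the single-list convenience case can be sketched as follows. The values and the normalization shown are invented for illustration of the shapes involved, not the actual evaluateAll internals:

```python
# Invented sketch of the argument shapes for evaluateAll. Each
# sub-list is one set of alterable values scored via evaluateSSE.
alterableValueLists = [
    [1.0, 2.0, 3.0],   # first candidate
    [1.1, 2.1, 2.9],   # second candidate
]

# With a single alterable-values case, a flat list may be supplied;
# it would be treated as a one-element list-of-lists like this:
single = [1.0, 2.0, 3.0]
if single and not isinstance(single[0], (list, tuple)):
    normalized = [single]
```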

API Documentation for pingspice, generated by pydoctor at 2021-09-18 08:41:11.