I evaluate rise and fall times.

Construct me with your device's specified turn-on delay time tdOn, rise time tR, turn-off delay time tdOff, and fall time tF. Then call the instance with three vectors from your simulated test circuit: time, trigger (gate) voltages, and result (drain) voltages. The call returns an SSE-equivalent fitness metric.
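For concreteness, here is a minimal usage sketch. The import path and the numeric timing specs are assumptions made up for illustration (locate TimingEvaluator in your own pingspice tree), and the three vectors would of course come from a real simulation:

import numpy as np
# Assumed import path, for illustration only.
from pingspice.analysis.timing import TimingEvaluator

# Vt_max is the maximum trigger (gate) voltage and Vr_max the maximum
# result (drain) voltage; then tSpecs as (tdOn, tR, tdOff, tF), here
# with made-up values in seconds.
ev = TimingEvaluator(10.0, 50.0, 20e-9, 15e-9, 60e-9, 25e-9)

# Placeholders standing in for your simulated test circuit's output,
# just to show the call signature.
time = np.linspace(0, 1e-6, 1000)
Vtrigger = np.zeros_like(time)
Vresult = np.zeros_like(time)

SSE = ev(time, Vtrigger, Vresult)  # SSE-equivalent fitness metric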

Class Variable us_dt The difference between an observed and expected time that results in an SSE contribution of one. (Because the SSE contribution is the square of the scaled time error, 10x this time error will result in an SSE contribution of 100.) My default is 1E-9, so a one-nanosecond time error contributes one to the SSE.
Class Variable us_lumpiness The lumpiness metric for the result (drain) voltage's negative-going transition that results in an SSE contribution of one. (Not squared in this case; twice this metric will result in an SSE contribution of two.) A really lumpy transition will have a metric of around one, so us_lumpiness should be set to a small fraction of 1.0 if a lumpy transition bothers you.
Instance Variable tSpecs (on, rise, off, fall)
Method __init__ TimingEvaluator(Vt_max, Vr_max, *tSpecs, **kw)
Method interval Undocumented
Method eval_lumpiness Adds an SSE penalty for the slope of the supplied vector Z being lumpy between times t0 and t1 of the supplied time vector.
Method __call__ Evaluates the supplied time, trigger (gate) voltage, and result (drain) voltage vectors against my timing specs, returning an SSE-equivalent fitness metric.

Inherited from GenericEvaluator:

Method worst_SSE Undocumented
Method setup Undocumented
Method V Undocumented
Method findFirstReally Undocumented
Method timeWhenGreater Returns a 2-tuple with (1) the index of the first element of supplied 1-D array X that is greater than scalar value Xk and (2) the time value at that index.
Method timeWhenLess Returns a 2-tuple with (1) the index of the first element of supplied 1-D array X that is less than scalar value Xk and (2) the time value at that index.
Method add_SSE Adds to my running SSE total the scaled square (unless notSquared is set) of the difference between the two supplied scalars value and expected, multiplied by my weight.
Method evalTimeDiff Adds to my running SSE total the squared difference between (1) the interval t2-t1 and (2) the expected interval dtExpected. (Hedged sketches of timeWhenGreater and this scaled-SSE arithmetic follow this list.)
Method result Returns my running SSE total and a 2-D Numpy array of goal points that will be plotted with the "x" symbol.
Method _resultFrom Adds k to my index k0 and, if the updated value is still a valid index for my time vector, returns it along with the time value at that index.
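For intuition only, here is a hedged sketch of what timeWhenGreater and evalTimeDiff plausibly do, reconstructed from the one-line descriptions above; the real pingspice implementations, with their index bookkeeping and weighting, are not shown on this page and will differ:

import numpy as np

def timeWhenGreater(time, X, Xk):
    # (index, time) for the first element of X greater than scalar Xk,
    # or (None, None) if no element qualifies.
    idx = np.flatnonzero(X > Xk)
    if idx.size == 0:
        return None, None
    k = int(idx[0])
    return k, time[k]

def evalTimeDiff(t1, t2, dtExpected, us_dt=1e-9, weight=1.0):
    # Squared difference between the observed interval t2-t1 and the
    # expected interval, scaled by us_dt and multiplied by the weight.
    return weight * ((t2 - t1 - dtExpected) / us_dt) ** 2

timeWhenLess would be the mirror image, testing X < Xk instead.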
us_dt = 1E-9
The difference between an observed and expected time that results in an SSE contribution of one. (Because the SSE contribution is the square of the scaled time error, 10x this time error will result in an SSE contribution of 100.) My default is 1E-9, so a one-nanosecond time error contributes one to the SSE.
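As a quick arithmetic check of that scaling (the exact formula is my assumption, consistent with the description above):

us_dt = 1e-9
for err in (1e-9, 1e-8):
    # SSE contribution is the square of the time error scaled by us_dt
    print((err / us_dt) ** 2)  # 1.0 for a 1 ns error, 100.0 for 10 ns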
us_lumpiness =
The lumpiness metric for the result (drain) voltage's negative-going transition that results in an SSE contribution of one. (Not squared in this case; twice this metric will result in an SSE contribution of two.) A really lumpy transition will have a metric of around one, so us_lumpiness should be set to a small fraction of 1.0 if a lumpy transition bothers you.
tSpecs =
(on, rise, off, fall)
def __init__(self, Vt_max, Vr_max, *tSpecs, **kw):

TimingEvaluator(Vt_max, Vr_max, *tSpecs, **kw)

def interval(self, time, t0, t1):
Undocumented. (Presumably returns the portion, or bounding indices, of the supplied time vector between t0 and t1.)
def eval_lumpiness(self, time, Z, t0, t1):

Adds an SSE penalty for the slope of the supplied vector Z being lumpy between times t0 and t1 of the supplied time vector.
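How the lumpiness metric itself is computed is not documented here. As a purely hypothetical illustration of a metric with the stated behavior (near zero for a clean negative-going transition, around one for a really lumpy one), one could penalize positive-going excursions during the transition:

import numpy as np

def lumpiness(time, Z, t0, t1):
    # Hypothetical metric, not pingspice's actual computation: the
    # fraction of total slope activity going the wrong way during a
    # negative-going transition, scaled so that heavy wiggling (equal
    # up and down movement) yields a value near one.
    k = (time >= t0) & (time <= t1)
    dZ = np.diff(Z[k])
    total = np.abs(dZ).sum()
    if total == 0:
        return 0.0
    return 2.0 * dZ[dZ > 0].sum() / total

The SSE penalty would then scale as this metric divided by us_lumpiness, not squared, per the us_lumpiness description above.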

def __call__(self, time, Vtrigger, Vresult):
Call me with a time vector, trigger (gate) voltage vector, and result (drain) voltage vector from your simulated test circuit; I return an SSE-equivalent fitness metric based on my timing specs.