Benchmark SOAP’s performance using a set of criteria

SOAP benchmark module, calculating statistics on the scores.

class benchmark.dsscorestats(dslist=[], dsscore=<rankScores.dsscore object>)[source]

Summarize the ranking results based on RMSDs and scores.

Parameters:
  • dslist (list) – the names of the decoy sets used for the benchmark; used to retrieve the RMSD values

  • dsscore (dsscore) – the rankScores.dsscore object holding the scores
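For example, a minimal construction sketch (the decoy-set names are placeholders, and myscore stands for an already-built rankScores.dsscore object whose construction is not shown)::

    from benchmark import dsscorestats

    # myscore: an existing rankScores.dsscore object holding the scores
    # for the decoy sets listed below (placeholder names).
    stats = dsscorestats(dslist=['decoys1', 'decoys2'], dsscore=myscore)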

analyze_enrichment()[source]

Calculate the enrichment score.
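A one-line usage sketch, assuming stats is the dsscorestats object from the sketch above (the return value is an assumption)::

    enrichment = stats.analyze_enrichment()  # enrichment statistics over the decoy sets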

analyze_score(slevel='NativeSelection', dsscoreo=[], score=[], report='single')[source]

Calculate the performance measure.

Parameters:
  • slevel (str) – the benchmark criterion/performance measure (see initialize_topmodels() for the encoding)

  • dsscoreo (dsscore) – the rankScores.dsscore object holding the scores

  • score (list) – the scores

  • report (str) – 'single' or 'full'; report either a single number summarizing the performance, or the detailed measures
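A hedged usage sketch, again assuming the stats object built above; the second slevel string follows the encoding described under initialize_topmodels() below, and the return values are assumptions::

    # A single number summarizing performance under the default criterion.
    summary = stats.analyze_score(slevel='NativeSelection', report='single')

    # Detailed measures for a top-model criterion.
    details = stats.analyze_score(slevel='top1000_len__rmsd10ORirmsd4',
                                  report='full')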

initialize_topmodels(slevel)[source]
slevel defines the statistics to calculate, encoded as a string of the form:

top(cn)_(cp)_(cv)_(cf)

  • cn – the number of top-scoring models to look at.

  • cp – how to combine the cv values over the filtered top-model set; the default combiners are

    (mean, min, sum, max)

    plus the special combiners:

      • "" – the value itself; only for first-pass models
      • len – len() of the set
      • nonempty – len() > 0
      • perc – percentage of the total

  • cv – the property of those models to look at:

    (rmsd, rank, rmsddiff, irmsd, rrank, rrankr, rlrank, rlrankr, "")

    i.e. the value of a certain RMSD measure, the difference in a certain RMSD measure, the rank, certain values combining the rank and the RMSD, or the filter values themselves.

  • cf – the top-model filter; defines which part of the top models we are looking at:

      • lessthan filters such as "rmsd10" – boolean values indicating whether the RMSD is less than the given cutoff; filters can be combined with OR, e.g. "rmsd10ORirmsd4"
      • First – take the property of the first model only (appended as FIRST, as in the examples below)
      • None – no filter

Examples::

  top1000_nonempty__rmsd10ORirmsd4   # whether any of the top 1000 models satisfies rmsd<10 or irmsd<4
  top1000_len__rmsd10ORirmsd4        # the number of such models
  top1000_sum_revrank_rmsd10ORirmsd4FIRST
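To make the encoding concrete, here is a small hypothetical helper (not part of SOAP) that composes an slevel string from the four components and reproduces the examples above::

    def make_slevel(cn, cp, cv='', cf=''):
        """Compose a criterion string of the form top(cn)_(cp)_(cv)_(cf)."""
        return 'top%s_%s_%s_%s' % (cn, cp, cv, cf)

    print(make_slevel(1000, 'nonempty', cf='rmsd10ORirmsd4'))
    # -> top1000_nonempty__rmsd10ORirmsd4
    print(make_slevel(1000, 'sum', 'revrank', 'rmsd10ORirmsd4FIRST'))
    # -> top1000_sum_revrank_rmsd10ORirmsd4FIRST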