Description
With R versions 3.6.1 and 3.6.2, running pm4py's evaluation functions on a Petri net obtained by converting a heuristicsmineR causal net gives evaluation results that vary randomly between calls.
For example, using the L_heur_1 event log shipped with heuristicsmineR, as in the example at https://github.com/bupaverse/heuristicsmineR, we get the following Petri net:
library(heuristicsmineR)
library(petrinetR)
data("L_heur_1")
cn <- causal_net(L_heur_1, threshold = 0.7)
pn <- as.petrinet(cn)
render_PN(pn)
Now, calling the evaluation_all() function provided by pm4py and passing the net's final marking explicitly:
library(pm4py)
evaluation_all(L_heur_1, pn, pn$marking, c("p_in_6"))
#> $fitness
#> $fitness$perc_fit_traces
#> [1] 72.5
#>
#> $fitness$average_trace_fitness
#> [1] 0.9692162
#>
#> $fitness$log_fitness
#> [1] 0.9678538
#>
#>
#> $precision
#> [1] 0.9963899
#>
#> $generalization
#> [1] 0.6225678
#>
#> $simplicity
#> [1] 0.7777778
#>
#> $metricsAverageWeight
#> [1] 0.8411473
#>
#> $fscore
#> [1] 0.9819146
The same command, executed once more, gives the following result:
evaluation_all(L_heur_1, pn, pn$marking, c("p_in_6"))
#> $fitness
#> $fitness$perc_fit_traces
#> [1] 97.5
#>
#> $fitness$average_trace_fitness
#> [1] 0.9801938
#>
#> $fitness$log_fitness
#> [1] 0.9784578
#>
#>
#> $precision
#> [1] 0.9966443
#>
#> $generalization
#> [1] 0.6320084
#>
#> $simplicity
#> [1] 0.7777778
#>
#> $metricsAverageWeight
#> [1] 0.8462221
#>
#> $fscore
#> [1] 0.9874673
All values have changed, most notably perc_fit_traces.
However, the number of distinct values the function returns appears to be finite and to depend on the number of unique traces present in the original log.
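A minimal sketch to check this, assuming the log, net and marking from the snippets above are still in scope: repeat the call a number of times and tabulate the distinct perc_fit_traces values that come back.
# Repeat the evaluation 50 times and keep only perc_fit_traces
res <- replicate(
  50,
  evaluation_all(L_heur_1, pn, pn$marking, c("p_in_6"))$fitness$perc_fit_traces
)
# Count how often each distinct value was observed
table(res)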