This package currently makes it rather straightforward to compare the performance of different Turing.jl models, and of different AD backends applied to those models. This is great. However, it is not quite as straightforward to get hold of the functions that are actually being benchmarked.

One reason to want this is to be able to profile the functions in question, in order to understand where the performance bottlenecks are -- in my case, I would really like to understand the performance of the various AD backends in different situations at a granular level.

Could a function get_items_to_benchmark (I'm not attached to this particular name) be exposed as part of the public API? It would return the exact functions and arguments that would be benchmarked by a call to benchmark_model. This would make it possible to run the benchmarks independently of the suite, and hence to profile them and get a more fine-grained understanding of performance.
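To make the request concrete, here is a rough sketch of how such a function might be used. `get_items_to_benchmark`, its return shape (an iterable of label/function/argument tuples), and `model` are all hypothetical here; they only illustrate the idea, not an existing API.

```julia
using Profile

# Hypothetical: `get_items_to_benchmark` does not exist yet, and the assumed
# return shape (label, function, arguments) is only for illustration.
# `model` stands in for whatever Turing model you would pass to benchmark_model.
items = get_items_to_benchmark(model)

for (label, f, args) in items
    f(args...)                 # warm up / compile once
    Profile.clear()
    @profile for _ in 1:1000
        f(args...)             # profile exactly what benchmark_model would time
    end
    println("=== ", label, " ===")
    Profile.print()
end
```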
You can extract this from the benchmark suite (obtained through make_benchmark_suite), no? This contains the @benchmarkable expression, which you can just run. This is what I've done before when wanting to profile. Or is that not enough?
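For reference, a minimal sketch of that workflow, assuming the object returned by make_benchmark_suite is a BenchmarkTools.BenchmarkGroup whose leaves are `@benchmarkable` objects. The indexing keys below are assumptions; inspect `keys(suite)` to find the actual layout, and adjust the call signature to however the suite is built in practice.

```julia
using BenchmarkTools, Profile

# Assumed call; `model` is whatever Turing model you are benchmarking.
suite = make_benchmark_suite(model)

# Pick one leaf benchmarkable; the keys here are placeholders, check keys(suite).
b = suite["gradient"]["ForwardDiff"]

tune!(b)     # let BenchmarkTools choose evals/samples
run(b)       # run just this benchmark, outside the full suite

# Re-run the same workload under the profiler.
Profile.clear()
@profile run(b)
Profile.print()
```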