How finalizers `|>` work #165
Comments
Yes, this is exactly right. Unlike Einstein, it does not know that … The macro always generates just one loop nest. All that you can do with …
Maybe "finaliser" is the wrong word, but that's all it does. I see what you're hoping for, but that requires a more complicated set of loops which Tullio doesn't understand. I think the macro has not noticed the …
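To see why a single loop nest changes the answer, here is a small NumPy sketch (hypothetical toy values, not taken from this issue): fusing two reductions into one loop nest sums each term over the *other* index's range as well.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])   # summed over k
y = np.array([10.0, 20.0])      # summed over l

# Two separate reductions, combined afterwards:
# sum_k x[k] + sum_l y[l]
separate = x.sum() + y.sum()    # 6 + 30 = 36

# One fused loop nest over (k, l):
# sum_{k,l} (x[k] + y[l]) = len(y)*sum(x) + len(x)*sum(y)
fused = sum(x[k] + y[l] for k in range(len(x)) for l in range(len(y)))
print(separate, fused)          # 36.0 102.0
```

With a single loop nest there is no way to say "stop summing x here, then start summing y", which is why splitting into two macro calls is needed.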
Thanks for the explanation, this makes sense. I found a way to write it, but I guess this is still doing pretty much the same as the first example above:

```julia
julia> @btime $c .= (@tullio $y[j,i] := $a[i,k] * $b[j,k]) .+ (@tullio $y[j,i] := $a[m,j] * $b[m,i]);
  673.596 ms (4 allocations: 15.26 MiB)
```

The timings and memory are for 1k x 1k arrays. The memory consumption got me a little worried, which is why I also tried a simple matrix multiplication:

```julia
julia> @btime c .= $a * transpose($b);
  9.269 ms (3 allocations: 7.63 MiB)

julia> @btime @tullio c[j,i] = $a[i,k] * $b[j,k];
  549.171 ms (9 allocations: 176 bytes)
```

Memory is no problem here, but speed is (probably cache usage?).
The non-Tullio way of writing this also uses 15 MiB, but is faster:

```julia
julia> @btime $c .= $a*transpose($b) .+ transpose($a)*$b;
  19.053 ms (4 allocations: 15.26 MiB)
```

I guess this may have to do with the magic of efficient (< O(N²)) implementations of matrix multiplication.
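Each `@tullio` contraction above is an ordinary matrix product in disguise, which is why the BLAS-backed form wins. A NumPy sketch (hypothetical small arrays, not the benchmark data) checking that correspondence:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((4, 4))
b = rng.random((4, 4))

# y[j,i] := a[i,k] * b[j,k]  is the matrix product  b * a'
y1 = np.einsum('ik,jk->ji', a, b)
assert np.allclose(y1, b @ a.T)

# y[j,i] := a[m,j] * b[m,i]  is the matrix product  a' * b
y2 = np.einsum('mj,mi->ji', a, b)
assert np.allclose(y2, a.T @ b)
```

Whenever a contraction can be rewritten as plain matrix products like this, routing it through the dedicated multiplication routine is usually the fastest option.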
If you have …

So will things like …

For straight matrix multiplication, Tullio will usually lose to more specialised routines. See e.g. this graph: https://github.com/JuliaLinearAlgebra/Octavian.jl Around size 100, it suffers from the overhead of using Base's threads. Around size 3000, it suffers from not knowing about some optimisations. (I don't think < N³ algorithms like Strassen are actually used in BLAS, but I'm not very sure.) Tullio's main purpose in life is handling weird contractions which aren't served at all by such libraries, or which would require expensive …
Thanks. Of course < O(N³) is what I meant. Interesting to know that Strassen or the like are not actually used in BLAS, as discussed here: …
I am having some trouble grasping how to limit the action of the `einsum`: my current implementation uses `@tullio` by splitting the expression into multiple `@tullio` calls, as in the first two examples. `6*ones(3,3)` is the correct result, which I want to achieve. Yet it seems like a single expression (applied over a large array) should be faster. But the 3rd result is not what I expected. Rethinking, I can understand that it is probably interpreted by moving both sums (over k and over l) to the very outside. To limit this effect, I thought the `|>` operation would limit ("finalize") the action, but this does not seem to be the case. What did I get wrong here? I guess in this example one could use the same index for `m` and `k`, but in my case this is not possible, since the ranges differ.