Enhance transform and plot docs #88

Merged · 13 commits · Sep 15, 2023
10 changes: 10 additions & 0 deletions docs/refs.bib
@@ -95,4 +95,14 @@ @misc{cotengra
howpublished={https://github.com/jcmgray/cotengra},
url={https://github.com/jcmgray/cotengra},
}
@article{arute2019quantum,
title={Quantum supremacy using a programmable superconducting processor},
author={Arute, Frank and Arya, Kunal and Babbush, Ryan and Bacon, Dave and Bardin, Joseph C and Barends, Rami and Biswas, Rupak and Boixo, Sergio and Brandao, Fernando GSL and Buell, David A and others},
journal={Nature},
volume={574},
number={7779},
pages={505--510},
year={2019},
publisher={Nature Publishing Group}
}
4 changes: 2 additions & 2 deletions docs/src/transformations.md
@@ -4,7 +4,7 @@ In tensor network computations, it is good practice to apply various transformat

These methods are indispensable because they can drastically reduce the problem size of both the contraction path search and the contraction itself. This does not necessarily mean lowering the maximum rank of the Tensor Network, but rather shrinking the size (or rank) of the individual tensors involved.

Our approach has been significantly inspired by the ideas presented in the [Quimb](https://quimb.readthedocs.io/) library, explained in [this paper](https://arxiv.org/pdf/2002.01935.pdf).
Our approach is based on [gray2021hyper](@cite), which can also be found in [quimb](https://quimb.readthedocs.io/).

In Tenet, we provide a set of predefined transformations which you can apply to your `TensorNetwork` using the `transform`/`transform!` functions, as sketched below.
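
A minimal usage sketch, assuming the `rand(TensorNetwork, n, regularity)` constructor used elsewhere in the Tenet docs, and taking `DiagonalReduction` as an illustrative transformation (the set of available transformation names may differ between Tenet versions):

```julia
using Tenet

# Assumption: builds a random TensorNetwork with 10 tensors of regularity 3.
tn = rand(TensorNetwork, 10, 3)

# `transform` returns a transformed copy; `transform!` modifies the network in place.
tn_simplified = transform(tn, Tenet.DiagonalReduction)
transform!(tn, Tenet.DiagonalReduction)
```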

@@ -249,7 +249,7 @@ fig #hide

## Example: RQC simplification

Here we show how can we reduce the complexity of the tensor network by applying a tranformation to it. We take as an example the Sycamore circuit from the [Google's quantum supremacy paper](https://www.nature.com/articles/s41586-019-1666-5)
Local transformations can dramatically reduce the complexity of tensor networks. Take as an example the Random Quantum Circuit (RQC) on the Sycamore chip from Google's quantum advantage experiment [arute2019quantum](@cite).
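
A rough sketch of how one might measure the effect of such a simplification pass; the random network here is only a stand-in for the actual RQC, and `ContractSimplification` is an assumed transformation name:

```julia
using Tenet

# Stand-in network; the documented example loads the Sycamore RQC instead.
tn = rand(TensorNetwork, 30, 3)

# Compare the maximum tensor rank before and after simplification.
max_rank_before = maximum(ndims, tensors(tn))
transform!(tn, Tenet.ContractSimplification)
max_rank_after = maximum(ndims, tensors(tn))
@show max_rank_before max_rank_after
```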

```@setup plot
using Makie
# …
```
5 changes: 2 additions & 3 deletions ext/TenetMakieExt.jl
@@ -16,9 +16,8 @@ Plot a [`TensorNetwork`](@ref) as a graph.

# Keyword Arguments

- `inds` Whether to show the index labels. Defaults to `false`.
- `layout` Algorithm used to map graph vertices to a (2D or 3D) coordinate system.
The algorithms implemented in the `NetworkLayout` package are recommended.
- `labels` If `true`, show the labels of the tensor indices. Defaults to `false`.
- Any remaining `kwargs` are passed to `GraphMakie.graphplot`.
"""
function Makie.plot(tn::TensorNetwork; kwargs...)
f = Figure()
    # …
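
A usage sketch for this plot method, assuming the `rand(TensorNetwork, 10, 3)` constructor from the Tenet docs and the `Spring` layout from `NetworkLayout`:

```julia
using Tenet
using CairoMakie              # loading a Makie backend activates the extension
using NetworkLayout: Spring

tn = rand(TensorNetwork, 10, 3)

# Show index labels and use a 2D spring layout for vertex placement.
fig = plot(tn; labels=true, layout=Spring(dim=2))
save("tn.png", fig)           # hypothetical output path
```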