
Optimize qubit hash for Set operations #6908

Draft · wants to merge 5 commits into main

Conversation

@daxfohl (Collaborator) commented Jan 1, 2025

Improves amortized `Set` operations perf by around 50%, with the caveat that sets containing qudits of different dimensions but the same index will always produce the same hash value (not just the same bucket), forcing an `__eq__` check and degrading performance in that case. It seems unlikely that anyone would intentionally do this, though.

Benchmark:

```python
s = set()
for q in cirq.GridQubit.square(100):
    s = s.union({q})
```

Fixes #6886, if we decide to do this.
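For reference, one way to time the snippet above is with `timeit` (a minimal sketch added here for context; the exact harness used for the ~50% figure isn't shown in the PR):

```python
import timeit

import cirq

def build_with_copy_on_change() -> set:
    # Rebuild the set via copy-on-change unions, as in the benchmark above.
    s = set()
    for q in cirq.GridQubit.square(100):  # 10,000 qubits
        s = s.union({q})
    return s

# Average over a few runs; the ~50% claim would come from comparing this
# number before and after the hash change.
print(timeit.timeit(build_with_copy_on_change, number=3) / 3, "seconds per run")
```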
@daxfohl requested review from vtomole and a team as code owners January 1, 2025 19:38
@daxfohl requested a review from mhucka January 1, 2025 19:38
@CirqBot added the size: S (10 < lines changed < 50) label Jan 1, 2025

codecov bot commented Jan 2, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 97.86%. Comparing base (c5d29fe) to head (ac5a752).

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #6908   +/-   ##
=======================================
  Coverage   97.86%   97.86%           
=======================================
  Files        1084     1084           
  Lines       94290    94308   +18     
=======================================
+ Hits        92280    92298   +18     
  Misses       2010     2010           

☔ View full report in Codecov by Sentry.

Comment on lines +41 to +42
# This approach seems to perform better than traditional "random" hash in `Set`
# operations for typical circuits, as it reduces bucket collisions. Caveat: it does not
Contributor

How did you evaluate this reduction in bucket collisions? It would be good to show this explicitly before we decide to abandon the standard tuple hash.

@daxfohl (Collaborator, Author) commented Jan 2, 2025

Test code is up in the description. It's about 50% faster with this implementation.

One note: it seems to be faster only for copy-on-change ops like `s = s.union({q})`. It doesn't seem to have any effect when we operate on sets mutably, like `s |= {q}`. But given most of our stuff is immutable, we see a lot more of the former in our codebase.
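For illustration, a rough comparison of the two update styles could look like the following (my sketch, not from the PR; the grid size and repeat counts are arbitrary):

```python
import timeit

import cirq

qubits = cirq.GridQubit.square(50)  # 2,500 qubits; size chosen arbitrarily

def copy_on_change() -> set:
    s = set()
    for q in qubits:
        s = s.union({q})  # builds a new set each iteration, re-inserting all existing entries
    return s

def in_place() -> set:
    s = set()
    for q in qubits:
        s |= {q}  # updates the existing set in place
    return s

print("copy-on-change:", timeit.timeit(copy_on_change, number=5))
print("in-place:      ", timeit.timeit(in_place, number=5))
```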

Comment on lines 60 to 70
```python
square_index = max(abs_row, abs_col)
inner_square_side_len = square_index * 2 - 1
outer_square_side_len = inner_square_side_len + 2
inner_square_area = inner_square_side_len**2
if abs_row == square_index:
    offset = 0 if row < 0 else outer_square_side_len
    i = inner_square_area + offset + (col + square_index)
else:
    offset = (2 * outer_square_side_len) + (0 if col < 0 else inner_square_side_len)
    i = inner_square_area + offset + (row + (square_index - 1))
self._hash = hash(i)
```
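For reference, here is a standalone sketch of the square-shell enumeration used in the excerpt above, with a check that it assigns distinct consecutive indices to each coordinate (my illustration, not part of the diff):

```python
def square_index_of(row: int, col: int) -> int:
    """Maps (row, col) to a unique index by walking outward in square shells."""
    if row == 0 and col == 0:
        return 0
    abs_row, abs_col = abs(row), abs(col)
    square_index = max(abs_row, abs_col)          # which shell the point lies on
    inner_square_side_len = square_index * 2 - 1  # side length of the previous shell's bounding square
    outer_square_side_len = inner_square_side_len + 2
    inner_square_area = inner_square_side_len**2  # indices below this are used by inner shells
    if abs_row == square_index:
        # Top and bottom edges of the shell (2 * outer_side points in total).
        offset = 0 if row < 0 else outer_square_side_len
        return inner_square_area + offset + (col + square_index)
    # Left and right edges, excluding the corners already counted above.
    offset = (2 * outer_square_side_len) + (0 if col < 0 else inner_square_side_len)
    return inner_square_area + offset + (row + (square_index - 1))

# Every coordinate with max(|row|, |col|) <= 10 gets a distinct index in 0..440.
indices = {square_index_of(r, c) for r in range(-10, 11) for c in range(-10, 11)}
assert indices == set(range(21 * 21))
```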
Contributor

It looks like this is almost 3x slower than the current tuple hash, which is quite a big regression, so unless we can really show that this reduces hash collisions, I'm not sure we would want to make this change.

```
In [1]: def tuple_hash(row, col, d):
   ...:     return hash((row, col, d))
   ...:

In [2]: def square_hash(row, col, d):
   ...:     if row == 0 and col == 0:
   ...:         return 0
   ...:     abs_row = abs(row)
   ...:     abs_col = abs(col)
   ...:     square_index = max(abs_row, abs_col)
   ...:     inner_square_side_len = square_index * 2 - 1
   ...:     outer_square_side_len = inner_square_side_len + 2
   ...:     inner_square_area = inner_square_side_len**2
   ...:     if abs_row == square_index:
   ...:         offset = 0 if row < 0 else outer_square_side_len
   ...:         i = inner_square_area + offset + (col + square_index)
   ...:     else:
   ...:         offset = (2 * outer_square_side_len) + (0 if col < 0 else inner_square_side_len)
   ...:         i = inner_square_area + offset + (row + (square_index - 1))
   ...:     return hash(i)
   ...:

In [3]: %timeit [tuple_hash(r, c, d) for r in range(20) for c in range(20) for d in [2, 3, 4]]
151 µs ± 427 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)

In [4]: %timeit [square_hash(r, c, d) for r in range(20) for c in range(20) for d in [2, 3, 4]]
437 µs ± 2.37 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
```
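To quantify the bucket-collision question raised above, one could count how many entries land in already-occupied buckets under each scheme. A minimal sketch, with the caveats that it models CPython's bucketing as `hash % table_size`, ignores open-addressing probe sequences, and uses `row * 100 + col` as a stand-in small-integer index rather than the PR's exact shell mapping:

```python
from collections import Counter

import cirq

def bucket_collisions(hashes, table_size=1 << 15):
    # Count entries whose bucket (hash % table_size) is already occupied.
    counts = Counter(h % table_size for h in hashes)
    return sum(c - 1 for c in counts.values() if c > 1)

qubits = cirq.GridQubit.square(100)  # 10,000 qubits
tuple_hashes = [hash((q.row, q.col, q.dimension)) for q in qubits]  # tuple-style hash
index_hashes = [hash(q.row * 100 + q.col) for q in qubits]          # small-integer stand-in

print("tuple-style collisions:", bucket_collisions(tuple_hashes))
print("index-style collisions:", bucket_collisions(index_hashes))
```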

@daxfohl (Collaborator, Author)

I'm not married to it. It was something I noticed while looking into creating very wide circuits, and I got nerd-sniped. It's a reasonable optimization for copy-on-change operations on large sets, but if we want to stick with the existing approach, that's completely justifiable.

@daxfohl marked this pull request as draft January 6, 2025 17:28
Labels
size: S (10 < lines changed < 50)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Make Line and Grid Qubit hashes faster for common set ops
3 participants