
RC_1_2: Crashes when checking torrent(s) with piece size of 256.0 MiB #7735

Open · xavier2k6 opened this issue Sep 28, 2024 · 14 comments

xavier2k6 (Contributor) commented Sep 28, 2024

Please provide the following information

libtorrent version (or branch): RC_1_2

platform/architecture: Windows x64

compiler and compiler version: N/A

Please describe what symptom you see, what you would expect to see instead, and how to reproduce it.


Torrents with a 256.0 MiB piece size are causing crashes during checking for users of qBittorrent builds based on libtorrent 1.2.x.

See qbittorrent/qBittorrent#21011 and qbittorrent/qBittorrent#19745, among others.

RC_2_0 has no issue loading/checking these torrents.

RC_1_2/libtorrent 1.2.x based builds should not allow any torrent over their supported limit of 128.0 MiB to be loaded, checked, etc.

Torrents with 512 MiB/1 GiB piece sizes can be loaded but get stuck on checking resume data, even though the qBittorrent logs report an invalid piece size.

With a 256.0 MiB piece size, the cache/total buffer keeps growing and is never freed, which leads to the pagefile/RAM being fully consumed and an eventual crash once the system runs out of resources. This has happened even on a system with 128 GiB of RAM installed. See qbittorrent/qBittorrent#19745 (comment).

I've attached a number of torrents with a 256.0 MiB piece size below for testing:

256.0 MiB Torrents.zip

arvidn (Owner) commented Sep 29, 2024

thanks for this detailed report!

glassez (Contributor) commented Sep 29, 2024

> thanks for this detailed report!

I have investigated this issue recently, so I can provide more technical details later.

glassez (Contributor) commented Sep 29, 2024

The following fields are too small. When you check a torrent with a piece size >= 256 MiB, some of them can overflow, which leads to a memory leak (or a crash due to a failed assertion when using a development build).

```cpp
// the number of dirty blocks in this piece
std::uint32_t num_dirty:14;

// the number of blocks in the cache for this piece
std::uint32_t num_blocks:14;
```

Increasing their size to 15 bits is enough for libtorrent to successfully create and run torrents with a piece size of 256 MiB.
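To see why 14 bits is exactly the breaking point: libtorrent's default block size is 16 KiB, so a 256 MiB piece contains 256 MiB / 16 KiB = 16384 = 2^14 blocks, one more than a 14-bit field's maximum of 16383. A minimal standalone sketch (not libtorrent code) of the resulting wrap-around:

```cpp
#include <cstdint>
#include <iostream>

// A 14-bit unsigned bitfield, like num_blocks above, stores values
// modulo 2^14, so assigning 16384 silently wraps to 0.
struct counters
{
	std::uint32_t num_blocks:14;
};

int main()
{
	int const block_size = 16 * 1024;             // libtorrent's default block size
	int const piece_size = 256 * 1024 * 1024;     // 256 MiB
	int const blocks_in_piece = piece_size / block_size; // 16384

	counters c{};
	c.num_blocks = blocks_in_piece; // wraps modulo 2^14
	std::cout << blocks_in_piece << " -> " << c.num_blocks << "\n"; // prints: 16384 -> 0
}
```

This is consistent with the lldb output later in the thread, where the recomputed num_blocks is 16384 while the stored p.num_blocks reads back as 0.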

glassez (Contributor) commented Sep 29, 2024

> Increasing their size to 15 bits is enough for libtorrent to successfully create and run torrents with a piece size of 256 MiB.

A size of 16 bits allows successfully creating torrents with a 512 MiB piece size. The following field's size needs to be increased too:

```cpp
// the number of blocks that have >= 1 refcount
std::uint32_t pinned:15;
```
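The pattern generalizes: each doubling of the piece size doubles the block count and requires one more bit, since the counter must hold the block count itself (a power of two), not just count minus one. A small standalone sketch (assuming the default 16 KiB block size) tabulating the requirement:

```cpp
#include <cstdio>

// Bits needed to hold blocks_in_piece = piece_size / 16 KiB.
int bits_needed(unsigned blocks)
{
	int bits = 0;
	while (blocks >> bits) ++bits; // index of highest set bit, plus one
	return bits;
}

int main()
{
	for (unsigned mib : {128u, 256u, 512u, 1024u})
	{
		unsigned const blocks = mib * 1024 / 16; // MiB -> 16 KiB blocks
		std::printf("%4u MiB piece -> %5u blocks -> %d bits\n",
			mib, blocks, bits_needed(blocks));
	}
}
```

The output lines up with the findings in this thread: 14 bits suffice for 128 MiB pieces, 256 MiB needs 15, 512 MiB needs 16, and 1024 MiB needs 17, which is also why blocks_in_piece (below) outgrows std::uint16_t.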

glassez (Contributor) commented Sep 29, 2024

Further increasing their size allowed me to successfully create a torrent with a 1024 MiB piece size.
This also required adjusting the following lines:

```cpp
// the total number of blocks in this piece (and the number
// of elements in the blocks array)
// this field doesn't change for the lifetime of the cached_piece_entry
// and may be read by multiple threads without synchronization.
// Therefore this field may not be mixed with any other fields in a
// bitfield
std::uint16_t blocks_in_piece = 0;
```

```cpp
pe.blocks_in_piece = aux::numeric_cast<std::uint16_t>(blocks_in_piece);
```
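The reason: a 1024 MiB piece holds 2^30 / 2^14 = 65536 blocks, one more than std::uint16_t can represent, so the numeric_cast above rejects the value. A compile-time check of the arithmetic (standalone sketch, C++17):

```cpp
#include <cstdint>
#include <limits>

// 1024 MiB / 16 KiB = 65536 blocks, but uint16_t tops out at 65535,
// so blocks_in_piece must be widened (e.g. to std::uint32_t).
static_assert(1024 * 1024 * 1024 / (16 * 1024) == 65536);
static_assert(std::numeric_limits<std::uint16_t>::max() == 65535);
```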

glassez (Contributor) commented Sep 29, 2024

Note that although I was able to successfully create torrents with piece sizes up to 1024 MiB, libtorrent-1.2 still cannot run torrents with a piece size of more than 256 MiB due to limitations in other parts of the code. Perhaps those can be eliminated as easily as the ones above, but I have not progressed that far in this investigation.

I'm not sure it is possible to fix these shortcomings in the current versions of libtorrent. Won't this affect the ABI compatibility @arvidn cares about? In addition, @arvidn seems to be concerned about maximum packing of data types, which is why all these bitfields are here. So I'm not sure how this could be changed in an acceptable way.

arvidn (Owner) commented Sep 29, 2024

This is the assert that fails:

```
assertion failed. Please file a bugreport at https://github.com/arvidn/libtorrent/issues
Please include the following information:

version: 1.2.19.0-e2d32cd44

file: '../src/block_cache.cpp'
line: 1605
function: check_invariant
expression: num_blocks == int(p.num_blocks)

stack:
1: libtorrent::assert_fail(char const*, int, char const*, char const*, char const*, int)
2: libtorrent::block_cache::check_invariant() const
3: void libtorrent::invariant_access::check_invariant<libtorrent::block_cache>(libtorrent::block_cache const&)
4: void libtorrent::check_invariant<libtorrent::block_cache>(libtorrent::block_cache const&)
5: libtorrent::invariant_checker_impl<libtorrent::block_cache>::~invariant_checker_impl()
6: libtorrent::invariant_checker_impl<libtorrent::block_cache>::~invariant_checker_impl()
7: libtorrent::block_cache::insert_blocks(libtorrent::cached_piece_entry*, int, libtorrent::span<libtorrent::span<char> const>, libtorrent::disk_io_job*, int)
8: libtorrent::disk_io_thread::do_hash(libtorrent::disk_io_job*, libtorrent::tailqueue<libtorrent::disk_io_job>&)
9: libtorrent::disk_io_thread::perform_job(libtorrent::disk_io_job*, libtorrent::tailqueue<libtorrent::disk_io_job>&)
10: libtorrent::disk_io_thread::execute_job(libtorrent::disk_io_job*)
```

```
(lldb) p num_blocks
(int) 16384
(lldb) p p.num_blocks
(const uint32_t) 0
```
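(The lldb output matches the 14-bit truncation described above: 16384 mod 2^14 = 0, so the stored p.num_blocks reads back as 0 while the recomputed num_blocks is 16384.)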

arvidn (Owner) commented Sep 29, 2024

At the very least, libtorrent should check torrent files earlier and reject them if they exceed these limits.
It might make sense to bump some of the limits, and certainly to have regression tests with torrents that push them.
However, I also think an important consideration is whether it ever makes sense to have piece sizes this large. It adds friction to sharing data: you have to download more bytes before you can start uploading them.
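A minimal sketch of what such an up-front check could look like (hypothetical names, not the actual patch in the PRs referenced below; RC_1_2's 128 MiB limit is taken from the discussion above):

```cpp
#include <stdexcept>
#include <string>

// Hypothetical illustration: reject an oversized piece length before
// the torrent is added, rather than overflowing cache counters later.
constexpr int max_supported_piece_size = 128 * 1024 * 1024; // RC_1_2 limit

void validate_piece_size(int const piece_length)
{
	if (piece_length > max_supported_piece_size)
		throw std::runtime_error("unsupported piece size: "
			+ std::to_string(piece_length));
}
```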

glassez (Contributor) commented Sep 29, 2024

> At the very least, libtorrent should check torrent files earlier and reject them if they exceed these limits.

👍

arvidn (Owner) commented Sep 29, 2024

#7736

arvidn (Owner) commented Sep 30, 2024

#7740

xavier2k6 (Contributor, Author) commented Oct 5, 2024

> #7740

@arvidn This blocks/rejects unsupported piece sizes above 128 MiB from being loaded on the RC_1_2 branch. It needs to be ported to the RC_2_0 branch, which will still load torrents with a piece size of 256 MiB.

arvidn (Owner) commented Oct 21, 2024

I thought the RC_2_0 branch actually supported 256 MiB pieces (but not larger). As far as I can tell, the check when loading torrents also worked correctly there.

xavier2k6 (Contributor, Author) commented

@arvidn

  • I believe the RC_2_0 branch has been limited to creating torrents with a piece size <= 128 MiB, on par with the RC_1_2 branch.
    However, unlike the RC_1_2 branch, the RC_2_0 branch allows loading/checking of torrents with a piece size <= 256 MiB.

If torrents with a 256 MiB piece size are not allowed to be created via the RC_2_0 branch, then why are they allowed to be loaded/checked?

If the RC_2_0 branch truly supports a 256 MiB piece size, which in testing previously had no issues, then creation should also be allowed.
