A normal user will search for files by filename, but the primary way of tracking a file is by its hash. The big problem is that filenames can be long, and indexing every file on every peer may result in large memory consumption. Moreover, if there are too many search queries whose results take a long time to find, the network will be clogged by these requests. So I am currently thinking of building a trie of filenames: each peer keeps a trie not only of its own files but possibly also of its neighbors' (or even second neighbors'), which will require requests to neighbors. We also need to cook up periodic sync requests with various peers. It can be assumed that the graph is, by construction, a "nice" graph for this kind of recurring request. Folder names also need to be included.
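A minimal sketch of the filename trie idea: each leaf records which peers hold a file with that name, so a prefix search returns candidate peers to query. The `TrieNode`/`FilenameTrie` names and the per-node `owners` set are assumptions for illustration, not part of any existing implementation.

```python
class TrieNode:
    def __init__(self):
        self.children = {}   # char -> TrieNode
        self.owners = set()  # peer IDs holding a file whose name ends here

class FilenameTrie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, name, peer_id):
        """Register that `peer_id` holds a file (or folder) called `name`."""
        node = self.root
        for ch in name:
            node = node.children.setdefault(ch, TrieNode())
        node.owners.add(peer_id)

    def search(self, prefix):
        """Return the set of peers holding any name starting with `prefix`."""
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return set()
            node = node.children[ch]
        # Collect owners from the whole subtree under the prefix.
        owners, stack = set(), [node]
        while stack:
            n = stack.pop()
            owners |= n.owners
            stack.extend(n.children.values())
        return owners
```

Merging a neighbor's trie during a sync round could then amount to replaying its `(name, peer_id)` pairs into the local trie, which keeps the memory cost proportional to the names actually indexed rather than to the number of peers.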
The second problem is keeping track of hashes. Ideally, we would not store all hashes, and would reduce the number of file hashes that need to be computed per query.
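One way to avoid recomputing hashes per query is to hash lazily and cache the digest keyed by the file's size and modification time, so a hash is only recalculated when the file actually changed. This is a sketch under that assumption; the cache structure and SHA-256 choice are illustrative, not decided.

```python
import hashlib
import os

# path -> (mtime, size, digest); only files actually queried get hashed.
_hash_cache = {}

def file_hash(path):
    """Return the SHA-256 hex digest of `path`, recomputing only on change."""
    st = os.stat(path)
    cached = _hash_cache.get(path)
    if cached and cached[0] == st.st_mtime and cached[1] == st.st_size:
        return cached[2]
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 64 KiB chunks so large files don't load fully into memory.
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    digest = h.hexdigest()
    _hash_cache[path] = (st.st_mtime, st.st_size, digest)
    return digest
```

Note that an mtime+size check can miss pathological edits that preserve both, so a periodic full re-hash during the sync rounds mentioned above would be a reasonable safety net.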