Merge branch 'main' into dhtv2
iand committed Dec 6, 2023
2 parents 9433ba4 + ba5ba1a commit 86480e2
Showing 139 changed files with 5,616 additions and 9,583 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/gateway-conformance.yml
@@ -16,7 +16,7 @@ jobs:
steps:
# 1. Download the gateway-conformance fixtures
- name: Download gateway-conformance fixtures
uses: ipfs/gateway-conformance/.github/actions/extract-fixtures@v0.3
uses: ipfs/gateway-conformance/.github/actions/extract-fixtures@v0.4
with:
output: fixtures
merged: true
@@ -40,7 +40,7 @@ jobs:

# 4. Run the gateway-conformance tests
- name: Run gateway-conformance tests
uses: ipfs/gateway-conformance/.github/actions/test@v0.3
uses: ipfs/gateway-conformance/.github/actions/test@v0.4
with:
gateway-url: http://127.0.0.1:8040
json: output.json
83 changes: 82 additions & 1 deletion CHANGELOG.md
@@ -23,6 +23,87 @@ The following emojis are used to highlight certain changes:

### Removed

### Security

## [v0.16.0]

### Changed

* 🛠 `boxo/namesys`: now fails when multiple valid DNSLink entries are found for the same domain. Previously this caused undefined behavior; now an error is returned, in accordance with the [specification](https://dnslink.dev/).

### Removed

* 🛠 `boxo/gateway`: removed support for undocumented legacy `ipfs-404.html`. Use [`_redirects`](https://specs.ipfs.tech/http-gateways/web-redirects-file/) instead.
* 🛠 `boxo/namesys`: removed support for legacy DNSLink entries at the root of the domain. Use [`_dnslink.` TXT record](https://docs.ipfs.tech/concepts/dnslink/) instead.
* 🛠 `boxo/coreapi`, an intrinsic part of Kubo, has been removed and moved to `kubo/core/coreiface`.

### Fixed

* `boxo/gateway`
* a panic (which is recovered) could sporadically be triggered inside a CAR request, if the right [conditions were met](https://github.com/ipfs/boxo/pull/511).
* no longer emits `http: superfluous response.WriteHeader` warnings when an error happens.

## [v0.15.0]

### Changed

* 🛠 Bumped to [`go-libp2p` 0.32](https://github.com/libp2p/go-libp2p/releases/tag/v0.32.0).

## [v0.14.0]

### Added

* `boxo/gateway`:
* A new `WithResolver(...)` option can be used with `NewBlocksBackend(...)` allowing the user to pass their custom `Resolver` implementation.
* The gateway now sets a `Cache-Control` header for requests under the `/ipns/` namespace if the TTL for the corresponding IPNS Records or DNSLink entries is known.
* `boxo/bitswap/client`:
* A new `WithoutDuplicatedBlockStats()` option can be used with `bitswap.New` and `bsclient.New`. This disables accounting for duplicated blocks, which requires a `blockstore.Has()` lookup for every received block and can therefore impact performance (see the sketch after this list).
* ✨ Migrated repositories into Boxo
* [`github.com/ipfs/kubo/peering`](https://pkg.go.dev/github.com/ipfs/kubo/peering) => [`./peering`](./peering)
A service which establishes, oversees and maintains long-lived connections.
* [`github.com/ipfs/kubo/core/bootstrap`](https://pkg.go.dev/github.com/ipfs/kubo/core/bootstrap) => [`./bootstrap`](./bootstrap)
A service that maintains connections to a number of bootstrap peers.
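
Below is a minimal sketch of how the new bitswap option might be wired up. `WithoutDuplicatedBlockStats()` and the `New(ctx, network, bstore, ...)` signature come from this release; the import aliases and the surrounding helper are illustrative assumptions.

```go
package example

import (
	"context"

	bsclient "github.com/ipfs/boxo/bitswap/client"
	bsnet "github.com/ipfs/boxo/bitswap/network"
	"github.com/ipfs/boxo/blockstore"
)

// newBitswapClient builds a bitswap client that skips the per-block
// blockstore.Has() lookup used only for the duplicated-blocks statistic.
func newBitswapClient(ctx context.Context, net bsnet.BitSwapNetwork, bstore blockstore.Blockstore) *bsclient.Client {
	return bsclient.New(ctx, net, bstore,
		bsclient.WithoutDuplicatedBlockStats(),
	)
}
```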

### Changed

* `boxo/gateway`
* 🛠 The `IPFSBackend` interface was updated to make the responses of the
`Head` method more explicit. It now returns a `HeadResponse` instead of a
`files.Node`.
* `boxo/routing/http/client.Client` is now exported. This means you can now pass it between functions or store it in a struct.
* 🛠 The `path` package has been massively refactored. With this refactor, we have
condensed the different path-related and/or Kubo-specific packages under a single generic one. Therefore, there
are many breaking changes. Please consult the [documentation](https://pkg.go.dev/github.com/ipfs/boxo/path)
for more details on how to use the new package.
* Note: content paths created with `boxo/path` are automatically normalized (see the example after this list):
- Replace multiple slashes with a single slash.
- Eliminate each `.` path name element (the current directory).
- Eliminate each inner `..` path name element (the parent directory) along with the non-`..` element that precedes it.
- Eliminate `..` elements that begin a rooted path: that is, replace "`/..`" by "`/`" at the beginning of a path.
* 🛠 The signature of `CoreAPI.ResolvePath` in `coreiface` has changed to now return
the remainder segments as a second return value, matching the signature of `resolver.ResolveToLastNode`.
* 🛠 `routing/http/client.FindPeers` now returns `iter.ResultIter[types.PeerRecord]` instead of `iter.ResultIter[types.Record]`. The specification indicates that records for this method will always be Peer Records.
* 🛠 The `namesys` package has been refactored. The following are the largest modifications:
* The options in `coreiface/options/namesys` have been moved to `namesys` and their names
have been made more consistent.
* Many of the exported structs and functions have been renamed in order to be consistent with
the remaining packages.
* `namesys.Resolver.Resolve` now returns a TTL, in addition to the resolved path. If the
TTL is unknown, 0 is returned. `IPNSResolver` is able to resolve a TTL, while `DNSResolver`
is not.
* `namesys/resolver.ResolveIPNS` has been moved to `namesys.ResolveIPNS` and now returns a TTL
in addition to the resolved path.
* `boxo/ipns` record defaults follow recommendations from the [IPNS Record Specification](https://specs.ipfs.tech/ipns/ipns-record/#ipns-record):
* `DefaultRecordTTL` is now set to `1h`
* `DefaultRecordLifetime` follows the increased expiration window of Amino DHT ([go-libp2p-kad-dht#793](https://github.com/libp2p/go-libp2p-kad-dht/pull/793)) and is set to `48h`
* 🛠 The `gateway`'s `IPFSBackend.ResolveMutable` is now expected to return a TTL in addition to
the resolved path. If the TTL is unknown, 0 should be returned.
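
The normalization rules above mirror the behavior of Go's standard `path.Clean`. A quick, standard-library-only illustration (the `<cid>` placeholder is not a real CID; `boxo/path` applies equivalent cleaning when constructing content paths):

```go
package main

import (
	"fmt"
	"path"
)

func main() {
	// Multiple slashes, "." elements and inner ".." elements are eliminated.
	fmt.Println(path.Clean("/ipfs/<cid>/a//b/./c/../d")) // /ipfs/<cid>/a/b/d

	// A ".." that begins a rooted path is dropped: "/.." becomes "/".
	fmt.Println(path.Clean("/../ipns/example.com")) // /ipns/example.com
}
```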

### Removed

* 🛠 `util.MultiErr` has been removed. Please use Go's native support for wrapping errors, or `errors.Join` instead.
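
As a reference point for the migration, a minimal `errors.Join` example using only the standard library:

```go
package main

import (
	"errors"
	"fmt"
)

func main() {
	errA := errors.New("first failure")
	errB := errors.New("second failure")

	// errors.Join aggregates multiple errors into one, replacing util.MultiErr-style helpers.
	err := errors.Join(errA, errB)

	fmt.Println(err)                  // prints both messages, one per line
	fmt.Println(errors.Is(err, errA)) // true: joined errors still match with errors.Is
}
```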

### Fixed

### Security
@@ -243,7 +324,7 @@ None.
- `InternalKeys`
- 🛠 `provider/batched.New` has been moved to `provider.New` and its arguments have been changed. (https://github.com/ipfs/boxo/pulls/273)
- A routing system is now passed with the `provider.Online` option; by default the system runs in offline mode (it only pushes items onto the queue).
- When using `provider.Online` calling the `.Run` method is not required anymore, the background worker is implicitely started in the background by `provider.New`.
- When using `provider.Online` calling the `.Run` method is not required anymore, the background worker is implicitly started in the background by `provider.New`.
- You no longer have to pass a queue; you pass a `datastore.Datastore` exclusively.
- 🛠 `provider.NewOfflineProvider` has been renamed to `provider.NewNoopProvider` to show more clearly that it does nothing. (https://github.com/ipfs/boxo/pulls/273)
- 🛠 `provider.Provider` and `provider.Reprovider` have been merged under one `provider.System`. (https://github.com/ipfs/boxo/pulls/273)
33 changes: 25 additions & 8 deletions README.md
@@ -62,6 +62,7 @@ Boxo powers [Kubo](https://github.com/ipfs/kubo), which is [the most popular IPF
so its code has been battle-tested on the IPFS network for years, and is well-understood by the community.

### Motivation

**TL;DR** The goal of this repo is to help people build things. Previously users struggled to find existing useful code or to figure out how to use what they did find. We observed many running Kubo and using its HTTP RPC API. This repo aims to do better. We're taking the libraries that many were already effectively relying on in production and making them more easily discoverable and usable.

The maintainers primarily aim to help people trying to build with IPFS in Go who were previously either giving up or relying on the [Kubo HTTP RPC API](https://docs.ipfs.tech/reference/kubo/rpc/). Some of these people will end up being better served by IPFS tooling in other languages (e.g., JavaScript, Rust, Java, Python), but for those who are either looking to write in Go or to leverage the set of IPFS tooling we already have in Go we’d like to make their lives easier.
@@ -73,6 +74,7 @@ Boxo is not exhaustive nor comprehensive--there are plenty of useful IPFS protoc
More details can also be found in the [Rationale FAQ](./docs/FAQ.md#rationale-faq).

## Scope

### What kind of components does Boxo have?

Boxo includes high-quality components useful for interacting with IPFS protocols, public and private IPFS networks, and content-addressed data, such as:
@@ -86,20 +88,23 @@ Boxo includes high-quality components useful for interacting with IPFS protocols
Boxo aims to provide a cohesive interface into these components. Note that not all of the underlying components necessarily reside in this repository.

### Does Boxo == IPFS?
No. This repo houses some IPFS functionality written in Go that has been useful in practice, and is maintained by a group that has long term commitments to the IPFS project

### Is everything related to IPFS in the Go ecosystem in this repo?
No. This repo houses some IPFS functionality written in Go that has been useful in practice, and is maintained by a group that has long term commitments to the IPFS project

No. Not everything related to IPFS is intended to be in Boxo. View it as a starter toolbox (potentially among multiple). If you’d like to build an IPFS implementation with Go, here are some tools you might want that are maintained by a group that has long term commitments to the IPFS project. There are certainly repos that others maintain that aren't included here (e.g., ipfs/go-car) which are still useful to IPFS implementations. It's expected and fine for new IPFS functionality to be developed that won't be part of Boxo.
### Is everything related to IPFS in the Go ecosystem in this repo?

No. Not everything related to IPFS is intended to be in Boxo. View it as a starter toolbox (potentially among multiple). If you’d like to build an IPFS implementation with Go, here are some tools you might want that are maintained by a group that has long term commitments to the IPFS project. There are certainly repos that others maintain that aren't included here (e.g., ipfs/go-car) which are still useful to IPFS implementations. It's expected and fine for new IPFS functionality to be developed that won't be part of Boxo.

## Consuming

### Getting started

See [examples](./examples/README.md).

If you are migrating to Boxo, see [Migrating to Boxo](#migrating-to-boxo).

### Migrating to Boxo

Many Go modules under github.com/ipfs have moved here. Boxo provides a tool to ease this migration, which does most of the work for you:

* `cd` into the root directory of your module (where the `go.mod` file is)
@@ -116,10 +121,13 @@ We recommend upgrading to v0.8.0 first, and _then_ upgrading to the latest Boxo
If you encounter any challenges, please [open an issue](https://github.com/ipfs/boxo/issues/new/choose) and Boxo maintainers will help you.

### Deprecations & Breaking Changes

See [RELEASE.md](./RELEASE.md).

## Development

### Should I add my IPFS component to Boxo?

We happily accept external contributions! However, Boxo maintains a high quality bar, so code accepted into Boxo must meet some minimum maintenance criteria:

* Actively maintained
@@ -137,37 +145,46 @@ We happily accept external contributions! However, Boxo maintains a high quality
If you have some experimental component that you think would benefit the IPFS community, we suggest you build the component in your own repository until it's clear that there's community demand for it, and then open an issue/PR in this repository to discuss including it in Boxo.

### Release Process

See [RELEASE.md](./RELEASE.md).

### Why is the code coverage so bad?

The code coverage of this repo is not currently representative of the actual test coverage of this code. Much of the code in this repo is currently covered by integration tests in [Kubo](https://github.com/ipfs/kubo). We are in the process of moving those tests here, and as that continues the code coverage will significantly increase.

## General

### Help

If you have questions, feel free to open an issue. You can also find the Boxo maintainers in [Filecoin Slack](https://filecoin.io/slack/) at #Boxo-maintainers. (If you would like to engage via IPFS Discord or ipfs.io Matrix, please drop into the #ipfs-implementers channel/room or file an issue, and we'll get bridging from #Boxo-maintainers to these other chat platforms.)

### What is the response time for issues or PRs filed?
TODO: fill this in. New issues and PRs to this repo are usually looked at on a weekly basis as part of [Kubo triage](https://pl-strflt.notion.site/Kubo-Issue-Triage-Notes-7d4983e8cf294e07b3cc51b0c60ede9a).

New issues and PRs to this repo are usually looked at on a weekly basis as part of [Kubo triage](https://pl-strflt.notion.site/Kubo-Issue-Triage-Notes-7d4983e8cf294e07b3cc51b0c60ede9a). However, the response time may vary.

### What are some projects that depend on this project?
The exhaustive list is https://github.com/ipfs/boxo/network/dependents. Some notable projects include:

The exhaustive list is https://github.com/ipfs/boxo/network/dependents. Some notable projects include:

1. [Kubo](https://github.com/ipfs/kubo), an IPFS implementation in Go
2. [Lotus](https://github.com/filecoin-project/lotus), a Filecoin implementation in Go
3. [Bifrost Gateway](https://github.com/ipfs/bifrost-gateway), a dedicated IPFS gateway
6. [rainbow](https://github.com/ipfs/rainbow), a specialized IPFS gateway
4. [ipfs-check](https://github.com/ipfs-shipyard/ipfs-check), checks IPFS data availability
5. [someguy](https://github.com/ipfs-shipyard/someguy), a dedicated Delegated Routing V1 server and client
3. [Bifrost Gateway](https://github.com/ipfs/bifrost-gateway), a dedicated IPFS Gateway daemon backed by a remote datastore

### Governance and Access
See [CODEOWNERS](./docs/CODEOWNERS) for the current maintainers list. Governance for graduating additional maintainers hasn't been established. Repo permissions are all managed through [ipfs/github-mgmt](https://github.com/ipfs/github-mgmt).

See [CODEOWNERS](./docs/CODEOWNERS) for the current maintainers list. Governance for graduating additional maintainers hasn't been established. Repo permissions are all managed through [ipfs/github-mgmt](https://github.com/ipfs/github-mgmt).

### Why is this named "Boxo"?

See https://github.com/ipfs/boxo/issues/215.

### Additional Docs & FAQs

See [the wiki](https://github.com/ipfs/boxo/wiki).

### License

[SPDX-License-Identifier: Apache-2.0 OR MIT](LICENSE.md)

2 changes: 1 addition & 1 deletion RELEASE.md
@@ -27,7 +27,7 @@ The amount of backporting of a fix depends on the severity of the issue and the
As a result, Boxo maintainers recommend that consumers stay up-to-date with Boxo releases.

### Go Compatibility
At any given point, the Go team supports only the latest two versions of Go released (see https://go.dev/doc/devel/release). Boxo maintainers will strive to maintain compatibilty with the older of the two supported versions, so that Boxo is also compatible with the latest two versions of Go.
At any given point, the Go team supports only the latest two versions of Go released (see https://go.dev/doc/devel/release). Boxo maintainers will strive to maintain compatibility with the older of the two supported versions, so that Boxo is also compatible with the latest two versions of Go.

### Release Criteria
Boxo releases occur _at least_ on every Kubo release. Releases can also be initiated on-demand, regardless of Kubo's release cadence, whenever there are significant changes (new features, refactorings, deprecations, etc.).
36 changes: 31 additions & 5 deletions bitswap/client/client.go
@@ -44,14 +44,21 @@ var log = logging.Logger("bitswap-client")
// bitswap instances
type Option func(*Client)

// ProviderSearchDelay overwrites the global provider search delay
// ProviderSearchDelay sets the initial delay before triggering a provider
// search to find more peers and broadcast the want list. It also partially
// controls the re-broadcast delay when the session idles (does not receive
// any blocks), but re-broadcasts have back-off logic to increase the
// interval. See [defaults.ProvSearchDelay] for the default.
func ProviderSearchDelay(newProvSearchDelay time.Duration) Option {
return func(bs *Client) {
bs.provSearchDelay = newProvSearchDelay
}
}

// RebroadcastDelay overwrites the global provider rebroadcast delay
// RebroadcastDelay sets a custom delay for the periodic search of a random
// want. When the delay elapses, a random CID from the wantlist is chosen and
// the client attempts to find more peers for it, sending them that single
// want. See [defaults.RebroadcastDelay] for the default.
func RebroadcastDelay(newRebroadcastDelay delay.D) Option {
return func(bs *Client) {
bs.rebroadcastDelay = newRebroadcastDelay
@@ -79,6 +86,19 @@ func WithBlockReceivedNotifier(brn BlockReceivedNotifier) Option {
}
}

// WithoutDuplicatedBlockStats disables collecting counts of duplicated blocks
// received. This counter requires triggering a blockstore.Has() call for
// every block received, by launching goroutines in parallel. In the worst
// case (no caching/blooms etc.), this is an expensive call for the datastore
// to answer. In a normal case (caching), it can evict a different block from
// intermediary caches. In the best case, it does not affect performance. Use
// this option if the duplicated-blocks stat is not relevant to you.
func WithoutDuplicatedBlockStats() Option {
return func(bs *Client) {
bs.skipDuplicatedBlocksStats = true
}
}

type BlockReceivedNotifier interface {
// ReceivedBlocks notifies the decision engine that a peer is well-behaving
// and gave us useful data, potentially increasing its score and making us
@@ -155,7 +175,7 @@ func New(parent context.Context, network bsnet.BitSwapNetwork, bstore blockstore
dupMetric: bmetrics.DupHist(ctx),
allMetric: bmetrics.AllHist(ctx),
provSearchDelay: defaults.ProvSearchDelay,
rebroadcastDelay: delay.Fixed(time.Minute),
rebroadcastDelay: delay.Fixed(defaults.RebroadcastDelay),
simulateDontHavesOnTimeout: true,
}

@@ -226,6 +246,9 @@ type Client struct {

// whether we should actually simulate dont haves on request timeout
simulateDontHavesOnTimeout bool

// dupMetric will stay at 0
skipDuplicatedBlocksStats bool
}

type counters struct {
@@ -373,14 +396,17 @@ func (bs *Client) updateReceiveCounters(blocks []blocks.Block) {
// Check which blocks are in the datastore
// (Note: any errors from the blockstore are simply logged out in
// blockstoreHas())
blocksHas := bs.blockstoreHas(blocks)
var blocksHas []bool
if !bs.skipDuplicatedBlocksStats {
blocksHas = bs.blockstoreHas(blocks)
}

bs.counterLk.Lock()
defer bs.counterLk.Unlock()

// Do some accounting for each block
for i, b := range blocks {
has := blocksHas[i]
has := (blocksHas != nil) && blocksHas[i]

blkLen := len(b.RawData())
bs.allMetric.Observe(float64(blkLen))
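The `ProviderSearchDelay` and `RebroadcastDelay` options documented earlier in this file can be combined when constructing the client. A hedged usage sketch (the option constructors, `delay.Fixed`, and the `New(ctx, network, bstore, ...)` signature appear in this diff; the import paths, aliases, and chosen values are assumptions):

```go
package example

import (
	"context"
	"time"

	bsclient "github.com/ipfs/boxo/bitswap/client"
	bsnet "github.com/ipfs/boxo/bitswap/network"
	"github.com/ipfs/boxo/blockstore"
	delay "github.com/ipfs/go-ipfs-delay"
)

// newTunedClient widens both delays for a network where providers answer
// slowly, trading some latency for fewer provider searches and re-broadcasts.
func newTunedClient(ctx context.Context, net bsnet.BitSwapNetwork, bstore blockstore.Blockstore) *bsclient.Client {
	return bsclient.New(ctx, net, bstore,
		// Wait longer before the first provider search for a session.
		bsclient.ProviderSearchDelay(5*time.Second),
		// Re-check a random want (and search for its providers) every 2 minutes.
		bsclient.RebroadcastDelay(delay.Fixed(2*time.Minute)),
	)
}
```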
@@ -218,7 +218,7 @@ func TestAllPeersDoNotHaveBlock(t *testing.T) {
{[]peer.ID{p1}, []cid.Cid{c2}, []cid.Cid{}},
{[]peer.ID{p2}, []cid.Cid{c2}, []cid.Cid{c2}},

// p0 recieved DONT_HAVE for c1 & c2 (but not for c0)
// p0 received DONT_HAVE for c1 & c2 (but not for c0)
{[]peer.ID{p0}, []cid.Cid{c0, c1, c2}, []cid.Cid{c1, c2}},
{[]peer.ID{p0, p1}, []cid.Cid{c0, c1, c2}, []cid.Cid{}},
// Both p0 and p2 received DONT_HAVE for c2
2 changes: 1 addition & 1 deletion bitswap/client/internal/getter/getter.go
@@ -21,7 +21,7 @@ type GetBlocksFunc func(context.Context, []cid.Cid) (<-chan blocks.Block, error)

// SyncGetBlock takes a block cid and an async function for getting several
// blocks that returns a channel, and uses that function to return the
// block syncronously.
// block synchronously.
func SyncGetBlock(p context.Context, k cid.Cid, gb GetBlocksFunc) (blocks.Block, error) {
p, span := internal.StartSpan(p, "Getter.SyncGetBlock")
defer span.End()
@@ -281,7 +281,7 @@ func TestRateLimitingRequests(t *testing.T) {
defer fpn.queriesMadeMutex.Unlock()
if fpn.queriesMade != maxInProcessRequests+1 {
t.Logf("Queries made: %d\n", fpn.queriesMade)
t.Fatal("Did not make all seperate requests")
t.Fatal("Did not make all separate requests")
}
}
