Fix a race condition where interrupting a download (via Ctrl-C) during a
retry attempt could cause a crash. When `enqueueItem()` throws because the
download thread is shutting down, the exception would propagate without
setting `done=true`, causing the `TransferItem` destructor to invoke the
callback a second time.
This triggered an assertion failure in `Callback::rethrow()` with:
`Assertion '!prev' failed` and the error message `cannot enqueue download
request because the download thread is shutting down`.
The fix catches the exception from `enqueueItem()` and calls `fail()` to
properly complete the transfer, ensuring the callback is invoked exactly
once.
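Roughly, the shape of the fix (a simplified, self-contained sketch of the
pattern; illustrative types, not the actual `FileTransfer` code):

```cpp
#include <exception>
#include <functional>
#include <memory>
#include <stdexcept>

// Sketch of the invariant described above: the callback must fire exactly
// once. fail() marks the item done so the destructor stays quiet afterwards.
struct TransferItem
{
    bool done = false;
    std::function<void(std::exception_ptr)> callback;

    void fail(std::exception_ptr ex)
    {
        done = true;
        callback(ex);
    }

    ~TransferItem()
    {
        // Previously this ran a second time when enqueueItem() threw,
        // tripping the `Assertion '!prev' failed` in Callback::rethrow().
        if (!done)
            callback(std::make_exception_ptr(
                std::runtime_error("transfer was never completed")));
    }
};

void enqueueChecked(
    const std::shared_ptr<TransferItem> & item,
    const std::function<void(std::shared_ptr<TransferItem>)> & enqueueItem)
{
    try {
        enqueueItem(item);
    } catch (...) {
        // The fix: complete the transfer here rather than letting the
        // exception escape with done still false.
        item->fail(std::current_exception());
    }
}
```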
This continues the work of formalizing our current JSON docs. Note that
in the process, a few bugs were caught:
- `closureSize` was listed twice, and `closureDownloadSize` was missing
- the `file*` fields should be `download*`: the `file*` names do appear in
the line-oriented `.narinfo` format, but they were renamed in the JSON
format.
We immediately use this in the JSON schemas for Derivation and Deriving
Path, but we cannot yet use it in Store Object Info because those paths
*do* include the store dir currently.
- Uses the more explicit `@ingroup` most of the time, to avoid problems
with nested groups, and to make group membership more explicit (see the
example after this list).
The division into headers is not great for documentation purposes,
so this helps.
- More attention to memory management details
- Various other improvements to doc comments
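For example (a generic Doxygen illustration; the group and function names
here are hypothetical, not taken from this change):

```cpp
/**
 * @defgroup file_transfer File transfer
 * Helpers for downloading and uploading data.
 */

/**
 * Download the given URL into memory.
 *
 * @ingroup file_transfer
 *
 * The caller owns the returned buffer.
 */
std::string download(const std::string & url);
```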
Implement `uploadPart()` for uploading individual parts in S3 multipart
uploads:
- Constructs URL with `?partNumber=N&uploadId=ID` query parameters
- Uploads chunk data with `application/octet-stream` mime type
- Extracts and returns `ETag` from response
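For illustration, the request could be assembled along these lines
(hypothetical helper, not the actual S3 code; the HTTP transfer itself and
reading the `ETag` response header are left out):

```cpp
#include <cstdint>
#include <string>

// Hypothetical sketch: the part number and upload ID go into the query
// string, and the chunk is sent as application/octet-stream. The ETag
// returned for each part must be kept to complete the multipart upload.
struct PartRequest
{
    std::string url;
    std::string mimeType;
    std::string data;
};

PartRequest makeUploadPartRequest(
    const std::string & objectUrl,
    uint64_t partNumber,
    const std::string & uploadId,
    std::string data)
{
    return {
        objectUrl + "?partNumber=" + std::to_string(partNumber) + "&uploadId=" + uploadId,
        "application/octet-stream",
        std::move(data),
    };
}
```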
This is a good default (the methods that allow for an arbitrary choice
of source accessor are generally preferable both to implement and to
use), and it also pays its way by allowing us to delete *both* the
`DummyStore` and `LocalStore` implementations.
Introduces `scanForReferencesDeep` to provide per-file granularity when
scanning for store path references, enabling better diagnostics for
cycle detection and `nix why-depends --precise`.
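The idea, in a self-contained sketch (illustrative names and signature, not
the actual implementation, which works on store paths rather than plain
strings):

```cpp
#include <filesystem>
#include <fstream>
#include <iterator>
#include <map>
#include <set>
#include <string>

// Sketch of "deep" scanning: instead of one flat set of references for the
// whole store object, record which file each reference was found in.
std::map<std::filesystem::path, std::set<std::string>> scanDeepSketch(
    const std::filesystem::path & root,
    const std::set<std::string> & hashParts)
{
    std::map<std::filesystem::path, std::set<std::string>> hits;
    for (auto & entry : std::filesystem::recursive_directory_iterator(root)) {
        if (!entry.is_regular_file())
            continue;
        std::ifstream in(entry.path(), std::ios::binary);
        std::string contents(
            (std::istreambuf_iterator<char>(in)), std::istreambuf_iterator<char>());
        for (auto & h : hashParts)
            if (contents.find(h) != std::string::npos)
                hits[entry.path()].insert(h);
    }
    return hits;
}
```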
Implement `abortMultipartUpload()` for cleaning up incomplete multipart
uploads on error:
- Constructs URL with `?uploadId=ID` query parameter
- Issues `DELETE` request to abort the multipart upload
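Sketched the same way as the part upload above (hypothetical helper, not the
actual S3 code):

```cpp
#include <string>

// Hypothetical sketch: aborting just needs the upload ID in the query
// string; the request is then issued as an HTTP DELETE.
std::string makeAbortMultipartUrl(const std::string & objectUrl, const std::string & uploadId)
{
    return objectUrl + "?uploadId=" + uploadId;
}
```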
With #14314, in some places in the parser we started using C++ objects
directly rather than pointers. In those places, lines like `$$ = $1` now
imply a copy we don't need, so this commit changes them to
`$$ = std::move($1)`.
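An illustrative grammar fragment (not a rule from the actual parser) showing
the difference in a Bison action with C++ semantic values:

```
/* Illustrative only. With object (non-pointer) semantic values,
   `$$ = $1;` copies the accumulated list on every reduction, while
   std::move just transfers it. */
expr_list
  : expr_list expr { $$ = std::move($1); $$.emplace_back(std::move($2)); }
  ;
```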
Previously it used the `ThreadPool` default,
i.e. `std::thread::hardware_concurrency()`. But copying signatures is
not primarily CPU-bound, so it makes more sense to use the
`http-connections` setting (since we're typically copying from/to a
binary cache).
The `showBytes()` function was redundant with `renderSize()` as the
latter automatically selects the appropriate unit (KiB, MiB, GiB, etc.)
based on the value, whereas `showBytes()` always formatted as MiB
regardless of size.
Co-authored-by: Bernardo Meurer Costa <beme@anthropic.com>