Fix a race condition where interrupting a download (via Ctrl-C) during a
retry attempt could cause a crash. When `enqueueItem()` throws because the
download thread is shutting down, the exception would propagate without
setting `done=true`, causing the `TransferItem` destructor to invoke the
callback a second time.
This triggered an assertion failure in `Callback::rethrow()` with:
`Assertion '!prev' failed` and the error message `cannot enqueue download
request because the download thread is shutting down`.
The fix catches the exception from `enqueueItem()` and calls `fail()` to
properly complete the transfer, ensuring the callback is invoked exactly
once.
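A minimal sketch of the shape of the fix, assuming the names from the description above (`enqueueItem()`, `fail()`, `TransferItem`); the exact signatures in the real transfer code may differ:

```cpp
try {
    enqueueItem(item);
} catch (...) {
    // The download thread is shutting down, so enqueueItem() threw.
    // Completing the transfer here sets done = true and delivers the
    // exception through the callback, so the TransferItem destructor
    // will not invoke the callback a second time.
    item->fail(std::current_exception());
}
```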
Some zsh setups (including mine) do not load the
completion if `#compdef` is not on the first line.
So we move the `# shellcheck` comment to the
second line to avoid this issue.
This continues the work of formalizing our current JSON docs. Note that
in the process, a few bugs were caught:
- `closureSize` was listed twice, and `closureDownloadSize` was missing
- `file*` fields should be `download*`. The fields are in fact called
  `file*` in the line-oriented `.narinfo` format, but were renamed to
  `download*` in the JSON format (see the snippet after this list).
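For illustration only, here is how the corrected fields might look when built with nlohmann::json; the values are placeholders, and only the field names reflect the fixes above:

```cpp
#include <nlohmann/json.hpp>

// Hypothetical store object info fragment; all values are placeholders.
nlohmann::json storeObjectInfo = {
    {"downloadHash", "sha256:<hash>"}, // `download*` in JSON, `file*` in `.narinfo`
    {"downloadSize", 1024},
    {"closureSize", 8192},             // was listed twice in the docs
    {"closureDownloadSize", 2048},     // was missing from the docs
};
```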
We immediately use this in the JSON schemas for Derivation and Deriving
Path, but we cannot yet use it in Store Object Info because those paths
*do* include the store dir currently.
- Uses the explicit `@ingroup` most of the time, to avoid problems
  with nested groups and to make group membership clearer (see the
  example after this list).
The division into headers is not great for documentation purposes,
so this helps.
- More attention to memory management details
- Various other improvements to doc comments
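A hypothetical illustration of the convention (group and member names here are made up):

```cpp
/**
 * @defgroup store-api Store API
 * Core operations on stores.
 */

struct StorePath;

/**
 * @ingroup store-api
 *
 * Membership is stated explicitly on each declaration, instead of
 * wrapping whole headers in nested @{ ... @} grouping blocks.
 */
bool isValidPath(const StorePath & path);
```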
Per #7591, the `nix-store --gc --print-dead` command does not provide
any feedback about the amount of disk space that is used by dead store
paths. It looks like this has been the case since 7ab68961e (* Garbage
collector: added an option `--use-atime' to delete paths in...,
2008-09-17).
Update the nix-store documentation to remove the claim that
`nix-store --gc --print-dead` performs this function.
Implement `uploadPart()` for uploading individual parts in S3 multipart
uploads (sketched below):
- Constructs URL with `?partNumber=N&uploadId=ID` query parameters
- Uploads chunk data with `application/octet-stream` mime type
- Extracts and returns `ETag` from response
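A condensed sketch of that flow; `httpPut()` here is a hypothetical transport hook standing in for the real HTTP machinery:

```cpp
#include <stdexcept>
#include <string>
#include <string_view>

// Hypothetical response type and transport hook (assumptions).
struct HttpResponse {
    int status;
    std::string etag; // value of the ETag response header, if present
};
HttpResponse httpPut(
    const std::string & url, std::string_view body, const std::string & mimeType);

// Upload one part and return its ETag, which the caller must retain
// for the final CompleteMultipartUpload request.
std::string uploadPart(
    const std::string & objectUrl,
    const std::string & uploadId,
    unsigned partNumber,
    std::string_view chunk)
{
    auto url = objectUrl
        + "?partNumber=" + std::to_string(partNumber)
        + "&uploadId=" + uploadId;

    auto resp = httpPut(url, chunk, "application/octet-stream");

    if (resp.status != 200 || resp.etag.empty())
        throw std::runtime_error("S3 part upload failed or returned no ETag");

    return resp.etag;
}
```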
This is a good default (the methods that allow for an arbitrary choice
of source accessor are generally preferable both to implement and to
use). It also pays its way by allowing us to delete *both* the
`DummyStore` and `LocalStore` implementations.
Add concurrency group configuration to the CI workflow to automatically
cancel outdated runs when a PR receives new commits or is force-pushed.
This prevents wasting CI resources on superseded code.
Introduces `scanForReferencesDeep` to provide per-file granularity when
scanning for store path references, enabling better diagnostics for
cycle detection and `nix why-depends --precise`.
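A naive sketch of what per-file scanning could look like; `FileReferences` and the callback shape are assumptions, and the real implementation presumably streams file contents rather than reading each file whole:

```cpp
#include <filesystem>
#include <fstream>
#include <functional>
#include <iterator>
#include <string>
#include <vector>

// Assumed result type: one record per file that contains references.
struct FileReferences {
    std::filesystem::path file;
    std::vector<std::string> hashes; // store path hash parts found in this file
};

// Report, for every regular file under `root`, which of the given hash
// parts it mentions, so callers can say *which file* holds a reference.
void scanForReferencesDeep(
    const std::filesystem::path & root,
    const std::vector<std::string> & hashParts,
    const std::function<void(FileReferences)> & found)
{
    for (auto & entry : std::filesystem::recursive_directory_iterator(root)) {
        if (!entry.is_regular_file()) continue;
        std::ifstream in(entry.path(), std::ios::binary);
        std::string data(
            (std::istreambuf_iterator<char>(in)), std::istreambuf_iterator<char>());
        FileReferences result{entry.path(), {}};
        for (auto & hash : hashParts)
            if (data.find(hash) != std::string::npos)
                result.hashes.push_back(hash);
        if (!result.hashes.empty())
            found(std::move(result));
    }
}
```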
Implement `abortMultipartUpload()` for cleaning up incomplete multipart
uploads on error (sketched below):
- Constructs URL with `?uploadId=ID` query parameter
- Issues `DELETE` request to abort the multipart upload
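A sketch of the abort path; `httpDelete()` is again a hypothetical transport hook standing in for the real HTTP machinery:

```cpp
#include <stdexcept>
#include <string>

// Hypothetical transport hook; returns the HTTP response status.
int httpDelete(const std::string & url);

void abortMultipartUpload(
    const std::string & objectUrl, const std::string & uploadId)
{
    auto url = objectUrl + "?uploadId=" + uploadId;
    // S3 answers a successful AbortMultipartUpload with 204 No Content.
    if (httpDelete(url) != 204)
        throw std::runtime_error("failed to abort S3 multipart upload");
}
```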