The proto serializer characterisation test data is now accompanied by
JSON data.
This is arguably useful for a few reasons:
- The JSON data is human-readable while the binary data is not, so it
provides some indication of what the test data means beyond the C++
literals.
- The JSON data is language-agnostic, and so can be used to quickly rig
up tests for implementations in other languages, without having source
code literals at all (just go back and forth between the JSON and the
binary).
- Even though we have no concrete plans to replace the binary protocol
1-1 with JSON, it is still nice to ensure that the JSON serializers and
binary protocols have (near) equal coverage over data types, to help
ensure we didn't forget a JSON (de)serializer.
Make instances for them that share code with `nix path-info`, but use a
slightly different format in which store paths do not include the store
dir (matching the other latest JSON formats).
Progress on #13570.
If we depend on the store dir, our JSON serializers/deserializers take
extra arguments, and that interferes with frameworks for associating
these with types (e.g. nlohmann in C++, Serde in Rust, and Aeson in
Haskell).
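For example, nlohmann's ADL hooks have fixed two-argument signatures, so
an extra store-dir parameter has nowhere to go. A minimal sketch, with a
made-up `Example` type rather than the real types:

```
// Not the actual Nix code; just the shape of the constraint.
#include <nlohmann/json.hpp>
#include <string>

struct Example { std::string name; };

// nlohmann finds these hooks by ADL and always calls them with exactly
// these two arguments; there is no slot for a store-dir parameter.
void to_json(nlohmann::json & j, const Example & x)
{
    j = nlohmann::json{{"name", x.name}};
}

void from_json(const nlohmann::json & j, Example & x)
{
    j.at("name").get_to(x.name);
}
```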
For now, `nix path-info` still uses the previous format, with store
dirs. We may yet decide to "rip off the band-aid", and just switch it
over, but that is left as a future PR.
Add support for configuring S3 storage class via the storage-class
parameter for S3BinaryCacheStore. This allows users to optimize costs
by selecting appropriate storage tiers (STANDARD, GLACIER,
INTELLIGENT_TIERING, etc.) based on access patterns.
The storage class is applied via the x-amz-storage-class header for
both regular PUT uploads and multipart upload initiation.
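For example (illustrative bucket name and parameter values):

```
s3://example-bucket?region=us-east-1&storage-class=INTELLIGENT_TIERING
```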
`allowedReferences` and friends, in addition to supporting store paths
(and placeholders, but only because those will be rewritten to store
paths), also support referring to other outputs of the derivation by
name.
We update the tests to cover that.
(While we are at it, also introduce some scratch variables for paths and
placeholders to make the C++ literals for this test more concise.)
Since we haven't released v2 yet (2.32 has v1), we can just update this
in place and avoid version churn.
Note that as a nice side effect of using the standard `Hash` JSON impl,
we don't need this `hashFormat` parameter anymore.
The old string format is a holdover from the pre-JSON days. It is not
friendly to users who need to get the information out of it.
Also introduce the sort of versioning we have for derivations for this
format too.
- Use the canonical content address JSON format for floating
content-addressed derivation outputs.
This keeps it more consistent.
- Reorganize inputs into a nested structure (`inputs.srcs` and
`inputs.drvs`); see the sketch after this list.
This will allow for an easier-to-use, but less compact, alternative
where `srcs` is just a list of derived paths.
It also allows for other experiments with derivations that have a
different input structure, as I suspect will be needed for secure build
traces.
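For illustration only (schematic placeholder values, not the exact
schema), the nested shape looks roughly like:

```
{
  "inputs": {
    "srcs": [ "<store path of a source input>" ],
    "drvs": {
      "<store path of an input derivation>": { "outputs": [ "<output name>" ] }
    }
  }
}
```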
This will allow us to more accurately test dropping support for
dependent realisations, by separating the tests that should not change
from the tests that should.
I make that change in PR #14247, but even if for some reason we don't
end up doing it soon, I think it is still good to separate the test data
this way so we have the option of doing so at some point.
Introduces `scanForReferencesDeep` to provide per-file granularity when
scanning for store path references, enabling better diagnostics for
cycle detection and `nix why-depends --precise`.
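Purely as a sketch of the idea, with hypothetical names and signature
(the real interface may differ):

```
// Hypothetical sketch only. The point is the shape: results are reported
// per file, so callers can tell which file introduced a reference
// (cycle diagnostics, `nix why-depends --precise`).
#include <filesystem>
#include <functional>
#include <set>
#include <string>

using StorePathSet = std::set<std::string>; // stand-in for the real StorePathSet

void scanForReferencesDeep(
    const std::filesystem::path & root,
    const StorePathSet & candidates,
    std::function<void(const std::filesystem::path & file,
                       const StorePathSet & refsFound)> perFile);
```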
S3 buckets support object versioning to prevent unexpected changes,
but Nix previously lacked the ability to fetch specific versions of
S3 objects. This adds support for a `versionId` query parameter in S3
URLs, enabling users to pin to specific object versions:
```
s3://bucket/key?region=us-east-1&versionId=abc123
```
A few changes had cropped up with `_NIX_TEST_ACCEPT=1`:
1. Blake hashing test JSON had a different indentation
2. Store URI had improper non-quoted spaces
(1) was just fixed, as we trust nlohmann JSON to parse JSON correctly,
regardless of whitespace.
For (2), the existing URL was made a read-only test, since we very much
wish to continue accepting such invalid URLs when parsing. The original
read/write test was then updated to properly percent-encode the space,
since that is what the normal form should be.
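Illustratively (placeholder URL, not the actual test data):

```
ssh://example.org?remote-program=nix daemon     # still accepted when parsing (read-only test)
ssh://example.org?remote-program=nix%20daemon   # normal form produced when rendering (read/write test)
```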
I realized that we can actually do this, even though it is not what
nlohmann expects at all: because the extra parameter has a default
argument, nlohmann doesn't need to care. Sneaky!
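A minimal sketch of the trick, with a made-up `Example` type and a
hypothetical `extraContext` parameter standing in for the real extra
argument:

```
#include <nlohmann/json.hpp>
#include <string>

struct Example { std::string name; };

// nlohmann only ever generates the two-argument call to_json(j, x), but
// because the extra parameter has a default argument, that call still
// compiles and this overload is still found by ADL.
void to_json(nlohmann::json & j, const Example & x, int extraContext = 0)
{
    (void) extraContext;
    j = nlohmann::json{{"name", x.name}};
}
```

Callers that do care can still pass the third argument explicitly;
nlohmann's machinery simply gets the default.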
Realisations are conceptually key-value pairs, mapping `DrvOutputs` (the
key) to information about that derivation output.
This separates out the value type, which will be useful in maps, etc.,
where we don't want to denormalize by including the key twice.
This matches similar changes for existing types:
| keyed | unkeyed |
|--------------------|------------------------|
| `ValidPathInfo` | `UnkeyedValidPathInfo` |
| `KeyedBuildResult` | `BuildResult` |
| `Realisation` | `UnkeyedRealisation` |
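Schematically, the split looks like this (a sketch with field details
elided, not the literal definitions; `StorePath` and `DrvOutput` are the
existing libstore types):

```
// The value: what we know about one derivation output, without
// repeating which output it is.
struct UnkeyedRealisation
{
    StorePath outPath;
    // ... signatures, etc.
};

// Key plus value, for contexts where the key is not already implied.
struct Realisation : UnkeyedRealisation
{
    DrvOutput id;
};
```

A `std::map<DrvOutput, UnkeyedRealisation>` then stores each key exactly
once.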
Co-authored-by: Sergei Zimmerman <sergei@zimmerman.foo>
Move S3 URL parsing, store configuration, and public bucket support
outside of NIX_WITH_S3_SUPPORT guards. Only AWS credential resolution
remains gated, allowing builds with withAWS = false to:
- Parse s3:// URLs
- Register S3 store types
- Access public S3 buckets (via HTTPS conversion)
- Use S3-compatible services without authentication
The setupForS3() function now always performs URL conversion, with
authentication code conditionally compiled based on NIX_WITH_S3_SUPPORT.
The aws-creds.cc file (only code using AWS CRT SDK) is now conditionally
compiled by meson.
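Roughly, the shape is as follows (an illustrative sketch; the struct
fields are elided and the credential helper is a hypothetical stand-in,
not an actual declaration):

```
// Sketch only, not the actual implementation. ParsedS3URL, ParsedURL and
// s3ToHttpsUrl are names from this series; the fields and the credential
// helper below are hypothetical stand-ins.
struct ParsedS3URL { /* bucket, key, region, ... */ };
struct ParsedURL   { /* scheme, authority, query, ... */ };

ParsedURL s3ToHttpsUrl(const ParsedS3URL & s3);
void addAwsCredentials(ParsedURL & url); // hypothetical helper

ParsedURL setupForS3(const ParsedS3URL & s3)
{
    // Always compiled: s3:// -> https:// conversion, so public buckets and
    // S3-compatible endpoints work even when built with withAWS = false.
    ParsedURL url = s3ToHttpsUrl(s3);

#if NIX_WITH_S3_SUPPORT
    // Only compiled when the AWS CRT SDK is available: credential
    // resolution (aws-creds.cc) and request authentication.
    addAwsCredentials(url);
#endif

    return url;
}
```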
This commit replaces the AWS C++ SDK with a lighter curl-based approach
for S3 binary cache operations.
- Removed dependency on the heavy aws-cpp-sdk-s3 and aws-cpp-sdk-transfer
- Added lightweight aws-crt-cpp for credential resolution only
- Leverages curl's native AWS SigV4 authentication (requires curl >= 7.75.0; see the sketch below)
- S3BinaryCacheStore now delegates to HttpBinaryCacheStore
- Function s3ToHttpsUrl converts ParsedS3URL to ParsedURL
- Multipart uploads are no longer supported (may be reimplemented later)
- Build now requires curl >= 7.75.0 for AWS SigV4 support
Fixes: #13084, #12671, #11748, #12403, #5947
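On the curl side this is standard, documented libcurl functionality. A
minimal sketch (placeholder URL, region, and credentials; error handling
omitted):

```
#include <curl/curl.h>

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL * curl = curl_easy_init();

    curl_easy_setopt(curl, CURLOPT_URL,
        "https://example-bucket.s3.us-east-1.amazonaws.com/nix-cache-info");
    // Ask curl (>= 7.75.0) to sign the request with AWS Signature Version 4.
    curl_easy_setopt(curl, CURLOPT_AWS_SIGV4, "aws:amz:us-east-1:s3");
    // Access key and secret, resolved elsewhere (aws-crt-cpp in this series).
    curl_easy_setopt(curl, CURLOPT_USERPWD, "ACCESS_KEY:SECRET_KEY");

    CURLcode res = curl_easy_perform(curl);

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}
```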
Instead of specifying environment variables all the time, we can embed
the `__asan_default_options` symbol in all executables / shared objects.
This reduces code duplication.
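For reference, the hook itself is just (the particular options string is
only an example):

```
// Compiled into every executable / shared object built with ASan; the
// runtime calls it at startup, so no ASAN_OPTIONS environment variable
// needs to be exported by every test harness.
extern "C" const char * __asan_default_options()
{
    // Example options only; the real defaults are whatever the build needs.
    return "abort_on_error=1:print_summary=1";
}
```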
Add a new S3BinaryCacheStore implementation that inherits from
HttpBinaryCacheStore.
The implementation is activated with NIX_WITH_CURL_S3, keeping the
existing NIX_WITH_S3_SUPPORT (AWS SDK) implementation unchanged.
Introduce a new build option 'curl-s3-store' for the curl-based S3
implementation, separate from the existing AWS SDK-based 's3-store'.
The two options are mutually exclusive to avoid conflicts.
Users can enable the new implementation with:
-Dcurl-s3-store=enabled -Ds3-store=disabled