Add support for pre-resolving AWS credentials in the parent process
before forking for `builtin:fetchurl`. This avoids recreating credential
providers in the forked child process.
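The shape of the change, as a minimal sketch; the helper names
(`resolveAwsCredentials`, `childFetchurl`) are hypothetical stand-ins for
the real ones:

    #include <optional>
    #include <string>
    #include <unistd.h>

    // Hypothetical stand-ins for the real types/helpers.
    struct AwsCredentials
    {
        std::string accessKeyId;
        std::string secretAccessKey;
        std::optional<std::string> sessionToken;
    };

    AwsCredentials resolveAwsCredentials(const std::string & profile);
    void childFetchurl(const AwsCredentials & creds);

    void spawnFetchurl(const std::string & profile)
    {
        // Resolve once in the parent: provider construction may spawn
        // threads or open connections, neither of which survives fork().
        AwsCredentials creds = resolveAwsCredentials(profile);

        if (fork() == 0) {
            // Child signs its requests with the pre-resolved key material;
            // no credential provider is recreated here.
            childFetchurl(creds);
            _exit(0);
        }
    }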
The previous implementation had a check-then-create race condition where
multiple threads could simultaneously:
1. Check the cache and find no provider (line 122)
2. Create their own providers (lines 126-145)
3. Insert into cache (line 161)
This resulted in multiple credential providers being created when
downloading multiple packages in parallel, as each .narinfo download
would trigger provider creation on its own thread.
Fix by using `boost::concurrent_flat_map`'s `try_emplace_and_cvisit`, which
provides atomic get-or-create semantics:
- `f1` callback: called atomically during insertion, creates the provider
- `f2` callback: called if the key exists, returns the cached provider
- Other threads are blocked during `f1`, so no `nullptr` is ever visible
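A sketch of the resulting pattern, with a hypothetical `CredentialProvider`
standing in for the real AWS provider type:

    #include <boost/unordered/concurrent_flat_map.hpp>
    #include <memory>
    #include <string>

    struct CredentialProvider { }; // hypothetical stand-in

    static boost::concurrent_flat_map<std::string, std::shared_ptr<CredentialProvider>>
        providerCache;

    std::shared_ptr<CredentialProvider> getProvider(const std::string & profile)
    {
        std::shared_ptr<CredentialProvider> result;
        providerCache.try_emplace_and_cvisit(
            profile,
            nullptr, // placeholder mapped value, filled in by f1
            // f1: runs synchronized right after insertion; other threads
            // asking for this key block until it returns, so the nullptr
            // placeholder is never observed.
            [&](auto & kv) {
                kv.second = std::make_shared<CredentialProvider>();
                result = kv.second;
            },
            // f2: key already present; hand back the cached provider.
            [&](const auto & kv) { result = kv.second; });
        return result;
    }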
Instead of specifying environment variables all the time, we can embed
the `__asan_default_options` symbol in all executables / shared objects.
This reduces code duplication.
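For reference, the embedded symbol is just a function ASan looks up at
startup; a minimal sketch, with illustrative option values:

    // User-provided sanitizer hook: ASan calls this at startup, so binaries
    // linking it need no ASAN_OPTIONS in the environment.
    extern "C" const char * __asan_default_options()
    {
        return "abort_on_error=1:detect_leaks=0"; // illustrative values only
    }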
This commit adds two key fixes to `http-binary-cache-store.cc` to
properly support the new curl-based S3 implementation:
1. **Consistent cache key handling**: Use `getReference().render(withParams=false)`
for disk cache keys instead of `cacheUri.to_string()`. This ensures cache
keys are consistent with the S3 implementation and don't include query
parameters, which matches the behavior expected by `Store::queryPathInfo()`
lookups.
2. **S3 query parameter preservation**: When generating file transfer requests
for S3 URLs, preserve query parameters from the base URL (region, endpoint,
etc.) when the relative path doesn't have its own query parameters. This
ensures S3-specific configuration is propagated to all requests (see the
sketch below).
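A rough sketch of the preservation rule, using a simplified URL type rather
than Nix's real parsed-URL machinery:

    #include <optional>
    #include <string>

    struct SimpleUrl // simplified stand-in for the real URL type
    {
        std::string base;                 // scheme://host/path
        std::optional<std::string> query; // e.g. "region=eu-west-1"
    };

    SimpleUrl makeRequestUrl(const SimpleUrl & baseUrl, const std::string & relPath)
    {
        SimpleUrl req;
        auto q = relPath.find('?');
        if (q != std::string::npos) {
            // The relative path carries its own parameters: they win.
            req.base = baseUrl.base + "/" + relPath.substr(0, q);
            req.query = relPath.substr(q + 1);
        } else {
            // Otherwise inherit region/endpoint/etc. from the base URL.
            req.base = baseUrl.base + "/" + relPath;
            req.query = baseUrl.query;
        }
        return req;
    }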
I want to separate "policy" from "mechanism".
The logic that decides how to build (a policy choice, though with some
hard constraints) now lives entirely in the derivation building goal, in
one spot: the choice between build hook, external builder, and local
builder is made there, and it is pure policy.
Now, if you want to use the external derivation builder, you simply
provide the `ExternalBuilder` you wish to use, and there is no
additional checking: pure mechanism. It is the responsibility of the
caller to choose an external builder that works for the derivation in
question.
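In sketch form (names hypothetical, not the actual goal classes):

    #include <optional>

    struct Derivation { };
    struct ExternalBuilder { };

    enum class BuildRoute { Hook, External, Local };

    // Pure policy: the one spot that decides how to build. The mechanism
    // behind each route performs no further eligibility checks of its own.
    BuildRoute chooseRoute(
        const Derivation & drv,
        bool hookAccepted,
        const std::optional<ExternalBuilder> & external)
    {
        if (hookAccepted) return BuildRoute::Hook;
        if (external) return BuildRoute::External;
        return BuildRoute::Local;
    }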
Also, `checkSystem()` was the only thing throwing `BuildError` from
`startBuilder`. Now that it is gone, the `try...catch` around that call
can be removed.
Add a new `S3BinaryCacheStore` implementation that inherits from
`HttpBinaryCacheStore`.
The implementation is activated with `NIX_WITH_CURL_S3`, keeping the
existing `NIX_WITH_S3_SUPPORT` (AWS SDK) implementation unchanged.
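A skeleton of the relationship and the guard (everything else elided):

    #if NIX_WITH_CURL_S3

    // Inherits the curl-based HTTP transfer machinery; only S3-specific
    // concerns (auth, URL handling) differ in the real class.
    class S3BinaryCacheStore : public HttpBinaryCacheStore
    {
        // ...
    };

    #endif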
This code had two issues:
1. Not going through the `SourceAccessor` means that we can only work
with physical paths.
2. It did not actually check that the file exists: `std::ifstream` does
not fail loudly on a missing file, so the stream state must be checked
explicitly (see the sketch below).
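A minimal sketch of the second point, showing the explicit check
`std::ifstream` requires (hypothetical helper, not the actual fix):

    #include <fstream>
    #include <iterator>
    #include <stdexcept>
    #include <string>

    std::string readFileChecked(const std::string & path)
    {
        std::ifstream f(path, std::ios::binary);
        if (!f) // without this, a missing file just yields an empty read
            throw std::runtime_error("cannot open file '" + path + "'");
        return std::string{
            std::istreambuf_iterator<char>(f), std::istreambuf_iterator<char>()};
    }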
Most of the eval cache logic is flake-independent and lives in libexpr,
but the loading part is not.
`nix-flake` is the right component for this, as the eval cache
isn't exactly specific to the command line.
This barfed with

    error: [json.exception.type_error.302] type must be string, but is array

on `nix build github:malt3/bazel-env#bazel-env`, because that flake has an `exportReferencesGraph` with a value like `["string",...["string"]]`.
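A sketch of a tolerant parse with nlohmann::json, collecting strings
recursively instead of assuming every element is a string (whether the
actual fix flattens exactly this way is a detail of the commit):

    #include <nlohmann/json.hpp>
    #include <stdexcept>
    #include <string>
    #include <vector>

    static void collectStrings(const nlohmann::json & v, std::vector<std::string> & out)
    {
        if (v.is_string())
            out.push_back(v.get<std::string>());
        else if (v.is_array())
            // Recurse into nested lists like ["string", ..., ["string"]].
            for (const auto & elem : v)
                collectStrings(elem, out);
        else
            throw std::runtime_error("expected a string or a list of strings");
    }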
Add a `UsernameAuth` struct and optional `usernameAuth` field to
`FileTransferRequest` to support programmatic username/password
authentication.
This uses curl's `CURLOPT_USERNAME`/`CURLOPT_PASSWORD` options, which
work with multiple protocols (HTTP, FTP, etc.) and are not specific to
any particular authentication scheme.
The primary motivation is to enable S3 authentication refactoring where
AWS credentials (access key ID and secret access key) can be passed
through this general-purpose mechanism, reducing the amount of
S3-specific code behind `#if NIX_WITH_CURL_S3` guards.
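A sketch of the pieces involved (field names per the description above;
the surrounding request type is heavily simplified):

    #include <curl/curl.h>
    #include <optional>
    #include <string>

    struct UsernameAuth
    {
        std::string username;
        std::string password;
    };

    struct FileTransferRequest // simplified
    {
        std::string uri;
        std::optional<UsernameAuth> usernameAuth;
    };

    void applyAuth(CURL * handle, const FileTransferRequest & request)
    {
        // Protocol-agnostic credentials; curl applies them to HTTP, FTP, etc.
        if (request.usernameAuth) {
            curl_easy_setopt(handle, CURLOPT_USERNAME, request.usernameAuth->username.c_str());
            curl_easy_setopt(handle, CURLOPT_PASSWORD, request.usernameAuth->password.c_str());
        }
    }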