Should be pretty self-explanatory. We didn't really have unit tests
for the filesystem source accessor. Now we do, and they will be immensely
useful for implementing a Unix-only, smarter accessor that doesn't suffer
from TOCTOU on symlinks.
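For context, a sketch of how such a Unix-only accessor could avoid the TOCTOU race (hypothetical helper name; the actual implementation may differ): walk the path one component at a time with openat(2) and O_NOFOLLOW, so a symlink swapped in between check and use is rejected rather than silently followed.

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <sstream>
#include <stdexcept>
#include <string>

// Hypothetical helper: open `path` relative to `dirFd`, refusing to
// follow symlinks at every component. Because each component is opened
// via openat() on the previously opened fd, another process cannot swap
// a directory for a symlink between our check and our use (no TOCTOU).
int openNoFollow(int dirFd, const std::string & path)
{
    int fd = dup(dirFd); // keep the caller's fd intact
    std::istringstream ss(path);
    std::string component;
    while (std::getline(ss, component, '/')) {
        if (component.empty()) continue;
        int next = openat(fd, component.c_str(), O_RDONLY | O_NOFOLLOW | O_CLOEXEC);
        close(fd);
        if (next < 0)
            throw std::runtime_error("cannot open '" + component + "' (symlink or missing?)");
        fd = next;
    }
    return fd;
}
```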
This is progress on #11896. It temporarily introduces some issues, which
will be fixed when #11928 is fixed.
The SQL tables are left in place because there is no point inducing a
migration now, when we will be immediately landing more changes after
this that also require schema changes. They are simply ignored in this
commit, so all data is preserved.
Error messages now include suggestions like:
error: unknown compression method 'bzip'
Did you mean one of bzip2, gzip, lzip, grzip or lrzip?
Also a bit of progress on making the compression code less stringly
typed, which is good because it's easy to confuse which strings are
accepted where (e.g. Content-Encoding should be able to accept x-gzip,
but that shouldn't be exposed in NAR decompression, and so on). An enum
cleanly separates the concerns of parsing strings from handling
libarchive read/write filters.
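A minimal sketch (hypothetical names, not the actual Nix API) of how an enum separates parsing the user-facing strings from the rest of the code, and how the "Did you mean" candidates can fall out of the same table:

```cpp
#include <map>
#include <optional>
#include <string>
#include <vector>

// Hypothetical enum: one value per compression method actually supported.
enum class CompressionMethod { None, Gzip, Bzip2, Xz, Zstd };

// Single table mapping accepted strings to enum values. Context-specific
// aliases (e.g. "x-gzip" for Content-Encoding) can live in a separate
// table without leaking into, say, NAR decompression.
static const std::map<std::string, CompressionMethod> compressionMethods = {
    {"none", CompressionMethod::None},
    {"gzip", CompressionMethod::Gzip},
    {"bzip2", CompressionMethod::Bzip2},
    {"xz", CompressionMethod::Xz},
    {"zstd", CompressionMethod::Zstd},
};

std::optional<CompressionMethod> parseCompressionMethod(const std::string & s)
{
    auto it = compressionMethods.find(s);
    if (it == compressionMethods.end()) return std::nullopt;
    return it->second;
}

// On a parse failure, the same table yields the "Did you mean ...?" list.
std::vector<std::string> suggestCompressionMethods()
{
    std::vector<std::string> names;
    for (auto & [name, _] : compressionMethods) names.push_back(name);
    return names;
}
```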
Ref #14787
This doesn't really fix the symlink problem, but it unblocks progress
on getting Windows working.
TODO: find out whether this is a Meson bug; depending on the answer,
either file a bug report or make a feature request to avoid symlinks (or
generate the symlinks at build time and gitignore them).
Co-authored-by: John Ericson <git@JohnEricson.me>
Unfortunately, previous tarball caches had loose objects written to
them before the subsequent switch to thin packfiles. This can result in
broken thin packfiles when the loose-objects backend is disabled, since
thin packfiles do not necessarily contain the whole closure of objects.
When packfilesOnly is true, we end up in an inconsistent state where a
tree lives in a packfile that refers to a blob in the loose-objects
backend.
In the future we might want to nuke old cache directories and repack
the tarball cache.
The SSO provider was unconditionally setting profile_name_override to
the (potentially empty) profile string from the S3 URL. When profile
was empty, this prevented the AWS CRT SDK from falling back to the
AWS_PROFILE environment variable.
Only set profile_name_override when a profile is explicitly specified
in the URL, allowing the SDK's built-in AWS_PROFILE handling to work.
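A sketch of the fix's logic (hypothetical function name, not the actual code): only produce a profile override when the URL actually specified one, so an absent value, rather than an empty string, reaches the SDK and its AWS_PROFILE fallback still applies.

```cpp
#include <optional>
#include <string>

// Hypothetical sketch: return a profile override only when the S3 URL
// explicitly specified one. Returning std::nullopt (instead of an empty
// string) lets the SDK fall back to AWS_PROFILE on its own.
std::optional<std::string> profileOverrideFor(const std::string & profileFromUrl)
{
    if (profileFromUrl.empty())
        return std::nullopt; // let AWS_PROFILE / SDK defaults apply
    return profileFromUrl;   // explicit ?profile=... in the URL wins
}
```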
The default (empty) profile case was using CreateCredentialsProviderChainDefault,
which didn't properly support role_arn/source_profile based role assumption via
STS because the TLS context wasn't being passed to the Profile provider.
This change unifies the credential chain for all profiles (default and named),
ensuring:
- Consistent behavior between default and named profiles
- Proper TLS context is passed for STS operations
- SSO support works for both cases
Add validation for TLS context and client bootstrap initialization,
with appropriate error messages when these fail. The TLS context failure
is now a warning that gracefully disables SSO, while bootstrap failure
throws since it's required for all providers.
This enables seamless AWS SSO authentication for S3 binary caches
without requiring users to manually export credentials.
This adds SSO support by calling aws_credentials_provider_new_sso() from
the C library directly. It builds a custom credential chain: Env → SSO →
Profile → IMDS
The SSO provider requires a TLS context for HTTPS connections to SSO
endpoints, which is created once and shared across all providers.
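The chain semantics can be modeled abstractly (these are stand-in types, not the AWS CRT API): each provider is tried in order, and the first one that yields credentials wins.

```cpp
#include <functional>
#include <optional>
#include <string>
#include <vector>

using Credentials = std::string; // stand-in for real credentials
using Provider = std::function<std::optional<Credentials>()>;

// Model of the Env -> SSO -> Profile -> IMDS chain: try each provider in
// order and return the first set of credentials found.
std::optional<Credentials> resolve(const std::vector<Provider> & chain)
{
    for (auto & provider : chain)
        if (auto creds = provider())
            return creds;
    return std::nullopt;
}
```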
It's good to declare these explicitly, to prevent the kind of misuse
that would accidentally do the work twice.
This is essentially just cppcoreguidelines-special-member-functions lint
in clang-tidy.
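A sketch of what the rule amounts to in practice (hypothetical class, not from this codebase): once a type owns a resource or is expensive to copy, declare all five special members explicitly so a copy can't happen by accident.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical example: a type holding a large buffer. Deleting the copy
// operations turns "doing the work twice" into a compile error rather
// than a silent performance bug; moves remain cheap and allowed.
struct BigBuffer {
    std::vector<char> data;

    explicit BigBuffer(std::size_t n) : data(n) {}

    BigBuffer(const BigBuffer &) = delete;
    BigBuffer & operator=(const BigBuffer &) = delete;
    BigBuffer(BigBuffer &&) noexcept = default;
    BigBuffer & operator=(BigBuffer &&) noexcept = default;
    ~BigBuffer() = default;
};
```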