Instead of iterating over the newly built bindings, we can do a cheaper
`set_intersection` to count duplicates, or fall back to a per-element
binary search over the "base" bindings.
This speeds up `hello` evaluation by around 10ms (0.196s -> 0.187s) and
`nixos.closures.ec2.x86_64-linux` by 140ms (2.744s -> 2.609s).
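As a rough illustration of the two strategies (not the actual Nix code;
the `Attr` type, the ordering by symbol, and the crossover threshold are
all placeholder assumptions):
```
#include <algorithm>
#include <cstddef>
#include <span>

struct Attr { int name; /* symbol id; value omitted */ };
inline bool operator<(const Attr & a, const Attr & b) { return a.name < b.name; }

// Count attrs present in both sorted ranges, as std::set_intersection
// would, but without materializing the intersection.
size_t countDuplicates(std::span<const Attr> base, std::span<const Attr> overlay)
{
    if (overlay.size() * 16 < base.size()) {
        // Overlay is much smaller: a per-element binary search over
        // `base` beats walking both ranges. (Threshold is made up.)
        size_t n = 0;
        for (auto & a : overlay)
            n += std::binary_search(base.begin(), base.end(), a);
        return n;
    }
    // Otherwise do a linear merge-style walk over both sorted ranges.
    size_t n = 0;
    auto i = base.begin(), j = overlay.begin();
    while (i != base.end() && j != overlay.end()) {
        if (*i < *j) ++i;
        else if (*j < *i) ++j;
        else { ++n; ++i; ++j; }
    }
    return n;
}
```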
This addresses a somewhat steep performance regression from 82315c3807,
which reduced the memory requirements of attribute set merges. With this
patch we get back to roughly the 2.31 level of eval performance while
keeping the memory usage optimization.
Also document the optimization a bit more.
In particular
- Remove `get`, it is redundant with `valueAt` and the `get` in
`util.hh`.
- Remove `nullableValueAt`. It is morally just the function composition
`getNullable . valueAt`, not an orthogonal combinator like the others.
- Make `optionalValueAt` return a pointer, not `std::optional`. This also
expresses optionality, but without creating a needless copy, and it
brings the function in line with the other combinators, which also
return references.
- Delete the overloads of `valueAt` and `optionalValueAt` that take the
map by value, as we did for `get` in 408c09a120, which
prevents bugs / unnecessary copies.
`adl_serializer<DerivationOptions::OutputChecks>::from_json` was the one
use of `getNullable`. I give it a little static function for the creation
of the `std::optional` it ultimately does need (after switching it to
using `getNullable . valueAt`). That could go in `json-utils.hh`
eventually, but I didn't bother for now since only one thing needs it.
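For illustration, the shapes involved look roughly like this (a hedged
sketch against nlohmann JSON; the exact signatures in `json-utils.hh` may
differ, and `getOptionalString` is a hypothetical name for the little
static helper):
```
#include <nlohmann/json.hpp>
#include <optional>
#include <string>

using nlohmann::json;

// Returns the value, or nullptr if the key is absent (no copy is made).
static const json * optionalValueAt(const json::object_t & map, const std::string & key)
{
    auto it = map.find(key);
    return it == map.end() ? nullptr : &it->second;
}

// Returns nullptr if the value is JSON `null`, otherwise the value itself.
static const json * getNullable(const json & value)
{
    return value.is_null() ? nullptr : &value;
}

// `nullableValueAt` was morally just `getNullable . valueAt`; producing an
// owned std::optional is left to a small static helper at the use site:
static std::optional<std::string> getOptionalString(const json & map, const std::string & key)
{
    auto * p = getNullable(map.at(key));
    return p ? std::optional<std::string>(p->get<std::string>()) : std::nullopt;
}
```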
Co-authored-by: Sergei Zimmerman <sergei@zimmerman.foo>
S3 buckets support object versioning to prevent unexpected changes,
but Nix previously lacked the ability to fetch specific versions of
S3 objects. This adds support for a `versionId` query parameter in S3
URLs, enabling users to pin to specific object versions:
```
s3://bucket/key?region=us-east-1&versionId=abc123
```
This has already been implemented in 1e709554d5
as a side-effect of mounting the accessors in storeFS. Let's test this so it
doesn't regress.
(cherry-picked from https://github.com/NixOS/nix/pull/12915)
Move the `HttpBinaryCacheStore` class from the .cc file to the header to
enable inheritance by `S3BinaryCacheStore`. Create an `S3BinaryCacheStore`
class that overrides `upsertFile()` to implement the multipart upload logic.
Add a `sizeHint` parameter to `BinaryCacheStore::upsertFile()` to enable
size-based upload decisions in implementations. This lays the groundwork
for reintroducing S3 multipart upload support.
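A rough sketch of the resulting shape; the signatures are simplified and
the multipart threshold is an assumed placeholder, not a real default:
```
#include <cstdint>
#include <string>

struct Source; // stream of file contents, details omitted

struct BinaryCacheStore
{
    virtual ~BinaryCacheStore() = default;

    // sizeHint lets an implementation choose an upload strategy before
    // consuming the stream.
    virtual void upsertFile(
        const std::string & path,
        Source & source,
        const std::string & mimeType,
        uint64_t sizeHint) = 0;
};

// Moved from the .cc file into the header so it can be subclassed.
struct HttpBinaryCacheStore : BinaryCacheStore
{
    void upsertFile(
        const std::string & path, Source & source,
        const std::string & mimeType, uint64_t sizeHint) override
    {
        // single PUT request
    }
};

struct S3BinaryCacheStore : HttpBinaryCacheStore
{
    void upsertFile(
        const std::string & path, Source & source,
        const std::string & mimeType, uint64_t sizeHint) override
    {
        constexpr uint64_t multipartThreshold = 100 * 1024 * 1024; // assumed
        if (sizeHint >= multipartThreshold) {
            // CreateMultipartUpload, UploadPart..., CompleteMultipartUpload
        } else {
            HttpBinaryCacheStore::upsertFile(path, source, mimeType, sizeHint);
        }
    }
};
```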
Add support for HTTP DELETE requests to the FileTransfer infrastructure.
This enables aborting S3 multipart uploads via DELETE requests to S3
endpoints.
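For illustration, a minimal standalone libcurl example of issuing a
DELETE (the URL and upload id are made up):
```
#include <curl/curl.h>

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL * curl = curl_easy_init();
    if (!curl) return 1;
    curl_easy_setopt(curl, CURLOPT_URL,
        "https://bucket.s3.amazonaws.com/key?uploadId=abc123");
    // CURLOPT_CUSTOMREQUEST switches the verb without changing the
    // other transfer semantics.
    curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "DELETE");
    CURLcode res = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}
```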
This reverts commit 90d1ff4805.
The initial issue with EPIPE was solved in 9f680874c5.
Now this patch does more harm than good by swallowing
`boost::io::format_error` exceptions that indicate actual bugs.
addToStore(): Don't parse the NAR
* StringSource: Implement skip()
This is slightly faster than doing a read() into a buffer just to
discard the data.
* LocalStore::addToStore(): Skip unnecessary NARs rather than parsing them
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
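A sketch of the idea behind `skip()`, assuming a string-backed source
with a cursor (member names are illustrative, not the exact Nix
`StringSource`):
```
#include <algorithm>
#include <cstring>
#include <stdexcept>
#include <string_view>

struct StringSource
{
    std::string_view s;
    size_t pos = 0;

    size_t read(char * data, size_t len)
    {
        if (pos == s.size()) throw std::runtime_error("end of string reached");
        size_t n = std::min(len, s.size() - pos);
        std::memcpy(data, s.data() + pos, n);
        pos += n;
        return n;
    }

    // Skipping is just cursor arithmetic; no copy into a scratch buffer.
    void skip(size_t len)
    {
        if (len > s.size() - pos) throw std::runtime_error("end of string reached");
        pos += len;
    }
};
```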
A few changes had cropped up with `_NIX_TEST_ACCEPT=1`:
1. Blake hashing test JSON had a different indentation
2. Store URI had improper non-quoted spaces
(1) was just fixed, as we trust nlohmann JSON to parse JSON
correctly, regardless of whitespace.
For (2), the existing URL was made a read-only test, since we very much
wish to continue parsing such invalid URLs directly. The original
read/write test was then updated to properly percent-encode the space,
as the normal form requires.
Since 2.32, Nix needs Boost 1.87 or later to build, due to using
`boost::unordered::concurrent_flat_map::try_emplace_and_cvisit`:
```
../src/libexpr/eval.cc: In member function ‘void nix::EvalState::evalFile(const nix::SourcePath&, nix::Value&, bool)’:
../src/libexpr/eval.cc:1096:20: error: ‘class boost::unordered::concurrent_flat_map<nix::SourcePath, nix::Value*, std::hash<nix::SourcePath>, std::equal_to<nix::SourcePath>, traceable_allocator<std::pair<const nix::SourcePath, nix::Value*> > >’ has no member named ‘try_emplace_and_cvisit’; did you mean ‘try_emplace_or_cvisit’?
 1096 |     fileEvalCache->try_emplace_and_cvisit(
      |                    ^~~~~~~~~~~~~~~~~~~~~~
      |                    try_emplace_or_cvisit
```
See 834580b539
The s3:ListBucket permission is required for read operations on S3
binary caches, not just for writes. Without this permission, users get
"Access Denied" errors when running nix-build.
Extract the path-based compression method determination logic into a
protected method that returns `std::optional<std::string>`. This allows
subclasses to reuse the logic and makes the semantics clearer (`nullopt`
means no compression, not an empty string).
This prepares `S3BinaryCacheStore` to apply the same compression rules
when implementing multipart uploads.
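A hedged sketch of what such a helper could look like; the field names
follow the usual binary-cache compression settings
(narinfo-compression, ls-compression, log-compression), but the exact
rules here are assumptions:
```
#include <optional>
#include <string>

struct HttpBinaryCacheStoreSketch
{
    std::string narinfoCompression, lsCompression, logCompression;

    // nullopt means "no compression", distinct from an empty string.
    std::optional<std::string> getCompressionMethod(const std::string & path)
    {
        if (path.ends_with(".narinfo") && !narinfoCompression.empty())
            return narinfoCompression;
        if (path.ends_with(".ls") && !lsCompression.empty())
            return lsCompression;
        if (path.starts_with("log/") && !logCompression.empty())
            return logCompression;
        return std::nullopt;
    }
};
```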
Fix POST requests with data to use the correct curl option for specifying
the body size. Previously we used `CURLOPT_INFILESIZE_LARGE` for both POST
and PUT, but POST requires `CURLOPT_POSTFIELDSIZE_LARGE`.
This caused POST request bodies to not be sent correctly, manifesting as
S3 multipart CompleteMultipartUpload requests failing with "You must
specify at least one part" even though the XML body contained valid parts.
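For illustration, a sketch of the distinction (both are real libcurl
options; the helper is hypothetical):
```
#include <curl/curl.h>

void setBodySize(CURL * curl, bool isPost, curl_off_t size)
{
    if (isPost) {
        // POST with a streamed body: announce the size via
        // CURLOPT_POSTFIELDSIZE_LARGE.
        curl_easy_setopt(curl, CURLOPT_POST, 1L);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE_LARGE, size);
    } else {
        // PUT (CURLOPT_UPLOAD): announce the size via
        // CURLOPT_INFILESIZE_LARGE.
        curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
        curl_easy_setopt(curl, CURLOPT_INFILESIZE_LARGE, size);
    }
}
```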