In current aptly, each repository and snapshot has its own reflist in
the database. This brings a few problems with it:
- Given sufficiently large repositories and snapshots, these lists can
get enormous, reaching >1MB. This is a problem for LevelDB's overall
performance, as it tends to prefer values around the configured block
size (defaults to just 4KiB).
- When you take a snapshot of one of these large repositories, you get a
full, new copy of the reflist, even if only a few packages changed.
This means that having a lot of snapshots with a few changes causes
the database to basically be full of largely duplicate reflists.
- All the duplication also means that many of the same refs are being
loaded repeatedly, which can cause some slowdown but, more notably,
eats up huge amounts of memory.
- Adding on more and more new repositories and snapshots will cause the
time and memory spent on things like cleanup and publishing to grow
roughly linearly.
At the core, there are two problems here:
- Reflists get very big because there are just a lot of packages.
- Different reflists can tend to duplicate much of the same contents.
*Split reflists* aim at solving this by separating reflists into 64
*buckets*. Package refs are sorted into individual buckets according to
the following system:
- Take the first 3 letters of the package name, after dropping a `lib`
prefix. (Using only the first 3 letters will cause packages with
similar prefixes to end up in the same bucket, under the assumption
that packages with similar names tend to be updated together.)
- Take the 64-bit xxhash of these letters. (xxhash was chosen because it
has relatively good distribution across the individual bits, which is
important for the next step.)
- Use the first 6 bits of the hash (range [0:63]) as an index into the
buckets.
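Roughly, the bucketing rule could be sketched like this in Go. The
`github.com/cespare/xxhash/v2` package is used here only for
illustration; the exact xxhash binding and the choice of which 6 bits
are kept are assumptions, not necessarily what aptly does:
```go
package main

import (
	"fmt"
	"strings"

	"github.com/cespare/xxhash/v2"
)

// bucketIndex sketches the rule above: take the first three letters of
// the package name (after stripping a "lib" prefix), hash them with
// 64-bit xxhash, and keep 6 bits of the hash as an index into the 64
// buckets. Names here are illustrative, not the actual aptly identifiers.
func bucketIndex(pkg string) int {
	name := strings.TrimPrefix(pkg, "lib")
	if len(name) > 3 {
		name = name[:3]
	}
	h := xxhash.Sum64String(name)
	// Which 6 bits are used is an implementation detail; the top bits
	// are assumed here. 64 - 6 = 58, giving a value in [0, 63].
	return int(h >> 58)
}

func main() {
	fmt.Println(bucketIndex("libssl3"), bucketIndex("openssl"))
}
```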
Once refs are placed in buckets, a sha256 digest of all the refs in the
bucket is taken. These buckets are then stored in the database, split
into roughly block-sized segments, and all the repositories and
snapshots simply store an array of bucket digests.
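A bucket's digest could then be computed along these lines; the ref
strings and the separator byte below are placeholders, not aptly's
actual on-disk encoding:
```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
	"sort"
)

// bucketDigest sketches deriving a bucket's identity as a sha256 digest
// over the refs it contains; sorting first makes the digest independent
// of insertion order. This illustrates the idea, not aptly's exact format.
func bucketDigest(refs [][]byte) []byte {
	sorted := make([][]byte, len(refs))
	copy(sorted, refs)
	sort.Slice(sorted, func(i, j int) bool {
		return bytes.Compare(sorted[i], sorted[j]) < 0
	})

	h := sha256.New()
	for _, ref := range sorted {
		h.Write(ref)
		h.Write([]byte{0}) // separator so concatenation is unambiguous
	}
	return h.Sum(nil)
}

func main() {
	// Placeholder refs, not real aptly keys.
	refs := [][]byte{[]byte("foo 1.0 amd64"), []byte("bar 2.1 amd64")}
	fmt.Printf("%x\n", bucketDigest(refs))
}
```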
This approach means that *repositories and snapshots can share their
reflist buckets*. If a snapshot is taken of a repository, it will have
the same contents, so its split reflist will point to the same buckets
as the base repository, and only one copy of each bucket is stored in
the database. When some packages in the repository change, only the
buckets containing those packages will be modified; all the other
buckets will remain unchanged, and thus their contents will still be
shared. Later on, when these reflists are loaded, each bucket is only
loaded once, short-cutting the loading of many megabytes of data. In effect,
split reflists are essentially copy-on-write, with only the changed
buckets stored individually.
Changing the disk format means that a migration needs to take place, so
that task is moved into the database cleanup step, which will migrate
reflists over to split reflists, as well as delete any unused reflist
buckets.
All the reflist tests are also changed to additionally test out split
reflists; although the internal logic is all shared (since buckets are,
themselves, just normal reflists), some special additions are needed to
have native versions of the various reflist helper methods.
In our tests, we've observed the following improvements:
- Memory usage during publish and database cleanup, with
`GOMEMLIMIT=2GiB`, goes down from ~3.2GiB (larger than the memory
limit!) to ~0.7GiB, a decrease of ~4.5x.
- Database size decreases from 1.3GB to 367MB.
*In my local tests*, publish times also decreased to mere seconds, but
the same effect wasn't observed on the server, where the times stayed
around the same. My suspicion is that this is due to I/O performance:
my local system is an M1 MBP, which almost certainly has much faster
disk speeds than our DigitalOcean block volumes. Split reflists have
the side effect of requiring more random accesses (reading all the
buckets by their keys), so if your random I/O performance is slower, it
might cancel out the benefits. That being said, even in that case, the
memory usage and database size advantages still persist.
Signed-off-by: Ryan Gonzalez <ryan.gonzalez@collabora.com>
This commit allows adding, removing, and updating components of published repositories without the need to recreate them.
Signed-off-by: Christoph Fiehe <c.fiehe@eurodata.de>
This commit modifies the behavior of the publish switch method so that new components can also be added to an already published repository. It is no longer necessary to drop and recreate the whole publish.
Signed-off-by: Christoph Fiehe <c.fiehe@eurodata.de>
This change makes it possible to publish multiple distributions
with packages named the same but with different content by changing
the structure of the generated pool hierarchy. The option is not
enabled by default, as it changes the structure of the output, which
could break the expectations of other tools.
The output doesn't actually depend on the reflists, and loading them for
every published repo starts to take substantial time and memory.
Signed-off-by: Ryan Gonzalez <ryan.gonzalez@collabora.com>
The cleanup phase needs to list out all the files in each component in
order to determine what's still in use. When there's a large number of
sources (e.g. from having many snapshots), the time spent just loading
the package information becomes substantial. However, in many cases,
most of the packages being loaded are actually shared across the
sources; if you're taking frequent snapshots, for instance, most of the
packages in each snapshot will be the same as other snapshots. In these
cases, re-reading the packages repeatedly is just a waste of time.
To improve this, we maintain a list of refs that we know were processed
for each component. When listing the refs from a source, only the ones
that have not yet been processed will be examined. Some tests were also
added specifically to check listing the files in a component.
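The idea is essentially a seen-set keyed by ref; a minimal sketch of it
(function and variable names are illustrative, not the actual aptly API):
```go
package main

import "fmt"

// filterUnseen keeps a set of refs already processed for a component and
// returns only the refs that have not been seen before, so shared refs
// across sources are examined just once.
func filterUnseen(seen map[string]struct{}, refs []string) []string {
	var fresh []string
	for _, ref := range refs {
		if _, ok := seen[ref]; ok {
			continue // already processed via another source, skip it
		}
		seen[ref] = struct{}{}
		fresh = append(fresh, ref)
	}
	return fresh
}

func main() {
	seen := map[string]struct{}{}
	fmt.Println(filterUnseen(seen, []string{"a", "b"})) // [a b]
	fmt.Println(filterUnseen(seen, []string{"b", "c"})) // [c]
}
```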
With this change, listing the files in components on a copy of our
production database went from >10 minutes to ~10 seconds, and the newly
added benchmark went from ~300ms to ~43ms.
Signed-off-by: Ryan Gonzalez <ryan.gonzalez@collabora.com>
Ubuntu has started deprecating the Debian installer in focal
and moved the installer images to a different path. In versions
after focal, they are completely removed. This basically gives
us more time to figure out how to use the new system.
While testing out Aptly, the `apt-get` client complains with the following error, since the `Codename` value in the InRelease files generated by Aptly changed:
```
E: Repository 'http://debianrepo.example.com/bionic testing InRelease' changed its 'Codename' value from '' to 'testing'
```
For any action which is multi-step (requires updating more than one DB
key), use a transaction to make the update atomic.
Also pack big chunks of updates (importing packages for imports and
mirror updates) into a single transaction to improve aptly performance
and get some isolation.
Note that the layers above (Collections) still provide some level of
isolation, so this is going to shine with the future PRs that remove
collection locks.
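A minimal sketch of the multi-key atomicity described above, using
goleveldb's transaction API; keys and values are placeholders, not
aptly's actual schema:
```go
package main

import (
	"log"

	"github.com/syndtr/goleveldb/leveldb"
)

func main() {
	db, err := leveldb.OpenFile("/tmp/example-db", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Updates that touch more than one key go through a transaction so
	// they either all land or none do.
	tr, err := db.OpenTransaction()
	if err != nil {
		log.Fatal(err)
	}
	if err := tr.Put([]byte("repo/stable"), []byte("repo record"), nil); err != nil {
		tr.Discard()
		log.Fatal(err)
	}
	if err := tr.Put([]byte("reflist/stable"), []byte("reflist record"), nil); err != nil {
		tr.Discard()
		log.Fatal(err)
	}
	// Both keys become visible atomically on commit.
	if err := tr.Commit(); err != nil {
		log.Fatal(err)
	}
}
```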
Spin-off of #459
This is a spin-off of changes from #459.
Transactions are not being used yet, but batches are updated to work
with the new API.
The `database/` package was refactored to split the abstract interfaces
from the goleveldb implementation. This should make it easier to
implement new database types.
See #761
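A sketch of what such a split could look like; the method set below is
illustrative, not the actual aptly interface:
```go
package database

// Storage is a minimal key-value abstraction that the rest of the code
// would program against, with goleveldb (or another backend) behind it.
type Storage interface {
	Get(key []byte) ([]byte, error)
	Put(key, value []byte) error
	Delete(key []byte) error
	CreateBatch() Batch
	Close() error
}

// Batch accumulates writes and applies them in one go.
type Batch interface {
	Put(key, value []byte) error
	Delete(key []byte) error
	Write() error
}
```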
aptly had a concept of loading a small amount of info for each object
into memory once a collection is accessed for the first time.
This might have simplified some operations, but it doesn't scale well
with huge aptly databases.
This is just an intermediate step towards better memory management -
the list of objects is not loaded unless some method is called.
The `ForEach` method (mainly used in cleanup) is reimplemented to
iterate over the database without ever loading all the objects into
memory.
Memory usage was even worse with the previous approach, as
`LoadComplete()` is usually called for each item, which pulls even more
data into memory, and the item stays in memory until the end of the
iteration as it is referenced from `collection.list`.
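A minimal sketch of this kind of streaming iteration with a goleveldb
iterator; the key prefix and function names are illustrative, not
aptly's actual code:
```go
package main

import (
	"log"

	"github.com/syndtr/goleveldb/leveldb"
	"github.com/syndtr/goleveldb/leveldb/util"
)

// forEach walks all records under a key prefix without materializing the
// whole collection: each value is handed to the callback and can be
// dropped immediately afterwards.
func forEach(db *leveldb.DB, prefix []byte, fn func(value []byte) error) error {
	it := db.NewIterator(util.BytesPrefix(prefix), nil)
	defer it.Release()
	for it.Next() {
		// it.Value() is only valid until the next iteration, which is
		// fine since the callback uses it right away.
		if err := fn(it.Value()); err != nil {
			return err
		}
	}
	return it.Error()
}

func main() {
	db, err := leveldb.OpenFile("/tmp/example-db", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	count := 0
	if err := forEach(db, []byte("P"), func(_ []byte) error {
		count++ // e.g. count records without keeping them in memory
		return nil
	}); err != nil {
		log.Fatal(err)
	}
	log.Printf("visited %d records", count)
}
```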
For the subsequent PR: reimplement `ByUUID()` and probably other methods
to avoid loading all the items into memory, at least for all the
collections except for published repos. When a published repository is
being loaded, it might pull in its source local repo, which in turn
would trigger loading of all the local repos, which is not acceptable.
Updates with contents generation were super syscall-heavy. For each path
in a package (so at least 2-4, but ordinarily >4) we'd do a db.Put in
ContentsIndex, which results in one syscall.Write. So, for every package
in a published repo we'd have to do *at least* 2 but ordinarily >4
syscalls. This gets abysmally slow very quickly depending on the
available system specs.
Instead, start a batch inside each package and finish it when we are done
with the package. This should keep the memory footprint negligible, but
reduce the write() calls from N to 1.
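A minimal sketch of the per-package batching described above, using
goleveldb's `Batch`; the key layout and names are placeholders, not
aptly's actual contents-index schema:
```go
package main

import (
	"log"

	"github.com/syndtr/goleveldb/leveldb"
)

// putPackagePaths collects all contents-index entries for one package
// into a Batch and flushes them with a single Write, instead of one Put
// (and one write syscall) per path.
func putPackagePaths(db *leveldb.DB, pkg string, paths []string) error {
	batch := new(leveldb.Batch)
	for _, p := range paths {
		batch.Put([]byte("contents/"+p+"/"+pkg), nil)
	}
	return db.Write(batch, nil)
}

func main() {
	db, err := leveldb.OpenFile("/tmp/example-db", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	paths := []string{"usr/bin/foo", "usr/share/doc/foo/copyright"}
	if err := putPackagePaths(db, "foo_1.0_amd64", paths); err != nil {
		log.Fatal(err)
	}
}
```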
On one of KDE's servers I have seen update publishing of 7600 packages go
from ~28s to ~9s when using batch putting on an HDD.
On my local system the same set of packages goes from ~14s to ~6s on an
SSD. (All inodes in cache in both cases.)
- new publish calls can now enable AcquireByHash right away (previously
one would have had to create a new publishing endpoint and then
explicitly switch it to AcquireByHash)
- all JSON marshals of PublishedRepo now contain AcquireByHash (allows
inspecting whether a given endpoint has AcquireByHash enabled already;
also enables verification that a switch/update actually applied a
potential AcquireByHash change)
- update all tests to reflect the default state of AcquireByHash
- update creation and switch testing to explicitly toggle AcquireByHash to
make sure state mutation works as expected
The added "aptly publish repo" option "-access-by-hash" publishes
the index files (Packages*, Sources*) also as hardlinked hashes.
Example:
/dists/yakkety/main/binary-amd64/by-hash/SHA512/31833ec39acc...
The Release files indicate this with the option "Acquire-By-Hash: yes"
This is used by apt >= 1.2.0 and prevents the "Hash sum mismatch" race
condition between a server-side "aptly publish repo" and "apt-get update"
on a client.
See: http://www.chiark.greenend.org.uk/~cjwatson/blog/no-more-hash-sum-mismatch-errors.html
This implementation uses symlinks in the by-hash/*/ directory for keeping
only two versions of the index files and deleting older files
automatically.
Note: this only works with aptly.FileSystemPublishedStorage
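A rough sketch of laying out a by-hash entry for a published index file;
the hardlink mirrors the description above, while the symlink-based
retention of the two most recent versions is not shown. Paths and the
helper name are illustrative, not aptly's actual code:
```go
package main

import (
	"crypto/sha512"
	"encoding/hex"
	"fmt"
	"log"
	"os"
	"path/filepath"
)

// linkByHash hardlinks a published index file (e.g. Packages) under
// by-hash/SHA512/<digest> next to it and returns the new path.
func linkByHash(indexPath string) (string, error) {
	data, err := os.ReadFile(indexPath)
	if err != nil {
		return "", err
	}
	sum := sha512.Sum512(data)
	digest := hex.EncodeToString(sum[:])

	dir := filepath.Join(filepath.Dir(indexPath), "by-hash", "SHA512")
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return "", err
	}
	target := filepath.Join(dir, digest)
	if err := os.Link(indexPath, target); err != nil && !os.IsExist(err) {
		return "", err
	}
	return target, nil
}

func main() {
	target, err := linkByHash("dists/yakkety/main/binary-amd64/Packages")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(target)
}
```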
Closes: #536
Signed-off-by: André Roth <neolynx@gmail.com>
Allow skipping unreferenced files cleanup on publish switch/update/drop
via the -skip-cleanup command line option.
Also support the SkipCleanup API parameter.
Fixes #570.