Commit Graph

77 Commits

Author SHA1 Message Date
Ryan Gonzalez
19a705f80d Split reflists to share their contents across snapshots
In current aptly, each repository and snapshot has its own reflist in
the database. This brings a few problems with it:

- Given sufficiently large repositories and snapshots, these lists can
  get enormous, reaching >1MB. This is a problem for LevelDB's overall
  performance, as it tends to prefer values around the configured block
  size (defaults to just 4KiB).
- When you take these large repositories and snapshot them, you have a
  full, new copy of the reflist, even if only a few packages changed.
  This means that having a lot of snapshots with a few changes causes
  the database to basically be full of largely duplicate reflists.
- All the duplication also means that many of the same refs are being
  loaded repeatedly, which can cause some slowdown but, more notably,
  eats up huge amounts of memory.
- Adding on more and more new repositories and snapshots will cause the
  time and memory spent on things like cleanup and publishing to grow
  roughly linearly.

At the core, there are two problems here:

- Reflists get very big because there are just a lot of packages.
- Different reflists can tend to duplicate much of the same contents.

*Split reflists* aim to solve this by separating reflists into 64
*buckets*. Package refs are sorted into individual buckets according to
the following system (sketched in code after the list):

- Take the first 3 letters of the package name, after dropping a `lib`
  prefix. (Using only the first 3 letters will cause packages with
  similar prefixes to end up in the same bucket, under the assumption
  that packages with similar names tend to be updated together.)
- Take the 64-bit xxhash of these letters. (xxhash was chosen because it
  has relatively good distribution across the individual bits, which is
  important for the next step.)
- Use the first 6 bits of the hash (range [0:63]) as an index into the
  buckets.
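
For illustration, here is a minimal Go sketch of that bucketing scheme.
The helper name is made up, the xxhash import is an assumption (any
64-bit xxhash implementation would do), and treating "the first 6 bits"
as the top 6 bits of the hash is a guess at the intended meaning.

```go
package reflistsketch

import (
	"strings"

	"github.com/cespare/xxhash/v2" // assumed xxhash implementation
)

// bucketIndex maps a package name to one of the 64 buckets, following
// the scheme described above.
func bucketIndex(pkgName string) uint64 {
	// Drop a leading "lib" prefix so e.g. "libssl" and "ssl" group together.
	name := strings.TrimPrefix(pkgName, "lib")

	// Keep only the first 3 letters, so similarly named packages (which
	// tend to be updated together) land in the same bucket.
	if len(name) > 3 {
		name = name[:3]
	}

	// 64-bit xxhash of the prefix; its bits are well distributed.
	h := xxhash.Sum64String(name)

	// Take the top 6 bits as an index in [0, 63].
	return h >> 58
}
```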

Once refs are placed in buckets, a sha256 digest of all the refs in the
bucket is taken. These buckets are then stored in the database, split
into roughly block-sized segments, and all the repositories and
snapshots simply store an array of bucket digests.
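
A hypothetical sketch of the storage side, assuming a bucket digest is
simply a SHA-256 over the bucket's (sorted) refs; the type and field
names are illustrative rather than aptly's actual ones:

```go
package reflistsketch

import "crypto/sha256"

// bucketDigest hashes every ref in one bucket; identical bucket contents
// always yield the same digest, which is what makes sharing possible.
func bucketDigest(refs [][]byte) [sha256.Size]byte {
	h := sha256.New()
	for _, ref := range refs { // refs assumed to be kept in sorted order
		h.Write(ref)
		h.Write([]byte{0}) // separator so concatenation is unambiguous
	}
	var d [sha256.Size]byte
	copy(d[:], h.Sum(nil))
	return d
}

// A repository or snapshot then records only the 64 digests; buckets with
// identical contents across snapshots point at the same database entries.
type splitRefList struct {
	Buckets [64][sha256.Size]byte
}
```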

This approach means that *repositories and snapshots can share their
reflist buckets*. If a snapshot is taken of a repository, it will have
the same contents, so its split reflist will point to the same buckets
as the base repository, and only one copy of each bucket is stored in
the database. When some packages in the repository change, only the
buckets containing those packages will be modified; all the other
buckets will remain unchanged, and thus their contents will still be
shared. Later on, when these reflists are loaded, each bucket is only
loaded once, short-cutting the loading of many megabytes of data. In
effect, split reflists are essentially copy-on-write, with only the
changed buckets stored individually.
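
Continuing the illustrative sketch, the load-time sharing described
above boils down to caching buckets by digest, so each distinct bucket
is read from the database at most once no matter how many snapshots
reference it; `fetch` below stands in for the real database lookup.

```go
// loadBuckets resolves a list of bucket digests (32-byte SHA-256 values)
// to their contents, reading each distinct bucket from storage only once.
func loadBuckets(digests [][32]byte, fetch func([32]byte) []byte) [][]byte {
	cache := make(map[[32]byte][]byte)
	out := make([][]byte, 0, len(digests))
	for _, d := range digests {
		bucket, ok := cache[d]
		if !ok {
			bucket = fetch(d) // hit storage only for digests not seen before
			cache[d] = bucket
		}
		out = append(out, bucket)
	}
	return out
}
```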

Changing the disk format means that a migration needs to take place, so
that task is moved into the database cleanup step, which will migrate
reflists over to split reflists, as well as delete any unused reflist
buckets.

All the reflist tests are also changed to additionally test out split
reflists; although the internal logic is all shared (since buckets are,
themselves, just normal reflists), some special additions are needed to
have native versions of the various reflist helper methods.

In our tests, we've observed the following improvements:

- Memory usage during publish and database cleanup, with
  `GOMEMLIMIT=2GiB`, goes down from ~3.2GiB (larger than the memory
  limit!) to ~0.7GiB, a decrease of ~4.5x.
- Database size decreases from 1.3GB to 367MB.

*In my local tests*, publish times also decreased to mere seconds, but
the same effect wasn't observed on the server, where the times stayed
around the same. My suspicion is that this is due to I/O performance: my
local system is an M1 MBP, which almost certainly has much faster disk
speeds than our DigitalOcean block volumes. Split reflists have the side
effect of requiring more random accesses from reading all the buckets by
their keys, so if your random I/O performance is slower, it might cancel
out the benefits. That being said, even in that case, the memory usage
and database size advantages still persist.

Signed-off-by: Ryan Gonzalez <ryan.gonzalez@collabora.com>
2025-02-15 23:49:21 +01:00
André Roth
e319f3cd14 update doc
make descriptions consistent
2024-12-11 11:19:46 +01:00
André Roth
e5e3c49ace swagger: document async 2024-12-11 10:40:44 +01:00
André Roth
c6e0a06b14 swagger: cleanup 2024-12-11 10:40:44 +01:00
André Roth
9b8f6b1d56 fix conflict 2024-12-11 10:40:43 +01:00
André Roth
ba86851d07 add api documentation stubs 2024-12-11 10:40:43 +01:00
André Roth
9ca9569714 fix build and golangci-lint 2024-11-17 14:09:37 +01:00
Mauro Regli
1357d246d8 rename addon files to skel files 2024-11-17 14:09:37 +01:00
Mauro Regli
c75c2c7594 pass down addonpath from api and cmd context 2024-11-17 14:09:37 +01:00
André Roth
f79423a4ee update swagger documentation 2024-11-01 17:48:03 +01:00
André Roth
eb94211053 fix race conditions 2024-11-01 17:48:03 +01:00
André Roth
bd01cd4033 update swagger documentation 2024-11-01 17:48:03 +01:00
Christoph Fiehe
451de79666 Improve consistency between API and Swagger docs.
Signed-off-by: Christoph Fiehe <c.fiehe@eurodata.de>
2024-11-01 17:48:03 +01:00
André Roth
755fdfaca2 update swagger documentation
- add default values
- set default values
2024-11-01 17:48:03 +01:00
André Roth
f4057850b9 fix compile and lint errors 2024-11-01 17:47:50 +01:00
André Roth
4d6688d68e sanitize archs 2024-10-22 16:58:15 +02:00
Christoph Fiehe
7a7ff1142c Minor code and documentation changes.
Signed-off-by: Christoph Fiehe <c.fiehe@eurodata.de>
2024-10-22 16:58:15 +02:00
Christoph Fiehe
8cceed12f7 Fix tests.
Signed-off-by: Christoph Fiehe <c.fiehe@eurodata.de>
2024-10-22 16:58:15 +02:00
Christoph Fiehe
f8f28e9554 Fixing tests and fix cleanup.
Signed-off-by: Christoph Fiehe <c.fiehe@eurodata.de>
2024-10-22 16:58:15 +02:00
Christoph Fiehe
ac5ecf946d Cleanup improved and redundant code removed.
Signed-off-by: Christoph Fiehe <c.fiehe@eurodata.de>
2024-10-22 16:58:15 +02:00
Christoph Fiehe
d87d8bac92 Fix test cases.
Signed-off-by: Christoph Fiehe <c.fiehe@eurodata.de>
2024-10-22 16:58:15 +02:00
Christoph Fiehe
14c29ff912 Fixing tests.
Signed-off-by: Christoph Fiehe <c.fiehe@eurodata.de>
2024-10-22 16:58:15 +02:00
Christoph Fiehe
73cdf5417b Use POST instead of PUT for source creation.
Signed-off-by: Christoph Fiehe <c.fiehe@eurodata.de>
2024-10-22 16:58:15 +02:00
André Roth
fa0d2860f0 fix multidist in publish 2024-10-22 16:58:15 +02:00
André Roth
dcbb2a06a5 fix build 2024-10-22 16:58:15 +02:00
Christoph Fiehe
bd64232eb6 Allow management of components
This commit allows adding, removing and updating components of published repositories without the need to recreate them.

Signed-off-by: Christoph Fiehe <c.fiehe@eurodata.de>
2024-10-22 16:58:15 +02:00
André Roth
f16a68f59c fix race condition with repo add files
Do all relevant database reading/modifying inside `maybeRunTaskInBackground`.

Notably, `LoadComplete` will load the reflist of a repo. If this is done outside of a background operation,
the data might be outdated when the background task runs.
2024-10-22 15:12:25 +02:00
André Roth
cefc09a41b more sanitize 2024-10-11 14:11:09 +02:00
André Roth
57639c4adf Sanitize path api params
- fix path traversal complaints reported by CodeQL
2024-10-11 12:56:08 +02:00
André Roth
861260198a publish: persist multidist flag 2024-10-08 22:28:12 +02:00
André Roth
06cbd29d0d add storage API 2024-10-02 18:48:48 +02:00
André Roth
fb538333fa add swagger documentation 2024-10-01 01:07:09 +02:00
Christoph Fiehe
4195ad90bc Allow to add a new component to a published repo
This commit modifies the behavior of the publish switch method so that new components can also be added to an already published repository. It is no longer necessary to drop and recreate the whole publish.

Signed-off-by: Christoph Fiehe <c.fiehe@eurodata.de>
2024-09-24 15:43:27 +02:00
Noa Resare
b4cd86aa14 Introduce option multi-dist to the publish commands
This change makes it possible to publish multiple distributions with
identically named packages that have different contents, by changing the
structure of the generated pool hierarchy. The option is not enabled by
default, as it changes the structure of the output, which could break
the expectations of other tools.
2024-06-15 11:27:26 +02:00
Ryan Gonzalez
8d09c202db Skip loading reflists when listing published repos
The output doesn't actually depend on the reflists, and loading them for
every published repo starts to take substantial time and memory.

Signed-off-by: Ryan Gonzalez <ryan.gonzalez@collabora.com>
2024-04-24 17:35:44 +02:00
André Roth
72a7780054 fix golint complaints 2024-03-06 06:21:36 +01:00
Mauro Regli
40c242f9d1 Fix: Remove Batch from API options, set to true by default, add comments
Fixes: #1106
2023-09-14 10:34:20 +02:00
Mauro Regli
dbf1ac7867 Fix: Drop Publish returned wrong status code if not found
Deleting a publish that does not exist now results in a status code 404
instead of 500.

Fixes: #1006
2023-03-07 13:46:57 +01:00
Markus Muellner
ecc41f0c0f replace AbortWithError calls with a custom function that sets the content type correctly 2023-01-23 10:42:57 +01:00
Benj Fassbind
af899149c7 Fix wrong nil check for SkipBz2 2022-08-16 09:04:16 +02:00
Sjoerd Simons
f61514edaf Allow disabling bzip2 compression for index files
Using bzip2 generates smaller index files (roughly 20% smaller Packages
files), but it comes with a big performance penalty. When publishing a
Debian mirror snapshot (amd64, arm64, armhf, source) without contents,
skipping bzip2 speeds things up by around 1.8 times.

```
$ hyperfine -w 1 -L skip-bz2 true,false  -m 3 -p "aptly -config aptly.conf publish drop bullseye || true" "aptly -config aptly.conf  publish snapshot  --skip-bz2={skip-bz2} --skip-contents --skip-signing bullseye"
Benchmark 1: aptly -config aptly.conf  publish snapshot  --skip-bz2=true --skip-contents --skip-signing bullseye
  Time (mean ± σ):     35.567 s ±  0.307 s    [User: 39.366 s, System: 10.075 s]
  Range (min … max):   35.311 s … 35.907 s    3 runs

Benchmark 2: aptly -config aptly.conf  publish snapshot  --skip-bz2=false --skip-contents --skip-signing bullseye
  Time (mean ± σ):     64.740 s ±  0.135 s    [User: 68.565 s, System: 10.129 s]
  Range (min … max):   64.596 s … 64.862 s    3 runs

Summary
  'aptly -config aptly.conf  publish snapshot  --skip-bz2=true --skip-contents --skip-signing bullseye' ran
    1.82 ± 0.02 times faster than 'aptly -config aptly.conf  publish snapshot  --skip-bz2=false --skip-contents --skip-signing bullseye'
```

Allow skipping bz2 creation for setups where faster publishing is more
important than Packages file size.

Signed-off-by: Sjoerd Simons <sjoerd@collabora.com>
2022-06-22 11:25:45 +02:00
Ximon Eighteen
5342e549cd govet: compose literal uses unkeyed fields 2022-01-27 09:30:14 +01:00
Lorenzo Bolla
ff51c46915 More informative return value for task.Process 2022-01-27 09:30:14 +01:00
Lorenzo Bolla
9b28d8984f Configurable background task execution 2022-01-27 09:30:14 +01:00
Oliver Sauder
f09a273ad7 Add publish output progress counting remaining number of packages 2022-01-27 09:30:14 +01:00
Oliver Sauder
6ab5e60833 Add task api and resource locking ability 2022-01-27 09:30:14 +01:00
Oliver Sauder
1e7731c317 Removed obsolete RWMutexes 2022-01-27 09:30:14 +01:00
Oliver Sauder
208a2151c1 every goroutine needs to have its own collection factory
This is needed so concurrent reads and writes are possible.
2022-01-27 09:30:14 +01:00
Andrey Smirnov
2c91bcdc30 Bump Go versions for Travis, fix tests
Replace gometalinter with golangci-lint.

Fix system tests (wheezy is gone, replace with stretch).

Fix linter warnings.
2019-07-04 00:16:12 +03:00
Andrey Smirnov
b8c5303fdb Fix paths after repository transfer to aptly-dev 2018-04-18 21:19:43 +03:00