Commit Graph

37 Commits

Author SHA1 Message Date
Ryan Gonzalez
19a705f80d Split reflists to share their contents across snapshots
In current aptly, each repository and snapshot has its own reflist in
the database. This brings a few problems with it:

- Given sufficiently large repositories and snapshots, these lists can
  get enormous, reaching >1MB. This is a problem for LevelDB's overall
  performance, as it tends to prefer values around the configured block
  size (which defaults to just 4KiB).
- When you snapshot these large repositories, you get a full, new copy
  of the reflist, even if only a few packages changed. This means that
  having a lot of snapshots, each with only a few changes, leaves the
  database full of largely duplicate reflists.
- All the duplication also means that many of the same refs are being
  loaded repeatedly, which can cause some slowdown but, more notably,
  eats up huge amounts of memory.
- Adding on more and more new repositories and snapshots will cause the
  time and memory spent on things like cleanup and publishing to grow
  roughly linearly.

At the core, there are two problems here:

- Reflists get very big because there are just a lot of packages.
- Different reflists can tend to duplicate much of the same contents.

*Split reflists* aim at solving this by separating reflists into 64
*buckets*. Package refs are sorted into individual buckets according to
the following system:

- Take the first 3 letters of the package name, after dropping a `lib`
  prefix. (Using only the first 3 letters will cause packages with
  similar prefixes to end up in the same bucket, under the assumption
  that packages with similar names tend to be updated together.)
- Take the 64-bit xxhash of these letters. (xxhash was chosen because it
  has relatively good distribution across the individual bits, which is
  important for the next step.)
- Use the first 6 bits of the hash (range [0:63]) as an index into the
  buckets.

Once refs are placed in buckets, a sha256 digest of all the refs in the
bucket is taken. These buckets are then stored in the database, split
into roughly block-sized segments, and all the repositories and
snapshots simply store an array of bucket digests.
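
For illustration, here is a minimal Go sketch of that bucketing scheme, assuming the github.com/cespare/xxhash/v2 package; the function names, the choice of the hash's top 6 bits, and the way the refs are fed into sha256 are assumptions made for the example rather than aptly's exact implementation:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"strings"

	"github.com/cespare/xxhash/v2"
)

// bucketIndex sketches the scheme described above: drop a "lib" prefix,
// keep the first 3 letters, xxhash them, and use 6 bits of the hash as
// the bucket index.
func bucketIndex(packageName string) int {
	name := strings.TrimPrefix(packageName, "lib")
	if len(name) > 3 {
		name = name[:3]
	}
	h := xxhash.Sum64String(name)
	// Which 6 bits are taken is an assumption here; the top 6 bits give a
	// value in [0, 63].
	return int(h >> 58)
}

// bucketDigest sketches the per-bucket digest: a sha256 over the refs in
// the bucket, which repositories and snapshots store in place of the refs.
func bucketDigest(refs [][]byte) [sha256.Size]byte {
	h := sha256.New()
	for _, ref := range refs {
		h.Write(ref)
	}
	var digest [sha256.Size]byte
	copy(digest[:], h.Sum(nil))
	return digest
}

func main() {
	// libssl3 and libssl-dev share the "ssl" prefix and so land in the
	// same bucket, matching the "updated together" assumption.
	for _, pkg := range []string{"libssl3", "libssl-dev", "openssl", "python3"} {
		fmt.Printf("%-12s -> bucket %d\n", pkg, bucketIndex(pkg))
	}
}
```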

This approach means that *repositories and snapshots can share their
reflist buckets*. If a snapshot is taken of a repository, it will have
the same contents, so its split reflist will point to the same buckets
as the base repository, and only one copy of each bucket is stored in
the database. When some packages in the repository change, only the
buckets containing those packages will be modified; all the other
buckets will remain unchanged, and thus their contents will still be
shared. Later on, when these reflists are loaded, each bucket is only
loaded once, short-cutting the loading of many megabytes of data. In effect,
split reflists are essentially copy-on-write, with only the changed
buckets stored individually.

Changing the disk format means that a migration needs to take place, so
that task is moved into the database cleanup step, which will migrate
reflists over to split reflists, as well as delete any unused reflist
buckets.

All the reflist tests are also changed to additionally test out split
reflists; although the internal logic is all shared (since buckets are,
themselves, just normal reflists), some special additions are needed to
have native versions of the various reflist helper methods.

In our tests, we've observed the following improvements:

- Memory usage during publish and database cleanup, with
  `GOMEMLIMIT=2GiB`, goes down from ~3.2GiB (larger than the memory
  limit!) to ~0.7GiB, a decrease of ~4.5x.
- Database size decreases from 1.3GB to 367MB.

*In my local tests*, publish times also dropped to mere seconds, but the
same effect wasn't observed on the server, where the times stayed around
the same. My suspicion is that this is due to I/O performance: my local
system is an M1 MBP, which almost certainly has much faster disk speeds
than our DigitalOcean block volumes. Split reflists have the side effect
of requiring more random accesses, since all the buckets are read by
their keys, so if your random I/O performance is slower, it might cancel
out the benefits. That being said, even in that case, the memory usage
and database size advantages still persist.

Signed-off-by: Ryan Gonzalez <ryan.gonzalez@collabora.com>
2025-02-15 23:49:21 +01:00
Mikel Olasagasti Uranga
7074fc8856 Switch to google/uuid module
The currently used github.com/pborman/uuid hasn't seen any updates in years.

Signed-off-by: Mikel Olasagasti Uranga <mikel@olasagasti.info>
2025-01-11 23:18:50 +01:00
André Roth
0b3dd2709b apply PR feedback 2024-07-31 22:16:00 +02:00
André Roth
67771795ca etcd: implement transactions
- use temporary db for lookups in transactions
- use batch implementation to commit transaction
2024-07-31 22:16:00 +02:00
André Roth
7a01c9c62d etcd: implement batch operations
- cache the operations internally in a list
- Write() applies the list to etcd
2024-07-31 22:16:00 +02:00
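
As a rough illustration of the approach in the commit above, here is a hedged sketch of a batch type built on the etcd v3 client (go.etcd.io/etcd/client/v3); the type and method names are illustrative and not aptly's actual etcddb code:

```go
package etcddbsketch

import (
	"context"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// batch caches operations in a list, as the commit describes; nothing is
// sent to etcd until Write() is called.
type batch struct {
	kv  clientv3.KV
	ops []clientv3.Op
}

func (b *batch) Put(key, value []byte) error {
	b.ops = append(b.ops, clientv3.OpPut(string(key), string(value)))
	return nil
}

func (b *batch) Delete(key []byte) error {
	b.ops = append(b.ops, clientv3.OpDelete(string(key)))
	return nil
}

// Write applies all cached operations to etcd in a single transaction.
func (b *batch) Write() error {
	if len(b.ops) == 0 {
		return nil
	}
	_, err := b.kv.Txn(context.Background()).Then(b.ops...).Commit()
	b.ops = b.ops[:0]
	return err
}
```

A real implementation would also need to split very large batches, since etcd caps the number of operations per transaction (128 by default).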
André Roth
9768ecef22 etcd: implement temporary db support
- temporary db support is implemented with a unique key prefix
- prevent closing etcd connection when closing temporary db
2024-07-31 22:16:00 +02:00
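
A hedged sketch of the key-prefix idea described above, again against the etcd v3 client; the tempDB type and its prefix layout are assumptions for illustration, not aptly's actual code:

```go
package etcddbsketch

import (
	"context"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// tempDB is a view over the shared etcd client that prepends a unique
// prefix (e.g. "tmp/<uuid>/") to every key.
type tempDB struct {
	kv     clientv3.KV
	prefix string
}

func (t *tempDB) Get(ctx context.Context, key string) (*clientv3.GetResponse, error) {
	return t.kv.Get(ctx, t.prefix+key)
}

// Drop deletes everything under the prefix; the underlying etcd
// connection is shared, so it is left open.
func (t *tempDB) Drop(ctx context.Context) error {
	_, err := t.kv.Delete(ctx, t.prefix, clientv3.WithPrefix())
	return err
}
```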
André Roth
5b74f82edb etcd: fix int overflow
goxc fails with:

Error: database/etcddb/database.go:17:25: cannot use 2048 * 1024 * 1024 (untyped int constant 2147483648) as int value in struct literal (overflows)
2024-07-31 22:16:00 +02:00
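
For context, a minimal illustration of the error: on 32-bit build targets int is 32 bits wide, so the untyped constant 2048 * 1024 * 1024 (exactly 2^31) cannot be stored in an int. One way to avoid it (whether this matches the actual fix is an assumption) is an explicitly 64-bit type:

```go
package main

import "fmt"

func main() {
	// On 32-bit targets int is 32 bits, so this line fails to compile there:
	//
	//	var size int = 2048 * 1024 * 1024 // constant 2147483648 overflows int
	//
	// An int64 holds the value on every platform.
	var size int64 = 2048 * 1024 * 1024
	fmt.Println(size) // 2147483648
}
```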
hudeng
78172d11d7 feat: Add etcd database support
Improve concurrent access and high availability of aptly by leveraging the characteristics of etcd.
2024-07-31 22:16:00 +02:00
Markus Muellner
352f4e8772 update golangci-lint and replace deprecated calls to io/ioutil 2022-12-12 10:21:39 +01:00
Oliver Sauder
e63d74dff2 Fixed not running tests 2022-01-27 09:30:14 +01:00
Andrey Smirnov
77d7c3871a Consistently use transactions to update database
For any action which is multi-step (requires updating more than 1 DB
key), use a transaction to make the update atomic.

Also pack big chunks of updates (package imports and mirror updates)
into a single transaction to improve aptly performance and get some
isolation.

Note that the layers above (Collections) still provide some level of
isolation, so this is going to shine with the future PRs that remove
collection locks.

Spin-off of #459
2019-08-11 00:11:53 +03:00
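
As a rough sketch of the idea (not aptly's actual code), here is a multi-key update with goleveldb's transaction API, using illustrative keys; either every Put lands, or on error the transaction is discarded and none of them do:

```go
package dbsketch

import "github.com/syndtr/goleveldb/leveldb"

// updateAtomically writes two related keys in one goleveldb transaction,
// so a crash or error between them cannot leave the database half-updated.
func updateAtomically(db *leveldb.DB) error {
	tx, err := db.OpenTransaction()
	if err != nil {
		return err
	}
	if err := tx.Put([]byte("package/foo_1.0"), []byte("...metadata..."), nil); err != nil {
		tx.Discard()
		return err
	}
	if err := tx.Put([]byte("repo/main/reflist"), []byte("...updated refs..."), nil); err != nil {
		tx.Discard()
		return err
	}
	return tx.Commit()
}
```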
Andrey Smirnov
67e38955ae Refactor database code to support standalone batches, transactions.
This is spin-off of changes from #459.

Transactions are not being used yet, but batches are updated to work
with the new API.

The `database/` package was refactored to split the abstract interfaces
from the goleveldb implementation. This should make it easier to
implement new database types.
2019-08-09 00:46:40 +03:00
Andrey Smirnov
211ac0501f Rework the way database is open/re-open in aptly
Allow the database to be initialized without being opened, and unify all
the open paths to retry on failure.

In the API router, make sure open requests are matched with acks in an
explicit way.

This also enables re-open attempts in all the aptly commands, so
hopefully it makes running the aptly CLI much easier now.

Fix up system tests for oldoldstable ;)
2017-07-05 00:17:48 +03:00
Andrey Smirnov
4c06e26d85 Throttle compaction on temporary DB 2017-02-23 01:01:17 +03:00
Andrey Smirnov
f58d2627c1 Add temporary DB and prefix methods to Storage 2017-02-14 02:26:32 +03:00
Andrey Smirnov
067d197dac Update to latest version of goleveldb. 2016-03-01 12:53:25 +03:00
Andrey Smirnov
cf644289a3 Lower limit for goleveldb open files cache to 256 #260 2015-10-01 12:37:43 +03:00
Chris Read
daf887e54f Upgrade gocheck 2014-11-05 13:27:15 -06:00
Andrey Smirnov
e4b9e974d2 bytes.Equal should be faster than bytes.Compare. 2014-10-06 15:17:25 +04:00
Andrey Smirnov
43eb993160 Don't panic on double re-open/close, ignore it. #45 #114 2014-10-03 01:20:26 +04:00
Andrey Smirnov
3e5ba27cb7 Ability to re-open db after close. #45 #114 2014-10-02 21:13:56 +04:00
Andrey Smirnov
a0870f6726 Refactor mirror download code, split it into separate methods. #45 #114 2014-10-02 19:30:37 +04:00
Andrey Smirnov
4afa3126e4 Fix ugly bug. #25 2014-04-05 16:20:04 +04:00
Andrey Smirnov
400d0da7d4 Add method to recover LevelDB after crash. 2014-04-05 00:30:39 +04:00
Andrey Smirnov
6a42aad322 Compact LevelDB in aptly db cleanup. #19 2014-03-17 16:22:49 +04:00
Andrey Smirnov
c28a641293 Don't overwrite entry if there are no changes. 2014-03-10 19:42:08 +04:00
Andrey Smirnov
97becf199f Simplify iteration in LevelDB. 2014-02-24 12:05:51 +04:00
Andrey Smirnov
bff299d268 Batch writes/deletes in LevelDB. 2014-02-11 21:11:15 +04:00
Andrey Smirnov
8d72f1a959 Database: return list of keys by prefix. 2014-02-11 17:21:35 +04:00
Andrey Smirnov
78c7bf6af1 Double delete is not a problem. 2014-01-29 14:34:30 +04:00
Andrey Smirnov
bbec7ef948 Implement key deletion. 2014-01-17 00:28:10 +04:00
Sebastien Binet
ca33e07366 gofmt 2013-12-20 15:42:56 +01:00
Andrey Smirnov
97f4e8d5f2 Fetch by prefix from db. 2013-12-19 16:06:28 +04:00
Andrey Smirnov
055c38a4d9 Package comment. 2013-12-18 12:11:46 +04:00
Andrey Smirnov
3660a94ea6 Comments on public entities. 2013-12-17 18:01:07 +04:00
Andrey Smirnov
5e078fb413 Fix usage with new version of goleveldb. 2013-12-17 12:06:38 +04:00
Andrey Smirnov
b73def6b32 LevelDB first mockup. 2013-12-17 12:01:32 +04:00