Several sections of the code *required* a LocalPackagePool, even though they
could have performed their operations with a standard PackagePool.
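A minimal sketch of the interface widening, with hypothetical method sets (not aptly's real signatures): the caller only needs behavior from the base interface, so the parameter type can be loosened.

```go
package pool

import "fmt"

// PackagePool is the base interface; enough for verification.
type PackagePool interface {
	Verify(path string) (bool, error)
}

// LocalPackagePool adds filesystem-specific operations.
type LocalPackagePool interface {
	PackagePool
	FullPath(path string) string
}

// Before: checkPackage(pool LocalPackagePool, path string) error.
// Only Verify is used, so the standard PackagePool suffices.
func checkPackage(pool PackagePool, path string) error {
	ok, err := pool.Verify(path)
	if err != nil {
		return err
	}
	if !ok {
		return fmt.Errorf("package %s failed verification", path)
	}
	return nil
}
```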
Signed-off-by: Ryan Gonzalez <ryan.gonzalez@collabora.com>
For any action which is multi-step (i.e. requires updating more than one DB
key), use a transaction to make the update atomic.
Also pack large batches of updates (package imports and mirror updates) into
a single transaction to improve aptly performance and gain some isolation.
Note that the layers above (Collections) still provide some level of
isolation, so this change will really shine with the future PRs that remove
the collection locks.
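As an illustration, a write batch in goleveldb (which aptly's DB layer wraps) applies several key updates atomically; this is a sketch of the idea, not aptly's actual database wrapper, and the keys are made up:

```go
package main

import (
	"log"

	"github.com/syndtr/goleveldb/leveldb"
)

func main() {
	db, err := leveldb.OpenFile("/tmp/aptly-demo-db", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Pack related key updates into one batch: either every key is
	// written, or none are, so a crash mid-update can't leave the DB
	// with half of a multi-key change.
	batch := new(leveldb.Batch)
	batch.Put([]byte("mirror/example/status"), []byte("updated"))
	batch.Put([]byte("mirror/example/packages"), []byte("pkg-list-v2"))
	batch.Delete([]byte("mirror/example/stale"))

	if err := db.Write(batch, nil); err != nil {
		log.Fatal(err)
	}
}
```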
Spin-off of #459
Apply retries as a global, config-level option `downloadRetries`, so that
they can be applied to any aptly command which downloads objects.
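A sketch of how a config-level retry count could drive any download; `downloadWithRetries` and the fixed backoff are illustrative, not aptly's downloader:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

// downloadWithRetries retries a GET up to `retries` additional times,
// so retries=3 means at most 4 attempts in total.
func downloadWithRetries(url string, retries int, w io.Writer) error {
	var lastErr error
	for attempt := 0; attempt <= retries; attempt++ {
		if attempt > 0 {
			time.Sleep(time.Second) // simple fixed backoff between attempts
		}
		resp, err := http.Get(url)
		if err != nil {
			lastErr = err
			continue
		}
		if resp.StatusCode != http.StatusOK {
			resp.Body.Close()
			lastErr = fmt.Errorf("HTTP %d for %s", resp.StatusCode, url)
			continue
		}
		_, err = io.Copy(w, resp.Body)
		resp.Body.Close()
		if err == nil {
			return nil
		}
		lastErr = err
	}
	return lastErr
}

func main() {
	if err := downloadWithRetries("https://example.org/file", 3, os.Stdout); err != nil {
		fmt.Fprintln(os.Stderr, "download failed:", err)
	}
}
```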
Unwrap `errors.Wrap`, which is used in the downloader.
Unwrap `*url.Error`, which should be the actual error returned by the
HTTP client; catch more cases and be more specific about failures.
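A sketch of the unwrapping, using the standard library's `errors.As` (values wrapped by `github.com/pkg/errors` also implement `Unwrap`, so they participate in the same chain); the `classify` helper is hypothetical:

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
	"net/url"
	"os"
)

// classify peels wrappers off a download error to reach the root cause.
func classify(err error) string {
	var urlErr *url.Error
	if errors.As(err, &urlErr) {
		// *url.Error is what net/http's client actually returns;
		// its Err field holds the underlying transport failure.
		if urlErr.Timeout() {
			return fmt.Sprintf("timeout fetching %s", urlErr.URL)
		}
		return fmt.Sprintf("transport error fetching %s: %v", urlErr.URL, urlErr.Err)
	}
	return err.Error()
}

func main() {
	_, err := http.Get("http://127.0.0.1:1/nonexistent") // connection likely refused
	if err != nil {
		fmt.Fprintln(os.Stderr, classify(err))
	}
}
```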
There are two fixes here:
1. Abort package downloads immediately when ^C is pressed.
2. Import all already-downloaded files into the package pool,
   so that the next time the mirror is updated, aptly won't download them
   again.
This requires splitting the file-import phase out as a separate step at the
end; it should be pretty fast, as it only moves files (hardlinks) and updates
the DB with the new checksums. A sketch of both fixes follows below.
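The sketch below is under stated assumptions: `fetch` and the URL map are hypothetical, and a real run would import the finished files into the pool rather than print them.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"os"
	"os/signal"
)

// fetch aborts as soon as ctx is cancelled, because the request carries it.
func fetch(ctx context.Context, url, dest string) error {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = io.Copy(f, resp.Body) // fails promptly on cancellation
	return err
}

func main() {
	// Cancel the context on the first ^C so in-flight downloads abort
	// immediately instead of running to completion.
	ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt)
	defer stop()

	urls := map[string]string{"https://example.org/a.deb": "/tmp/a.deb"}
	var done []string
	for url, dest := range urls {
		if err := fetch(ctx, url, dest); err != nil {
			fmt.Fprintln(os.Stderr, "aborted:", err)
			break
		}
		done = append(done, dest)
	}

	// Even after ^C, files that finished downloading are kept and can be
	// imported into the pool, so the next `mirror update` skips them.
	for _, dest := range done {
		fmt.Println("would import into pool:", dest)
	}
}
```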
`PackageDownloadTask` is now just a reference to a file. The whole process
was rewritten to follow the pattern: download to a temporary location inside
the pool, verify/update checksums, and import into the pool as the final step.
This removes a lot of edge cases in which aptly's internal state could be
broken when updating from a rogue mirror.
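A minimal sketch of that pattern, assuming a hypothetical `importFile` helper and using a plain rename where aptly would hardlink into the pool:

```go
package pool

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// importFile writes to a temp file inside the pool, verifies its checksum,
// and only then moves it to its final name. A half-written or corrupt file
// never becomes visible under its final path.
func importFile(poolDir, name string, src io.Reader, wantSHA256 string) error {
	tmp, err := os.CreateTemp(poolDir, ".download-*")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // no-op once the rename succeeds

	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(tmp, h), src); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantSHA256 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantSHA256)
	}
	return os.Rename(tmp.Name(), filepath.Join(poolDir, name))
}
```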
This also changes the whole memory model: package lists/files are now kept in
memory for the duration of the `mirror update` command and saved to disk only
at the end.
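A sketch of that in-memory accumulation, assuming goleveldb as the backing store; `memoryList` is hypothetical, not aptly's real type:

```go
package pool

import "github.com/syndtr/goleveldb/leveldb"

// memoryList accumulates package records during `mirror update` and
// writes them out once at the end, instead of writing per package.
type memoryList struct {
	packages map[string][]byte // key -> serialized package record
}

func newMemoryList() *memoryList {
	return &memoryList{packages: make(map[string][]byte)}
}

func (l *memoryList) Add(key string, data []byte) {
	l.packages[key] = data
}

// Flush saves everything in a single atomic batch at the end of the run.
func (l *memoryList) Flush(db *leveldb.DB) error {
	batch := new(leveldb.Batch)
	for k, v := range l.packages {
		batch.Put([]byte(k), v)
	}
	return db.Write(batch, nil)
}
```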