package count changed again -.-
I am working on a fixture set for repositories so we can stop talking to
live repositories. that's quite the undertaking though, so let's fix the
output reference to unbreak the test in the meantime.
Init is actually never called, and I have no clue why it is there in the
first place.
Take this opportunity to introduce a New function which only does the
helper lookup and panics iff that fails. Panic may be a bit too aggressive,
but it seems the most certain way to bail out when no suitable gpg1
binary can be found.
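Roughly, the resulting constructor could look like the sketch below; the type and helper names (Signer, lookupGPG) are assumptions, not the actual aptly API.

```go
// Sketch only; Signer and lookupGPG are assumed names, not aptly's API.
package pgp

import (
	"fmt"
	"os/exec"
)

// Signer holds the resolved gpg binary path.
type Signer struct {
	gpgBin string
}

// New performs the helper lookup exactly once and panics if it fails:
// without a usable gpg1-compatible binary the signer cannot work at all.
func New() *Signer {
	bin, err := lookupGPG()
	if err != nil {
		panic(fmt.Sprintf("no suitable gpg binary found: %v", err))
	}
	return &Signer{gpgBin: bin}
}

// lookupGPG stands in for the actual detection logic (see further below);
// here it simply checks PATH for gpg, then gpg1.
func lookupGPG() (string, error) {
	for _, candidate := range []string{"gpg", "gpg1"} {
		if path, err := exec.LookPath(candidate); err == nil {
			return path, nil
		}
	}
	return "", fmt.Errorf("neither gpg nor gpg1 found in PATH")
}
```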
Newer versions of Debian and Ubuntu come with gpg pointing to gpg2.
We can currently only handle gpg1 CLIs though. Luckily the old gpg is still
available in the package gnupg1 (providing bin/gpg1).
As a bit of a stop-gap, until #657 can be resolved properly, we'll detect
the version of bin/gpg. If it is unsuitable we'll fall back and try
bin/gpg1. If neither is found to be suitable, the signer/verifier will
not work.
The same applies to gpgv/gpgv1.
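A rough sketch of that fallback, under the assumption that the major version can be read off the first line of `gpg --version`; findGPG1Compatible is a made-up name, not the actual aptly helper.

```go
// Assumed detection helper; not the actual aptly implementation.
package pgp

import (
	"fmt"
	"os/exec"
	"strings"
)

// findGPG1Compatible returns the first candidate binary whose --version
// output reports a 1.x release, e.g. findGPG1Compatible("gpg", "gpg1")
// for signing and findGPG1Compatible("gpgv", "gpgv1") for verification.
func findGPG1Compatible(candidates ...string) (string, error) {
	for _, name := range candidates {
		path, err := exec.LookPath(name)
		if err != nil {
			continue // binary not installed, try the next one
		}
		out, err := exec.Command(path, "--version").Output()
		if err != nil {
			continue
		}
		// The first line typically looks like "gpg (GnuPG) 1.4.23"
		// or "gpg (GnuPG) 2.2.4"; the version is the last field.
		firstLine := strings.SplitN(string(out), "\n", 2)[0]
		fields := strings.Fields(firstLine)
		if len(fields) > 0 && strings.HasPrefix(fields[len(fields)-1], "1.") {
			return path, nil
		}
	}
	return "", fmt.Errorf("no gpg 1.x compatible binary among %v", candidates)
}
```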
1. Don't run long steps for Go versions other than 1.9 & 1.10,
per the Go release policy (the two latest releases are supported).
2. Switch to codecov.io; collect coverage only on Go 1.10, which
has fixes for coverage across multiple packages & for ./... ignoring vendor.
3. Simplify the Makefile.
updates with contents generation were super syscall-heavy: for each path
in a package (so at least 2, but ordinarily >4) we'd do a db.Put in
ContentsIndex, each of which results in one syscall.Write. so, for every
package in a published repo we'd issue *at least* 2 but ordinarily >4
write syscalls. this gets abysmally slow very quickly depending on the
available system specs.
instead, start a batch inside each package and finish it when we are done
with the package. this should keep the memory footprint negligible, but
reduce the write() calls from N to 1.
on one of KDE's servers I have seen update publishing of 7600 packages go
from ~28s to ~9s when using batch putting on an HDD.
on my local system the same set of packages goes from ~14s to ~6s on an SSD.
(all inodes in cache in both cases)
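for illustration, a batched contents write could look roughly like this when talking to goleveldb directly; aptly goes through its own database layer, and the key layout and function names below are made up.

```go
// Batching sketch against goleveldb; key format and function names are
// illustrative, not aptly's actual contents index layout.
package main

import (
	"log"

	"github.com/syndtr/goleveldb/leveldb"
)

// indexPackageContents collects one Put per path into a single batch and
// flushes it with one Write per package, instead of one write per path.
func indexPackageContents(db *leveldb.DB, pkgKey string, paths []string) error {
	batch := new(leveldb.Batch)
	for _, p := range paths {
		batch.Put([]byte(pkgKey+"\x00"+p), nil)
	}
	return db.Write(batch, nil)
}

func main() {
	db, err := leveldb.OpenFile("contents.db", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	err = indexPackageContents(db, "Pfoo_1.0_amd64", []string{
		"usr/bin/foo",
		"usr/share/doc/foo/copyright",
		"usr/share/man/man1/foo.1.gz",
	})
	if err != nil {
		log.Fatal(err)
	}
}
```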
presently there is no use case where we need this. on the other hand,
passing empty paths into any of the remove methods is indicative of a bug.
this is particularly dangerous as it can temporarily smash the publish
root but later restore it again when actually publishing. this makes
for super nasty, hard-to-track-down problems.
to guard against this, simply disallow root dir removal via empty
strings. should we find a use case for this in the future we can always
revisit this (FTR: I think a very explicit API should be used so everyone
knows what is going on and can't accidentally run it).
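a minimal sketch of the guard, with assumed names (PublishedStorage, RemoveDirs, rootPath); the real signatures in aptly may differ.

```go
// Guard sketch under assumed names; not the actual aptly code.
package files

import (
	"fmt"
	"os"
	"path/filepath"
)

// PublishedStorage publishes under rootPath; all paths it takes are relative.
type PublishedStorage struct {
	rootPath string
}

// RemoveDirs refuses to operate on an empty relative path: joining "" with
// rootPath resolves to the publish root itself, and removing the root is
// always a bug, never an intended operation.
func (s *PublishedStorage) RemoveDirs(path string) error {
	if path == "" {
		return fmt.Errorf("refusing to remove empty path: it would resolve to the publish root %q", s.rootPath)
	}
	return os.RemoveAll(filepath.Join(s.rootPath, path))
}
```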
the logic here was wrong.
if we managed to find the link target (the physical index file) pointed to
by our old symlink we want to remove it (this is basically "cleaning up the
old index" logic).
previously we'd only delete it when the ReadLink came back with an
error, which had two serious issues:
a) linkTarget was empty, so we basically called Remove("") which would
delete the storage -> root <- directory if the root is a symlink!
b) we'd leak old indexes as the cleanup logic only ran if there was an
error, which would ordinarily never happen.
the new code correctly cleans up unless there was an error.
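in pseudo-real Go the corrected flow boils down to checking err == nil before removing the old target; all names below are assumptions, and the symlink target is assumed to be relative to the publish root.

```go
// Cleanup sketch with assumed names; not the actual aptly code.
package files

import (
	"os"
	"path/filepath"
)

// replaceIndexSymlink re-points link at newTarget, removing the old
// physical index the link used to point to. Targets are relative to root.
func replaceIndexSymlink(root, link, newTarget string) error {
	linkPath := filepath.Join(root, link)

	oldTarget, err := os.Readlink(linkPath)
	if err == nil {
		// We found the old index file: clean it up so stale indexes
		// don't pile up. (The old code only did this on err != nil,
		// where oldTarget was still empty, turning the call into
		// Remove("") against the publish root.)
		_ = os.Remove(filepath.Join(root, oldTarget))
	}

	// Re-point the symlink at the freshly generated index.
	_ = os.Remove(linkPath)
	return os.Symlink(newTarget, linkPath)
}
```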
this relates to a previous bugfix of readLink which incorrectly returned
absolute paths, ultimately also breaking the Remove call.
previously it'd return an absolute path, which makes the path absolutely
useless as all other functions of PublishedStorage need relative input
and will prepend it with the rootPath. so getting an absolute ReadLink
and then trying to remove that would ultimately try to remove the
absolute path `$root/AbsoluteRoot/LinkTarget` instead of `$root/LinkTarget`.
also add a unit test to actually verify readLink.
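a sketch of what the relative-path contract and its test could look like, assuming targets live directly under rootPath; names are illustrative, not the actual aptly test.

```go
// ReadLink contract sketch plus a unit test; assumed names throughout.
package files

import (
	"os"
	"path/filepath"
	"strings"
	"testing"
)

type PublishedStorage struct {
	rootPath string
}

// ReadLink returns the symlink target relative to rootPath, so the result
// can be fed straight back into Remove & friends without double-prefixing.
func (s *PublishedStorage) ReadLink(path string) (string, error) {
	target, err := os.Readlink(filepath.Join(s.rootPath, path))
	if err != nil {
		return "", err
	}
	return strings.TrimPrefix(target, s.rootPath+string(filepath.Separator)), nil
}

func TestReadLinkReturnsRelativePath(t *testing.T) {
	root := t.TempDir()
	// Point root/link at root/target using an absolute target path.
	if err := os.Symlink(filepath.Join(root, "target"), filepath.Join(root, "link")); err != nil {
		t.Fatal(err)
	}

	s := &PublishedStorage{rootPath: root}
	got, err := s.ReadLink("link")
	if err != nil {
		t.Fatal(err)
	}
	if got != "target" {
		t.Fatalf("expected relative path %q, got %q", "target", got)
	}
}
```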