Compare commits

...

30 Commits

Author SHA1 Message Date
Mike Frysinger
ade45de770 docs: windows: mention Developer Mode for symlinks
This is probably better than recommending Administrator access.

Change-Id: Ic916f15fe03f7fa1e03c685265b4774bfc1279c2
Reviewed-on: https://gerrit-review.googlesource.com/c/git-repo/+/563581
Tested-by: Mike Frysinger <vapier@google.com>
Reviewed-by: Gavin Mak <gavinmak@google.com>
Commit-Queue: Mike Frysinger <vapier@google.com>
2026-03-20 14:07:06 -07:00
Lucas Tanure
0251fb33c4 project: don't re-shallow manually unshallowed repos during sync
If a user has manually unshallowed a repo (e.g. via
`git fetch --unshallow`), the absence of the `shallow` file in the
gitdir indicates a full clone. Re-applying depth during a subsequent
sync would undo the user's intent. Skip re-shallowing in this case
by clearing depth when the project is not new and no shallow file
is present.

Change-Id: I4ee0e78018de9078fe1bd77a9615613ef0c40d33
Reviewed-on: https://gerrit-review.googlesource.com/c/git-repo/+/558743
Reviewed-by: Gavin Mak <gavinmak@google.com>
Reviewed-by: Carlos Fernandez <carlosfsanz@meta.com>
Reviewed-by: Mike Frysinger <vapier@google.com>
Commit-Queue: Lucas Alves <ltanure@gmail.com>
Tested-by: Lucas Alves <ltanure@gmail.com>
Reviewed-by: Lucas Alves <ltanure@gmail.com>
2026-03-20 10:08:40 -07:00
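The check described in this commit can be sketched as follows; `effective_depth` is a hypothetical helper for illustration, not repo's actual function name:

```python
import os


def effective_depth(gitdir, is_new, depth):
    """Clear depth for repos the user has manually unshallowed.

    Git keeps a `shallow` file in the gitdir of shallow clones. If the
    project is not freshly cloned and that file is absent, the user has
    already unshallowed it (e.g. `git fetch --unshallow`), so re-applying
    depth on the next sync would undo their intent.
    """
    shallow_file = os.path.join(gitdir, "shallow")
    if not is_new and not os.path.exists(shallow_file):
        return None  # already a full clone; don't re-shallow
    return depth
```
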
Jacky Liu
0176586544 Use git_superproject.UseSuperproject() everywhere
Some call sites currently use git_superproject.UseSuperproject(), which
checks both the manifest config and the user's config, while others use
manifest.manifestProject.use_superproject, which only checks the
manifest config. This causes inconsistent behavior for users who do not
pass --use-superproject to `repo init` but have
repo.superprojectChoice set in their git config.

Replace the manifest.manifestProject.use_superproject call sites with
git_superproject.UseSuperproject() to respect the user's config and
avoid the inconsistency.

Bug: 454514213
Change-Id: I1f734235cdd67b8a6915f1d05967d1aaa4d03f2a
Reviewed-on: https://gerrit-review.googlesource.com/c/git-repo/+/561801
Commit-Queue: Jacky Liu <qsliu@google.com>
Reviewed-by: Mike Frysinger <vapier@google.com>
Tested-by: Jacky Liu <qsliu@google.com>
2026-03-18 21:03:07 -07:00
Mike Frysinger
582804a59e pydev: drop Python 2 reference
Not sure who uses this anymore, but might as well delete obviously
wrong content.

Change-Id: I5cdf1cf699c81b7db32b400f371134d21f474743
Reviewed-on: https://gerrit-review.googlesource.com/c/git-repo/+/563161
Commit-Queue: Mike Frysinger <vapier@google.com>
Reviewed-by: Gavin Mak <gavinmak@google.com>
Tested-by: Mike Frysinger <vapier@google.com>
2026-03-18 12:01:37 -07:00
Mike Frysinger
afc3d55d39 isort: merge config into pyproject.toml
Change-Id: I3a50de04897789c7b2f291882faf1c862645b054
Reviewed-on: https://gerrit-review.googlesource.com/c/git-repo/+/563141
Tested-by: Mike Frysinger <vapier@google.com>
Reviewed-by: Gavin Mak <gavinmak@google.com>
Commit-Queue: Mike Frysinger <vapier@google.com>
2026-03-18 12:01:25 -07:00
Mike Frysinger
f24bc7aed5 tests: switch some test modules to pytest
Change-Id: I524b5ff2d77f8232f94e21921b00ba4027d2ac4f
Reviewed-on: https://gerrit-review.googlesource.com/c/git-repo/+/563081
Tested-by: Mike Frysinger <vapier@google.com>
Reviewed-by: Gavin Mak <gavinmak@google.com>
Commit-Queue: Mike Frysinger <vapier@google.com>
2026-03-18 11:13:26 -07:00
Gavin Mak
83b8ebdbbe git_superproject: avoid re-initing bare repo
Running sync with reftable on a files-backed workspace fails to re-init
the superproject dir with:
```
fatal: could not open
'.../.repo/exp-superproject/<hash>-superproject.git/refs/heads' for writing:
Is a directory
```

Bug: 476209856
Change-Id: Ie8473d66069aafefa5661bd3ea8e73b2b27c6a38
Reviewed-on: https://gerrit-review.googlesource.com/c/git-repo/+/550981
Commit-Queue: Gavin Mak <gavinmak@google.com>
Reviewed-by: Mike Frysinger <vapier@google.com>
Tested-by: Gavin Mak <gavinmak@google.com>
2026-03-17 14:30:35 -07:00
Gavin Mak
a0abfd7339 project: resolve unborn HEAD robustly in reftable repos
Use `git symbolic-ref` to resolve HEAD before trying to parse .git/HEAD
directly, which is unreliable for reftable repos.

Bug: 476209856
Change-Id: I60185d945c5b43c871945c0126cfdf52194e745d
Reviewed-on: https://gerrit-review.googlesource.com/c/git-repo/+/550762
Commit-Queue: Gavin Mak <gavinmak@google.com>
Reviewed-by: Mike Frysinger <vapier@google.com>
Tested-by: Gavin Mak <gavinmak@google.com>
2026-03-17 14:30:20 -07:00
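The approach above can be sketched with a hypothetical helper (not repo's actual code) that shells out to `git symbolic-ref` instead of reading `.git/HEAD`, which is not a plain text file under the reftable backend:

```python
import subprocess


def resolve_head(gitdir):
    """Return the ref HEAD points at, e.g. "refs/heads/main".

    Asking git directly works for both files and reftable backends,
    including the unborn-branch case right after init.
    """
    result = subprocess.run(
        ["git", "--git-dir", gitdir, "symbolic-ref", "-q", "HEAD"],
        capture_output=True,
        encoding="utf-8",
    )
    if result.returncode == 0:
        return result.stdout.strip()
    return None  # detached HEAD or error
```
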
Gavin Mak
403fedfeb5 project: support reftable anchors in worktree .git migration
The reftable backend creates real refs/ and reftable/ dirs. Update
_MigrateOldWorkTreeGitDir to expect these dirs and remove them.

Bug: 476209856
Change-Id: I4700da70cb466e25ecbc51ba4de9a906b8716bd8
Reviewed-on: https://gerrit-review.googlesource.com/c/git-repo/+/550761
Reviewed-by: Mike Frysinger <vapier@google.com>
Tested-by: Gavin Mak <gavinmak@google.com>
Commit-Queue: Gavin Mak <gavinmak@google.com>
2026-03-17 14:30:01 -07:00
Gavin Mak
f14c577fce project: avoid direct packed-refs writes during fetch
Replace raw file manipulation with native `git update-ref` commands
inside a try/finally block to ensure temp refs are created/cleaned up
regardless of storage format.

Bug: 476209856
Change-Id: I228e81d3d3b323328260f6672075193421c8dc47
Reviewed-on: https://gerrit-review.googlesource.com/c/git-repo/+/550421
Tested-by: Gavin Mak <gavinmak@google.com>
Commit-Queue: Gavin Mak <gavinmak@google.com>
Reviewed-by: Mike Frysinger <vapier@google.com>
2026-03-17 14:29:38 -07:00
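The try/finally pattern described above can be sketched like this (an illustrative context manager, not repo's actual implementation); `git update-ref` works regardless of whether refs live in loose files, packed-refs, or reftable:

```python
import contextlib
import subprocess


@contextlib.contextmanager
def temp_ref(gitdir, ref, sha):
    """Create a temporary ref via git plumbing and always clean it up."""
    git = ["git", "--git-dir", gitdir]
    subprocess.run(git + ["update-ref", ref, sha], check=True)
    try:
        yield ref
    finally:
        # `-d` deletes the ref no matter which storage format is in use.
        subprocess.run(git + ["update-ref", "-d", ref], check=False)
```
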
Gavin Mak
67881c0c3b git_refs: read refs via git plumbing for files/reftable
Replace direct `packed-refs` file parsing with `git for-each-ref`
plumbing to support both `files` and `reftable` backends.

Bug: 476209856
Change-Id: I2ad8ff8f3382426600f15370c997f9bc17165485
Reviewed-on: https://gerrit-review.googlesource.com/c/git-repo/+/550401
Commit-Queue: Gavin Mak <gavinmak@google.com>
Reviewed-by: Mike Frysinger <vapier@google.com>
Tested-by: Gavin Mak <gavinmak@google.com>
2026-03-17 14:28:39 -07:00
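A minimal sketch of this technique; the NUL-separated `--format` string mirrors the one visible in the git_refs.py diff below, but the function itself is illustrative only:

```python
import subprocess


def read_refs(gitdir):
    """Enumerate refs with `git for-each-ref` plumbing.

    Works identically for the `files` and `reftable` backends, unlike
    parsing the `packed-refs` file directly. Returns (physical, symbolic)
    ref dicts.
    """
    fmt = "%(objectname)%00%(refname)%00%(symref)"
    out = subprocess.run(
        ["git", "--git-dir", gitdir, "for-each-ref", f"--format={fmt}"],
        capture_output=True,
        encoding="utf-8",
        check=True,
    ).stdout
    phyref, symref = {}, {}
    for line in out.splitlines():
        obj, name, sym = line.split("\0")
        if sym:
            symref[name] = sym  # symbolic ref points at another ref name
        elif obj:
            phyref[name] = obj
    return phyref, symref
```
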
Mike Frysinger
551087cd98 tests: add a util module for sharing code
We've started duplicating code among test modules.  Start a common
utils module to hold that, and migrate over TempGitTree to start.

Change-Id: I10b2abd133535c90fbda4d6686602d7e5861d875
Reviewed-on: https://gerrit-review.googlesource.com/c/git-repo/+/559041
Tested-by: Mike Frysinger <vapier@google.com>
Commit-Queue: Mike Frysinger <vapier@google.com>
Reviewed-by: Gavin Mak <gavinmak@google.com>
2026-03-12 20:09:19 -07:00
Mike Frysinger
8da56a0cc5 man: refresh after recent changes
Change-Id: Ibd60f89406e89255b3284413442b1d9c0ccbfb6d
Reviewed-on: https://gerrit-review.googlesource.com/c/git-repo/+/559601
Commit-Queue: Mike Frysinger <vapier@google.com>
Tested-by: Mike Frysinger <vapier@google.com>
Reviewed-by: Mike Frysinger <vapier@google.com>
Reviewed-by: Jeffery Miller <jefferymiller@google.com>
2026-03-10 07:35:40 -07:00
Jeffery Miller
0f01cd24e9 docs: Document support for child elements in extend-project
Clarify the existence and behavior of child elements when added to
extend-project.

Change-Id: Id9f270166c8498d4051495b9a1f68360f66e9143
Reviewed-on: https://gerrit-review.googlesource.com/c/git-repo/+/553742
Reviewed-by: Mike Frysinger <vapier@google.com>
Tested-by: Jeffery Miller <jefferymiller@google.com>
Commit-Queue: Jeffery Miller <jefferymiller@google.com>
2026-02-19 08:38:56 -08:00
Jeffery Miller
1ee98667cc tests: Add extend-project test for additional annotations
Multiple annotations can exist for the same name when
extending projects. Add a test case to show this behavior.

Change-Id: I12bbd25e642c7e615e32f66a1c364a39ac81902c
Reviewed-on: https://gerrit-review.googlesource.com/c/git-repo/+/553906
Reviewed-by: Mike Frysinger <vapier@google.com>
Commit-Queue: Jeffery Miller <jefferymiller@google.com>
Tested-by: Jeffery Miller <jefferymiller@google.com>
2026-02-19 08:35:03 -08:00
Jordan Esh
6f9622fe1c sync: Remove dependency on ssh if not needed
When running on a machine without the `ssh` command, repo sync would fail even if no ssh or ssh proxy was required. Use exception handling inside ssh.ProxyManager to more gracefully handle the case where ssh is not installed.

Bug: 467714011
Change-Id: I602a0819638ead4d02de88b750839bc3d70549ce
Reviewed-on: https://gerrit-review.googlesource.com/c/git-repo/+/535141
Reviewed-by: Mike Frysinger <vapier@google.com>
Reviewed-by: Gavin Mak <gavinmak@google.com>
Tested-by: Jordan Esh <esh.jordan@gmail.com>
Commit-Queue: Jordan Esh <esh.jordan@gmail.com>
2026-02-11 17:00:07 -08:00
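The graceful-degradation idea can be sketched as below; `ssh_version` is a hypothetical probe, not repo's ssh.ProxyManager code:

```python
import subprocess


def ssh_version():
    """Probe for the ssh client; return None instead of raising if absent.

    Callers can then skip ssh proxying entirely rather than failing a
    sync that never needed ssh in the first place.
    """
    try:
        proc = subprocess.run(
            ["ssh", "-V"], capture_output=True, encoding="utf-8"
        )
    except FileNotFoundError:
        return None  # no ssh on PATH; disable the ssh proxy
    # OpenSSH prints its version banner on stderr.
    return (proc.stderr or proc.stdout).strip() or None
```
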
Gavin Mak
5cb0251248 gc: fix untargeted projects being deleted
`delete_unused_projects` needs a full list of active projects to figure
out which orphaned .git dirs need to be deleted. Otherwise it thinks
that only the projects specified in args are active.

Bug: 447626164
Change-Id: I02beebf6a01c77742a8db78221452d71cd78ea73
Reviewed-on: https://gerrit-review.googlesource.com/c/git-repo/+/550061
Tested-by: Gavin Mak <gavinmak@google.com>
Reviewed-by: Mike Frysinger <vapier@google.com>
Commit-Queue: Gavin Mak <gavinmak@google.com>
2026-02-09 08:34:42 -08:00
Gavin Mak
a214fd31bd manifest: Introduce sync-j-max attribute to cap sync jobs
Add a way for manifest owners to limit how many sync jobs run in
parallel.

Bug: 481100878
Change-Id: Ia6cbe02cbc83c9e414b53b8d14fe5e7e1b802505
Reviewed-on: https://gerrit-review.googlesource.com/c/git-repo/+/548963
Reviewed-by: Mike Frysinger <vapier@google.com>
Tested-by: Gavin Mak <gavinmak@google.com>
Commit-Queue: Gavin Mak <gavinmak@google.com>
2026-02-06 13:32:28 -08:00
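Matching the DTD addition later in this diff (`<!ATTLIST default sync-j-max CDATA #IMPLIED>`), the cap sits on the `<default>` element. A hedged illustration (remote and revision values are placeholders):

```xml
<manifest>
  <remote name="origin" fetch=".." />
  <default remote="origin" revision="main" sync-j="16" sync-j-max="8" />
</manifest>
```

With a cap like this, a user running `repo sync -j32` would presumably be limited to 8 parallel jobs.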
Sam Saccone
62cd0de6cf Make git trace2 logging errors conditional on verbose mode.
Add a verbose attribute to the EventLog class, defaulting to False.
Error messages printed to sys.stderr within the EventLog.Write method
are now guarded by this verbose flag. In main.py, set EventLog.verbose
to True if the command-line --verbose option is used. This prevents
trace2 logging failures from being printed to stderr unless verbose
output is explicitly requested.

PROMPT=convert all git trace2 logging print messages to verbose only
logging

BUG: b/479811034
Change-Id: I8757ee52117d766f2f3ec47856db64cc4f51143c
Reviewed-on: https://gerrit-review.googlesource.com/c/git-repo/+/547542
Tested-by: Sam Saccone <samccone@google.com>
Reviewed-by: Julia Tuttle <juliatuttle@google.com>
Reviewed-by: Gavin Mak <gavinmak@google.com>
2026-02-03 11:53:48 -08:00
Mike Frysinger
b60512a75a run_tests: log tool versions
Change-Id: I4eee58786bae6d442773c63fa937fb11eda1e2f0
Reviewed-on: https://gerrit-review.googlesource.com/c/git-repo/+/547863
Commit-Queue: Mike Frysinger <vapier@google.com>
Reviewed-by: Gavin Mak <gavinmak@google.com>
Tested-by: Mike Frysinger <vapier@google.com>
2026-01-29 12:32:46 -08:00
Mike Frysinger
5d88972390 Revert "init: change --manifest-depth default to 1"
This reverts commit 622a5bf9c2.

CrOS infra is failing to sync code now for some reason.
Until we can investigate further, pull this back out.

Bug: 475668525
Bug: 468033850
Change-Id: I35a8623a95336df1be27ea870afbfc8065609f01
Reviewed-on: https://gerrit-review.googlesource.com/c/git-repo/+/545141
Reviewed-by: Gavin Mak <gavinmak@google.com>
Tested-by: Mike Frysinger <vapier@google.com>
Commit-Queue: Mike Frysinger <vapier@google.com>
2026-01-21 12:32:59 -08:00
Gavin Mak
3c0e67bbc5 manifest_xml: prevent extend-project from inheriting local groups
When extending a project in a local manifest, the project inherits the
`local:` group. This causes the superproject override logic (which omits
projects with `local:` groups) to incorrectly exclude the project from
the override manifest. This leads to "extend-project element specifies
non-existent project" errors during sync reload.

Fix this by stripping `local:` groups from extended projects, ensuring
they remain visible to superproject overrides while still allowing other
inherited groups to persist.

Bug: 470374343
Change-Id: I1a057ebffebc11a19dc14dde7cc13b9f18cdd0a3
Reviewed-on: https://gerrit-review.googlesource.com/c/git-repo/+/543222
Reviewed-by: Peter Kjellerstedt <peter.kjellerstedt@axis.com>
Reviewed-by: Mike Frysinger <vapier@google.com>
Tested-by: Gavin Mak <gavinmak@google.com>
Commit-Queue: Gavin Mak <gavinmak@google.com>
2026-01-15 10:27:32 -08:00
Nico Wald
3b7b20ac1d CONTRIBUTING: fix HTTP password URL
Change-Id: I7ae085896fe951c2b1c662689fa111a0661f988d
Reviewed-on: https://gerrit-review.googlesource.com/c/git-repo/+/539762
Tested-by: Nico Wald <nicowald@mac.com>
Reviewed-by: Mike Frysinger <vapier@google.com>
Reviewed-by: Gavin Mak <gavinmak@google.com>
Commit-Queue: Nico Wald <nicowald@mac.com>
2026-01-12 09:48:47 -08:00
Gavin Mak
e71a8c6dd8 project: disable auto-gc for depth=1 in git config
During sync, `git checkout` can trigger fetch for missing objects in
partial clones. This internal fetch can trigger `git maintenance` or
`git gc` and cause delays during the local checkout phase. Set
maintenance.auto to false and gc.auto to 0 during `_InitRemote` if
`depth=1` to ensure that implicit fetches spawned by git skip GC.

Bug: 379111283
Change-Id: I6b22a4867f29b6e9598746cb752820a84dc2aeb6
Reviewed-on: https://gerrit-review.googlesource.com/c/git-repo/+/540681
Reviewed-by: Mike Frysinger <vapier@google.com>
Tested-by: Gavin Mak <gavinmak@google.com>
Commit-Queue: Gavin Mak <gavinmak@google.com>
2026-01-08 11:33:40 -08:00
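The config change described above amounts to two `git config` writes; a minimal sketch (hypothetical helper name):

```python
import subprocess


def disable_auto_gc(gitdir):
    """Stop implicit fetches in a depth=1 clone from triggering GC.

    `git checkout` in a partial clone can fetch missing objects, and
    that internal fetch may kick off `git maintenance` / `git gc`;
    these two settings make it skip that work.
    """
    for key, value in (("maintenance.auto", "false"), ("gc.auto", "0")):
        subprocess.run(
            ["git", "--git-dir", gitdir, "config", key, value], check=True
        )
```
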
Mike Frysinger
c687b5df9e run_tests/release: require Python 3.9+
While we support running `repo` on clients with older Python versions,
we don't need to hold the runners & release code back.  These are only
used by repo devs on their systems to develop & release repo.

Python 3.9 was picked due to its typing changes, which we've already
started using in this code.

Change-Id: I6f8885c84298760514c25abeb1fccb0338947bf4
Reviewed-on: https://gerrit-review.googlesource.com/c/git-repo/+/539801
Commit-Queue: Mike Frysinger <vapier@google.com>
Reviewed-by: Gavin Mak <gavinmak@google.com>
Tested-by: Mike Frysinger <vapier@google.com>
2026-01-06 11:36:26 -08:00
Mike Frysinger
1dd9c57a28 tests: drop tox support
This hasn't been working out as well as we'd hoped.  Tox relies on
the system having multiple Python versions installed, which distros
don't tend to carry anymore.  Our custom run_tests leverages vpython
when possible to run stable Python 3.8 & 3.11 versions, which
provides an OK level of coverage in practice.

Change-Id: Ida517f7be47ca95703e43bc0af5a24dd70c0467e
Reviewed-on: https://gerrit-review.googlesource.com/c/git-repo/+/540001
Tested-by: Mike Frysinger <vapier@google.com>
Reviewed-by: Gavin Mak <gavinmak@google.com>
Commit-Queue: Mike Frysinger <vapier@google.com>
2026-01-06 11:32:42 -08:00
Mike Frysinger
4525c2e0ad github: add black check action
Change-Id: Ic87c1c5c72fb8a01108146c1f9d78466acb57278
Reviewed-on: https://gerrit-review.googlesource.com/c/git-repo/+/540021
Tested-by: Mike Frysinger <vapier@google.com>
Commit-Queue: Mike Frysinger <vapier@google.com>
Reviewed-by: Gavin Mak <gavinmak@google.com>
2026-01-06 11:00:32 -08:00
Mike Frysinger
45dcd738b7 tests: skip AF_UNIX tests when unavailable
UNIX sockets aren't available under Windows, so skip the test.

Change-Id: Ic4ca22d161c6dee628352aad07ac6aaceb472ac2
Reviewed-on: https://gerrit-review.googlesource.com/c/git-repo/+/540002
Reviewed-by: Gavin Mak <gavinmak@google.com>
Commit-Queue: Mike Frysinger <vapier@google.com>
Tested-by: Mike Frysinger <vapier@google.com>
2026-01-06 10:17:53 -08:00
Mike Frysinger
1dad86dc00 check-metadata: skip files that do not exist
If the files don't exist, then they can't have errors, so skip checking.

Change-Id: I3ed4be4912b253c5454df41d690cb33dfe191289
Reviewed-on: https://gerrit-review.googlesource.com/c/git-repo/+/540003
Reviewed-by: Gavin Mak <gavinmak@google.com>
Tested-by: Mike Frysinger <vapier@google.com>
Commit-Queue: Mike Frysinger <vapier@google.com>
2026-01-06 10:17:32 -08:00
Mike Frysinger
622a5bf9c2 init: change --manifest-depth default to 1
Most users do not care about the manifest history in .repo/manifests/.
Let's change the default to 1 so things work smoothly for most people
most of the time.  For the rare folks who want the full history, they
can add --manifest-depth=0 to their `repo init`.

This has no effect on existing checkouts.

Spot checking Android & CrOS manifests shows significant speedups.
Full history can take O(10's seconds) to O(minutes) while depth of 1
takes constant time of O(~5 seconds).

Bug: 468033850
Change-Id: I4b8ed62a8a636babcc5226552badb69600d0c353
Reviewed-on: https://gerrit-review.googlesource.com/c/git-repo/+/535481
Tested-by: Mike Frysinger <vapier@google.com>
Commit-Queue: Mike Frysinger <vapier@google.com>
Reviewed-by: Gavin Mak <gavinmak@google.com>
2026-01-05 06:36:08 -08:00
51 changed files with 1923 additions and 1073 deletions

.github/workflows/black.yml vendored Normal file
View File

@@ -0,0 +1,16 @@
# GitHub actions workflow.
# https://help.github.com/en/actions/automating-your-workflow-with-github-actions/workflow-syntax-for-github-actions
# https://black.readthedocs.io/en/stable/integrations/github_actions.html
name: Format
on:
  push:
    branches: [main]
jobs:
  format:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
      - uses: psf/black@stable

View File

@@ -27,6 +27,6 @@ jobs:
- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip install tox tox-gh-actions
- name: Test with tox
run: tox
python -m pip install pytest
- name: Run tests
run: python -m pytest

View File

@@ -1,41 +0,0 @@
# Copyright (C) 2023 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Config file for the isort python module.
# This is used to enforce import sorting standards.
#
# https://pycqa.github.io/isort/docs/configuration/options.html
[settings]
# Be compatible with `black` since it also matches what we want.
profile = black
line_length = 80
length_sort = false
force_single_line = true
lines_after_imports = 2
from_first = false
case_sensitive = false
force_sort_within_sections = true
order_by_type = false
# Ignore generated files.
extend_skip_glob = *_pb2.py
# Allow importing multiple classes on a single line from these modules.
# https://google.github.io/styleguide/pyguide#s2.2-imports
single_line_exclusions =
abc,
collections.abc,
typing,

View File

@@ -5,6 +5,5 @@
<pydev_pathproperty name="org.python.pydev.PROJECT_SOURCE_PATH">
<path>/git-repo</path>
</pydev_pathproperty>
<pydev_property name="org.python.pydev.PYTHON_PROJECT_VERSION">python 2.7</pydev_property>
<pydev_property name="org.python.pydev.PYTHON_PROJECT_INTERPRETER">Default</pydev_property>
</pydev_project>

View File

@@ -43,17 +43,12 @@ probably need to split up your commit to finer grained pieces.
Lint any changes by running:
```sh
$ tox -e lint -- file.py
$ flake8
```
And format with:
```sh
$ tox -e format -- file.py
```
Or format everything:
```sh
$ tox -e format
$ black file.py
```
Repo uses [black](https://black.readthedocs.io/) with line length of 80 as its
@@ -73,15 +68,11 @@ the entire project in the included `.flake8` file.
[PEP 8]: https://www.python.org/dev/peps/pep-0008/
[flake8 documentation]: https://flake8.pycqa.org/en/3.1.1/user/ignoring-errors.html#in-line-ignoring-errors
## Running tests
We use [pytest](https://pytest.org/) and [tox](https://tox.readthedocs.io/) for
running tests. You should make sure to install those first.
To run the full suite against all supported Python versions, simply execute:
```sh
$ tox -p auto
```
We use [pytest](https://pytest.org/) for running tests. You should make sure to
install that first.
We have [`./run_tests`](./run_tests) which is a simple wrapper around `pytest`:
```sh
@@ -143,7 +134,7 @@ they have right to redistribute your work under the Apache License:
Ensure you have obtained an HTTP password to authenticate:
https://gerrit-review.googlesource.com/new-password
https://www.googlesource.com/new-password
Ensure that you have the local commit hook installed to automatically
add a ChangeId to your commits:

View File

@@ -51,6 +51,7 @@ following DTD:
<!ATTLIST default dest-branch CDATA #IMPLIED>
<!ATTLIST default upstream CDATA #IMPLIED>
<!ATTLIST default sync-j CDATA #IMPLIED>
<!ATTLIST default sync-j-max CDATA #IMPLIED>
<!ATTLIST default sync-c CDATA #IMPLIED>
<!ATTLIST default sync-s CDATA #IMPLIED>
<!ATTLIST default sync-tags CDATA #IMPLIED>
@@ -213,7 +214,9 @@ can be found. Used when syncing a revision locked manifest in
-c mode to avoid having to sync the entire ref space. Project elements
not setting their own `upstream` will inherit this value.
Attribute `sync-j`: Number of parallel jobs to use when synching.
Attribute `sync-j`: Number of parallel jobs to use when syncing.
Attribute `sync-j-max`: Maximum number of parallel jobs to use when syncing.
Attribute `sync-c`: Set to true to only sync the given Git
branch (specified in the `revision` attribute) rather than the
@@ -395,6 +398,11 @@ attributes of an existing project without completely replacing the
existing project definition. This makes the local manifest more robust
against changes to the original manifest.
The `extend-project` element can also contain `annotation`, `copyfile`, and
`linkfile` child elements. These are added to the project's definition. A
`copyfile` or `linkfile` with a `dest` that already exists in the project
will overwrite the original.
Attribute `path`: If specified, limit the change to projects checked out
at the specified path, rather than all projects with the given name.

View File

@@ -50,8 +50,11 @@ Git worktrees (see the previous section for more info).
Repo will use symlinks heavily internally.
On *NIX platforms, this isn't an issue, but Windows makes it a bit difficult.
There are some documents out there for how to do this, but usually the easiest
answer is to run your shell as an Administrator and invoke repo/git in that.
The easiest method to allow users to create symlinks is by enabling
[Windows Developer Mode](https://learn.microsoft.com/en-us/windows/advanced-settings/developer-mode).
The next easiest answer is to run your shell as an Administrator and invoke
repo/git in that.
This isn't a great solution, but Windows doesn't make this easy, so here we are.

View File

@@ -22,7 +22,6 @@ from typing import Any, Optional
from error import GitError
from error import RepoExitError
from git_refs import HEAD
from git_trace2_event_log_base import BaseEventLog
import platform_utils
from repo_logging import RepoLogger
@@ -83,7 +82,7 @@ def RepoSourceVersion():
proj = os.path.dirname(os.path.abspath(__file__))
env[GIT_DIR] = os.path.join(proj, ".git")
result = subprocess.run(
[GIT, "describe", HEAD],
[GIT, "describe", "HEAD"],
stdout=subprocess.PIPE,
stderr=subprocess.DEVNULL,
encoding="utf-8",

View File

@@ -222,6 +222,12 @@ class GitConfig:
            value = "true" if value else "false"
        self.SetString(name, value)

    def SetInt(self, name: str, value: int) -> None:
        """Set an integer value for a key."""
        if value is not None:
            value = str(value)
        self.SetString(name, value)

    def GetString(self, name: str, all_keys: bool = False) -> Union[str, None]:
        """Get the first value for a key, or None if it is not defined.

View File

@@ -14,6 +14,7 @@
import os
from git_command import GitCommand
import platform_utils
from repo_trace import Trace
@@ -86,9 +87,8 @@ class GitRefs:
self._symref = {}
self._mtime = {}
self._ReadPackedRefs()
self._ReadLoose("refs/")
self._ReadLoose1(os.path.join(self._gitdir, HEAD), HEAD)
self._ReadRefs()
self._ReadSymbolicRef(HEAD)
scan = self._symref
attempts = 0
@@ -102,64 +102,95 @@ class GitRefs:
scan = scan_next
attempts += 1
def _ReadPackedRefs(self):
path = os.path.join(self._gitdir, "packed-refs")
self._TrackMtime(HEAD)
self._TrackMtime("config")
self._TrackMtime("packed-refs")
self._TrackTreeMtimes("refs")
self._TrackTreeMtimes("reftable")
@staticmethod
def _IsNullRef(ref_id: str) -> bool:
"""Check if a ref_id is a null object ID."""
return ref_id and all(ch == "0" for ch in ref_id)
def _ReadRefs(self) -> None:
"""Read all references using git for-each-ref."""
p = GitCommand(
None,
["for-each-ref", "--format=%(objectname)%00%(refname)%00%(symref)"],
capture_stdout=True,
capture_stderr=True,
bare=True,
gitdir=self._gitdir,
)
if p.Wait() != 0:
return
for line in p.stdout.splitlines():
ref_id, name, symref = line.split("\0")
if symref:
self._symref[name] = symref
elif ref_id and not self._IsNullRef(ref_id):
self._phyref[name] = ref_id
def _ReadSymbolicRef(self, name: str) -> None:
"""Read a symbolic reference."""
p = GitCommand(
None,
["symbolic-ref", "-q", name],
capture_stdout=True,
capture_stderr=True,
bare=True,
gitdir=self._gitdir,
)
if p.Wait() == 0:
ref = p.stdout.strip()
if ref:
self._symref[name] = ref
return
p = GitCommand(
None,
["rev-parse", "--verify", "-q", name],
capture_stdout=True,
capture_stderr=True,
bare=True,
gitdir=self._gitdir,
)
if p.Wait() == 0:
ref_id = p.stdout.strip()
if ref_id:
self._phyref[name] = ref_id
def _TrackMtime(self, name: str) -> None:
"""Track the modification time of a specific gitdir path."""
path = os.path.join(self._gitdir, name)
try:
fd = open(path)
mtime = os.path.getmtime(path)
self._mtime[name] = os.path.getmtime(path)
except OSError:
return
def _TrackTreeMtimes(self, root: str) -> None:
"""Recursively track modification times for a directory tree."""
root_path = os.path.join(self._gitdir, root)
try:
for line in fd:
line = str(line)
if line[0] == "#":
continue
if line[0] == "^":
continue
line = line[:-1]
p = line.split(" ")
ref_id = p[0]
name = p[1]
self._phyref[name] = ref_id
finally:
fd.close()
self._mtime["packed-refs"] = mtime
def _ReadLoose(self, prefix):
base = os.path.join(self._gitdir, prefix)
for name in platform_utils.listdir(base):
p = os.path.join(base, name)
# We don't implement the full ref validation algorithm, just the
# simple rules that would show up in local filesystems.
# https://git-scm.com/docs/git-check-ref-format
if name.startswith(".") or name.endswith(".lock"):
pass
elif platform_utils.isdir(p):
self._mtime[prefix] = os.path.getmtime(base)
self._ReadLoose(prefix + name + "/")
else:
self._ReadLoose1(p, prefix + name)
def _ReadLoose1(self, path, name):
try:
with open(path) as fd:
mtime = os.path.getmtime(path)
ref_id = fd.readline()
except (OSError, UnicodeError):
if not platform_utils.isdir(root_path):
return
except OSError:
return
try:
ref_id = ref_id.decode()
except AttributeError:
pass
if not ref_id:
return
ref_id = ref_id[:-1]
to_scan = [root]
while to_scan:
name = to_scan.pop()
self._TrackMtime(name)
path = os.path.join(self._gitdir, name)
if not platform_utils.isdir(path):
continue
if ref_id.startswith("ref: "):
self._symref[name] = ref_id[5:]
else:
self._phyref[name] = ref_id
self._mtime[name] = mtime
for child in platform_utils.listdir(path):
child_name = os.path.join(name, child)
child_path = os.path.join(self._gitdir, child_name)
if platform_utils.isdir(child_path):
to_scan.append(child_name)
else:
self._TrackMtime(child_name)

View File

@@ -23,9 +23,11 @@ Examples:
"""
import functools
import glob
import hashlib
import os
import sys
import tempfile
import time
from typing import NamedTuple
import urllib.parse
@@ -34,6 +36,7 @@ from git_command import git_require
from git_command import GitCommand
from git_config import RepoConfig
from git_refs import GitRefs
import platform_utils
_SUPERPROJECT_GIT_NAME = "superproject.git"
@@ -215,30 +218,63 @@ class Superproject:
"""
if not os.path.exists(self._superproject_path):
os.mkdir(self._superproject_path)
if not self._quiet and not os.path.exists(self._work_git):
if os.path.exists(self._work_git):
return True
if not self._quiet:
print(
"%s: Performing initial setup for superproject; this might "
"take several minutes." % self._work_git
)
cmd = ["init", "--bare", self._work_git_name]
p = GitCommand(
None,
cmd,
cwd=self._superproject_path,
capture_stdout=True,
capture_stderr=True,
tmp_gitdir_prefix = ".tmp-superproject-initgitdir-"
tmp_gitdir = tempfile.mkdtemp(
prefix=tmp_gitdir_prefix,
dir=self._superproject_path,
)
retval = p.Wait()
if retval:
self._LogWarning(
"git init call failed, command: git {}, "
"return code: {}, stderr: {}",
tmp_git_name = os.path.basename(tmp_gitdir)
try:
cmd = ["init", "--bare", tmp_git_name]
p = GitCommand(
None,
cmd,
retval,
p.stderr,
cwd=self._superproject_path,
capture_stdout=True,
capture_stderr=True,
)
return False
return True
retval = p.Wait()
if retval:
self._LogWarning(
"git init call failed, command: git {}, "
"return code: {}, stderr: {}",
cmd,
retval,
p.stderr,
)
return False
platform_utils.rename(tmp_gitdir, self._work_git)
tmp_gitdir = None
return True
finally:
# Clean up the temporary directory created during the process,
# as well as any stale ones left over from previous attempts.
if tmp_gitdir and os.path.exists(tmp_gitdir):
platform_utils.rmtree(tmp_gitdir)
age_threshold = 60 * 60 * 24 # 1 day in seconds
now = time.time()
for tmp_dir in glob.glob(
os.path.join(self._superproject_path, f"{tmp_gitdir_prefix}*")
):
try:
mtime = os.path.getmtime(tmp_dir)
if now - mtime > age_threshold:
platform_utils.rmtree(tmp_dir)
except OSError:
pass
def _Fetch(self):
"""Fetches a superproject for the manifest based on |_remote_url|.

View File

@@ -68,6 +68,7 @@ class BaseEventLog:
global p_init_count
p_init_count += 1
self._log = []
self.verbose = False
# Try to get session-id (sid) from environment (setup in repo launcher).
KEY = "GIT_TRACE2_PARENT_SID"
if env is None:
@@ -309,10 +310,12 @@ class BaseEventLog:
# ignore the attempt and continue to DGRAM below. Otherwise,
# issue a warning.
if err.errno != errno.EPROTOTYPE:
print(
f"repo: warning: git trace2 logging failed: {err}",
file=sys.stderr,
)
if self.verbose:
print(
"repo: warning: git trace2 logging failed:",
f"{err}",
file=sys.stderr,
)
return None
if socket_type == socket.SOCK_DGRAM or socket_type is None:
try:
@@ -322,18 +325,20 @@ class BaseEventLog:
self._WriteLog(lambda bs: sock.sendto(bs, path))
return f"af_unix:dgram:{path}"
except OSError as err:
print(
f"repo: warning: git trace2 logging failed: {err}",
file=sys.stderr,
)
if self.verbose:
print(
f"repo: warning: git trace2 logging failed: {err}",
file=sys.stderr,
)
return None
# Tried to open a socket but couldn't connect (SOCK_STREAM) or write
# (SOCK_DGRAM).
print(
"repo: warning: git trace2 logging failed: could not write to "
"socket",
file=sys.stderr,
)
if self.verbose:
print(
"repo: warning: git trace2 logging failed: could not "
"write to socket",
file=sys.stderr,
)
return None
# Path is an absolute path
@@ -348,9 +353,10 @@ class BaseEventLog:
self._WriteLog(f.write)
log_path = f.name
except FileExistsError as err:
print(
"repo: warning: git trace2 logging failed: %r" % err,
file=sys.stderr,
)
if self.verbose:
print(
"repo: warning: git trace2 logging failed: %r" % err,
file=sys.stderr,
)
return None
return log_path

View File

@@ -337,6 +337,9 @@ class _Repo:
)
return 1
cmd.CommonValidateOptions(copts, cargs)
git_trace2_event_log.verbose = copts.verbose
if gopts.pager is not False and not isinstance(cmd, InteractiveCommand):
config = cmd.client.globalConfig
if gopts.pager:
@@ -359,7 +362,6 @@ class _Repo:
Execute the subcommand.
"""
nonlocal result
cmd.CommonValidateOptions(copts, cargs)
cmd.ValidateOptions(copts, cargs)
this_manifest_only = copts.this_manifest_only

View File

@@ -1,5 +1,5 @@
.\" DO NOT MODIFY THIS FILE! It was generated by help2man.
.TH REPO "1" "December 2025" "repo manifest" "Repo Manual"
.TH REPO "1" "March 2026" "repo manifest" "Repo Manual"
.SH NAME
repo \- repo manifest - manual page for repo manifest
.SH SYNOPSIS
@@ -131,6 +131,7 @@ include*)>
<!ATTLIST default dest\-branch CDATA #IMPLIED>
<!ATTLIST default upstream CDATA #IMPLIED>
<!ATTLIST default sync\-j CDATA #IMPLIED>
<!ATTLIST default sync\-j\-max CDATA #IMPLIED>
<!ATTLIST default sync\-c CDATA #IMPLIED>
<!ATTLIST default sync\-s CDATA #IMPLIED>
<!ATTLIST default sync\-tags CDATA #IMPLIED>
@@ -309,7 +310,9 @@ when syncing a revision locked manifest in \fB\-c\fR mode to avoid having to syn
entire ref space. Project elements not setting their own `upstream` will inherit
this value.
.PP
Attribute `sync\-j`: Number of parallel jobs to use when synching.
Attribute `sync\-j`: Number of parallel jobs to use when syncing.
.PP
Attribute `sync\-j\-max`: Maximum number of parallel jobs to use when syncing.
.PP
Attribute `sync\-c`: Set to true to only sync the given Git branch (specified in
the `revision` attribute) rather than the whole ref space. Project elements
@@ -475,6 +478,11 @@ of an existing project without completely replacing the existing project
definition. This makes the local manifest more robust against changes to the
original manifest.
.PP
The `extend\-project` element can also contain `annotation`, `copyfile`, and
`linkfile` child elements. These are added to the project's definition. A
`copyfile` or `linkfile` with a `dest` that already exists in the project will
overwrite the original.
.PP
Attribute `path`: If specified, limit the change to projects checked out at the
specified path, rather than all projects with the given name.
.PP


@@ -155,6 +155,7 @@ class _Default:
upstreamExpr = None
remote = None
sync_j = None
sync_j_max = None
sync_c = False
sync_s = False
sync_tags = True
@@ -631,6 +632,9 @@ https://gerrit.googlesource.com/git-repo/+/HEAD/docs/manifest-format.md
if d.sync_j is not None:
have_default = True
e.setAttribute("sync-j", "%d" % d.sync_j)
if d.sync_j_max is not None:
have_default = True
e.setAttribute("sync-j-max", "%d" % d.sync_j_max)
if d.sync_c:
have_default = True
e.setAttribute("sync-c", "true")
@@ -1487,6 +1491,14 @@ https://gerrit.googlesource.com/git-repo/+/HEAD/docs/manifest-format.md
continue
if groups:
p.groups |= groups
# Drop local groups so we don't mistakenly omit this
# project from the superproject override manifest.
p.groups = {
g
for g in p.groups
if not g.startswith(LOCAL_MANIFEST_GROUP_PREFIX)
}
if revision:
if base_revision:
if p.revisionExpr != base_revision:
@@ -1755,6 +1767,13 @@ https://gerrit.googlesource.com/git-repo/+/HEAD/docs/manifest-format.md
% (self.manifestFile, d.sync_j)
)
d.sync_j_max = XmlInt(node, "sync-j-max", None)
if d.sync_j_max is not None and d.sync_j_max <= 0:
raise ManifestParseError(
'%s: sync-j-max must be greater than 0, not "%s"'
% (self.manifestFile, d.sync_j_max)
)
d.sync_c = XmlBool(node, "sync-c", False)
d.sync_s = XmlBool(node, "sync-s", False)
d.sync_tags = XmlBool(node, "sync-tags", True)
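The new `sync-j-max` attribute is parsed with the same pattern as `sync-j`: an optional integer attribute that must be strictly positive. A self-contained sketch of that parse-and-validate step (`xml_int` is an illustrative stand-in; in the real parser `XmlInt` does the conversion and the caller enforces positivity):

```python
import xml.dom.minidom


def xml_int(node, attr, default=None):
    """Parse an optional integer attribute, rejecting non-positive
    values the way the manifest parser rejects sync-j-max <= 0."""
    value = node.getAttribute(attr)
    if not value:
        return default
    try:
        parsed = int(value)
    except ValueError:
        raise ValueError(f'invalid {attr}: "{value}"')
    if parsed <= 0:
        raise ValueError(f'{attr} must be greater than 0, not "{parsed}"')
    return parsed


doc = xml.dom.minidom.parseString('<default sync-j-max="8"/>')
node = doc.documentElement
```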


@@ -28,7 +28,7 @@ import sys
import tarfile
import tempfile
import time
from typing import List, NamedTuple
from typing import List, NamedTuple, Optional
import urllib.parse
from color import Coloring
@@ -1245,6 +1245,7 @@ class Project:
verbose=False,
output_redir=None,
is_new=None,
use_superproject=None,
current_branch_only=None,
force_sync=False,
clone_bundle=True,
@@ -1390,6 +1391,15 @@ class Project:
else:
depth = self.manifest.manifestProject.depth
# If the project has been manually unshallowed (e.g. via
# `git fetch --unshallow`), don't re-shallow it during sync.
if (
depth
and not is_new
and not os.path.exists(os.path.join(self.gitdir, "shallow"))
):
depth = None
if depth and clone_filter_for_depth:
depth = None
clone_filter = clone_filter_for_depth
@@ -1399,7 +1409,9 @@ class Project:
if not (
optimized_fetch
and IsId(self.revisionExpr)
and self._CheckForImmutableRevision()
and self._CheckForImmutableRevision(
use_superproject=use_superproject
)
):
remote_fetched = True
try:
@@ -1409,6 +1421,7 @@ class Project:
verbose=verbose,
output_redir=output_redir,
alt_dir=alt_dir,
use_superproject=use_superproject,
current_branch_only=current_branch_only,
tags=tags,
prune=prune,
@@ -2397,7 +2410,9 @@ class Project:
return None
def _CheckForImmutableRevision(self):
def _CheckForImmutableRevision(
self, use_superproject: Optional[bool] = None
) -> bool:
try:
# if revision (sha or tag) is not present then following function
# throws an error.
@@ -2405,7 +2420,9 @@ class Project:
upstream_rev = None
# Only check upstream when using superproject.
if self.upstream and self.manifest.manifestProject.use_superproject:
if self.upstream and git_superproject.UseSuperproject(
use_superproject, self.manifest
):
upstream_rev = self.GetRemote().ToLocal(self.upstream)
revs.append(upstream_rev)
@@ -2419,7 +2436,9 @@ class Project:
# Only verify upstream relationship for superproject scenarios
# without affecting plain usage.
if self.upstream and self.manifest.manifestProject.use_superproject:
if self.upstream and git_superproject.UseSuperproject(
use_superproject, self.manifest
):
self.bare_git.merge_base(
"--is-ancestor",
self.revisionExpr,
@@ -2450,6 +2469,7 @@ class Project:
def _RemoteFetch(
self,
name=None,
use_superproject=None,
current_branch_only=False,
initial=False,
quiet=False,
@@ -2489,7 +2509,9 @@ class Project:
tag_name = self.upstream[len(R_TAGS) :]
if is_sha1 or tag_name is not None:
if self._CheckForImmutableRevision():
if self._CheckForImmutableRevision(
use_superproject=use_superproject
):
if verbose:
print(
"Skipped fetching project %s (already have "
@@ -2516,18 +2538,20 @@ class Project:
if not remote.PreConnectFetch(ssh_proxy):
ssh_proxy = None
alt_tmp_refs = []
if initial:
if alt_dir and "objects" == os.path.basename(alt_dir):
ref_dir = os.path.dirname(alt_dir)
packed_refs = os.path.join(self.gitdir, "packed-refs")
all_refs = self.bare_ref.all
ids = set(all_refs.values())
tmp = set()
update_ref_cmds = []
for r, ref_id in GitRefs(ref_dir).all.items():
if r not in all_refs:
if r.startswith(R_TAGS) or remote.WritesTo(r):
update_ref_cmds.append(f"create {r} {ref_id}\n")
all_refs[r] = ref_id
ids.add(ref_id)
continue
@@ -2536,22 +2560,18 @@ class Project:
continue
r = "refs/_alt/%s" % ref_id
update_ref_cmds.append(f"create {r} {ref_id}\n")
all_refs[r] = ref_id
ids.add(ref_id)
tmp.add(r)
alt_tmp_refs.append(r)
tmp_packed_lines = []
old_packed_lines = []
for r in sorted(all_refs):
line = f"{all_refs[r]} {r}\n"
tmp_packed_lines.append(line)
if r not in tmp:
old_packed_lines.append(line)
tmp_packed = "".join(tmp_packed_lines)
old_packed = "".join(old_packed_lines)
_lwrite(packed_refs, tmp_packed)
if update_ref_cmds:
GitCommand(
self,
["update-ref", "--no-deref", "--stdin"],
bare=True,
input="".join(update_ref_cmds),
).Wait()
else:
alt_dir = None
@@ -2652,153 +2672,168 @@ class Project:
retry_fetches = max(retry_fetches, 2)
retry_cur_sleep = retry_sleep_initial_sec
ok = prune_tried = False
for try_n in range(retry_fetches):
verify_command = try_n == retry_fetches - 1
gitcmd = GitCommand(
self,
cmd,
bare=True,
objdir=os.path.join(self.objdir, "objects"),
ssh_proxy=ssh_proxy,
merge_output=True,
capture_stdout=quiet or bool(output_redir),
verify_command=verify_command,
)
if gitcmd.stdout and not quiet and output_redir:
output_redir.write(gitcmd.stdout)
ret = gitcmd.Wait()
if ret == 0:
ok = True
break
# Retry later due to HTTP 429 Too Many Requests.
elif (
gitcmd.stdout
and "error:" in gitcmd.stdout
and "HTTP 429" in gitcmd.stdout
):
# Fallthru to sleep+retry logic at the bottom.
pass
# TODO(b/360889369#comment24): git may gc commits incorrectly.
# Until the root cause is fixed, retry fetch with --refetch which
# will bring the repository into a good state.
elif gitcmd.stdout and (
"could not parse commit" in gitcmd.stdout
or "unable to parse commit" in gitcmd.stdout
):
cmd.insert(1, "--refetch")
print(
"could not parse commit error, retrying with refetch",
file=output_redir,
)
continue
# Try to prune remote branches once in case there are conflicts.
# For example, if the remote had refs/heads/upstream, but deleted
# that and now has refs/heads/upstream/foo.
elif (
gitcmd.stdout
and "error:" in gitcmd.stdout
and "git remote prune" in gitcmd.stdout
and not prune_tried
):
prune_tried = True
prunecmd = GitCommand(
try:
for try_n in range(retry_fetches):
verify_command = try_n == retry_fetches - 1
gitcmd = GitCommand(
self,
["remote", "prune", name],
cmd,
bare=True,
objdir=os.path.join(self.objdir, "objects"),
ssh_proxy=ssh_proxy,
merge_output=True,
capture_stdout=quiet or bool(output_redir),
verify_command=verify_command,
)
ret = prunecmd.Wait()
if ret:
if gitcmd.stdout and not quiet and output_redir:
output_redir.write(gitcmd.stdout)
ret = gitcmd.Wait()
if ret == 0:
ok = True
break
print(
"retrying fetch after pruning remote branches",
file=output_redir,
)
# Continue right away so we don't sleep as we shouldn't need to.
continue
elif (
ret == 128
and gitcmd.stdout
and "fatal: could not read Username" in gitcmd.stdout
):
# User needs to be authenticated, and Git wants to prompt for
# username and password.
print(
"git requires authentication, but repo cannot perform "
"interactive authentication. Check git credentials.",
file=output_redir,
)
break
elif (
ret == 128
and gitcmd.stdout
and "remote helper 'sso' aborted session" in gitcmd.stdout
):
# User needs to be authenticated, and Git wants to prompt for
# username and password.
print(
"git requires authentication, but repo cannot perform "
"interactive authentication.",
file=output_redir,
)
raise GitAuthError(gitcmd.stdout)
break
elif current_branch_only and is_sha1 and ret == 128:
# Exit code 128 means "couldn't find the ref you asked for"; if
# we're in sha1 mode, we just tried sync'ing from the upstream
# field; it doesn't exist, thus abort the optimization attempt
# and do a full sync.
break
elif depth and is_sha1 and ret == 1:
# In sha1 mode, when depth is enabled, syncing the revision
# from upstream may not work because some servers only allow
# fetching named refs. Fetching a specific sha1 may result
# in an error like 'server does not allow request for
# unadvertised object'. In this case, attempt a full sync
# without depth.
break
elif ret < 0:
# Git died with a signal, exit immediately.
break
# Figure out how long to sleep before the next attempt, if there is
# one.
if not verbose and gitcmd.stdout:
print(
f"\n{self.name}:\n{gitcmd.stdout}",
end="",
file=output_redir,
)
if try_n < retry_fetches - 1:
print(
"%s: sleeping %s seconds before retrying"
% (self.name, retry_cur_sleep),
file=output_redir,
)
time.sleep(retry_cur_sleep)
retry_cur_sleep = min(
retry_exp_factor * retry_cur_sleep, MAXIMUM_RETRY_SLEEP_SEC
)
retry_cur_sleep *= 1 - random.uniform(
-RETRY_JITTER_PERCENT, RETRY_JITTER_PERCENT
)
# Retry later due to HTTP 429 Too Many Requests.
elif (
gitcmd.stdout
and "error:" in gitcmd.stdout
and "HTTP 429" in gitcmd.stdout
):
# Fallthru to sleep+retry logic at the bottom.
pass
if initial:
if alt_dir:
if old_packed != "":
_lwrite(packed_refs, old_packed)
else:
platform_utils.remove(packed_refs)
self.bare_git.pack_refs("--all", "--prune")
# TODO(b/360889369#comment24): git may gc commits incorrectly.
# Until the root cause is fixed, retry fetch with --refetch
# which will bring the repository into a good state.
elif gitcmd.stdout and (
"could not parse commit" in gitcmd.stdout
or "unable to parse commit" in gitcmd.stdout
):
cmd.insert(1, "--refetch")
print(
"could not parse commit error, retrying with refetch",
file=output_redir,
)
continue
# Try to prune remote branches once in case there are conflicts.
# For example, if the remote had refs/heads/upstream, but
# deleted that and now has refs/heads/upstream/foo.
elif (
gitcmd.stdout
and "error:" in gitcmd.stdout
and "git remote prune" in gitcmd.stdout
and not prune_tried
):
prune_tried = True
prunecmd = GitCommand(
self,
["remote", "prune", name],
bare=True,
ssh_proxy=ssh_proxy,
)
ret = prunecmd.Wait()
if ret:
break
print(
"retrying fetch after pruning remote branches",
file=output_redir,
)
# Continue right away so we don't sleep as we shouldn't
# need to.
continue
elif (
ret == 128
and gitcmd.stdout
and "fatal: could not read Username" in gitcmd.stdout
):
# User needs to be authenticated, and Git wants to prompt
# for username and password.
print(
"git requires authentication, but repo cannot perform "
"interactive authentication. Check git credentials.",
file=output_redir,
)
break
elif (
ret == 128
and gitcmd.stdout
and "remote helper 'sso' aborted session" in gitcmd.stdout
):
# User needs to be authenticated, and Git wants to prompt
# for username and password.
print(
"git requires authentication, but repo cannot perform "
"interactive authentication.",
file=output_redir,
)
raise GitAuthError(gitcmd.stdout)
break
elif current_branch_only and is_sha1 and ret == 128:
# Exit code 128 means "couldn't find the ref you asked for";
# if we're in sha1 mode, we just tried sync'ing from the
# upstream field; it doesn't exist, thus abort the
# optimization attempt and do a full sync.
break
elif depth and is_sha1 and ret == 1:
# In sha1 mode, when depth is enabled, syncing the revision
# from upstream may not work because some servers only allow
# fetching named refs. Fetching a specific sha1 may result
# in an error like 'server does not allow request for
# unadvertised object'. In this case, attempt a full sync
# without depth.
break
elif ret < 0:
# Git died with a signal, exit immediately.
break
# Figure out how long to sleep before the next attempt, if
# there is one.
if not verbose and gitcmd.stdout:
print(
f"\n{self.name}:\n{gitcmd.stdout}",
end="",
file=output_redir,
)
if try_n < retry_fetches - 1:
print(
"%s: sleeping %s seconds before retrying"
% (self.name, retry_cur_sleep),
file=output_redir,
)
time.sleep(retry_cur_sleep)
retry_cur_sleep = min(
retry_exp_factor * retry_cur_sleep,
MAXIMUM_RETRY_SLEEP_SEC,
)
retry_cur_sleep *= 1 - random.uniform(
-RETRY_JITTER_PERCENT, RETRY_JITTER_PERCENT
)
finally:
if initial:
if alt_tmp_refs:
delete_cmds = "".join(
f"delete {ref}\n" for ref in alt_tmp_refs
)
GitCommand(
self,
["update-ref", "--no-deref", "--stdin"],
bare=True,
input=delete_cmds,
log_as_error=False,
).Wait()
for ref in alt_tmp_refs:
self.bare_ref.deleted(ref)
self.bare_git.pack_refs("--all", "--prune")
if is_sha1 and current_branch_only:
# We just synced the upstream given branch; verify we
# got what we wanted, else trigger a second run of all
# refs.
if not self._CheckForImmutableRevision():
if not self._CheckForImmutableRevision(
use_superproject=use_superproject
):
# Sync the current branch only with depth set to None.
# We always pass depth=None down to avoid infinite recursion.
return self._RemoteFetch(
@@ -2806,6 +2841,7 @@ class Project:
quiet=quiet,
verbose=verbose,
output_redir=output_redir,
use_superproject=use_superproject,
current_branch_only=current_branch_only and depth,
initial=False,
alt_dir=alt_dir,
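The retry delay computed at the end of the fetch loop grows exponentially up to `MAXIMUM_RETRY_SLEEP_SEC` and is then jittered by ±`RETRY_JITTER_PERCENT` so parallel fetch workers don't retry in lockstep. A minimal sketch of that calculation (the constant values here are assumptions for illustration, not necessarily repo's actual settings):

```python
import random

# Illustrative values; the real constants live in project.py.
MAXIMUM_RETRY_SLEEP_SEC = 3600.0
RETRY_JITTER_PERCENT = 0.1


def next_retry_sleep(current: float, exp_factor: float = 2.0) -> float:
    """Next fetch retry delay: exponential growth capped at the
    maximum, then +/-10% jitter applied multiplicatively."""
    delay = min(exp_factor * current, MAXIMUM_RETRY_SLEEP_SEC)
    delay *= 1 - random.uniform(-RETRY_JITTER_PERCENT, RETRY_JITTER_PERCENT)
    return delay
```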
@@ -3316,6 +3352,15 @@ class Project:
remote.ResetFetch(mirror=True)
remote.Save()
# Disable auto-gc for depth=1 to prevent hangs during lazy fetches
# inside git checkout for partial clones.
effective_depth = (
self.clone_depth or self.manifest.manifestProject.depth
)
if effective_depth == 1:
self.config.SetBoolean("maintenance.auto", False)
self.config.SetInt("gc.auto", 0)
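The block above disables `maintenance.auto` and `gc.auto` only when the effective depth is 1, since a gc racing a lazy fetch inside `git checkout` can hang on partial clones. The precedence (per-project `clone-depth` overriding the manifest-wide depth) can be sketched as (function name is illustrative):

```python
def should_disable_auto_gc(clone_depth, manifest_depth) -> bool:
    """Auto-gc is turned off only for depth=1 clones; a project-level
    clone-depth takes precedence over the manifest-wide depth."""
    return (clone_depth or manifest_depth) == 1
```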
def _InitMRef(self):
"""Initialize the pseudo m/<manifest branch> ref."""
if self.manifest.branch:
@@ -3612,14 +3657,20 @@ class Project:
here. The path updates will happen independently.
"""
# Figure out where in .repo/projects/ it's pointing to.
if not os.path.islink(os.path.join(dotgit, "refs")):
gitdir = None
for name in ("refs", "reftable", "objects"):
path = os.path.join(dotgit, name)
if os.path.islink(path):
gitdir = os.path.dirname(os.path.realpath(path))
break
else:
raise GitError(
f"{dotgit}: unsupported checkout state", project=project
)
gitdir = os.path.dirname(os.path.realpath(os.path.join(dotgit, "refs")))
# Remove known symlink paths that exist in .repo/projects/.
KNOWN_LINKS = {
# go/keep-sorted start
"config",
"description",
"hooks",
@@ -3628,9 +3679,11 @@ class Project:
"objects",
"packed-refs",
"refs",
"reftable",
"rr-cache",
"shallow",
"svn",
# go/keep-sorted end
}
# Paths that we know will be in both, but are safe to clobber in
# .repo/projects/.
@@ -3655,7 +3708,16 @@ class Project:
dotgit_path = os.path.join(dotgit, name)
if name in KNOWN_LINKS:
if not platform_utils.islink(dotgit_path):
unknown_paths.append(f"{dotgit_path}: should be a symlink")
# In reftable format, refs and reftable can be directories.
if name in ("refs", "reftable") and platform_utils.isdir(
dotgit_path
):
pass
else:
unknown_paths.append(
f"{dotgit_path}: should be a symlink"
)
else:
gitdir_path = os.path.join(gitdir, name)
if name not in SAFE_TO_CLOBBER and os.path.exists(gitdir_path):
@@ -3677,7 +3739,14 @@ class Project:
if name.endswith("~") or (name[0] == "#" and name[-1] == "#"):
platform_utils.remove(dotgit_path)
elif name in KNOWN_LINKS:
platform_utils.remove(dotgit_path)
if (
name in ("refs", "reftable")
and platform_utils.isdir(dotgit_path)
and not platform_utils.islink(dotgit_path)
):
platform_utils.rmtree(dotgit_path)
else:
platform_utils.remove(dotgit_path)
else:
gitdir_path = os.path.join(gitdir, name)
platform_utils.remove(gitdir_path, missing_ok=True)
@@ -3901,6 +3970,24 @@ class Project:
return self.rev_parse(HEAD)
return symbolic_head
except GitError as e:
# `git rev-parse --symbolic-full-name HEAD` will fail for unborn
# branches, so try symbolic-ref before falling back to raw file
# parsing.
try:
p = GitCommand(
self._project,
["symbolic-ref", "-q", HEAD],
bare=True,
gitdir=self._gitdir,
capture_stdout=True,
capture_stderr=True,
log_as_error=False,
)
if p.Wait() == 0:
return p.stdout.rstrip("\n")
except GitError:
pass
logger.warning(
"project %s: unparseable HEAD; trying to recover.\n"
"Check that HEAD ref in .git/HEAD is valid. The error "
@@ -4765,6 +4852,7 @@ class ManifestProject(MetaProject):
quiet=not verbose,
verbose=verbose,
clone_bundle=clone_bundle,
use_superproject=use_superproject,
current_branch_only=current_branch_only,
tags=tags,
submodules=submodules,


@@ -14,9 +14,32 @@
[tool.black]
line-length = 80
# NB: Keep in sync with tox.ini.
target-version = ['py36', 'py37', 'py38', 'py39', 'py310', 'py311'] #, 'py312'
# Config file for the isort python module.
# This is used to enforce import sorting standards.
#
# https://pycqa.github.io/isort/docs/configuration/options.html
[tool.isort]
# Be compatible with `black` since it also matches what we want.
profile = 'black'
line_length = 80
length_sort = false
force_single_line = true
lines_after_imports = 2
from_first = false
case_sensitive = false
force_sort_within_sections = true
order_by_type = false
# Ignore generated files.
extend_skip_glob = '*_pb2.py'
# Allow importing multiple classes on a single line from these modules.
# https://google.github.io/styleguide/pyguide#s2.2-imports
single_line_exclusions = ['abc', 'collections.abc', 'typing']
[tool.pytest.ini_options]
markers = """
skip_cq: Skip tests in the CQ. Should be rarely used!


@@ -90,7 +90,10 @@ def check_license(path: Path, lines: list[str]) -> bool:
def check_path(opts: argparse.Namespace, path: Path) -> bool:
"""Check a single path."""
data = path.read_text(encoding="utf-8")
try:
data = path.read_text(encoding="utf-8")
except FileNotFoundError:
return True
lines = data.splitlines()
# NB: Use list comprehension and not a generator so we run all the checks.
return all(


@@ -27,6 +27,9 @@ import sys
import util
assert sys.version_info >= (3, 9), "Release framework requires Python 3.9+"
def sign(opts):
"""Sign the launcher!"""
output = ""


@@ -30,6 +30,9 @@ import sys
import util
assert sys.version_info >= (3, 9), "Release framework requires Python 3.9+"
# We currently sign with the old DSA key as it's been around the longest.
# We should transition to RSA by Jun 2020, and ECC by Jun 2021.
KEYID = util.KEYID_DSA


@@ -24,7 +24,7 @@ from typing import List, Optional
import urllib.request
assert sys.version_info >= (3, 8), "Python 3.8+ required"
assert sys.version_info >= (3, 9), "Release framework requires Python 3.9+"
TOPDIR = Path(__file__).resolve().parent.parent


@@ -30,6 +30,10 @@ import tempfile
from typing import List
# NB: This script is currently imported by tests/ to unittest some logic.
assert sys.version_info >= (3, 6), "Python 3.6+ required"
THIS_FILE = Path(__file__).resolve()
TOPDIR = THIS_FILE.parent.parent
MANDIR = TOPDIR.joinpath("man")


@@ -21,7 +21,11 @@ import shlex
import shutil
import subprocess
import sys
from typing import List
# NB: While tests/* support Python >=3.6 to match requirements.json for `repo`,
# the higher level runner logic does not need to be held back.
assert sys.version_info >= (3, 9), "Test/release framework requires Python 3.9+"
ROOT_DIR = os.path.dirname(os.path.realpath(__file__))
@@ -38,7 +42,7 @@ def is_ci() -> bool:
return os.getenv("LUCI_CQ") == "yes"
def run_pytest(argv: List[str]) -> int:
def run_pytest(argv: list[str]) -> int:
"""Returns the exit code from pytest."""
if is_ci():
argv = ["-m", "not skip_cq"] + argv
@@ -51,7 +55,7 @@ def run_pytest(argv: List[str]) -> int:
).returncode
def run_pytest_py38(argv: List[str]) -> int:
def run_pytest_py38(argv: list[str]) -> int:
"""Returns the exit code from pytest under Python 3.8."""
if is_ci():
argv = ["-m", "not skip_cq"] + argv
@@ -77,6 +81,14 @@ def run_pytest_py38(argv: List[str]) -> int:
def run_black():
"""Returns the exit code from black."""
argv = ["--version"]
log_cmd("black", argv)
subprocess.run(
[sys.executable, "-m", "black"] + argv,
check=True,
cwd=ROOT_DIR,
)
# Black by default only matches .py files. We have to list standalone
# scripts manually.
extra_programs = [
@@ -96,6 +108,14 @@ def run_black():
def run_flake8():
"""Returns the exit code from flake8."""
argv = ["--version"]
log_cmd("flake8", argv)
subprocess.run(
[sys.executable, "-m", "flake8"] + argv,
check=True,
cwd=ROOT_DIR,
)
argv = [ROOT_DIR]
log_cmd("flake8", argv)
return subprocess.run(
@@ -107,6 +127,14 @@ def run_flake8():
def run_isort():
"""Returns the exit code from isort."""
argv = ["--version-number"]
log_cmd("isort", argv)
subprocess.run(
[sys.executable, "-m", "isort"] + argv,
check=True,
cwd=ROOT_DIR,
)
argv = ["--check", ROOT_DIR]
log_cmd("isort", argv)
return subprocess.run(

ssh.py

@@ -52,12 +52,12 @@ def _parse_ssh_version(ver_str=None):
@functools.lru_cache(maxsize=None)
def version():
"""return ssh version as a tuple"""
"""Return ssh version as a tuple.
If ssh is not available, a FileNotFoundError will be raised.
"""
try:
return _parse_ssh_version()
except FileNotFoundError:
print("fatal: ssh not installed", file=sys.stderr)
sys.exit(1)
except subprocess.CalledProcessError as e:
print(
"fatal: unable to detect ssh version"
@@ -102,9 +102,18 @@ class ProxyManager:
self._clients = manager.list()
# Path to directory for holding master sockets.
self._sock_path = None
# See if ssh is usable.
self._ssh_installed = False
def __enter__(self):
"""Enter a new context."""
# Check which version of ssh is available.
try:
version()
self._ssh_installed = True
except FileNotFoundError:
self._ssh_installed = False
return self
def __exit__(self, exc_type, exc_value, traceback):
@@ -282,6 +291,9 @@ class ProxyManager:
def preconnect(self, url):
"""If |uri| will create a ssh connection, setup the ssh master for it.""" # noqa: E501
if not self._ssh_installed:
return False
m = URI_ALL.match(url)
if m:
scheme = m.group(1)
@@ -306,6 +318,9 @@ class ProxyManager:
This has all the master sockets so clients can talk to them.
"""
if not self._ssh_installed:
return None
if self._sock_path is None:
if not create:
return None
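`version()` is wrapped in `functools.lru_cache`, and the `ProxyManager` change probes it once in `__enter__`, letting a missing ssh binary short-circuit `preconnect()` and `sock()`. A standalone sketch of that probe — note that `lru_cache` does not cache a raised exception, so a failed probe is simply retried on the next call (helper names are illustrative):

```python
import functools
import subprocess


@functools.lru_cache(maxsize=None)
def ssh_version_string() -> str:
    """Run `ssh -V` once; raises FileNotFoundError when ssh is absent.
    OpenSSH prints its version banner to stderr and exits 0."""
    return subprocess.run(
        ["ssh", "-V"], capture_output=True, text=True, check=True
    ).stderr.strip()


def ssh_installed() -> bool:
    """Probe ssh availability once, as ProxyManager.__enter__ does."""
    try:
        ssh_version_string()
        return True
    except FileNotFoundError:
        return False
```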


@@ -284,7 +284,16 @@ class Gc(Command):
args, all_manifests=not opt.this_manifest_only
)
ret = self.delete_unused_projects(projects, opt)
# If the user specified projects, fetch the global list separately
# to avoid deleting untargeted projects.
if args:
all_projects = self.GetProjects(
[], all_manifests=not opt.this_manifest_only
)
else:
all_projects = projects
ret = self.delete_unused_projects(all_projects, opt)
if ret != 0:
return ret


@@ -808,6 +808,7 @@ later is required to fix a server side protocol bug.
quiet=opt.quiet,
verbose=opt.verbose,
output_redir=buf,
use_superproject=opt.use_superproject,
current_branch_only=cls._GetCurrentBranchOnly(
opt, project.manifest
),
@@ -1499,6 +1500,7 @@ later is required to fix a server side protocol bug.
quiet=opt.quiet,
verbose=opt.verbose,
output_redir=buf,
use_superproject=opt.use_superproject,
current_branch_only=self._GetCurrentBranchOnly(
opt, manifest
),
@@ -1830,6 +1832,7 @@ later is required to fix a server side protocol bug.
quiet=not opt.verbose,
output_redir=buf,
verbose=opt.verbose,
use_superproject=opt.use_superproject,
current_branch_only=self._GetCurrentBranchOnly(
opt, mp.manifest
),
@@ -1940,15 +1943,33 @@ later is required to fix a server side protocol bug.
opt.jobs_network = min(opt.jobs_network, jobs_soft_limit)
opt.jobs_checkout = min(opt.jobs_checkout, jobs_soft_limit)
# Warn once if effective job counts seem excessively high.
sync_j_max = mp.manifest.default.sync_j_max or None
# Check for shared options.
# Prioritize --jobs, then --jobs-network, then --jobs-checkout.
job_options_to_check = (
("--jobs", opt.jobs),
("--jobs-network", opt.jobs_network),
("--jobs-checkout", opt.jobs_checkout),
job_attributes = (
("--jobs", "jobs"),
("--jobs-network", "jobs_network"),
("--jobs-checkout", "jobs_checkout"),
)
for name, value in job_options_to_check:
if value > self._JOBS_WARN_THRESHOLD:
warned = False
limit_warned = False
for name, attr in job_attributes:
value = getattr(opt, attr)
if sync_j_max and value > sync_j_max:
if not limit_warned:
logger.warning(
"warning: manifest limits %s to %d",
name,
sync_j_max,
)
limit_warned = True
setattr(opt, attr, sync_j_max)
value = sync_j_max
if not warned and value > self._JOBS_WARN_THRESHOLD:
logger.warning(
"High job count (%d > %d) specified for %s; this may "
"lead to excessive resource usage or diminishing returns.",
@@ -1956,7 +1977,7 @@ later is required to fix a server side protocol bug.
self._JOBS_WARN_THRESHOLD,
name,
)
break
warned = True
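The option handling above warns once and clamps each of `--jobs`, `--jobs-network`, and `--jobs-checkout` to the manifest's `sync-j-max` when one is set. The clamping itself reduces to (helper name is illustrative):

```python
def clamp_jobs(jobs: int, sync_j_max) -> int:
    """Clamp a user-requested job count to the manifest's sync-j-max;
    a falsy limit (None/0) means no manifest-imposed maximum."""
    if sync_j_max and jobs > sync_j_max:
        return sync_j_max
    return jobs
```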
def Execute(self, opt, args):
errors = []
@@ -2338,6 +2359,7 @@ later is required to fix a server side protocol bug.
quiet=opt.quiet,
verbose=opt.verbose,
output_redir=network_output_capture,
use_superproject=opt.use_superproject,
current_branch_only=cls._GetCurrentBranchOnly(
opt, project.manifest
),


@@ -27,6 +27,7 @@ from error import SilentRepoExitError
from error import UploadError
from git_command import GitCommand
from git_refs import R_HEADS
import git_superproject
from hooks import RepoHook
from project import ReviewableBranch
from repo_logging import RepoLogger
@@ -627,7 +628,7 @@ Gerrit Code Review: https://www.gerritcodereview.com/
# If using superproject, add the root repo as a push option.
manifest = branch.project.manifest
push_options = list(opt.push_options)
if manifest.manifestProject.use_superproject:
if git_superproject.UseSuperproject(None, manifest):
sp = manifest.superproject
if sp:
r_id = sp.repo_id

tests/README.md (new file)

@@ -0,0 +1,20 @@
# Repo Tests
There is a mixture of [pytest] & [Python unittest] in here. We adopted [pytest]
later on but didn't migrate existing tests (since they still work). New tests
should be written using [pytest] only.
## File layout
* `test_xxx.py`: Unittests for the `xxx` module in the main repo codebase.
Modules that are in subdirs normalize the `/` into `_`.
For example, [test_error.py](./test_error.py) is for the
[error.py](../error.py) module, and
[test_subcmds_forall.py](./test_subcmds_forall.py) is for the
[subcmds/forall.py](../subcmds/forall.py) module.
* [conftest.py](./conftest.py): Custom pytest fixtures for sharing.
* [utils_for_test.py](./utils_for_test.py): Helpers for sharing in tests.
[pytest]: https://pytest.org/
[Python unittest]: https://docs.python.org/3/library/unittest.html#unittest.TestCase


@@ -15,60 +15,66 @@
"""Unittests for the color.py module."""
import os
import unittest
import pytest
import color
import git_config
def fixture(*paths):
def fixture(*paths: str) -> str:
"""Return a path relative to test/fixtures."""
return os.path.join(os.path.dirname(__file__), "fixtures", *paths)
class ColoringTests(unittest.TestCase):
"""tests of the Coloring class."""
@pytest.fixture
def coloring() -> color.Coloring:
"""Create a Coloring object for testing."""
config_fixture = fixture("test.gitconfig")
config = git_config.GitConfig(config_fixture)
color.SetDefaultColoring("true")
return color.Coloring(config, "status")
def setUp(self):
"""Create a GitConfig object using the test.gitconfig fixture."""
config_fixture = fixture("test.gitconfig")
self.config = git_config.GitConfig(config_fixture)
color.SetDefaultColoring("true")
self.color = color.Coloring(self.config, "status")
def test_Color_Parse_all_params_none(self):
"""all params are None"""
val = self.color._parse(None, None, None, None)
self.assertEqual("", val)
def test_Color_Parse_all_params_none(coloring: color.Coloring) -> None:
"""all params are None"""
val = coloring._parse(None, None, None, None)
assert val == ""
def test_Color_Parse_first_parameter_none(self):
"""check fg & bg & attr"""
val = self.color._parse(None, "black", "red", "ul")
self.assertEqual("\x1b[4;30;41m", val)
def test_Color_Parse_one_entry(self):
"""check fg"""
val = self.color._parse("one", None, None, None)
self.assertEqual("\033[33m", val)
def test_Color_Parse_first_parameter_none(coloring: color.Coloring) -> None:
"""check fg & bg & attr"""
val = coloring._parse(None, "black", "red", "ul")
assert val == "\x1b[4;30;41m"
def test_Color_Parse_two_entry(self):
"""check fg & bg"""
val = self.color._parse("two", None, None, None)
self.assertEqual("\033[35;46m", val)
def test_Color_Parse_three_entry(self):
"""check fg & bg & attr"""
val = self.color._parse("three", None, None, None)
self.assertEqual("\033[4;30;41m", val)
def test_Color_Parse_one_entry(coloring: color.Coloring) -> None:
"""check fg"""
val = coloring._parse("one", None, None, None)
assert val == "\033[33m"
def test_Color_Parse_reset_entry(self):
"""check reset entry"""
val = self.color._parse("reset", None, None, None)
self.assertEqual("\033[m", val)
def test_Color_Parse_empty_entry(self):
"""check empty entry"""
val = self.color._parse("none", "blue", "white", "dim")
self.assertEqual("\033[2;34;47m", val)
val = self.color._parse("empty", "green", "white", "bold")
self.assertEqual("\033[1;32;47m", val)
def test_Color_Parse_two_entry(coloring: color.Coloring) -> None:
"""check fg & bg"""
val = coloring._parse("two", None, None, None)
assert val == "\033[35;46m"
def test_Color_Parse_three_entry(coloring: color.Coloring) -> None:
"""check fg & bg & attr"""
val = coloring._parse("three", None, None, None)
assert val == "\033[4;30;41m"
def test_Color_Parse_reset_entry(coloring: color.Coloring) -> None:
"""check reset entry"""
val = coloring._parse("reset", None, None, None)
assert val == "\033[m"
def test_Color_Parse_empty_entry(coloring: color.Coloring) -> None:
"""check empty entry"""
val = coloring._parse("none", "blue", "white", "dim")
assert val == "\033[2;34;47m"
val = coloring._parse("empty", "green", "white", "bold")
assert val == "\033[1;32;47m"


@@ -14,43 +14,32 @@
"""Unittests for the editor.py module."""
import unittest
import pytest
from editor import Editor
class EditorTestCase(unittest.TestCase):
@pytest.fixture(autouse=True)
def reset_editor() -> None:
"""Take care of resetting Editor state across tests."""
def setUp(self):
self.setEditor(None)
def tearDown(self):
self.setEditor(None)
@staticmethod
def setEditor(editor):
Editor._editor = editor
Editor._editor = None
yield
Editor._editor = None
class GetEditor(EditorTestCase):
"""Check GetEditor behavior."""
def test_basic(self):
"""Basic checking of _GetEditor."""
self.setEditor(":")
self.assertEqual(":", Editor._GetEditor())
def test_basic() -> None:
"""Basic checking of _GetEditor."""
Editor._editor = ":"
assert Editor._GetEditor() == ":"
class EditString(EditorTestCase):
"""Check EditString behavior."""
def test_no_editor() -> None:
"""Check behavior when no editor is available."""
Editor._editor = ":"
assert Editor.EditString("foo") == "foo"
def test_no_editor(self):
"""Check behavior when no editor is available."""
self.setEditor(":")
self.assertEqual("foo", Editor.EditString("foo"))
def test_cat_editor(self):
"""Check behavior when editor is `cat`."""
self.setEditor("cat")
self.assertEqual("foo", Editor.EditString("foo"))
def test_cat_editor() -> None:
"""Check behavior when editor is `cat`."""
Editor._editor = "cat"
assert Editor.EditString("foo") == "foo"


@@ -16,7 +16,9 @@
import inspect
import pickle
import unittest
from typing import Iterator, Type
import pytest
import command
import error
@@ -26,7 +28,7 @@ import project
from subcmds import all_modules
imports = all_modules + [
_IMPORTS = all_modules + [
error,
project,
git_command,
@@ -35,36 +37,35 @@ imports = all_modules + [
]
class PickleTests(unittest.TestCase):
"""Make sure all our custom exceptions can be pickled."""
def get_exceptions() -> Iterator[Type[Exception]]:
"""Return all our custom exceptions."""
for entry in _IMPORTS:
for name in dir(entry):
cls = getattr(entry, name)
if isinstance(cls, type) and issubclass(cls, Exception):
yield cls
def getExceptions(self):
"""Return all our custom exceptions."""
for entry in imports:
for name in dir(entry):
cls = getattr(entry, name)
if isinstance(cls, type) and issubclass(cls, Exception):
yield cls
def testExceptionLookup(self):
"""Make sure our introspection logic works."""
classes = list(self.getExceptions())
self.assertIn(error.HookError, classes)
# Don't assert the exact number to avoid being a change-detector test.
self.assertGreater(len(classes), 10)
def test_exception_lookup() -> None:
"""Make sure our introspection logic works."""
classes = list(get_exceptions())
assert error.HookError in classes
# Don't assert the exact number to avoid being a change-detector test.
assert len(classes) > 10
def testPickle(self):
"""Try to pickle all the exceptions."""
for cls in self.getExceptions():
args = inspect.getfullargspec(cls.__init__).args[1:]
obj = cls(*args)
p = pickle.dumps(obj)
try:
newobj = pickle.loads(p)
except Exception as e: # pylint: disable=broad-except
self.fail(
"Class %s is unable to be pickled: %s\n"
"Incomplete super().__init__(...) call?" % (cls, e)
)
self.assertIsInstance(newobj, cls)
self.assertEqual(str(obj), str(newobj))
@pytest.mark.parametrize("cls", get_exceptions())
def test_pickle(cls: Type[Exception]) -> None:
"""Try to pickle all the exceptions."""
args = inspect.getfullargspec(cls.__init__).args[1:]
obj = cls(*args)
p = pickle.dumps(obj)
try:
newobj = pickle.loads(p)
except Exception as e:
pytest.fail(
f"Class {cls} is unable to be pickled: {e}\n"
"Incomplete super().__init__(...) call?"
)
assert isinstance(newobj, cls)
assert str(obj) == str(newobj)
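The parametrized pickle test above depends on each custom exception forwarding its constructor arguments via `super().__init__(...)`, since that is what lets the default pickle machinery rebuild the object. A minimal standalone sketch of the same round-trip, using a hypothetical `HookError` class rather than repo's own:

```python
import inspect
import pickle


class HookError(Exception):
    """Example custom exception with an extra constructor arg."""

    def __init__(self, reason):
        # Without this super().__init__ call, pickle cannot reconstruct
        # the object from its args and loads() would fail.
        super().__init__(reason)
        self.reason = reason


def roundtrip(cls):
    # Mirror the test: derive placeholder args from the constructor
    # signature, build an instance, and pickle it through bytes.
    args = inspect.getfullargspec(cls.__init__).args[1:]
    obj = cls(*args)
    return pickle.loads(pickle.dumps(obj))


new = roundtrip(HookError)
assert isinstance(new, HookError)
assert str(new) == "reason"
```

If the `super().__init__(reason)` call is dropped, `pickle.loads` raises, which is exactly the failure mode the test's "Incomplete super().__init__(...) call?" message points at.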


@@ -15,176 +15,219 @@
"""Unittests for the git_config.py module."""
import os
import tempfile
import unittest
from pathlib import Path
from typing import Any
import pytest
import git_config
def fixture(*paths):
def fixture_path(*paths: str) -> str:
"""Return a path relative to test/fixtures."""
return os.path.join(os.path.dirname(__file__), "fixtures", *paths)
class GitConfigReadOnlyTests(unittest.TestCase):
"""Read-only tests of the GitConfig class."""
def setUp(self):
"""Create a GitConfig object using the test.gitconfig fixture."""
config_fixture = fixture("test.gitconfig")
self.config = git_config.GitConfig(config_fixture)
def test_GetString_with_empty_config_values(self):
"""
Test config entries with no value.
[section]
empty
"""
val = self.config.GetString("section.empty")
self.assertEqual(val, None)
def test_GetString_with_true_value(self):
"""
Test config entries with a string value.
[section]
nonempty = true
"""
val = self.config.GetString("section.nonempty")
self.assertEqual(val, "true")
def test_GetString_from_missing_file(self):
"""
Test missing config file
"""
config_fixture = fixture("not.present.gitconfig")
config = git_config.GitConfig(config_fixture)
val = config.GetString("empty")
self.assertEqual(val, None)
def test_GetBoolean_undefined(self):
"""Test GetBoolean on key that doesn't exist."""
self.assertIsNone(self.config.GetBoolean("section.missing"))
def test_GetBoolean_invalid(self):
"""Test GetBoolean on invalid boolean value."""
self.assertIsNone(self.config.GetBoolean("section.boolinvalid"))
def test_GetBoolean_true(self):
"""Test GetBoolean on valid true boolean."""
self.assertTrue(self.config.GetBoolean("section.booltrue"))
def test_GetBoolean_false(self):
"""Test GetBoolean on valid false boolean."""
self.assertFalse(self.config.GetBoolean("section.boolfalse"))
def test_GetInt_undefined(self):
"""Test GetInt on key that doesn't exist."""
self.assertIsNone(self.config.GetInt("section.missing"))
def test_GetInt_invalid(self):
"""Test GetInt on invalid integer value."""
self.assertIsNone(self.config.GetBoolean("section.intinvalid"))
def test_GetInt_valid(self):
"""Test GetInt on valid integers."""
TESTS = (
("inthex", 16),
("inthexk", 16384),
("int", 10),
("intk", 10240),
("intm", 10485760),
("intg", 10737418240),
)
for key, value in TESTS:
self.assertEqual(value, self.config.GetInt(f"section.{key}"))
@pytest.fixture
def readonly_config() -> git_config.GitConfig:
"""Create a GitConfig object using the test.gitconfig fixture."""
config_fixture = fixture_path("test.gitconfig")
return git_config.GitConfig(config_fixture)
class GitConfigReadWriteTests(unittest.TestCase):
"""Read/write tests of the GitConfig class."""
def test_get_string_with_empty_config_values(
readonly_config: git_config.GitConfig,
) -> None:
"""Test config entries with no value.
def setUp(self):
self.tmpfile = tempfile.NamedTemporaryFile()
self.config = self.get_config()
[section]
empty
def get_config(self):
"""Get a new GitConfig instance."""
return git_config.GitConfig(self.tmpfile.name)
"""
val = readonly_config.GetString("section.empty")
assert val is None
def test_SetString(self):
"""Test SetString behavior."""
# Set a value.
self.assertIsNone(self.config.GetString("foo.bar"))
self.config.SetString("foo.bar", "val")
self.assertEqual("val", self.config.GetString("foo.bar"))
# Make sure the value was actually written out.
config = self.get_config()
self.assertEqual("val", config.GetString("foo.bar"))
def test_get_string_with_true_value(
readonly_config: git_config.GitConfig,
) -> None:
"""Test config entries with a string value.
# Update the value.
self.config.SetString("foo.bar", "valll")
self.assertEqual("valll", self.config.GetString("foo.bar"))
config = self.get_config()
self.assertEqual("valll", config.GetString("foo.bar"))
[section]
nonempty = true
# Delete the value.
self.config.SetString("foo.bar", None)
self.assertIsNone(self.config.GetString("foo.bar"))
config = self.get_config()
self.assertIsNone(config.GetString("foo.bar"))
"""
val = readonly_config.GetString("section.nonempty")
assert val == "true"
def test_SetBoolean(self):
"""Test SetBoolean behavior."""
# Set a true value.
self.assertIsNone(self.config.GetBoolean("foo.bar"))
for val in (True, 1):
self.config.SetBoolean("foo.bar", val)
self.assertTrue(self.config.GetBoolean("foo.bar"))
# Make sure the value was actually written out.
config = self.get_config()
self.assertTrue(config.GetBoolean("foo.bar"))
self.assertEqual("true", config.GetString("foo.bar"))
def test_get_string_from_missing_file() -> None:
"""Test missing config file."""
config_fixture = fixture_path("not.present.gitconfig")
config = git_config.GitConfig(config_fixture)
val = config.GetString("empty")
assert val is None
# Set a false value.
for val in (False, 0):
self.config.SetBoolean("foo.bar", val)
self.assertFalse(self.config.GetBoolean("foo.bar"))
# Make sure the value was actually written out.
config = self.get_config()
self.assertFalse(config.GetBoolean("foo.bar"))
self.assertEqual("false", config.GetString("foo.bar"))
def test_get_boolean_undefined(readonly_config: git_config.GitConfig) -> None:
"""Test GetBoolean on key that doesn't exist."""
assert readonly_config.GetBoolean("section.missing") is None
# Delete the value.
self.config.SetBoolean("foo.bar", None)
self.assertIsNone(self.config.GetBoolean("foo.bar"))
config = self.get_config()
self.assertIsNone(config.GetBoolean("foo.bar"))
def test_GetSyncAnalysisStateData(self):
"""Test config entries with a sync state analysis data."""
superproject_logging_data = {}
superproject_logging_data["test"] = False
options = type("options", (object,), {})()
options.verbose = "true"
options.mp_update = "false"
TESTS = (
("superproject.test", "false"),
("options.verbose", "true"),
("options.mpupdate", "false"),
("main.version", "1"),
)
self.config.UpdateSyncAnalysisState(options, superproject_logging_data)
sync_data = self.config.GetSyncAnalysisStateData()
for key, value in TESTS:
self.assertEqual(
sync_data[f"{git_config.SYNC_STATE_PREFIX}{key}"], value
)
self.assertTrue(
sync_data[f"{git_config.SYNC_STATE_PREFIX}main.synctime"]
)
def test_get_boolean_invalid(readonly_config: git_config.GitConfig) -> None:
"""Test GetBoolean on invalid boolean value."""
assert readonly_config.GetBoolean("section.boolinvalid") is None
def test_get_boolean_true(readonly_config: git_config.GitConfig) -> None:
"""Test GetBoolean on valid true boolean."""
assert readonly_config.GetBoolean("section.booltrue") is True
def test_get_boolean_false(readonly_config: git_config.GitConfig) -> None:
"""Test GetBoolean on valid false boolean."""
assert readonly_config.GetBoolean("section.boolfalse") is False
def test_get_int_undefined(readonly_config: git_config.GitConfig) -> None:
"""Test GetInt on key that doesn't exist."""
assert readonly_config.GetInt("section.missing") is None
def test_get_int_invalid(readonly_config: git_config.GitConfig) -> None:
"""Test GetInt on invalid integer value."""
assert readonly_config.GetInt("section.intinvalid") is None
@pytest.mark.parametrize(
"key, expected",
(
("inthex", 16),
("inthexk", 16384),
("int", 10),
("intk", 10240),
("intm", 10485760),
("intg", 10737418240),
),
)
def test_get_int_valid(
readonly_config: git_config.GitConfig, key: str, expected: int
) -> None:
"""Test GetInt on valid integers."""
assert readonly_config.GetInt(f"section.{key}") == expected
@pytest.fixture
def rw_config_file(tmp_path: Path) -> Path:
"""Return a path to a temporary config file."""
return tmp_path / "config"
def test_set_string(rw_config_file: Path) -> None:
"""Test SetString behavior."""
config = git_config.GitConfig(str(rw_config_file))
# Set a value.
assert config.GetString("foo.bar") is None
config.SetString("foo.bar", "val")
assert config.GetString("foo.bar") == "val"
# Make sure the value was actually written out.
config2 = git_config.GitConfig(str(rw_config_file))
assert config2.GetString("foo.bar") == "val"
# Update the value.
config.SetString("foo.bar", "valll")
assert config.GetString("foo.bar") == "valll"
config3 = git_config.GitConfig(str(rw_config_file))
assert config3.GetString("foo.bar") == "valll"
# Delete the value.
config.SetString("foo.bar", None)
assert config.GetString("foo.bar") is None
config4 = git_config.GitConfig(str(rw_config_file))
assert config4.GetString("foo.bar") is None
def test_set_boolean(rw_config_file: Path) -> None:
"""Test SetBoolean behavior."""
config = git_config.GitConfig(str(rw_config_file))
# Set a true value.
assert config.GetBoolean("foo.bar") is None
for val in (True, 1):
config.SetBoolean("foo.bar", val)
assert config.GetBoolean("foo.bar") is True
# Make sure the value was actually written out.
config2 = git_config.GitConfig(str(rw_config_file))
assert config2.GetBoolean("foo.bar") is True
assert config2.GetString("foo.bar") == "true"
# Set a false value.
for val in (False, 0):
config.SetBoolean("foo.bar", val)
assert config.GetBoolean("foo.bar") is False
# Make sure the value was actually written out.
config3 = git_config.GitConfig(str(rw_config_file))
assert config3.GetBoolean("foo.bar") is False
assert config3.GetString("foo.bar") == "false"
# Delete the value.
config.SetBoolean("foo.bar", None)
assert config.GetBoolean("foo.bar") is None
config4 = git_config.GitConfig(str(rw_config_file))
assert config4.GetBoolean("foo.bar") is None
def test_set_int(rw_config_file: Path) -> None:
"""Test SetInt behavior."""
config = git_config.GitConfig(str(rw_config_file))
# Set a value.
assert config.GetInt("foo.bar") is None
config.SetInt("foo.bar", 10)
assert config.GetInt("foo.bar") == 10
# Make sure the value was actually written out.
config2 = git_config.GitConfig(str(rw_config_file))
assert config2.GetInt("foo.bar") == 10
assert config2.GetString("foo.bar") == "10"
# Update the value.
config.SetInt("foo.bar", 20)
assert config.GetInt("foo.bar") == 20
config3 = git_config.GitConfig(str(rw_config_file))
assert config3.GetInt("foo.bar") == 20
# Delete the value.
config.SetInt("foo.bar", None)
assert config.GetInt("foo.bar") is None
config4 = git_config.GitConfig(str(rw_config_file))
assert config4.GetInt("foo.bar") is None
def test_get_sync_analysis_state_data(rw_config_file: Path) -> None:
"""Test config entries with a sync state analysis data."""
config = git_config.GitConfig(str(rw_config_file))
superproject_logging_data: dict[str, Any] = {"test": False}
class Options:
"""Container for testing."""
options = Options()
options.verbose = "true"
options.mp_update = "false"
TESTS = (
("superproject.test", "false"),
("options.verbose", "true"),
("options.mpupdate", "false"),
("main.version", "1"),
)
config.UpdateSyncAnalysisState(options, superproject_logging_data)
sync_data = config.GetSyncAnalysisStateData()
for key, value in TESTS:
assert sync_data[f"{git_config.SYNC_STATE_PREFIX}{key}"] == value
assert sync_data[f"{git_config.SYNC_STATE_PREFIX}main.synctime"]

tests/test_git_refs.py Normal file

@@ -0,0 +1,99 @@
# Copyright (C) 2026 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unittests for the git_refs.py module."""
import os
from pathlib import Path
import subprocess
import pytest
import utils_for_test
import git_refs
def _run(repo, *args):
return subprocess.run(
["git", "-C", repo, *args],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
encoding="utf-8",
check=True,
).stdout.strip()
def _init_repo(tmp_path, reftable=False):
repo = os.path.join(tmp_path, "repo")
ref_format = "reftable" if reftable else "files"
utils_for_test.init_git_tree(repo, ref_format=ref_format)
Path(os.path.join(repo, "a")).write_text("1")
_run(repo, "add", "a")
_run(repo, "commit", "-q", "-m", "init")
return repo
@pytest.mark.parametrize("reftable", [False, True])
def test_reads_refs(tmp_path, reftable):
if reftable and not utils_for_test.supports_reftable():
pytest.skip("reftable not supported")
repo = _init_repo(tmp_path, reftable=reftable)
gitdir = os.path.join(repo, ".git")
refs = git_refs.GitRefs(gitdir)
branch = _run(repo, "symbolic-ref", "--short", "HEAD")
head = _run(repo, "rev-parse", "HEAD")
assert refs.symref("HEAD") == f"refs/heads/{branch}"
assert refs.get("HEAD") == head
assert refs.get(f"refs/heads/{branch}") == head
@pytest.mark.parametrize("reftable", [False, True])
def test_updates_when_refs_change(tmp_path, reftable):
if reftable and not utils_for_test.supports_reftable():
pytest.skip("reftable not supported")
repo = _init_repo(tmp_path, reftable=reftable)
gitdir = os.path.join(repo, ".git")
refs = git_refs.GitRefs(gitdir)
head = _run(repo, "rev-parse", "HEAD")
assert refs.get("refs/heads/topic") == ""
_run(repo, "branch", "topic")
assert refs.get("refs/heads/topic") == head
_run(repo, "branch", "-D", "topic")
assert refs.get("refs/heads/topic") == ""
@pytest.mark.skipif(
not utils_for_test.supports_refs_migrate(),
reason="git refs migrate reftable support is required for this test",
)
def test_updates_when_storage_backend_toggles(tmp_path):
repo = _init_repo(tmp_path, reftable=False)
gitdir = os.path.join(repo, ".git")
refs = git_refs.GitRefs(gitdir)
head = _run(repo, "rev-parse", "HEAD")
assert refs.get("refs/heads/reftable-branch") == ""
_run(repo, "refs", "migrate", "--ref-format=reftable")
_run(repo, "branch", "reftable-branch")
assert refs.get("refs/heads/reftable-branch") == head
assert refs.get("refs/heads/files-branch") == ""
_run(repo, "refs", "migrate", "--ref-format=files")
_run(repo, "branch", "files-branch")
assert refs.get("refs/heads/files-branch") == head


@@ -461,6 +461,62 @@ class SuperprojectTestCase(unittest.TestCase):
"</manifest>",
)
def test_Init_success(self):
"""Test _Init succeeds and creates the work git dir."""
self.assertFalse(os.path.exists(self._superproject._work_git))
with mock.patch(
"git_superproject.GitCommand", autospec=True
) as mock_git_command:
instance = mock_git_command.return_value
instance.Wait.return_value = 0
self.assertTrue(self._superproject._Init())
mock_git_command.assert_called_once()
args, kwargs = mock_git_command.call_args
self.assertEqual(args[1][:2], ["init", "--bare"])
tmp_git_name = args[1][2]
self.assertTrue(
tmp_git_name.startswith(".tmp-superproject-initgitdir-")
)
self.assertTrue(os.path.exists(self._superproject._work_git))
tmp_git_path = os.path.join(
self._superproject._superproject_path, tmp_git_name
)
self.assertFalse(os.path.exists(tmp_git_path))
def test_Init_already_exists(self):
"""Test _Init returns early if the work git dir already exists."""
os.mkdir(self._superproject._superproject_path)
os.mkdir(self._superproject._work_git)
with mock.patch(
"git_superproject.GitCommand", autospec=True
) as mock_git_command:
self.assertTrue(self._superproject._Init())
mock_git_command.assert_not_called()
def test_Init_failure_cleans_up(self):
"""Test _Init cleans up the temporary directory if 'git init' fails."""
with mock.patch(
"git_superproject.GitCommand", autospec=True
) as mock_git_command:
instance = mock_git_command.return_value
instance.Wait.return_value = 1
instance.stderr = "mock git init failure"
self.assertFalse(self._superproject._Init())
mock_git_command.assert_called_once()
args, kwargs = mock_git_command.call_args
tmp_git_name = args[1][2]
tmp_git_path = os.path.join(
self._superproject._superproject_path, tmp_git_name
)
self.assertFalse(os.path.exists(tmp_git_path))
self.assertFalse(os.path.exists(self._superproject._work_git))
def test_Fetch(self):
manifest = self.getXmlManifest(
"""


@@ -14,6 +14,8 @@
"""Unittests for the git_trace2_event_log.py module."""
import contextlib
import io
import json
import os
import socket
@@ -65,13 +67,13 @@ class EventLogTestCase(unittest.TestCase):
def setUp(self):
"""Load the event_log module every time."""
self._event_log_module = None
self._event_log = None
# By default we initialize with the expected case where
# repo launches us (so GIT_TRACE2_PARENT_SID is set).
env = {
self.PARENT_SID_KEY: self.PARENT_SID_VALUE,
}
self._event_log_module = git_trace2_event_log.EventLog(env=env)
self._event_log = git_trace2_event_log.EventLog(env=env)
self._log_data = None
def verifyCommonKeys(
@@ -112,13 +114,13 @@ class EventLogTestCase(unittest.TestCase):
def test_initial_state_with_parent_sid(self):
"""Test initial state when 'GIT_TRACE2_PARENT_SID' is set by parent."""
self.assertRegex(self._event_log_module.full_sid, self.FULL_SID_REGEX)
self.assertRegex(self._event_log.full_sid, self.FULL_SID_REGEX)
def test_initial_state_no_parent_sid(self):
"""Test initial state when 'GIT_TRACE2_PARENT_SID' is not set."""
# Setup an empty environment dict (no parent sid).
self._event_log_module = git_trace2_event_log.EventLog(env={})
self.assertRegex(self._event_log_module.full_sid, self.SELF_SID_REGEX)
self._event_log = git_trace2_event_log.EventLog(env={})
self.assertRegex(self._event_log.full_sid, self.SELF_SID_REGEX)
def test_version_event(self):
"""Test 'version' event data is valid.
@@ -130,7 +132,7 @@ class EventLogTestCase(unittest.TestCase):
<version event>
"""
with tempfile.TemporaryDirectory(prefix="event_log_tests") as tempdir:
log_path = self._event_log_module.Write(path=tempdir)
log_path = self._event_log.Write(path=tempdir)
self._log_data = self.readLog(log_path)
# A log with no added events should only have the version entry.
@@ -150,9 +152,9 @@ class EventLogTestCase(unittest.TestCase):
<version event>
<start event>
"""
self._event_log_module.StartEvent([])
self._event_log.StartEvent([])
with tempfile.TemporaryDirectory(prefix="event_log_tests") as tempdir:
log_path = self._event_log_module.Write(path=tempdir)
log_path = self._event_log.Write(path=tempdir)
self._log_data = self.readLog(log_path)
self.assertEqual(len(self._log_data), 2)
@@ -172,9 +174,9 @@ class EventLogTestCase(unittest.TestCase):
<version event>
<exit event>
"""
self._event_log_module.ExitEvent(None)
self._event_log.ExitEvent(None)
with tempfile.TemporaryDirectory(prefix="event_log_tests") as tempdir:
log_path = self._event_log_module.Write(path=tempdir)
log_path = self._event_log.Write(path=tempdir)
self._log_data = self.readLog(log_path)
self.assertEqual(len(self._log_data), 2)
@@ -193,9 +195,9 @@ class EventLogTestCase(unittest.TestCase):
<version event>
<exit event>
"""
self._event_log_module.ExitEvent(2)
self._event_log.ExitEvent(2)
with tempfile.TemporaryDirectory(prefix="event_log_tests") as tempdir:
log_path = self._event_log_module.Write(path=tempdir)
log_path = self._event_log.Write(path=tempdir)
self._log_data = self.readLog(log_path)
self.assertEqual(len(self._log_data), 2)
@@ -213,11 +215,9 @@ class EventLogTestCase(unittest.TestCase):
<version event>
<command event>
"""
self._event_log_module.CommandEvent(
name="repo", subcommands=["init", "this"]
)
self._event_log.CommandEvent(name="repo", subcommands=["init", "this"])
with tempfile.TemporaryDirectory(prefix="event_log_tests") as tempdir:
log_path = self._event_log_module.Write(path=tempdir)
log_path = self._event_log.Write(path=tempdir)
self._log_data = self.readLog(log_path)
self.assertEqual(len(self._log_data), 2)
@@ -241,10 +241,10 @@ class EventLogTestCase(unittest.TestCase):
"repo.partialclone": "true",
"repo.partialclonefilter": "blob:none",
}
self._event_log_module.DefParamRepoEvents(config)
self._event_log.DefParamRepoEvents(config)
with tempfile.TemporaryDirectory(prefix="event_log_tests") as tempdir:
log_path = self._event_log_module.Write(path=tempdir)
log_path = self._event_log.Write(path=tempdir)
self._log_data = self.readLog(log_path)
self.assertEqual(len(self._log_data), 3)
@@ -268,10 +268,10 @@ class EventLogTestCase(unittest.TestCase):
"git.foo": "bar",
"git.core.foo2": "baz",
}
self._event_log_module.DefParamRepoEvents(config)
self._event_log.DefParamRepoEvents(config)
with tempfile.TemporaryDirectory(prefix="event_log_tests") as tempdir:
log_path = self._event_log_module.Write(path=tempdir)
log_path = self._event_log.Write(path=tempdir)
self._log_data = self.readLog(log_path)
self.assertEqual(len(self._log_data), 1)
@@ -292,10 +292,10 @@ class EventLogTestCase(unittest.TestCase):
"repo.syncstate.superproject.sys.argv": ["--", "sync", "protobuf"],
}
prefix_value = "prefix"
self._event_log_module.LogDataConfigEvents(config, prefix_value)
self._event_log.LogDataConfigEvents(config, prefix_value)
with tempfile.TemporaryDirectory(prefix="event_log_tests") as tempdir:
log_path = self._event_log_module.Write(path=tempdir)
log_path = self._event_log.Write(path=tempdir)
self._log_data = self.readLog(log_path)
self.assertEqual(len(self._log_data), 5)
@@ -311,7 +311,7 @@ class EventLogTestCase(unittest.TestCase):
key = self.remove_prefix(key, f"{prefix_value}/")
value = event["value"]
self.assertEqual(
self._event_log_module.GetDataEventName(value), event["event"]
self._event_log.GetDataEventName(value), event["event"]
)
self.assertTrue(key in config and value == config[key])
@@ -324,9 +324,9 @@ class EventLogTestCase(unittest.TestCase):
"""
msg = "invalid option: --cahced"
fmt = "invalid option: %s"
self._event_log_module.ErrorEvent(msg, fmt)
self._event_log.ErrorEvent(msg, fmt)
with tempfile.TemporaryDirectory(prefix="event_log_tests") as tempdir:
log_path = self._event_log_module.Write(path=tempdir)
log_path = self._event_log.Write(path=tempdir)
self._log_data = self.readLog(log_path)
self.assertEqual(len(self._log_data), 2)
@@ -341,33 +341,34 @@ class EventLogTestCase(unittest.TestCase):
def test_write_with_filename(self):
"""Test Write() with a path to a file exits with None."""
self.assertIsNone(self._event_log_module.Write(path="path/to/file"))
self.assertIsNone(self._event_log.Write(path="path/to/file"))
def test_write_with_git_config(self):
"""Test Write() uses the git config path when 'git config' call
succeeds."""
with tempfile.TemporaryDirectory(prefix="event_log_tests") as tempdir:
with mock.patch.object(
self._event_log_module,
self._event_log,
"_GetEventTargetPath",
return_value=tempdir,
):
self.assertEqual(
os.path.dirname(self._event_log_module.Write()), tempdir
os.path.dirname(self._event_log.Write()), tempdir
)
def test_write_no_git_config(self):
"""Test Write() with no git config variable present exits with None."""
with mock.patch.object(
self._event_log_module, "_GetEventTargetPath", return_value=None
self._event_log, "_GetEventTargetPath", return_value=None
):
self.assertIsNone(self._event_log_module.Write())
self.assertIsNone(self._event_log.Write())
def test_write_non_string(self):
"""Test Write() with non-string type for |path| throws TypeError."""
with self.assertRaises(TypeError):
self._event_log_module.Write(path=1234)
self._event_log.Write(path=1234)
@unittest.skipIf(not hasattr(socket, "AF_UNIX"), "Requires AF_UNIX sockets")
def test_write_socket(self):
"""Test Write() with Unix domain socket for |path| and validate received
traces."""
@@ -388,10 +389,8 @@ class EventLogTestCase(unittest.TestCase):
with server_ready:
server_ready.wait(timeout=120)
self._event_log_module.StartEvent([])
path = self._event_log_module.Write(
path=f"af_unix:{socket_path}"
)
self._event_log.StartEvent([])
path = self._event_log.Write(path=f"af_unix:{socket_path}")
finally:
server_thread.join(timeout=5)
@@ -404,3 +403,59 @@ class EventLogTestCase(unittest.TestCase):
# Check for 'start' event specific fields.
self.assertIn("argv", start_event)
self.assertIsInstance(start_event["argv"], list)
class EventLogVerboseTestCase(unittest.TestCase):
"""TestCase for the EventLog module verbose logging."""
def setUp(self):
self._event_log = git_trace2_event_log.EventLog(env={})
def test_write_socket_error_no_verbose(self):
"""Test Write() suppression of socket errors when not verbose."""
self._event_log.verbose = False
with contextlib.redirect_stderr(
io.StringIO()
) as mock_stderr, mock.patch("socket.socket", side_effect=OSError):
self._event_log.Write(path="af_unix:stream:/tmp/test_sock")
self.assertEqual(mock_stderr.getvalue(), "")
def test_write_socket_error_verbose(self):
"""Test Write() printing of socket errors when verbose."""
self._event_log.verbose = True
with contextlib.redirect_stderr(
io.StringIO()
) as mock_stderr, mock.patch(
"socket.socket", side_effect=OSError("Mock error")
):
self._event_log.Write(path="af_unix:stream:/tmp/test_sock")
self.assertIn(
"git trace2 logging failed: Mock error",
mock_stderr.getvalue(),
)
def test_write_file_error_no_verbose(self):
"""Test Write() suppression of file errors when not verbose."""
self._event_log.verbose = False
with contextlib.redirect_stderr(
io.StringIO()
) as mock_stderr, mock.patch(
"tempfile.NamedTemporaryFile", side_effect=FileExistsError
):
self._event_log.Write(path="/tmp")
self.assertEqual(mock_stderr.getvalue(), "")
def test_write_file_error_verbose(self):
"""Test Write() printing of file errors when verbose."""
self._event_log.verbose = True
with contextlib.redirect_stderr(
io.StringIO()
) as mock_stderr, mock.patch(
"tempfile.NamedTemporaryFile",
side_effect=FileExistsError("Mock error"),
):
self._event_log.Write(path="/tmp")
self.assertIn(
"git trace2 logging failed: FileExistsError",
mock_stderr.getvalue(),
)


@@ -14,42 +14,47 @@
"""Unittests for the hooks.py module."""
import unittest
import pytest
import hooks
class RepoHookShebang(unittest.TestCase):
"""Check shebang parsing in RepoHook."""
@pytest.mark.parametrize(
"data",
(
"",
"#\n# foo\n",
"# Bad shebang in script\n#!/foo\n",
),
)
def test_no_shebang(data: str) -> None:
"""Lines w/out shebangs should be rejected."""
assert hooks.RepoHook._ExtractInterpFromShebang(data) is None
def test_no_shebang(self):
"""Lines w/out shebangs should be rejected."""
DATA = ("", "#\n# foo\n", "# Bad shebang in script\n#!/foo\n")
for data in DATA:
self.assertIsNone(hooks.RepoHook._ExtractInterpFromShebang(data))
def test_direct_interp(self):
"""Lines whose shebang points directly to the interpreter."""
DATA = (
("#!/foo", "/foo"),
("#! /foo", "/foo"),
("#!/bin/foo ", "/bin/foo"),
("#! /usr/foo ", "/usr/foo"),
("#! /usr/foo -args", "/usr/foo"),
)
for shebang, interp in DATA:
self.assertEqual(
hooks.RepoHook._ExtractInterpFromShebang(shebang), interp
)
@pytest.mark.parametrize(
"shebang, interp",
(
("#!/foo", "/foo"),
("#! /foo", "/foo"),
("#!/bin/foo ", "/bin/foo"),
("#! /usr/foo ", "/usr/foo"),
("#! /usr/foo -args", "/usr/foo"),
),
)
def test_direct_interp(shebang: str, interp: str) -> None:
"""Lines whose shebang points directly to the interpreter."""
assert hooks.RepoHook._ExtractInterpFromShebang(shebang) == interp
def test_env_interp(self):
"""Lines whose shebang launches through `env`."""
DATA = (
("#!/usr/bin/env foo", "foo"),
("#!/bin/env foo", "foo"),
("#! /bin/env /bin/foo ", "/bin/foo"),
)
for shebang, interp in DATA:
self.assertEqual(
hooks.RepoHook._ExtractInterpFromShebang(shebang), interp
)
@pytest.mark.parametrize(
"shebang, interp",
(
("#!/usr/bin/env foo", "foo"),
("#!/bin/env foo", "foo"),
("#! /bin/env /bin/foo ", "/bin/foo"),
),
)
def test_env_interp(shebang: str, interp: str) -> None:
"""Lines whose shebang launches through `env`."""
assert hooks.RepoHook._ExtractInterpFromShebang(shebang) == interp


@@ -401,6 +401,32 @@ class XmlManifestTests(ManifestParseTestCase):
self.assertEqual(len(manifest.projects), 1)
self.assertEqual(manifest.projects[0].name, "test-project")
def test_sync_j_max(self):
"""Check sync-j-max handling."""
# Check valid value.
manifest = self.getXmlManifest(
'<manifest><default sync-j-max="5" /></manifest>'
)
self.assertEqual(manifest.default.sync_j_max, 5)
self.assertEqual(
manifest.ToXml().toxml(),
'<?xml version="1.0" ?>'
'<manifest><default sync-j-max="5"/></manifest>',
)
# Check invalid values.
with self.assertRaises(error.ManifestParseError):
manifest = self.getXmlManifest(
'<manifest><default sync-j-max="0" /></manifest>'
)
manifest.ToXml()
with self.assertRaises(error.ManifestParseError):
manifest = self.getXmlManifest(
'<manifest><default sync-j-max="-1" /></manifest>'
)
manifest.ToXml()
class IncludeElementTests(ManifestParseTestCase):
"""Tests for <include>."""
@@ -549,6 +575,33 @@ class IncludeElementTests(ManifestParseTestCase):
# Check project has set group via extend-project element.
self.assertIn("eg1", proj.groups)
def test_extend_project_does_not_inherit_local_groups(self):
"""Check that extend-project does not inherit local groups."""
root_m = self.manifest_dir / "root.xml"
root_m.write_text(
"""
<manifest>
<remote name="test-remote" fetch="http://localhost" />
<default remote="test-remote" revision="refs/heads/main" />
<project name="project1" path="project1" />
<include name="man1.xml" groups="g1,local:g2" />
</manifest>
"""
)
(self.manifest_dir / "man1.xml").write_text(
"""
<manifest>
<extend-project name="project1" groups="g3" />
</manifest>
"""
)
include_m = manifest_xml.XmlManifest(str(self.repodir), str(root_m))
proj = include_m.projects[0]
self.assertIn("g1", proj.groups)
self.assertNotIn("local:g2", proj.groups)
self.assertIn("g3", proj.groups)
def test_allow_bad_name_from_user(self):
"""Check handling of bad name attribute from the user's input."""
@@ -1427,6 +1480,46 @@ class ExtendProjectElementTests(ManifestParseTestCase):
"</manifest>",
)
def test_extend_project_annotations_multiples(self):
manifest = self.getXmlManifest(
"""
<manifest>
<remote name="default-remote" fetch="http://localhost" />
<default remote="default-remote" revision="refs/heads/main" />
<project name="myproject">
<annotation name="foo" value="bar" />
<annotation name="few" value="bar" />
</project>
<extend-project name="myproject">
<annotation name="foo" value="new_bar" />
<annotation name="new" value="anno" />
</extend-project>
</manifest>
"""
)
self.assertEqual(
[(a.name, a.value) for a in manifest.projects[0].annotations],
[
("foo", "bar"),
("few", "bar"),
("foo", "new_bar"),
("new", "anno"),
],
)
self.assertEqual(
sort_attributes(manifest.ToXml().toxml()),
'<?xml version="1.0" ?><manifest>'
'<remote fetch="http://localhost" name="default-remote"/>'
'<default remote="default-remote" revision="refs/heads/main"/>'
'<project name="myproject">'
'<annotation name="foo" value="bar"/>'
'<annotation name="few" value="bar"/>'
'<annotation name="foo" value="new_bar"/>'
'<annotation name="new" value="anno"/>'
"</project>"
"</manifest>",
)
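As the expected lists above show, `<extend-project>` annotations are appended after the project's own, so a duplicate name like `foo` appears twice in order. A one-line sketch of that merge behavior (hypothetical helper, not the real manifest code):

```python
def merge_annotations(project_annotations, extend_annotations):
    """extend-project appends its annotations rather than replacing them,
    so duplicate names survive in definition order (sketch)."""
    return list(project_annotations) + list(extend_annotations)
```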
class NormalizeUrlTests(ManifestParseTestCase):
"""Tests for normalize_url() in manifest_xml.py"""


@@ -14,39 +14,35 @@
"""Unittests for the platform_utils.py module."""
import os
import tempfile
import unittest
from pathlib import Path
import pytest
import platform_utils
class RemoveTests(unittest.TestCase):
"""Check remove() helper."""
def test_remove_missing_ok(tmp_path: Path) -> None:
"""Check missing_ok handling."""
path = tmp_path / "test"
def testMissingOk(self):
"""Check missing_ok handling."""
with tempfile.TemporaryDirectory() as tmpdir:
path = os.path.join(tmpdir, "test")
# Should not fail.
platform_utils.remove(path, missing_ok=True)
# Should not fail.
platform_utils.remove(path, missing_ok=True)
# Should fail.
with pytest.raises(OSError):
platform_utils.remove(path)
with pytest.raises(OSError):
platform_utils.remove(path, missing_ok=False)
# Should fail.
self.assertRaises(OSError, platform_utils.remove, path)
self.assertRaises(
OSError, platform_utils.remove, path, missing_ok=False
)
# Should not fail if it exists.
path.touch()
platform_utils.remove(path, missing_ok=True)
assert not path.exists()
# Should not fail if it exists.
open(path, "w").close()
platform_utils.remove(path, missing_ok=True)
self.assertFalse(os.path.exists(path))
path.touch()
platform_utils.remove(path)
assert not path.exists()
open(path, "w").close()
platform_utils.remove(path)
self.assertFalse(os.path.exists(path))
open(path, "w").close()
platform_utils.remove(path, missing_ok=False)
self.assertFalse(os.path.exists(path))
path.touch()
platform_utils.remove(path, missing_ok=False)
assert not path.exists()


@@ -19,35 +19,18 @@ import os
from pathlib import Path
import subprocess
import tempfile
from typing import Optional
import unittest
import utils_for_test
import error
import git_command
import git_config
import manifest_xml
import platform_utils
import project
@contextlib.contextmanager
def TempGitTree():
"""Create a new empty git checkout for testing."""
with tempfile.TemporaryDirectory(prefix="repo-tests") as tempdir:
# Tests need to assume that main is the default branch at init,
# which is not supported in config until 2.28.
cmd = ["git", "init"]
if git_command.git_require((2, 28, 0)):
cmd += ["--initial-branch=main"]
else:
# Use template dir for init.
templatedir = tempfile.mkdtemp(prefix=".test-template")
with open(os.path.join(templatedir, "HEAD"), "w") as fp:
fp.write("ref: refs/heads/main\n")
cmd += ["--template", templatedir]
subprocess.check_call(cmd, cwd=tempdir)
yield tempdir
class FakeProject:
"""A fake for Project for basic functionality."""
@@ -63,13 +46,16 @@ class FakeProject:
)
self.config = git_config.GitConfig.ForRepository(gitdir=self.gitdir)
def RelPath(self, local: Optional[bool] = None) -> str:
return self.name
class ReviewableBranchTests(unittest.TestCase):
"""Check ReviewableBranch behavior."""
def test_smoke(self):
"""A quick run through everything."""
with TempGitTree() as tempdir:
with utils_for_test.TempGitTree() as tempdir:
fakeproj = FakeProject(tempdir)
# Generate some commits.
@@ -116,6 +102,29 @@ class ProjectTests(unittest.TestCase):
"abcd00%21%21_%2b",
)
@unittest.skipUnless(
utils_for_test.supports_reftable(),
"git reftable support is required for this test",
)
def test_get_head_unborn_reftable(self):
with tempfile.TemporaryDirectory(prefix="repo-tests") as tempdir:
subprocess.check_call(
[
"git",
"-c",
"init.defaultRefFormat=reftable",
"init",
"-q",
tempdir,
]
)
fakeproj = FakeProject(tempdir)
expected = subprocess.check_output(
["git", "-C", tempdir, "symbolic-ref", "-q", "HEAD"],
encoding="utf-8",
).strip()
self.assertEqual(expected, fakeproj.work_git.GetHead())
class CopyLinkTestCase(unittest.TestCase):
"""TestCase for stub repo client checkouts.
@@ -359,6 +368,7 @@ class MigrateWorkTreeTests(unittest.TestCase):
"""Check _MigrateOldWorkTreeGitDir handling."""
_SYMLINKS = {
# go/keep-sorted start
"config",
"description",
"hooks",
@@ -367,9 +377,11 @@ class MigrateWorkTreeTests(unittest.TestCase):
"objects",
"packed-refs",
"refs",
"reftable",
"rr-cache",
"shallow",
"svn",
# go/keep-sorted end
}
_FILES = {
"COMMIT_EDITMSG",
@@ -448,6 +460,25 @@ class MigrateWorkTreeTests(unittest.TestCase):
for name in self._SYMLINKS:
self.assertTrue((dotgit / name).is_symlink())
def test_reftable_anchor_with_refs_dir(self):
"""Migrate when reftable/ and refs/ are directories."""
with self._simple_layout() as tempdir:
dotgit = tempdir / "src/test/.git"
(dotgit / "refs").unlink()
(dotgit / "refs").mkdir()
(dotgit / "refs" / "heads").write_text("dummy")
(dotgit / "reftable").unlink()
(dotgit / "reftable").mkdir()
(dotgit / "reftable" / "tables.list").write_text("dummy")
project.Project._MigrateOldWorkTreeGitDir(str(dotgit))
self.assertTrue(dotgit.is_symlink())
self.assertEqual(
os.readlink(dotgit),
os.path.normpath("../../.repo/projects/src/test.git"),
)
class ManifestPropertiesFetchedCorrectly(unittest.TestCase):
"""Ensure properties are fetched properly."""
@@ -467,7 +498,7 @@ class ManifestPropertiesFetchedCorrectly(unittest.TestCase):
def test_manifest_config_properties(self):
"""Test we are fetching the manifest config properties correctly."""
with TempGitTree() as tempdir:
with utils_for_test.TempGitTree() as tempdir:
fakeproj = self.setUpManifest(tempdir)
# Set property using the expected Set method, then ensure


@@ -12,90 +12,96 @@
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unit test for repo_logging module."""
"""Unittests for the repo_logging.py module."""
import contextlib
import io
import logging
import unittest
import re
from unittest import mock
import pytest
from color import SetDefaultColoring
from error import RepoExitError
from repo_logging import RepoLogger
class TestRepoLogger(unittest.TestCase):
@mock.patch.object(RepoLogger, "error")
def test_log_aggregated_errors_logs_aggregated_errors(self, mock_error):
"""Test if log_aggregated_errors logs a list of aggregated errors."""
logger = RepoLogger(__name__)
logger.log_aggregated_errors(
RepoExitError(
aggregate_errors=[
Exception("foo"),
Exception("bar"),
Exception("baz"),
Exception("hello"),
Exception("world"),
Exception("test"),
]
)
)
mock_error.assert_has_calls(
[
mock.call("=" * 80),
mock.call(
"Repo command failed due to the following `%s` errors:",
"RepoExitError",
),
mock.call("foo\nbar\nbaz\nhello\nworld"),
mock.call("+%d additional errors...", 1),
@mock.patch.object(RepoLogger, "error")
def test_log_aggregated_errors_logs_aggregated_errors(mock_error) -> None:
"""Test if log_aggregated_errors logs a list of aggregated errors."""
logger = RepoLogger(__name__)
logger.log_aggregated_errors(
RepoExitError(
aggregate_errors=[
Exception("foo"),
Exception("bar"),
Exception("baz"),
Exception("hello"),
Exception("world"),
Exception("test"),
]
)
)
@mock.patch.object(RepoLogger, "error")
def test_log_aggregated_errors_logs_single_error(self, mock_error):
"""Test if log_aggregated_errors logs empty aggregated_errors."""
mock_error.assert_has_calls(
[
mock.call("=" * 80),
mock.call(
"Repo command failed due to the following `%s` errors:",
"RepoExitError",
),
mock.call("foo\nbar\nbaz\nhello\nworld"),
mock.call("+%d additional errors...", 1),
]
)
@mock.patch.object(RepoLogger, "error")
def test_log_aggregated_errors_logs_single_error(mock_error) -> None:
"""Test if log_aggregated_errors logs empty aggregated_errors."""
logger = RepoLogger(__name__)
logger.log_aggregated_errors(RepoExitError())
mock_error.assert_has_calls(
[
mock.call("=" * 80),
mock.call("Repo command failed: %s", "RepoExitError"),
]
)
@pytest.mark.parametrize(
"level",
(
logging.INFO,
logging.WARN,
logging.ERROR,
),
)
def test_log_with_format_string(level: int) -> None:
"""Test different log levels with format strings."""
name = logging.getLevelName(level)
# Set color output to "always" for consistent test results.
# This ensures the logger's behavior is uniform across different
# environments and git configurations.
SetDefaultColoring("always")
# Regex pattern to match optional ANSI color codes.
# \033 - Escape character
# \[ - Opening square bracket
# [0-9;]* - Zero or more digits or semicolons
# m - Ending 'm' character
# ? - Makes the entire group optional
opt_color = r"(\033\[[0-9;]*m)?"
output = io.StringIO()
with contextlib.redirect_stderr(output):
logger = RepoLogger(__name__)
logger.log_aggregated_errors(RepoExitError())
logger.log(level, "%s", "100% pass")
mock_error.assert_has_calls(
[
mock.call("=" * 80),
mock.call("Repo command failed: %s", "RepoExitError"),
]
)
def test_log_with_format_string(self):
"""Test different log levels with format strings."""
# Set color output to "always" for consistent test results.
# This ensures the logger's behavior is uniform across different
# environments and git configurations.
SetDefaultColoring("always")
# Regex pattern to match optional ANSI color codes.
# \033 - Escape character
# \[ - Opening square bracket
# [0-9;]* - Zero or more digits or semicolons
# m - Ending 'm' character
# ? - Makes the entire group optional
opt_color = r"(\033\[[0-9;]*m)?"
for level in (logging.INFO, logging.WARN, logging.ERROR):
name = logging.getLevelName(level)
with self.subTest(level=level, name=name):
output = io.StringIO()
with contextlib.redirect_stderr(output):
logger = RepoLogger(__name__)
logger.log(level, "%s", "100% pass")
self.assertRegex(
output.getvalue().strip(),
f"^{opt_color}100% pass{opt_color}$",
f"failed for level {name}",
)
assert re.search(
f"^{opt_color}100% pass{opt_color}$", output.getvalue().strip()
), f"failed for level {name}"


@@ -15,46 +15,37 @@
"""Unittests for the repo_trace.py module."""
import os
import unittest
from unittest import mock
import pytest
import repo_trace
class TraceTests(unittest.TestCase):
def test_trace_max_size_enforced(monkeypatch: pytest.MonkeyPatch) -> None:
"""Check Trace behavior."""
content = "git chicken"
def testTrace_MaxSizeEnforced(self):
content = "git chicken"
with repo_trace.Trace(content, first_trace=True):
pass
first_trace_size = os.path.getsize(repo_trace._TRACE_FILE)
with repo_trace.Trace(content, first_trace=True):
pass
first_trace_size = os.path.getsize(repo_trace._TRACE_FILE)
with repo_trace.Trace(content):
pass
assert os.path.getsize(repo_trace._TRACE_FILE) > first_trace_size
with repo_trace.Trace(content):
pass
self.assertGreater(
os.path.getsize(repo_trace._TRACE_FILE), first_trace_size
)
# Check we clear everything if the last chunk is larger than _MAX_SIZE.
monkeypatch.setattr(repo_trace, "_MAX_SIZE", 0)
with repo_trace.Trace(content, first_trace=True):
pass
assert os.path.getsize(repo_trace._TRACE_FILE) == first_trace_size
# Check we clear everything if the last chunk is larger than _MAX_SIZE.
with mock.patch("repo_trace._MAX_SIZE", 0):
with repo_trace.Trace(content, first_trace=True):
pass
self.assertEqual(
first_trace_size, os.path.getsize(repo_trace._TRACE_FILE)
)
# Check we only clear the chunks we need to.
new_max = (first_trace_size + 1) / (1024 * 1024)
monkeypatch.setattr(repo_trace, "_MAX_SIZE", new_max)
with repo_trace.Trace(content, first_trace=True):
pass
assert os.path.getsize(repo_trace._TRACE_FILE) == first_trace_size * 2
# Check we only clear the chunks we need to.
repo_trace._MAX_SIZE = (first_trace_size + 1) / (1024 * 1024)
with repo_trace.Trace(content, first_trace=True):
pass
self.assertEqual(
first_trace_size * 2, os.path.getsize(repo_trace._TRACE_FILE)
)
with repo_trace.Trace(content, first_trace=True):
pass
self.assertEqual(
first_trace_size * 2, os.path.getsize(repo_trace._TRACE_FILE)
)
with repo_trace.Trace(content, first_trace=True):
pass
assert os.path.getsize(repo_trace._TRACE_FILE) == first_trace_size * 2


@@ -16,65 +16,84 @@
import multiprocessing
import subprocess
import unittest
from typing import Tuple
from unittest import mock
import pytest
import ssh
class SshTests(unittest.TestCase):
"""Tests the ssh functions."""
@pytest.fixture(autouse=True)
def clear_ssh_version_cache() -> None:
"""Clear the ssh version cache before each test."""
ssh.version.cache_clear()
def test_parse_ssh_version(self):
"""Check _parse_ssh_version() handling."""
ver = ssh._parse_ssh_version("Unknown\n")
self.assertEqual(ver, ())
ver = ssh._parse_ssh_version("OpenSSH_1.0\n")
self.assertEqual(ver, (1, 0))
ver = ssh._parse_ssh_version(
"OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.13, OpenSSL 1.0.1f 6 Jan 2014\n"
)
self.assertEqual(ver, (6, 6, 1))
ver = ssh._parse_ssh_version(
"OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\n"
)
self.assertEqual(ver, (7, 6))
ver = ssh._parse_ssh_version("OpenSSH_9.0p1, LibreSSL 3.3.6\n")
self.assertEqual(ver, (9, 0))
def test_version(self):
"""Check version() handling."""
with mock.patch("ssh._run_ssh_version", return_value="OpenSSH_1.2\n"):
self.assertEqual(ssh.version(), (1, 2))
@pytest.mark.parametrize(
"input_str, expected",
(
("Unknown\n", ()),
("OpenSSH_1.0\n", (1, 0)),
(
"OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.13, OpenSSL 1.0.1f 6 Jan 2014\n",
(6, 6, 1),
),
(
"OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\n",
(7, 6),
),
("OpenSSH_9.0p1, LibreSSL 3.3.6\n", (9, 0)),
),
)
def test_parse_ssh_version(input_str: str, expected: Tuple[int, ...]) -> None:
"""Check _parse_ssh_version() handling."""
assert ssh._parse_ssh_version(input_str) == expected
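The parametrized cases above pin down the parsing contract: take the leading `OpenSSH_X.Y[.Z]` token from `ssh -V` output, drop the `pN` patch-level suffix, and return `()` for unrecognized formats. A hedged sketch of one way to implement that (hypothetical helper, not necessarily how `ssh._parse_ssh_version` is written):

```python
import re


def parse_ssh_version(output):
    """Extract a version tuple from `ssh -V`-style output; unknown formats
    yield an empty tuple (sketch of the contract tested above)."""
    m = re.match(r"OpenSSH_(\d+)\.(\d+)(?:\.(\d+))?", output)
    if not m:
        return ()
    # Only keep the numeric groups that actually matched.
    return tuple(int(g) for g in m.groups() if g is not None)
```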
def test_context_manager_empty(self):
"""Verify context manager with no clients works correctly."""
with multiprocessing.Manager() as manager:
with ssh.ProxyManager(manager):
pass
def test_context_manager_child_cleanup(self):
"""Verify orphaned clients & masters get cleaned up."""
with multiprocessing.Manager() as manager:
def test_version() -> None:
"""Check version() handling."""
with mock.patch("ssh._run_ssh_version", return_value="OpenSSH_1.2\n"):
assert ssh.version() == (1, 2)
def test_context_manager_empty() -> None:
"""Verify context manager with no clients works correctly."""
with multiprocessing.Manager() as manager:
with ssh.ProxyManager(manager):
pass
def test_context_manager_child_cleanup() -> None:
"""Verify orphaned clients & masters get cleaned up."""
with multiprocessing.Manager() as manager:
with mock.patch("ssh.version", return_value=(1, 2)):
with ssh.ProxyManager(manager) as ssh_proxy:
client = subprocess.Popen(["sleep", "964853320"])
ssh_proxy.add_client(client)
master = subprocess.Popen(["sleep", "964853321"])
ssh_proxy.add_master(master)
# If the process still exists, these will throw timeout errors.
client.wait(0)
master.wait(0)
# If the process still exists, these will throw timeout errors.
client.wait(0)
master.wait(0)
def test_ssh_sock(self):
"""Check sock() function."""
manager = multiprocessing.Manager()
def test_ssh_sock(monkeypatch: pytest.MonkeyPatch) -> None:
"""Check sock() function."""
with multiprocessing.Manager() as manager:
proxy = ssh.ProxyManager(manager)
with mock.patch("tempfile.mkdtemp", return_value="/tmp/foo"):
# Old ssh version uses port.
with mock.patch("ssh.version", return_value=(6, 6)):
self.assertTrue(proxy.sock().endswith("%p"))
monkeypatch.setattr(
"tempfile.mkdtemp", lambda *args, **kwargs: "/tmp/foo"
)
proxy._sock_path = None
# New ssh version uses hash.
with mock.patch("ssh.version", return_value=(6, 7)):
self.assertTrue(proxy.sock().endswith("%C"))
# Old ssh version uses port.
with mock.patch("ssh.version", return_value=(6, 6)):
with proxy as ssh_proxy:
assert ssh_proxy.sock().endswith("%p")
proxy._sock_path = None
# New ssh version uses hash.
with mock.patch("ssh.version", return_value=(6, 7)):
with proxy as ssh_proxy:
assert ssh_proxy.sock().endswith("%C")
proxy._sock_path = None
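The sock test encodes a version cutoff: older OpenSSH keys the control socket path on the remote port (`%p`), while 6.7+ can use a connection hash (`%C`). A sketch of that decision (hypothetical helper name):

```python
def sock_token(version):
    """Pick the ControlPath filename token for a given OpenSSH version
    tuple: %C (connection hash) for 6.7+, %p (port) before that (sketch)."""
    return "%C" if version >= (6, 7) else "%p"
```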


@@ -15,123 +15,164 @@
"""Unittests for the subcmds module (mostly __init__.py than subcommands)."""
import optparse
import unittest
from typing import Type
import pytest
from command import Command
import subcmds
class AllCommands(unittest.TestCase):
"""Check registered all_commands."""
# NB: We don't test all subcommands as we want to avoid "change detection"
# tests, so we just look for the most common/important ones here that are
# unlikely to ever change.
@pytest.mark.parametrize(
"cmd", ("cherry-pick", "help", "init", "start", "sync", "upload")
)
def test_required_basic(cmd: str) -> None:
"""Basic checking of registered commands."""
assert cmd in subcmds.all_commands
def test_required_basic(self):
"""Basic checking of registered commands."""
# NB: We don't test all subcommands as we want to avoid "change
# detection" tests, so we just look for the most common/important ones
# here that are unlikely to ever change.
for cmd in {"cherry-pick", "help", "init", "start", "sync", "upload"}:
self.assertIn(cmd, subcmds.all_commands)
def test_naming(self):
"""Verify we don't add things that we shouldn't."""
for cmd in subcmds.all_commands:
# Reject filename suffixes like "help.py".
self.assertNotIn(".", cmd)
@pytest.mark.parametrize("name", subcmds.all_commands.keys())
def test_naming(name: str) -> None:
"""Verify we don't add things that we shouldn't."""
# Reject filename suffixes like "help.py".
assert "." not in name
# Make sure all '_' were converted to '-'.
self.assertNotIn("_", cmd)
# Make sure all '_' were converted to '-'.
assert "_" not in name
# Reject internal python paths like "__init__".
self.assertFalse(cmd.startswith("__"))
# Reject internal python paths like "__init__".
assert not name.startswith("__")
def test_help_desc_style(self):
"""Force some consistency in option descriptions.
Python's optparse & argparse have a few default options like --help.
Their option description text uses lowercase sentence fragments, so
enforce our options follow the same style so UI is consistent.
@pytest.mark.parametrize("name, cls", subcmds.all_commands.items())
def test_help_desc_style(name: str, cls: Type[Command]) -> None:
"""Force some consistency in option descriptions.
We enforce:
* Text starts with lowercase.
* Text doesn't end with period.
"""
for name, cls in subcmds.all_commands.items():
cmd = cls()
parser = cmd.OptionParser
for option in parser.option_list:
if option.help == optparse.SUPPRESS_HELP:
continue
Python's optparse & argparse have a few default options like --help.
Their option description text uses lowercase sentence fragments, so
enforce our options follow the same style so UI is consistent.
c = option.help[0]
self.assertEqual(
c.lower(),
c,
msg=f"subcmds/{name}.py: {option.get_opt_string()}: "
f'help text should start with lowercase: "{option.help}"',
)
We enforce:
* Text starts with lowercase.
* Text doesn't end with period.
"""
cmd = cls()
parser = cmd.OptionParser
for option in parser.option_list:
if option.help == optparse.SUPPRESS_HELP or not option.help:
continue
self.assertNotEqual(
option.help[-1],
".",
msg=f"subcmds/{name}.py: {option.get_opt_string()}: "
f'help text should not end in a period: "{option.help}"',
)
c = option.help[0]
assert c.lower() == c, (
f"subcmds/{name}.py: {option.get_opt_string()}: "
f'help text should start with lowercase: "{option.help}"'
)
def test_cli_option_style(self):
"""Force some consistency in option flags."""
for name, cls in subcmds.all_commands.items():
cmd = cls()
parser = cmd.OptionParser
for option in parser.option_list:
for opt in option._long_opts:
self.assertNotIn(
"_",
opt,
msg=f"subcmds/{name}.py: {opt}: only use dashes in "
"options, not underscores",
)
assert option.help[-1] != ".", (
f"subcmds/{name}.py: {option.get_opt_string()}: "
f'help text should not end in a period: "{option.help}"'
)
def test_cli_option_dest(self):
"""Block redundant dest= arguments."""
def _check_dest(opt):
"""Check the dest= setting."""
# If the destination is not set, nothing to check.
# If long options are not set, then there's no implicit destination.
# If callback is used, then a destination might be needed because
# optparse cannot assume a value is always stored.
if opt.dest is None or not opt._long_opts or opt.callback:
return
@pytest.mark.parametrize("name, cls", subcmds.all_commands.items())
def test_cli_option_style(name: str, cls: Type[Command]) -> None:
"""Force some consistency in option flags."""
cmd = cls()
parser = cmd.OptionParser
for option in parser.option_list:
for opt in option._long_opts:
assert "_" not in opt, (
f"subcmds/{name}.py: {opt}: only use dashes in "
"options, not underscores"
)
long = opt._long_opts[0]
assert long.startswith("--")
# This matches optparse's behavior.
implicit_dest = long[2:].replace("-", "_")
if implicit_dest == opt.dest:
bad_opts.append((str(opt), opt.dest))
# Hook the option check list.
optparse.Option.CHECK_METHODS.insert(0, _check_dest)
def test_cli_option_dest() -> None:
"""Block redundant dest= arguments."""
bad_opts: list[tuple[str, str]] = []
def _check_dest(opt: optparse.Option) -> None:
"""Check the dest= setting."""
# If the destination is not set, nothing to check.
# If long options are not set, then there's no implicit destination.
# If callback is used, then a destination might be needed because
# optparse cannot assume a value is always stored.
if opt.dest is None or not opt._long_opts or opt.callback:
return
long = opt._long_opts[0]
assert long.startswith("--")
# This matches optparse's behavior.
implicit_dest = long[2:].replace("-", "_")
if implicit_dest == opt.dest:
bad_opts.append((str(opt), opt.dest))
# Hook the option check list.
optparse.Option.CHECK_METHODS.insert(0, _check_dest)
try:
# Gather all the bad options up front so people can see all bad options
# instead of failing at the first one.
all_bad_opts = {}
all_bad_opts: dict[str, list[tuple[str, str]]] = {}
for name, cls in subcmds.all_commands.items():
bad_opts = all_bad_opts[name] = []
bad_opts = []
cmd = cls()
# Trigger construction of parser.
cmd.OptionParser
_ = cmd.OptionParser
all_bad_opts[name] = bad_opts
errmsg = None
for name, bad_opts in sorted(all_bad_opts.items()):
if bad_opts:
errmsg = ""
for name, bad_opts_list in sorted(all_bad_opts.items()):
if bad_opts_list:
if not errmsg:
errmsg = "Omit redundant dest= when defining options.\n"
errmsg += f"\nSubcommand {name} (subcmds/{name}.py):\n"
errmsg += "".join(
f" {opt}: dest='{dest}'\n" for opt, dest in bad_opts
f" {opt}: dest='{dest}'\n" for opt, dest in bad_opts_list
)
if errmsg:
self.fail(errmsg)
pytest.fail(errmsg)
finally:
# Make sure we aren't popping the wrong stuff.
assert optparse.Option.CHECK_METHODS.pop(0) is _check_dest
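The `dest=` test relies on `optparse.Option.CHECK_METHODS` being a class-level list of checkers that optparse runs on every `Option` as it is constructed, so inserting a function at index 0 (and popping it in a `finally`) lets the test observe each option at definition time. A small self-contained demonstration of that hook pattern, here flagging underscores instead of redundant `dest=`:

```python
import optparse

flagged = []


def check_underscores(option):
    # optparse calls every CHECK_METHODS entry with each Option instance
    # during construction, so this sees options as they are defined.
    for opt in option._long_opts:
        if "_" in opt:
            flagged.append(opt)


optparse.Option.CHECK_METHODS.insert(0, check_underscores)
try:
    parser = optparse.OptionParser()
    parser.add_option("--bad_flag", dest="bad")
    parser.add_option("--good-flag", dest="good")
finally:
    # Pop exactly what we pushed, mirroring the test's cleanup assertion.
    assert optparse.Option.CHECK_METHODS.pop(0) is check_underscores
```

Note `CHECK_METHODS` and `_long_opts` are undocumented optparse internals, which is why the test is careful to restore the list even on failure.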
@pytest.mark.parametrize("name, cls", subcmds.all_commands.items())
def test_common_validate_options(name: str, cls: Type[Command]) -> None:
"""Verify CommonValidateOptions sets up expected fields."""
cmd = cls()
opts, args = cmd.OptionParser.parse_args([])
# Verify the fields don't exist yet.
assert not hasattr(
opts, "verbose"
), f"{name}: has verbose before validation"
assert not hasattr(opts, "quiet"), f"{name}: has quiet before validation"
cmd.CommonValidateOptions(opts, args)
# Verify the fields exist now.
assert hasattr(opts, "verbose"), f"{name}: missing verbose after validation"
assert hasattr(opts, "quiet"), f"{name}: missing quiet after validation"
assert hasattr(
opts, "outer_manifest"
), f"{name}: missing outer_manifest after validation"
def test_attribute_error_repro() -> None:
"""Confirm that accessing verbose before CommonValidateOptions fails."""
from subcmds.sync import Sync
cmd = Sync()
opts, args = cmd.OptionParser.parse_args([])
# This confirms that without the fix in main.py, an AttributeError
# would be raised because CommonValidateOptions hasn't been called yet.
with pytest.raises(AttributeError):
_ = opts.verbose
cmd.CommonValidateOptions(opts, args)
assert hasattr(opts, "verbose")


@@ -17,12 +17,12 @@
from io import StringIO
import os
from shutil import rmtree
import subprocess
import tempfile
import unittest
from unittest import mock
import git_command
import utils_for_test
import manifest_xml
import project
import subcmds
@@ -50,24 +50,6 @@ class AllCommands(unittest.TestCase):
"""Common teardown."""
rmtree(self.tempdir, ignore_errors=True)
def initTempGitTree(self, git_dir):
"""Create a new empty git checkout for testing."""
# Tests need to assume that main is the default branch at init,
# which is not supported in config until 2.28.
cmd = ["git", "init", "-q"]
if git_command.git_require((2, 28, 0)):
cmd += ["--initial-branch=main"]
else:
# Use template dir for init
templatedir = os.path.join(self.tempdirobj.name, ".test-template")
os.makedirs(templatedir)
with open(os.path.join(templatedir, "HEAD"), "w") as fp:
fp.write("ref: refs/heads/main\n")
cmd += ["--template", templatedir]
cmd += [git_dir]
subprocess.check_call(cmd)
def getXmlManifestWith8Projects(self):
"""Create and return a setup of 8 projects with enough dummy
files and setup to execute forall."""
@@ -114,7 +96,7 @@ class AllCommands(unittest.TestCase):
)
)
git_path = os.path.join(self.tempdir, "tests/path" + str(x))
self.initTempGitTree(git_path)
utils_for_test.init_git_tree(git_path)
return manifest_xml.XmlManifest(self.repodir, self.manifest_file)

tests/test_subcmds_gc.py Normal file

@@ -0,0 +1,82 @@
# Copyright (C) 2026 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unittests for the subcmds/gc.py module."""
import unittest
from unittest import mock
from subcmds import gc
class GcCommand(unittest.TestCase):
"""Tests for gc command."""
def setUp(self):
self.cmd = gc.Gc()
self.opt, self.args = self.cmd.OptionParser.parse_args([])
self.opt.this_manifest_only = False
self.opt.repack = False
self.mock_get_projects = mock.patch.object(
self.cmd, "GetProjects"
).start()
self.mock_delete = mock.patch.object(
self.cmd, "delete_unused_projects", return_value=0
).start()
self.mock_repack = mock.patch.object(
self.cmd, "repack_projects", return_value=0
).start()
def tearDown(self):
mock.patch.stopall()
def test_gc_no_args(self):
"""Test gc without specific projects."""
self.mock_get_projects.return_value = ["all_projects"]
self.cmd.Execute(self.opt, [])
self.mock_get_projects.assert_called_once_with([], all_manifests=True)
self.mock_delete.assert_called_once_with(["all_projects"], self.opt)
self.mock_repack.assert_not_called()
def test_gc_with_args(self):
"""Test gc with specific projects uses all_projects for delete."""
self.mock_get_projects.side_effect = [["projA"], ["all_projects"]]
self.opt.repack = True
self.cmd.Execute(self.opt, ["projA"])
self.mock_get_projects.assert_has_calls(
[
mock.call(["projA"], all_manifests=True),
mock.call([], all_manifests=True),
]
)
self.mock_delete.assert_called_once_with(["all_projects"], self.opt)
self.mock_repack.assert_called_once_with(["projA"], self.opt)
def test_gc_exit_on_delete_failure(self):
"""Test gc exits if delete_unused_projects fails."""
self.mock_get_projects.return_value = ["all_projects"]
self.mock_delete.return_value = 1
self.opt.repack = True
ret = self.cmd.Execute(self.opt, [])
self.assertEqual(ret, 1)
self.mock_repack.assert_not_called()


@@ -14,33 +14,36 @@
"""Unittests for the subcmds/init.py module."""
import unittest
from typing import List
import pytest
from subcmds import init
class InitCommand(unittest.TestCase):
"""Check registered all_commands."""
@pytest.mark.parametrize(
"argv",
([],),
)
def test_cli_parser_good(argv: List[str]) -> None:
"""Check valid command line options."""
cmd = init.Init()
opts, args = cmd.OptionParser.parse_args(argv)
cmd.ValidateOptions(opts, args)
def setUp(self):
self.cmd = init.Init()
def test_cli_parser_good(self):
"""Check valid command line options."""
ARGV = ([],)
for argv in ARGV:
opts, args = self.cmd.OptionParser.parse_args(argv)
self.cmd.ValidateOptions(opts, args)
def test_cli_parser_bad(self):
"""Check invalid command line options."""
ARGV = (
# Too many arguments.
["url", "asdf"],
# Conflicting options.
["--mirror", "--archive"],
)
for argv in ARGV:
opts, args = self.cmd.OptionParser.parse_args(argv)
with self.assertRaises(SystemExit):
self.cmd.ValidateOptions(opts, args)
@pytest.mark.parametrize(
"argv",
(
# Too many arguments.
["url", "asdf"],
# Conflicting options.
["--mirror", "--archive"],
),
)
def test_cli_parser_bad(argv: List[str]) -> None:
"""Check invalid command line options."""
cmd = init.Init()
opts, args = cmd.OptionParser.parse_args(argv)
with pytest.raises(SystemExit):
cmd.ValidateOptions(opts, args)


@@ -97,6 +97,35 @@ def test_cli_jobs(argv, jobs_manifest, jobs, jobs_net, jobs_check):
"""Tests --jobs option behavior."""
mp = mock.MagicMock()
mp.manifest.default.sync_j = jobs_manifest
mp.manifest.default.sync_j_max = None
cmd = sync.Sync()
opts, args = cmd.OptionParser.parse_args(argv)
cmd.ValidateOptions(opts, args)
with mock.patch.object(sync, "_rlimit_nofile", return_value=(256, 256)):
with mock.patch.object(os, "cpu_count", return_value=OS_CPU_COUNT):
cmd._ValidateOptionsWithManifest(opts, mp)
assert opts.jobs == jobs
assert opts.jobs_network == jobs_net
assert opts.jobs_checkout == jobs_check
@pytest.mark.parametrize(
"argv, jobs_manifest, jobs_manifest_max, jobs, jobs_net, jobs_check",
[
(["--jobs=10"], None, 5, 5, 5, 5),
(["--jobs=10", "--jobs-network=10"], None, 5, 5, 5, 5),
(["--jobs=10", "--jobs-checkout=10"], None, 5, 5, 5, 5),
],
)
def test_cli_jobs_sync_j_max(
argv, jobs_manifest, jobs_manifest_max, jobs, jobs_net, jobs_check
):
"""Tests --jobs option behavior with sync-j-max."""
mp = mock.MagicMock()
mp.manifest.default.sync_j = jobs_manifest
mp.manifest.default.sync_j_max = jobs_manifest_max
cmd = sync.Sync()
opts, args = cmd.OptionParser.parse_args(argv)


@@ -14,15 +14,10 @@
"""Unittests for the update_manpages module."""
import unittest

from release import update_manpages


class UpdateManpagesTest(unittest.TestCase):
    """Tests the update-manpages code."""

    def test_replace_regex(self):
        """Check that replace_regex works."""
        data = "\n\033[1mSummary\033[m\n"
        self.assertEqual(update_manpages.replace_regex(data), "\nSummary\n")


def test_replace_regex() -> None:
    """Check that replace_regex works."""
    data = "\n\033[1mSummary\033[m\n"
    assert update_manpages.replace_regex(data) == "\nSummary\n"


@@ -23,7 +23,8 @@ import tempfile
import unittest
from unittest import mock
import git_command
import utils_for_test
import main
import wrapper
@@ -408,18 +409,7 @@ class GitCheckoutTestCase(RepoWrapperTestCase):
        remote = os.path.join(cls.GIT_DIR, "remote")
        os.mkdir(remote)
        # Tests need to assume, that main is default branch at init,
        # which is not supported in config until 2.28.
        if git_command.git_require((2, 28, 0)):
            initstr = "--initial-branch=main"
        else:
            # Use template dir for init.
            templatedir = tempfile.mkdtemp(prefix=".test-template")
            with open(os.path.join(templatedir, "HEAD"), "w") as fp:
                fp.write("ref: refs/heads/main\n")
            initstr = "--template=" + templatedir
        run_git("init", initstr, cwd=remote)
        utils_for_test.init_git_tree(remote)
        run_git("commit", "--allow-empty", "-minit", cwd=remote)
        run_git("branch", "stable", cwd=remote)
        run_git("tag", "v1.0", cwd=remote)

tests/utils_for_test.py Normal file

@@ -0,0 +1,99 @@
# Copyright (C) 2026 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Various utility code used by tests.
If you want to write a per-test fixture, see conftest.py instead.
"""
import contextlib
import functools
from pathlib import Path
import subprocess
import tempfile
from typing import Optional, Union
import git_command
def init_git_tree(
    path: Union[str, Path],
    ref_format: Optional[str] = None,
) -> None:
    """Initialize `path` as a new git repo."""
    with contextlib.ExitStack() as stack:
        # Tests need to assume that main is the default branch at init,
        # which is not supported in config until git 2.28.
        cmd = ["git"]
        if ref_format:
            cmd += ["-c", f"init.defaultRefFormat={ref_format}"]
        cmd += ["init"]
        if git_command.git_require((2, 28, 0)):
            cmd += ["--initial-branch=main"]
        else:
            # Use a template dir for init; TemporaryDirectory (unlike
            # mkdtemp) is a context manager, so the ExitStack cleans it up.
            templatedir = stack.enter_context(
                tempfile.TemporaryDirectory(prefix="git-template")
            )
            (Path(templatedir) / "HEAD").write_text("ref: refs/heads/main\n")
            cmd += ["--template", templatedir]
        cmd += [path]
        subprocess.run(cmd, check=True)


@contextlib.contextmanager
def TempGitTree():
    """Create a new empty git checkout for testing."""
    with tempfile.TemporaryDirectory(prefix="repo-tests") as tempdir:
        init_git_tree(tempdir)
        yield tempdir


@functools.lru_cache(maxsize=None)
def supports_reftable() -> bool:
    """Check if git supports reftable."""
    with tempfile.TemporaryDirectory(prefix="repo-tests") as tempdir:
        proc = subprocess.run(
            ["git", "-c", "init.defaultRefFormat=reftable", "init"],
            cwd=tempdir,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            check=False,
        )
    return proc.returncode == 0


@functools.lru_cache(maxsize=None)
def supports_refs_migrate() -> bool:
    """Check if git supports `git refs migrate`."""
    with tempfile.TemporaryDirectory(prefix="repo-tests") as tempdir:
        subprocess.check_call(
            ["git", "-c", "init.defaultRefFormat=files", "init"],
            cwd=tempdir,
        )
        proc = subprocess.run(
            [
                "git",
                "refs",
                "migrate",
                "--ref-format=reftable",
                "--dry-run",
            ],
            cwd=tempdir,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            check=False,
        )
    return proc.returncode == 0
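These helpers let wrapper tests spin up throwaway repos with a predictable default branch. A minimal, self-contained sketch of the same pattern (a simplified stand-in for `init_git_tree`/`TempGitTree`, not the module itself; it assumes git 2.28+ on `PATH` so `--initial-branch` is available):

```python
import contextlib
import subprocess
import tempfile


@contextlib.contextmanager
def temp_git_tree():
    """Yield a throwaway git checkout whose default branch is main."""
    with tempfile.TemporaryDirectory(prefix="repo-tests") as tempdir:
        # Force a known default branch at init so tests don't depend on
        # the user's init.defaultBranch setting.
        subprocess.run(
            ["git", "init", "--initial-branch=main", tempdir],
            check=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
        yield tempdir


with temp_git_tree() as path:
    head = subprocess.run(
        ["git", "symbolic-ref", "HEAD"],
        cwd=path,
        check=True,
        stdout=subprocess.PIPE,
        text=True,
    ).stdout.strip()
print(head)  # → refs/heads/main
```

The `functools.lru_cache` on the `supports_*` probes means each capability check shells out to git at most once per test run, which matters when many tests skip themselves based on the same probe.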

tox.ini

@@ -1,63 +0,0 @@
# Copyright (C) 2019 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# https://tox.readthedocs.io/
[tox]
envlist = lint, py36, py37, py38, py39, py310, py311, py312
requires = virtualenv<20.22.0

[gh-actions]
python =
    3.6: py36
    3.7: py37
    3.8: py38
    3.9: py39
    3.10: py310
    3.11: py311
    3.12: py312

[testenv]
deps =
    -c constraints.txt
    black
    flake8
    isort
    pytest
    pytest-timeout
commands = {envpython} run_tests {posargs}
setenv =
    GIT_AUTHOR_NAME = Repo test author
    GIT_COMMITTER_NAME = Repo test committer
    EMAIL = repo@gerrit.nodomain

[testenv:lint]
skip_install = true
deps =
    -c constraints.txt
    black
    flake8
commands =
    black --check {posargs:. repo run_tests release/update-hooks release/update-manpages}
    flake8

[testenv:format]
skip_install = true
deps =
    -c constraints.txt
    black
    flake8
commands =
    black {posargs:. repo run_tests release/update-hooks release/update-manpages}
    flake8