Fix bugs affecting exception wrapping in rmtree callback #1700

Merged · 19 commits · Oct 10, 2023

Changes from 1 commit
Add initial test_env_vars_for_windows_tests
The new test method just verifies the current behavior of the
HIDE_WINDOWS_KNOWN_ERRORS and HIDE_WINDOWS_FREEZE_ERRORS
environment variables. This is so there is a test to modify when
changing that behavior. The purpose of these tests is *not* to
claim that the behavior of either variable is something code that
uses GitPython can (or has ever been able to) rely on.

This also adds pytest-subtests as a dependency, so multiple
failures from the subtests can be seen in the same test run.
EliahKagan committed Oct 9, 2023
commit 100ab989fcba0b1d1bd89b5b4b41ea5014da3d82
1 change: 1 addition & 0 deletions test-requirements.txt
@@ -7,4 +7,5 @@ pre-commit
pytest
pytest-cov
pytest-instafail
pytest-subtests
Member Author
I've used the subTest method of unittest.TestCase in test_env_vars_for_windows_tests. This pytest plugin gives pytest full support for that mechanism. It lets pytest continue with the other subtests even after one fails, as the unittest test runner would, and show separate passes and failures for the individual subtests. Separate results are shown only when at least one fails, so when everything passes the report remains uncluttered. (It also provides a subtests pytest fixture, though since this is not an autouse fixture it is not usable from within a method of a class that inherits from unittest.TestCase.)
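To illustrate the mechanism being discussed (this is a hedged sketch, not code from GitPython; the class and data are hypothetical), `unittest`'s `subTest` context manager gives each iteration its own pass/fail result:

```python
# Illustrative sketch of unittest's subTest mechanism; the class and
# values here are hypothetical, not from GitPython's test suite.
import unittest

class TestParity(unittest.TestCase):
    def test_even_numbers(self):
        for n in (2, 4, 6):
            # Each subtest is reported separately on failure, and a
            # failing subtest does not stop the remaining iterations.
            with self.subTest(n=n):
                self.assertEqual(n % 2, 0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestParity)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # → True
```

When run under pytest, the pytest-subtests plugin is what makes these subtest results be reported individually, as the unittest runner does.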

I think this is useful to have going forward, since we have many test cases that are large with many separate assertions of separate facts about the system under test, and as they are updated, some of them could be improved by having their separate claims divided into subtests so they can be individually described and so failures don't unnecessarily block later subtests.

However, if you'd rather this plugin not be used, it can be removed. test_env_vars_for_windows_tests could retain its subtests; they will still each run as long as none fails, just like multiple assertions in a test case without subtests. Or I could replace the subtests with more @ddt parameterization, with manual repetition, etc.

Member
It all sounds reasonable, and in particular, since you are the one doing the work, it seems fair that you use the tooling you see as the best fit. I also have no experience here and no preferences, and believe that anything that improves the tests in any way is very welcome. Thank you!

Member Author

@EliahKagan Oct 10, 2023

Sounds good. For the conceptually unrelated reason that it would facilitate more fine-grained xfail marking (so we don't habitually carry XPASSing cases that aren't actually expected to have failed), I think the test class involved here, test.test_util.TestUtils, should be split, so that the cygpath-related tests can be pure pytest tests. The reasons to split it are:

  • @pytest.mark.parametrize supports fine-grained marking of individual test cases, including as skip or xfail, but that form of parameterization cannot be applied to test methods in unittest.TestCase subclasses. So at least those tests would currently (i.e., until the underlying causes of some cases failing are addressed) benefit from being pure pytest tests.
  • Some of the test methods, not for cygpath, use the rorepo fixture provided by test.lib.helper.TestBase, which inherits from unittest.TestCase. Although this could be converted to a pytest fixture, I'd rather wait to do that until after more operating systems, at least Windows, are tested on CI, and also until I have more insight into whether it makes sense to do that at all, rather than replacing rorepo and other fixtures with new corresponding fixtures that use isolated repositories (#914, #1693 (review)). So at least those tests should currently remain in a unittest.TestCase subclass.

So long as it's acceptable to have multiple test classes in the same test module, this could be done at any time, and it may facilitate some other simplifications. I mention it here because I think it might lead to the elimination of subtests in this particular module, either by using @pytest.mark.parametrize for this too or for other reasons.

If that happens, I might remove the pytest-subtests test dependency, even though it might be re-added later, because the alternative of pytest-check may be preferable in some of the large test methods if they can also be converted to be pure pytest tests (because pytest-check supports a more compact syntax).
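As a hedged sketch of the per-case marking described above (the test name and cases are hypothetical, not GitPython's), @pytest.mark.parametrize with pytest.param lets a single case be marked xfail without touching the others, something unavailable on methods of unittest.TestCase subclasses:

```python
# Hypothetical pure-pytest test showing fine-grained xfail marking of
# one parametrized case; the cases themselves are illustrative only.
import pytest

@pytest.mark.parametrize(
    "value, expected",
    [
        ("1", True),
        ("", False),
        # Mark just this case as expected to fail, leaving the
        # other cases' outcomes reported normally.
        pytest.param(
            " ", False,
            marks=pytest.mark.xfail(reason="nonempty strings are truthy"),
        ),
    ],
)
def test_is_truthy(value, expected):
    assert bool(value) == expected
```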

pytest-sugar
45 changes: 45 additions & 0 deletions test/test_util.py
@@ -4,12 +4,14 @@
# This module is part of GitPython and is released under
# the BSD License: https://opensource.org/license/bsd-3-clause/

import ast
import contextlib
from datetime import datetime
import os
import pathlib
import pickle
import stat
import subprocess
import sys
import tempfile
import time
@@ -502,3 +504,46 @@ def test_remove_password_from_command_line(self):

assert cmd_4 == remove_password_if_present(cmd_4)
assert cmd_5 == remove_password_if_present(cmd_5)

@ddt.data("HIDE_WINDOWS_KNOWN_ERRORS", "HIDE_WINDOWS_FREEZE_ERRORS")
def test_env_vars_for_windows_tests(self, name):
def run_parse(value):
command = [
sys.executable,
"-c",
f"from git.util import {name}; print(repr({name}))",
]
output = subprocess.check_output(
command,
env=None if value is None else dict(os.environ, **{name: value}),
text=True,
)
return ast.literal_eval(output)

assert_true_iff_win = self.assertTrue if os.name == "nt" else self.assertFalse

truthy_cases = [
("unset", None),
("true-seeming", "1"),
("true-seeming", "true"),
("true-seeming", "True"),
("true-seeming", "yes"),
("true-seeming", "YES"),
("false-seeming", "0"),
("false-seeming", "false"),
("false-seeming", "False"),
("false-seeming", "no"),
("false-seeming", "NO"),
("whitespace", " "),
]
falsy_cases = [
("empty", ""),
]

for msg, env_var_value in truthy_cases:
with self.subTest(msg, env_var_value=env_var_value):
assert_true_iff_win(run_parse(env_var_value))

for msg, env_var_value in falsy_cases:
with self.subTest(msg, env_var_value=env_var_value):
self.assertFalse(run_parse(env_var_value))
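The run_parse helper in the diff above prints a value's repr() in a child process and parses it back with ast.literal_eval. A minimal standalone sketch of that round trip (the literal printed here is just an example, not one of the tested variables):

```python
# Sketch of the repr()/ast.literal_eval round trip used by run_parse:
# the child process prints a literal's repr, and the parent safely
# rebuilds the value without resorting to eval().
import ast
import subprocess
import sys

output = subprocess.check_output(
    [sys.executable, "-c", "print(repr(True))"],
    text=True,
)
value = ast.literal_eval(output)
print(value)  # → True
```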