From 54c06069cdf6e7e4bf91b5c3db265faea6e8d710 Mon Sep 17 00:00:00 2001 From: Hannes Hallqvist Date: Tue, 14 Feb 2023 11:35:05 +0100 Subject: [PATCH 1/9] remove issue template from original repo --- .github/ISSUE_TEMPLATE/bug_report.yml | 54 ---------------------- .github/ISSUE_TEMPLATE/config.yml | 5 -- .github/ISSUE_TEMPLATE/feature_request.yml | 19 -------- 3 files changed, 78 deletions(-) delete mode 100644 .github/ISSUE_TEMPLATE/bug_report.yml delete mode 100644 .github/ISSUE_TEMPLATE/config.yml delete mode 100644 .github/ISSUE_TEMPLATE/feature_request.yml diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml deleted file mode 100644 index 4ccdb52cad24..000000000000 --- a/.github/ISSUE_TEMPLATE/bug_report.yml +++ /dev/null @@ -1,54 +0,0 @@ -name: Bug report -description: Create a bug report to help us address errors in the repository -labels: [bug] -body: - - type: markdown - attributes: - value: > - Before requesting please search [existing issues](https://github.com/TheAlgorithms/Python/labels/bug). - Usage questions such as "How do I...?" belong on the - [Discord](https://discord.gg/c7MnfGFGa6) and will be closed. - - type: input - attributes: - label: "Repository commit" - description: > - The commit hash for `TheAlgorithms/Python` repository. You can get this - by running the command `git rev-parse HEAD` locally. - placeholder: "a0b0f414ae134aa1772d33bb930e5a960f9979e8" - validations: - required: true - - type: input - attributes: - label: "Python version (python --version)" - placeholder: "Python 3.10.7" - validations: - required: true - - type: textarea - attributes: - label: "Dependencies version (pip freeze)" - description: > - This is the output of the command `pip freeze --all`. Note that the - actual output might be different as compared to the placeholder text. - placeholder: | - appnope==0.1.3 - asttokens==2.0.8 - backcall==0.2.0 - ... - validations: - required: true - - type: textarea - attributes: - label: "Expected behavior" - description: "Describe the behavior you expect. May include images or videos." - validations: - required: true - - type: textarea - attributes: - label: "Actual behavior" - validations: - required: true diff --git a/.github/ISSUE_TEMPLATE/config.yml b/.github/ISSUE_TEMPLATE/config.yml deleted file mode 100644 index 62019bb08938..000000000000 --- a/.github/ISSUE_TEMPLATE/config.yml +++ /dev/null @@ -1,5 +0,0 @@ -blank_issues_enabled: false -contact_links: - - name: Discord community - url: https://discord.gg/c7MnfGFGa6 - about: Have any questions or need any help? Please contact us via Discord diff --git a/.github/ISSUE_TEMPLATE/feature_request.yml b/.github/ISSUE_TEMPLATE/feature_request.yml deleted file mode 100644 index 09a159b2193e..000000000000 --- a/.github/ISSUE_TEMPLATE/feature_request.yml +++ /dev/null @@ -1,19 +0,0 @@ -name: Feature request -description: Suggest features, propose improvements, discuss new ideas. -labels: [enhancement] -body: - - type: markdown - attributes: - value: > - Before requesting please search [existing issues](https://github.com/TheAlgorithms/Python/labels/enhancement). - Usage questions such as "How do I...?" belong on the - [Discord](https://discord.gg/c7MnfGFGa6) and will be closed. - - type: textarea - attributes: - label: "Feature description" - description: > - This could be new algorithms, data structures or improving any existing - implementations.
- validations: - required: true From 6e9cbd8c11e1b4d8408ce866c44eb5d115bbb0b8 Mon Sep 17 00:00:00 2001 From: BillXu0424 <1065602877@qq.com> Date: Tue, 14 Feb 2023 12:52:13 +0100 Subject: [PATCH 2/9] doc. issue/11: add report template to repo --- report.md | 98 +++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 98 insertions(+) create mode 100644 report.md diff --git a/report.md b/report.md new file mode 100644 index 000000000000..0e33d3734d2a --- /dev/null +++ b/report.md @@ -0,0 +1,98 @@ +# Report for assignment 3 + +## Project + +Name: TheAlgorithms/Python + +URL: **[GitHub - TheAlgorithms/Python: All Algorithms implemented in Python](https://github.com/TheAlgorithms/Python)** + +Description: The project contains algorithms implemented in Python that can be used for learning purposes. + +## Onboarding experience + +Did it build and run as documented? + +See the assignment for details; if everything works out of the box, +there is no need to write much here. If the first project(s) you picked +ended up being unsuitable, you can describe the "onboarding experience" +for each project, along with reason(s) why you changed to a different one. + +## Complexity + +1. What are your results for ten complex functions? + * Did all methods (tools vs. manual count) get the same result? + * Are the results clear? +2. Are the functions just complex, or also long? +3. What is the purpose of the functions? +4. Are exceptions taken into account in the given measurements? +5. Is the documentation clear w.r.t. all the possible outcomes? + +## Refactoring + +Plan for refactoring complex code: + +Estimated impact of refactoring (lower CC, but other drawbacks?). + +Carried out refactoring (optional, P+): + +git diff ... + +## Coverage + +### Tools + +Document your experience in using a "new"/different coverage tool. + +How well was the tool documented? Was it possible/easy/difficult to +integrate it with your build environment? + +### Your own coverage tool + +Show a patch (or link to a branch) that shows the instrumented code to +gather coverage measurements. + +The patch is probably too long to be copied here, so please add +the git command that is used to obtain the patch instead: + +git diff ... + +What kinds of constructs does your tool support, and how accurate is +its output? + +### Evaluation + +1. How detailed is your coverage measurement? + +2. What are the limitations of your own tool? + +3. Are the results of your tool consistent with existing coverage tools? + +## Coverage improvement + +Show the comments that describe the requirements for the coverage. + +Report of old coverage: [link] + +Report of new coverage: [link] + +Test cases added: + +git diff ... + +Number of test cases added: two per team member (P) or at least four (P+). + +## Self-assessment: Way of working + +Current state according to the Essence standard: ... + +Was the self-assessment unanimous? Any doubts about certain items? + +How have you improved so far? + +Where is potential for improvement? + +## Overall experience + +What are your main take-aways from this project? What did you learn? + +Is there something special you want to mention here?
From 84786f2e426726d725dba7b6fb50eb268ce605bc Mon Sep 17 00:00:00 2001 From: Hannes Hallqvist Date: Fri, 17 Feb 2023 14:51:05 +0100 Subject: [PATCH 3/9] Added coverage answers to report --- report.md | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/report.md b/report.md index 0e33d3734d2a..d6266b500d7a 100644 --- a/report.md +++ b/report.md @@ -46,6 +46,12 @@ Document your experience in using a "new"/different coverage tool. How well was the tool documented? Was it possible/easy/difficult to integrate it with your build environment? +I used the [coverage](https://coverage.readthedocs.io/en/7.1.0/) tool to measure branch coverage. It was really easy to use and well documented. Our project used the `pytest`, `doctest`, and `unittest` Python test frameworks, but the coverage tool integrated fine with them all. + +The installation was a simple `pip` call, and measuring coverage was as simple as replacing `python3 -m unittest file.py` with `coverage run -m unittest file.py` + +Overall, a pleasant experience! + ### Your own coverage tool From eb956620651abe5132c981f2554d04114e7eae72 Mon Sep 17 00:00:00 2001 From: Hannes Hallqvist Date: Fri, 17 Feb 2023 14:53:42 +0100 Subject: [PATCH 4/9] fix typo --- report.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/report.md b/report.md index d6266b500d7a..88d605aa627b 100644 --- a/report.md +++ b/report.md @@ -46,7 +46,7 @@ Document your experience in using a "new"/different coverage tool. How well was the tool documented? Was it possible/easy/difficult to integrate it with your build environment? -I used the [coverage](https://coverage.readthedocs.io/en/7.1.0/) tool to measure branch coverage. It was really easy to use and well documented. Our project used the `pytest`, `doctest`, and `unittest` Python test frameworks, but the coverage tool integrated fine with them all. +We used the [coverage](https://coverage.readthedocs.io/en/7.1.0/) tool to measure branch coverage. It was really easy to use and well documented. Our project used the `pytest`, `doctest`, and `unittest` Python test frameworks, but the coverage tool integrated fine with them all. The installation was a simple `pip` call, and measuring coverage was as simple as replacing `python3 -m unittest file.py` with `coverage run -m unittest file.py` Overall, a pleasant experience! From e699a5bbdbce1ad96a864fd0cde52ab0128a0d78 Mon Sep 17 00:00:00 2001 From: BillXu0424 <1065602877@qq.com> Date: Sat, 18 Feb 2023 14:38:05 +0100 Subject: [PATCH 5/9] refactor.
issue/5: refactored canny function --- .../edge_detection/canny.py | 256 ++++++++++++------ 1 file changed, 180 insertions(+), 76 deletions(-) diff --git a/digital_image_processing/edge_detection/canny.py b/digital_image_processing/edge_detection/canny.py index a830355267c4..a746f68f785c 100644 --- a/digital_image_processing/edge_detection/canny.py +++ b/digital_image_processing/edge_detection/canny.py @@ -1,11 +1,61 @@ import cv2 import numpy as np -from digital_image_processing.filters.convolve import img_convolve -from digital_image_processing.filters.sobel_filter import sobel_filter +# from digital_image_processing.filters.convolve import img_convolve +# from digital_image_processing.filters.sobel_filter import sobel_filter + +from numpy import dot, pad, ravel, zeros PI = 180 +def sobel_filter(image): + kernel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]) + kernel_y = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]]) + + dst_x = np.abs(img_convolve(image, kernel_x)) + dst_y = np.abs(img_convolve(image, kernel_y)) + # modify the pix within [0, 255] + dst_x = dst_x * 255 / np.max(dst_x) + dst_y = dst_y * 255 / np.max(dst_y) + + dst_xy = np.sqrt((np.square(dst_x)) + (np.square(dst_y))) + dst_xy = dst_xy * 255 / np.max(dst_xy) + dst = dst_xy.astype(np.uint8) + + theta = np.arctan2(dst_y, dst_x) + return dst, theta + +def im2col(image, block_size): + rows, cols = image.shape + dst_height = cols - block_size[1] + 1 + dst_width = rows - block_size[0] + 1 + image_array = zeros((dst_height * dst_width, block_size[1] * block_size[0])) + row = 0 + for i in range(0, dst_height): + for j in range(0, dst_width): + window = ravel(image[i : i + block_size[0], j : j + block_size[1]]) + image_array[row, :] = window + row += 1 + + return image_array + +def img_convolve(image, filter_kernel): + height, width = image.shape[0], image.shape[1] + k_size = filter_kernel.shape[0] + pad_size = k_size // 2 + # Pads image with the edge values of array. + image_tmp = pad(image, pad_size, mode="edge") + + # im2col, turn the k_size*k_size pixels into a row and np.vstack all rows + image_array = im2col(image_tmp, (k_size, k_size)) + + # turn the kernel into shape(k*k, 1) + kernel_array = ravel(filter_kernel) + # reshape and get the dst image + dst = dot(image_array, kernel_array).reshape(height, width) + return dst + + def gen_gaussian_kernel(k_size, sigma): center = k_size // 2 @@ -17,98 +67,152 @@ def gen_gaussian_kernel(k_size, sigma): ) return g - -def canny(image, threshold_low=15, threshold_high=30, weak=128, strong=255): - image_row, image_col = image.shape[0], image.shape[1] - # gaussian_filter - gaussian_out = img_convolve(image, gen_gaussian_kernel(9, sigma=1.4)) - # get the gradient and degree by sobel_filter - sobel_grad, sobel_theta = sobel_filter(gaussian_out) - gradient_direction = np.rad2deg(sobel_theta) - gradient_direction += PI - - dst = np.zeros((image_row, image_col)) - +def non_maximum_suppression(image, grad_dir, grad_mag, strong, weak, low, high): """ Non-maximum suppression. If the edge strength of the current pixel is the largest compared to the other pixels in the mask with the same direction, the value will be preserved. Otherwise, the value will be suppressed. 
""" + image_row, image_col = image.shape for row in range(1, image_row - 1): for col in range(1, image_col - 1): - direction = gradient_direction[row, col] - - if ( - 0 <= direction < 22.5 - or 15 * PI / 8 <= direction <= 2 * PI - or 7 * PI / 8 <= direction <= 9 * PI / 8 - ): - w = sobel_grad[row, col - 1] - e = sobel_grad[row, col + 1] - if sobel_grad[row, col] >= w and sobel_grad[row, col] >= e: - dst[row, col] = sobel_grad[row, col] - - elif (PI / 8 <= direction < 3 * PI / 8) or ( - 9 * PI / 8 <= direction < 11 * PI / 8 - ): - sw = sobel_grad[row + 1, col - 1] - ne = sobel_grad[row - 1, col + 1] - if sobel_grad[row, col] >= sw and sobel_grad[row, col] >= ne: - dst[row, col] = sobel_grad[row, col] - - elif (3 * PI / 8 <= direction < 5 * PI / 8) or ( - 11 * PI / 8 <= direction < 13 * PI / 8 - ): - n = sobel_grad[row - 1, col] - s = sobel_grad[row + 1, col] - if sobel_grad[row, col] >= n and sobel_grad[row, col] >= s: - dst[row, col] = sobel_grad[row, col] - - elif (5 * PI / 8 <= direction < 7 * PI / 8) or ( - 13 * PI / 8 <= direction < 15 * PI / 8 - ): - nw = sobel_grad[row - 1, col - 1] - se = sobel_grad[row + 1, col + 1] - if sobel_grad[row, col] >= nw and sobel_grad[row, col] >= se: - dst[row, col] = sobel_grad[row, col] - - """ - High-Low threshold detection. If an edge pixel’s gradient value is higher - than the high threshold value, it is marked as a strong edge pixel. If an - edge pixel’s gradient value is smaller than the high threshold value and - larger than the low threshold value, it is marked as a weak edge pixel. If - an edge pixel's value is smaller than the low threshold value, it will be - suppressed. - """ - if dst[row, col] >= threshold_high: - dst[row, col] = strong - elif dst[row, col] <= threshold_low: - dst[row, col] = 0 - else: - dst[row, col] = weak - + direction = grad_dir[row, col] + angle_case1(direction, grad_mag, image, row, col) + angle_case2(direction, grad_mag, image, row, col) + angle_case3(direction, grad_mag, image, row, col) + angle_case4(direction, grad_mag, image, row, col) + threshold(image, row, col, high, low, strong, weak) + +def angle_case1(dir, grad_mag, image, row, col): + """ + Suppress the non-maximum value horizontally. + Args: + dir: gradient direction + grad_mag: map of gradient magnitude + image: edge map + row: first dimension coordinate of image + col: second dimension coordinate of image + """ + if ( + 0 <= dir < 22.5 + or 15 * PI / 8 <= dir <= 2 * PI + or 7 * PI / 8 <= dir <= 9 * PI / 8 + ): + w = grad_mag[row, col - 1] + e = grad_mag[row, col + 1] + if grad_mag[row, col] >= w and grad_mag[row, col] >= e: + image[row, col] = grad_mag[row, col] + +def angle_case2(dir, grad_mag, image, row, col): + """ + Suppress the non-maximum value subdiagonally. + Args: + dir: gradient direction + grad_mag: map of gradient magnitude + image: edge map + row: first dimension coordinate of image + col: second dimension coordinate of image + """ + if (PI / 8 <= dir < 3 * PI / 8) or ( + 9 * PI / 8 <= dir < 11 * PI / 8 + ): + sw = grad_mag[row + 1, col - 1] + ne = grad_mag[row - 1, col + 1] + if grad_mag[row, col] >= sw and grad_mag[row, col] >= ne: + image[row, col] = grad_mag[row, col] + +def angle_case3(dir, grad_mag, image, row, col): + """ + Suppress the non-maximum value vertically. 
+ Args: + dir: gradient direction + grad_mag: map of gradient magnitude + image: edge map + row: first dimension coordinate of image + col: second dimension coordinate of image + """ + if (3 * PI / 8 <= dir < 5 * PI / 8) or ( + 11 * PI / 8 <= dir < 13 * PI / 8 + ): + n = grad_mag[row - 1, col] + s = grad_mag[row + 1, col] + if grad_mag[row, col] >= n and grad_mag[row, col] >= s: + image[row, col] = grad_mag[row, col] + +def angle_case4(dir, grad_mag, image, row, col): + """ + Suppress the non-maximum value diagonally. + Args: + dir: gradient direction + grad_mag: map of gradient magnitude + image: edge map + row: first dimension coordinate of image + col: second dimension coordinate of image + """ + if (5 * PI / 8 <= dir < 7 * PI / 8) or ( + 13 * PI / 8 <= dir < 15 * PI / 8 + ): + nw = grad_mag[row - 1, col - 1] + se = grad_mag[row + 1, col + 1] + if grad_mag[row, col] >= nw and grad_mag[row, col] >= se: + image[row, col] = grad_mag[row, col] + +def threshold(image, row, col, high, low, strong, weak): + """ + High-Low threshold detection. If an edge pixel's gradient value is higher + than the high threshold value, it is marked as a strong edge pixel. If an + edge pixel's gradient value is smaller than the high threshold value and + larger than the low threshold value, it is marked as a weak edge pixel. If + an edge pixel's value is smaller than the low threshold value, it will be + suppressed. + """ + if image[row, col] >= high: + image[row, col] = strong + elif image[row, col] <= low: + image[row, col] = 0 + else: + image[row, col] = weak + +def edge_tracking(image, weak, strong): """ Edge tracking. Usually a weak edge pixel caused from true edges will be connected to a strong edge pixel while noise responses are unconnected. As long as there is one strong edge pixel that is involved in its 8-connected neighborhood, that weak edge point can be identified as one that should be preserved. 
""" + image_row, image_col = image.shape for row in range(1, image_row): for col in range(1, image_col): - if dst[row, col] == weak: + if image[row, col] == weak: if 255 in ( - dst[row, col + 1], - dst[row, col - 1], - dst[row - 1, col], - dst[row + 1, col], - dst[row - 1, col - 1], - dst[row + 1, col - 1], - dst[row - 1, col + 1], - dst[row + 1, col + 1], + image[row, col + 1], + image[row, col - 1], + image[row - 1, col], + image[row + 1, col], + image[row - 1, col - 1], + image[row + 1, col - 1], + image[row - 1, col + 1], + image[row + 1, col + 1], ): - dst[row, col] = strong + image[row, col] = strong else: - dst[row, col] = 0 + image[row, col] = 0 + + +def canny(image, threshold_low=15, threshold_high=30, weak=128, strong=255): + image_row, image_col = image.shape[0], image.shape[1] + # gaussian_filter + gaussian_out = img_convolve(image, gen_gaussian_kernel(9, sigma=1.4)) + # get the gradient and degree by sobel_filter + sobel_grad, sobel_theta = sobel_filter(gaussian_out) + gradient_direction = np.rad2deg(sobel_theta) + gradient_direction += PI + + dst = np.zeros((image_row, image_col)) + + non_maximum_suppression(dst, gradient_direction, sobel_grad, strong, weak, threshold_low, threshold_high) + + edge_tracking(dst, weak, strong) return dst From 3f4b5eaf4f929a68a2868aa3b3d02e91b5358b14 Mon Sep 17 00:00:00 2001 From: Hannes Hallqvist Date: Sun, 19 Feb 2023 13:05:29 +0100 Subject: [PATCH 6/9] added script to run specific tests and calculate coverage --- specifict_tests.py | 61 ++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 61 insertions(+) create mode 100644 specifict_tests.py diff --git a/specifict_tests.py b/specifict_tests.py new file mode 100644 index 000000000000..b86fbcfc0513 --- /dev/null +++ b/specifict_tests.py @@ -0,0 +1,61 @@ +import doctest +import coverage +import pytest +import numpy as np +# methods to test: _remove_repair, remove +from data_structures.binary_tree.red_black_tree import RedBlackTree +from linear_algebra.src.polynom_for_points import points_to_polynomial +from graphs.a_star import search +# from graphs.bi_directional_dijkstra import bidirectional_dij +from matrix.inverse_of_matrix import inverse_of_matrix +from project_euler.problem_049.sol1 import solution +from project_euler.problem_551.sol1 import next_term +from cellular_automata.conways_game_of_life import new_generation + +# needed new_generation +BLINKER = [[0, 1, 0], [0, 1, 0], [0, 1, 0]] + + +cov = coverage.Coverage(branch=True, ) + +cov.start() +doctest.run_docstring_examples(RedBlackTree._remove_repair, globals(), name="_remove_repair") +pytest.main(["digital_image_processing/test_digital_image_processing.py::test_canny"]) # separate testfile +doctest.run_docstring_examples(points_to_polynomial, globals(), name="points_to_polynomial") +doctest.run_docstring_examples(search, globals(), name="search") +import graphs.bi_directional_dijkstra # runs test in main +doctest.run_docstring_examples(inverse_of_matrix, globals(), name="inverse_of_matrix") +doctest.run_docstring_examples(solution, globals(), name="solution") +doctest.run_docstring_examples(next_term, globals(), name="next_term") +doctest.run_docstring_examples(new_generation, globals(), name="new_generation") +doctest.run_docstring_examples(RedBlackTree.remove, globals(), name="remove") + +cov.stop() +cov.save() + +# the pytest runs alot of test we are not interested in, omit these +to_omit = ["/usr/lib/python3/dist-packages/PIL/Image.py", + "/usr/lib/python3/dist-packages/attr/_compat.py", + 
"digital_image_processing/sepia.py", + "digital_image_processing/dithering/burkes.py", + "digital_image_processing/filters/local_binary_pattern.py", + "digital_image_processing/filters/median_filter.py", + "digital_image_processing/filters/gaussian_filter.py", + "digital_image_processing/convert_to_negative.py", + "digital_image_processing/resize/resize.py", + "digital_image_processing/change_contrast.py", + "/usr/lib/python3/dist-packages/attr/_make.py", + "digital_image_processing/test_digital_image_processing.py", + "digital_image_processing/filters/sobel_filter.py", + "digital_image_processing/filters/convolve.py", + "config-3.py", + "config.py", + "digital_image_processing/__init__.py", + "digital_image_processing/dithering/__init__.py", + "digital_image_processing/edge_detection/__init__.py", + "digital_image_processing/filters/__init__.py", + "digital_image_processing/resize/__init__.py"] + +cov.report(omit=to_omit, skip_empty=False, show_missing=True) + +#doctest.run_docstring_examples(solution, globals()) From ce3543f239cc57f5684fc78f99f4595845eaa7fb Mon Sep 17 00:00:00 2001 From: Hannes Hallqvist Date: Sun, 19 Feb 2023 13:05:39 +0100 Subject: [PATCH 7/9] added results to report --- report.md | 31 ++++++++++++++++++++++++++++++- 1 file changed, 30 insertions(+), 1 deletion(-) diff --git a/report.md b/report.md index 88d605aa627b..1e89e9217720 100644 --- a/report.md +++ b/report.md @@ -77,7 +77,36 @@ its output? Show the comments that describe the requirements for the coverage. -Report of old coverage: [link] +Report of old coverage: +- `_remove_repair@red_black_tree.py` : 0% (no tests) # py -m doctest d/d/rbt.py +- `canny@canny.py` : 84% # `coverage run --branch -m pytest digital_image_processing/test_digital_image_processing.py` +- `points_to_polynomial@polynom_for_points` : 82% # doctest, single function +- `search@a_star.py` : 0% (no tests) +- `bidirectional_dij@bi_directional_djikstra` : 89% # coverage run -m doctest graphs/bi_directional_dijkstra.py +- `inverse_of_matrix@inverse_of_matrix.py` : 100% (skipping) +- `solution@problem_049/sol1.py` : 98% +- `problem_551/sol1.py` : 98% +- `conway_game_of_life` : 72% +- ` + +function@file Stmts Miss Branch BrPart Cover +------------------------------------------------------------------------------------------------- +`bidirectional_dij@graphs/bi_directional_dijkstra.py` 65 58 38 1 8% +`new_generation@cellular_automata/conways_game_of_life.py` 47 20 30 0 64% +`points_to_polynomial@linear_algebra/src/polynom_for_points.py` 68 16 38 4 79% +`canny@digital_image_processing/edge_detection/canny.py` 60 9 34 2 84% +`solution@project_euler/problem_049/sol1.py` 57 8 50 1 90% +`inverse_of_matrix@matrix/inverse_of_matrix.py` 37 4 20 0 93% + +(missing tests:) +`_remove_repair@data_structures/binary_tree/red_black_tree.py` +`remove@data_structures/binary_tree/red_black_tree.py` +`search@graphs/a_star.py +`next_term@project_euler/problem_551/sol1.py` +------------------------------------------------------------------------------------------------- +TOTAL 334 115 210 8 68% + + Report of new coverage: [link] From 83b913b626a0a9e9214066ef23afb9c2a63e86af Mon Sep 17 00:00:00 2001 From: Hannes Hallqvist Date: Sun, 19 Feb 2023 13:08:34 +0100 Subject: [PATCH 8/9] updated test reflections in report --- report.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/report.md b/report.md index 1e89e9217720..cde5437a8332 100644 --- a/report.md +++ b/report.md @@ -50,7 +50,8 @@ We used the (coverage)[https://coverage.readthedocs.io/en/7.1.0/] 
tool to measur The installation was a simple `pip` call, and measuring coverage was as simple as replacing `python3 -m uniitest file.py` with `coverage run -m uniitest file.py` -Overall, a pleasant experience! +However, as the project used tests a bit differently between files, some with doctests on a function basis and some with separate `test_` files, a bit of tinkering was required to test the branch coverage for the selected functions. A separate test runner script was created to extract only our wanted results, see `specific_tests.py`. + ### Your own coverage tool From c6ffa6bed99c614d5d15d5b9566ad0ebb03618f2 Mon Sep 17 00:00:00 2001 From: Hannes Hallqvist Date: Sun, 19 Feb 2023 13:09:45 +0100 Subject: [PATCH 9/9] added empty files as test runner otherwise complains about these files missing --- config-3.py | 0 config.py | 0 2 files changed, 0 insertions(+), 0 deletions(-) create mode 100644 config-3.py create mode 100644 config.py diff --git a/config-3.py b/config-3.py new file mode 100644 index 000000000000..e69de29bb2d1 diff --git a/config.py b/config.py new file mode 100644 index 000000000000..e69de29bb2d1