From 0dfc2b173216552d25da7be90ccf6bd8767426e1 Mon Sep 17 00:00:00 2001
From: Stefano Magni
Date: Mon, 12 Aug 2019 15:18:57 +0200
Subject: [PATCH 001/502] Fix 1.10 chapter "otherwise" bold
---
readme.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/readme.md b/readme.md
index 9ff0db8f..d56c6d79 100644
--- a/readme.md
+++ b/readme.md
@@ -590,7 +590,7 @@ A more elegant alternative is the using the one-line dedicated Chai assertion: e
-❌ **Otherwise:**It will be challenging to infer from the test reports (e.g. CI reports) what went wrong
+❌ **Otherwise:** It will be challenging to infer from the test reports (e.g. CI reports) what went wrong
@@ -1912,4 +1912,4 @@ A5NDY3LDE1OTcyNDA3NzUsMjEwMzQzMDE2NiwtMzc1NjYzODQs
LTEyODY1MzE2MDAsLTI5NzUwMjYyMyw0MzUxOTU4ODAsMTc2NT
k2NzEzMCw3OTQ4ODg1MTcsLTE4MDA1NTUwMDYsOTM1MTI0ODc5
LDc3NTU2MTAxOSwtMjEwMzIxODMzM119
--->
\ No newline at end of file
+-->
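
For reference, the one-line dedicated Chai assertion mentioned in the hunk context above looks roughly like this (a minimal sketch assuming chai-as-promised; `addNewProduct` is the function under test from the guide's own examples):

```javascript
const chai = require('chai');
chai.use(require('chai-as-promised'));
const { expect } = chai;

it('When no product name, it throws error 400', async () => {
  // one focused assertion instead of a long try-catch block
  await expect(addNewProduct({})).to.be.rejectedWith('400');
});
```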
From efb2742de5d26db452e3ef502842ed463e680a74 Mon Sep 17 00:00:00 2001
From: Stefano Magni
Date: Mon, 12 Aug 2019 15:36:54 +0200
Subject: [PATCH 002/502] Fix Sinon link
---
readme.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/readme.md b/readme.md
index 9ff0db8f..05aa957c 100644
--- a/readme.md
+++ b/readme.md
@@ -1220,7 +1220,7 @@ test('movie title appears', async () => {
## ⚪ ️ 3.6 Stub flakky and slow resources like backend APIs
-:white_check_mark: **Do:** When coding your mainstream tests (not E2E tests), avoid involving any resource that is beyond your responsibility and control like backend API and use stubs instead (i.e. test double). Practically, instead of real network calls to APIs, use some test double library (like [Sinon]https://sinonjs.org/, [Test doubles](https://www.npmjs.com/package/testdouble), etc) for stubbing the API response. The main benefit is preventing flakiness - testing or staging APIs by definition are not highly stable and from time to time will fail your tests although YOUR component behaves just fine (production env was not meant for testing and it usually throttles requests). Doing this will allow simulating various API behavior that should drive your component behavior as when no data was found or the case when API throws an error. Last but not least, network calls will greatly slow down the tests
+:white_check_mark: **Do:** When coding your mainstream tests (not E2E tests), avoid involving any resource that is beyond your responsibility and control like backend APIs, and use stubs instead (i.e. test doubles). Practically, instead of real network calls to APIs, use some test double library (like [Sinon](https://sinonjs.org/), [Test doubles](https://www.npmjs.com/package/testdouble), etc.) for stubbing the API response. The main benefit is preventing flakiness - testing or staging APIs by definition are not highly stable and from time to time will fail your tests although YOUR component behaves just fine (production env was not meant for testing and it usually throttles requests). Doing this also allows simulating various API behaviors that should drive your component behavior, such as when no data was found or when the API throws an error. Last but not least, network calls will greatly slow down the tests
@@ -1912,4 +1912,4 @@ A5NDY3LDE1OTcyNDA3NzUsMjEwMzQzMDE2NiwtMzc1NjYzODQs
LTEyODY1MzE2MDAsLTI5NzUwMjYyMyw0MzUxOTU4ODAsMTc2NT
k2NzEzMCw3OTQ4ODg1MTcsLTE4MDA1NTUwMDYsOTM1MTI0ODc5
LDc3NTU2MTAxOSwtMjEwMzIxODMzM119
--->
\ No newline at end of file
+-->
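
A minimal sketch of the stubbing approach bullet 3.6 describes, assuming Sinon's stub API (`ProductsApi` and `productsService` are illustrative names):

```javascript
const sinon = require('sinon');

it('When the products API returns no data, return an empty list', async () => {
  // replace the real network call with a controlled test double
  const getProductsStub = sinon.stub(ProductsApi, 'getProducts').resolves([]);

  const result = await productsService.fetchProducts();

  expect(result).to.deep.equal([]);
  getProductsStub.restore(); // don't leak the stub into other tests
});
```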
From 6d4852d8aea2ff895cbf3923f97f8a4a1cea5368 Mon Sep 17 00:00:00 2001
From: Stefano Magni
Date: Mon, 12 Aug 2019 15:43:21 +0200
Subject: [PATCH 003/502] Fix the Percy name
---
readme.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/readme.md b/readme.md
index 9ff0db8f..b43b626e 100644
--- a/readme.md
+++ b/readme.md
@@ -1414,7 +1414,7 @@ Feature: Twitter new tweet
## ⚪ ️ 3.11 Detect visual issues with automated tools
-:white_check_mark: **Do:** Setup automated tools to capture UI screenshots when changes are presented and detect visual issues like content overlapping or breaking. This ensures that not only the right data is prepared but also the user can conveniently see it. This technique is not widely adopted, our testing mindset leans toward functional tests but it's the visuals what the user experience and with so many device types it's very easy to overlook some nasty UI bug. Some free tools can provide the basics - generate and save screenshots for the inspection of human eyes. While this approach might be sufficient for small apps, it's flawed as any other manual testing that demands human labor anytime something changes. On the other hand, it's quite challenging to detect UI issues automatically due to the lack of clear definition - this is where the field of 'Visual Regression' chime in and solve this puzzle by comparing old UI with the latest changes and detect differences. Some OSS/free tools can provide some of this functionality (e.g. [wraith](https://github.com/BBC-News/wraith), [PhantomCSS]([https://github.com/HuddleEng/PhantomCSS](https://github.com/HuddleEng/PhantomCSS)) but might charge signficant setup time. The commercial line of tools (e.g. [Applitools](https://applitools.com/), [Perci.io](https://percy.io/)) takes is a step further by smoothing the installation and packing advanced features like management UI, alerting, smart capturing by elemeinating 'visual noise' (e.g. ads, animations) and even root cause analysis of the DOM/css changes that led to the issue
+:white_check_mark: **Do:** Set up automated tools to capture UI screenshots when changes are presented and detect visual issues like content overlapping or breaking. This ensures that not only the right data is prepared but also that the user can conveniently see it. This technique is not widely adopted; our testing mindset leans toward functional tests, but it's the visuals that the user experiences, and with so many device types it's very easy to overlook some nasty UI bug. Some free tools can provide the basics - generate and save screenshots for the inspection of human eyes. While this approach might be sufficient for small apps, it's as flawed as any other manual testing that demands human labor anytime something changes. On the other hand, it's quite challenging to detect UI issues automatically due to the lack of a clear definition - this is where the field of 'Visual Regression' chimes in and solves this puzzle by comparing old UI with the latest changes and detecting differences. Some OSS/free tools can provide some of this functionality (e.g. [wraith](https://github.com/BBC-News/wraith), [PhantomCSS](https://github.com/HuddleEng/PhantomCSS)) but might require significant setup time. The commercial line of tools (e.g. [Applitools](https://applitools.com/), [Percy.io](https://percy.io/)) takes it a step further by smoothing the installation and packing advanced features like management UI, alerting, smart capturing by eliminating 'visual noise' (e.g. ads, animations) and even root cause analysis of the DOM/CSS changes that led to the issue
@@ -1912,4 +1912,4 @@ A5NDY3LDE1OTcyNDA3NzUsMjEwMzQzMDE2NiwtMzc1NjYzODQs
LTEyODY1MzE2MDAsLTI5NzUwMjYyMyw0MzUxOTU4ODAsMTc2NT
k2NzEzMCw3OTQ4ODg1MTcsLTE4MDA1NTUwMDYsOTM1MTI0ODc5
LDc3NTU2MTAxOSwtMjEwMzIxODMzM119
--->
\ No newline at end of file
+-->
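
The 'basics' that OSS tools provide can be reproduced in a few lines; a sketch assuming the pixelmatch and pngjs packages (file paths are illustrative):

```javascript
const fs = require('fs');
const { PNG } = require('pngjs');
const pixelmatch = require('pixelmatch');

const baseline = PNG.sync.read(fs.readFileSync('screenshots/baseline/home.png'));
const current = PNG.sync.read(fs.readFileSync('screenshots/current/home.png'));
const { width, height } = baseline;
const diff = new PNG({ width, height });

// count the pixels that differ beyond the threshold and save a visual diff image
const mismatched = pixelmatch(baseline.data, current.data, diff.data, width, height, { threshold: 0.1 });
fs.writeFileSync('screenshots/diff/home.png', PNG.sync.write(diff));

if (mismatched > 0) {
  throw new Error(`Visual regression: ${mismatched} pixels differ on the home page`);
}
```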
From 6f4316cc766e57b60dd20b4ab69aaa165a786ec1 Mon Sep 17 00:00:00 2001
From: Stefano Magni
Date: Mon, 12 Aug 2019 15:54:01 +0200
Subject: [PATCH 004/502] Align the "dependency updates" list to the "mutation
testing" one
---
readme.md | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/readme.md b/readme.md
index 9ff0db8f..620892a0 100644
--- a/readme.md
+++ b/readme.md
@@ -1802,7 +1802,13 @@ license-checker --summary --failOn BSD
## ⚪ ️5.7 Automate dependency updates
-:white_check_mark: **Do:** Yarn and npm latest introduction of package-lock.json introduced a serious challenge (the road to hell is paved with good intentions) — by default now, packages are no longer getting updates. Even a team running many fresh deployments with ‘npm install’ & ‘npm update’ won’t get any new updates. This leads to subpar dependent packages versions at best or to vulnerable code at worst. Teams now rely on developers goodwill and memory to manually update the package.json or use tools [like ncu](https://www.npmjs.com/package/npm-check-updates) manually. A more reliable way could be to automate the process of getting the most reliable dependency versions, though there are no silver bullet solutions yet there are two possible automation roads: (1) CI can fail builds that have obsolete dependencies — using tools like [‘npm outdated’](https://docs.npmjs.com/cli/outdated) or ‘npm-check-updates (ncu)’ . Doing so will enforce developers to update dependencies. (2) Use commercial tools that scan the code and automatically send pull requests with updated dependencies. One interesting question remaining is what should be the dependency update policy — updating on every patch generates too many overhead, updating right when a major is released might point to an unstable version (many packages found vulnerable on the very first days after being released, [see the](https://nodesource.com/blog/a-high-level-post-mortem-of-the-eslint-scope-security-incident/) eslint-scope incident). An efficient update policy may allow some ‘vesting period’ — let the code lag behind the @latest for some time and versions before considering the local copy as obsolete (e.g. local version is 1.3.1 and repository version is 1.3.8)
+:white_check_mark: **Do:** Yarn and npm's latest introduction of package-lock.json introduced a serious challenge (the road to hell is paved with good intentions) — by default now, packages are no longer getting updates. Even a team running many fresh deployments with ‘npm install’ & ‘npm update’ won’t get any new updates. This leads to subpar dependency versions at best, or to vulnerable code at worst. Teams now rely on developers' goodwill and memory to manually update the package.json, or use tools [like ncu](https://www.npmjs.com/package/npm-check-updates) manually. A more reliable way could be to automate the process of getting the most reliable dependency versions; though there are no silver-bullet solutions yet, there are two possible automation roads:
+
+(1) CI can fail builds that have obsolete dependencies — using tools like [‘npm outdated’](https://docs.npmjs.com/cli/outdated) or ‘npm-check-updates (ncu)’. Doing so will force developers to update dependencies.
+
+(2) Use commercial tools that scan the code and automatically send pull requests with updated dependencies. One interesting question remaining is what the dependency update policy should be — updating on every patch generates too much overhead, while updating right when a major is released might point to an unstable version (many packages are found vulnerable in the very first days after being released, [see the](https://nodesource.com/blog/a-high-level-post-mortem-of-the-eslint-scope-security-incident/) eslint-scope incident).
+
+An efficient update policy may allow some ‘vesting period’ — let the code lag behind the @latest for some time and versions before considering the local copy as obsolete (e.g. local version is 1.3.1 and repository version is 1.3.8)
@@ -1912,4 +1918,4 @@ A5NDY3LDE1OTcyNDA3NzUsMjEwMzQzMDE2NiwtMzc1NjYzODQs
LTEyODY1MzE2MDAsLTI5NzUwMjYyMyw0MzUxOTU4ODAsMTc2NT
k2NzEzMCw3OTQ4ODg1MTcsLTE4MDA1NTUwMDYsOTM1MTI0ODc5
LDc3NTU2MTAxOSwtMjEwMzIxODMzM119
--->
\ No newline at end of file
+-->
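
Road (1) above can be a tiny CI script; a sketch that relies only on `npm outdated --json` (the failure policy is illustrative):

```javascript
const { execSync } = require('child_process');

let report = '{}';
try {
  // 'npm outdated' exits non-zero when something is outdated, so catch and read stdout
  report = execSync('npm outdated --json', { encoding: 'utf8' }) || '{}';
} catch (error) {
  report = error.stdout || '{}';
}

const outdated = Object.keys(JSON.parse(report));
if (outdated.length > 0) {
  console.error(`Obsolete dependencies found: ${outdated.join(', ')}`);
  process.exit(1); // abort the build
}
```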
From ddc3970384fe146ac0c19518c3d2ab73f696aac6 Mon Sep 17 00:00:00 2001
From: Stefano Magni
Date: Mon, 12 Aug 2019 15:56:41 +0200
Subject: [PATCH 005/502] Fix minor typos
---
readme.md | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/readme.md b/readme.md
index 9ff0db8f..deaaba67 100644
--- a/readme.md
+++ b/readme.md
@@ -1867,11 +1867,11 @@ license-checker --summary --failOn BSD
**Role:** Writer
-**About:** I'm an independent consultant who works with 500 fortune corporates and garage startups on polishing their JS & Node.js applications. More than any other topic I'm fascinated by and aims to master the art of testing. I'm also the author of '[Node.js Best Practices](https://github.com/goldbergyoni/nodebestpractices)
+**About:** I'm an independent consultant who works with Fortune 500 corporates and garage startups on polishing their JS & Node.js applications. More than any other topic, I'm fascinated by and aim to master the art of testing. I'm also the author of [Node.js Best Practices](https://github.com/goldbergyoni/nodebestpractices)
-**Workshop:** 👨🏫 Want to learn all these practices and techniques at your offices (Europe & USA)? [register here for my testing workshop](https://testjavascript.com/)
+**Workshop:** 👨🏫 Want to learn all these practices and techniques at your offices (Europe & USA)? [Register here for my testing workshop](https://testjavascript.com/)
**Follow:**
@@ -1912,4 +1912,4 @@ A5NDY3LDE1OTcyNDA3NzUsMjEwMzQzMDE2NiwtMzc1NjYzODQs
LTEyODY1MzE2MDAsLTI5NzUwMjYyMyw0MzUxOTU4ODAsMTc2NT
k2NzEzMCw3OTQ4ODg1MTcsLTE4MDA1NTUwMDYsOTM1MTI0ODc5
LDc3NTU2MTAxOSwtMjEwMzIxODMzM119
--->
\ No newline at end of file
+-->
From 346f5f4a306734a35ee6a676b5e765077b88e42a Mon Sep 17 00:00:00 2001
From: Yeoh Joer
Date: Tue, 13 Aug 2019 00:11:10 +0800
Subject: [PATCH 006/502] Fix typo `strucuturing`
---
readme.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/readme.md b/readme.md
index 9ff0db8f..02d8000e 100644
--- a/readme.md
+++ b/readme.md
@@ -34,7 +34,7 @@ A single advice that inspires all the others (1 special bullet)
* ### `Section 1: The Test Anatomy`
-The foundation - strucuturing clean tests (12 bullets)
+The foundation - structuring clean tests (12 bullets)
* ### `Section 2: Backend`
From 5928be78669b49c8935760797ea81d1cbaa6b6f6 Mon Sep 17 00:00:00 2001
From: Jhonny Moreira
Date: Mon, 12 Aug 2019 13:26:29 -0300
Subject: [PATCH 007/502] Fix typos in section 1.5 anti pattern example
---
readme.md | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/readme.md b/readme.md
index 9ff0db8f..2559e60a 100644
--- a/readme.md
+++ b/readme.md
@@ -334,10 +334,10 @@ For example, if you want to test what your app behaves reasonably when the payme
it("When a valid product is about to be deleted, ensure data access DAL was called once, with the right product and right config", async () => {
//Assume we already added a product
const dataAccessMock = sinon.mock(DAL);
- //hmmm BAD: testing the internals is actually our main goal here, not just a side-effecr
+ //hmmm BAD: testing the internals is actually our main goal here, not just a side-effect
dataAccessMock.expects("deleteProduct").once().withArgs(DBConfig, theProductWeJustAdded, true, false);
new ProductService().deletePrice(theProductWeJustAdded);
- mock.verify();
+ dataAccessMock.verify();
});
```
@@ -1912,4 +1912,4 @@ A5NDY3LDE1OTcyNDA3NzUsMjEwMzQzMDE2NiwtMzc1NjYzODQs
LTEyODY1MzE2MDAsLTI5NzUwMjYyMyw0MzUxOTU4ODAsMTc2NT
k2NzEzMCw3OTQ4ODg1MTcsLTE4MDA1NTUwMDYsOTM1MTI0ODc5
LDc3NTU2MTAxOSwtMjEwMzIxODMzM119
--->
\ No newline at end of file
+-->
From f1b0d1b93503a1fab709cdc8d5007fdb346348b6 Mon Sep 17 00:00:00 2001
From: Yeoh Joer
Date: Tue, 13 Aug 2019 00:31:06 +0800
Subject: [PATCH 008/502] Remove extra hash signs in Heading 3
---
readme.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/readme.md b/readme.md
index 02d8000e..c8d0dd29 100644
--- a/readme.md
+++ b/readme.md
@@ -342,7 +342,7 @@ it("When a valid product is about to be deleted, ensure data access DAL was call
```
-### ### :clap:Doing It Right Example: spies are focused on testing the requirements but as a side-effect are unavoidably touching to the internals
+### :clap:Doing It Right Example: spies are focused on testing the requirements but as a side-effect are unavoidably touching to the internals
```javascript
it("When a valid product is about to be deleted, ensure an email is sent", async () => {
From 22d2a076ecab103e160314a4c1cedebad7c0b0e0 Mon Sep 17 00:00:00 2001
From: Yeoh Joer
Date: Tue, 13 Aug 2019 00:40:56 +0800
Subject: [PATCH 009/502] Fix typo `travells`
---
readme.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/readme.md b/readme.md
index c8d0dd29..72ca76b6 100644
--- a/readme.md
+++ b/readme.md
@@ -11,7 +11,7 @@ This is a guide for JavaScript & Node.js reliability from A-Z. It summarizes and
## 🚢 Advanced: Goes 10,000 miles beyond the basics
-Hop into a journey that travells way beyond the basics into advanced topics like testing in production, mutation testing, property-based testing and many other strategic & professional tools. Should you read every word in this guide your testing skills are likely to go way above the average
+Hop into a journey that travels way beyond the basics into advanced topics like testing in production, mutation testing, property-based testing and many other strategic & professional tools. Should you read every word in this guide your testing skills are likely to go way above the average
## 🌐 Full-stack: front, backend, CI, anything
@@ -1335,7 +1335,7 @@ beforeEach(setUser => () {
-## ⚪ ️ 3.9 Have one E2E smoke test that just travells across the site map
+## ⚪ ️ 3.9 Have one E2E smoke test that just travels across the site map
:white_check_mark: **Do:** For production monitoring and development-time sanity check, run a single E2E test that visits all/most of the site pages and ensures no one breaks. This type of test brings a great return on investment as it's very easy to write and maintain, but it can detect any kind of failure including functional, network and deployment issues. Other styles of smoke and sanity checking are not as reliable and exhaustive - some ops teams just ping the home page (production) or developers who run many integration tests which don't discover packaging and browser issues. Goes without saying that the smoke test doesn't replace functional tests rather just aim to serve as a quick smoke detector
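
The smoke test 3.9 describes can stay this small (a sketch assuming Cypress; the route list is illustrative and should mirror your site map):

```javascript
it('When smoke testing over all pages, loads each one successfully', () => {
  const pages = ['/', '/pricing', '/about', '/login', '/checkout'];
  for (const page of pages) {
    cy.visit(page); // any network, render or deployment failure fails the test
  }
});
```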
From d90ac666d3d85770dfd1fc8dba5e32671a0ac007 Mon Sep 17 00:00:00 2001
From: Yeoh Joer
Date: Tue, 13 Aug 2019 01:04:47 +0800
Subject: [PATCH 010/502] The character 'M' of MySQL is an upper case 'M'
---
readme.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/readme.md b/readme.md
index 9ff0db8f..6de4c870 100644
--- a/readme.md
+++ b/readme.md
@@ -1836,7 +1836,7 @@ license-checker --summary --failOn BSD
## ⚪ ️ 5.9 Build matrix: Run the same CI steps using multiple Node versions
-:white_check_mark: **Do:** Quality checking is about serendipity, the more ground you cover the luckier you get in detecting issues early. When developing reusable packages or running a multi-customer production with various configuration and Node versions, the CI must run the pipeline of tests over all the permutations of configurations. For example, assuming we use mySQL for some customers and Postgres for others — some CI vendors support a feature called ‘Matrix’ which allow running the suit of testing against all permutations of mySQL, Postgres and multiple Node version like 8, 9 and 10. This is done using configuration only without any additional effort (assuming you have testing or any other quality checks). Other CIs who doesn’t support Matrix might have extensions or tweaks to allow that
+:white_check_mark: **Do:** Quality checking is about serendipity: the more ground you cover, the luckier you get in detecting issues early. When developing reusable packages or running a multi-customer production with various configurations and Node versions, the CI must run the pipeline of tests over all the permutations of configurations. For example, assuming we use MySQL for some customers and Postgres for others — some CI vendors support a feature called ‘Matrix’ which allows running the suite of tests against all permutations of MySQL, Postgres and multiple Node versions like 8, 9 and 10. This is done using configuration only, without any additional effort (assuming you have testing or any other quality checks). Other CIs that don’t support Matrix might have extensions or tweaks to allow that
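
A build matrix like the one 5.9 describes is pure configuration; a Travis-style sketch (versions and services are illustrative):

```yaml
language: node_js
node_js:
  - "8"
  - "9"
  - "10"
env:
  - DB=mysql
  - DB=postgres
# CI runs the suite once per Node version × DB permutation (6 jobs here)
```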
From 7b8d32d93621b3bdce9e0ea1ee1f25425e40ebd7 Mon Sep 17 00:00:00 2001
From: Ian Germann
Date: Mon, 12 Aug 2019 16:35:23 -0400
Subject: [PATCH 011/502] Fix link to eslint-plugin-security
The link for eslint-plugin-security was incorrectly pointing to the npm page for eslint-plugin-promise
---
readme.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/readme.md b/readme.md
index a8a78351..e4add139 100644
--- a/readme.md
+++ b/readme.md
@@ -1627,7 +1627,7 @@ it("Test name", () => {*//error:no-identical-title. Assign unique titles to test
## ⚪ ️ 5.1 Enrich your linters and abort builds that have linting issues
-:white_check_mark: **Do:** Linters are a free lunch, with 5 min setup you get for free an auto-pilot guarding your code and catching significant issue as you type. Gone are the days where linting was about cosmetics (no semi-colons!). Nowadays, Linters can catch severe issues like errors that are not thrown correctly and losing information. On top of your basic set of rules (like [ESLint standard](https://www.npmjs.com/package/eslint-plugin-standard) or [Airbnb style](https://www.npmjs.com/package/eslint-config-airbnb)), consider including some specializing Linters like [eslint-plugin-chai-expect](https://www.npmjs.com/package/eslint-plugin-chai-expect) that can discover tests without assertions, [eslint-plugin-promise](https://www.npmjs.com/package/eslint-plugin-promise?activeTab=readme) can discover promises with no resolve (your code will never continue), [eslint-plugin-security](https://www.npmjs.com/package/eslint-plugin-promise?activeTab=readme) which can discover eager regex expressions that might get used for DOS attacks, and [eslint-plugin-you-dont-need-lodash-underscore](https://www.npmjs.com/package/eslint-plugin-you-dont-need-lodash-underscore) is capable of alarming when the code uses utility library methods that are part of the V8 core methods like Lodash._map(…)
+:white_check_mark: **Do:** Linters are a free lunch: with a 5 min setup you get a free auto-pilot guarding your code and catching significant issues as you type. Gone are the days where linting was about cosmetics (no semi-colons!). Nowadays, linters can catch severe issues like errors that are not thrown correctly and losing information. On top of your basic set of rules (like [ESLint standard](https://www.npmjs.com/package/eslint-plugin-standard) or [Airbnb style](https://www.npmjs.com/package/eslint-config-airbnb)), consider including some specializing linters like [eslint-plugin-chai-expect](https://www.npmjs.com/package/eslint-plugin-chai-expect) that can discover tests without assertions, [eslint-plugin-promise](https://www.npmjs.com/package/eslint-plugin-promise?activeTab=readme) which can discover promises with no resolve (your code will never continue), [eslint-plugin-security](https://www.npmjs.com/package/eslint-plugin-security?activeTab=readme) which can discover eager regex expressions that might get used for DoS attacks, and [eslint-plugin-you-dont-need-lodash-underscore](https://www.npmjs.com/package/eslint-plugin-you-dont-need-lodash-underscore) which is capable of alerting when the code uses utility library methods that are part of the V8 core methods like Lodash._map(…)
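
Wiring those specializing linters together is a one-file job; a sketch of an `.eslintrc.js` (the chosen rules are illustrative):

```javascript
module.exports = {
  extends: ['airbnb-base', 'plugin:security/recommended'],
  plugins: ['chai-expect', 'promise'],
  rules: {
    'chai-expect/missing-assertion': 'error', // tests that never assert anything
    'promise/always-return': 'error', // then() handlers that return nothing
  },
};
```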
From 85d65379f3aff2011cab215a5019287df39fa16e Mon Sep 17 00:00:00 2001
From: Hafez
Date: Wed, 14 Aug 2019 18:53:54 +0200
Subject: [PATCH 012/502] Fix typos
---
readme.md | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/readme.md b/readme.md
index a8a78351..91df2eb7 100644
--- a/readme.md
+++ b/readme.md
@@ -11,7 +11,7 @@ This is a guide for JavaScript & Node.js reliability from A-Z. It summarizes and
## 🚢 Advanced: Goes 10,000 miles beyond the basics
-Hop into a journey that travells way beyond the basics into advanced topics like testing in production, mutation testing, property-based testing and many other strategic & professional tools. Should you read every word in this guide your testing skills are likely to go way above the average
+Hop into a journey that travels way beyond the basics into advanced topics like testing in production, mutation testing, property-based testing and many other strategic & professional tools. Should you read every word in this guide your testing skills are likely to go way above the average
## 🌐 Full-stack: front, backend, CI, anything
@@ -54,7 +54,7 @@ Watching the watchman - measuring test quality (4 bullets)
* ### `Section 5: Continous Integration`
-Guideliness for CI in the JS world (9 bullets)
+Guidelines for CI in the JS world (9 bullets)
@@ -361,7 +361,7 @@ it("When a valid product is about to be deleted, ensure an email is sent", async
-## ⚪ ️1.6 Don’t “foo”, use realistic input dataing
+## ⚪ ️1.6 Don’t “foo”, use realistic input data
:white_check_mark: **Do:** Often production bugs are revealed under some very specific and surprising input — the more realistic the test input is, the greater the chances are to catch bugs early. Use dedicated libraries like [Faker](https://www.npmjs.com/package/faker) to generate pseudo-real data that resembles the variety and form of production data. For example, such libraries can generate realistic phone numbers, usernames, credit card, company names, and even ‘lorem ipsum’ text. You may also create some tests (on top of unit tests, not instead) that randomize fakers data to stretch your unit under test or even import real data from your production environment. Want to take it to the next level? see next bullet (property-based testing).
@@ -1337,7 +1337,7 @@ beforeEach(setUser => () {
-## ⚪ ️ 3.9 Have one E2E smoke test that just travells across the site map
+## ⚪ ️ 3.9 Have one E2E smoke test that just travels across the site map
:white_check_mark: **Do:** For production monitoring and development-time sanity check, run a single E2E test that visits all/most of the site pages and ensures no one breaks. This type of test brings a great return on investment as it's very easy to write and maintain, but it can detect any kind of failure including functional, network and deployment issues. Other styles of smoke and sanity checking are not as reliable and exhaustive - some ops teams just ping the home page (production) or developers who run many integration tests which don't discover packaging and browser issues. Goes without saying that the smoke test doesn't replace functional tests rather just aim to serve as a quick smoke detector
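
Bullet 1.6's realistic-input advice in practice, in the spirit of the guide's own example (a sketch assuming the faker package; `addProduct` is the unit under test):

```javascript
const faker = require('faker');

it('When adding a new valid product, get successful confirmation', async () => {
  // pseudo-real data instead of 'foo': realistic product names, prices, etc.
  const addProductResult = await addProduct(faker.commerce.productName(), faker.commerce.price());
  expect(addProductResult).to.be.true;
});
```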
From ad5c4b029d1772a50f1cd8ef491a557700cb82cc Mon Sep 17 00:00:00 2001
From: Ruxandra Fediuc
Date: Thu, 15 Aug 2019 08:34:25 -0700
Subject: [PATCH 013/502] fix(typo): example section 1.11
---
readme.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/readme.md b/readme.md
index 0bec4ca5..73fca186 100644
--- a/readme.md
+++ b/readme.md
@@ -683,7 +683,7 @@ it.only("When no product name, it throws error 400", async() => {
//now the user/CI can run it frequently
describe('Order service', function() {
describe('Add new order #cold-test #sanity', function() {
- test('Scenario - no currency was supplied. Excpectation - Use the default currency #sanity', function() {
+ test('Scenario - no currency was supplied. Expectation - Use the default currency #sanity', function() {
//code logic here
});
});
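
Tags like `#sanity` in the test names above pay off at run time; the cold/sanity subset can then be selected with Mocha's `--grep` flag:

```bash
mocha --grep 'sanity'
```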
From b4e4600ae279d9db27c9844e5cb2014d9ac07a58 Mon Sep 17 00:00:00 2001
From: Ruxandra Fediuc
Date: Thu, 15 Aug 2019 08:41:08 -0700
Subject: [PATCH 014/502] fix(typo): section 2.5
---
readme.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/readme.md b/readme.md
index 73fca186..0982de33 100644
--- a/readme.md
+++ b/readme.md
@@ -874,7 +874,7 @@ Credit::
Date: Thu, 15 Aug 2019 08:58:33 -0700
Subject: [PATCH 015/502] fix(typo): section 3.10
---
readme.md | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/readme.md b/readme.md
index 0982de33..c26cb0fc 100644
--- a/readme.md
+++ b/readme.md
@@ -1448,11 +1448,11 @@ it('When doing smoke testing over all page, should load them all successfully',
-### :clap: Doing It Right Example: Describing tests in human-language using cocumber-js
+### :clap: Doing It Right Example: Describing tests in human-language using cucumber-js
-
+
```javascript
-// this is how one can describe tests using cocumber: plain language that allows anyone to understand and collaborate
+// this is how one can describe tests using cucumber: plain language that allows anyone to understand and collaborate
Feature: Twitter new tweet
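
Behind a cucumber-js feature file sit plain JavaScript step definitions; a minimal sketch (step text and assertions are illustrative):

```javascript
const { Given, When, Then } = require('cucumber');
const assert = require('assert');

Given('I login to Twitter', async function () {
  // drive the UI with your automation tool of choice
});

When('I tweet {string}', async function (tweetText) {
  this.lastTweet = tweetText; // share state through the cucumber World
});

Then('my timeline shows {string}', async function (expected) {
  assert.strictEqual(this.lastTweet, expected);
});
```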
From 52af0af60eb5533854e3e4307dfe9768807c7090 Mon Sep 17 00:00:00 2001
From: Jack
Date: Fri, 16 Aug 2019 17:27:12 +0800
Subject: [PATCH 016/502] Update readme.md
ref the AAA pattern
---
readme.md | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/readme.md b/readme.md
index 85a65646..523044b9 100644
--- a/readme.md
+++ b/readme.md
@@ -594,7 +594,9 @@ it("When updating site name, get successful confirmation", async () => {
const siteUnderTest = await SiteService.addSite({
name: "siteForUpdateTest"
});
+
const updateNameResult = await SiteService.changeName(siteUnderTest, "newName");
+
expect(updateNameResult).to.be(true);
});
@@ -2033,4 +2035,4 @@ I0MDc3NSwyMTAzNDMwMTY2LC0zNzU2NjM4NCwtMTI4NjUzMTYw
MCwtMjk3NTAyNjIzLDQzNTE5NTg4MCwxNzY1OTY3MTMwLDc5ND
g4ODUxNywtMTgwMDU1NTAwNiw5MzUxMjQ4NzksNzc1NTYxMDE5
XX0=
--->
\ No newline at end of file
+-->
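
The blank lines added above separate the three AAA sections; fully annotated, the same structure reads like this (in the spirit of the guide's 1.2 example; `customerClassifier` is assumed from there):

```javascript
describe('Customer classifier', () => {
  test('When customer spent more than 500$, should be classified as premium', () => {
    // Arrange - bring the system to the scenario under test
    const customerToClassify = { spent: 505, joined: new Date(), id: 1 };

    // Act - invoke the unit under test
    const receivedClassification = customerClassifier.classifyCustomer(customerToClassify);

    // Assert - check the outcome in one focused expectation
    expect(receivedClassification).toMatch('premium');
  });
});
```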
From 0858b8b8f35488ac080ca0f4cd2dfe3c5cef2477 Mon Sep 17 00:00:00 2001
From: Jack
Date: Fri, 16 Aug 2019 17:37:55 +0800
Subject: [PATCH 017/502] fix the typo
---
readme.md | 14 +++++++++++---
1 file changed, 11 insertions(+), 3 deletions(-)
diff --git a/readme.md b/readme.md
index 523044b9..2aaefd44 100644
--- a/readme.md
+++ b/readme.md
@@ -4,6 +4,8 @@
# 👇 Why this guide can take your testing skills to the next level
+
+
## 📗 45+ best practices: Super-comprehensive and exhaustive
@@ -76,7 +78,9 @@ Our minds are full with the main production code, we don't have 'headspace' for
The tests are an opportunity for something else - a friendly and smiley assistant, one that it's delightful to work with and delivers great value for such a small investment. Science tells we have two brain systems: system 1 which is used for effortless activities like driving a car on an empty road and system 2 which is meant for complex and conscious operations like solving a math equation. Design your test for system 1, when looking at test code it should *feel* as easy as modifying an HTML document and not like solving 2X(17 × 24).
-This can be achieved by selectively cherry-picking techniques, tools and test targets that are cost-effective and provide great ROI. Test only as much as needed, strive to keep it nimble, sometimes it's even worth dropping some tests and trade reliability for agility and simplicity.
+This can be achieved by selectively cherry-picking techniques, too
+
+ls and test targets that are cost-effective and provide great ROI. Test only as much as needed, strive to keep it nimble, sometimes it's even worth dropping some tests and trade reliability for agility and simplicity.

@@ -623,13 +627,17 @@ A more elegant alternative is the using the one-line dedicated Chai assertion: e
-### :thumbsdown: Anti-pattern Example: A long test case that tries to assert the existence of error with try-catch
+### :thumbsdown:
+
+
+
+

```javascript
-/it("When no product name, it throws error 400", async() => {
+it("When no product name, it throws error 400", async() => {
let errorWeExceptFor = null;
try {
const result = await addNewProduct({name:'nest'});}
From ffc699e147dc3d5fe2a2ecb651113b9f182afd79 Mon Sep 17 00:00:00 2001
From: Jack
Date: Fri, 16 Aug 2019 17:40:28 +0800
Subject: [PATCH 018/502] Update readme.md
---
readme.md | 12 ++----------
1 file changed, 2 insertions(+), 10 deletions(-)
diff --git a/readme.md b/readme.md
index 2aaefd44..90972560 100644
--- a/readme.md
+++ b/readme.md
@@ -4,8 +4,6 @@
# 👇 Why this guide can take your testing skills to the next level
-
-
## 📗 45+ best practices: Super-comprehensive and exhaustive
@@ -78,9 +76,7 @@ Our minds are full with the main production code, we don't have 'headspace' for
The tests are an opportunity for something else - a friendly and smiley assistant, one that it's delightful to work with and delivers great value for such a small investment. Science tells we have two brain systems: system 1 which is used for effortless activities like driving a car on an empty road and system 2 which is meant for complex and conscious operations like solving a math equation. Design your test for system 1, when looking at test code it should *feel* as easy as modifying an HTML document and not like solving 2X(17 × 24).
-This can be achieved by selectively cherry-picking techniques, too
-
-ls and test targets that are cost-effective and provide great ROI. Test only as much as needed, strive to keep it nimble, sometimes it's even worth dropping some tests and trade reliability for agility and simplicity.
+This can be achieved by selectively cherry-picking techniques, tools and test targets that are cost-effective and provide great ROI. Test only as much as needed, strive to keep it nimble, sometimes it's even worth dropping some tests and trade reliability for agility and simplicity.

@@ -627,11 +623,7 @@ A more elegant alternative is the using the one-line dedicated Chai assertion: e
-### :thumbsdown:
-
-
-
-
+### :thumbsdown: Anti-pattern Example: A long test case that tries to assert the existence of error with try-catch

From c26b50bec77be900d184ad3839d2b8ae62b08100 Mon Sep 17 00:00:00 2001
From: Peter Carrero
Date: Fri, 16 Aug 2019 12:49:21 -0500
Subject: [PATCH 019/502] Fix typo "continuous"
---
readme.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/readme.md b/readme.md
index 85a65646..5d22b854 100644
--- a/readme.md
+++ b/readme.md
@@ -55,7 +55,7 @@ Writing tests for web UI including component and E2E tests (11 bullets)
Watching the watchman - measuring test quality (4 bullets)
-* ### `Section 5: Continous Integration`
+* ### `Section 5: Continuous Integration`
Guideliness for CI in the JS world (9 bullets)
@@ -2033,4 +2033,4 @@ I0MDc3NSwyMTAzNDMwMTY2LC0zNzU2NjM4NCwtMTI4NjUzMTYw
MCwtMjk3NTAyNjIzLDQzNTE5NTg4MCwxNzY1OTY3MTMwLDc5ND
g4ODUxNywtMTgwMDU1NTAwNiw5MzUxMjQ4NzksNzc1NTYxMDE5
XX0=
--->
\ No newline at end of file
+-->
From c03db84334b244db191ccff1febb4d240dca74af Mon Sep 17 00:00:00 2001
From: Yoni Goldberg
Date: Fri, 16 Aug 2019 21:52:03 +0300
Subject: [PATCH 020/502] Update readme.md
---
readme.md | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/readme.md b/readme.md
index 85a65646..8906b658 100644
--- a/readme.md
+++ b/readme.md
@@ -2000,6 +2000,8 @@ license-checker --summary --failOn BSD
* [🐦 Twitter](https://twitter.com/goldbergyoni/)
* [📞 Contact](https://testjavascript.com/contact-2/)
+* [✉️ Newsletter](https://testjavascript.com/newsletter//)
+
@@ -2033,4 +2035,4 @@ I0MDc3NSwyMTAzNDMwMTY2LC0zNzU2NjM4NCwtMTI4NjUzMTYw
MCwtMjk3NTAyNjIzLDQzNTE5NTg4MCwxNzY1OTY3MTMwLDc5ND
g4ODUxNywtMTgwMDU1NTAwNiw5MzUxMjQ4NzksNzc1NTYxMDE5
XX0=
--->
\ No newline at end of file
+-->
From cbd3f1803651fc8ea33926bba71da636eb28745b Mon Sep 17 00:00:00 2001
From: Yoni Goldberg
Date: Fri, 16 Aug 2019 22:11:50 +0300
Subject: [PATCH 021/502] readme.md updated from https://stackedit.io/
---
readme.md | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/readme.md b/readme.md
index 8906b658..d24c3283 100644
--- a/readme.md
+++ b/readme.md
@@ -21,7 +21,7 @@ Start by understanding the ubiquitous testing practices that are the foundation
### Written By Yoni Goldberg
* A JavaScript & Node.js consultant
-* [My testing workshop](https://www.testjavascript.com) - Learn all these testing practices and techniques live
+* ## 👨🏫 [My testing workshop](https://www.testjavascript.com) - learn about [my workshops](https://www.testjavascript.com) in Europe & US
* [Follow me on Twitter ](https://twitter.com/goldbergyoni/)
* Come hear me speak at [LA](https://js.la/), [Verona](https://2019.nodejsday.it/), [Kharkiv](https://kharkivjs.org/), [free webinar](https://zoom.us/webinar/register/1015657064375/WN_Lzvnuv4oQJOYey2jXNqX6A). Future events TBD
* [My JavaScript Quality newsletter](https://testjavascript.com/newsletter/) - insights and content only on strategic matters
@@ -2028,11 +2028,11 @@ Took care to revise, improve, lint and polish all the texts
+eyJoaXN0b3J5IjpbLTE3MDExMjkwNTMsMTgyMzc3OTkyMCwtMT
+IyNTQ2MjQyMiwtNjI3MjIwMDEsMTEzMDI2NzQ0NiwxNTg1ODY1
+NjMyLDI5ODA3MzcwMyw1ODM1NDY0NjgsLTM0ODY5OTIxNyw3OT
+Q4MDk0NjcsMTU5NzI0MDc3NSwyMTAzNDMwMTY2LC0zNzU2NjM4
+NCwtMTI4NjUzMTYwMCwtMjk3NTAyNjIzLDQzNTE5NTg4MCwxNz
+Y1OTY3MTMwLDc5NDg4ODUxNywtMTgwMDU1NTAwNiw5MzUxMjQ4
+NzldfQ==
+-->
\ No newline at end of file
From 09bc858c9dbd9e08e230dd83df9ff89852e49937 Mon Sep 17 00:00:00 2001
From: Yoni Goldberg
Date: Fri, 16 Aug 2019 22:12:20 +0300
Subject: [PATCH 022/502] readme.md updated from https://stackedit.io/
---
readme.md | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/readme.md b/readme.md
index d24c3283..224dd156 100644
--- a/readme.md
+++ b/readme.md
@@ -21,7 +21,7 @@ Start by understanding the ubiquitous testing practices that are the foundation
### Written By Yoni Goldberg
* A JavaScript & Node.js consultant
-* ## 👨🏫 [My testing workshop](https://www.testjavascript.com) - learn about [my workshops](https://www.testjavascript.com) in Europe & US
+* 👨🏫 [My testing workshop](https://www.testjavascript.com) - learn about [my workshops](https://www.testjavascript.com) in Europe & US
* [Follow me on Twitter ](https://twitter.com/goldbergyoni/)
* Come hear me speak at [LA](https://js.la/), [Verona](https://2019.nodejsday.it/), [Kharkiv](https://kharkivjs.org/), [free webinar](https://zoom.us/webinar/register/1015657064375/WN_Lzvnuv4oQJOYey2jXNqX6A). Future events TBD
* [My JavaScript Quality newsletter](https://testjavascript.com/newsletter/) - insights and content only on strategic matters
@@ -2028,11 +2028,11 @@ Took care to revise, improve, lint and polish all the texts
\ No newline at end of file
From 2b8638d11aeda7d900844b5648bca499d89db2b9 Mon Sep 17 00:00:00 2001
From: Huhgawz
Date: Fri, 16 Aug 2019 19:32:00 -0500
Subject: [PATCH 023/502] Fix typo "strcutured"
---
readme.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/readme.md b/readme.md
index 224dd156..11504572 100644
--- a/readme.md
+++ b/readme.md
@@ -162,7 +162,7 @@ describe('Products Service', function() {
-### :clap: Doing It Right Example: A test strcutured with the AAA pattern
+### :clap: Doing It Right Example: A test structured with the AAA pattern
 
diff --git a/readme.md b/readme.md
index 11504572..80ec0f1d 100644
--- a/readme.md
+++ b/readme.md
@@ -456,7 +456,6 @@ it("Better: When adding new valid product, get successful confirmation", async (
```javascript
require('mocha-testcheck').install();
const {expect} = require('chai');
-const faker = require('faker');
describe('Product service', () => {
describe('Adding new', () => {
From 078ae5d511f7e420bb65a598202ab600c8a91e6b Mon Sep 17 00:00:00 2001
From: Huhgawz
Date: Fri, 16 Aug 2019 20:27:28 -0500
Subject: [PATCH 025/502] Fix typo "prdoction'
---
readme.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/readme.md b/readme.md
index 80ec0f1d..01cd1c1c 100644
--- a/readme.md
+++ b/readme.md
@@ -701,7 +701,7 @@ describe('Order service', function() {
## ⚪ ️1.12 Other generic good testing hygiene
:white_check_mark: **Do:** This post is focused on testing advice that is related to, or at least can be exemplified with Node JS. This bullet, however, groups few non-Node related tips that are well-known
-Learn and practice [TDD principles](https://www.sm-cloud.com/book-review-test-driven-development-by-example-a-tldr/) — they are extremely valuable for many but don’t get intimidated if they don’t fit your style, you’re not the only one. Consider writing the tests before the code in a [red-green-refactor style](https://blog.cleancoder.com/uncle-bob/2014/12/17/TheCyclesOfTDD.html), ensure each test checks exactly one thing, when you find a bug — before fixing write a test that will detect this bug in the future, let each test fail at least once before turning green, start a module by writing a quick and simplistic code that satsifies the test - then refactor gradually and take it to a prdoction grade level, avoid any dependency on the environment (paths, OS, etc)
+Learn and practice [TDD principles](https://www.sm-cloud.com/book-review-test-driven-development-by-example-a-tldr/) — they are extremely valuable for many but don’t get intimidated if they don’t fit your style, you’re not the only one. Consider writing the tests before the code in a [red-green-refactor style](https://blog.cleancoder.com/uncle-bob/2014/12/17/TheCyclesOfTDD.html), ensure each test checks exactly one thing, when you find a bug — before fixing write a test that will detect this bug in the future, let each test fail at least once before turning green, start a module by writing quick and simplistic code that satisfies the test - then refactor gradually and take it to a production grade level, avoid any dependency on the environment (paths, OS, etc)
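
One red-green-refactor cycle in miniature (a sketch; `sumPrices` is a hypothetical unit):

```javascript
// red: write the failing test first - sumPrices does not exist yet
it('When two prices are summed, returns their total', () => {
  expect(sumPrices(1.5, 2.5)).toBe(4);
});

// green: the quickest, simplest code that satisfies the test; refactor later
function sumPrices(priceA, priceB) {
  return priceA + priceB;
}
```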
From e5f51871b20f819241cbc349b6d98f1621cc22c1 Mon Sep 17 00:00:00 2001
From: Evan
Date: Sun, 18 Aug 2019 08:47:51 +0800
Subject: [PATCH 026/502] Remove repeated content
---
readme.md | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/readme.md b/readme.md
index 224dd156..03cfdaf4 100644
--- a/readme.md
+++ b/readme.md
@@ -324,8 +324,6 @@ it("White-box test: When the internal methods get 0 vat, it return 0 response",
:white_check_mark: **Do:** Test doubles are a necessary evil because they are coupled to the application internals, yet some provide an immense value ([Read here a reminder about test doubles: mocks vs stubs vs spies](https://martinfowler.com/articles/mocksArentStubs.html)). However, the various techniques were not born equal: some of them, spies and stubs, are focused on testing the requirements but as an inevitable side-effect they also slightly touch the internals. Mocks, on the contrary side, are focused on testing the internals — this brings huge overhead as explained in the bullet “Stick to black box testing”.
-However, the various techniques were not born equal: some of them, spies and stubs, are focused on testing the requirements but as an inevitable side-effect they also slightly touch the internals. Mocks, on the contrary side, are focused on testing the internals — this brings huge overhead as explained in the bullet “Stick to black box testing”.
-
Before using test doubles, ask a very simple question: Do I use it to test functionality that appears, or could appear, in the requirements document? If no, it’s a smell of white-box testing.
For example, if you want to test what your app behaves reasonably when the payment service is down, you might stub the payment service and trigger some ‘No Response’ return to ensure that the unit under test returns the right value. This checks our application behavior/response/outcome under certain scenarios. You might also use a spy to assert that an email was sent when that service is down — this is again a behavioral check which is likely to appear in a requirements doc (“Send an email if payment couldn’t be saved”). On the flip side, if you mock the Payment service and ensure that it was called with the right JavaScript types — then your test is focused on internal things that got nothing with the application functionality and are likely to change frequently
@@ -2035,4 +2033,4 @@ gwOTQ2NywxNTk3MjQwNzc1LDIxMDM0MzAxNjYsLTM3NTY2Mzg0
LC0xMjg2NTMxNjAwLC0yOTc1MDI2MjMsNDM1MTk1ODgwLDE3Nj
U5NjcxMzAsNzk0ODg4NTE3LC0xODAwNTU1MDA2LDkzNTEyNDg3
OV19
--->
\ No newline at end of file
+-->
From 0020ea29950c9bb29cffde5fbb9a798d8fa10cc3 Mon Sep 17 00:00:00 2001
From: Adrien REDON
Date: Mon, 19 Aug 2019 08:58:42 +0200
Subject: [PATCH 027/502] Fix typo
---
readme.md | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/readme.md b/readme.md
index 224dd156..3f222236 100644
--- a/readme.md
+++ b/readme.md
@@ -55,9 +55,9 @@ Writing tests for web UI including component and E2E tests (11 bullets)
Watching the watchman - measuring test quality (4 bullets)
-* ### `Section 5: Continous Integration`
+* ### `Section 5: Continuous Integration`
-Guideliness for CI in the JS world (9 bullets)
+Guidelines for CI in the JS world (9 bullets)
@@ -2035,4 +2035,4 @@ gwOTQ2NywxNTk3MjQwNzc1LDIxMDM0MzAxNjYsLTM3NTY2Mzg0
LC0xMjg2NTMxNjAwLC0yOTc1MDI2MjMsNDM1MTk1ODgwLDE3Nj
U5NjcxMzAsNzk0ODg4NTE3LC0xODAwNTU1MDA2LDkzNTEyNDg3
OV19
--->
\ No newline at end of file
+-->
From f66fe473935e75fe8b2badbf7edbed0a0ec7016b Mon Sep 17 00:00:00 2001
From: Scott Davis
Date: Mon, 19 Aug 2019 08:45:18 -0600
Subject: [PATCH 028/502] fix: spelling typo
---
readme.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/readme.md b/readme.md
index 224dd156..54f9f6a4 100644
--- a/readme.md
+++ b/readme.md
@@ -37,7 +37,7 @@ A single advice that inspires all the others (1 special bullet)
* ### `Section 1: The Test Anatomy`
-The foundation - strucuturing clean tests (12 bullets)
+The foundation - structuring clean tests (12 bullets)
* ### `Section 2: Backend`
@@ -2035,4 +2035,4 @@ gwOTQ2NywxNTk3MjQwNzc1LDIxMDM0MzAxNjYsLTM3NTY2Mzg0
LC0xMjg2NTMxNjAwLC0yOTc1MDI2MjMsNDM1MTk1ODgwLDE3Nj
U5NjcxMzAsNzk0ODg4NTE3LC0xODAwNTU1MDA2LDkzNTEyNDg3
OV19
--->
\ No newline at end of file
+-->
From 3bd38831a5a8163cf320dbbaa8f5017ad000b95c Mon Sep 17 00:00:00 2001
From: John Gee
Date: Tue, 20 Aug 2019 21:07:56 +1200
Subject: [PATCH 029/502] Correct the text for section 5.6
---
readme.md | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/readme.md b/readme.md
index c54d7563..ab67a329 100644
--- a/readme.md
+++ b/readme.md
@@ -1900,11 +1900,9 @@ license-checker --summary --failOn BSD
## ⚪ ️5.6 Constantly inspect for vulnerable dependencies
-:white_check_mark: **Do:** Licensing and plagiarism issues are probably not your main concern right now, but why not tick this box as well in 10 minutes? A bunch of npm packages like license check and plagiarism check (commercial with free plan) can be easily baked into your CI pipeline and inspect for sorrows like dependencies with restrictive licenses or code that was copy-pasted from Stackoverflow and apparently violates some copyrights
-
-
+:white_check_mark: **Do:** Even the most reputable dependencies such as Express have known vulnerabilities. This can get easily tamed using community tools such as [npm audit](https://docs.npmjs.com/getting-started/running-a-security-audit), or commercial tools like [snyk](https://snyk.io/) (which also offers a free community version). Both can be invoked from your CI on every build
-❌ **Otherwise:** Even the most reputable dependencies such as Express have known vulnerabilities. This can get easily tamed using community tools such as [npm audit](https://docs.npmjs.com/getting-started/running-a-security-audit), or commercial tools like [snyk](https://snyk.io/) (offer also a free community version). Both can be invoked from your CI on every build
+❌ **Otherwise:** Keeping your code clean from vulnerabilities without dedicated tools will require you to constantly follow online publications about new threats. Quite tedious
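
Invoking the audit from CI is a one-liner; a sketch assuming npm >= 6.10 (the severity threshold is illustrative):

```bash
# non-zero exit code - and thus a failed build - on high-severity findings
npm audit --audit-level=high
```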
From 061abc979c83cb3d396fff77fc9c28167ecc42ab Mon Sep 17 00:00:00 2001
From: Olivier PASCAL
Date: Tue, 20 Aug 2019 17:46:18 +0200
Subject: [PATCH 030/502] Fix section 4.3 markdown image path typo
Missing `")` was causing error on image path
---
readme.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/readme.md b/readme.md
index 30d8ae30..b5774bc4 100644
--- a/readme.md
+++ b/readme.md
@@ -1698,7 +1698,7 @@ it("Test addNewOrder, don't use such test names", () => {
### :clap: Doing It Right Example: Stryker reports, a tool for mutation testing, detects and counts the amount of code that is not tested (Mutations)
-
+")
@@ -2028,4 +2028,4 @@ Took care to revise, improve, lint and polish all the texts
**Role:** Concept, design and great advice
-**About:** A savvy frontend developer, CSS expert and emojis freak
\ No newline at end of file
+**About:** A savvy frontend developer, CSS expert and emojis freak
From ee58a6e5395024043781d9b82c9d504dbea4221f Mon Sep 17 00:00:00 2001
From: Yoni Goldberg
Date: Tue, 20 Aug 2019 18:56:15 +0300
Subject: [PATCH 031/502] Update readme.md
---
readme.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/readme.md b/readme.md
index 9c230375..0102fa1e 100644
--- a/readme.md
+++ b/readme.md
@@ -322,7 +322,7 @@ it("White-box test: When the internal methods get 0 vat, it return 0 response",
## ⚪ ️ ️1.5 Choose the right test doubles: Avoid mocks in favor of stubs and spies
-:white_check_mark: **Do:** Test doubles are a necessary evil because they are coupled to the application internals, yet some provide an immense value ([Read here a reminder about test doubles: mocks vs stubs vs spies](https://martinfowler.com/articles/mocksArentStubs.html)). However, the various techniques were not born equal: some of them, spies and stubs, are focused on testing the requirements but as an inevitable side-effect they also slightly touch the internals. Mocks, on the contrary side, are focused on testing the internals — this brings huge overhead as explained in the bullet “Stick to black box testing”.
+:white_check_mark: **Do:** Test doubles are a necessary evil because they are coupled to the application internals, yet some provide an immense value ([Read here a reminder about test doubles: mocks vs stubs vs spies](https://martinfowler.com/articles/mocksArentStubs.html)).
However, the various techniques were not born equal: some of them, spies and stubs, are focused on testing the requirements but as an inevitable side-effect they also slightly touch the internals. Mocks, on the contrary side, are focused on testing the internals — this brings huge overhead as explained in the bullet “Stick to black box testing”.
@@ -2028,4 +2028,4 @@ Took care to revise, improve, lint and polish all the texts
**Role:** Concept, design and great advice
-**About:** A savvy frontend developer, CSS expert and emojis freak
\ No newline at end of file
+**About:** A savvy frontend developer, CSS expert and emojis freak
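
For contrast, the guide's own 1.5 'Doing It Right' example shows the preferred double: a spy that checks a requirement (an email is sent) rather than a mock that pins internals:

```javascript
it('When a valid product is about to be deleted, ensure an email is sent', async () => {
  // Assume we already added a product
  const spy = sinon.spy(Emailer.prototype, 'sendEmail');
  new ProductService().deletePrice(theProductWeJustAdded);
  // the spy verifies an outcome the requirements care about
  expect(spy.calledOnce).to.be.true;
});
```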
From b1e51464c986a0f48f672dc42bbc2b2edd733e46 Mon Sep 17 00:00:00 2001
From: Idan
Date: Wed, 21 Aug 2019 18:16:08 +0300
Subject: [PATCH 032/502] fixed links in table of contents
---
readme.md | 17 ++++++++---------
1 file changed, 8 insertions(+), 9 deletions(-)
diff --git a/readme.md b/readme.md
index e5a807d4..6691ba83 100644
--- a/readme.md
+++ b/readme.md
@@ -29,33 +29,32 @@ Start by understanding the ubiquitous testing practices that are the foundation
-# `Table of contents`
+## Table of Contents
-* ### `Section 0: The Golden Rule`
+* ####[Section 0: The Golden Rule](#Section-0️⃣-:-The-Golden-Rule)
A single advice that inspires all the others (1 special bullet)
-* ### `Section 1: The Test Anatomy`
+* ####[Section 1: The Test Anatomy](#Section-1:-The-Test-Anatomy)
The foundation - structuring clean tests (12 bullets)
-
-* ### `Section 2: Backend`
+* ####[Section 2: Backend](#Section-2️⃣-:-Backend-Testing)
Writing backend and Microservices tests efficiently (8 bullets)
-* ### `Section 3: Frontend, UI, E2E`
+* ####[Section 3: Frontend](#Section-3️⃣:-Frontend-Testing)
Writing tests for web UI including component and E2E tests (11 bullets)
-* ### `Section 4: Measuring Tests Effectivenss`
+* ####[Section 4: Measuring Tests Effectiveness](#Section-4️⃣:-Measuring-Test-Effectiveness)
Watching the watchman - measuring test quality (4 bullets)
-* ### `Section 5: Continuous Integration`
+* ####[Section 5: Continuous Integration](#Section-5️⃣-CI-and-Other-Quality-Measures)
Guidelines for CI in the JS world (9 bullets)
@@ -87,7 +86,7 @@ Most of the advice below are derivatives of this principle.
-# Section 1. The Test Anatomy
+# Section 1: The Test Anatomy
From 84bf22a9b7dcc7d92e38d12c0781403da5236972 Mon Sep 17 00:00:00 2001
From: jaimemendozadev
Date: Wed, 21 Aug 2019 18:53:35 -0700
Subject: [PATCH 033/502] Make content/style changes in Sections 1.2 - 1.4
---
readme.md | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/readme.md b/readme.md
index e5a807d4..02a949a7 100644
--- a/readme.md
+++ b/readme.md
@@ -142,7 +142,7 @@ describe('Products Service', function() {
## ⚪ ️ 1.2 Structure tests by the AAA pattern
-:white_check_mark: **Do:** Structure your tests with 3 well-separated sections Arrange, Act & Assert (AAA). Following this structure guarantees that the reader spends no brain CPU on understanding the test plan:
+:white_check_mark: **Do:** Structure your tests with 3 well-separated sections Arrange, Act & Assert (AAA). Following this structure guarantees that the reader spends no brain-CPU on understanding the test plan:
1st A - Arrange: All the setup code to bring the system to the scenario the test aims to simulate. This might include instantiating the unit under test constructor, adding DB records, mocking/stubbing on objects and any other preparation code
@@ -154,7 +154,7 @@ describe('Products Service', function() {
-❌ **Otherwise:** Not only you spend long daily hours on understanding the main code, now also what should have been the simple part of the day (testing) stretches your brain
+❌ **Otherwise:** Not only do you spend hours understanding the main code, but what should have been the simplest part of the day (testing) stretches your brain
@@ -211,11 +211,11 @@ test('Should be classified as premium', () => {
## ⚪ ️1.3 Describe expectations in a product language: use BDD-style assertions
-:white_check_mark: **Do:** Coding your tests in a declarative-style allows the reader to get the grab instantly without spending even a single brain-CPU cycle. When you write an imperative code that is packed with conditional logic the reader is thrown away to an effortful mental mood. In that sense, code the expectation in a human-like language, declarative BDD style using expect or should and not using custom code. If Chai & Jest don’t include the desired assertion and it’s highly repeatable, consider [extending Jest matcher (Jest)](https://jestjs.io/docs/en/expect#expectextendmatchers) or writing a [custom Chai plugin](https://www.chaijs.com/guide/plugins/)
+:white_check_mark: **Do:** Coding your tests in a declarative style allows the reader to get the gist instantly without spending even a single brain-CPU cycle. When you write imperative code that is packed with conditional logic, the reader is forced to exert more brain-CPU cycles. In that case, code the expectation in a human-like language, declarative BDD style, using `expect` or `should` and not custom code. If Chai & Jest don't include the desired assertion and it’s highly repeatable, consider [extending the Jest matcher (Jest)](https://jestjs.io/docs/en/expect#expectextendmatchers) or writing a [custom Chai plugin](https://www.chaijs.com/guide/plugins/)
-❌ **Otherwise:** The team will write less test and decorate the annoying ones with .skip()
+❌ **Otherwise:** The team will write fewer tests and decorate the annoying ones with .skip()
@@ -276,7 +276,7 @@ it("When asking for an admin, ensure only ordered admins in results" , ()={
## ⚪ ️ 1.4 Stick to black-box testing: Test only public methods
-:white_check_mark: **Do:** Testing the internals brings huge overhead for almost nothing. If your code/API deliver the right results, should you really invest your next 3 hours in testing HOW it worked internally and then maintain these fragile tests? Whenever a public behavior is checked, the private implementation is also implicitly tested and your tests will break only if there is a certain problem (e.g. wrong output). This approach is also referred to as behavioral testing. On the other side, should you test the internals (white box approach) — your focus shifts from planning the component outcome to nitty-gritty details and your test might break because of minor code refactors although the results are fine- this dramatically increases the maintenance burden
+:white_check_mark: **Do:** Testing the internals brings huge overhead for almost nothing. If your code/API deliver the right results, should you really invest the next 3 hours testing HOW it worked internally and then maintain those fragile tests? Whenever a public behavior is checked, the private implementation is also implicitly tested and your tests will break only if there is a certain problem (e.g. wrong output). This approach is also referred to as behavioral testing. On the other side, should you test the internals (white box approach) — your focus shifts from planning the component outcome to nitty-gritty details and your test might break because of minor code refactors although the results are fine- this dramatically increases the maintenance burden
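
Extending the Jest matcher, as bullet 1.3 suggests, keeps assertions in product language; a sketch using Jest's documented `expect.extend` API:

```javascript
expect.extend({
  toBeWithinRange(received, floor, ceiling) {
    const pass = received >= floor && received <= ceiling;
    return {
      pass,
      message: () => `expected ${received}${pass ? ' not' : ''} to be within range ${floor}..${ceiling}`,
    };
  },
});

test('discounted price stays within the allowed range', () => {
  expect(87).toBeWithinRange(0, 100); // reads like the requirement it checks
});
```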
From cfab106acfbdd4d14f7bc138d565aca5cc4e1383 Mon Sep 17 00:00:00 2001
From: jaimemendozadev
Date: Wed, 21 Aug 2019 19:01:10 -0700
Subject: [PATCH 034/502] Make more edits to Section 1.4
---
readme.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/readme.md b/readme.md
index 02a949a7..91a79b96 100644
--- a/readme.md
+++ b/readme.md
@@ -276,11 +276,11 @@ it("When asking for an admin, ensure only ordered admins in results" , ()={
## ⚪ ️ 1.4 Stick to black-box testing: Test only public methods
-:white_check_mark: **Do:** Testing the internals brings huge overhead for almost nothing. If your code/API deliver the right results, should you really invest the next 3 hours testing HOW it worked internally and then maintain those fragile tests? Whenever a public behavior is checked, the private implementation is also implicitly tested and your tests will break only if there is a certain problem (e.g. wrong output). This approach is also referred to as behavioral testing. On the other side, should you test the internals (white box approach) — your focus shifts from planning the component outcome to nitty-gritty details and your test might break because of minor code refactors although the results are fine- this dramatically increases the maintenance burden
+:white_check_mark: **Do:** Testing the internals brings huge overhead for almost nothing. If your code/API deliver the right results, should you really invest your next 3 hours in testing HOW it worked internally and then maintain these fragile tests? Whenever a public behavior is checked, the private implementation is also implicitly tested and your tests will break only if there is a certain problem (e.g. wrong output). This approach is also referred to as `behavioral testing`. On the other side, should you test the internals (white box approach) — your focus shifts from planning the component outcome to nitty-gritty details and your test might break because of minor code refactors although the results are fine - this dramatically increases the maintenance burden
-❌ **Otherwise:** Your test behaves like the [child who cries wolf](https://en.wikipedia.org/wiki/The_Boy_Who_Cried_Wolf): shoot out loud false-positive cries (e.g., A test fails because a private variable name was changed). Unsurprisingly, people will soon start to ignore the CI notifications until someday a real bug will get ignored…
+❌ **Otherwise:** Your tests behave like the [boy who cried wolf](https://en.wikipedia.org/wiki/The_Boy_Who_Cried_Wolf): shouting false-positive cries (e.g., A test fails because a private variable name was changed). Unsurprisingly, people will soon start to ignore the CI notifications until someday, a real bug gets ignored…
✏ Code Examples
From d7bb1d2a61852d264f4c83e491b778322225eab3 Mon Sep 17 00:00:00 2001
From: jaimemendozadev
Date: Wed, 21 Aug 2019 19:16:26 -0700
Subject: [PATCH 035/502] Make edits to Section 1.5
---
readme.md | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/readme.md b/readme.md
index 91a79b96..04590054 100644
--- a/readme.md
+++ b/readme.md
@@ -322,11 +322,11 @@ it("White-box test: When the internal methods get 0 vat, it return 0 response",
## ⚪ ️ ️1.5 Choose the right test doubles: Avoid mocks in favor of stubs and spies
-:white_check_mark: **Do:** Test doubles are a necessary evil because they are coupled to the application internals, yet some provide an immense value ([Read here a reminder about test doubles: mocks vs stubs vs spies](https://martinfowler.com/articles/mocksArentStubs.html)).
+:white_check_mark: **Do:** Test doubles are a necessary evil because they are coupled to the application internals, yet some provide immense value ([Read here a reminder about test doubles: mocks vs stubs vs spies](https://martinfowler.com/articles/mocksArentStubs.html)).
-Before using test doubles, ask a very simple question: Do I use it to test functionality that appears, or could appear, in the requirements document? If no, it’s a smell of white-box testing.
+Before using test doubles, ask a very simple question: Do I use it to test functionality that appears, or could appear, in the requirements document? If no, it’s a white-box testing smell.
-For example, if you want to test what your app behaves reasonably when the payment service is down, you might stub the payment service and trigger some ‘No Response’ return to ensure that the unit under test returns the right value. This checks our application behavior/response/outcome under certain scenarios. You might also use a spy to assert that an email was sent when that service is down — this is again a behavioral check which is likely to appear in a requirements doc (“Send an email if payment couldn’t be saved”). On the flip side, if you mock the Payment service and ensure that it was called with the right JavaScript types — then your test is focused on internal things that got nothing with the application functionality and are likely to change frequently
+For example, if you want to test that your app behaves reasonably when the payment service is down, you might stub the payment service and trigger some ‘No Response’ return to ensure that the unit under test returns the right value. This checks our application behavior/response/outcome under certain scenarios. You might also use a spy to assert that an email was sent when that service is down — this is again a behavioral check which is likely to appear in a requirements doc (“Send an email if payment couldn’t be saved”). On the flip side, if you mock the Payment service and ensure that it was called with the right JavaScript types — then your test is focused on internal things that have nothing to do with the application functionality and are likely to change frequently
From f1730193840a9bc0711513959ea5654ba060f336 Mon Sep 17 00:00:00 2001
From: jaimemendozadev
Date: Wed, 21 Aug 2019 19:28:48 -0700
Subject: [PATCH 036/502] Make edits to Section 1.6
---
readme.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/readme.md b/readme.md
index 04590054..f850989e 100644
--- a/readme.md
+++ b/readme.md
@@ -372,11 +372,11 @@ it("When a valid product is about to be deleted, ensure an email is sent", async
## ⚪ ️1.6 Don’t “foo”, use realistic input data
-:white_check_mark: **Do:** Often production bugs are revealed under some very specific and surprising input — the more realistic the test input is, the greater the chances are to catch bugs early. Use dedicated libraries like [Faker](https://www.npmjs.com/package/faker) to generate pseudo-real data that resembles the variety and form of production data. For example, such libraries can generate realistic phone numbers, usernames, credit card, company names, and even ‘lorem ipsum’ text. You may also create some tests (on top of unit tests, not instead) that randomize fakers data to stretch your unit under test or even import real data from your production environment. Want to take it to the next level? see next bullet (property-based testing).
+:white_check_mark: **Do:** Often production bugs are revealed under some very specific and surprising input — the more realistic the test input is, the greater the chances are to catch bugs early. Use dedicated libraries like [Faker](https://www.npmjs.com/package/faker) to generate pseudo-real data that resembles the variety and form of production data. For example, such libraries can generate realistic phone numbers, usernames, credit cards, company names, and even ‘lorem ipsum’ text. You may also create some tests (on top of unit tests, not as a replacement) that randomize the faker’s data to stretch your unit under test, or even import real data from your production environment. Want to take it to the next level? See the next bullet (property-based testing).
-❌ **Otherwise:** All your development testing will falsely seem green when you use synthetic inputs like “Foo” but then production might turn red when a hacker passes-in a nasty string like “@3e2ddsf . ##’ 1 fdsfds . fds432 AAAA”
+❌ **Otherwise:** All your development testing will falsely show green when you use synthetic inputs like “Foo”, but then production might turn red when a hacker passes in a nasty string like “@3e2ddsf . ##’ 1 fdsfds . fds432 AAAA”
From 93be9dc6a6728a979d58188ff7dc2aaa7fa20a55 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Andr=C3=A9=20Evangelista?=
Date: Thu, 22 Aug 2019 14:25:50 +0100
Subject: [PATCH 037/502] Correcting typos
---
readme.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/readme.md b/readme.md
index e5a807d4..cd15ec71 100644
--- a/readme.md
+++ b/readme.md
@@ -228,7 +228,7 @@ test('Should be classified as premium', () => {
### :thumbsdown: Anti Pattern Example: The reader must skim through not so short, and imperative code just to get the test story
```javascript
-test("When asking for an admin, ensure only ordered admins in results" , ()={
+test("When asking for an admin, ensure only ordered admins in results" , () => {
//assuming we've added here two admins "admin1", "admin2" and "user1"
const allAdmins = getUsers({adminOnly:true});
@@ -258,7 +258,7 @@ test("When asking for an admin, ensure only ordered admins in results" , ()={
```javascript
-it("When asking for an admin, ensure only ordered admins in results" , ()={
+it("When asking for an admin, ensure only ordered admins in results" , () => {
//assuming we've added here two admins
const allAdmins = getUsers({adminOnly:true});
From 6717f56bdb889587f0c15b2760bfa24de8094818 Mon Sep 17 00:00:00 2001
From: Idan Dagan
Date: Thu, 22 Aug 2019 16:36:31 +0300
Subject: [PATCH 038/502] Update readme.md
---
readme.md | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/readme.md b/readme.md
index 6691ba83..c4568b50 100644
--- a/readme.md
+++ b/readme.md
@@ -31,30 +31,30 @@ Start by understanding the ubiquitous testing practices that are the foundation
## Table of Contents
-* ####[Section 0: The Golden Rule](#Section-0️⃣-:-The-Golden-Rule)
+#### [Section 0: The Golden Rule](#Section-0️⃣-:-The-Golden-Rule)
A single advice that inspires all the others (1 special bullet)
-* ####[Section 1: The Test Anatomy](#Section-1:-The-Test-Anatomy)
+#### [Section 1: The Test Anatomy](#Section-1:-The-Test-Anatomy)
The foundation - structuring clean tests (12 bullets)
-* ####[Section 2: Backend](#Section-2️⃣-:-Backend-Testing)
+#### [Section 2: Backend](#Section-2️⃣-:-Backend-Testing)
Writing backend and Microservices tests efficiently (8 bullets)
-* ####[Section 3: Frontend](#Section-3️⃣:-Frontend-Testing)
+#### [Section 3: Frontend](#Section-3️⃣:-Frontend-Testing)
Writing tests for web UI including component and E2E tests (11 bullets)
-* ####[Section 4: Measuring Tests Effectiveness](#Section-4️⃣:-Measuring-Test-Effectiveness)
+#### [Section 4: Measuring Tests Effectiveness](#Section-4️⃣:-Measuring-Test-Effectiveness)
Watching the watchman - measuring test quality (4 bullets)
-* ####[Section 5: Continuous Integration](#Section-5️⃣-CI-and-Other-Quality-Measures)
+#### [Section 5: Continuous Integration](#Section-5️⃣-CI-and-Other-Quality-Measures)
Guidelines for CI in the JS world (9 bullets)
From 4ec4b90243593e7388ff758e76a8af0e40fc6918 Mon Sep 17 00:00:00 2001
From: Idan
Date: Thu, 22 Aug 2019 16:41:32 +0300
Subject: [PATCH 039/502] Update readme.md
---
readme.md | 15 ++++++---------
1 file changed, 6 insertions(+), 9 deletions(-)
diff --git a/readme.md b/readme.md
index 6691ba83..8dfae9f5 100644
--- a/readme.md
+++ b/readme.md
@@ -31,30 +31,27 @@ Start by understanding the ubiquitous testing practices that are the foundation
## Table of Contents
-* ####[Section 0: The Golden Rule](#Section-0️⃣-:-The-Golden-Rule)
+#### [Section 0: The Golden Rule](#section-0️⃣---the-golden-rule)
A single advice that inspires all the others (1 special bullet)
-* ####[Section 1: The Test Anatomy](#Section-1:-The-Test-Anatomy)
+#### [Section 1: The Test Anatomy](#section-1-the-test-anatomy-1)
The foundation - structuring clean tests (12 bullets)
-* ####[Section 2: Backend](#Section-2️⃣-:-Backend-Testing)
+#### [Section 2: Backend](#section-2️⃣--backend-testing)
Writing backend and Microservices tests efficiently (8 bullets)
-
-* ####[Section 3: Frontend](#Section-3️⃣:-Frontend-Testing)
+#### [Section 3: Frontend](#section-3️⃣-frontend-testing)
Writing tests for web UI including component and E2E tests (11 bullets)
-
-* ####[Section 4: Measuring Tests Effectiveness](#Section-4️⃣:-Measuring-Test-Effectiveness)
+#### [Section 4: Measuring Tests Effectiveness](#section-4️⃣-measuring-test-effectiveness)
Watching the watchman - measuring test quality (4 bullets)
-
-* ####[Section 5: Continuous Integration](#Section-5️⃣-CI-and-Other-Quality-Measures)
+#### [Section 5: Continuous Integration](#section-5️⃣-ci-and-other-quality-measures)
Guidelines for CI in the JS world (9 bullets)
From fe471392c5e9e83c0901c945509ba53f34564aa3 Mon Sep 17 00:00:00 2001
From: Idan
Date: Thu, 22 Aug 2019 16:48:21 +0300
Subject: [PATCH 040/502] fixed section titles
---
readme.md | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/readme.md b/readme.md
index 8dfae9f5..e41e852c 100644
--- a/readme.md
+++ b/readme.md
@@ -59,7 +59,7 @@ Guidelines for CI in the JS world (9 bullets)
-# Section 0️⃣ : The Golden Rule
+# Section 0️⃣: The Golden Rule
@@ -706,7 +706,7 @@ Learn and practice [TDD principles](https://www.sm-cloud.com/book-review-test-dr
-# Section 2️⃣ : Backend Testing
+# Section 2️⃣: Backend Testing
## ⚪ ️2.1 Enrich your testing portfolio: Look beyond unit tests and the pyramid
@@ -1736,7 +1736,7 @@ it("Test name", () => {*//error:no-identical-title. Assign unique titles to test
-# Section 5️⃣ CI and Other Quality Measures
+# Section 5️⃣: CI and Other Quality Measures
From 79a6f4061775410c0a03d59a935ad7690d4e78d0 Mon Sep 17 00:00:00 2001
From: Idan
Date: Thu, 22 Aug 2019 16:50:36 +0300
Subject: [PATCH 041/502] fixed links
---
readme.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/readme.md b/readme.md
index e41e852c..db7bc298 100644
--- a/readme.md
+++ b/readme.md
@@ -31,7 +31,7 @@ Start by understanding the ubiquitous testing practices that are the foundation
## Table of Contents
-#### [Section 0: The Golden Rule](#section-0️⃣---the-golden-rule)
+#### [Section 0: The Golden Rule](#section-0️⃣-the-golden-rule)
A single advice that inspires all the others (1 special bullet)
@@ -39,7 +39,7 @@ A single advice that inspires all the others (1 special bullet)
The foundation - structuring clean tests (12 bullets)
-#### [Section 2: Backend](#section-2️⃣--backend-testing)
+#### [Section 2: Backend](#section-2️⃣-backend-testing)
Writing backend and Microservices tests efficiently (8 bullets)
From 937f6b804b54985ec46c9ab416641c46333298d9 Mon Sep 17 00:00:00 2001
From: Idan Dagan
Date: Thu, 22 Aug 2019 18:11:26 +0300
Subject: [PATCH 042/502] Update readme.md
---
readme.md | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/readme.md b/readme.md
index db7bc298..0935715c 100644
--- a/readme.md
+++ b/readme.md
@@ -31,27 +31,27 @@ Start by understanding the ubiquitous testing practices that are the foundation
## Table of Contents
-#### [Section 0: The Golden Rule](#section-0️⃣-the-golden-rule)
+#### [`Section 0: The Golden Rule`](#section-0️⃣-the-golden-rule)
A single advice that inspires all the others (1 special bullet)
-#### [Section 1: The Test Anatomy](#section-1-the-test-anatomy-1)
+#### [`Section 1: The Test Anatomy`](#section-1-the-test-anatomy-1)
The foundation - structuring clean tests (12 bullets)
-#### [Section 2: Backend](#section-2️⃣-backend-testing)
+#### [`Section 2: Backend`](#section-2️⃣-backend-testing)
Writing backend and Microservices tests efficiently (8 bullets)
-#### [Section 3: Frontend](#section-3️⃣-frontend-testing)
+#### [`Section 3: Frontend`](#section-3️⃣-frontend-testing)
Writing tests for web UI including component and E2E tests (11 bullets)
-#### [Section 4: Measuring Tests Effectiveness](#section-4️⃣-measuring-test-effectiveness)
+#### [`Section 4: Measuring Tests Effectiveness`](#section-4️⃣-measuring-test-effectiveness)
Watching the watchman - measuring test quality (4 bullets)
-#### [Section 5: Continuous Integration](#section-5️⃣-ci-and-other-quality-measures)
+#### [`Section 5: Continuous Integration`](#section-5️⃣-ci-and-other-quality-measures)
Guidelines for CI in the JS world (9 bullets)
From 1211a2bf141bb708a96c4b1c15b8f11283c77a80 Mon Sep 17 00:00:00 2001
From: Idan Dagan
Date: Thu, 22 Aug 2019 18:13:11 +0300
Subject: [PATCH 043/502] Added code ticks
---
readme.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/readme.md b/readme.md
index 0935715c..44ca74c8 100644
--- a/readme.md
+++ b/readme.md
@@ -29,7 +29,7 @@ Start by understanding the ubiquitous testing practices that are the foundation
-## Table of Contents
+## `Table of Contents`
#### [`Section 0: The Golden Rule`](#section-0️⃣-the-golden-rule)
From 55dffb07462068d8ef895ccc6a10034f67443f5f Mon Sep 17 00:00:00 2001
From: Yoni Goldberg
Date: Sat, 24 Aug 2019 05:46:57 -0700
Subject: [PATCH 044/502] Create questions-answers.md
---
.operations/questions-answers.md | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)
create mode 100644 .operations/questions-answers.md
diff --git a/.operations/questions-answers.md b/.operations/questions-answers.md
new file mode 100644
index 00000000..bb18840c
--- /dev/null
+++ b/.operations/questions-answers.md
@@ -0,0 +1,18 @@
+# Common questions and answers
+
+### Q: How do I start a new translation?
+
+**Answer: **
+
+welcome aboard , having a {language} translation would be awesome 🔥
+
+Having a Hungarian translation would be great 👍
+
+Before you start with this, we've prepared some basic workflow guidelines:
+
+Work on your own fork - fork, create a branch, translate & collaborate with other translators, then open a PR
+Focus on translation, not content editing - the focus is on translation, should anyone want to modify the content or the graphics - let's PR a draft in English first and then translate to other languages. Also the format of the text should remain intact (same design)
+Duplicate the readme and the inner pages - the content should be translated over at a duplicated page, step by step. As an example, README.md would be come README.{translated-language}.md (e.g. README.french.md), all other files should be duplicated similarly. So the number of English & translated pages should be the same
+Collaborate - once you setup the translation foundation (branch, duplicate pages), we can announce the work on a new language and get others involved to support you in translating (if you wish so, of course)
+We're here to help - let us know whether we can do anything to support you. We can Tweet about this work, etc. 🚀
+
From 5a9f655c776a9ed04aae2910aa5e52ac52db1355 Mon Sep 17 00:00:00 2001
From: Yoni Goldberg
Date: Sat, 24 Aug 2019 06:03:56 -0700
Subject: [PATCH 045/502] Update questions-answers.md
---
.operations/questions-answers.md | 19 +++++++++----------
1 file changed, 9 insertions(+), 10 deletions(-)
diff --git a/.operations/questions-answers.md b/.operations/questions-answers.md
index bb18840c..d5e7b75d 100644
--- a/.operations/questions-answers.md
+++ b/.operations/questions-answers.md
@@ -1,18 +1,17 @@
# Common questions and answers
-### Q: How do I start a new translation?
+## Q: How do I start a new translation?
-**Answer: **
+**Answer:**
-welcome aboard , having a {language} translation would be awesome 🔥
+welcome aboard! Having a {language} translation would be awesome 🔥. I'll be glad to collaborate with you on this and help wherever I can
-Having a Hungarian translation would be great 👍
+Before you start with this, I've prepared some basic workflow guidelines:
-Before you start with this, we've prepared some basic workflow guidelines:
+**Where to do the translation?** - Fork and work on your own copy, create a readme-{language}.md file (e.g. readme-fr.md) and do the translation work over there
-Work on your own fork - fork, create a branch, translate & collaborate with other translators, then open a PR
-Focus on translation, not content editing - the focus is on translation, should anyone want to modify the content or the graphics - let's PR a draft in English first and then translate to other languages. Also the format of the text should remain intact (same design)
-Duplicate the readme and the inner pages - the content should be translated over at a duplicated page, step by step. As an example, README.md would be come README.{translated-language}.md (e.g. README.french.md), all other files should be duplicated similarly. So the number of English & translated pages should be the same
-Collaborate - once you setup the translation foundation (branch, duplicate pages), we can announce the work on a new language and get others involved to support you in translating (if you wish so, of course)
-We're here to help - let us know whether we can do anything to support you. We can Tweet about this work, etc. 🚀
+**How to push changes?** - I will create a dedicated branch for you, translations-{language}-staging (e.g. translations-fr-staging); whenever you want to save some changes or share with the team - just PR to this branch
+**How & when to publish to master?** - The content can be published once it's 70% translated and 100% language proofed. Kindly run it through a spell checker. Whenever you feel that the content meets these guidelines, just raise a flag and I'll merge the language branch into master
+
+**Will I get credit for the translation work?** - Obviously! Your name will appear nearby the langauge flag in the main readme.md, added to the repo team, appear boldly at the top of the translation page - 'Translated, adapted and reviewed by {Your name}'. We will also publish a medium article with the translation with your name at the top
From bb4fef26a8fa26f5468a1ff2ad8dcb574411d5b3 Mon Sep 17 00:00:00 2001
From: Yoni Goldberg
Date: Sat, 24 Aug 2019 06:10:01 -0700
Subject: [PATCH 046/502] Update questions-answers.md
---
.operations/questions-answers.md | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/.operations/questions-answers.md b/.operations/questions-answers.md
index d5e7b75d..67b42d7e 100644
--- a/.operations/questions-answers.md
+++ b/.operations/questions-answers.md
@@ -4,7 +4,7 @@
**Answer:**
-welcome aboard! Having a {language} translation would be awesome 🔥. I'll be glad to collaborate with you on this and help wherever I can
+Welcome aboard! Having a Brazilian Portuguese translation would be awesome 🔥🌈👌 . I'll be glad to collaborate with you on this and help wherever I can
Before you start with this, I've prepared some basic workflow guidelines:
@@ -14,4 +14,6 @@ Before you start with this, I've prepared some basic workflow guidelines:
**How & when to publish to master?** - The content can be published once it's 70% translated and 100% language proofed. Kindly run it through a spell checker. Whenever you feel that the content meets these guidelines, just raise a flag and I'll merge the language branch into master
-**Will I get credit for the translation work?** - Obviously! Your name will appear nearby the langauge flag in the main readme.md, added to the repo team, appear boldly at the top of the translation page - 'Translated, adapted and reviewed by {Your name}'. We will also publish a medium article with the translation with your name at the top
+**Will I get credit for the translation work?** - Obviously! Your name will appear nearby the language flag in the main readme.md, added to the repo team, appear boldly at the top of the translation page - 'Translated, adapted and reviewed by {Your name}'. We will also publish a medium article with the translation with your name at the top
+
+Looking forward and excited to work on this!
From e89e598f57bf53668333df2fdcd363ec821a606f Mon Sep 17 00:00:00 2001
From: Rain Byun
Date: Mon, 26 Aug 2019 00:06:33 +0900
Subject: [PATCH 047/502] Add readme file for Korean translation.
- Just getting started.
---
readme.korean.md | 2022 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 2022 insertions(+)
create mode 100644 readme.korean.md
diff --git a/readme.korean.md b/readme.korean.md
new file mode 100644
index 00000000..8ab281ec
--- /dev/null
+++ b/readme.korean.md
@@ -0,0 +1,2022 @@
+
+
+
+
+# 👇 Why this guide can take your testing skills to the next level
+
+
+
+## 📗 45+ thorough and very comprehensive best practices
+This is a trustworthy A-to-Z guide for JavaScript and Node.js. It summarizes and curates dozens of the best blog posts, books and tools.
+
+## 🚢 Beyond the basics, into advanced territory
+Experience a journey that goes into advanced topics such as testing in production, mutation testing, property-based testing and many other strategic & professional tools.
+Reading every word of this guide can raise your testing skills above average.
+
+## 🌐 Full-stack: frontend, backend, CI, anything
+Start by understanding the ubiquitous testing practices that are the foundation of every application layer. Then study frontend/UI, backend, CI, or all of them.
+
+
+
+### Written by Yoni Goldberg
+* A JavaScript & Node.js consultant
+* 👨🏫 [My testing workshop](https://www.testjavascript.com) - learn about [my workshops](https://www.testjavascript.com) in Europe and the US
+* [Follow me on Twitter](https://twitter.com/goldbergyoni/)
+* Come hear me speak in [LA](https://js.la/), [Verona](https://2019.nodejsday.it/), [Kharkiv](https://kharkivjs.org/), or at a [free webinar](https://zoom.us/webinar/register/1015657064375/WN_Lzvnuv4oQJOYey2jXNqX6A). Future events will be announced soon
+* [My JavaScript newsletter](https://testjavascript.com/newsletter/) - insights and content only on strategic matters
+
+
+
+## `Table of Contents`
+
+#### [`Section 0: The Golden Rule`](#section-0️⃣-the-golden-rule)
+
+A single advice that inspires all the others (1 special bullet)
+
+#### [`Section 1: The Test Anatomy`](#section-1-the-test-anatomy-1)
+
+The foundation - structuring clean tests (12 bullets)
+
+#### [`Section 2: Backend`](#section-2️⃣-backend-testing)
+
+Writing backend and Microservices tests efficiently (8 bullets)
+
+#### [`Section 3: Frontend`](#section-3️⃣-frontend-testing)
+
+Writing tests for web UI including component and E2E tests (11 bullets)
+
+#### [`Section 4: Measuring Tests Effectiveness`](#section-4️⃣-measuring-test-effectiveness)
+
+Watching the watchman - measuring test quality (4 bullets)
+
+
+#### [`Section 5: Continuous Integration`](#section-5️⃣-ci-and-other-quality-measures)
+
+Guidelines for CI in the JS world (9 bullets)
+
+
+
+# Section 0️⃣: The Golden Rule
+
+
+
+## ⚪️ 0. The Golden Rule: Design for lean testing
+
+:white_check_mark: **Do:**
+Testing code is not like production code - design it to be dead-simple, short, abstraction-free, flat, delightful to work with, lean. One should look at a test and get the intent instantly.
+
+Our minds are full of the main production code, so we have no 'headspace' for additional complexity. Should we try to squeeze yet more challenging code into our poor brains, it will slow the whole team down, which works against the very reason we test. Practically, this is where many teams just abandon testing.
+
+The tests are an opportunity for something else - a friendly and smiley assistant, one that's delightful to work with and delivers great value for such a small investment. Science tells us we have two brain systems: system 1, which is used for effortless activities like driving a car on an empty road, and system 2, which is meant for complex and conscious operations like solving a math equation. Design your test for system 1: when looking at test code, it should *feel* as easy as modifying an HTML document and not like solving 2X(17 × 24).
+
+This can be achieved by selectively cherry-picking techniques, tools and test targets that are cost-effective and provide great ROI. Test only as much as needed, strive to keep it nimble, and sometimes it's even worth dropping some tests and trading reliability for agility and simplicity.
+
+
+
+Most of the advice below derives from this principle.
+
+### Ready to start?
+
+
+
+
+# Section 1: The Test Anatomy
+
+
+
+## ⚪ ️ 1.1 Include 3 parts in each test name
+
+:white_check_mark: **Do:** A test report should tell whether the current application revision satisfies the requirements for the people who are not necessarily familiar with the code: the tester, the DevOps engineer who is deploying and the future you two years from now. This can be achieved best if the tests speak at the requirements level and include 3 parts:
+
+(1) What is being tested? For example, the ProductsService.addNewProduct method
+
+(2) Under what circumstances and scenario? For example, no price is passed to the method
+
+(3) What is the expected result? For example, the new product is not approved
+
+
+
+
+❌ **Otherwise:** A deployment just failed, a test named “Add product” failed. Does this tell you what exactly is malfunctioning?
+
+
+
+**👇 Note:** Each bullet has code examples and sometimes also an image illustration. Click to expand
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: A test name that constitutes 3 parts
+
+
+
+```javascript
+//1. unit under test
+describe('Products Service', function() {
+ describe('Add new product', function() {
+ //2. scenario and 3. expectation
+ it('When no price is specified, then the product status is pending approval', ()=> {
+ const newProduct = new ProductService().add(...);
+ expect(newProduct.status).to.equal('pendingApproval');
+ });
+ });
+});
+
+```
+
+
+### :clap: Doing It Right Example: A test name that constitutes 3 parts
+
+
+
+
+
+
+## ⚪ ️ 1.2 Structure tests by the AAA pattern
+
+:white_check_mark: **Do:** Structure your tests with 3 well-separated sections: Arrange, Act & Assert (AAA). Following this structure guarantees that the reader spends no brain-CPU on understanding the test plan:
+
+1st A - Arrange: All the setup code to bring the system to the scenario the test aims to simulate. This might include instantiating the unit under test, adding DB records, mocking/stubbing objects and any other preparation code
+
+2nd A - Act: Execute the unit under test. Usually 1 line of code
+
+3rd A - Assert: Ensure that the received value satisfies the expectation. Usually 1 line of code
+
+
+
+
+
+❌ **Otherwise:** Not only do you spend hours understanding the main code, but what should have been the simplest part of the day (testing) stretches your brain
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: A test structured with the AAA pattern
+
+ 
+
+```javascript
+describe('Customer classifier', () => {
+ test('When customer spent more than 500$, should be classified as premium', () => {
+ //Arrange
+ const customerToClassify = {spent:505, joined: new Date(), id:1}
+ const DBStub = sinon.stub(dataAccess, "getCustomer")
+        .returns({id:1, classification: 'regular'});
+
+ //Act
+ const receivedClassification = customerClassifier.classifyCustomer(customerToClassify);
+
+ //Assert
+ expect(receivedClassification).toMatch('premium');
+ });
+});
+```
+
+
+
+### :thumbsdown: Anti Pattern Example: No separation, one bulk, harder to interpret
+
+```javascript
+test('Should be classified as premium', () => {
+ const customerToClassify = {spent:505, joined: new Date(), id:1}
+ const DBStub = sinon.stub(dataAccess, "getCustomer")
+        .returns({id:1, classification: 'regular'});
+ const receivedClassification = customerClassifier.classifyCustomer(customerToClassify);
+ expect(receivedClassification).toMatch('premium');
+ });
+```
+
+
+
+
+
+
+
+
+
+
+
+## ⚪ ️1.3 Describe expectations in a product language: use BDD-style assertions
+
+:white_check_mark: **Do:** Coding your tests in a declarative style allows the reader to grasp the intent instantly without spending even a single brain-CPU cycle. When you write imperative code that is packed with conditional logic, the reader is forced to exert more brain-CPU cycles. In that case, code the expectation in a human-like language, declarative BDD style using `expect` or `should` and not custom code. If Chai & Jest don’t include the desired assertion and it’s highly repeatable, consider [extending Jest matcher (Jest)](https://jestjs.io/docs/en/expect#expectextendmatchers) or writing a [custom Chai plugin](https://www.chaijs.com/guide/plugins/)
+
+
+
+❌ **Otherwise:** The team will write fewer tests and decorate the annoying ones with .skip()
+
+
+
+✏ Code Examples
+
+ 
+
+### :thumbsdown: Anti Pattern Example: The reader must skim through not-so-short, imperative code just to get the test story
+
+```javascript
+test("When asking for an admin, ensure only ordered admins in results" , () => {
+ //assuming we've added here two admins "admin1", "admin2" and "user1"
+ const allAdmins = getUsers({adminOnly:true});
+
+  let admin1Found = false, admin2Found = false;
+
+ allAdmins.forEach(aSingleUser => {
+ if(aSingleUser === "user1"){
+ assert.notEqual(aSingleUser, "user1", "A user was found and not admin");
+ }
+ if(aSingleUser==="admin1"){
+ admin1Found = true;
+ }
+ if(aSingleUser==="admin2"){
+ admin2Found = true;
+ }
+ });
+
+ if(!admin1Found || !admin2Found ){
+ throw new Error("Not all admins were returned");
+ }
+});
+
+```
+
+
+### :clap: Doing It Right Example: Skimming through the following declarative test is a breeze
+
+
+```javascript
+it("When asking for an admin, ensure only ordered admins in results" , () => {
+ //assuming we've added here two admins
+ const allAdmins = getUsers({adminOnly:true});
+
+ expect(allAdmins).to.include.ordered.members(["admin1" , "admin2"])
+ .but.not.include.ordered.members(["user1"]);
+});
+
+```
+
+
+
+
+
+
+
+## ⚪ ️ 1.4 Stick to black-box testing: Test only public methods
+
+:white_check_mark: **Do:** Testing the internals brings huge overhead for almost nothing. If your code/API deliver the right results, should you really invest your next 3 hours in testing HOW it worked internally and then maintain these fragile tests? Whenever a public behavior is checked, the private implementation is also implicitly tested and your tests will break only if there is a certain problem (e.g. wrong output). This approach is also referred to as `behavioral testing`. On the other side, should you test the internals (white box approach) — your focus shifts from planning the component outcome to nitty-gritty details and your test might break because of minor code refactors although the results are fine - this dramatically increases the maintenance burden
+
+
+
+❌ **Otherwise:** Your tests behave like the [boy who cried wolf](https://en.wikipedia.org/wiki/The_Boy_Who_Cried_Wolf): shouting false-positive cries (e.g., a test fails because a private variable name was changed). Unsurprisingly, people will soon start to ignore the CI notifications until someday, a real bug gets ignored…
+
+
+✏ Code Examples
+
+
+
+### :thumbsdown: Anti Pattern Example: A test case is testing the internals for no good reason
+
+```javascript
+class ProductService{
+  //this method is only used internally
+  //Changing this name will make the tests fail
+  calculateVATAdd(priceWithoutVAT){
+    return {finalPrice: priceWithoutVAT * 1.2};
+    //Changing the result format or the key name above will make the tests fail
+  }
+  //public method
+  getPrice(productId){
+    const desiredProduct = DB.getProduct(productId);
+    const finalPrice = this.calculateVATAdd(desiredProduct.price).finalPrice;
+    return finalPrice;
+  }
+}
+
+
+it("White-box test: When the internal methods get 0 vat, it returns 0 response", async () => {
+  //There's no requirement to allow users to calculate the VAT, only to show the final price. Nevertheless we falsely insist here on testing the class internals
+  expect(new ProductService().calculateVATAdd(0).finalPrice).to.equal(0);
+});
+
+```
+
+
+
+
+
+
+
+
+## ⚪ ️ ️1.5 Choose the right test doubles: Avoid mocks in favor of stubs and spies
+
+:white_check_mark: **Do:** Test doubles are a necessary evil because they are coupled to the application internals, yet some provide immense value ([Read here a reminder about test doubles: mocks vs stubs vs spies](https://martinfowler.com/articles/mocksArentStubs.html)).
+
+Before using test doubles, ask a very simple question: Do I use it to test functionality that appears, or could appear, in the requirements document? If no, it’s a white-box testing smell.
+
+For example, if you want to test that your app behaves reasonably when the payment service is down, you might stub the payment service and trigger some ‘No Response’ return to ensure that the unit under test returns the right value. This checks our application behavior/response/outcome under certain scenarios. You might also use a spy to assert that an email was sent when that service is down — this is again a behavioral check which is likely to appear in a requirements doc (“Send an email if payment couldn’t be saved”). On the flip side, if you mock the Payment service and ensure that it was called with the right JavaScript types — then your test is focused on internal things that have nothing to do with the application functionality and are likely to change frequently
+
+
+
+❌ **Otherwise:** Any refactoring of code mandates searching for all the mocks in the code and updating accordingly. Tests become a burden rather than a helpful friend
+
+
+
+✏ Code Examples
+
+
+
+### :thumbsdown: Anti-pattern example: Mocks focus on the internals
+
+```javascript
+it("When a valid product is about to be deleted, ensure data access DAL was called once, with the right product and right config", async () => {
+ //Assume we already added a product
+ const dataAccessMock = sinon.mock(DAL);
+ //hmmm BAD: testing the internals is actually our main goal here, not just a side-effect
+ dataAccessMock.expects("deleteProduct").once().withArgs(DBConfig, theProductWeJustAdded, true, false);
+ new ProductService().deletePrice(theProductWeJustAdded);
+ dataAccessMock.verify();
+});
+```
+
+
+### :clap: Doing It Right Example: Spies are focused on testing the requirements but as a side-effect unavoidably touch the internals
+
+```javascript
+it("When a valid product is about to be deleted, ensure an email is sent", async () => {
+ //Assume we already added here a product
+ const spy = sinon.spy(Emailer.prototype, "sendEmail");
+ new ProductService().deletePrice(theProductWeJustAdded);
+ //hmmm OK: we deal with internals? Yes, but as a side effect of testing the requirements (sending an email)
+});
+```
+
+
+
+
+
+
+
+## ⚪ ️1.6 Don’t “foo”, use realistic input data
+
+:white_check_mark: **Do:** Often production bugs are revealed under some very specific and surprising input — the more realistic the test input is, the greater the chances are to catch bugs early. Use dedicated libraries like [Faker](https://www.npmjs.com/package/faker) to generate pseudo-real data that resembles the variety and form of production data. For example, such libraries can generate realistic phone numbers, usernames, credit cards, company names, and even ‘lorem ipsum’ text. You may also create some tests (on top of unit tests, not as a replacement) that randomize the faker’s data to stretch your unit under test, or even import real data from your production environment. Want to take it to the next level? See the next bullet (property-based testing).
+
+
+
+❌ **Otherwise:** All your development testing will falsely show green when you use synthetic inputs like “Foo”, but then production might turn red when a hacker passes in a nasty string like “@3e2ddsf . ##’ 1 fdsfds . fds432 AAAA”
+
+
+
+
+✏ Code Examples
+
+
+
+### :thumbsdown: Anti-Pattern Example: A test suite that passes due to non-realistic data
+
+
+
+
+```javascript
+const addProduct = (name, price) => {
+  const productNameRegexNoSpace = /^\S*$/; //no white-space allowed
+
+  if (!productNameRegexNoSpace.test(name))
+    return false; //this path is never reached due to dull input
+
+  //some logic here
+  return true;
+};
+
+test("Wrong: When adding new product with valid properties, get successful confirmation", async () => {
+ //The string "Foo" which is used in all tests never triggers a false result
+ const addProductResult = addProduct("Foo", 5);
+ expect(addProductResult).toBe(true);
+  //False positive: the operation succeeded because we never tried with a long
+  //product name that includes spaces
+});
+
+```
+
+
+### :clap: Doing It Right Example: Randomizing realistic input
+```javascript
+it("Better: When adding new valid product, get successful confirmation", async () => {
+ const addProductResult = addProduct(faker.commerce.productName(), faker.random.number());
+ //Generated random input: {'Sleek Cotton Computer', 85481}
+ expect(addProductResult).to.be.true;
+ //Test failed, the random input triggered some path we never planned for.
+ //We discovered a bug early!
+});
+```
+
+
+
+
+
+
+
+
+## ⚪ ️ 1.7 Test many input combinations using Property-based testing
+
+:white_check_mark: **Do:** Typically we choose a few input samples for each test. Even when the input format resembles real-world data (see bullet ‘Don’t foo’), we cover only a few input combinations (method(‘’, true, 1), method(“string”, false, 0)). However, in production, an API that is called with 5 parameters can be invoked with thousands of different permutations, and one of them might bring our process down ([see Fuzz Testing](https://en.wikipedia.org/wiki/Fuzzing)). What if you could write a single test that sends 1000 permutations of different inputs automatically and catches for which input our code fails to return the right response? Property-based testing is a technique that does exactly that: by sending all the possible input combinations to your unit under test, it increases the serendipity of finding a bug. For example, given a method — addNewProduct(id, name, isDiscount) — the supporting libraries will call this method with many combinations of (number, string, boolean) like (1, “iPhone”, false), (2, “Galaxy”, true). You can run property-based testing using your favorite test runner (Mocha, Jest, etc.) using libraries like [js-verify](https://github.com/jsverify/jsverify) or [testcheck](https://github.com/leebyron/testcheck-js) (much better documentation). Update: Nicolas Dubien suggests in the comments below to [check out fast-check](https://github.com/dubzzz/fast-check#readme) which seems to offer some additional features and also to be actively maintained
+
+
+
+❌ **Otherwise:** Unconsciously, you choose the test inputs that cover only code paths that work well. Unfortunately, this decreases the efficiency of testing as a vehicle to expose bugs
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: Testing many input permutations with “mocha-testcheck”
+
+
+
+```javascript
+require('mocha-testcheck').install();
+const {expect} = require('chai');
+
+describe('Product service', () => {
+ describe('Adding new', () => {
+ //this will run 100 times with different random properties
+ check.it('Add new product with random yet valid properties, always successful',
+ gen.int, gen.string, (id, name) => {
+ expect(addNewProduct(id, name).status).to.equal('approved');
+ });
+ })
+});
+
+```
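+
+The above-mentioned fast-check library can express the same test. Here is a minimal sketch, assuming the same addNewProduct function is under test:
+
+```javascript
+const fc = require('fast-check');
+
+describe('Product service', () => {
+  describe('Adding new', () => {
+    //fast-check generates and runs ~100 random (id, name) pairs by default
+    it('Add new product with random yet valid properties, always successful', () => {
+      fc.assert(
+        fc.property(fc.integer(), fc.string(), (id, name) => {
+          expect(addNewProduct(id, name).status).to.equal('approved');
+        })
+      );
+    });
+  });
+});
+```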
+
+
+
+
+
+
+
+
+## ⚪ ️ 1.8 If needed, use only short & inline snapshots
+
+:white_check_mark: **Do:** When there is a need for [snapshot testing](https://jestjs.io/docs/en/snapshot-testing), use only short and focused snapshots (i.e. 3-7 lines) that are included as part of the test ([Inline Snapshot](https://jestjs.io/docs/en/snapshot-testing#inline-snapshots)) and not within external files. Following this guideline will ensure your tests remain self-explanatory and less fragile.
+
+On the other hand, ‘classic snapshots’ tutorials and tools encourage storing big files (e.g. component rendering markup, API JSON result) on some external medium and ensuring, each time the test runs, that the received result is compared with the saved version. This, for example, can implicitly couple our test to 1000 lines with 3000 data values that the test writer never read and reasoned about. Why is this wrong? By doing so, there are 1000 reasons for your test to fail - it’s enough for a single line to change for the snapshot to become invalid, and this is likely to happen a lot. How frequently? For every space, comment or minor CSS/HTML change. Not only this, the test name wouldn’t give a clue about the failure as it just checks that 1000 lines didn’t change, and it also encourages the test writer to accept as the desired truth a long document he couldn’t inspect and verify. All of these are symptoms of an obscure and eager test that is not focused and aims to achieve too much
+
+It’s worth noting that there are a few cases where long & external snapshots are acceptable - when asserting on schema and not data (extracting out values and focusing on fields), or when the received document rarely changes
+
+
+❌ **Otherwise:** A UI test fails. The code seems right, the screen renders perfect pixels. What happened? Your snapshot testing just found a difference between the original document and the currently received one - a single space character was added to the markdown...
+
+
+
+✏ Code Examples
+
+
+
+### :thumbsdown: Anti-Pattern Example: Coupling our test to 2000 unseen lines of code
+
+
+
+```javascript
+it('TestJavaScript.com is rendered correctly', () => {
+
+  //Arrange
+
+  //Act
+  const receivedPage = renderer
+    .create(<DisplayPage>Test JavaScript</DisplayPage>)
+    .toJSON();
+
+  //Assert
+  expect(receivedPage).toMatchSnapshot();
+  //We now implicitly maintain a 2000 lines long document
+  //every additional line break or comment - will break this test
+
+});
+```
+
+
+### :clap: Doing It Right Example: Expectations are visible and focused
+```javascript
+it('When visiting TestJavaScript.com home page, a menu is displayed', () => {
+  //Arrange
+
+  //Act
+  const receivedPage = renderer
+    .create(<DisplayPage>Test JavaScript</DisplayPage>)
+    .toJSON();
+
+  //Assert
+  const menu = receivedPage.content.menu;
+  expect(menu).toMatchInlineSnapshot(`
+<ul>
+  <li>Home</li>
+  <li>About</li>
+  <li>Contact</li>
+</ul>
+`);
+});
+```
+
+
+
+
+
+
+## ⚪ ️1.9 Avoid global test fixtures and seeds, add data per-test
+
+:white_check_mark: **Do:** Going by the golden rule (bullet 0), each test should add and act on its own set of DB rows to prevent coupling and to easily reason about the test flow. In reality, this is often violated by testers who seed the DB with data before running the tests ([also known as ‘test fixture’](https://en.wikipedia.org/wiki/Test_fixture)) for the sake of performance improvement. While performance is indeed a valid concern — it can be mitigated (see the “Component testing” bullet). However, test complexity is a much greater pain that should govern other considerations most of the time. Practically, make each test case explicitly add the DB records it needs and act only on those records. If performance becomes a critical concern — a balanced compromise might come in the form of seeding only the suites of tests that don’t mutate data (e.g. queries)
+
+
+
+❌ **Otherwise:** A few tests fail, a deployment is aborted, and our team is about to spend precious time. Do we have a bug? Let’s investigate. Oh no — it seems that two tests were mutating the same seed data
+
+
+
+
+✏ Code Examples
+
+
+
+### :thumbsdown: Anti Pattern Example: tests are not independent and rely on some global hook to feed global DB data
+
+
+
+```javascript
+before(async () => {
+ //adding sites and admins data to our DB. Where is the data? outside. At some external json or migration framework
+ await DB.AddSeedDataFromJson('seed.json');
+});
+it("When updating site name, get successful confirmation", async () => {
+ //I know that site name "portal" exists - I saw it in the seed files
+ const siteToUpdate = await SiteService.getSiteByName("Portal");
+ const updateNameResult = await SiteService.changeName(siteToUpdate, "newName");
+  expect(updateNameResult).to.equal(true);
+});
+it("When querying by site name, get the right site", async () => {
+ //I know that site name "portal" exists - I saw it in the seed files
+ const siteToCheck = await SiteService.getSiteByName("Portal");
+  expect(siteToCheck.name).to.be.equal("Portal"); //Failure! The previous test changed the name :[
+});
+
+```
+
+
+### :clap: Doing It Right Example: We can stay within the test, each test acts on its own set of data
+
+```javascript
+it("When updating site name, get successful confirmation", async () => {
+  //the test adds fresh new records and acts on those records only
+ const siteUnderTest = await SiteService.addSite({
+ name: "siteForUpdateTest"
+ });
+
+ const updateNameResult = await SiteService.changeName(siteUnderTest, "newName");
+
+  expect(updateNameResult).to.equal(true);
+});
+
+```
+
+
+
+
+
+
+## ⚪ ️ 1.10 Don’t catch errors, expect them
+:white_check_mark: **Do:** When trying to assert that some input triggers an error, it might look right to use try-catch-finally and assert that the catch clause was entered. The result is an awkward and verbose test case (example below) that hides the simple test intent and the result expectations
+
+A more elegant alternative is using the one-line dedicated Chai assertion: expect(method).to.throw (or in Jest: expect(method).toThrow()). It’s absolutely mandatory to also ensure the exception contains a property that tells the error type, otherwise, given just a generic error, the application won’t be able to do much more than show a disappointing message to the user
+
+
+
+❌ **Otherwise:** It will be challenging to infer from the test reports (e.g. CI reports) what went wrong
+
+
+
+
+✏ Code Examples
+
+
+
+### :thumbsdown: Anti-pattern Example: A long test case that tries to assert the existence of an error with try-catch
+
+
+
+```javascript
+it("When no product name, it throws error 400", async() => {
+let errorWeExceptFor = null;
+try {
+ const result = await addNewProduct({name:'nest'});}
+catch (error) {
+ expect(error.code).to.equal('InvalidInput');
+ errorWeExceptFor = error;
+}
+expect(errorWeExceptFor).not.to.be.null;
+//if this assertion fails, the tests results/reports will only show
+//that some value is null, there won't be a word about a missing Exception
+});
+
+```
+
+
+### :clap: Doing It Right Example: A human-readable expectation that could be understood easily, maybe even by QA or technical PM
+
+```javascript
+it("When no product name, it throws error 400", async () => {
+  await expect(addNewProduct({})).to.eventually.throw(AppError).with.property('code', "InvalidInput");
+});
+
+```
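+
+For Jest users, the equivalent might look like the following sketch, assuming addNewProduct returns a promise that rejects with an AppError:
+
+```javascript
+test("When no product name, it throws error 400", async () => {
+  //Jest awaits the promise and asserts it rejects with the right error type
+  await expect(addNewProduct({})).rejects.toThrow(AppError);
+});
+```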
+
+
+
+
+
+
+
+
+## ⚪ ️ 1.11 Tag your tests
+
+:white_check_mark: **Do:** Different tests must run on different scenarios: quick smoke, IO-less tests should run when a developer saves or commits a file, while full end-to-end tests usually run when a new pull request is submitted, etc. This can be achieved by tagging tests with keywords like #cold #api #sanity so you can grep with your testing harness and invoke the desired subset. For example, this is how you would invoke only the sanity test group with Mocha: mocha --grep ‘sanity’
+
+
+
+❌ **Otherwise:** Running all the tests, including tests that perform dozens of DB queries, any time a developer makes a small change can be extremely slow and keeps developers away from running tests
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: Tagging tests as ‘#cold-test’ allows the test runner to execute only fast tests (Cold===quick tests that are doing no IO and can be executed frequently even as the developer is typing)
+
+
+```javascript
+//this test is fast (no DB) and we're tagging it correspondingly
+//now the user/CI can run it frequently
+describe('Order service', function() {
+ describe('Add new order #cold-test #sanity', function() {
+ test('Scenario - no currency was supplied. Expectation - Use the default currency #sanity', function() {
+ //code logic here
+ });
+ });
+});
+
+
+```
+
+
+
+
+
+
+
+
+## ⚪ ️1.12 Other generic good testing hygiene
+:white_check_mark: **Do:** This post is focused on testing advice that is related to, or at least can be exemplified with, Node.js. This bullet, however, groups a few well-known non-Node-related tips
+
+Learn and practice [TDD principles](https://www.sm-cloud.com/book-review-test-driven-development-by-example-a-tldr/) — they are extremely valuable for many but don’t get intimidated if they don’t fit your style; you’re not the only one. Consider writing the tests before the code in a [red-green-refactor style](https://blog.cleancoder.com/uncle-bob/2014/12/17/TheCyclesOfTDD.html), ensure each test checks exactly one thing, when you find a bug — before fixing it, write a test that will detect this bug in the future, let each test fail at least once before turning green, start a module by writing quick and simplistic code that satisfies the test - then refactor gradually and take it to a production grade level, and avoid any dependency on the environment (paths, OS, etc)
+
+
+
+❌ **Otherwise:** You‘ll miss pearls of wisdom that were collected for decades
+
+
+
+
+# Section 2️⃣: Backend Testing
+
+## ⚪ ️2.1 Enrich your testing portfolio: Look beyond unit tests and the pyramid
+
+:white_check_mark: **Do:** The [testing pyramid](https://martinfowler.com/bliki/TestPyramid.html), though more than 10 years old, is a great and relevant model that suggests three testing types and influences most developers’ testing strategy. At the same time, more than a handful of shiny new testing techniques emerged and are hiding in the shadows of the testing pyramid. Given all the dramatic changes that we’ve seen in the recent 10 years (Microservices, cloud, serverless), is it even possible that one quite-old model will suit *all* types of applications? Shouldn’t the testing world consider welcoming new testing techniques?
+
+Don’t get me wrong, in 2019 the testing pyramid, TDD and unit tests are still a powerful technique and are probably the best match for many applications. Only like any other model, despite its usefulness, [it must be wrong sometimes](https://en.wikipedia.org/wiki/All_models_are_wrong). For example, consider an IoT application that ingests many events into a message bus like Kafka/RabbitMQ, which then flow into some data warehouse and are eventually queried by some analytics UI. Should we really spend 50% of our testing budget on writing unit tests for an application that is integration-centric and has almost no logic? As the diversity of application types increases (bots, crypto, Alexa skills), the chances grow of finding scenarios where the testing pyramid is not the best match.
+
+It’s time to enrich your testing portfolio and become familiar with more testing types (the next bullets suggest a few ideas), mind models like the testing pyramid but also match testing types to the real-world problems that you’re facing (‘Hey, our API is broken, let’s write consumer-driven contract testing!’), and diversify your tests like an investor that builds a portfolio based on risk analysis — assess where problems might arise and match some prevention measures to mitigate those potential risks
+
+A word of caution: the TDD argument in the software world takes the form of a typical false dichotomy; some preach to use it everywhere, others think it’s the devil. Everyone who speaks in absolutes is wrong :]
+
+
+
+
+❌ **Otherwise:** You’re going to miss some tools with amazing ROI, some like Fuzz, lint, and mutation can provide value in 10 minutes
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: Cindy Sridharan suggests a rich testing portfolio in her amazing post ‘Testing Microservices — the sane way’
+
+
+☺️Example: [YouTube: “Beyond Unit Tests: 5 Shiny Node.JS Test Types (2018)” (Yoni Goldberg)](https://www.youtube.com/watch?v=-2zP494wdUY&feature=youtu.be)
+
+
+
+
+
+
+
+
+
+
+
+
+
+## ⚪ ️2.2 Component testing might be your best affair
+
+:white_check_mark: **Do:** Each unit test covers a tiny portion of the application and it’s expensive to cover the whole, whereas end-to-end testing easily covers a lot of ground but is flaky and slower. Why not apply a balanced approach and write tests that are bigger than unit tests but smaller than end-to-end tests? Component testing is the unsung song of the testing world — it provides the best of both worlds: reasonable performance and the possibility to apply TDD patterns + realistic and great coverage.
+
+Component tests focus on the Microservice ‘unit’: they work against the API and don’t mock anything which belongs to the Microservice itself (e.g. they use a real DB, or at least an in-memory version of that DB), but stub anything that is external like calls to other Microservices. By doing so, we test what we deploy, approach the app from outside in, and gain great confidence in a reasonable amount of time.
+
+
+
+❌ **Otherwise:** You may spend long days on writing unit tests to find out that you got only 20% system coverage
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: Supertest allows approaching Express API in-process (fast and cover many layers)
+
+
+
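+Since the original example here is an image, below is a minimal sketch of the idea, assuming an Express app exported from '../app', a hypothetical /api/orders route, and a Jest + supertest setup:
+
+```javascript
+const request = require('supertest');
+const app = require('../app'); //the real Express app, approached in-process - no network involved
+
+test('When adding a valid order, get back an approval with HTTP 200', async () => {
+  //Act: exercise the API from outside in, with the real DB and stubbed external services behind it
+  const response = await request(app)
+    .post('/api/orders')
+    .send({productId: 1, quantity: 2});
+
+  //Assert
+  expect(response.status).toBe(200);
+});
+```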
+
+
+
+
+
+## ⚪ ️2.3 Ensure new releases don’t break the API using consumer-driven contracts
+
+:white_check_mark: **Do:** So your Microservice has multiple clients, and you run multiple versions of the service for compatibility reasons (keeping everyone happy). Then you change some field and ‘boom!’, some important client who relies on this field is angry. This is the Catch-22 of the integration world: it’s very challenging for the server side to consider all the multiple client expectations — on the other hand, the clients can’t perform any testing because the server controls the release dates. [Consumer-driven contracts and the framework PACT](https://docs.pact.io/) were born to formalize this process with a very disruptive approach — the server doesn’t define its own test plan, rather the client defines the tests of the… server! PACT can record the client expectations and store them in a shared location, a “broker”, so the server can pull the expectations and run them on every build using the PACT library to detect broken contracts — a client expectation that is not met. By doing so, all the server-client API mismatches are caught early during build/CI and might save you a great deal of frustration
+
+
+
+❌ **Otherwise:** The alternatives are exhausting manual testing or deployment fear
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example:
+
+
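+A minimal sketch of the consumer side; the service names, port and order endpoint are hypothetical, while the setup/addInteraction/finalize flow is @pact-foundation/pact’s own:
+
+```javascript
+const { Pact } = require('@pact-foundation/pact');
+const { getOrderById } = require('../src/orderApiClient'); // hypothetical client under test
+
+const provider = new Pact({ consumer: 'OrderWebsite', provider: 'OrderApi', port: 1234 });
+
+describe('Order API contract', () => {
+  beforeAll(() => provider.setup()); // start the PACT mock server
+  afterAll(() => provider.finalize()); // write the contract file that gets published to the broker
+
+  test('When asking for an existing order, a valid order is returned', async () => {
+    // record the consumer expectation; the real server will later be verified against it
+    await provider.addInteraction({
+      state: 'an order with id 42 exists',
+      uponReceiving: 'a request for order 42',
+      withRequest: { method: 'GET', path: '/orders/42' },
+      willRespondWith: {
+        status: 200,
+        headers: { 'Content-Type': 'application/json' },
+        body: { id: 42, status: 'approved' }
+      }
+    });
+
+    const order = await getOrderById(42); // the client is configured to hit the mock server on port 1234
+    expect(order.status).toBe('approved');
+  });
+});
+```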
+
+
+
+
+
+
+
+
+
+
+
+## ⚪ ️ 2.4 Test your middlewares in isolation
+
+:white_check_mark: **Do:** Many avoid middleware testing because it represents a small portion of the system and seems to require a live Express server. Both reasons are wrong: middlewares are small but affect all or most of the requests, and they can be tested easily as pure functions that get {req, res} JS objects. To test a middleware function one should just invoke it and spy ([using Sinon for example](https://www.npmjs.com/package/sinon)) on the interaction with the {req, res} objects to ensure the function performed the right action. The library [node-mocks-http](https://www.npmjs.com/package/node-mocks-http) takes it even further and fabricates the {req, res} objects along with spying on their behavior. For example, it can assert whether the http status that was set on the res object matches the expectation (see the example below)
+
+
+
+❌ **Otherwise:** A bug in Express middleware === a bug in all or most requests
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: Testing middleware in isolation without issuing network calls and waking up the entire Express machine
+
+
+
+```javascript
+//the middleware we want to test
+const unitUnderTest = require('./middleware')
+const httpMocks = require('node-mocks-http');
+//Jest syntax, equivalent to describe() & it() in Mocha
+test('A request without authentication header, should return http status 403', () => {
+ const request = httpMocks.createRequest({
+ method: 'GET',
+ url: '/user/42',
+ headers: {
+ authentication: ''
+ }
+ });
+ const response = httpMocks.createResponse();
+ unitUnderTest(request, response);
+ expect(response.statusCode).toBe(403);
+});
+
+```
+
+
+
+
+
+
+
+
+## ⚪ ️2.5 Measure and refactor using static analysis tools
+:white_check_mark: **Do:** Using static analysis tools helps by giving objective ways to improve code quality and keep your code maintainable. You can add static analysis tools to your CI build so it aborts when it finds code smells. Their main selling points over plain linting are the ability to inspect quality in the context of multiple files (e.g. detect duplications), perform advanced analysis (e.g. code complexity) and follow the history and progress of code issues. Two examples of tools you can use are [Sonarqube](https://www.sonarqube.org/) (2,600+ [stars](https://github.com/SonarSource/sonarqube)) and [Code Climate](https://codeclimate.com/) (1,500+ [stars](https://github.com/codeclimate/codeclimate))
+
+Credit: [Keith Holliday](https://github.com/TheHollidayInn)
+
+
+
+
+❌ **Otherwise:** With poor code quality, bugs and performance will always be an issue that no shiny new library or state-of-the-art features can fix
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: CodeClimate, a commercial tool that can identify complex methods:
+
+
+
+
+
+
+
+
+
+
+
+
+## ⚪ ️ 2.6 Check your readiness for Node-related chaos
+:white_check_mark: **Do:** Weirdly, most software testing is about logic & data only, but some of the worst things that happen (and are really hard to mitigate) are infrastructural issues. For example, did you ever test what happens when your process memory is overloaded, or when the server/process dies, or whether your monitoring system realizes when the API becomes 50% slower? To test and mitigate these types of bad things, [Chaos engineering](https://principlesofchaos.org/) was born at Netflix. It aims to provide awareness, frameworks and tools for testing our app’s resiliency against chaotic issues. For example, one of its famous tools, [the chaos monkey](https://github.com/Netflix/chaosmonkey), randomly kills servers to ensure that our service can still serve users without relying on a single server (there is also a Kubernetes version, [kube-monkey](https://github.com/asobti/kube-monkey), that kills pods). All these tools work on the hosting/platform level, but what if you wish to test and generate pure Node chaos, like checking how your Node process copes with uncaught errors, unhandled promise rejections, v8 memory overloaded beyond the max allowed 1.7GB, or whether your UX stays satisfactory when the event loop gets blocked often? To address this I’ve written [node-chaos](https://github.com/i0natan/node-chaos-monkey) (alpha) which provides all sorts of Node-related chaotic acts
+
+
+
+❌ **Otherwise:** No escape here, Murphy’s law will hit your production without mercy
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: Node-chaos can generate all sorts of Node.js pranks so you can test how resilient your app is to chaos
+
+
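+For illustration, here is a hand-rolled sketch of one such chaotic act (this is not the node-chaos API): block the event loop at random intervals, then verify that your monitoring notices the degraded latency:
+
+```javascript
+// busy-wait that hogs the event loop, simulating CPU-bound work
+function blockEventLoop(durationMs) {
+  const start = Date.now();
+  while (Date.now() - start < durationMs) {
+    // intentionally empty: nothing else can run meanwhile
+  }
+}
+
+// every second, with 20% probability, freeze the process for 500ms
+setInterval(() => {
+  if (Math.random() < 0.2) {
+    blockEventLoop(500);
+  }
+}, 1000);
+```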
+
+
+
+
+## ⚪ ️2.7 Avoid global test fixtures and seeds, add data per-test
+
+:white_check_mark: **Do:** Going by the golden rule (bullet 0), each test should add and act on its own set of DB rows to prevent coupling and to make it easy to reason about the test flow. In reality, this is often violated by testers who seed the DB with data before running the tests (also known as a ‘test fixture’) for the sake of performance improvement. While performance is indeed a valid concern, it can be mitigated (see the ‘Component testing’ bullet); test complexity, however, is a far greater pain that should govern other considerations most of the time. Practically, make each test case explicitly add the DB records it needs and act only on those records. If performance becomes a critical concern, a balanced compromise might come in the form of seeding only the suites of tests that don’t mutate data (e.g. queries)
+
+
+
+❌ **Otherwise:** A few tests fail, a deployment is aborted, our team is going to spend precious time now. Do we have a bug? Let’s investigate. Oh no, it seems that two tests were mutating the same seed data
+
+
+
+
+✏ Code Examples
+
+
+
+### :thumbsdown: Anti Pattern Example: tests are not independent and rely on some global hook to feed global DB data
+
+
+
+```javascript
+before(async () => {
+  //adding sites and admins data to our DB. Where is the data? Outside, in some external JSON file or migration framework
+  await DB.AddSeedDataFromJson('seed.json');
+});
+it("When updating site name, get successful confirmation", async () => {
+ //I know that site name "portal" exists - I saw it in the seed files
+ const siteToUpdate = await SiteService.getSiteByName("Portal");
+ const updateNameResult = await SiteService.changeName(siteToUpdate, "newName");
+ expect(updateNameResult).to.be(true);
+});
+it("When querying by site name, get the right site", async () => {
+ //I know that site name "portal" exists - I saw it in the seed files
+ const siteToCheck = await SiteService.getSiteByName("Portal");
+  expect(siteToCheck.name).to.be.equal("Portal"); //Failure! The previous test changed the name :[
+});
+
+```
+
+
+### :clap: Doing It Right Example: We can stay within the test: each test acts on its own set of data
+
+```javascript
+it("When updating site name, get successful confirmation", async () => {
+  //the test is adding fresh new records and acting on those records only
+ const siteUnderTest = await SiteService.addSite({
+ name: "siteForUpdateTest"
+ });
+ const updateNameResult = await SiteService.changeName(siteUnderTest, "newName");
+ expect(updateNameResult).to.be(true);
+});
+
+```
+
+
+
+
+
+# Section 3️⃣: Frontend Testing
+
+## ⚪ ️ 3.1. Separate UI from functionality
+
+:white_check_mark: **Do:** When focusing on testing component logic, UI details become noise that should be extracted, so your tests can focus on pure data. Practically, extract the desired data from the markup in an abstract way that is not too coupled to the graphic implementation, assert only on pure data (vs HTML/CSS graphic details) and disable animations that slow things down. You might get tempted to avoid rendering and test only the back part of the UI (e.g. services, actions, store) but this will result in fictional tests that don't resemble reality and won't reveal cases where the right data doesn't even arrive in the UI
+
+
+
+
+❌ **Otherwise:** The pure calculated data of your test might be ready in 10ms, but then the whole test will last 500ms (100 tests = 1 min) due to some fancy and irrelevant animation
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: Separating out the UI details
+
+ 
+
+```javascript
+test('When users-list is flagged to show only VIP, should display only VIP members', () => {
+ // Arrange
+ const allUsers = [
+ { id: 1, name: 'Yoni Goldberg', vip: false },
+ { id: 2, name: 'John Doe', vip: true }
+ ];
+
+ // Act
+  const { getAllByTestId } = render(<UsersList users={allUsers} showOnlyVip={true} />);
+
+ // Assert - Extract the data from the UI first
+ const allRenderedUsers = getAllByTestId('user').map(uiElement => uiElement.textContent);
+ const allRealVIPUsers = allUsers.filter((user) => user.vip).map((user) => user.name);
+ expect(allRenderedUsers).toEqual(allRealVIPUsers); //compare data with data, no UI here
+});
+
+```
+
+
+
+### :thumbsdown: Anti Pattern Example: Assertions mix UI details and data
+```javascript
+test('When flagging to show only VIP, should display only VIP members', () => {
+ // Arrange
+ const allUsers = [
+ {id: 1, name: 'Yoni Goldberg', vip: false },
+ {id: 2, name: 'John Doe', vip: true }
+ ];
+
+ // Act
+  const { getAllByTestId } = render(<UsersList users={allUsers} showOnlyVip={true} />);
+
+ // Assert - Mix UI & data in assertion
+ expect(getAllByTestId('user')).toEqual('[John Doe]');
+});
+
+```
+
+
+
+
+
+
+
+
+
+## ⚪ ️ 3.2 Query HTML elements based on attributes that are unlikely to change
+
+:white_check_mark: **Do:** Query HTML elements based on attributes that are likely to survive graphic changes, such as form labels, unlike CSS selectors. If the designated element doesn't have such attributes, create a dedicated test attribute like 'test-id-submit-button'. Going this route not only ensures that your functional/logic tests never break because of look & feel changes, it also makes clear to the entire team that this element and attribute are utilized by tests and shouldn't get removed
+
+
+
+❌ **Otherwise:** You want to test the login functionality that spans many components, logic and services, everything is set up perfectly - stubs, spies, Ajax calls are isolated. All seems perfect. Then the test fails because the designer changed the div CSS class from 'thick-border' to 'thin-border'
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: Querying an element using a dedicated attribute for testing
+
+
+
+```html
+// the markup code (part of React component)
+<h3>
+  <Badge pill className="fixed_badge" variant="dark">
+    <span data-testid="errorsLabel">{value}</span>
+  </Badge>
+</h3>
+```
+
+```javascript
+// this example is using react-testing-library
+test('Whenever no data is passed to metric, show 0 as default', () => {
+  // Arrange
+  const metricValue = undefined;
+
+  // Act
+  const { getByTestId } = render(<dashboardMetric value={metricValue} />);
+
+  // Assert
+  expect(getByTestId('errorsLabel').textContent).toBe("0");
+});
+
+```
+
+
+
+### :thumbsdown: Anti-Pattern Example: Relying on CSS attributes
+```html
+// the markup code (part of React component)
+<span id="metric" className="d-flex-column">{value}</span>
+```
+
+```javascript
+// this example is using enzyme
+test('Whenever no data is passed, error metric shows zero', () => {
+ // ...
+
+ expect(wrapper.find("[className='d-flex-column']").text()).toBe("0");
+ });
+```
+
+
+
+
+
+
+
+
+
+## ⚪ ️ 3.3 Whenever possible, test with a realistic and fully rendered component
+
+:white_check_mark: **Do:** Whenever reasonably sized, test your component from the outside like your users do: fully render the UI, act on it and assert that the rendered UI behaves as expected. Avoid all sorts of mocking, partial and shallow rendering - this approach might result in untrapped bugs due to lack of detail, and it makes maintenance harder as the tests mess with the internals (see bullet 'Favour blackbox testing'). If one of the child components significantly slows things down (e.g. animation) or complicates the setup - consider explicitly replacing it with a fake
+
+With all that said, a word of caution is in order: this technique works for small/medium components that pack a reasonable number of child components. Fully rendering a component with too many children will make it hard to reason about test failures (root cause analysis) and might get too slow. In such cases, write only a few tests against that fat parent component and more tests against its children
+
+
+
+❌ **Otherwise:** When poking into a component's internals by invoking its private methods and checking the inner state - you would have to refactor all tests when refactoring the component's implementation. Do you really have the capacity for this level of maintenance?
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: Working realistically with a fully rendered component
+
+ 
+
+```javascript
+class Calendar extends React.Component {
+  static defaultProps = { showFilters: false };
+
+  render() {
+    return (
+      <div>
+        A filters panel with a button to hide/show filters
+        <FiltersPanel showFilter={this.props.showFilters} title="Choose Filter" />
+      </div>
+    );
+  }
+}
+
+//Examples use React & Enzyme
+test('Realistic approach: When clicked to show filters, filters are displayed', () => {
+ // Arrange
+  const wrapper = mount(<Calendar showFilters={false} />)
+
+ // Act
+ wrapper.find('button').simulate('click');
+
+ // Assert
+  expect(wrapper.text().includes('Choose Filter')).toBe(true);
+ // This is how the user will approach this element: by text
+})
+
+
+```
+
+### :thumbsdown: Anti-Pattern Example: Mocking the reality with shallow rendering
+```javascript
+
+test('Shallow/mocked approach: When clicked to show filters, filters are displayed', () => {
+ // Arrange
+  const wrapper = shallow(<Calendar showFilters={false} title="Choose Filter" />)
+
+ // Act
+ wrapper.find('filtersPanel').instance().showFilters();
+ // Tap into the internals, bypass the UI and invoke a method. White-box approach
+
+ // Assert
+ expect(wrapper.find('Filter').props()).toEqual({title: 'Choose Filter'});
+ // what if we change the prop name or don't pass anything relevant?
+})
+
+```
+
+
+
+
+
+
+## ⚪ ️ 3.4 Don't sleep, use the framework's built-in support for async events. Also try to speed things up
+
+:white_check_mark: **Do:** In many cases, the completion time of the unit under test is just unknown (e.g. an animation suspends element appearance) - in that case, avoid sleeping (e.g. setTimeout) and prefer more deterministic methods that most platforms provide. Some libraries allow awaiting on operations (e.g. [Cypress cy.request('url')](https://docs.cypress.io/guides/references/best-practices.html#Unnecessary-Waiting)), others provide an API for waiting like [@testing-library/dom method wait(expect(element))](https://testing-library.com/docs/guide-disappearance). Sometimes a more elegant way is to stub the slow resource, the API for example, and then, once the response moment becomes deterministic, explicitly re-render the component. When depending upon some external component that sleeps, it might be useful to [hurry up the clock](https://jestjs.io/docs/en/timer-mocks). Sleeping is a pattern to avoid because it forces your tests to be slow or risky (when waiting for too short a period). Whenever sleeping and polling are inevitable and there's no support from the testing framework, some npm libraries like [wait-for-expect](https://www.npmjs.com/package/wait-for-expect) can help with a semi-deterministic solution
+
+
+❌ **Otherwise:** When sleeping for a long time, tests will be an order of magnitude slower. When trying to sleep for small numbers, tests will fail when the unit under test doesn't respond in a timely fashion. So it boils down to a trade-off between flakiness and bad performance
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: E2E API that resolves only when the async operation is done (Cypress)
+
+ 
+
+```javascript
+// using Cypress
+cy.get('#show-products').click() // navigate
+cy.wait('@products') // wait for the route (aliased as @products) to respond
+// this line will get executed only when the route is ready
+
+```
+
+### :clap: Doing It Right Example: Testing library that waits for DOM elements
+
+```javascript
+// @testing-library/dom
+test('movie title appears', async () => {
+ // element is initially not present...
+
+ // wait for appearance
+ await wait(() => {
+ expect(getByText('the lion king')).toBeInTheDocument()
+ })
+
+ // wait for appearance and return the element
+ const movie = await waitForElement(() => getByText('the lion king'))
+})
+
+```
+
+### :thumbsdown: Anti-Pattern Example: custom sleep code
+```javascript
+
+test('movie title appears', async () => {
+ // element is initially not present...
+
+ // custom wait logic (caution: simplistic, no timeout)
+ const interval = setInterval(() => {
+ const found = getByText('the lion king');
+ if(found){
+ clearInterval(interval);
+ expect(getByText('the lion king')).toBeInTheDocument();
+ }
+
+ }, 100);
+
+ // wait for appearance and return the element
+ const movie = await waitForElement(() => getByText('the lion king'))
+})
+
+```
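+
+### :clap: Doing It Right Example: Hurrying up the clock instead of sleeping
+
+The paragraph above also mentions hurrying up the clock; a minimal sketch using Jest fake timers (the auto-dismissing toast is a hypothetical stand-in for any time-dependent unit):
+
+```javascript
+jest.useFakeTimers();
+
+test('When a toast auto-dismisses after 5s, the callback fires without really waiting', () => {
+  // Arrange - a stand-in for a component that hides itself after 5 seconds
+  const onHide = jest.fn();
+  setTimeout(onHide, 5000);
+
+  // Act - fast-forward the clock instead of sleeping
+  jest.advanceTimersByTime(5000);
+
+  // Assert
+  expect(onHide).toHaveBeenCalled();
+});
+```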
+
+
+
+
+
+
+## ⚪ ️ 3.5. Watch how the content is served over the network
+
+
+
+✅ **Do:** Apply some active monitoring that ensures the page load over a real network is optimized - this includes any UX concern like slow page loads or an un-minified bundle. There is no shortage of inspection tools: basic tools like [pingdom](https://www.pingdom.com/), AWS CloudWatch and [gcp StackDriver](https://cloud.google.com/monitoring/uptime-checks/) can be easily configured to watch whether the server is alive and responds under a reasonable SLA. This only scratches the surface of what might go wrong, hence it's preferable to opt for tools that specialize in frontend (e.g. [lighthouse](https://developers.google.com/web/tools/lighthouse/), [pagespeed](https://developers.google.com/speed/pagespeed/insights/)) and perform richer analysis. The focus should be on symptoms, metrics that directly affect the UX, like page load time, [meaningful paint](https://scotch.io/courses/10-web-performance-audit-tips-for-your-next-billion-users-in-2018/fmp-first-meaningful-paint), [time until the page gets interactive (TTI)](https://calibreapp.com/blog/time-to-interactive/). On top of that, one may also watch for technical causes like ensuring the content is compressed, time to first byte, optimized images, a reasonable DOM size, SSL and many others. It's advisable to have these rich monitors both during development, as part of the CI and, most importantly, 24x7 over the production servers/CDN
+
+
+
+❌ **Otherwise:** It must be disappointing to realize that after such great care for crafting a UI, 100% functional tests passing and sophisticated bundling - the UX is horrible and slow due to CDN misconfiguration
+
+
+
+✏ Code Examples
+
+### :clap: Doing It Right Example: Lighthouse page load inspection report
+
+
+
+
+
+
+
+
+
+## ⚪ ️ 3.6 Stub flaky and slow resources like backend APIs
+
+:white_check_mark: **Do:** When coding your mainstream tests (not E2E tests), avoid involving any resource that is beyond your responsibility and control, like a backend API, and use stubs instead (i.e. test doubles). Practically, instead of real network calls to APIs, use some test double library (like [Sinon](https://sinonjs.org/), [Test doubles](https://www.npmjs.com/package/testdouble), etc.) for stubbing the API response. The main benefit is preventing flakiness - testing or staging APIs by definition are not highly stable and from time to time will fail your tests although YOUR component behaves just fine (the production env was not meant for testing and it usually throttles requests). Doing this also allows simulating various API behaviors that should drive your component behavior, such as when no data is found or when the API throws an error. Last but not least, network calls will greatly slow down the tests
+
+
+
+❌ **Otherwise:** The average test runs for no longer than a few ms while a typical API call lasts 100ms+, which makes each test ~20x slower
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: Stubbing or intercepting API calls
+ 
+
+```javascript
+
+// unit under test
+export default function ProductsList() {
+  const [products, setProducts] = useState(false);
+
+  const fetchProducts = async () => {
+    const products = await axios.get('api/products');
+    setProducts(products);
+  };
+
+  useEffect(() => {
+    fetchProducts();
+  }, []);
+
+  return products ? <div>{products}</div> : <div data-testid="no-products-message">No products</div>;
+}
+
+// test
+test('When no products exist, show the appropriate message', () => {
+ // Arrange
+ nock("api")
+ .get(`/products`)
+ .reply(404);
+
+ // Act
+  const { getByTestId } = render(<ProductsList />);
+
+ // Assert
+ expect(getByTestId('no-products-message')).toBeTruthy();
+});
+
+```
+
+
+
+
+
+## ⚪ ️ 3.7 Have very few end-to-end tests that span the whole system
+
+:white_check_mark: **Do:** Although E2E (end-to-end) usually means UI-only testing with a real browser (see bullet 3.6), for others it means tests that stretch the entire system including the real backend. The latter type of test is highly valuable as it covers integration bugs between frontend and backend that might happen due to a wrong understanding of the exchange schema. It is also an efficient method to discover backend-to-backend integration issues (e.g. Microservice A sends the wrong message to Microservice B) and even to detect deployment failures - there are currently no backend frameworks for E2E testing that are as friendly and mature as UI frameworks like [Cypress](https://www.cypress.io/) and [Puppeteer](https://github.com/GoogleChrome/puppeteer). The downside of such tests is the high cost of configuring an environment with so many components, and mostly their brittleness - given 50 microservices, if even a single one fails then the entire E2E test just failed. For that reason, we should use this technique sparingly and probably have 1-10 of those and no more. That said, even a small number of E2E tests are likely to catch the type of issues they are targeted for - deployment & integration faults. It's advisable to run them over a production-like staging environment
+
+
+
+❌ **Otherwise:** The UI team might invest much in testing its functionality only to realize very late that the payload returned by the backend (the data schema the UI has to work with) is very different than expected
+
+
+
+## ⚪ ️ 3.8 Speed-up E2E tests by reusing login credentials
+
+:white_check_mark: **Do:** In E2E tests that involve a real backend and rely on a valid user token for API calls, it doesn't pay off to isolate the tests to the level where a user is created and logged in for every test. Instead, log in only once before the test execution starts (i.e. a before-all hook), save the token in some local storage and reuse it across requests. This seems to violate one of the core testing principles - keep the tests autonomous without resource coupling. While this is a valid worry, in E2E tests performance is a key concern and issuing 1-3 API requests before starting each individual test might lead to horrible execution times. Reusing credentials doesn't mean the tests have to act on the same user records - if relying on user records (e.g. testing a user's payment history) then make sure to generate those records as part of the test and avoid sharing their existence with other tests. Also remember that the backend can be faked - if your tests are focused on the frontend it might be better to isolate it and stub the backend API (see bullet 3.6).
+
+
+
+❌ **Otherwise:** Given 200 test cases and assuming login takes 100ms, logging in again and again costs 20 seconds
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: Logging-in before-all and not before-each
+
+
+
+```javascript
+let authenticationToken;
+
+// happens before ALL tests run
+before(() => {
+ cy.request('POST', 'http://localhost:3000/login', {
+ username: Cypress.env('username'),
+ password: Cypress.env('password'),
+ })
+ .its('body')
+ .then((responseFromLogin) => {
+ authenticationToken = responseFromLogin.token;
+ })
+})
+
+// happens before EACH test
+beforeEach(() => {
+ cy.visit('/home', {
+ onBeforeLoad (win) {
+ win.localStorage.setItem('token', JSON.stringify(authenticationToken))
+ },
+ })
+})
+
+```
+
+
+
+
+
+
+
+
+## ⚪ ️ 3.9 Have one E2E smoke test that just travels across the site map
+
+:white_check_mark: **Do:** For production monitoring and development-time sanity checks, run a single E2E test that visits all/most of the site pages and ensures none of them breaks. This type of test brings a great return on investment as it's very easy to write and maintain, but it can detect any kind of failure including functional, network and deployment issues. Other styles of smoke and sanity checking are not as reliable and exhaustive - some ops teams just ping the home page (production) while developers run many integration tests which don't discover packaging and browser issues. It goes without saying that the smoke test doesn't replace functional tests, rather it just aims to serve as a quick smoke detector
+
+
+
+❌ **Otherwise:** Everything might seem perfect, all tests pass, the production health-check is also positive, but the Payment component had some packaging issue and only the /Payment route is not rendering
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: Smoke travelling across all pages
+
+```javascript
+it('When doing smoke testing over all pages, should load them all successfully', () => {
+  // exemplified using Cypress but can be implemented easily
+  // using any E2E suite
+  cy.visit('https://mysite.com/home');
+  cy.contains('Home');
+  cy.visit('https://mysite.com/Login');
+  cy.contains('Login');
+  cy.visit('https://mysite.com/About');
+  cy.contains('About');
+})
+```
+
+
+
+
+
+
+## ⚪ ️ 3.10 Expose the tests as a live collaborative document
+
+:white_check_mark: **Do:** Besides increasing app reliability, tests bring another attractive opportunity to the table - serving as live app documentation. Since tests inherently speak a less technical, product/UX language, using the right tools they can serve as a communication artifact that greatly aligns all the peers - developers and their customers. For example, some frameworks allow expressing the flow and expectations (i.e. the test plan) in a human-readable language so any stakeholder, including product managers, can read, approve and collaborate on the tests, which just became the live requirements document. This technique is also referred to as 'acceptance testing' as it allows the customer to define their acceptance criteria in plain language. This is [BDD (behavior-driven testing)](https://en.wikipedia.org/wiki/Behavior-driven_development) in its purest form. One of the popular frameworks that enables this is [Cucumber, which has a JavaScript flavor](https://github.com/cucumber/cucumber-js), see the example below. Another similar yet different opportunity, [StoryBook](https://storybook.js.org/), allows exposing UI components as a graphic catalog where one can walk through the various states of each component (e.g. render a grid w/o filters, render that grid with multiple rows or with none, etc.), see what it looks like, and how to trigger that state - this can also appeal to product folks but mostly serves as live doc for developers who consume those components.
+
+❌ **Otherwise:** After investing top resources on testing, it's just a pity not to leverage this investment and win great value
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: Describing tests in human-language using cucumber-js
+
+
+```gherkin
+# this is how one can describe tests using cucumber: plain language that allows anyone to understand and collaborate
+
+Feature: Twitter new tweet
+
+ I want to tweet something in Twitter
+
+ @focus
+ Scenario: Tweeting from the home page
+ Given I open Twitter home
+ Given I click on "New tweet" button
+ Given I type "Hello followers!" in the textbox
+ Given I click on "Submit" button
+ Then I see message "Tweet saved"
+
+```
+
+### :clap: Doing It Right Example: Visualizing our components, their various states and inputs using Storybook
+
+
+
+
+
+
+
+
+## ⚪ ️ 3.11 Detect visual issues with automated tools
+
+
+:white_check_mark: **Do:** Set up automated tools to capture UI screenshots when changes are presented and detect visual issues like content overlapping or breaking. This ensures that not only is the right data prepared but also that the user can conveniently see it. This technique is not widely adopted; our testing mindset leans toward functional tests, but it's the visuals that the user experiences, and with so many device types it's very easy to overlook some nasty UI bug. Some free tools can provide the basics - generate and save screenshots for inspection by human eyes. While this approach might be sufficient for small apps, it's as flawed as any other manual testing that demands human labor anytime something changes. On the other hand, it's quite challenging to detect UI issues automatically due to the lack of a clear definition - this is where the field of 'Visual Regression' chimes in and solves this puzzle by comparing the old UI with the latest changes and detecting differences. Some OSS/free tools can provide some of this functionality (e.g. [wraith](https://github.com/BBC-News/wraith), [PhantomCSS](https://github.com/HuddleEng/PhantomCSS)) but might demand significant setup time. The commercial line of tools (e.g. [Applitools](https://applitools.com/), [Percy.io](https://percy.io/)) takes it a step further by smoothing the installation and packing advanced features like a management UI, alerting, smart capturing by eliminating 'visual noise' (e.g. ads, animations) and even root cause analysis of the DOM/CSS changes that led to the issue
+
+
+
+❌ **Otherwise:** How good is a content page that displays great content (100% tests passed) and loads instantly, but half of the content area is hidden?
+
+
+
+
+✏ Code Examples
+
+
+
+### :thumbsdown: Anti Pattern Example: A typical visual regression - right content that is served badly
+
+
+
+
+
+
+### :clap: Doing It Right Example: Configuring wraith to capture and compare UI snapshots
+
+
+
+```yaml
+# Add as many domains as necessary. Key will act as a label
+
+domains:
+ english: "http://www.mysite.com"
+
+# Type screen widths below, here are a couple of examples
+
+screen_widths:
+
+ - 600
+ - 768
+ - 1024
+ - 1280
+
+
+# Type page URL paths below, here are a couple of examples
+paths:
+ about:
+ path: /about
+ selector: '.about'
+ subscribe:
+ selector: '.subscribe'
+ path: /subscribe
+```
+
+### :clap: Doing It Right Example: Using Applitools to get snapshot comparison and other advanced features
+
+ 
+
+```javascript
+import * as todoPage from '../page-objects/todo-page';
+
+describe('visual validation', () => {
+  before(() => todoPage.navigate());
+  beforeEach(() => cy.eyesOpen({ appName: 'TAU TodoMVC' }));
+  afterEach(() => cy.eyesClose());
+
+  it('should look good', () => {
+    cy.eyesCheckWindow('empty todo list');
+
+    todoPage.addTodo('Clean room');
+    todoPage.addTodo('Learn javascript');
+    cy.eyesCheckWindow('two todos');
+
+    todoPage.toggleTodo(0);
+    cy.eyesCheckWindow('mark as completed');
+  });
+});
+```
+
+
+
+
+
+
+
+
+
+
+
+# Section 4️⃣: Measuring Test Effectiveness
+
+
+
+## ⚪ ️ 4.1 Get enough coverage for being confident, ~80% seems to be the lucky number
+
+:white_check_mark: **Do:** The purpose of testing is to get enough confidence for moving fast; obviously, the more code is tested the more confident the team can be. Coverage is a measure of how many code lines (and branches, statements, etc.) are being reached by the tests. So how much is enough? 10–30% is obviously too low to get any sense of the build correctness; on the other end, 100% is very expensive and might shift your focus from the critical paths to the exotic corners of the code. The long answer is that it depends on many factors like the type of application: if you’re building the next generation of Airbus A380 then 100% is a must, while for a cartoon pictures website 50% might be too much. Although most testing enthusiasts claim that the right coverage threshold is contextual, most of them also mention the number 80% as a rule of thumb ([Fowler: “in the upper 80s or 90s”](https://martinfowler.com/bliki/TestCoverage.html)) that presumably should satisfy most applications.
+
+Implementation tips: You may want to configure your continuous integration (CI) to have a coverage threshold ([Jest link](https://jestjs.io/docs/en/configuration.html#collectcoverage-boolean)) and stop a build that doesn’t meet this standard (it’s also possible to configure a threshold per component, see the code example below). On top of this, consider detecting build coverage decreases (when newly committed code has less coverage): this will push developers to raise, or at least preserve, the amount of tested code. All that said, coverage is only one measure, a quantitative one, that is not enough to tell the robustness of your testing. And it can also be fooled, as illustrated in the next bullets
+
+
+
+
+❌ **Otherwise:** Confidence and numbers go hand in hand; without really knowing that you tested most of the system, there will also be some fear. And fear will slow you down
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Example: A typical coverage report
+
+
+
+
+### :clap: Doing It Right Example: Setting up coverage per component (using Jest)
+
+
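+A minimal sketch of such a configuration; the paths and numbers are hypothetical, while coverageThreshold is Jest’s own option:
+
+```javascript
+// jest.config.js
+module.exports = {
+  collectCoverage: true,
+  coverageThreshold: {
+    global: {
+      branches: 80,
+      functions: 80,
+      lines: 80,
+      statements: 80
+    },
+    // a stricter bar for a critical component (hypothetical path)
+    './src/components/payments/': {
+      lines: 90
+    }
+  }
+};
+```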
+
+
+
+
+
+
+
+
+
+## ⚪ ️ 4.2 Inspect coverage reports to detect untested areas and other oddities
+
+:white_check_mark: **Do:** Some issues sneak just under the radar and are really hard to find using traditional tools. These are not really bugs but more of a surprising application behavior that might have a severe impact. For example, often some code areas are never or rarely invoked: you thought that the ‘PricingCalculator’ class is always setting the product price, but it turns out it is actually never invoked although we have 10000 products in the DB and many sales… Code coverage reports help you realize whether the application behaves the way you believe it does. Other than that, they can also highlight which parts of the code are not tested: being informed that 80% of the code is tested doesn’t tell you whether the critical parts are covered. Generating reports is easy: just run your app in production or during testing with coverage tracking and then see colorful reports that highlight how frequently each code area is invoked. If you take your time to glance at this data, you might find some gotchas
+
+
+
+❌ **Otherwise:** If you don’t know which parts of your code are left untested, you don’t know where the issues might come from
+
+
+
+
+✏ Code Examples
+
+
+
+### :thumbsdown: Anti-Pattern Example: What’s wrong with this coverage report? Based on a real-world scenario where we tracked our application usage in QA and found interesting login patterns (Hint: the number of login failures is disproportionate, something is clearly wrong. It finally turned out that some frontend bug keeps hitting the backend login API)
+
+
+
+
+
+
+
+
+## ⚪ ️ 4.3 Measure logical coverage using mutation testing
+
+:white_check_mark: **Do:** The traditional coverage metric often lies: it may show you 100% code coverage while none of your functions, not even one, returns the right response. How come? It simply measures which lines of code the tests visited, but it doesn’t check whether the tests actually tested anything, i.e. asserted for the right response. Like someone who’s traveling for business and showing his passport stamps: this doesn’t prove any work done, only that he visited a few airports and hotels.
+
+Mutation-based testing is here to help by measuring the amount of code that was actually TESTED not just VISITED. [Stryker](https://stryker-mutator.io/) is a JavaScript library for mutation testing and the implementation is really neat:
+
+(1) it intentionally changes the code and “plants bugs”. For example, the code newOrder.price===0 becomes newOrder.price!=0. These “bugs” are called mutations
+
+(2) it runs the tests; if all succeed then we have a problem: the tests didn’t serve their purpose of discovering bugs, and the mutations are said to have survived. If the tests failed, then great, the mutations were killed.
+
+Knowing that all or most of the mutations were killed gives much higher confidence than traditional coverage and the setup time is similar
+
+
+
+❌ **Otherwise:** You’ll be fooled into believing that 85% coverage means your tests will detect bugs in 85% of your code
+
+
+
+✏ Code Examples
+
+
+
+### :thumbsdown: Anti Pattern Example: 100% coverage, 0% testing
+
+
+```javascript
+function addNewOrder(newOrder) {
+  logger.log(`Adding new order ${newOrder}`);
+  DB.save(newOrder);
+  Mailer.sendMail(newOrder.assignee, `A new order was placed ${newOrder}`);
+
+  return { approved: true };
+}
+
+it("Test addNewOrder, don't use such test names", () => {
+  addNewOrder({ assignee: "John@mailer.com", price: 120 });
+}); //Triggers 100% code coverage, but it doesn't check anything
+
+```
+
+
+### :clap: Doing It Right Example: Stryker, a tool for mutation testing, reports and counts the amount of code that is not tested (mutations)
+
+")
+
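+The setup is mostly configuration; a minimal sketch of a Stryker config, assuming Mocha as the test runner (adjust the glob to your own project layout):
+
+```javascript
+// stryker.conf.js
+module.exports = function(config) {
+  config.set({
+    mutate: ['src/**/*.js'], // the files that will get mutated
+    mutator: 'javascript',
+    testRunner: 'mocha',
+    reporters: ['clear-text', 'html'], // the html report visualizes survived mutants
+    coverageAnalysis: 'perTest' // run only the tests that cover each mutant
+  });
+};
+```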
+
+
+
+
+
+
+## ⚪ ️4.4 Preventing test code issues with Test linters
+
+:white_check_mark: **Do:** A set of ESLint plugins were built specifically for inspecting test code patterns and discovering issues. For example, [eslint-plugin-mocha](https://www.npmjs.com/package/eslint-plugin-mocha) will warn when a test is written at the global level (not nested within a describe() statement) or when tests are [skipped](https://mochajs.org/#inclusive-tests), which might lead to a false belief that all tests are passing. Similarly, [eslint-plugin-jest](https://github.com/jest-community/eslint-plugin-jest) can, for example, warn when a test has no assertions at all (not checking anything)
+
+
+
+
+❌ **Otherwise:** Seeing 90% code coverage and 100% green tests will make your face wear a big smile, but only until you realize that many tests aren’t asserting anything and many test suites were just skipped. Hopefully, you didn’t deploy anything based on this false observation
+
+
+
+✏ Code Examples
+
+
+
+### :thumbsdown: Anti Pattern Example: A test case full of errors, luckily all are caught by Linters
+
+```javascript
+describe("Too short description", () => {
+ const userToken = userService.getDefaultToken() // *error:no-setup-in-describe, use hooks (sparingly) instead
+ it("Some description", () => {});//* error: valid-test-description. Must include the word "Should" + at least 5 words
+});
+
+it.skip("Test name", () => {// *error:no-skipped-tests, error:error:no-global-tests. Put tests only under describe or suite
+ expect("somevalue"); // error:no-assert
+});
+
+it("Test name", () => {*//error:no-identical-title. Assign unique titles to tests
+});
+```
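+
+Enabling these plugins is a small configuration change; a minimal sketch of an ESLint config that activates some of the rules mentioned above:
+
+```javascript
+// .eslintrc.js
+module.exports = {
+  plugins: ['jest', 'mocha'],
+  env: {
+    'jest/globals': true
+  },
+  rules: {
+    'jest/expect-expect': 'error', // fail tests that contain no assertion
+    'mocha/no-skipped-tests': 'warn', // surface it.skip() before it hides failures
+    'mocha/no-global-tests': 'error' // tests must live under describe()
+  }
+};
+```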
+
+
+
+
+
+
+# Section 5️⃣: CI and Other Quality Measures
+
+
+
+## ⚪ ️ 5.1 Enrich your linters and abort builds that have linting issues
+
+:white_check_mark: **Do:** Linters are a free lunch: with a 5 min setup you get, for free, an auto-pilot guarding your code and catching significant issues as you type. Gone are the days when linting was about cosmetics (no semi-colons!). Nowadays, linters can catch severe issues like errors that are not thrown correctly and losing information. On top of your basic set of rules (like [ESLint standard](https://www.npmjs.com/package/eslint-plugin-standard) or [Airbnb style](https://www.npmjs.com/package/eslint-config-airbnb)), consider including some specialized linters: [eslint-plugin-chai-expect](https://www.npmjs.com/package/eslint-plugin-chai-expect) can discover tests without assertions, [eslint-plugin-promise](https://www.npmjs.com/package/eslint-plugin-promise?activeTab=readme) can discover promises with no resolve (your code will never continue), [eslint-plugin-security](https://www.npmjs.com/package/eslint-plugin-security?activeTab=readme) can discover eager regular expressions that might get used for DOS attacks, and [eslint-plugin-you-dont-need-lodash-underscore](https://www.npmjs.com/package/eslint-plugin-you-dont-need-lodash-underscore) is capable of alerting when the code uses utility library methods that are part of the V8 core methods, like Lodash._map(…)
+
+
+
+❌ **Otherwise:** Consider a rainy day where your production keeps crashing but the logs don’t display the error stack trace. What happened? Your code mistakenly threw a non-error object and the stack trace was lost, a good reason for banging your head against a brick wall. A 5 min linter setup could detect this typo and save your day
+
+
+
+
+✏ Code Examples
+
+
+
+### :thumbsdown: Anti Pattern Example: The wrong Error object is thrown mistakenly, no stack-trace will appear for this error. Luckily, ESLint catches the next production bug
+
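+A sketch of the kind of bug meant here; ESLint’s core no-throw-literal rule flags the first form:
+
+```javascript
+function addProduct(productToAdd) {
+  if (!productToAdd) {
+    // ❌ throwing a plain string: no Error object, hence no stack trace in the logs
+    throw ('How can I add a new product when no value was provided?');
+  }
+}
+
+// ✅ the fix: throw a real Error so the stack trace survives
+// throw new Error('How can I add a new product when no value was provided?');
+```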
+
+
+
+
+
+
+
+
+## ⚪ ️ 5.2 Shorten the feedback loop with local developer-CI
+
+:white_check_mark: **Do:** Using a CI with shiny quality inspections like testing, linting, vulnerability checks, etc.? Help developers run this pipeline locally as well, to solicit instant feedback and shorten the [feedback loop](https://www.gocd.org/2016/03/15/are-you-ready-for-continuous-delivery-part-2-feedback-loops/). Why? An efficient testing process consists of many iterative loops: (1) try-outs -> (2) feedback -> (3) refactor. The faster the feedback, the more improvement iterations a developer can perform per module and the better the results. On the flip side, when feedback is late to come, fewer improvement iterations can be packed into a single day, the team might have already moved on to another topic/task/module and might not be up for refining that module.
+
+Practically, some CI vendors (example: [CircleCI local CLI](https://circleci.com/docs/2.0/local-cli/)) allow running the pipeline locally. Some commercial tools like [Wallaby provide highly-valuable testing insights](https://wallabyjs.com/) for developers (no affiliation). Alternatively, you may just add an npm script to package.json that runs all the quality commands (e.g. test, lint, vulnerabilities); use tools like [concurrently](https://www.npmjs.com/package/concurrently) for parallelization and return a non-zero exit code if one of the tools fails. Now the developer should just invoke one command, e.g. ‘npm run quality’, to get instant feedback. Consider also aborting a commit if the quality check failed, using a githook ([husky can help](https://github.com/typicode/husky))
+
+
+
+❌ **Otherwise:** When the quality results arrive the day after the code was written, testing doesn’t become a fluent part of development but rather an after-the-fact formal artifact
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: npm scripts that perform code quality inspection, all are run in parallel on demand or when a developer is trying to push new code
+```javascript
+"scripts": {
+ "inspect:sanity-testing": "mocha **/**--test.js --grep \"sanity\"",
+ "inspect:lint": "eslint .",
+ "inspect:vulnerabilities": "npm audit",
+ "inspect:license": "license-checker --failOn GPLv2",
+ "inspect:complexity": "plato .",
+
+ "inspect:all": "concurrently -c \"bgBlue.bold,bgMagenta.bold,yellow\" \"npm:inspect:quick-testing\" \"npm:inspect:lint\" \"npm:inspect:vulnerabilities\" \"npm:inspect:license\""
+ },
+ "husky": {
+ "hooks": {
+ "precommit": "npm run inspect:all",
+ "prepush": "npm run inspect:all"
+ }
+}
+
+```
+
+
+
+
+
+
+
+
+## ⚪ ️5.3 Perform e2e testing over a true production-mirror
+
+:white_check_mark: **Do:** End to end (e2e) testing is the main challenge of every CI pipeline: creating an identical ephemeral production mirror on the fly with all the related cloud services can be tedious and expensive. Finding the best compromise is your game: [Docker-compose](https://docs.docker.com/compose/) allows crafting an isolated dockerized environment with identical containers using a single plain text file, but the backing technology (e.g. networking, deployment model) is different from real-world production. You may combine it with [‘AWS Local’](https://github.com/localstack/localstack) to work with stubs of the real AWS services. If you went [serverless](https://serverless.com/), multiple frameworks like Serverless and [AWS SAM](https://docs.aws.amazon.com/lambda/latest/dg/serverless_app.html) allow the local invocation of FaaS code.
+
+The huge Kubernetes ecosystem has yet to formalize a standard convenient tool for local and CI-mirroring, though many new tools are launched frequently. One approach is running a ‘minimized-Kubernetes’ using tools like [Minikube](https://kubernetes.io/docs/setup/minikube/) and [MicroK8s](https://microk8s.io/), which resemble the real thing but come with less overhead. Another approach is testing over a remote ‘real-Kubernetes’; some CI providers (e.g. [Codefresh](https://codefresh.io/)) have native integration with Kubernetes environments and make it easy to run the CI pipeline over the real thing, while others allow custom scripting against a remote Kubernetes.
+
+
+
+❌ **Otherwise:** Using different technologies for production and testing demands maintaining two deployment models and keeps the developers and the ops team separated
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Example: a CI pipeline that generates Kubernetes cluster on the fly ([Credit: Dynamic-environments Kubernetes](https://container-solutions.com/dynamic-environments-kubernetes/))
+
+```yaml
+deploy:
+  stage: deploy
+  image: registry.gitlab.com/gitlab-examples/kubernetes-deploy
+  script:
+    - ./configureCluster.sh $KUBE_CA_PEM_FILE $KUBE_URL $KUBE_TOKEN
+    - kubectl create ns $NAMESPACE
+    - kubectl create secret -n $NAMESPACE docker-registry gitlab-registry --docker-server="$CI_REGISTRY" --docker-username="$CI_REGISTRY_USER" --docker-password="$CI_REGISTRY_PASSWORD" --docker-email="$GITLAB_USER_EMAIL"
+    - mkdir .generated
+    - echo "$CI_BUILD_REF_NAME-$CI_BUILD_REF"
+    - sed -e "s/TAG/$CI_BUILD_REF_NAME-$CI_BUILD_REF/g" templates/deals.yaml | tee ".generated/deals.yaml"
+    - kubectl apply --namespace $NAMESPACE -f .generated/deals.yaml
+    - kubectl apply --namespace $NAMESPACE -f templates/my-sock-shop.yaml
+  environment:
+    name: test-for-ci
+```
+
+
+
+
+
+
+
+
+
+## ⚪ ️5.4 Parallelize test execution
+:white_check_mark: **Do:** When done right, testing is your 24/7 friend, providing almost instant feedback. In practice, executing 500 CPU-bound unit tests on a single thread can take too long. Luckily, modern test runners and CI platforms (like [Jest](https://github.com/facebook/jest), [AVA](https://github.com/avajs/ava) and [Mocha extensions](https://github.com/yandex/mocha-parallel-tests)) can parallelize the tests across multiple processes and achieve significant improvements in feedback time. Some CI vendors also parallelize tests across containers (!) which shortens the feedback loop even further. Whether locally over multiple processes or over some cloud CLI using multiple machines, parallelizing demands keeping the tests autonomous as each might run on a different process
+
+
+❌ **Otherwise:** Getting test results 1 hour after pushing new code, when you’re already coding the next features, is a great recipe for making testing less relevant
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: Mocha parallel & Jest easily outrun the traditional Mocha thanks to testing parallelization ([Credit: JavaScript Test-Runners Benchmark](https://medium.com/dailyjs/javascript-test-runners-benchmark-3a78d4117b4))
+")
+
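+Opting in is usually a one-liner; a sketch of package.json scripts, assuming Jest or the mocha-parallel-tests runner:
+
+```javascript
+// package.json (excerpt)
+"scripts": {
+  "test": "jest --maxWorkers=4",
+  "test:mocha-parallel": "mocha-parallel-tests ./test --timeout=10000"
+}
+```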
+
+
+
+
+
+
+
+## ⚪ ️5.5 Stay away from legal issues using license and plagiarism check
+:white_check_mark: **Do:** Licensing and plagiarism issues are probably not your main concern right now, but why not tick this box as well in 10 minutes? A bunch of npm packages like [license check](https://www.npmjs.com/package/license-checker) and [plagiarism check](https://www.npmjs.com/package/plagiarism-checker) (commercial with a free plan) can easily be baked into your CI pipeline and check for issues like dependencies with restrictive licenses or code that was copy-pasted from Stack Overflow and apparently violates some copyrights
+
+❌ **Otherwise:** Unintentionally, developers might use packages with inappropriate licenses or copy-paste commercial code and run into legal issues
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example:
+```bash
+# install license-checker in your CI environment or also locally
+npm install -g license-checker
+
+# ask it to scan all licenses and fail with an exit code other than 0 if it finds an unauthorized license. The CI system should catch this failure and stop the build
+license-checker --summary --failOn BSD
+
+```
+
+
+
+
+
+
+
+
+
+
+
+
+## ⚪ ️5.6 Constantly inspect for vulnerable dependencies
+:white_check_mark: **Do:** Even the most reputable dependencies such as Express have known vulnerabilities. This can easily get tamed using community tools such as [npm audit](https://docs.npmjs.com/getting-started/running-a-security-audit), or commercial tools like [snyk](https://snyk.io/) (which also offers a free community version). Both can be invoked from your CI on every build
+
+
+
+❌ **Otherwise:** Keeping your code clean from vulnerabilities without dedicated tools will require you to constantly follow online publications about new threats. Quite tedious
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Example: NPM Audit result
+
+
+
+
+
+
+
+
+
+## ⚪ ️5.7 Automate dependency updates
+:white_check_mark: **Do:** Yarn and npm’s latest introduction of package-lock.json introduced a serious challenge (the road to hell is paved with good intentions): by default now, packages are no longer getting updates. Even a team running many fresh deployments with ‘npm install’ & ‘npm update’ won’t get any new updates. This leads to subpar dependency versions at best, or to vulnerable code at worst. Teams now rely on developers’ goodwill and memory to manually update the package.json or use tools [like ncu](https://www.npmjs.com/package/npm-check-updates) manually. A more reliable way could be to automate the process of getting the most reliable dependency versions; though there are no silver-bullet solutions yet, there are two possible automation roads:
+
+(1) CI can fail builds that have obsolete dependencies, using tools like [‘npm outdated’](https://docs.npmjs.com/cli/outdated) or ‘npm-check-updates (ncu)’. Doing so will force developers to update dependencies.
+
+(2) Use commercial tools that scan the code and automatically send pull requests with updated dependencies. One interesting question remaining is what the dependency update policy should be: updating on every patch generates too much overhead, while updating right when a major is released might point to an unstable version (many packages were found vulnerable in the very first days after being released, [see the](https://nodesource.com/blog/a-high-level-post-mortem-of-the-eslint-scope-security-incident/) eslint-scope incident).
+
+An efficient update policy may allow some ‘vesting period’: let the code lag behind @latest for some time and a few versions before considering the local copy as obsolete (e.g. the local version is 1.3.1 and the repository version is 1.3.8)
+
+
+
+❌ **Otherwise:** Your production will run packages that have been explicitly tagged by their author as risky
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Example: [ncu](https://www.npmjs.com/package/npm-check-updates) can be used manually or within a CI pipeline to detect to which extent the code lags behind the latest versions
+
+
+
+
+
+
+
+
+## ⚪ ️ 5.8 Other, non-Node related, CI tips
+:white_check_mark: **Do:** This post is focused on testing advice that is related to, or at least can be exemplified with, Node.js. This bullet, however, groups a few well-known non-Node related tips
+
+ - Use a declarative syntax. This is the only option for most vendors, but older versions of Jenkins allow using code or the UI
- Opt for a vendor that has native Docker support
- Fail early, run your fastest tests first. Create a ‘Smoke testing’ step/milestone that groups multiple fast inspections (e.g. linting, unit tests) and provide snappy feedback to the code committer
- Make it easy to skim-through all build artifacts including test reports, coverage reports, mutation reports, logs, etc
 - Create multiple pipelines/jobs for each event, reuse steps between them. For example, configure a job for feature branch commits and a different one for master PRs. Let each reuse logic using shared steps (most vendors provide some mechanism for code reuse)
- Never embed secrets in a job declaration, grab them from a secret store or from the job’s configuration
- Explicitly bump version in a release build or at least ensure the developer did so
- Build only once and perform all the inspections over the single build artifact (e.g. Docker image)
- Test in an ephemeral environment that doesn’t drift state between builds. Caching node_modules might be the only exception
+
+
+
+❌ **Otherwise:** You‘ll miss years of wisdom
+
+
+
+## ⚪ ️ 5.9 Build matrix: Run the same CI steps using multiple Node versions
+:white_check_mark: **Do:** Quality checking is about serendipity: the more ground you cover, the luckier you get in detecting issues early. When developing reusable packages or running a multi-customer production with various configurations and Node versions, the CI must run the pipeline of tests over all the permutations of configurations. For example, assuming we use MySQL for some customers and Postgres for others, some CI vendors support a feature called ‘Matrix’ which allows running the test suite against all permutations of MySQL, Postgres and multiple Node versions like 8, 9 and 10. This is done using configuration only, without any additional effort (assuming you have testing or any other quality checks). Other CIs that don’t support Matrix might have extensions or tweaks to allow that
+
+
+
+❌ **Otherwise:** After doing all that hard work of writing tests, are we going to let bugs sneak in only because of configuration issues?
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Example: Using Travis (CI vendor) build definition to run the same test over multiple Node versions
+```yaml
+language: node_js
+node_js:
+  - "7"
+  - "6"
+  - "5"
+  - "4"
+install:
+  - npm install
+script:
+  - npm run test
+```
+
+
+
+
+# Team
+
+
+
+## Yoni Goldberg
+
+
+
+
+
+**Role:** Writer
+
+**About:** I'm an independent consultant who works with Fortune 500 corporations and garage startups on polishing their JS & Node.js applications. More than any other topic, I'm fascinated by and aim to master the art of testing. I'm also the author of [Node.js Best Practices](https://github.com/goldbergyoni/nodebestpractices)
+
+
+
+**Workshop:** 👨🏫 Want to learn all these practices and techniques at your offices (Europe & USA)? [Register here for my testing workshop](https://testjavascript.com/)
+
+
+**Follow:**
+
+* [🐦 Twitter](https://twitter.com/goldbergyoni/)
+* [📞 Contact](https://testjavascript.com/contact-2/)
+* [✉️ Newsletter](https://testjavascript.com/newsletter//)
+
+
+
+
+
+
+## [Bruno Scheufler](https://github.com/BrunoScheufler)
+
+**Role:** Tech reviewer and advisor
+
+Took care to revise, improve, lint and polish all the texts
+
+**About:** full-stack web engineer, Node.js & GraphQL enthusiast
+
+
+
+## [Ido Richter](https://github.com/idori)
+
+**Role:** Concept, design and great advice
+
+**About:** A savvy frontend developer, CSS expert and emojis freak
From cded6f0c55f18c027b474a298111b045551f3d21 Mon Sep 17 00:00:00 2001
From: Rain Byun
Date: Tue, 27 Aug 2019 00:25:40 +0900
Subject: [PATCH 048/502] Translate into Korean from 1.1 to 1.2
---
readme.korean.md | 76 ++++++++++++++++++++++--------------------------
1 file changed, 34 insertions(+), 42 deletions(-)
diff --git a/readme.korean.md b/readme.korean.md
index 8ab281ec..778f65dc 100644
--- a/readme.korean.md
+++ b/readme.korean.md
@@ -75,95 +75,93 @@ This can be achieved by selectively cherry-picking techniques, tools and test ta
Most of the advice below are derivatives of this principle.
-### Ready to start?
-
+### 시작할 준비 되셨나요?
-# Section 1: The Test Anatomy
+# 섹션 1: 테스트 해부
-## ⚪ ️ 1.1 Include 3 parts in each test name
+## ⚪ ️ 1.1 각 테스트 이름은 세 부분으로 구성된다.
-:white_check_mark: **Do:** A test report should tell whether the current application revision satisfies the requirements for the people who are not necessarily familiar with the code: the tester, the DevOps engineer who is deploying and the future you two years from now. This can be achieved best if the tests speak at the requirements level and include 3 parts:
+:white_check_mark: **이렇게 해라:** 테스트는 현재 애플리케이션의 개정판이 요구 사항을 충족하는지 여부를 다음과 같은 사람들에게 알려야합니다: 배포를 할 테스터, DevOps 엔지니어, 2년 후의 미래에 코드가 익숙하지 않은 사람. 테스트가 요구 사항 수준에서 작성되어 있고 세 부분으로 구성되어 있다면, 목적을 이룰 수 있습니다:
-(1) What is being tested? For example, the ProductsService.addNewProduct method
+(1) 무엇을 테스트하고 있는가? 예) 제품서비스.새제품추가 메서드
-(2) Under what circumstances and scenario? For example, no price is passed to the method
+(2) 어떤 상황과 시나리오에서? 예) 메서드에 가격이 전달되지 않는다.
-(3) What is the expected result? For example, the new product is not approved
+(3) 예상되는 결과는 무엇인가? 예) 신제품은 승인되지 않는다.
-
-❌ **Otherwise:** A deployment just failed, a test named “Add product” failed. Does this tell you what exactly is malfunctioning?
+❌ **그렇지 않으면:** 배포에 실패하였고 "제품 추가" 라는 테스트에 실패하였다. 이것이 정확히 어떤 오작동 인지를 알려주나요?
-**👇 Note:** Each bullet has code examples and sometime also an image illustration. Click to expand
-✏ Code Examples
+**👇 주의:** 각 글에는 코드 예제가 있으며 때로는 이미지도 있습니다. 클릭하여 확장
+
+✏ 코드 예제
-### :clap: Doing It Right Example: A test name that constitutes 3 parts
+### :clap: 올바른 예: 세 부분으로 구성된 테스트 이름

```javascript
-//1. unit under test
-describe('Products Service', function() {
- describe('Add new product', function() {
- //2. scenario and 3. expectation
- it('When no price is specified, then the product status is pending approval', ()=> {
+//1. 단위 테스트
+describe('제품 서비스', function() {
+ describe('새 제품 추가', function() {
+ //2. 시나리오 3. 예상
+ it('가격을 지정하지 않으면 제품 상태는 승인 대기중이다.', ()=> {
const newProduct = new ProductService().add(...);
- expect(newProduct.status).to.equal('pendingApproval');
+ expect(newProduct.status).to.equal('승인 대기');
});
});
});
-
```
+
-### :clap: Doing It Right Example: A test name that constitutes 3 parts
+### :clap: 올바른 예: 세 부분으로 구성된 테스트 이름
+

-## ⚪ ️ 1.2 Structure tests by the AAA pattern
+## ⚪ ️ 1.2 AAA 패턴에 의한 테스트 구조
-:white_check_mark: **Do:** Structure your tests with 3 well-separated sections Arrange, Act & Assert (AAA). Following this structure guarantees that the reader spends no brain CPU on understanding the test plan:
+:white_check_mark: **이렇게 해라:** 3개의 잘 잘 구분된 섹션 AAA(Arrange, Act, Assert)으로 테스트를 구성하십시오. 이 구조를 따르면 테스트를 쉽게 읽을 수 있습니다:
-1st A - Arrange: All the setup code to bring the system to the scenario the test aims to simulate. This might include instantiating the unit under test constructor, adding DB records, mocking/stubbing on objects and any other preparation code
+첫번째 A - Arrange(준비): 테스트가 목표로 하는 시나리오에 필요한 시스템을 제공하기 위한 모든 설정 코드. 여기에는 테스트 생성자의 단위 인스턴스화, DB 데이터 추가, 객체에 대한 mock/stub 및 기타 준비 코드가 포함될 수 있습니다.
-2nd A - Act: Execute the unit under test. Usually 1 line of code
-
-3rd A - Assert: Ensure that the received value satisfies the expectation. Usually 1 line of code
+두번째 A - Act(행동): 단위 테스트를 실행. 일반적으로 코드 한줄
+세번째 A - Assert(주장, 예상): 받은 예상값이 충족하는지 확인하십시오. 일반적으로 코드 한줄
-
-❌ **Otherwise:** Not only you spend long daily hours on understanding the main code, now also what should have been the simple part of the day (testing) stretches your brain
+❌ **그렇지 않으면:** 테스트는 오늘 일의 아주 단순한 부분에 불과하지만, 메인 코드를 이해하는데 많은 시간을 낭비 할 것입니다.
-✏ Code Examples
+✏ 코드 예제
-### :clap: Doing It Right Example: A test structured with the AAA pattern
+### :clap: 올바른 예: AAA 패턴으로 구성된 테스트
 
-
+
```javascript
-describe('Customer classifier', () => {
- test('When customer spent more than 500$, should be classified as premium', () => {
+describe('고객 분류기', () => {
+ test('고객이 500달러 이상을 소비한 경우 프리미엄으로 분류해야 합니다.', () => {
//Arrange
const customerToClassify = {spent:505, joined: new Date(), id:1}
const DBStub = sinon.stub(dataAccess, "getCustomer")
@@ -180,10 +178,10 @@ describe('Customer classifier', () => {
-### :thumbsdown: Anti Pattern Example: No separation, one bulk, harder to interpret
+### :thumbsdown: 올바르지 않은 예: 분리가 없고 한 벌로 작성되어 있어 해석하기 어렵다.
```javascript
-test('Should be classified as premium', () => {
+test('프리미엄으로 분류해야 합니다.', () => {
const customerToClassify = {spent:505, joined: new Date(), id:1}
const DBStub = sinon.stub(dataAccess, "getCustomer")
.reply({id:1, classification: 'regular'});
@@ -192,16 +190,10 @@ test('Should be classified as premium', () => {
});
```
-
-
-
-
-
-
## ⚪ ️1.3 Describe expectations in a product language: use BDD-style assertions
:white_check_mark: **Do:** Coding your tests in a declarative-style allows the reader to get the grab instantly without spending even a single brain-CPU cycle. When you write an imperative code that is packed with conditional logic the reader is thrown away to an effortful mental mood. In that sense, code the expectation in a human-like language, declarative BDD style using expect or should and not using custom code. If Chai & Jest don’t include the desired assertion and it’s highly repeatable, consider [extending Jest matcher (Jest)](https://jestjs.io/docs/en/expect#expectextendmatchers) or writing a [custom Chai plugin](https://www.chaijs.com/guide/plugins/)
From e7913fb30b234dbfb6a2149a6b434a035fb26d85 Mon Sep 17 00:00:00 2001
From: Rain Byun
Date: Wed, 28 Aug 2019 21:45:10 +0900
Subject: [PATCH 049/502] Translate into Korean from 1.3 to 1.4
- Rename readme.korean.md to readme.kr.md
---
readme.korean.md => readme.kr.md | 97 +++++++++++++++-----------------
1 file changed, 44 insertions(+), 53 deletions(-)
rename readme.korean.md => readme.kr.md (95%)
diff --git a/readme.korean.md b/readme.kr.md
similarity index 95%
rename from readme.korean.md
rename to readme.kr.md
index 778f65dc..c4070bcc 100644
--- a/readme.korean.md
+++ b/readme.kr.md
@@ -19,6 +19,7 @@ JavaScript 및 Node.js에 대한 A부터 Z까지의 믿음직한 가이드입니
### Yoni Goldberg 작성
+
* JavaScript & Node.js 컨설턴트
* 👨🏫 [나의 테스팅 워크샵](https://www.testjavascript.com) - 유럽과 미국에서의 [제 워크샵](https://www.testjavascript.com)에 대해서 알아보십시오.
* [트위터 팔로우 하기](https://twitter.com/goldbergyoni/)
@@ -178,50 +179,47 @@ describe('고객 분류기', () => {
-### :thumbsdown: 올바르지 않은 예: 분리가 없고 한 벌로 작성되어 있어 해석하기 어렵다.
+### :thumbsdown: 올바르지 않은 예: 분리 되어있지 않고 한 벌로 작성되어 있어 해석하기 어렵다.
```javascript
test('프리미엄으로 분류해야 합니다.', () => {
- const customerToClassify = {spent:505, joined: new Date(), id:1}
- const DBStub = sinon.stub(dataAccess, "getCustomer")
- .reply({id:1, classification: 'regular'});
- const receivedClassification = customerClassifier.classifyCustomer(customerToClassify);
- expect(receivedClassification).toMatch('premium');
- });
+ const customerToClassify = {spent:505, joined: new Date(), id:1}
+ const DBStub = sinon.stub(dataAccess, "getCustomer")
+ .reply({id:1, classification: 'regular'});
+ const receivedClassification = customerClassifier.classifyCustomer(customerToClassify);
+ expect(receivedClassification).toMatch('premium');
+});
```
-## ⚪ ️1.3 Describe expectations in a product language: use BDD-style assertions
+## ⚪ ️1.3 제품의 언어로 예상값을 설명: BDD 스타일의 Assertion을 사용
+테스트를 선언적 스타일로 작성하면 읽는 사람이 즉시 파악할 수 있습니다. 조건부 논리로 채워진 명령형 코드로 작성하면 테스트를 읽기가 쉽지 않습니다. 그런 의미에서 임의의 사용자 정의 코드를 사용하지 말고, 선언적 BDD 스타일의 expect 또는 should를 사용하여 인간과 같은 언어로 테스트를 작성하십시오. Chai & Jest에 원하는 Assertion이 포함되어 있지 않고 반복성이 높은 경우 [extending Jest matcher (Jest)](https://jestjs.io/docs/en/expect#expectextendmatchers) 혹은 [custom Chai plugin](https://www.chaijs.com/guide/plugins/) 작성을 고려하십시오.
-:white_check_mark: **Do:** Coding your tests in a declarative-style allows the reader to get the grab instantly without spending even a single brain-CPU cycle. When you write an imperative code that is packed with conditional logic the reader is thrown away to an effortful mental mood. In that sense, code the expectation in a human-like language, declarative BDD style using expect or should and not using custom code. If Chai & Jest don’t include the desired assertion and it’s highly repeatable, consider [extending Jest matcher (Jest)](https://jestjs.io/docs/en/expect#expectextendmatchers) or writing a [custom Chai plugin](https://www.chaijs.com/guide/plugins/)
-
-❌ **Otherwise:** The team will write less test and decorate the annoying ones with .skip()
+❌ **그렇지 않으면:** 팀은 테스트를 덜 작성하고 성가신 것들을 .skip() 으로 장식합니다.
-✏ Code Examples
+✏ 코드 예제
 
- ### :thumbsdown: Anti Pattern Example: The reader must skim through not so short, and imperative code just to get the test story
+ ### :thumbsdown: 올바르지 않은 예: 읽는 사람은 테스트 스토리를 이해하기 위해 짧지않은 명령형 코드를 훑어봐야 합니다.
```javascript
-test("When asking for an admin, ensure only ordered admins in results" , () => {
- //assuming we've added here two admins "admin1", "admin2" and "user1"
+test("관리자 요청이 들어오면 정렬된 관리자 목록만 결과에 포함된다." , () => {
+ // 여기에 두 명의 관리자 "admin1", "admin2" 및 "user1" 을 추가했다고 가정합니다.
const allAdmins = getUsers({adminOnly:true});
-
const admin1Found, adming2Found = false;
-
allAdmins.forEach(aSingleUser => {
if(aSingleUser === "user1"){
- assert.notEqual(aSingleUser, "user1", "A user was found and not admin");
+ assert.notEqual(aSingleUser, "user1", "관리자가 아닌 사용자를 찾았다.");
}
if(aSingleUser==="admin1"){
admin1Found = true;
@@ -230,79 +228,72 @@ test("When asking for an admin, ensure only ordered admins in results" , () => {
admin2Found = true;
}
});
-
if(!admin1Found || !admin2Found ){
- throw new Error("Not all admins were returned");
+ throw new Error("모든 관리자가 반환되지 않았다.");
}
});
-
```
-
-### :clap: Doing It Right Example: Skimming through the following declarative test is a breeze
+
+### :clap: 올바른 예: 다음과 같은 선언적 테스트는 이해하기 쉽습니다.
```javascript
-it("When asking for an admin, ensure only ordered admins in results" , () => {
- //assuming we've added here two admins
+it("관리자 요청이 들어오면 정렬된 관리자 목록만 결과에 포함된다." , () => {
+ // 여기에 두 명의 관리자를 추가했다고 가정합니다.
const allAdmins = getUsers({adminOnly:true});
-
expect(allAdmins).to.include.ordered.members(["admin1" , "admin2"])
- .but.not.include.ordered.members(["user1"]);
+ .but.not.include.ordered.members(["user1"]);
});
-
```
-
+## ⚪ ️ 1.4 블랙박스 테스트에 충실: public method만 테스트
-## ⚪ ️ 1.4 Stick to black-box testing: Test only public methods
+내부테스트는 거의 아무것도 하지 않는 엄청난 오버헤드를 발생시킵니다. 만약 당신의 코드 혹은 API가 올바른 결과를 반환한다면, 내부적으로 어떻게 동작했는지의 테스트에 3시간을 투자해야 합니까? 깨지기 쉬운 테스트를 유지해야 합니까? public method가 잘 동작할 때마다 private method 또한 암시적으로 테스트가 되고, 특정 문제(예. 잘못된 출력)가 있는 경우에만 테스트가 깨집니다. 이 접근법은 행동 테스트라고도 합니다. 다른 한편으로 당신은 내부 테스트를 해야합니까?(화이트박스 접근) - 컴포넌트를 설계하는 것에서 핵심 세부 사항으로 초점이 이동하거나 작은 코드의 리펙토링으로 인해 테스트가 중단 될 수 있지만, 결과는 훌륭합니다. - 이는 유지보수 부담을 크게 증가시킵니다.
-:white_check_mark: **Do:** Testing the internals brings huge overhead for almost nothing. If your code/API deliver the right results, should you really invest your next 3 hours in testing HOW it worked internally and then maintain these fragile tests? Whenever a public behavior is checked, the private implementation is also implicitly tested and your tests will break only if there is a certain problem (e.g. wrong output). This approach is also referred to as behavioral testing. On the other side, should you test the internals (white box approach) — your focus shifts from planning the component outcome to nitty-gritty details and your test might break because of minor code refactors although the results are fine- this dramatically increases the maintenance burden
-
-❌ **Otherwise:** Your test behaves like the [child who cries wolf](https://en.wikipedia.org/wiki/The_Boy_Who_Cried_Wolf): shoot out loud false-positive cries (e.g., A test fails because a private variable name was changed). Unsurprisingly, people will soon start to ignore the CI notifications until someday a real bug will get ignored…
+❌ **그렇지 않으면:** 당신의 테스트는 다음과 같이 동작합니다. [양치기 소년](https://en.wikipedia.org/wiki/The_Boy_Who_Cried_Wolf): 늑대가 나타났다!(예. private 변수가 변경되어 테스트에 실패하였습니다). 당연히 사람들은, 언젠가 진짜 버그가 무시될 때 까지 CI 알람을 무시하기 시작할 것입니다...
-✏ Code Examples
+
+✏ 코드 예제
-### :thumbsdown: Anti Pattern Example: A test case is testing the internals for no good reason
+### :thumbsdown: 올바르지 않은 예: 테스트 케이스는 이유없이 내부를 테스트합니다.
+

+
```javascript
class ProductService{
- //this method is only used internally
- //Change this name will make the tests fail
- calculateVAT(priceWithoutVAT){
- return {finalPrice: priceWithoutVAT * 1.2};
- //Change the result format or key name above will make the tests fail
- }
- //public method
- getPrice(productId){
- const desiredProduct= DB.getProduct(productId);
- finalPrice = this.calculateVATAdd(desiredProduct.price).finalPrice;
- }
+ // 이 method 는 내부에서만 사용됩니다.
+ // 이 이름을 변경하면 테스트가 실패합니다.
+ calculateVAT(priceWithoutVAT){
+ return {finalPrice: priceWithoutVAT * 1.2};
+ // 결과 형식이나 키 이름을 변경하면 테스트가 실패합니다.
+ }
+ // public method
+ getPrice(productId){
+ const desiredProduct= DB.getProduct(productId);
+ finalPrice = this.calculateVATAdd(desiredProduct.price).finalPrice;
+ }
}
-
-it("White-box test: When the internal methods get 0 vat, it return 0 response", async () => {
- //There's no requirement to allow users to calculate the VAT, only show the final price. Nevertheless we falsely insist here to test the class internals
+it("화이트박스 테스트: 내부 method가 VAT 0을 받으면 0을 반환합니다.", async () => {
+ // 사용자가 VAT를 계산할 수 있게 하는 요구사항은 없으며, 최종 가격만 표시합니다.
+ // 그럼에도 불구하고 여기에서 내부 테스트 수행
expect(new ProductService().calculateVATAdd(0).finalPrice).to.equal(0);
});
-
```
-
-
-
## ⚪ ️ ️1.5 Choose the right test doubles: Avoid mocks in favor of stubs and spies
From a820165ceecb6f323c1ea399f41a1cbdbf92e525 Mon Sep 17 00:00:00 2001
From: Kyle Martin
Date: Sat, 31 Aug 2019 15:49:35 +1200
Subject: [PATCH 050/502] add to team
---
readme.md | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/readme.md b/readme.md
index ec445572..af368428 100644
--- a/readme.md
+++ b/readme.md
@@ -2023,3 +2023,9 @@ Took care to revise, improve, lint and polish all the texts
**Role:** Concept, design and great advice
**About:** A savvy frontend developer, CSS expert and emojis freak
+
+## [Kyle Martin](https://github.com/js-kyle)
+
+**Role:** Helps keep this project running, and reviews security related practices
+
+**About:** Loves working on Node.js projects and web application security.
From 8b2854eb586539e9d6944148451b2c26135668c6 Mon Sep 17 00:00:00 2001
From: Rain Byun
Date: Sun, 1 Sep 2019 23:35:47 +0900
Subject: [PATCH 051/502] Translate into Korean section 0.
---
readme.kr.md | 29 ++++++++++++++---------------
1 file changed, 14 insertions(+), 15 deletions(-)
diff --git a/readme.kr.md b/readme.kr.md
index c4070bcc..6577d1e0 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -57,24 +57,23 @@ JavaScript 및 Node.js에 대한 A부터 Z까지의 믿음직한 가이드입니
-# Section 0️⃣: The Golden Rule
+# 섹션 0️⃣: 황금률
-## ⚪️ 0. The Golden Rule: Design for lean testing
+## ⚪ ️ 0. 황금률: 린 테스트를 위한 설계
-:white_check_mark: **Do:**
-Testing code is not like production-code - design it to be dead-simple, short, abstraction-free, flat, delightful to work with, lean. One should look at a test and get the intent instantly.
+:white_check_mark: **이렇게 해라:** 테스트 코드는 제품 코드와 다릅니다. 단순하고, 짧고, 추상화가 없고, 무난하고, 작업하기에 편리하고, 린하게 디자인 하십시오. 테스트를 보고 즉시 의미를 알아챌 수 있어야 합니다.
-Our minds are full with the main production code, we don't have 'headspace' for additional complexity. Should we try to squeeze yet another challenging code into our poor brain it will slow the team down which works against the reason we do testing. Practically this is where many teams just abandon testing.
-
-The tests are an opportunity for something else - a friendly and smiley assistant, one that it's delightful to work with and delivers great value for such a small investment. Science tells we have two brain systems: system 1 which is used for effortless activities like driving a car on an empty road and system 2 which is meant for complex and conscious operations like solving a math equation. Design your test for system 1, when looking at test code it should *feel* as easy as modifying an HTML document and not like solving 2X(17 × 24).
+우리 머리속은 제품 코드로 가득하고 부가적인 복잡한 것들을 생각할 여유가 없습니다. 또 다른 어려운 코드를 억지로 생각해내려고 한다면, 팀의 속도를 늦추게 되어 우리가 테스트를 하는 이유가 무색해 질 것입니다. 실제로 많은 팀들이 이런 이유를 테스트를 포기합니다.
+
+테스트는 친절하고 웃는 동료와 함께 일하는 것이 즐거울 수 있는 기회이고, 적은 투자로 큰 가치를 제공하는 것입니다. 과학은 우리에게 두 개의 뇌 시스템이 있다고 말합니다. 빈 도로에서 자동차를 운전하는 등의 간편한 활동에 사용되는 시스템 1, 그리고 수학 방정식을 푸는 것과 같이 복잡하고 의식적인 연산을 위한 시스템 2. 테스트 코드를 볼 때 수학 문제를 푸는 것 같은게 아닌, HTML 문서를 수정하는 것만 큼 쉬워야하는 시스템 1에 맞게 테스트를 설계하십시오.
-This can be achieved by selectively cherry-picking techniques, tools and test targets that are cost-effective and provide great ROI. Test only as much as needed, strive to keep it nimble, sometimes it's even worth dropping some tests and trade reliability for agility and simplicity.
+선택적인 체리픽 기술, 툴 그리고 비용-효율적이고 뛰어난 ROI를 제공하는 테스트 대상 선정으로 목적을 이러한 달성할 수 있습니다. 필요한 만큼의 테스트, 융통성 있게 유자하려는 노력, 때로는 애자일함과 단순성을 위해 일부 테스트와 신뢰성을 포기하는 것도 가치가 있습니다.
-
+
-Most of the advice below are derivatives of this principle.
+아래 대부분의 조언은 이 원칙의 파생입니다.
### 시작할 준비 되셨나요?
@@ -195,8 +194,8 @@ test('프리미엄으로 분류해야 합니다.', () => {
-## ⚪ ️1.3 제품의 언어로 예상값을 설명: BDD 스타일의 Assertion을 사용
-테스트를 선언적 스타일로 작성하면 읽는 사람이 즉시 파악할 수 있습니다. 조건부 논리로 채워진 명령형 코드로 작성하면 테스트를 읽기가 쉽지 않습니다. 그런 의미에서 임의의 사용자 정의 코드를 사용하지 말고, 선언적 BDD 스타일의 expect 또는 should를 사용하여 인간과 같은 언어로 테스트를 작성하십시오. Chai & Jest에 원하는 Assertion이 포함되어 있지 않고 반복성이 높은 경우 [extending Jest matcher (Jest)](https://jestjs.io/docs/en/expect#expectextendmatchers) 혹은 [custom Chai plugin](https://www.chaijs.com/guide/plugins/) 작성을 고려하십시오.
+## ⚪ ️ 1.3 제품의 언어로 예상값을 설명: BDD 스타일의 Assertion을 사용
+:white_check_mark: **이렇게 해라:** 테스트를 선언적 스타일로 작성하면 읽는 사람이 즉시 파악할 수 있습니다. 조건부 논리로 채워진 명령형 코드로 작성하면 테스트를 읽기가 쉽지 않습니다. 그런 의미에서 임의의 사용자 정의 코드를 사용하지 말고, 선언적 BDD 스타일의 expect 또는 should를 사용하여 인간과 같은 언어로 테스트를 작성하십시오. Chai & Jest에 원하는 Assertion이 포함되어 있지 않고 반복성이 높은 경우 [extending Jest matcher (Jest)](https://jestjs.io/docs/en/expect#expectextendmatchers) 혹은 [custom Chai plugin](https://www.chaijs.com/guide/plugins/) 작성을 고려하십시오.
@@ -210,7 +209,7 @@ test('프리미엄으로 분류해야 합니다.', () => {
"Examples with Mocha & Chai") 
- ### :thumbsdown: 올바르지 않은 예: 읽는 사람은 테스트 스토리를 이해하기 위해 짧지않은 명령형 코드를 훑어봐야 합니다.
+### :thumbsdown: 올바르지 않은 예: 읽는 사람은 테스트 스토리를 이해하기 위해 짧지않은 명령형 코드를 훑어봐야 합니다.
```javascript
test("관리자 요청이 들어오면 정렬된 관리자 목록만 결과에 포함된다." , () => {
@@ -251,9 +250,9 @@ it("관리자 요청이 들어오면 정렬된 관리자 목록만 결과에 포
-## ⚪ ️ 1.4 블랙박스 테스트에 충실: public method만 테스트
+## ⚪ ️ 1.4 블랙박스 테스트에 충실: public method만 테스트
-내부테스트는 거의 아무것도 하지 않는 엄청난 오버헤드를 발생시킵니다. 만약 당신의 코드 혹은 API가 올바른 결과를 반환한다면, 내부적으로 어떻게 동작했는지의 테스트에 3시간을 투자해야 합니까? 깨지기 쉬운 테스트를 유지해야 합니까? public method가 잘 동작할 때마다 private method 또한 암시적으로 테스트가 되고, 특정 문제(예. 잘못된 출력)가 있는 경우에만 테스트가 깨집니다. 이 접근법은 행동 테스트라고도 합니다. 다른 한편으로 당신은 내부 테스트를 해야합니까?(화이트박스 접근) - 컴포넌트를 설계하는 것에서 핵심 세부 사항으로 초점이 이동하거나 작은 코드의 리펙토링으로 인해 테스트가 중단 될 수 있지만, 결과는 훌륭합니다. - 이는 유지보수 부담을 크게 증가시킵니다.
+:white_check_mark: **이렇게 해라:** 내부테스트는 거의 아무것도 하지 않는 엄청난 오버헤드를 발생시킵니다. 만약 당신의 코드 혹은 API가 올바른 결과를 반환한다면, 내부적으로 어떻게 동작했는지의 테스트에 3시간을 투자해야 합니까? 깨지기 쉬운 테스트를 유지해야 합니까? public method가 잘 동작할 때마다 private method 또한 암시적으로 테스트가 되고, 특정 문제(예. 잘못된 출력)가 있는 경우에만 테스트가 깨집니다. 이 접근법은 행동 테스트라고도 합니다. 다른 한편으로 당신은 내부 테스트를 해야합니까?(화이트박스 접근) - 컴포넌트를 설계하는 것에서 핵심 세부 사항으로 초점이 이동하거나 작은 코드의 리펙토링으로 인해 테스트가 중단 될 수 있지만, 결과는 훌륭합니다. - 이는 유지보수 부담을 크게 증가시킵니다.
From 866d25f07eafd9d00ccb764e3011219bd61ff476 Mon Sep 17 00:00:00 2001
From: Rain Byun
Date: Fri, 6 Sep 2019 00:30:37 +0900
Subject: [PATCH 052/502] Translate into Korean 1.5
- Fix typo.
---
readme.kr.md | 39 ++++++++++++++++++++-------------------
1 file changed, 20 insertions(+), 19 deletions(-)
diff --git a/readme.kr.md b/readme.kr.md
index 6577d1e0..be8617d7 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -69,7 +69,7 @@ JavaScript 및 Node.js에 대한 A부터 Z까지의 믿음직한 가이드입니
테스트는 친절하고 웃는 동료와 함께 일하는 것이 즐거울 수 있는 기회이고, 적은 투자로 큰 가치를 제공하는 것입니다. 과학은 우리에게 두 개의 뇌 시스템이 있다고 말합니다. 빈 도로에서 자동차를 운전하는 등의 간편한 활동에 사용되는 시스템 1, 그리고 수학 방정식을 푸는 것과 같이 복잡하고 의식적인 연산을 위한 시스템 2. 테스트 코드를 볼 때 수학 문제를 푸는 것 같은게 아닌, HTML 문서를 수정하는 것만 큼 쉬워야하는 시스템 1에 맞게 테스트를 설계하십시오.
-선택적인 체리픽 기술, 툴 그리고 비용-효율적이고 뛰어난 ROI를 제공하는 테스트 대상 선정으로 목적을 이러한 달성할 수 있습니다. 필요한 만큼의 테스트, 융통성 있게 유자하려는 노력, 때로는 애자일함과 단순성을 위해 일부 테스트와 신뢰성을 포기하는 것도 가치가 있습니다.
+선택적인 체리픽 기술, 툴 그리고 비용-효율적이고 뛰어난 ROI를 제공하는 테스트 대상 선정으로 이러한 목적을 달성할 수 있습니다. 필요한 만큼의 테스트, 융통성 있게 유지하려는 노력, 때로는 애자일함과 단순성을 위해 일부 테스트와 신뢰성을 포기하는 것도 가치가 있습니다.

@@ -295,54 +295,55 @@ it("화이트박스 테스트: 내부 method가 VAT 0을 받으면 0을 반환
-## ⚪ ️ ️1.5 Choose the right test doubles: Avoid mocks in favor of stubs and spies
+## ⚪ ️ 1.5 올바른 테스트 더블 선택: Stub과 Spy를 위한 Mock을 피하십시오.
-:white_check_mark: **Do:** Test doubles are a necessary evil because they are coupled to the application internals, yet some provide an immense value ([Read here a reminder about test doubles: mocks vs stubs vs spies](https://martinfowler.com/articles/mocksArentStubs.html)).
+:white_check_mark: **이렇게 해라:** 테스트 더블은 어플리케이션 내부에 연결되어 있기때문에 필요악이지만 일부는 엄청난 가치를 제공합니다([테스트 더블에 대한 알림: mocks vs stubs vs spies](https://martinfowler.com/articles/mocksArentStubs.html)).
-Before using test doubles, ask a very simple question: Do I use it to test functionality that appears, or could appear, in the requirements document? If no, it’s a smell of white-box testing.
+테스트 더블을 사용하기 전에 간단한 질문: 요구사항 문서에 있거나 있을 수 있는 기능을 테스트하는 데 테스트 더블을 사용합니까? 만약 아니라면 화이트박스 테스트 낌새가 보입니다.
-For example, if you want to test what your app behaves reasonably when the payment service is down, you might stub the payment service and trigger some ‘No Response’ return to ensure that the unit under test returns the right value. This checks our application behavior/response/outcome under certain scenarios. You might also use a spy to assert that an email was sent when that service is down — this is again a behavioral check which is likely to appear in a requirements doc (“Send an email if payment couldn’t be saved”). On the flip side, if you mock the Payment service and ensure that it was called with the right JavaScript types — then your test is focused on internal things that got nothing with the application functionality and are likely to change frequently
+예를 들어, 결제 서비스가 중단되었을 때 앱이 적절하게 작동하는 것을 테스트하려는 경우, 테스트중인 단위가 올바른 값을 반환하도록, 결제 서비스를 stub하고 '응답 없음' 반환을 트리거 할 수 있습니다.
+이것은 특정 시나리오에서 애플리케이션의 동작/응답/결과를 확인합니다. 그리고 spy를 사용하여 해당 서비스가 중단되었을 때 메일이 보내지는지를 assert 할 수 있습니다. 이것은 다시 요구사항 문서에 있을 수 있는 행동에 대한 점검입니다(결제가 저장되지 않으면 메일은 보낸다). 반대로, 결제 서비스를 mock 하고 올바른 JavaScript 타입으로 호출 되었는지를 확인한다면 - 당신의 테스트는 애플리케이션 기능에 전혀 영향을 받지 않고 자주 변경될 수 있는 내부 구현에 초점을 둔 경우입니다.
-
-❌ **Otherwise:** Any refactoring of code mandates searching for all the mocks in the code and updating accordingly. Tests become a burden rather than a helpful friend
+❌ **그렇지 않으면:** 코드를 리펙토링 할 때, 모든 mock을 찾아서 수정해야 합니다. 테스트가 도움이 아닌 부담이 됩니다.
-✏ Code Examples
+✏ 코드 예제
-### :thumbsdown: Anti-pattern example: Mocks focus on the internals
+### :thumbsdown: 올바르지 않은 예: 내부에 초점을 둔 mock
+

+
```javascript
-it("When a valid product is about to be deleted, ensure data access DAL was called once, with the right product and right config", async () => {
- //Assume we already added a product
- const dataAccessMock = sinon.mock(DAL);
- //hmmm BAD: testing the internals is actually our main goal here, not just a side-effect
+it("유효한 제품을 삭제하려고 할 때, 올바른 제품과 올바른 구성 정보로 데이터 액세스 DAL을 한 번 호출했는지 확인한다", async () => {
+ // 이미 제품을 추가했다고 가정
+ const dataAccessMock = sinon.mock(DAL);
+ // 좋지 않음: 내부 테스트는 side-effect를 위해서가 주요 목적을 위해서 입니다.
dataAccessMock.expects("deleteProduct").once().withArgs(DBConfig, theProductWeJustAdded, true, false);
new ProductService().deletePrice(theProductWeJustAdded);
dataAccessMock.verify();
});
```
+
-### :clap:Doing It Right Example: spies are focused on testing the requirements but as a side-effect are unavoidably touching to the internals
+### :clap:올바른 예: spy는 요구사항을 테스트하는데 초점을 두고있지만, 내부를 건드리는 side-effect를 피할 순 없습니다.
```javascript
-it("When a valid product is about to be deleted, ensure an email is sent", async () => {
- //Assume we already added here a product
+it("유효한 제품을 삭제하려고 할 때, 메일을 보낸다", async () => {
+ // 이미 제품을 추가했다고 가정
const spy = sinon.spy(Emailer.prototype, "sendEmail");
new ProductService().deletePrice(theProductWeJustAdded);
- //hmmm OK: we deal with internals? Yes, but as a side effect of testing the requirements (sending an email)
+ // 좋음: 우리는 내부를 다루는가? 그렇다, 그러나 요구사항(이메일을 보낸다)에 대한 테스트의 side-effect이다.
});
```
-
-
## ⚪ ️1.6 Don’t “foo”, use realistic input data
From da13678a605d06bf7a01fdfa66fc4b548edbb97d Mon Sep 17 00:00:00 2001
From: Iago Cavalcante
Date: Thu, 5 Sep 2019 23:30:39 -0300
Subject: [PATCH 053/502] First steps First steps to translate this guide to
pt-br
---
readme-pt-br.md | 2025 +++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 2025 insertions(+)
create mode 100644 readme-pt-br.md
diff --git a/readme-pt-br.md b/readme-pt-br.md
new file mode 100644
index 00000000..ca49910c
--- /dev/null
+++ b/readme-pt-br.md
@@ -0,0 +1,2025 @@
+
+
+
+
+# 👇 Por que este guia pode levar suas habilidades de teste para o próximo nível
+
+
+
+## 📗 45+ boas práticas: Super abrangente e exaustivo
+Este é um guia para a confiabilidade JavaScript & Node.js da A-Z. Ele resume e organiza para você dezenas das melhores publicações, livros, ferramentas e postagens de blogs que o mercado tem a oferecer
+
+
+## 🚢 Avançado: vai 10.000 milhas além do básico
+Entre em uma jornada que vai muito além do básico, para tópicos avançados como testes em produção, testes de mutação, testes baseados em propriedades e muitas outras ferramentas estratégicas e profissionais. Se você ler todas as palavras deste guia, é provável que suas habilidades de teste superem a média
+
+
+## 🌐 Full-stack: front, backend, CI(Integração Contínua), qualquer coisa
+Comece entendendo as práticas de teste onipresentes que são a base para qualquer camada de aplicativo. Em seguida, mergulhe na sua área de escolha: front-end/UI, back-end, CI(Integração Contínua) ou talvez todos eles?
+
+
+
+### Escrito por Yoni Goldberg
+* Um consultor JavaScript & Node.js
+* 👨🏫 [Minha oficina de testes](https://www.testjavascript.com) - aprenda sobre [meus workshops](https://www.testjavascript.com) na Europe & Estados Unidos
+* [Me siga no twitter ](https://twitter.com/goldbergyoni/)
+* Venha me ouvir falar em [LA](https://js.la/), [Verona](https://2019.nodejsday.it/), [Kharkiv](https://kharkivjs.org/), [free webinar](https://zoom.us/webinar/register/1015657064375/WN_Lzvnuv4oQJOYey2jXNqX6A). Eventos futuros TBD
+* [Newsletter informativo de qualidade sobre JavaScript](https://testjavascript.com/newsletter/) - insights e conteúdo apenas em assuntos estratégicos
+
+
+
+
+## `Índice`
+
+#### [`Seção 0: A Regra de ouro`](#section-0️⃣-the-golden-rule)
+
+Um único conselho que inspira todos os outros (1 marcador especial)
+
+#### [`Section 1: The Test Anatomy`](#section-1-the-test-anatomy-1)
+
+The foundation - structuring clean tests (12 bullets)
+
+#### [`Section 2: Backend`](#section-2️⃣-backend-testing)
+
+Writing backend and Microservices tests efficiently (8 bullets)
+
+#### [`Section 3: Frontend`](#section-3️⃣-frontend-testing)
+
+Writing tests for web UI including component and E2E tests (11 bullets)
+
+#### [`Section 4: Measuring Tests Effectiveness`](#section-4️⃣-measuring-test-effectiveness)
+
+Watching the watchman - measuring test quality (4 bullets)
+
+#### [`Section 5: Continuous Integration`](#section-5️⃣-ci-and-other-quality-measures)
+
+Guidelines for CI in the JS world (9 bullets)
+
+
+
+
+
+# Section 0️⃣: The Golden Rule
+
+
+
+## ⚪️ 0. The Golden Rule: Design for lean testing
+
+:white_check_mark: **Do:**
+Testing code is not like production-code - design it to be dead-simple, short, abstraction-free, flat, delightful to work with, lean. One should look at a test and get the intent instantly.
+
+Our minds are full with the main production code, we don't have 'headspace' for additional complexity. Should we try to squeeze yet another challenging code into our poor brain it will slow the team down which works against the reason we do testing. Practically this is where many teams just abandon testing.
+
+The tests are an opportunity for something else - a friendly and smiley assistant, one that it's delightful to work with and delivers great value for such a small investment. Science tells we have two brain systems: system 1 which is used for effortless activities like driving a car on an empty road and system 2 which is meant for complex and conscious operations like solving a math equation. Design your test for system 1, when looking at test code it should *feel* as easy as modifying an HTML document and not like solving 2X(17 × 24).
+
+This can be achieved by selectively cherry-picking techniques, tools and test targets that are cost-effective and provide great ROI. Test only as much as needed, strive to keep it nimble, sometimes it's even worth dropping some tests and trade reliability for agility and simplicity.
+
+
+
+Most of the advice below are derivatives of this principle.
+
+### Ready to start?
+
+
+
+
+# Section 1: The Test Anatomy
+
+
+
+## ⚪ ️ 1.1 Include 3 parts in each test name
+
+:white_check_mark: **Do:** A test report should tell whether the current application revision satisfies the requirements for the people who are not necessarily familiar with the code: the tester, the DevOps engineer who is deploying and the future you two years from now. This can be achieved best if the tests speak at the requirements level and include 3 parts:
+
+(1) What is being tested? For example, the ProductsService.addNewProduct method
+
+(2) Under what circumstances and scenario? For example, no price is passed to the method
+
+(3) What is the expected result? For example, the new product is not approved
+
+
+
+
+❌ **Otherwise:** A deployment just failed, a test named “Add product” failed. Does this tell you what exactly is malfunctioning?
+
+
+
+**👇 Note:** Each bullet has code examples and sometimes also an image illustration. Click to expand
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: A test name that constitutes 3 parts
+
+
+
+```javascript
+//1. unit under test
+describe('Products Service', function() {
+ describe('Add new product', function() {
+ //2. scenario and 3. expectation
+ it('When no price is specified, then the product status is pending approval', ()=> {
+ const newProduct = new ProductService().add(...);
+ expect(newProduct.status).to.equal('pendingApproval');
+ });
+ });
+});
+
+```
+
+
+### :clap: Doing It Right Example: A test name that constitutes 3 parts
+
+
+
+
+
+
+## ⚪ ️ 1.2 Structure tests by the AAA pattern
+
+:white_check_mark: **Do:** Structure your tests with 3 well-separated sections Arrange, Act & Assert (AAA). Following this structure guarantees that the reader spends no brain CPU on understanding the test plan:
+
+1st A - Arrange: All the setup code to bring the system to the scenario the test aims to simulate. This might include instantiating the unit under test constructor, adding DB records, mocking/stubbing on objects and any other preparation code
+
+2nd A - Act: Execute the unit under test. Usually 1 line of code
+
+3rd A - Assert: Ensure that the received value satisfies the expectation. Usually 1 line of code
+
+
+
+
+
+❌ **Otherwise:** Not only do you spend long daily hours on understanding the main code, but what should have been the simplest part of the day (testing) also stretches your brain
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: A test structured with the AAA pattern
+
+
+
+```javascript
+describe('Customer classifier', () => {
+ test('When customer spent more than 500$, should be classified as premium', () => {
+ //Arrange
+ const customerToClassify = {spent:505, joined: new Date(), id:1}
+ const DBStub = sinon.stub(dataAccess, "getCustomer")
+ .reply({id:1, classification: 'regular'});
+
+ //Act
+ const receivedClassification = customerClassifier.classifyCustomer(customerToClassify);
+
+ //Assert
+ expect(receivedClassification).toMatch('premium');
+ });
+});
+```
+
+
+
+### :thumbsdown: Anti Pattern Example: No separation, one bulk, harder to interpret
+
+```javascript
+test('Should be classified as premium', () => {
+    const customerToClassify = {spent:505, joined: new Date(), id:1}
+    const DBStub = sinon.stub(dataAccess, "getCustomer")
+        .reply({id:1, classification: 'regular'});
+    const receivedClassification = customerClassifier.classifyCustomer(customerToClassify);
+    expect(receivedClassification).toMatch('premium');
+});
+```
+
+
+
+
+
+
+
+
+
+
+
+## ⚪ ️1.3 Describe expectations in a product language: use BDD-style assertions
+
+:white_check_mark: **Do:** Coding your tests in a declarative style allows the reader to get the gist instantly without spending even a single brain-CPU cycle. When you write imperative code that is packed with conditional logic, the reader is forced into an effortful mental mode. In that sense, code the expectation in a human-like language, declarative BDD style using expect or should and not using custom code. If Chai & Jest don’t include the desired assertion and it’s highly repeatable, consider [extending Jest matcher (Jest)](https://jestjs.io/docs/en/expect#expectextendmatchers) or writing a [custom Chai plugin](https://www.chaijs.com/guide/plugins/)
+
+
+
+❌ **Otherwise:** The team will write fewer tests and decorate the annoying ones with .skip()
+
+
+
+✏ Code Examples
+
+
+
+### :thumbsdown: Anti Pattern Example: The reader must skim through not-so-short, imperative code just to get the test story
+
+```javascript
+test("When asking for an admin, ensure only ordered admins in results" , () => {
+ //assuming we've added here two admins "admin1", "admin2" and "user1"
+ const allAdmins = getUsers({adminOnly:true});
+
+ let admin1Found = false, admin2Found = false;
+
+ allAdmins.forEach(aSingleUser => {
+ if(aSingleUser === "user1"){
+ assert.notEqual(aSingleUser, "user1", "A user was found and not admin");
+ }
+ if(aSingleUser==="admin1"){
+ admin1Found = true;
+ }
+ if(aSingleUser==="admin2"){
+ admin2Found = true;
+ }
+ });
+
+ if(!admin1Found || !admin2Found ){
+ throw new Error("Not all admins were returned");
+ }
+});
+
+```
+
+
+### :clap: Doing It Right Example: Skimming through the following declarative test is a breeze
+
+
+```javascript
+it("When asking for an admin, ensure only ordered admins in results" , () => {
+ //assuming we've added here two admins
+ const allAdmins = getUsers({adminOnly:true});
+
+ expect(allAdmins).to.include.ordered.members(["admin1" , "admin2"])
+ .but.not.include.ordered.members(["user1"]);
+});
+
+```
+
+
+
+
+
+
+
+## ⚪ ️ 1.4 Stick to black-box testing: Test only public methods
+
+:white_check_mark: **Do:** Testing the internals brings huge overhead for almost nothing. If your code/API delivers the right results, should you really invest your next 3 hours in testing HOW it worked internally and then maintain these fragile tests? Whenever a public behavior is checked, the private implementation is also implicitly tested and your tests will break only if there is a certain problem (e.g. wrong output). This approach is also referred to as behavioral testing. On the other side, should you test the internals (white-box approach), your focus shifts from planning the component outcome to nitty-gritty details, and your test might break because of minor code refactors although the results are fine. This dramatically increases the maintenance burden
+
+
+
+❌ **Otherwise:** Your test behaves like the [child who cries wolf](https://en.wikipedia.org/wiki/The_Boy_Who_Cried_Wolf): it shouts out loud false-positive cries (e.g., a test fails because a private variable name was changed). Unsurprisingly, people will soon start to ignore the CI notifications until someday a real bug gets ignored…
+
+
+✏ Code Examples
+
+
+
+### :thumbsdown: Anti Pattern Example: A test case is testing the internals for no good reason
+
+```javascript
+class ProductService{
+  //this method is only used internally
+  //Changing this name will make the tests fail
+  calculateVATAdd(priceWithoutVAT){
+    return {finalPrice: priceWithoutVAT * 1.2};
+    //Changing the result format or key name above will make the tests fail
+  }
+  //public method
+  getPrice(productId){
+    const desiredProduct = DB.getProduct(productId);
+    const finalPrice = this.calculateVATAdd(desiredProduct.price).finalPrice;
+    return finalPrice;
+  }
+}
+
+
+it("White-box test: When the internal method gets 0 as VAT input, it returns 0", async () => {
+  //There's no requirement to allow users to calculate the VAT, only to show the final price. Nevertheless we falsely insist here on testing the class internals
+  expect(new ProductService().calculateVATAdd(0).finalPrice).to.equal(0);
+});
+
+```
+
+
+
+
+
+
+
+
+## ⚪ ️ ️1.5 Choose the right test doubles: Avoid mocks in favor of stubs and spies
+
+:white_check_mark: **Do:** Test doubles are a necessary evil because they are coupled to the application internals, yet some provide an immense value ([Read here a reminder about test doubles: mocks vs stubs vs spies](https://martinfowler.com/articles/mocksArentStubs.html)).
+
+Before using test doubles, ask a very simple question: Do I use it to test functionality that appears, or could appear, in the requirements document? If not, it’s a smell of white-box testing.
+
+For example, if you want to test that your app behaves reasonably when the payment service is down, you might stub the payment service and trigger some ‘No Response’ return to ensure that the unit under test returns the right value. This checks our application behavior/response/outcome under certain scenarios. You might also use a spy to assert that an email was sent when that service is down. This is again a behavioral check which is likely to appear in a requirements doc (“Send an email if payment couldn’t be saved”). On the flip side, if you mock the Payment service and ensure that it was called with the right JavaScript types, then your test is focused on internal things that have nothing to do with the application functionality and are likely to change frequently
+
+
+
+❌ **Otherwise:** Any refactoring of code mandates searching for all the mocks in the code and updating accordingly. Tests become a burden rather than a helpful friend
+
+
+
+✏ Code Examples
+
+
+
+### :thumbsdown: Anti-pattern example: Mocks focus on the internals
+
+```javascript
+it("When a valid product is about to be deleted, ensure data access DAL was called once, with the right product and right config", async () => {
+ //Assume we already added a product
+ const dataAccessMock = sinon.mock(DAL);
+ //hmmm BAD: testing the internals is actually our main goal here, not just a side-effect
+ dataAccessMock.expects("deleteProduct").once().withArgs(DBConfig, theProductWeJustAdded, true, false);
+ new ProductService().deletePrice(theProductWeJustAdded);
+ dataAccessMock.verify();
+});
+```
+
+
+### :clap: Doing It Right Example: spies are focused on testing the requirements but as a side-effect are unavoidably touching the internals
+
+```javascript
+it("When a valid product is about to be deleted, ensure an email is sent", async () => {
+ //Assume we already added here a product
+ const spy = sinon.spy(Emailer.prototype, "sendEmail");
+ new ProductService().deletePrice(theProductWeJustAdded);
+ //hmmm OK: we deal with internals? Yes, but as a side effect of testing the requirements (sending an email)
+});
+```
+
+
+
+
+
+
+
+## ⚪ ️1.6 Don’t “foo”, use realistic input data
+
+:white_check_mark: **Do:** Often production bugs are revealed under some very specific and surprising input — the more realistic the test input is, the greater the chances are to catch bugs early. Use dedicated libraries like [Faker](https://www.npmjs.com/package/faker) to generate pseudo-real data that resembles the variety and form of production data. For example, such libraries can generate realistic phone numbers, usernames, credit cards, company names, and even ‘lorem ipsum’ text. You may also create some tests (on top of unit tests, not instead) that randomize the faker data to stretch your unit under test, or even import real data from your production environment. Want to take it to the next level? See the next bullet (property-based testing).
+
+
+
+❌ **Otherwise:** All your development testing will falsely seem green when you use synthetic inputs like “Foo”, but then production might turn red when a hacker passes in a nasty string like “@3e2ddsf . ##’ 1 fdsfds . fds432 AAAA”
+
+
+
+
+✏ Code Examples
+
+
+
+### :thumbsdown: Anti-Pattern Example: A test suite that passes due to non-realistic data
+
+
+
+
+```javascript
+const addProduct = (name, price) => {
+  const productNameRegexNoSpace = /^\S*$/; //no white-space allowed
+
+  if(!productNameRegexNoSpace.test(name))
+    return false; //this path is never reached due to the dull input
+
+  //some logic here
+  return true;
+};
+
+test("Wrong: When adding new product with valid properties, get successful confirmation", async () => {
+ //The string "Foo" which is used in all tests never triggers a false result
+ const addProductResult = addProduct("Foo", 5);
+ expect(addProductResult).toBe(true);
+ //Positive-false: the operation succeeded because we never tried with long
+ //product name including spaces
+});
+
+```
+
+
+### :clap: Doing It Right Example: Randomizing realistic input
+```javascript
+it("Better: When adding new valid product, get successful confirmation", async () => {
+ const addProductResult = addProduct(faker.commerce.productName(), faker.random.number());
+ //Generated random input: {'Sleek Cotton Computer', 85481}
+ expect(addProductResult).to.be.true;
+ //Test failed, the random input triggered some path we never planned for.
+ //We discovered a bug early!
+});
+```
+
+
+
+
+
+
+
+
+## ⚪ ️ 1.7 Test many input combinations using Property-based testing
+
+:white_check_mark: **Do:** Typically we choose a few input samples for each test. Even when the input format resembles real-world data (see bullet ‘Don’t foo’), we cover only a few input combinations (method(‘’, true, 1), method(“string”, false, 0)). However, in production, an API that is called with 5 parameters can be invoked with thousands of different permutations, and one of them might bring our process down ([see Fuzz Testing](https://en.wikipedia.org/wiki/Fuzzing)). What if you could write a single test that sends 1000 permutations of different inputs automatically and catches for which input our code fails to return the right response? Property-based testing is a technique that does exactly that: by sending all the possible input combinations to your unit under test it increases the serendipity of finding a bug. For example, given a method — addNewProduct(id, name, isDiscount) — the supporting libraries will call this method with many combinations of (number, string, boolean) like (1, “iPhone”, false), (2, “Galaxy”, true). You can run property-based testing using your favorite test runner (Mocha, Jest, etc) using libraries like [js-verify](https://github.com/jsverify/jsverify) or [testcheck](https://github.com/leebyron/testcheck-js) (much better documentation). Update: Nicolas Dubien suggests in the comments below to [check out fast-check](https://github.com/dubzzz/fast-check#readme) which seems to offer some additional features and also to be actively maintained
+
+
+
+❌ **Otherwise:** Unconsciously, you choose the test inputs that cover only code paths that work well. Unfortunately, this decreases the efficiency of testing as a vehicle to expose bugs
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: Testing many input permutations with “mocha-testcheck”
+
+
+
+```javascript
+require('mocha-testcheck').install();
+const {expect} = require('chai');
+
+describe('Product service', () => {
+ describe('Adding new', () => {
+ //this will run 100 times with different random properties
+ check.it('Add new product with random yet valid properties, always successful',
+ gen.int, gen.string, (id, name) => {
+ expect(addNewProduct(id, name).status).to.equal('approved');
+ });
+ })
+});
+
+```
+
+
+
+
+
+
+
+
+## ⚪ ️ 1.8 If needed, use only short & inline snapshots
+
+:white_check_mark: **Do:** When there is a need for [snapshot testing](https://jestjs.io/docs/en/snapshot-testing), use only short and focused snapshots (i.e. 3-7 lines) that are included as part of the test ([Inline Snapshot](https://jestjs.io/docs/en/snapshot-testing#inline-snapshots)) and not within external files. Keeping this guideline will ensure your tests remain self-explanatory and less fragile.
+
+On the other hand, ‘classic snapshots’ tutorials and tools encourage storing big files (e.g. component rendering markup, API JSON result) on some external medium and ensuring, each time the test runs, that the received result is compared with the saved version. This, for example, can implicitly couple our test to 1000 lines with 3000 data values that the test writer never read and reasoned about. Why is this wrong? By doing so, there are 1000 reasons for your test to fail: it’s enough for a single line to change for the snapshot to become invalid, and this is likely to happen a lot. How frequently? For every space, comment or minor CSS/HTML change. Not only this, the test name wouldn’t give a clue about the failure as it just checks that 1000 lines didn’t change; it also encourages the test writer to accept as the desired truth a long document he couldn’t inspect and verify. All of these are symptoms of an obscure and eager test that is not focused and aims to achieve too much
+
+It’s worth noting that there are a few cases where long & external snapshots are acceptable: when asserting on schema and not data (extracting out values and focusing on fields) or when the received document rarely changes
+
+
+❌ **Otherwise:** A UI test fails. The code seems right, the screen renders perfect pixels, so what happened? Your snapshot testing just found a difference between the original document and the currently received one: a single space character was added to the markdown...
+
+
+
+✏ Code Examples
+
+
+
+### :thumbsdown: Anti-Pattern Example: Coupling our test to 2000 unseen lines of code
+
+
+
+```javascript
+it('TestJavaScript.com is rendered correctly', () => {
+  //Arrange
+
+  //Act
+  const receivedPage = renderer
+    .create(<DisplayPage>Test JavaScript</DisplayPage>)
+    .toJSON();
+
+  //Assert
+  expect(receivedPage).toMatchSnapshot();
+  //We now implicitly maintain a 2000 lines long document
+  //every additional line break or comment will break this test
+});
+```
+
+
+### :clap: Doing It Right Example: Expectations are visible and focused
+```javascript
+it('When visiting TestJavaScript.com home page, a menu is displayed', () => {
+  //Arrange
+
+  //Act
+  const receivedPage = renderer
+    .create(<DisplayPage>Test JavaScript</DisplayPage>)
+    .toJSON();
+
+  //Assert
+  const menu = receivedPage.content.menu;
+  expect(menu).toMatchInlineSnapshot(`
+- Home
+- About
+- Contact
+`);
+});
+```
+
+
+
+
+
+
+## ⚪ ️1.9 Avoid global test fixtures and seeds, add data per-test
+
+:white_check_mark: **Do:** Going by the golden rule (bullet 0), each test should add and act on its own set of DB rows to prevent coupling and to easily reason about the test flow. In reality, this is often violated by testers who seed the DB with data before running the tests ([also known as ‘test fixture’](https://en.wikipedia.org/wiki/Test_fixture)) for the sake of performance improvement. While performance is indeed a valid concern, it can be mitigated (see the “Component testing” bullet); however, test complexity is a much greater pain that should govern other considerations most of the time. Practically, make each test case explicitly add the DB records it needs and act only on those records. If performance becomes a critical concern, a balanced compromise might come in the form of seeding only the suite of tests that are not mutating data (e.g. queries)
+
+
+
+❌ **Otherwise:** A few tests fail, a deployment is aborted, and our team is going to spend precious time now. Do we have a bug? Let’s investigate. Oh no, it seems that two tests were mutating the same seed data
+
+
+
+
+✏ Code Examples
+
+
+
+### :thumbsdown: Anti Pattern Example: tests are not independent and rely on some global hook to feed global DB data
+
+
+
+```javascript
+before(async () => {
+  //adding sites and admins data to our DB. Where is the data? Outside, in some external json or migration framework
+  await DB.AddSeedDataFromJson('seed.json');
+});
+it("When updating site name, get successful confirmation", async () => {
+  //I know that site name "portal" exists - I saw it in the seed files
+  const siteToUpdate = await SiteService.getSiteByName("Portal");
+  const updateNameResult = await SiteService.changeName(siteToUpdate, "newName");
+  expect(updateNameResult).to.be(true);
+});
+it("When querying by site name, get the right site", async () => {
+  //I know that site name "portal" exists - I saw it in the seed files
+  const siteToCheck = await SiteService.getSiteByName("Portal");
+  expect(siteToCheck.name).to.be.equal("Portal"); //Failure! The previous test changed the name :[
+});
+
+```
+
+
+### :clap: Doing It Right Example: We can stay within the test, each test acts on its own set of data
+
+```javascript
+it("When updating site name, get successful confirmation", async () => {
+ //test is adding a fresh new records and acting on the records only
+ const siteUnderTest = await SiteService.addSite({
+ name: "siteForUpdateTest"
+ });
+
+ const updateNameResult = await SiteService.changeName(siteUnderTest, "newName");
+
+ expect(updateNameResult).to.be(true);
+});
+
+```
+
+
+
+
+
+
+## ⚪ ️ 1.10 Don’t catch errors, expect them
+:white_check_mark: **Do:** When trying to assert that some input triggers an error, it might look right to use try-catch-finally and assert that the catch clause was entered. The result is an awkward and verbose test case (example below) that hides the simple test intent and the result expectations
+
+A more elegant alternative is using the one-line dedicated Chai assertion: expect(method).to.throw (or in Jest: expect(method).toThrow()). It’s absolutely mandatory to also ensure the exception contains a property that tells the error type, otherwise, given just a generic error, the application won’t be able to do much more than show a disappointing message to the user
+
+
+
+❌ **Otherwise:** It will be challenging to infer from the test reports (e.g. CI reports) what went wrong
+
+
+
+
+✏ Code Examples
+
+
+
+### :thumbsdown: Anti-pattern Example: A long test case that tries to assert the existence of error with try-catch
+
+
+
+```javascript
+it("When no product name, it throws error 400", async() => {
+let errorWeExceptFor = null;
+try {
+ const result = await addNewProduct({name:'nest'});}
+catch (error) {
+ expect(error.code).to.equal('InvalidInput');
+ errorWeExceptFor = error;
+}
+expect(errorWeExceptFor).not.to.be.null;
+//if this assertion fails, the tests results/reports will only show
+//that some value is null, there won't be a word about a missing Exception
+});
+
+```
+
+
+### :clap: Doing It Right Example: A human-readable expectation that could be understood easily, maybe even by QA or technical PM
+
+```javascript
+it("When no product name, it throws error 400", async () => {
+  await expect(addNewProduct({}))
+    .to.eventually.throw(AppError)
+    .with.property('code', "InvalidInput");
+});
+
+```
+
+
+
+
+
+
+
+
+## ⚪ ️ 1.11 Tag your tests
+
+:white_check_mark: **Do:** Different tests must run in different scenarios: quick smoke and IO-less tests should run when a developer saves or commits a file, full end-to-end tests usually run when a new pull request is submitted, etc. This can be achieved by tagging tests with keywords like #cold #api #sanity so you can grep with your testing harness and invoke the desired subset. For example, this is how you would invoke only the sanity test group with Mocha: mocha --grep 'sanity'
+
+
+
+❌ **Otherwise:** Running all the tests, including tests that perform dozens of DB queries, any time a developer makes a small change can be extremely slow and keeps developers away from running tests
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: Tagging tests as ‘#cold-test’ allows the test runner to execute only fast tests (Cold===quick tests that are doing no IO and can be executed frequently even as the developer is typing)
+
+
+```javascript
+//this test is fast (no DB) and we're tagging it correspondingly
+//now the user/CI can run it frequently
+describe('Order service', function() {
+ describe('Add new order #cold-test #sanity', function() {
+ test('Scenario - no currency was supplied. Expectation - Use the default currency #sanity', function() {
+ //code logic here
+ });
+ });
+});
+
+
+```
+
+
+
+
+
+
+
+
+## ⚪ ️1.12 Other generic good testing hygiene
+:white_check_mark: **Do:** This post is focused on testing advice that is related to, or at least can be exemplified with, Node JS. This bullet, however, groups a few non-Node-related tips that are well-known
+
+Learn and practice [TDD principles](https://www.sm-cloud.com/book-review-test-driven-development-by-example-a-tldr/). They are extremely valuable for many, but don’t get intimidated if they don’t fit your style; you’re not the only one. Consider writing the tests before the code in a [red-green-refactor style](https://blog.cleancoder.com/uncle-bob/2014/12/17/TheCyclesOfTDD.html) (see the sketch below), ensure each test checks exactly one thing, when you find a bug write a test that will detect it in the future before fixing it, let each test fail at least once before turning green, start a module by writing quick and simplistic code that satisfies the test and then refactor gradually to a production-grade level, and avoid any dependency on the environment (paths, OS, etc)
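+
+As a minimal sketch of the red-green-refactor cycle mentioned above (the `BankAccount` unit is hypothetical, not taken from this guide), the test is written first and fails at least once before the simplest passing code is added:
+
+```javascript
+//Red: this test was written first and failed while BankAccount below was not yet implemented
+test("When depositing 50 into an empty account, the balance is 50", () => {
+  const account = new BankAccount();
+  account.deposit(50);
+  expect(account.balance).toBe(50);
+});
+
+//Green: the simplest production code that satisfies the test
+class BankAccount {
+  constructor() {
+    this.balance = 0;
+  }
+  deposit(amount) {
+    this.balance += amount;
+  }
+}
+
+//Refactor: only now harden gradually (validation, persistence) while the test stays green
+```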
+
+
+
+❌ **Otherwise:** You‘ll miss pearls of wisdom that were collected for decades
+
+
+
+
+# Section 2️⃣: Backend Testing
+
+## ⚪ ️2.1 Enrich your testing portfolio: Look beyond unit tests and the pyramid
+
+:white_check_mark: **Do:** The [testing pyramid](https://martinfowler.com/bliki/TestPyramid.html), though >10 years old, is a great and relevant model that suggests three testing types and influences most developers’ testing strategy. At the same time, more than a handful of shiny new testing techniques emerged and are hiding in the shadows of the testing pyramid. Given all the dramatic changes that we’ve seen in the recent 10 years (Microservices, cloud, serverless), is it even possible that one quite-old model will suit *all* types of applications? Shouldn’t the testing world consider welcoming new testing techniques?
+
+Don’t get me wrong, in 2019 the testing pyramid, TDD and unit tests are still a powerful technique and are probably the best match for many applications. But like any other model, despite its usefulness, [it must be wrong sometimes](https://en.wikipedia.org/wiki/All_models_are_wrong). For example, consider an IoT application that ingests many events into a message bus like Kafka/RabbitMQ, which then flow into some data warehouse and are eventually queried by some analytics UI. Should we really spend 50% of our testing budget on writing unit tests for an application that is integration-centric and has almost no logic? As the diversity of application types increases (bots, crypto, Alexa skills), greater are the chances to find scenarios where the testing pyramid is not the best match.
+
+It’s time to enrich your testing portfolio and become familiar with more testing types (the next bullets suggest a few ideas), mind models like the testing pyramid but also match testing types to the real-world problems that you’re facing (‘Hey, our API is broken, let’s write consumer-driven contract testing!’), and diversify your tests like an investor that builds a portfolio based on risk analysis — assess where problems might arise and match some prevention measures to mitigate those potential risks
+
+A word of caution: the TDD argument in the software world takes a typical false-dichotomy face, some preach to use it everywhere, others think it’s the devil. Everyone who speaks in absolutes is wrong :]
+
+
+
+
+❌ **Otherwise:** You’re going to miss some tools with amazing ROI; some, like Fuzz, lint and mutation testing, can provide value in 10 minutes
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: Cindy Sridharan suggests a rich testing portfolio in her amazing post ‘Testing Microservices — the sane way’
+
+
+☺️Example: [YouTube: “Beyond Unit Tests: 5 Shiny Node.JS Test Types (2018)” (Yoni Goldberg)](https://www.youtube.com/watch?v=-2zP494wdUY&feature=youtu.be)
+
+
+
+
+
+
+
+
+
+
+
+
+
+## ⚪ ️2.2 Component testing might be your best affair
+
+:white_check_mark: **Do:** Each unit test covers a tiny portion of the application and it’s expensive to cover the whole, whereas end-to-end testing easily covers a lot of ground but is flaky and slower. Why not apply a balanced approach and write tests that are bigger than unit tests but smaller than end-to-end testing? Component testing is the unsung song of the testing world — it provides the best of both worlds: reasonable performance and the possibility to apply TDD patterns + realistic and great coverage.
+
+Component tests focus on the Microservice ‘unit’: they work against the API and don’t mock anything belonging to the Microservice itself (e.g. a real DB, or at least an in-memory version of that DB), but stub anything external like calls to other Microservices. By doing so, we test what we deploy, approach the app from the outside in, and gain great confidence in a reasonable amount of time.
+
+
+
+❌ **Otherwise:** You may spend long days on writing unit tests to find out that you got only 20% system coverage
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: Supertest allows approaching Express API in-process (fast and cover many layers)
+
+
+
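+
+Since the original snapshot is missing here, below is a minimal sketch of the idea, assuming an Express app exported from a hypothetical './app' module:
+
+```javascript
+// A sketch only: 'app' and the '/order' route are illustrative assumptions
+const request = require('supertest');
+const app = require('./app');
+
+test('When adding a valid order, get back approval with 200 status', async () => {
+  // Act - approach the API in-process, no network listener needed
+  const response = await request(app)
+    .post('/order')
+    .send({ productId: 1, mode: 'approved' });
+
+  // Assert - the route, middlewares and the layers beneath were all exercised
+  expect(response.status).toBe(200);
+});
+```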
+
+
+
+
+
+## ⚪ ️2.3 Ensure new releases don’t break the API using consumer-driven contracts
+
+:white_check_mark: **Do:** So your Microservice has multiple clients, and you run multiple versions of the service for compatibility reasons (keeping everyone happy). Then you change some field and ‘boom!’, some important client who relies on this field is angry. This is the Catch-22 of the integration world: it’s very challenging for the server side to consider all the multiple client expectations — on the other hand, the clients can’t perform any testing because the server controls the release dates. [Consumer-driven contracts and the framework PACT](https://docs.pact.io/) were born to formalize this process with a very disruptive approach — not the server defining its own test plan, rather the client defining the tests of the… server! PACT can record the client expectations and put them in a shared location, a “broker”, so the server can pull the expectations and run them on every build using the PACT library to detect broken contracts — a client expectation that is not met. By doing so, all the server-client API mismatches are caught early during build/CI and might save you a great deal of frustration
+
+
+
+❌ **Otherwise:** The alternatives are exhausting manual testing or deployment fear
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: Formalizing client expectations with PACT
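+
+The original illustration is missing here; the sketch below shows the consumer side of the flow, assuming the hypothetical services 'ordersWebsite' (client) and 'ordersAPI' (server):
+
+```javascript
+// A sketch using @pact-foundation/pact; service names, port and route are illustrative
+const path = require('path');
+const { Pact } = require('@pact-foundation/pact');
+
+const provider = new Pact({
+  consumer: 'ordersWebsite',
+  provider: 'ordersAPI',
+  port: 8989,
+  dir: path.resolve(process.cwd(), 'pacts'), // the recorded contract, later published to the broker
+});
+
+describe('Orders API contract', () => {
+  beforeAll(() => provider.setup());
+  afterAll(() => provider.finalize());
+
+  test('returns an existing order', async () => {
+    // Record the client expectation
+    await provider.addInteraction({
+      state: 'an order with id 1 exists',
+      uponReceiving: 'a request for order 1',
+      withRequest: { method: 'GET', path: '/orders/1' },
+      willRespondWith: { status: 200, body: { id: 1, price: 120 } },
+    });
+
+    // ...here the real client code would call http://localhost:8989/orders/1...
+
+    await provider.verify(); // fails if the expectation was not met
+  });
+});
+```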
+
+
+
+
+
+
+
+
+
+
+
+
+
+## ⚪ ️ 2.4 Test your middlewares in isolation
+
+:white_check_mark: **Do:** Many avoid Middleware testing because it represents a small portion of the system and requires a live Express server. Both reasons are wrong — Middlewares are small but affect all or most of the requests and can be tested easily as pure functions that get {req,res} JS objects. To test a middleware function one should just invoke it and spy ([using Sinon for example](https://www.npmjs.com/package/sinon)) on the interaction with the {req,res} objects to ensure the function performed the right action. The library [node-mocks-http](https://www.npmjs.com/package/node-mocks-http) takes it even further and factories the {req,res} objects along with spying on their behavior. For example, it can assert whether the HTTP status that was set on the res object matches the expectation (see example below)
+
+
+
+❌ **Otherwise:** A bug in Express middleware === a bug in all or most requests
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: Testing middleware in isolation without issuing network calls and waking up the entire Express machine
+
+
+
+```javascript
+//the middleware we want to test
+const unitUnderTest = require('./middleware')
+const httpMocks = require('node-mocks-http');
+//Jest syntax, equivalent to describe() & it() in Mocha
+test('A request without authentication header, should return http status 403', () => {
+ const request = httpMocks.createRequest({
+ method: 'GET',
+ url: '/user/42',
+ headers: {
+ authentication: ''
+ }
+ });
+ const response = httpMocks.createResponse();
+ unitUnderTest(request, response);
+ expect(response.statusCode).toBe(403);
+});
+
+```
+
+
+
+
+
+
+
+
+## ⚪ ️2.5 Measure and refactor using static analysis tools
+:white_check_mark: **Do:** Using static analysis tools helps by giving objective ways to improve code quality and keep your code maintainable. You can add static analysis tools to your CI build to abort when it finds code smells. Its main selling points over plain linting are the ability to inspect quality in the context of multiple files (e.g. detect duplications), perform advanced analysis (e.g. code complexity) and follow the history and progress of code issues. Two examples of tools you can use are [Sonarqube](https://www.sonarqube.org/) (2,600+ [stars](https://github.com/SonarSource/sonarqube)) and [Code Climate](https://codeclimate.com/) (1,500+ [stars](https://github.com/codeclimate/codeclimate))
+
+Credit: [Keith Holliday](https://github.com/TheHollidayInn)
+
+
+
+
+❌ **Otherwise:** With poor code quality, bugs and performance will always be an issue that no shiny new library or state-of-the-art features can fix
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: CodeClimate, a commercial tool that can identify complex methods:
+
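+The CodeClimate screenshot is missing here. As a complementary sketch, this is roughly how a SonarQube scan can be wired into a JS project (the property keys are real SonarQube settings, the values are illustrative):
+
+```
+# sonar-project.properties
+sonar.projectKey=my-app
+sonar.sources=src
+sonar.tests=test
+# feed the coverage report generated by the test runner
+sonar.javascript.lcov.reportPaths=coverage/lcov.info
+```
+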
+
+
+
+
+
+
+
+
+
+
+
+## ⚪ ️ 2.6 Check your readiness for Node-related chaos
+:white_check_mark: **Do:** Weirdly, most software testing is about logic & data only, but some of the worst things that happen (and are really hard to mitigate) are infrastructural issues. For example, did you ever test what happens when your process memory is overloaded, or when the server/process dies, or does your monitoring system realize when the API becomes 50% slower? To test and mitigate these types of bad things — [Chaos engineering](https://principlesofchaos.org/) was born at Netflix. It aims to provide awareness, frameworks and tools for testing our app resiliency for chaotic issues. For example, one of its famous tools, [the chaos monkey](https://github.com/Netflix/chaosmonkey), randomly kills servers to ensure that our service can still serve users without relying on a single server (there is also a Kubernetes version, [kube-monkey](https://github.com/asobti/kube-monkey), that kills pods). All these tools work on the hosting/platform level, but what if you wish to test and generate pure Node chaos, like checking how your Node process copes with uncaught errors, unhandled promise rejections, v8 memory overloaded beyond the max allowed 1.7GB, or whether your UX stays satisfactory when the event loop gets blocked often? To address this I’ve written [node-chaos](https://github.com/i0natan/node-chaos-monkey) (alpha) which provides all sorts of Node-related chaotic acts
+
+
+
+❌ **Otherwise:** No escape here, Murphy’s law will hit your production without mercy
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: Node-chaos can generate all sorts of Node.js pranks so you can test how resilient your app is to chaos
+
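+The accompanying screenshot is missing here. Since node-chaos's API isn't shown in this document, below is a hand-rolled sketch (not the node-chaos API) of one chaotic act you could inject in a staging environment:
+
+```javascript
+// A sketch only: occasionally block the event loop to see how the app and its monitoring react
+function injectEventLoopChaos(probability = 0.01, blockMs = 500) {
+  setInterval(() => {
+    if (Math.random() < probability) {
+      const until = Date.now() + blockMs;
+      while (Date.now() < until) {
+        // busy-wait: simulates a blocked event loop
+      }
+    }
+  }, 1000).unref(); // unref so the timer doesn't keep the process alive on its own
+}
+```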
+
+
+
+
+
+## ⚪ ️2.7 Avoid global test fixtures and seeds, add data per-test
+
+:white_check_mark: **Do:** Going by the golden rule (bullet 0), each test should add and act on its own set of DB rows to prevent coupling and to easily reason about the test flow. In reality, this is often violated by testers who seed the DB with data before running the tests (also known as a ‘test fixture’) for the sake of performance improvement. While performance is indeed a valid concern — it can be mitigated (see the “Component testing” bullet) — test complexity is a much more painful sorrow that should govern other considerations most of the time. Practically, make each test case explicitly add the DB records it needs and act only on those records. If performance becomes a critical concern — a balanced compromise might come in the form of seeding only the suite of tests that are not mutating data (e.g. queries)
+
+
+
+❌ **Otherwise:** A few tests fail, a deployment is aborted, our team is going to spend precious time now. Do we have a bug? Let’s investigate, oh no — it seems that two tests were mutating the same seed data
+
+
+
+
+✏ Code Examples
+
+
+
+### :thumbsdown: Anti Pattern Example: tests are not independent and rely on some global hook to feed global DB data
+
+
+
+```javascript
+before(async () => {
+ //adding sites and admins data to our DB. Where is the data? outside. At some external json or migration framework
+ await DB.AddSeedDataFromJson('seed.json');
+});
+it("When updating site name, get successful confirmation", async () => {
+ //I know that site name "portal" exists - I saw it in the seed files
+ const siteToUpdate = await SiteService.getSiteByName("Portal");
+ const updateNameResult = await SiteService.changeName(siteToUpdate, "newName");
+ expect(updateNameResult).to.be(true);
+});
+it("When querying by site name, get the right site", async () => {
+ //I know that site name "portal" exists - I saw it in the seed files
+ const siteToCheck = await SiteService.getSiteByName("Portal");
+  expect(siteToCheck.name).to.be.equal("Portal"); //Failure! The previous test changed the name :[
+});
+
+```
+
+
+### :clap: Doing It Right Example: We can stay within the test, each test acts on its own set of data
+
+```javascript
+it("When updating site name, get successful confirmation", async () => {
+  //test is adding fresh new records and acting on those records only
+ const siteUnderTest = await SiteService.addSite({
+ name: "siteForUpdateTest"
+ });
+ const updateNameResult = await SiteService.changeName(siteUnderTest, "newName");
+ expect(updateNameResult).to.be(true);
+});
+
+```
+
+
+
+
+
+# Section 3️⃣: Frontend Testing
+
+## ⚪ ️ 3.1. Separate UI from functionality
+
+:white_check_mark: **Do:** When focusing on testing component logic, UI details become noise that should be extracted, so your tests can focus on pure data. Practically, extract the desired data from the markup in an abstract way that is not too coupled to the graphic implementation, assert only on pure data (vs HTML/CSS graphic details) and disable animations that slow things down. You might get tempted to avoid rendering and test only the back part of the UI (e.g. services, actions, store) but this will result in fictional tests that don't resemble reality and won't reveal cases where the right data doesn't even arrive in the UI
+
+
+
+
+❌ **Otherwise:** The pure calculated data of your test might be ready in 10ms, but then the whole test will last 500ms (100 tests = 1 min) due to some fancy and irrelevant animation
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: Separating out the UI details
+
+ 
+
+```javascript
+test('When users-list is flagged to show only VIP, should display only VIP members', () => {
+ // Arrange
+ const allUsers = [
+ { id: 1, name: 'Yoni Goldberg', vip: false },
+ { id: 2, name: 'John Doe', vip: true }
+ ];
+
+ // Act
+  const { getAllByTestId } = render(<UsersList users={allUsers} showOnlyVip={true} />); // component name and props are a reconstruction, the original JSX was stripped
+
+ // Assert - Extract the data from the UI first
+ const allRenderedUsers = getAllByTestId('user').map(uiElement => uiElement.textContent);
+ const allRealVIPUsers = allUsers.filter((user) => user.vip).map((user) => user.name);
+ expect(allRenderedUsers).toEqual(allRealVIPUsers); //compare data with data, no UI here
+});
+
+```
+
+
+
+### :thumbsdown: Anti Pattern Example: Assertions mix UI details and data
+```javascript
+test('When flagging to show only VIP, should display only VIP members', () => {
+ // Arrange
+ const allUsers = [
+ {id: 1, name: 'Yoni Goldberg', vip: false },
+ {id: 2, name: 'John Doe', vip: true }
+ ];
+
+ // Act
+  const { getAllByTestId } = render(<UsersList users={allUsers} showOnlyVip={true} />); // JSX reconstructed as above
+
+ // Assert - Mix UI & data in assertion
+  expect(getAllByTestId('user')).toEqual('[<li data-test-id="user">John Doe</li>]'); // markup reconstructed; the assertion couples the test to UI details
+});
+
+```
+
+
+
+
+
+
+
+
+
+## ⚪ ️ 3.2 Query HTML elements based on attributes that are unlikely to change
+
+:white_check_mark: **Do:** Query HTML elements based on attributes that are likely to survive graphic changes (e.g. form labels), unlike CSS selectors. If the designated element doesn't have such attributes, create a dedicated test attribute like 'test-id-submit-button'. Going this route not only ensures that your functional/logic tests never break because of look & feel changes, but it also becomes clear to the entire team that this element and attribute are utilized by tests and shouldn't get removed
+
+
+
+❌ **Otherwise:** You want to test the login functionality that spans many components, logic and services, everything is set up perfectly - stubs, spies, Ajax calls are isolated. All seems perfect. Then the test fails because the designer changed the div CSS class from 'thick-border' to 'thin-border'
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: Querying an element using a dedicated attribute for testing
+
+
+
+```html
+<!-- the markup code (part of a React component); tags reconstructed, the originals were stripped -->
+<h3>
+  <Badge pill className="fixed_badge" variant="dark">
+    <span data-test-id="errorsLabel">{value}</span>
+  </Badge>
+</h3>
+```
+
+```javascript
+// this example is using react-testing-library
+test('Whenever no data is passed to metric, show 0 as default', () => {
+  // Arrange
+  const metricValue = undefined;
+
+  // Act - component name reconstructed, the original JSX was stripped
+  const { getByTestId } = render(<ErrorsMetric value={metricValue} />);
+
+  // Assert
+  expect(getByTestId('errorsLabel').textContent).toBe('0');
+});
+```
+
+
+
+### :thumbsdown: Anti-Pattern Example: Relying on CSS attributes
+```html
+<!-- tags reconstructed, the originals were stripped -->
+<span id="metric" className="d-flex-column">{value}</span>
+```
+
+```javascript
+// this example is using enzyme
+test('Whenever no data is passed, error metric shows zero', () => {
+ // ...
+
+ expect(wrapper.find("[className='d-flex-column']").text()).toBe("0");
+ });
+```
+
+
+
+
+
+
+
+
+
+## ⚪ ️ 3.3 Whenever possible, test with a realistic and fully rendered component
+
+:white_check_mark: **Do:** Whenever reasonably sized, test your component from outside like your users do: fully render the UI, act on it and assert that the rendered UI behaves as expected. Avoid all sorts of mocking, partial and shallow rendering - this approach might result in untrapped bugs due to lack of detail and hardens the maintenance as the tests mess with the internals (see bullet 'Favour blackbox testing'). If one of the child components is significantly slowing things down (e.g. animation) or complicating the setup - consider explicitly replacing it with a fake
+
+With all that said, a word of caution is in order: this technique works for small/medium components that pack a reasonable number of child components. Fully rendering a component with too many children will make it hard to reason about test failures (root cause analysis) and might get too slow. In such cases, write only a few tests against that fat parent component and more tests against its children
+
+
+
+❌ **Otherwise:** When poking into a component's internal by invoking its private methods, and checking the inner state - you would have to refactor all tests when refactoring the components implementation. Do you really have a capacity for this level of maintenance?
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: Working realistically with a fully rendered component
+
+ 
+
+```javascript
+class Calendar extends React.Component {
+  static defaultProps = { showFilters: false };
+
+  render() {
+    // JSX reconstructed, the original tags were stripped; FiltersPanel is illustrative
+    return (
+      <div>
+        A filters panel with a button to hide/show filters
+        <FiltersPanel showFilter={this.props.showFilters} title="Choose Filter" />
+      </div>
+    );
+  }
+}
+
+//Examples use React & Enzyme
+test('Realistic approach: When clicked to show filters, filters are displayed', () => {
+ // Arrange
+  const wrapper = mount(<Calendar showFilters={true} />); // JSX reconstructed
+
+ // Act
+ wrapper.find('button').simulate('click');
+
+ // Assert
+  expect(wrapper.text().includes('Choose Filter')).toBe(true);
+ // This is how the user will approach this element: by text
+})
+
+
+```
+
+### :thumbsdown: Anti-Pattern Example: Mocking the reality with shallow rendering
+```javascript
+
+test('Shallow/mocked approach: When clicked to show filters, filters are displayed', () => {
+ // Arrange
+  const wrapper = shallow(<Calendar showFilters={true} />); // JSX reconstructed
+
+ // Act
+  wrapper.find('FiltersPanel').instance().showFilters();
+ // Tap into the internals, bypass the UI and invoke a method. White-box approach
+
+ // Assert
+ expect(wrapper.find('Filter').props()).toEqual({title: 'Choose Filter'});
+ // what if we change the prop name or don't pass anything relevant?
+})
+
+```
+
+
+
+
+
+
+## ⚪ ️ 3.4 Don't sleep, use frameworks' built-in support for async events. Also try to speed things up
+
+:white_check_mark: **Do:** In many cases, the unit under test's completion time is just unknown (e.g. an animation suspends element appearance) - in that case, avoid sleeping (e.g. setTimeout) and prefer more deterministic methods that most platforms provide. Some libraries allow awaiting on operations (e.g. [Cypress cy.request('url')](https://docs.cypress.io/guides/references/best-practices.html#Unnecessary-Waiting)), others provide an API for waiting like [@testing-library/dom method wait(expect(element))](https://testing-library.com/docs/guide-disappearance). Sometimes a more elegant way is to stub the slow resource, like an API for example, and then once the response moment becomes deterministic the component can be explicitly re-rendered. When depending upon some external component that sleeps, it might turn useful to [hurry-up the clock](https://jestjs.io/docs/en/timer-mocks). Sleeping is a pattern to avoid because it forces your test to be slow or risky (when waiting for too short a period). Whenever sleeping and polling is inevitable and there's no support from the testing framework, some npm libraries like [wait-for-expect](https://www.npmjs.com/package/wait-for-expect) can help with a semi-deterministic solution
+
+
+❌ **Otherwise:** When sleeping for a long time, tests will be an order of magnitude slower. When trying to sleep for small numbers, the test will fail when the unit under test didn't respond in a timely fashion. So it boils down to a trade-off between flakiness and bad performance
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: E2E API that resolves only when the async operation is done (Cypress)
+
+ 
+
+```javascript
+// using Cypress
+cy.server(); // enable route stubbing/spying (pre-v6 Cypress style)
+cy.route('GET', '/api/products').as('products'); // assumption: alias the route we will wait for
+cy.get('#show-products').click(); // navigate
+cy.wait('@products'); // wait for the aliased route to respond
+// this line will get executed only when the route is ready
+
+```
+
+### :clap: Doing It Right Example: Testing library that waits for DOM elements
+
+```javascript
+// @testing-library/dom
+test('movie title appears', async () => {
+ // element is initially not present...
+
+ // wait for appearance
+ await wait(() => {
+ expect(getByText('the lion king')).toBeInTheDocument()
+ })
+
+ // wait for appearance and return the element
+ const movie = await waitForElement(() => getByText('the lion king'))
+})
+
+```
+
+### :thumbsdown: Anti-Pattern Example: custom sleep code
+```javascript
+
+test('movie title appears', async () => {
+ // element is initially not present...
+
+ // custom wait logic (caution: simplistic, no timeout)
+ const interval = setInterval(() => {
+ const found = getByText('the lion king');
+ if(found){
+ clearInterval(interval);
+ expect(getByText('the lion king')).toBeInTheDocument();
+ }
+
+ }, 100);
+
+ // wait for appearance and return the element
+ const movie = await waitForElement(() => getByText('the lion king'))
+})
+
+```
+
+
+
+
+
+
+## ⚪ ️ 3.5. Watch how the content is served over the network
+
+
+
+✅ **Do:** Apply some active monitor that ensures the page load under a real network is optimized - this includes any UX concern like slow page load or an un-minified bundle. The inspection tools market is not short of options: basic tools like [pingdom](https://www.pingdom.com/), AWS CloudWatch, [gcp StackDriver](https://cloud.google.com/monitoring/uptime-checks/) can be easily configured to watch whether the server is alive and responds under a reasonable SLA. This only scratches the surface of what might go wrong, hence it's preferable to opt for tools that specialize in frontend (e.g. [lighthouse](https://developers.google.com/web/tools/lighthouse/), [pagespeed](https://developers.google.com/speed/pagespeed/insights/)) and perform richer analysis. The focus should be on symptoms, metrics that directly affect the UX, like page load time, [meaningful paint](https://scotch.io/courses/10-web-performance-audit-tips-for-your-next-billion-users-in-2018/fmp-first-meaningful-paint), [time until the page gets interactive (TTI)](https://calibreapp.com/blog/time-to-interactive/). On top of that, one may also watch for technical causes like ensuring the content is compressed, time to first byte, optimizing images, ensuring a reasonable DOM size, SSL and many others. It's advisable to have these rich monitors both during development, as part of the CI and most importantly - 24x7 over the production's servers/CDN
+
+
+
+❌ **Otherwise:** It must be disappointing to realize that after such great care for crafting a UI, 100% functional tests passing and sophisticated bundling - the UX is horrible and slow due to CDN misconfiguration
+
+
+
+✏ Code Examples
+
+### :clap: Doing It Right Example: Lighthouse page load inspection report
+
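+The report screenshot is missing here. As a sketch, Lighthouse can be run from the CLI or in CI like this (the URL is illustrative):
+
+```bash
+# install and run Lighthouse against a page, saving an HTML report
+npm install -g lighthouse
+lighthouse https://mysite.com --output html --output-path ./report.html
+```
+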
+
+
+
+
+
+
+
+
+## ⚪ ️ 3.6 Stub flaky and slow resources like backend APIs
+
+:white_check_mark: **Do:** When coding your mainstream tests (not E2E tests), avoid involving any resource that is beyond your responsibility and control like backend API and use stubs instead (i.e. test double). Practically, instead of real network calls to APIs, use some test double library (like [Sinon](https://sinonjs.org/), [Test doubles](https://www.npmjs.com/package/testdouble), etc) for stubbing the API response. The main benefit is preventing flakiness - testing or staging APIs by definition are not highly stable and from time to time will fail your tests although YOUR component behaves just fine (production env was not meant for testing and it usually throttles requests). Doing this will allow simulating various API behavior that should drive your component behavior as when no data was found or the case when API throws an error. Last but not least, network calls will greatly slow down the tests
+
+
+
+❌ **Otherwise:** The average test runs for no longer than a few ms, while a typical API call lasts 100ms or more, which makes each test ~20x slower
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: Stubbing or intercepting API calls
+ 
+
+```javascript
+// imports and JSX reconstructed, the originals were stripped
+import React, { useState, useEffect } from 'react';
+import axios from 'axios';
+import nock from 'nock';
+import { render } from '@testing-library/react';
+
+// unit under test
+export default function ProductsList() {
+  const [products, setProducts] = useState(false);
+
+  const fetchProducts = async () => {
+    const { data: products } = await axios.get('/api/products');
+    setProducts(products);
+  };
+
+  useEffect(() => {
+    fetchProducts();
+  }, []);
+
+  // simplified rendering for brevity
+  return products ? <div>{products}</div> : <div data-test-id="no-products-message">No products</div>;
+}
+
+// test
+test('When no products exist, show the appropriate message', () => {
+  // Arrange - intercept the API call and reply with 404
+  // (the origin below is an assumption; it must match the URL axios actually calls)
+  nock('http://localhost')
+    .get('/api/products')
+    .reply(404);
+
+  // Act
+  const { getByTestId } = render(<ProductsList />);
+
+  // Assert
+  expect(getByTestId('no-products-message')).toBeTruthy();
+});
+
+```
+
+
+
+
+
+## ⚪ ️ 3.7 Have very few end-to-end tests that span the whole system
+
+:white_check_mark: **Do:** Although E2E (end-to-end) usually means UI-only testing with a real browser (see bullet 3.6), for others it means tests that stretch the entire system including the real backend. The latter type of test is highly valuable as it covers integration bugs between frontend and backend that might happen due to a wrong understanding of the exchange schema. It is also an efficient method to discover backend-to-backend integration issues (e.g. Microservice A sends the wrong message to Microservice B) and even to detect deployment failures - there are no backend frameworks for E2E testing that are as friendly and mature as UI frameworks like [Cypress](https://www.cypress.io/) and [Puppeteer](https://github.com/GoogleChrome/puppeteer). The downside of such tests is the high cost of configuring an environment with so many components, and mostly their brittleness - given 50 microservices, even if one fails the entire E2E just failed. For that reason, we should use this technique sparingly and probably have 1-10 of those and no more. That said, even a small number of E2E tests are likely to catch the type of issues they are targeted for - deployment & integration faults. It's advisable to run those over a production-like staging environment
+
+
+
+❌ **Otherwise:** The UI team might invest much in testing its functionality only to realize very late that the payload returned by the backend (the data schema the UI has to work with) is very different than expected
+
+
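+✏ Code Examples
+
+### :clap: Doing It Right Example: A whole-system flow against staging (a sketch; the URL and selectors are illustrative)
+
+```javascript
+// exemplified using Cypress against a hypothetical staging environment
+it('When placing an order, it appears in the order history', () => {
+  cy.visit('https://staging.mysite.com/orders/new');
+  cy.get('[data-test-id="product-1"]').click();
+  cy.get('[data-test-id="place-order"]').click();
+
+  cy.visit('https://staging.mysite.com/orders/history');
+  // the real backend persisted the order and serves it back
+  cy.contains('Order #');
+});
+```
+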
+
+## ⚪ ️ 3.8 Speed-up E2E tests by reusing login credentials
+
+:white_check_mark: **Do:** In E2E tests that involve a real backend and rely on a valid user token for API calls, it doesn't pay off to isolate the tests to a level where a user is created and logged-in for every request. Instead, login only once before the tests execution starts (i.e. a before-all hook), save the token in some local storage and reuse it across requests. This seems to violate one of the core testing principles - keep the test autonomous without resource coupling. While this is a valid worry, in E2E tests performance is a key concern and issuing 1-3 API requests before starting each individual test might lead to horrible execution time. Reusing credentials doesn't mean the tests have to act on the same user records - if relying on user records (e.g. test user payments history) then make sure to generate those records as part of the test and avoid sharing their existence with other tests. Also remember that the backend can be faked - if your tests are focused on the frontend it might be better to isolate it and stub the backend API (see bullet 3.6).
+
+
+
+❌ **Otherwise:** Given 200 test cases and assuming login=100ms = 20 seconds only for logging-in again and again
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: Logging-in before-all and not before-each
+
+
+
+```javascript
+let authenticationToken;
+
+// happens before ALL tests run
+before(() => {
+ cy.request('POST', 'http://localhost:3000/login', {
+ username: Cypress.env('username'),
+ password: Cypress.env('password'),
+ })
+ .its('body')
+ .then((responseFromLogin) => {
+ authenticationToken = responseFromLogin.token;
+ })
+})
+
+// happens before EACH test
+beforeEach(() => {
+ cy.visit('/home', {
+ onBeforeLoad (win) {
+ win.localStorage.setItem('token', JSON.stringify(authenticationToken))
+ },
+ })
+})
+
+```
+
+
+
+
+
+
+
+
+## ⚪ ️ 3.9 Have one E2E smoke test that just travels across the site map
+
+:white_check_mark: **Do:** For production monitoring and development-time sanity checks, run a single E2E test that visits all/most of the site pages and ensures no one is broken. This type of test brings a great return on investment as it's very easy to write and maintain, but it can detect any kind of failure including functional, network and deployment issues. Other styles of smoke and sanity checking are not as reliable and exhaustive - some ops teams just ping the home page (production) while developers run many integration tests which don't discover packaging and browser issues. It goes without saying that the smoke test doesn't replace functional tests, rather it just aims to serve as a quick smoke detector
+
+
+
+❌ **Otherwise:** Everything might seem perfect, all tests pass, production health-check is also positive but the Payment component had some packaging issue and only the /Payment route is not rendering
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: Smoke travelling across all pages
+
+```javascript
+it('When doing smoke testing over all pages, should load them all successfully', () => {
+  // exemplified using Cypress but can be implemented easily
+  // using any E2E suite
+  cy.visit('https://mysite.com/home');
+  cy.contains('Home');
+  cy.visit('https://mysite.com/Login');
+  cy.contains('Login');
+  cy.visit('https://mysite.com/About');
+  cy.contains('About');
+});
+```
+
+
+
+
+
+
+## ⚪ ️ 3.10 Expose the tests as a live collaborative document
+
+:white_check_mark: **Do:** Besides increasing app reliability, tests bring another attractive opportunity to the table - serving as live app documentation. Since tests inherently speak in a less-technical, product/UX language, using the right tools they can serve as a communication artifact that greatly aligns all the peers - developers and their customers. For example, some frameworks allow expressing the flow and expectations (i.e. the test plan) using a human-readable language so any stakeholder, including product managers, can read, approve and collaborate on the tests which just became the live requirements document. This technique is also referred to as 'acceptance testing' as it allows the customer to define his acceptance criteria in plain language. This is [BDD (behavior-driven testing)](https://en.wikipedia.org/wiki/Behavior-driven_development) in its purest form. One of the popular frameworks that enables this is [Cucumber which has a JavaScript flavor](https://github.com/cucumber/cucumber-js), see example below. Another similar yet different opportunity, [StoryBook](https://storybook.js.org/), allows exposing UI components as a graphic catalog where one can walk through the various states of each component (e.g. render a grid w/o filters, render that grid with multiple rows or with none, etc.), see how it looks, and how to trigger that state - this can appeal also to product folks but mostly serves as live doc for developers who consume those components.
+
+❌ **Otherwise:** After investing top resources on testing, it's just a pity not to leverage this investment and win great value
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: Describing tests in human-language using cucumber-js
+
+
+```gherkin
+// this is how one can describe tests using cucumber: plain language that allows anyone to understand and collaborate
+
+Feature: Twitter new tweet
+
+ I want to tweet something in Twitter
+
+ @focus
+ Scenario: Tweeting from the home page
+ Given I open Twitter home
+ Given I click on "New tweet" button
+ Given I type "Hello followers!" in the textbox
+ Given I click on "Submit" button
+ Then I see message "Tweet saved"
+
+```
+
+### :clap: Doing It Right Example: Visualizing our components, their various states and inputs using Storybook
+
+
+
+
+
+
+
+
+## ⚪ ️ 3.11 Detect visual issues with automated tools
+
+
+:white_check_mark: **Do:** Setup automated tools to capture UI screenshots when changes are presented and detect visual issues like content overlapping or breaking. This ensures that not only the right data is prepared but also that the user can conveniently see it. This technique is not widely adopted, our testing mindset leans toward functional tests, but it's the visuals that the user experiences and with so many device types it's very easy to overlook some nasty UI bug. Some free tools can provide the basics - generate and save screenshots for the inspection of human eyes. While this approach might be sufficient for small apps, it's as flawed as any other manual testing that demands human labor anytime something changes. On the other hand, it's quite challenging to detect UI issues automatically due to the lack of a clear definition - this is where the field of 'Visual Regression' chimes in and solves this puzzle by comparing old UI with the latest changes and detecting differences. Some OSS/free tools can provide some of this functionality (e.g. [wraith](https://github.com/BBC-News/wraith), [PhantomCSS](https://github.com/HuddleEng/PhantomCSS)) but might charge significant setup time. The commercial line of tools (e.g. [Applitools](https://applitools.com/), [Percy.io](https://percy.io/)) takes it a step further by smoothing the installation and packing advanced features like management UI, alerting, smart capturing by eliminating 'visual noise' (e.g. ads, animations) and even root cause analysis of the DOM/CSS changes that led to the issue
+
+
+
+❌ **Otherwise:** How good is a content page that displays great content (100% tests passed) and loads instantly, but half of the content area is hidden?
+
+
+
+
+✏ Code Examples
+
+
+
+### :thumbsdown: Anti Pattern Example: A typical visual regression - right content that is served badly
+
+
+
+
+
+
+### :clap: Doing It Right Example: Configuring wraith to capture and compare UI snapshots
+
+
+
+```
+# Add as many domains as necessary. Key will act as a label
+
+domains:
+ english: "http://www.mysite.com"
+
+# Type screen widths below, here are a couple of examples
+
+screen_widths:
+
+ - 600
+ - 768
+ - 1024
+ - 1280
+
+
+# Type page URL paths below, here are a couple of examples
+paths:
+ about:
+ path: /about
+ selector: '.about'
+ subscribe:
+ selector: '.subscribe'
+ path: /subscribe
+```
+
+### :clap: Doing It Right Example: Using Applitools to get snapshot comparison and other advanced features
+
+ 
+
+```javascript
+import * as todoPage from '../page-objects/todo-page';
+
+describe('visual validation', () => {
+  before(() => todoPage.navigate());
+  beforeEach(() => cy.eyesOpen({ appName: 'TAU TodoMVC' }));
+  afterEach(() => cy.eyesClose());
+
+  it('should look good', () => {
+    cy.eyesCheckWindow('empty todo list');
+
+    todoPage.addTodo('Clean room');
+    todoPage.addTodo('Learn javascript');
+    cy.eyesCheckWindow('two todos');
+
+    todoPage.toggleTodo(0);
+    cy.eyesCheckWindow('mark as completed');
+  });
+});
+```
+
+
+
+
+
+
+
+
+
+
+
+# Section 4️⃣: Measuring Test Effectiveness
+
+
+
+## ⚪ ️ 4.1 Get enough coverage for being confident, ~80% seems to be the lucky number
+
+:white_check_mark: **Do:** The purpose of testing is to get enough confidence for moving fast, and obviously the more code is tested the more confident the team can be. Coverage is a measure of how many code lines (and branches, statements, etc.) are being reached by the tests. So how much is enough? 10–30% is obviously too low to get any sense of the build correctness, on the other side 100% is very expensive and might shift your focus from the critical paths to the exotic corners of the code. The long answer is that it depends on many factors like the type of application — if you’re building the next generation of Airbus A380 then 100% is a must, for a cartoon pictures website 50% might be too much. Although most testing enthusiasts claim that the right coverage threshold is contextual, most of them also mention the number 80% as a rule of thumb ([Fowler: “in the upper 80s or 90s”](https://martinfowler.com/bliki/TestCoverage.html)) that presumably should satisfy most applications.
+
+Implementation tips: You may want to configure your continuous integration (CI) to have a coverage threshold ([Jest link](https://jestjs.io/docs/en/configuration.html#collectcoverage-boolean)) and stop a build that doesn’t meet this standard (it’s also possible to configure a threshold per component, see code example below). On top of this, consider detecting build coverage decreases (when newly committed code has less coverage) — this will push developers to raise or at least preserve the amount of tested code. All that said, coverage is only one measure, a quantitative one, that is not enough to tell the robustness of your testing. And it can also be fooled, as illustrated in the next bullets
+
+
+
+
+❌ **Otherwise:** Confidence and numbers go hand in hand, without really knowing that you tested most of the system — there will also be some fear, and fear will slow you down
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Example: A typical coverage report
+
+
+
+
+### :clap: Doing It Right Example: Setting up coverage per component (using Jest)
+
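+The screenshot is missing here. As a sketch, Jest's real coverageThreshold option supports per-path thresholds (the paths and numbers are illustrative):
+
+```javascript
+// jest.config.js
+module.exports = {
+  collectCoverage: true,
+  coverageThreshold: {
+    global: { branches: 80, functions: 80, lines: 80, statements: 80 },
+    // demand more from the critical components
+    './src/components/': { lines: 90 },
+  },
+};
+```
+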
+
+
+
+
+
+
+
+
+
+
+## ⚪ ️ 4.2 Inspect coverage reports to detect untested areas and other oddities
+
+:white_check_mark: **Do:** Some issues sneak just under the radar and are really hard to find using traditional tools. These are not really bugs but more of surprising application behaviors that might have a severe impact. For example, often some code areas are never or rarely invoked — you thought that the ‘PricingCalculator’ class is always setting the product price but it turns out it is actually never invoked although we have 10000 products in the DB and many sales… Code coverage reports help you realize whether the application behaves the way you believe it does. Apart from that, they can also highlight which types of code are not tested — being informed that 80% of the code is tested doesn’t tell whether the critical parts are covered. Generating reports is easy — just run your app in production or during testing with coverage tracking and then see colorful reports that highlight how frequently each code area is invoked. If you take your time to glimpse into this data — you might find some gotchas
+
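+Generating such a report is a one-liner with most test runners (a sketch; both commands are real CLIs):
+
+```bash
+# Jest has coverage reporting built in (Istanbul under the hood)
+npx jest --coverage
+# For Mocha, Istanbul's nyc wrapper produces the same kind of report
+npx nyc --reporter=html mocha
+```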
+
+
+❌ **Otherwise:** If you don’t know which parts of your code are left un-tested, you don’t know where the issues might come from
+
+
+
+
+✏ Code Examples
+
+
+
+### :thumbsdown: Anti-Pattern Example: What’s wrong with this coverage report? Based on a real-world scenario where we tracked our application usage in QA and found interesting login patterns (Hint: the number of login failures is non-proportional, something is clearly wrong. Finally it turned out that some frontend bug keeps hitting the backend login API)
+
+
+
+
+
+
+
+
+## ⚪ ️ 4.3 Measure logical coverage using mutation testing
+
+:white_check_mark: **Do:** The traditional coverage metric often lies: it may show you 100% code coverage, but none of your functions, not even one, returns the right response. How come? It simply measures over which lines of code the tests visited, but it doesn’t check if the tests actually tested anything — asserted for the right response. Like someone who’s traveling for business and showing his passport stamps — this doesn’t prove any work done, only that he visited a few airports and hotels.
+
+Mutation-based testing is here to help by measuring the amount of code that was actually TESTED not just VISITED. [Stryker](https://stryker-mutator.io/) is a JavaScript library for mutation testing and the implementation is really neat:
+
+(1) it intentionally changes the code and “plants bugs”. For example the code newOrder.price===0 becomes newOrder.price!=0. These “bugs” are called mutations
+
+(2) it runs the tests; if all succeed then we have a problem — the tests didn’t serve their purpose of discovering bugs, the mutations ‘survived’. If the tests failed, then great, the mutations were killed.
+
+Knowing that all or most of the mutations were killed gives much higher confidence than traditional coverage and the setup time is similar
+
+
+
+❌ **Otherwise:** You’ll be fooled into believing that 85% coverage means your tests will detect bugs in 85% of your code
+
+
+
+✏ Code Examples
+
+
+
+### :thumbsdown: Anti Pattern Example: 100% coverage, 0% testing
+
+
+```javascript
+function addNewOrder(newOrder) {
+  logger.log(`Adding new order ${newOrder}`);
+  DB.save(newOrder);
+  Mailer.sendMail(newOrder.assignee, `A new order was placed ${newOrder}`);
+
+  return { approved: true };
+}
+
+it("Test addNewOrder, don't use such test names", () => {
+  addNewOrder({ assignee: "John@mailer.com", price: 120 });
+}); //Triggers 100% code coverage, but it doesn't check anything
+
+```
+
+
+### :clap: Doing It Right Example: Stryker reports, a tool for mutation testing, detects and counts the amount of code that is not tested (Mutations)
+
+")
+
+
+
+
+
+
+
+## ⚪ ️4.4 Preventing test code issues with Test linters
+
+:white_check_mark: **Do:** A set of ESLint plugins were built specifically for inspecting test code patterns and discovering issues. For example, [eslint-plugin-mocha](https://www.npmjs.com/package/eslint-plugin-mocha) will warn when a test is written at the global level (not a child of a describe() statement) or when tests are [skipped](https://mochajs.org/#inclusive-tests) which might lead to a false belief that all tests are passing. Similarly, [eslint-plugin-jest](https://github.com/jest-community/eslint-plugin-jest) can, for example, warn when a test has no assertions at all (not checking anything)
+
+
+
+
+❌ **Otherwise:** Seeing 90% code coverage and 100% green tests will make your face wear a big smile only until you realize that many tests aren’t asserting for anything and many test suites were just skipped. Hopefully, you didn’t deploy anything based on this false observation
+
+
+
+✏ Code Examples
+
+
+
+### :thumbsdown: Anti Pattern Example: A test case full of errors, luckily all are caught by Linters
+
+```javascript
+describe("Too short description", () => {
+ const userToken = userService.getDefaultToken() // *error:no-setup-in-describe, use hooks (sparingly) instead
+ it("Some description", () => {});//* error: valid-test-description. Must include the word "Should" + at least 5 words
+});
+
+it.skip("Test name", () => {// *error:no-skipped-tests, error:error:no-global-tests. Put tests only under describe or suite
+ expect("somevalue"); // error:no-assert
+});
+
+it("Test name", () => {*//error:no-identical-title. Assign unique titles to tests
+});
+```
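+
+A sketch of wiring those plugins in (both recommended presets are real, the rest is illustrative):
+
+```javascript
+// .eslintrc.js
+module.exports = {
+  env: { mocha: true, jest: true },
+  plugins: ['mocha', 'jest'],
+  extends: ['plugin:mocha/recommended', 'plugin:jest/recommended'],
+};
+```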
+
+
+
+
+
+
+# Section 5️⃣: CI and Other Quality Measures
+
+
+
+## ⚪ ️ 5.1 Enrich your linters and abort builds that have linting issues
+
+:white_check_mark: **Do:** Linters are a free lunch, with a 5 min setup you get a free auto-pilot guarding your code and catching significant issues as you type. Gone are the days where linting was about cosmetics (no semi-colons!). Nowadays, linters can catch severe issues like errors that are not thrown correctly and losing information. On top of your basic set of rules (like [ESLint standard](https://www.npmjs.com/package/eslint-plugin-standard) or [Airbnb style](https://www.npmjs.com/package/eslint-config-airbnb)), consider including some specialized linters like [eslint-plugin-chai-expect](https://www.npmjs.com/package/eslint-plugin-chai-expect) that can discover tests without assertions, [eslint-plugin-promise](https://www.npmjs.com/package/eslint-plugin-promise?activeTab=readme) which can discover promises with no resolve (your code will never continue), [eslint-plugin-security](https://www.npmjs.com/package/eslint-plugin-security?activeTab=readme) which can discover eager regex expressions that might get used for DoS attacks, and [eslint-plugin-you-dont-need-lodash-underscore](https://www.npmjs.com/package/eslint-plugin-you-dont-need-lodash-underscore) which is capable of alarming when the code uses utility library methods that are part of the V8 core methods like Lodash._map(…)
+
+
+
+❌ **Otherwise:** Consider a rainy day where your production keeps crashing but the logs don’t display the error stack trace. What happened? Your code mistakenly threw a non-error object and the stack trace was lost, a good reason for banging your head against a brick wall. A 5 min linter setup could detect this mistake and save your day
+
+
+
+
+✏ Code Examples
+
+
+
+### :thumbsdown: Anti Pattern Example: The wrong Error object is thrown mistakenly, no stack-trace will appear for this error. Luckily, ESLint catches the next production bug
+
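+The screenshot is missing here. A sketch of the kind of bug meant above, which ESLint's real core rule no-throw-literal flags out of the box:
+
+```javascript
+function addProduct(newProduct) {
+  if (!newProduct) {
+    // throwing a plain string loses the stack trace; no-throw-literal flags this line
+    throw ('no product provided');
+  }
+  // ...rest of the logic (illustrative)
+}
+```
+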
+
+
+
+
+
+
+
+
+## ⚪ ️ 5.2 Shorten the feedback loop with local developer-CI
+
+:white_check_mark: **Do:** Using a CI with shiny quality inspections like testing, linting, vulnerabilities check, etc.? Help developers run this pipeline also locally to solicit instant feedback and shorten the [feedback loop](https://www.gocd.org/2016/03/15/are-you-ready-for-continuous-delivery-part-2-feedback-loops/). Why? An efficient testing process constitutes many iterative loops: (1) try-outs -> (2) feedback -> (3) refactor. The faster the feedback, the more improvement iterations a developer can perform per module and perfect the results. On the flip side, when the feedback is late to come, fewer improvement iterations can be packed into a single day, the team might have already moved forward to another topic/task/module and might not be up for refining that module.
+
+Practically, some CI vendors (example: [CircleCI local CLI](https://circleci.com/docs/2.0/local-cli/)) allow running the pipeline locally. Some commercial tools like [wallaby](https://wallabyjs.com/) provide highly-valuable testing insights as a developer prototype (no affiliation). Alternatively, you may just add an npm script to package.json that runs all the quality commands (e.g. test, lint, vulnerabilities) — use tools like [concurrently](https://www.npmjs.com/package/concurrently) for parallelization and a non-zero exit code if one of the tools failed. Now the developer should just invoke one command — e.g. ‘npm run quality’ — to get instant feedback. Consider also aborting a commit if the quality check failed, using a githook ([husky can help](https://github.com/typicode/husky))
+
+
+
+❌ **Otherwise:** When the quality results arrive the day after the code, testing doesn’t become a fluent part of development but rather an after-the-fact formal artifact
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: npm scripts that perform code quality inspection, all are run in parallel on demand or when a developer is trying to push new code
+```javascript
+"scripts": {
+ "inspect:sanity-testing": "mocha **/**--test.js --grep \"sanity\"",
+ "inspect:lint": "eslint .",
+ "inspect:vulnerabilities": "npm audit",
+ "inspect:license": "license-checker --failOn GPLv2",
+ "inspect:complexity": "plato .",
+
+ "inspect:all": "concurrently -c \"bgBlue.bold,bgMagenta.bold,yellow\" \"npm:inspect:quick-testing\" \"npm:inspect:lint\" \"npm:inspect:vulnerabilities\" \"npm:inspect:license\""
+ },
+ "husky": {
+ "hooks": {
+ "precommit": "npm run inspect:all",
+ "prepush": "npm run inspect:all"
+ }
+}
+
+```
+
+
+
+
+
+
+
+
+## ⚪ ️5.3 Perform e2e testing over a true production-mirror
+
+:white_check_mark: **Do:** End to end (e2e) testing is the main challenge of every CI pipeline — creating an identical ephemeral production mirror on the fly with all the related cloud services can be tedious and expensive. Finding the best compromise is your game: [Docker-compose](https://docs.docker.com/compose/) allows crafting an isolated dockerized environment with identical containers using a single plain text file, but the backing technology (e.g. networking, deployment model) is different from real-world production. You may combine it with [‘AWS Local’](https://github.com/localstack/localstack) to work with a stub of the real AWS services. If you went [serverless](https://serverless.com/), multiple frameworks like serverless and [AWS SAM](https://docs.aws.amazon.com/lambda/latest/dg/serverless_app.html) allow the local invocation of FaaS code.
+
+The huge Kubernetes ecosystem is yet to formalize a standard convenient tool for local and CI-mirroring, though many new tools are launched frequently. One approach is running a ‘minimized-Kubernetes’ using tools like [Minikube](https://kubernetes.io/docs/setup/minikube/) and [MicroK8s](https://microk8s.io/) which resemble the real thing only come with less overhead. Another approach is testing over a remote ‘real-Kubernetes’; some CI providers (e.g. [Codefresh](https://codefresh.io/)) have native integration with Kubernetes environments and make it easy to run the CI pipeline over the real thing, others allow custom scripting against a remote Kubernetes.
+
+
+
+❌ **Otherwise:** Using different technologies for production and testing demands maintaining two deployment models and keeps the developers and the ops team separated
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Example: a CI pipeline that generates a Kubernetes cluster on the fly ([Credit: Dynamic-environments Kubernetes](https://container-solutions.com/dynamic-environments-kubernetes/))
+
+```yaml
+deploy:
+  stage: deploy
+  image: registry.gitlab.com/gitlab-examples/kubernetes-deploy
+  script:
+    - ./configureCluster.sh $KUBE_CA_PEM_FILE $KUBE_URL $KUBE_TOKEN
+    - kubectl create ns $NAMESPACE
+    - kubectl create secret -n $NAMESPACE docker-registry gitlab-registry --docker-server="$CI_REGISTRY" --docker-username="$CI_REGISTRY_USER" --docker-password="$CI_REGISTRY_PASSWORD" --docker-email="$GITLAB_USER_EMAIL"
+    - mkdir .generated
+    - echo "$CI_BUILD_REF_NAME-$CI_BUILD_REF"
+    - sed -e "s/TAG/$CI_BUILD_REF_NAME-$CI_BUILD_REF/g" templates/deals.yaml | tee ".generated/deals.yaml"
+    - kubectl apply --namespace $NAMESPACE -f .generated/deals.yaml
+    - kubectl apply --namespace $NAMESPACE -f templates/my-sock-shop.yaml
+  environment:
+    name: test-for-ci
+```
+
+
+
+
+
+
+
+
+
+## ⚪ ️5.4 Parallelize test execution
+:white_check_mark: **Do:** When done right, testing is your 24/7 friend providing almost instant feedback. In practice, executing 500 CPU-bound unit tests on a single thread can take too long. Luckily, modern test runners and CI platforms (like [Jest](https://github.com/facebook/jest), [AVA](https://github.com/avajs/ava) and [Mocha extensions](https://github.com/yandex/mocha-parallel-tests)) can parallelize the tests into multiple processes and achieve significant improvement in feedback time. Some CI vendors also parallelize tests across containers (!) which shortens the feedback loop even further. Whether locally over multiple processes, or over some cloud CLI using multiple machines — parallelizing demands keeping the tests autonomous as each might run on a different process
+
+
+❌ **Otherwise:** Getting test results 1 hour after pushing new code, when you’re already coding the next features, is a great recipe for making testing less relevant
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: Mocha parallel & Jest easily outrun the traditional Mocha thanks to testing parallelization ([Credit: JavaScript Test-Runners Benchmark](https://medium.com/dailyjs/javascript-test-runners-benchmark-3a78d4117b4))
+")
+
+
+
+
+
+
+
+
+## ⚪ ️5.5 Stay away from legal issues using license and plagiarism check
+:white_check_mark: **Do:** Licensing and plagiarism issues are probably not your main concern right now, but why not tick this box as well in 10 minutes? A bunch of npm packages like [license check](https://www.npmjs.com/package/license-checker) and [plagiarism check](https://www.npmjs.com/package/plagiarism-checker) (commercial with a free plan) can be easily baked into your CI pipeline and inspect for sorrows like dependencies with restrictive licenses or code that was copy-pasted from Stack Overflow and apparently violates some copyrights
+
+❌ **Otherwise:** Unintentionally, developers might use packages with inappropriate licenses or copy paste commercial code and run into legal issues
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Doing It Right Example: Failing the build on an unauthorized license
+```bash
+# install license-checker in your CI environment or also locally
+npm install -g license-checker
+
+# ask it to scan all licenses and fail with an exit code other than 0 if it finds an unauthorized license. The CI system should catch this failure and stop the build
+license-checker --summary --failOn BSD
+```
+
+
+
+
+
+
+
+
+
+
+
+
+## ⚪ ️5.6 Constantly inspect for vulnerable dependencies
+:white_check_mark: **Do:** Even the most reputable dependencies such as Express have known vulnerabilities. This can get easily tamed using community tools such as [npm audit](https://docs.npmjs.com/getting-started/running-a-security-audit), or commercial tools like [snyk](https://snyk.io/) (which also offers a free community version). Both can be invoked from your CI on every build
+
+
+
+❌ **Otherwise:** Keeping your code clean from vulnerabilities without dedicated tools will require you to constantly follow online publications about new threats. Quite tedious
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Example: NPM Audit result
+
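+The screenshot is missing here. As a sketch, the real npm audit command can gate a CI build like this (the severity flag requires a recent npm version):
+
+```bash
+# fail the build (non-zero exit) when vulnerabilities of moderate severity or higher exist
+npm audit --audit-level=moderate
+```
+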
+
+
+
+
+
+
+
+
+## ⚪ ️5.7 Automate dependency updates
+:white_check_mark: **Do:** npm’s and Yarn’s introduction of package-lock.json brought a serious challenge (the road to hell is paved with good intentions) — by default now, packages are no longer getting updates. Even a team running many fresh deployments with ‘npm install’ & ‘npm update’ won’t get any new updates. This leads to subpar dependency versions at best or to vulnerable code at worst. Teams now rely on developers’ goodwill and memory to manually update the package.json or use tools [like ncu](https://www.npmjs.com/package/npm-check-updates) manually. A more reliable way could be to automate the process of getting the most reliable dependency versions; though there are no silver-bullet solutions yet, there are two possible automation roads:
+
+(1) CI can fail builds that have obsolete dependencies — using tools like [‘npm outdated’](https://docs.npmjs.com/cli/outdated) or ‘npm-check-updates (ncu)’. Doing so will enforce developers to update dependencies.
+
+(2) Use commercial tools that scan the code and automatically send pull requests with updated dependencies. One interesting question remaining is what the dependency update policy should be — updating on every patch generates too much overhead, updating right when a major is released might point to an unstable version (many packages are found vulnerable in the very first days after being released, [see the](https://nodesource.com/blog/a-high-level-post-mortem-of-the-eslint-scope-security-incident/) eslint-scope incident).
+
+An efficient update policy may allow some ‘vesting period’ — let the code lag behind the @latest for some time and versions before considering the local copy as obsolete (e.g. local version is 1.3.1 and repository version is 1.3.8)
+
+
+
+❌ **Otherwise:** Your production will run packages that have been explicitly tagged by their author as risky
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Example: [ncu](https://www.npmjs.com/package/npm-check-updates) can be used manually or within a CI pipeline to detect to what extent the code lags behind the latest versions
+
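+The screenshot is missing here. As a sketch, both of these real commands report how far dependencies lag behind:
+
+```bash
+# list newer versions without touching package.json
+npx npm-check-updates
+# npm's built-in report; exits non-zero when outdated packages exist, which a CI can act on
+npm outdated
+```
+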
+
+
+
+
+
+
+
+## ⚪ ️ 5.8 Other, non-Node related, CI tips
+:white_check_mark: **Do:** This post is focused on testing advice that is related to, or at least can be exemplified with, Node.js. This bullet, however, groups a few non-Node-related tips that are well-known
+
+ - Use a declarative syntax. This is the only option for most vendors but older versions of Jenkins allow using code or a UI
- Opt for a vendor that has native Docker support
- Fail early, run your fastest tests first. Create a ‘Smoke testing’ step/milestone that groups multiple fast inspections (e.g. linting, unit tests) and provide snappy feedback to the code committer
- Make it easy to skim-through all build artifacts including test reports, coverage reports, mutation reports, logs, etc
- Create multiple pipelines/jobs for each event, reuse steps between them. For example, configure a job for feature branch commits and a different one for master PR. Let each reuse logic using shared steps (most vendors provide some mechanism for code reuse)
- Never embed secrets in a job declaration, grab them from a secret store or from the job’s configuration
- Explicitly bump version in a release build or at least ensure the developer did so
- Build only once and perform all the inspections over the single build artifact (e.g. Docker image)
- Test in an ephemeral environment that doesn’t drift state between builds. Caching node_modules might be the only exception
+
+
+
+❌ **Otherwise:** You‘ll miss years of wisdom
+
+
+
+## ⚪ ️ 5.9 Build matrix: Run the same CI steps using multiple Node versions
+:white_check_mark: **Do:** Quality checking is about serendipity, the more ground you cover the luckier you get in detecting issues early. When developing reusable packages or running a multi-customer production with various configurations and Node versions, the CI must run the pipeline of tests over all the permutations of configurations. For example, assuming we use MySQL for some customers and Postgres for others — some CI vendors support a feature called ‘Matrix’ which allows running the suite of tests against all permutations of MySQL, Postgres and multiple Node versions like 8, 9 and 10. This is done using configuration only without any additional effort (assuming you have testing or any other quality checks). Other CIs that don’t support Matrix might have extensions or tweaks to allow that
+
+
+
+❌ **Otherwise:** So, after doing all that hard work of writing tests, are we going to let bugs sneak in only because of configuration issues?
+
+
+
+
+✏ Code Examples
+
+
+
+### :clap: Example: Using Travis (CI vendor) build definition to run the same test over multiple Node versions
+```yaml
+language: node_js
+node_js:
+  - "7"
+  - "6"
+  - "5"
+  - "4"
+install:
+  - npm install
+script:
+  - npm run test
+```
+
+
+
+
+# Team
+
+
+
+## Yoni Goldberg
+
+
+
+
+
+**Role:** Writer
+
+**About:** I'm an independent consultant who works with Fortune 500 corporations and garage startups on polishing their JS & Node.js applications. More than any other topic, I'm fascinated by and aim to master the art of testing. I'm also the author of [Node.js Best Practices](https://github.com/goldbergyoni/nodebestpractices)
+
+
+
+**Workshop:** 👨🏫 Want to learn all these practices and techniques at your offices (Europe & USA)? [Register here for my testing workshop](https://testjavascript.com/)
+
+
+**Follow:**
+
+* [🐦 Twitter](https://twitter.com/goldbergyoni/)
+* [📞 Contact](https://testjavascript.com/contact-2/)
+* [✉️ Newsletter](https://testjavascript.com/newsletter//)
+
+
+
+
+
+
+## [Bruno Scheufler](https://github.com/BrunoScheufler)
+
+**Role:** Tech reviewer and advisor
+
+Took care to revise, improve, lint and polish all the texts
+
+**About:** full-stack web engineer, Node.js & GraphQL enthusiast
+
+
+
+## [Ido Richter](https://github.com/idori)
+
+**Role:** Concept, design and great advice
+
+**About:** A savvy frontend developer, CSS expert and emojis freak
From 50f35fb38630fd86c468dfaf3c9fe51ddf29ed95 Mon Sep 17 00:00:00 2001
From: devori
Date: Fri, 6 Sep 2019 15:20:35 +0900
Subject: [PATCH 054/502] Translation-ko_KR(1.6)
---
readme.kr.md | 31 +++++++++++++++----------------
1 file changed, 15 insertions(+), 16 deletions(-)
diff --git a/readme.kr.md b/readme.kr.md
index be8617d7..3c61d2cd 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -346,22 +346,22 @@ it("유효한 제품을 삭제하려고 할 때, 메일을 보낸다", async ()
-## ⚪ ️1.6 Don’t “foo”, use realistic input data
+## ⚪ ️1.6 의미없는 인풋 데이터를 사용하지 말고, 실제와 같은 인풋 데이터를 사용해라
-:white_check_mark: **Do:** Often production bugs are revealed under some very specific and surprising input — the more realistic the test input is, the greater the chances are to catch bugs early. Use dedicated libraries like [Faker](https://www.npmjs.com/package/faker) to generate pseudo-real data that resembles the variety and form of production data. For example, such libraries can generate realistic phone numbers, usernames, credit card, company names, and even ‘lorem ipsum’ text. You may also create some tests (on top of unit tests, not instead) that randomize fakers data to stretch your unit under test or even import real data from your production environment. Want to take it to the next level? see next bullet (property-based testing).
+:white_check_mark: **이렇게 해라:** 흔히 제품의 버그들은 매우 특수한 인풋데이터를 통해 나타납니다 - 테스트 인풋이 현실적일수록 버그를 조기에 발견할 가능성이 높아집니다. 실제 데이터와 다양성 및 형태가 유사한 데이터를 생성해 주는 [Faker](https://www.npmjs.com/package/faker) 같은 전용 라이브러리들을 사용하십시오. 이런 라이브러리들은 실제같은 전화번호, 사용자 이름, 신용카드, 회사명 그리고 심지어 'lorem ipsum'같은 문자등을 생성할 수도 있습니다. 당신은 가상의 데이터를 사용하여 테스트(단위 테스트 위에서)를 무작위화 하거나 심지어 실제 환경으로부터의 실제 데이터를 임포트 할수도 있습니다. 한 단계 더 나아가고 싶으십니까? 다음 항목(property-based testing)을 보십시오.
-❌ **Otherwise:** All your development testing will falsely seem green when you use synthetic inputs like “Foo” but then production might turn red when a hacker passes-in a nasty string like “@3e2ddsf . ##’ 1 fdsfds . fds432 AAAA”
+❌ **그렇지 않다면:** "Foo"와 같은 인풋을 사용하면 당신의 모든 테스트가 모두 통과한것 처럼 표시되지만, 실제 환경에서는 해커가 “@3e2ddsf . ##’ 1 fdsfds . fds432 AAAA” 같은 인풋을 전달해 실패 할수도 있습니다.
-✏ Code Examples
+✏ 코드 예제
-### :thumbsdown: Anti-Pattern Example: A test suite that passes due to non-realistic data
+### :thumbsdown: 올바르지 않은 예: 현실적이지 않은 데이터 때문에 통과하는 테스트
@@ -369,34 +369,33 @@ it("유효한 제품을 삭제하려고 할 때, 메일을 보낸다", async ()
```javascript
const addProduct = (name, price) =>{
- const productNameRegexNoSpace = /^\S*$/;//no white-space allowd
+ const productNameRegexNoSpace = /^\S*$/;// 공백은 허용되지 않음
if(!productNameRegexNoSpace.test(name))
- return false;//this path never reached due to dull input
+ return false;//도달하지 않는 곳
//some logic here
return true;
};
-test("Wrong: When adding new product with valid properties, get successful confirmation", async () => {
- //The string "Foo" which is used in all tests never triggers a false result
+test("잘못된 예제: 유효한 속성과 함께 제품을 추가한다면, 성공을 얻는다.", async () => {
+ //모든 테스트에서 false 가 리턴되지 않는 "Foo" 인풋을 사용
const addProductResult = addProduct("Foo", 5);
expect(addProductResult).toBe(true);
- //Positive-false: the operation succeeded because we never tried with long
- //product name including spaces
+ //거짓된 성공: 공백을 포함하는 문자열을 사용하지 않았기 때문에 테스트는 성공한다.
});
```
-### :clap:Doing It Right Example: Randomizing realistic input
+### :clap:올바른 예: 무작위한 현실적인 인풋Randomizing realistic input
```javascript
-it("Better: When adding new valid product, get successful confirmation", async () => {
+it("더 나은 것: 유효한 제품이 추가된다면, 성공을 얻는다.", async () => {
const addProductResult = addProduct(faker.commerce.productName(), faker.random.number());
- //Generated random input: {'Sleek Cotton Computer', 85481}
+ //생성된 무작위 인풋: {'Sleek Cotton Computer', 85481}
expect(addProductResult).to.be.true;
- //Test failed, the random input triggered some path we never planned for.
- //We discovered a bug early!
+ //테스트는 실패한다, 무작위 인풋은 우리가 계획하지 않은 일이 일어나도록 만든다.
+ //우리는 조기에 버그를 발견했다!
});
```
From 71e7813a092af0ec19fa930c7239510e6c5d9e55 Mon Sep 17 00:00:00 2001
From: devori
Date: Fri, 6 Sep 2019 15:23:04 +0900
Subject: [PATCH 055/502] fix typo
remove english sentence that translated into ko-KR
---
readme.kr.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/readme.kr.md b/readme.kr.md
index 3c61d2cd..927f5871 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -388,7 +388,7 @@ test("잘못된 예제: 유효한 속성과 함께 제품을 추가한다면,
```
-### :clap:올바른 예: 무작위한 현실적인 인풋Randomizing realistic input
+### :clap:올바른 예: 무작위한 현실적인 인풋
```javascript
it("더 나은 것: 유효한 제품이 추가된다면, 성공을 얻는다.", async () => {
const addProductResult = addProduct(faker.commerce.productName(), faker.random.number());
From b6182259d5fe7659def70f315b56ad5c66544dc6 Mon Sep 17 00:00:00 2001
From: sury
Date: Fri, 6 Sep 2019 15:40:28 +0900
Subject: [PATCH 056/502] translation 1.7 ko/kr
---
readme.kr.md | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/readme.kr.md b/readme.kr.md
index 927f5871..d8c03f0d 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -406,13 +406,15 @@ it("더 나은 것: 유효한 제품이 추가된다면, 성공을 얻는다.",
-## ⚪ ️ 1.7 Test many input combinations using Property-based testing
+## ⚪ ️ 1.7 프로퍼티 기반(Property-based) 테스트를 통해 다양한 인풋 값 조합으로 테스를 하십시오.
-:white_check_mark: **Do:** Typically we choose a few input samples for each test. Even when the input format resembles real-world data (see bullet ‘Don’t foo’), we cover only a few input combinations (method(‘’, true, 1), method(“string” , false” , 0)), However, in production, an API that is called with 5 parameters can be invoked with thousands of different permutations, one of them might render our process down ([see Fuzz Testing](https://en.wikipedia.org/wiki/Fuzzing)). What if you could write a single test that sends 1000 permutations of different inputs automatically and catches for which input our code fails to return the right response? Property-based testing is a technique that does exactly that: by sending all the possible input combinations to your unit under test it increases the serendipity of finding a bug. For example, given a method — addNewProduct(id, name, isDiscount) — the supporting libraries will call this method with many combinations of (number, string, boolean) like (1, “iPhone”, false), (2, “Galaxy”, true). You can run property-based testing using your favorite test runner (Mocha, Jest, etc) using libraries like [js-verify](https://github.com/jsverify/jsverify) or [testcheck](https://github.com/leebyron/testcheck-js) (much better documentation). Update: Nicolas Dubien suggests in the comments below to [checkout fast-check](https://github.com/dubzzz/fast-check#readme) which seems to offer some additional features and also to be actively maintained
+:white_check_mark: **이렇게 해라:** 우리는 일반적으로 적은 수의 인풋 샘플 데이터를 가지고 테스트를 합니다. 심지어 인풋 데이터 형식이 실제 데이터와 비슷할 때에도 다음과 같이 제한된 인풋 조합으로만 테스트를 커버합니다.(method(‘’, true, 1), method(“string” , false” , 0)) 하지만, 운영시에는 5개의 파라미터를 가지는 API는 수 천 개의 다른 조합의 파라미터로 호출 될 수 있고, 이 중 하나가 우리의 시스템을 다운시킬 수도 있습니다. 그렇다면 만약 1000 가지 조합의 인풋값을 자동으로 생성하고 올바른 응답을 반환하지 못하는 인풋값을 찾아내는 단일 테스트를 작성할 수 있다면 어떨까요?
+프로퍼티 기반 테스트는 유닛 테스트에 모든 가능한 인풋 조합을 사용하여 생각하지 못 한 버그를 찾을 확률을 높여줍니다. 예를들어, 다음의 메소드가 주어졌을 때 — addNewProduct(id, name, isDiscount) — 프로퍼티 기반 테스트 라이브러리들은 다양한 파라미터 (number, string, boolean) 조합으로 - (1, “iPhone”, false), (2, “Galaxy”, true) - 이 메소드를 호출합니다. [js-verify](https://github.com/jsverify/jsverify) 나 [testcheck](https://github.com/leebyron/testcheck-js) (much better documentation) 같은 라이브러리를 지원하는 테스트 러너들 (Mocha, Jest, etc) 중 당신이 가장 선호하는 방법을 통해 프로퍼티 기반 테스트를 할 수 있습니다.
+업데이트 : Nicolas Dubien가 코멘트를 통해 더 많은 부가적인 기능들을 제공하고 활발하게 유지보수되고 있는 라이브러리 [fast-check](https://github.com/dubzzz/fast-check#readme)를 추천해 주었습니다.
-❌ **Otherwise:** Unconsciously, you choose the test inputs that cover only code paths that work well. Unfortunately, this decreases the efficiency of testing as a vehicle to expose bugs
+❌ **그렇지 않으면:** 무의식적으로 당신은 오직 코드가 잘 동작하는 테스트 인풋을 사용할 것입니다. 불행하게도 이러한 방식은 버그를 찾는 도구로써의 테스트 효율성을 떨어뜨릴 것입니다.
@@ -421,7 +423,7 @@ it("더 나은 것: 유효한 제품이 추가된다면, 성공을 얻는다.",
-### :clap: Doing It Right Example: Testing many input permutations with “mocha-testcheck”
+### :clap: 올바른 예: “mocha-testcheck”를 사용하여 다양한 인풋 조합으로 테스트 하십시오.
@@ -432,7 +434,7 @@ const {expect} = require('chai');
describe('Product service', () => {
describe('Adding new', () => {
- //this will run 100 times with different random properties
+ //서로 다른 무작위 값으로 100회 호출됩니다.
check.it('Add new product with random yet valid properties, always successful',
gen.int, gen.string, (id, name) => {
expect(addNewProduct(id, name).status).to.equal('approved');
From d9b3d87098fdd23b024d74afc91355550189cbd3 Mon Sep 17 00:00:00 2001
From: sury
Date: Fri, 6 Sep 2019 15:46:03 +0900
Subject: [PATCH 057/502] fix 1.7 tranlation to Korean
---
readme.kr.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/readme.kr.md b/readme.kr.md
index d8c03f0d..3ea34a81 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -419,7 +419,7 @@ it("더 나은 것: 유효한 제품이 추가된다면, 성공을 얻는다.",
-✏ Code Examples
+✏ 코드 예제
From e3bf782b5694ffadb6996846d6fca67128862ba9 Mon Sep 17 00:00:00 2001
From: sury
Date: Fri, 6 Sep 2019 16:01:06 +0900
Subject: [PATCH 058/502] =?UTF-8?q?fix=20used=20word=20(=EB=8B=A8=EC=9D=BC?=
=?UTF-8?q?=20->=20=EB=8B=A8=EC=9C=84)?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
readme.kr.md | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/readme.kr.md b/readme.kr.md
index 3ea34a81..317d1671 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -406,10 +406,10 @@ it("더 나은 것: 유효한 제품이 추가된다면, 성공을 얻는다.",
-## ⚪ ️ 1.7 프로퍼티 기반(Property-based) 테스트를 통해 다양한 인풋 값 조합으로 테스를 하십시오.
+## ⚪ ️ 1.7 프로퍼티 기반(Property-based) 테스트를 통해 다양한 인풋 값 조합으로 테스트를 하십시오.
-:white_check_mark: **이렇게 해라:** 우리는 일반적으로 적은 수의 인풋 샘플 데이터를 가지고 테스트를 합니다. 심지어 인풋 데이터 형식이 실제 데이터와 비슷할 때에도 다음과 같이 제한된 인풋 조합으로만 테스트를 커버합니다.(method(‘’, true, 1), method(“string” , false” , 0)) 하지만, 운영시에는 5개의 파라미터를 가지는 API는 수 천 개의 다른 조합의 파라미터로 호출 될 수 있고, 이 중 하나가 우리의 시스템을 다운시킬 수도 있습니다. 그렇다면 만약 1000 가지 조합의 인풋값을 자동으로 생성하고 올바른 응답을 반환하지 못하는 인풋값을 찾아내는 단일 테스트를 작성할 수 있다면 어떨까요?
-프로퍼티 기반 테스트는 유닛 테스트에 모든 가능한 인풋 조합을 사용하여 생각하지 못 한 버그를 찾을 확률을 높여줍니다. 예를들어, 다음의 메소드가 주어졌을 때 — addNewProduct(id, name, isDiscount) — 프로퍼티 기반 테스트 라이브러리들은 다양한 파라미터 (number, string, boolean) 조합으로 - (1, “iPhone”, false), (2, “Galaxy”, true) - 이 메소드를 호출합니다. [js-verify](https://github.com/jsverify/jsverify) 나 [testcheck](https://github.com/leebyron/testcheck-js) (much better documentation) 같은 라이브러리를 지원하는 테스트 러너들 (Mocha, Jest, etc) 중 당신이 가장 선호하는 방법을 통해 프로퍼티 기반 테스트를 할 수 있습니다.
+:white_check_mark: **이렇게 해라:** 우리는 일반적으로 적은 수의 인풋 샘플 데이터를 가지고 테스트를 합니다. 심지어 인풋 데이터 형식이 실제 데이터와 비슷할 때에도 다음과 같이 제한된 인풋 조합으로만 테스트를 커버합니다.(method(‘’, true, 1), method(“string” , false” , 0)) 하지만, 운영시에는 5개의 파라미터를 가지는 API는 수 천 개의 다른 조합의 파라미터로 호출 될 수 있고, 이 중 하나가 우리의 시스템을 다운시킬 수도 있습니다. 그렇다면 만약 1000 가지 조합의 인풋값을 자동으로 생성하고 올바른 응답을 반환하지 못하는 인풋값을 찾아내는 단위 테스트를 작성할 수 있다면 어떨까요?
+프로퍼티 기반 테스트는 단위 테스트에 모든 가능한 인풋 조합을 사용하여 생각하지 못 한 버그를 찾을 확률을 높여줍니다. 예를들어, 다음의 메소드가 주어졌을 때 — addNewProduct(id, name, isDiscount) — 프로퍼티 기반 테스트 라이브러리들은 다양한 파라미터 (number, string, boolean) 조합으로 - (1, “iPhone”, false), (2, “Galaxy”, true) - 이 메소드를 호출합니다. [js-verify](https://github.com/jsverify/jsverify) 나 [testcheck](https://github.com/leebyron/testcheck-js) (much better documentation) 같은 라이브러리를 지원하는 테스트 러너들 (Mocha, Jest, etc) 중 당신이 가장 선호하는 방법을 통해 프로퍼티 기반 테스트를 할 수 있습니다.
업데이트 : Nicolas Dubien가 코멘트를 통해 더 많은 부가적인 기능들을 제공하고 활발하게 유지보수되고 있는 라이브러리 [fast-check](https://github.com/dubzzz/fast-check#readme)를 추천해 주었습니다.
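
For readers who want to try the fast-check library mentioned just above, a minimal sketch could look like the following (the `addNewProduct` function and its module path are assumptions carried over from the mocha-testcheck example, not code from the original post):

```javascript
// Property-based testing with fast-check (https://github.com/dubzzz/fast-check)
// Assumes an addNewProduct(id, name) similar to the mocha-testcheck example above
const fc = require("fast-check");
const { expect } = require("chai");
const { addNewProduct } = require("./productService"); // hypothetical module

it("Add new product with random yet valid properties, always successful", () => {
  fc.assert(
    fc.property(fc.integer(), fc.string(), (id, name) => {
      // fast-check runs this predicate with many generated (id, name) pairs
      expect(addNewProduct(id, name).status).to.equal("approved");
    })
  );
});
```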
From a4b06bea9f8714b58fa2ad31a5492fbbc34982b6 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Aurelijus=20Ro=C5=BE=C4=97nas?=
Date: Fri, 6 Sep 2019 14:35:24 +0300
Subject: [PATCH 059/502] Fixed mistype.
---
readme.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/readme.md b/readme.md
index c459d651..3b2d801a 100644
--- a/readme.md
+++ b/readme.md
@@ -1338,7 +1338,7 @@ test('When no products exist, show the appropriate message', () => {
-❌ **Otherwise:** UI might invest much in testing its functionality only to realizes very late that the backend returned payload (the data schema the UI has to work with) is very differnt than expected
+❌ **Otherwise:** UI might invest much in testing its functionality only to realize very late that the backend returned payload (the data schema the UI has to work with) is very different than expected
From bd3d1630104fb750470080a4f7c428757796c799 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Aurelijus=20Ro=C5=BE=C4=97nas?=
Date: Fri, 6 Sep 2019 15:36:09 +0300
Subject: [PATCH 060/502] Added closing parentheses.
---
readme.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/readme.md b/readme.md
index 3b2d801a..c03fdfa2 100644
--- a/readme.md
+++ b/readme.md
@@ -1948,7 +1948,7 @@ An efficient update policy may allow some ‘vesting period’ — let the c
## ⚪ ️ 5.8 Other, non-Node related, CI tips
:white_check_mark: **Do:** This post is focused on testing advice that is related to, or at least can be exemplified with Node JS. This bullet, however, groups few non-Node related tips that are well-known
- - Use a declarative syntax. This is the only option for most vendors but older versions of Jenkins allows using code or UI
- Opt for a vendor that has native Docker support
- Fail early, run your fastest tests first. Create a ‘Smoke testing’ step/milestone that groups multiple fast inspections (e.g. linting, unit tests) and provide snappy feedback to the code committer
- Make it easy to skim-through all build artifacts including test reports, coverage reports, mutation reports, logs, etc
- Create multiple pipelines/jobs for each event, reuse steps between them. For example, configure a job for feature branch commits and a different one for master PR. Let each reuse logic using shared steps (most vendors provide some mechanism for code reuse
- Never embed secrets in a job declaration, grab them from a secret store or from the job’s configuration
- Explicitly bump version in a release build or at least ensure the developer did so
- Build only once and perform all the inspections over the single build artifact (e.g. Docker image)
- Test in an ephemeral environment that doesn’t drift state between builds. Caching node_modules might be the only exception
+ - Use a declarative syntax. This is the only option for most vendors but older versions of Jenkins allow using code or UI
+ - Opt for a vendor that has native Docker support
+ - Fail early, run your fastest tests first. Create a ‘Smoke testing’ step/milestone that groups multiple fast inspections (e.g. linting, unit tests) and provides snappy feedback to the code committer
+ - Make it easy to skim through all build artifacts including test reports, coverage reports, mutation reports, logs, etc
+ - Create multiple pipelines/jobs for each event, reuse steps between them. For example, configure a job for feature branch commits and a different one for master PR. Let each reuse logic using shared steps (most vendors provide some mechanism for code reuse)
+ - Never embed secrets in a job declaration, grab them from a secret store or from the job’s configuration
+ - Explicitly bump version in a release build or at least ensure the developer did so
+ - Build only once and perform all the inspections over the single build artifact (e.g. Docker image)
+ - Test in an ephemeral environment that doesn’t drift state between builds. Caching node_modules might be the only exception
From 0637e6e5a0d75d6f007bb55dbd66dfddb5b98f3a Mon Sep 17 00:00:00 2001
From: Rain Byun
Date: Sat, 7 Sep 2019 16:59:39 +0900
Subject: [PATCH 061/502] GPG signature test.
---
readme.kr.md | 18 +++++-------------
1 file changed, 5 insertions(+), 13 deletions(-)
diff --git a/readme.kr.md b/readme.kr.md
index 317d1671..33ea8c9c 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -346,15 +346,14 @@ it("유효한 제품을 삭제하려고 할 때, 메일을 보낸다", async ()
-## ⚪ ️1.6 의미없는 인풋 데이터를 사용하지 말고, 실제와 같은 인풋 데이터를 사용해라
+## ⚪ ️ 1.6 의미없는 인풋 데이터를 사용하지 말고, 실제와 같은 인풋 데이터를 사용해라
:white_check_mark: **이렇게 해라:** 흔히 제품의 버그들은 매우 특수한 인풋데이터를 통해 나타납니다 - 테스트 인풋이 현실적일수록 버그를 조기에 발견할 가능성이 높아집니다. 실제 데이터와 다양성 및 형태가 유사한 데이터를 생성해 주는 [Faker](https://www.npmjs.com/package/faker) 같은 전용 라이브러리들을 사용하십시오. 이런 라이브러리들은 실제같은 전화번호, 사용자 이름, 신용카드, 회사명 그리고 심지어 'lorem ipsum'같은 문자등을 생성할 수도 있습니다. 당신은 가상의 데이터를 사용하여 테스트(단위 테스트 위에서)를 무작위화 하거나 심지어 실제 환경으로부터의 실제 데이터를 임포트 할수도 있습니다. 한 단계 더 나아가고 싶으십니까? 다음 항목(property-based testing)을 보십시오.
-
+
❌ **그렇지 않다면:** "Foo"와 같은 인풋을 사용하면 당신의 모든 테스트가 모두 통과한것 처럼 표시되지만, 실제 환경에서는 해커가 “@3e2ddsf . ##’ 1 fdsfds . fds432 AAAA” 같은 인풋을 전달해 실패 할수도 있습니다.
-
✏ 코드 예제
@@ -384,11 +383,12 @@ test("잘못된 예제: 유효한 속성과 함께 제품을 추가한다면,
expect(addProductResult).toBe(true);
//거짓된 성공: 공백을 포함하는 문자열을 사용하지 않았기 때문에 테스트는 성공한다.
});
-
```
+
### :clap:올바른 예: 무작위한 현실적인 인풋
+
```javascript
it("더 나은 것: 유효한 제품이 추가된다면, 성공을 얻는다.", async () => {
const addProductResult = addProduct(faker.commerce.productName(), faker.random.number());
@@ -401,9 +401,6 @@ it("더 나은 것: 유효한 제품이 추가된다면, 성공을 얻는다.",
-
-
-
## ⚪ ️ 1.7 프로퍼티 기반(Property-based) 테스트를 통해 다양한 인풋 값 조합으로 테스트를 하십시오.
@@ -411,12 +408,11 @@ it("더 나은 것: 유효한 제품이 추가된다면, 성공을 얻는다.",
:white_check_mark: **이렇게 해라:** 우리는 일반적으로 적은 수의 인풋 샘플 데이터를 가지고 테스트를 합니다. 심지어 인풋 데이터 형식이 실제 데이터와 비슷할 때에도 다음과 같이 제한된 인풋 조합으로만 테스트를 커버합니다.(method(‘’, true, 1), method(“string” , false” , 0)) 하지만, 운영시에는 5개의 파라미터를 가지는 API는 수 천 개의 다른 조합의 파라미터로 호출 될 수 있고, 이 중 하나가 우리의 시스템을 다운시킬 수도 있습니다. 그렇다면 만약 1000 가지 조합의 인풋값을 자동으로 생성하고 올바른 응답을 반환하지 못하는 인풋값을 찾아내는 단위 테스트를 작성할 수 있다면 어떨까요?
프로퍼티 기반 테스트는 단위 테스트에 모든 가능한 인풋 조합을 사용하여 생각하지 못 한 버그를 찾을 확률을 높여줍니다. 예를들어, 다음의 메소드가 주어졌을 때 — addNewProduct(id, name, isDiscount) — 프로퍼티 기반 테스트 라이브러리들은 다양한 파라미터 (number, string, boolean) 조합으로 - (1, “iPhone”, false), (2, “Galaxy”, true) - 이 메소드를 호출합니다. [js-verify](https://github.com/jsverify/jsverify) 나 [testcheck](https://github.com/leebyron/testcheck-js) (much better documentation) 같은 라이브러리를 지원하는 테스트 러너들 (Mocha, Jest, etc) 중 당신이 가장 선호하는 방법을 통해 프로퍼티 기반 테스트를 할 수 있습니다.
업데이트 : Nicolas Dubien가 코멘트를 통해 더 많은 부가적인 기능들을 제공하고 활발하게 유지보수되고 있는 라이브러리 [fast-check](https://github.com/dubzzz/fast-check#readme)를 추천해 주었습니다.
-
+
❌ **그렇지 않으면:** 무의식적으로 당신은 오직 코드가 잘 동작하는 테스트 인풋을 사용할 것입니다. 불행하게도 이러한 방식은 버그를 찾는 도구로써의 테스트 효율성을 떨어뜨릴 것입니다.
-
✏ 코드 예제
@@ -441,14 +437,10 @@ describe('Product service', () => {
});
})
});
-
```
-
-
-
## ⚪ ️ 1.8 If needed, use only short & inline snapshots
From 6e17cc3a7a5d5a3e9f51c507cdf637c74a2c4f60 Mon Sep 17 00:00:00 2001
From: Rain Byun
Date: Mon, 9 Sep 2019 00:09:05 +0900
Subject: [PATCH 062/502] Translate into Korean 1.8
---
readme.kr.md | 28 +++++++++++++++-------------
1 file changed, 15 insertions(+), 13 deletions(-)
diff --git a/readme.kr.md b/readme.kr.md
index 33ea8c9c..a79db949 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -443,30 +443,31 @@ describe('Product service', () => {
-## ⚪ ️ 1.8 If needed, use only short & inline snapshots
+## ⚪ ️ 1.8 필요한 경우 짧거나 인라인 스냅샷만 사용하십시오.
-:white_check_mark: **Do:** When there is a need for [snapshot testing](https://jestjs.io/docs/en/snapshot-testing), use only short and focused snapshots (i.e. 3-7 lines) that are included as part of the test ([Inline Snapshot](https://jestjs.io/docs/en/snapshot-testing#inline-snapshots)) and not within external files. Keeping this guideline will ensure your tests remain self-explanatory and less fragile.
+:white_check_mark: **이렇게 해라:** [스냅샷 테스트](https://jestjs.io/docs/en/snapshot-testing)가 필요한 경우 외부 파일이 아닌 테스트의 일부 ([인라인 스냅샷](https://jestjs.io/docs/en/snapshot-testing#inline-snapshots))에 포함 된 짧고 집중된 스냅샷(3~7 라인)만 사용하십시오. 이 지침을 따르면 따로 설명이 필요없고 잘 깨지지 않는 테스트가 됩니다.
-On the other hand, ‘classic snapshots’ tutorials and tools encourage to store big files (e.g. component rendering markup, API JSON result) over some external medium and ensure each time when the test run to compare the received result with the saved version. This, for example, can implicitly couple our test to 1000 lines with 3000 data values that the test writer never read and reasoned about. Why is this wrong? By doing so, there are 1000 reasons for your test to fail - it’s enough for a single line to change for the snapshot to get invalid and this is likely to happen a lot. How frequently? for every space, comment or minor CSS/HTML change. Not only this, the test name wouldn’t give a clue about the failure as it just checks that 1000 lines didn’t change, also it encourages to the test writer to accept as the desired true a long document he couldn’t inspect and verify. All of these are symptoms of obscure and eager test that is not focused and aims to achieve too much
+반면에, '고전적인 스냅샷' 튜토리얼 및 도구는 외부에 큰 파일(예: 컴포넌트 렌더링 마크업, API JSON 결과)을 저장하고, 테스트를 실행할 때마다 수신된 결과를 저장된 버전과 비교하기를 권장합니다. 예를 들어, 이것은 1,000 라인(우리가 절대 읽지 않고 추론하지 않을 3,000개의 데이터 값을 가진)의 코드를 우리 테스트에 암시적으로 연결할 수 있습니다. 왜 이것이 잘못되었을까요? 이렇게 하면 테스트에 실패할 1,000가지 이유가 생깁니다. 한 줄만 변경되어도 스냅샷이 유효하지 않게 되고, 이런 일이 일어날 가능성이 높습니다. 얼마나 자주? 모든 공백, 주석 혹은 사소한 CSS/HTML 변경에 대해서. 뿐만 아니라 테스트 이름은 1,000 라인이 변경되지 않았는지만 확인하기 때문에, 실패에 대한 단서를 제공하지 않습니다. 또한 테스트 작성자가 긴 문서(검사하고 확인할 수 없는)를 원하는 결과로 받아들이게끔 합니다. 이 모든 것은 초점이 맞지 않고 너무 많은 것을 달성하려는 모호하고 성급한 테스트의 증상입니다.
+
+긴 외부 스냅샷이 허용되는 몇 가지 경우가 있다는 점은 주목할 가치가 있습니다 - 데이터가 아닌 스키마를 assert 할 때(값을 걸러내고 필드에 집중) 또는 수신된 문서가 거의 변경되지 않는 경우입니다.
-It’s worth noting that there are few cases where long & external snapshots are acceptable - when asserting on schema and not data (extracting out values and focusing on fields) or when the received document rarely changes
-❌ **Otherwise:** A UI test fails. The code seems right, the screen renders perfect pixels, what happened? your snapshot testing just found a difference from the origin document to current received one - a single space character was added to the markdown...
+❌ **그렇지 않다면:** UI 테스트가 실패합니다. 코드가 문제없어 보이고 화면이 완벽한 픽셀을 렌더링합니다. 무슨 일이 일어난 걸까요? 스냅샷 테스트에서 원본 문서와 현재 수신된 문서와의 차이점을 발견했습니다. 빈칸 하나가 마크다운에 추가되었습니다...
-✏ Code Examples
+✏ 코드 예제
-### :thumbsdown: Anti-Pattern Example: Coupling our test to unseen 2000 lines of code
+### :thumbsdown: 올바르지 않은 예: 보이지 않는 2,000 라인의 코드를 우리 테스트에 연결
```javascript
-it('TestJavaScript.com is renderd correctly', () => {
+it('TestJavaScript.com 이 올바르게 렌더링 된다.', () => {
//Arrange
@@ -477,16 +478,18 @@ const receivedPage = renderer
//Assert
expect(receivedPage).toMatchSnapshot();
-//We now implicitly maintain a 2000 lines long document
-//every additional line break or comment - will break this test
+// 이제 2,000 라인의 문서를 암묵적으로 유지합니다.
+// 모든 줄바꿈 또는 주석이 테스트를 망가뜨립니다.
});
```
+
-### :clap: Doing It Right Example: Expectations are visible and focused
+### :clap: 올바른 예: expectation이 잘 보이고 집중된다.
+
```javascript
-it('When visiting TestJavaScript.com home page, a menu is displayed', () => {
+it('TestJavaScript.com 홈페이지를 방문하면 메뉴가 보인다.', () => {
//Arrange
//Act
@@ -509,7 +512,6 @@ expect(menu).toMatchInlineSnapshot(`
-
## ⚪ ️1.9 Avoid global test fixtures and seeds, add data per-test
From 1822ed387a28d6f64d09f59bf8b0f6671e5f533e Mon Sep 17 00:00:00 2001
From: Rain Byun
Date: Mon, 9 Sep 2019 09:45:12 +0900
Subject: [PATCH 063/502] Translate into Korean 1.9
---
readme.kr.md | 55 +++++++++++++++++++++++++---------------------------
1 file changed, 26 insertions(+), 29 deletions(-)
diff --git a/readme.kr.md b/readme.kr.md
index a79db949..e9de9fbd 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -99,9 +99,9 @@ JavaScript 및 Node.js에 대한 A부터 Z까지의 믿음직한 가이드입니
-**👇 주의:** 각 글에는 코드 예제가 있으며 때로는 이미지도 있습니다. 클릭하여 확장
+**👇 주의:** 각 글에는 예제 코드가 있으며 때로는 이미지도 있습니다. 클릭하여 확장
-✏ 코드 예제
+✏ 예제 코드
@@ -149,7 +149,7 @@ describe('제품 서비스', function() {
-✏ 코드 예제
+✏ 예제 코드
@@ -203,7 +203,7 @@ test('프리미엄으로 분류해야 합니다.', () => {
-✏ 코드 예제
+✏ 예제 코드
-✏ 코드 예제
+✏ 예제 코드
@@ -415,7 +415,7 @@ it("더 나은 것: 유효한 제품이 추가된다면, 성공을 얻는다.",
-✏ 코드 예제
+✏ 예제 코드
@@ -457,7 +457,7 @@ describe('Product service', () => {
-✏ 코드 예제
+✏ 예제 코드
@@ -514,51 +514,50 @@ expect(menu).toMatchInlineSnapshot(`
-## ⚪ ️1.9 Avoid global test fixtures and seeds, add data per-test
-
-:white_check_mark: **Do:** Going by the golden rule (bullet 0), each test should add and act on its own set of DB rows to prevent coupling and easily reason about the test flow. In reality, this is often violated by testers who seed the DB with data before running the tests ([also known as ‘test fixture’](https://en.wikipedia.org/wiki/Test_fixture)) for the sake of performance improvement. While performance is indeed a valid concern — it can be mitigated (see “Component testing” bullet), however, test complexity is a much painful sorrow that should govern other considerations most of the time. Practically, make each test case explicitly add the DB records it needs and act only on those records. If performance becomes a critical concern — a balanced compromise might come in the form of seeding the only suite of tests that are not mutating data (e.g. queries)
-
+## ⚪ ️ 1.9 글로벌 테스트 픽스쳐와 시드를 피하고, 테스트별로 데이터를 추가하십시오.
+:white_check_mark: **이렇게 해라:** 황금률에 따르면(섹션 0), 각 테스트는 커플링을 방지하고 테스트 흐름을 쉽게 추론하기 위해 자체 DB 데이터를 추가하고 실행해야 합니다. 실제로 성능 향상을 위해 테스트를 실행하기 전에 DB에 데이터를 준비(['테스트 픽스쳐'라고도 합니다](https://en.wikipedia.org/wiki/Test_fixture))함으로써 이를 위반하는 테스터들이 많습니다. 성능은 실제로 유효한 문제이지만 완화될 수 있습니다(2.2 컴포넌트 테스트 참고). 그러나 테스트 복잡성은 훨씬 더 고통스러운 문제이므로 대부분의 경우 다른 고려사항들보다 우선해야 합니다. 각 테스트에 필요한 DB 레코드를 명시적으로 추가하고, 해당 데이터에 대해서만 테스트를 수행하십시오. 성능이 중요한 문제가 되는 경우 - 데이터를 변경하지 않는 테스트 모음(예: 쿼리)에 대해서만 데이터를 미리 준비하는 형태로 타협할 수 있습니다.
-❌ **Otherwise:** Few tests fail, a deployment is aborted, our team is going to spend precious time now, do we have a bug? let’s investigate, oh no — it seems that two tests were mutating the same seed data
+
+❌ **그렇지 않으면:** 테스트 실패, 배포 중단으로 팀원들이 귀중한 시간을 소비할 것입니다. 버그가 있습니까? 조사해보니 '없습니다' - 두 테스트에서 동일한 테스트 데이터를 변경한 것으로 보입니다.
-✏ Code Examples
+✏ 예제 코드
-### :thumbsdown: Anti Pattern Example: tests are not independent and rely on some global hook to feed global DB data
+### :thumbsdown: 올바르지 않은 예: 테스트는 독립적이지 않으며 글로벌 훅에 의한 DB 데이터에 의존
```javascript
before(() => {
- //adding sites and admins data to our DB. Where is the data? outside. At some external json or migration framework
+ // 사이트 및 관리자 데이터를 DB에 추가. 데이터는 어디에 있습니까? 외부에. 외부 JSON 또는 마이그레이션 프레임워크에
await DB.AddSeedDataFromJson('seed.json');
});
-it("When updating site name, get successful confirmation", async () => {
- //I know that site name "portal" exists - I saw it in the seed files
+it("사이트 이름을 업데이트 할 때, 성공을 확인한다.", async () => {
+ // 사이트 이름 "portal"이 존재한다는 것을 알고있습니다. 시드파일에서 봤습니다.
const siteToUpdate = await SiteService.getSiteByName("Portal");
const updateNameResult = await SiteService.changeName(siteToUpdate, "newName");
expect(updateNameResult).to.be(true);
});
-it("When querying by site name, get the right site", async () => {
- //I know that site name "portal" exists - I saw it in the seed files
+it("사이트 이름을 쿼리할 때, 올바른 사이트 이름을 얻는다.", async () => {
+ // 사이트 이름 "portal"이 존재한다는 것을 알고있습니다. 시드파일에서 봤습니다.
const siteToCheck = await SiteService.getSiteByName("Portal");
- expect(siteToCheck.name).to.be.equal("Portal"); //Failure! The previous test change the name :[
+ expect(siteToCheck.name).to.be.equal("Portal"); // 실패! 이전 테스트에서 이름이 변경되었습니다. ㅠㅠ
});
-
```
+
-### :clap: Doing It Right Example: We can stay within the test, each test acts on its own set of data
+### :clap: 올바른 예: 우리는 테스트 내부에만 머물 수 있으며, 각 테스트는 자체 데이터 세트에서 동작합니다.
```javascript
-it("When updating site name, get successful confirmation", async () => {
- //test is adding a fresh new records and acting on the records only
+it("사이트 이름을 업데이트 할 때, 성공을 확인한다.", async () => {
+ // 테스트는 새로운 레코드를 새로 추가하고 해당 레코드에 대해서만 동작합니다.
const siteUnderTest = await SiteService.addSite({
name: "siteForUpdateTest"
});
@@ -567,13 +566,11 @@ it("When updating site name, get successful confirmation", async () => {
expect(updateNameResult).to.be(true);
});
-
```
-
-
+
## ⚪ ️ 1.10 Don’t catch errors, expect them
:white_check_mark: **Do:** When trying to assert that some input triggers an error, it might look right to use try-catch-finally and asserts that the catch clause was entered. The result is an awkward and verbose test case (example below) that hides the simple test intent and the result expectations
From 9847377306f0afb2fef4a094a04286c56ca182fa Mon Sep 17 00:00:00 2001
From: Rain Byun
Date: Wed, 11 Sep 2019 00:24:47 +0900
Subject: [PATCH 064/502] Translate into Korean 1.10 to 1.11
---
readme.kr.md | 76 +++++++++++++++++++++++-----------------------------
1 file changed, 33 insertions(+), 43 deletions(-)
diff --git a/readme.kr.md b/readme.kr.md
index e9de9fbd..fce412a0 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -572,98 +572,88 @@ it("사이트 이름을 업데이트 할 때, 성공을 확인한다.", async ()
-## ⚪ ️ 1.10 Don’t catch errors, expect them
-:white_check_mark: **Do:** When trying to assert that some input triggers an error, it might look right to use try-catch-finally and asserts that the catch clause was entered. The result is an awkward and verbose test case (example below) that hides the simple test intent and the result expectations
+## ⚪ ️ 1.10 오류를 catch 하지말고 expect 하십시오.
-A more elegant alternative is the using the one-line dedicated Chai assertion: expect(method).to.throw (or in Jest: expect(method).toThrow()). It’s absolutely mandatory to also ensure the exception contains a property that tells the error type, otherwise given just a generic error the application won’t be able to do much rather than show a disappointing message to the user
-
+:white_check_mark: **이렇게 해라:** 오류를 발생시키는 입력값을 assert 할 때, try-catch-finally를 사용하고 catch 블럭에서 assert 하는게 맞아 보일수도 있습니다. 아래 예는 테스트 의도와 결과 expectation을 숨기는 어색하고 장황한 테스트 사례입니다.
+보다 우아한 대안은 한줄짜리 Chai assertion을 사용하는 것 입니다: expect(method).to.throw (혹은 Jest: expect(method).toThrow()). 오류 유형을 알려주는 속성이 예외에 포함되어야 합니다. 그렇지 않고 일반적인 오류를 발생시키면 어플리케이션은 사용자에게 실망스러운 메시지를 표시하는 것 밖에 할 수 없습니다.
-❌ **Otherwise:** It will be challenging to infer from the test reports (e.g. CI reports) what went wrong
+
+❌ **그렇지 않으면:** 무엇이 잘못되었는지 테스트 보고서(예: CI 보고서)에서 추론하기 어려울 것입니다.
-✏ Code Examples
+✏ 예제 코드
-### :thumbsdown: Anti-pattern Example: A long test case that tries to assert the existence of error with try-catch
+### :thumbsdown: 올바르지 않은 예: try-catch로 오류가 존재한다고 assert 하는 긴 테스트 사례
```javascript
-it("When no product name, it throws error 400", async() => {
-let errorWeExceptFor = null;
-try {
- const result = await addNewProduct({name:'nest'});}
-catch (error) {
- expect(error.code).to.equal('InvalidInput');
- errorWeExceptFor = error;
-}
-expect(errorWeExceptFor).not.to.be.null;
-//if this assertion fails, the tests results/reports will only show
-//that some value is null, there won't be a word about a missing Exception
+it("제품명이 없으면 400 오류를 던진다.", async() => {
+ let errorWeExceptFor = null;
+ try {
+ const result = await addNewProduct({name:'nest'});}
+ catch (error) {
+ expect(error.code).to.equal('InvalidInput');
+ errorWeExceptFor = error;
+ }
+ expect(errorWeExceptFor).not.to.be.null;
+ // 이 assertion이 실패하면, 테스트 결과/보고서는 어떤 값이 null이라는 것만 보여줄 뿐
+ // 누락된 예외(Exception)에 대해서는 아무것도 알려주지 않습니다.
});
-
```
+
-### :clap: Doing It Right Example: A human-readable expectation that could be understood easily, maybe even by QA or technical PM
+### :clap: 올바른 예: QA나 PM이라도 쉽게 이해할 수 있고 읽기 쉬운 expectation
```javascript
-it.only("When no product name, it throws error 400", async() => {
+it.only("제품명이 없으면 400 오류를 던진다.", async() => {
 expect(addNewProduct()).to.eventually.throw(AppError).with.property('code', "InvalidInput");
});
-
```
-
-
-
-## ⚪ ️ 1.11 Tag your tests
-
-:white_check_mark: **Do:** Different tests must run on different scenarios: quick smoke, IO-less, tests should run when a developer saves or commits a file, full end-to-end tests usually run when a new pull request is submitted, etc. This can be achieved by tagging tests with keywords like #cold #api #sanity so you can grep with your testing harness and invoke the desired subset. For example, this is how you would invoke only the sanity test group with Mocha: mocha — grep ‘sanity’
-
+## ⚪ ️ 1.11 테스트에 태깅하십시오.
+:white_check_mark: **이렇게 해라:** 다른 테스트는 꼭 다른 시나리오에서 실행해야 합니다: 개발자가 파일을 저장하거나 커밋을 할 때 빠르고, IO가 많이 없는 테스트를 실행해야 합니다. 전체 end-to-end 테스트는 일반적으로 새로운 Pull Request가 제출되었을 때 실행됩니다. 이러한 경우에 #cold #api #sanity와 같은 키워드로 테스트에 태깅하면 테스트를 효율적으로 grep 할 수 있고, 원하는 하위세트를 호출할 수 있습니다. 예) Mocha를 이용해서 sanity 테스트 그룹만 실행하는 방법입니다: mocha --grep 'sanity'
-❌ **Otherwise:** Running all the tests, including tests that perform dozens of DB queries, any time a developer makes a small change can be extremely slow and keeps developers away from running tests
+
+❌ **그렇지 않으면:** 개발자가 작은 변경을 할 때마다 수십 개의 DB 쿼리를 수행하는 테스트를 포함한 모든 테스트를 실행한다면, 속도가 매우 느려져 개발자가 테스트를 수행하지 않게 만들 것입니다.
-✏ Code Examples
+✏ 예제 코드
-### :clap: Doing It Right Example: Tagging tests as ‘#cold-test’ allows the test runner to execute only fast tests (Cold===quick tests that are doing no IO and can be executed frequently even as the developer is typing)
+### :clap: 올바른 예: 테스트를 '#cold-test'로 태깅하면 테스트를 수행하는 사람이 빠른 테스트만 실행할 수 있습니다(cold === IO를 수행하지 않아 개발자가 코딩하는 중에도 자주 실행할 수 있는 빠른 테스트).
+
```javascript
-//this test is fast (no DB) and we're tagging it correspondigly
-//now the user/CI can run it frequently
-describe('Order service', function() {
- describe('Add new order #cold-test #sanity', function() {
- test('Scenario - no currency was supplied. Expectation - Use the default currency #sanity', function() {
- //code logic here
+// 이 테스트는 빠르므로(DB 없음) 그에 맞게 태깅합니다. 이제 사용자/CI가 자주 실행할 수 있습니다.
+describe('주문 서비스', function() {
+ describe('새 주문 추가 #cold-test #sanity', function() {
+ test('시나리오 - 통화가 제공되지 않음. 기대 - 기본 통화 사용 #sanity', function() {
+ // code logic here
});
});
});
-
-
```
-
-
-
## ⚪ ️1.12 Other generic good testing hygiene
From 49bdd5e52c43ee06e875ecc2874138d7a94738eb Mon Sep 17 00:00:00 2001
From: sury
Date: Thu, 12 Sep 2019 11:50:06 +0900
Subject: [PATCH 065/502] translate into Korean 2.1
---
readme.kr.md | 20 +++++++++-----------
1 file changed, 9 insertions(+), 11 deletions(-)
diff --git a/readme.kr.md b/readme.kr.md
index fce412a0..02b5d02a 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -670,32 +670,30 @@ Learn and practice [TDD principles](https://www.sm-cloud.com/book-review-test-dr
# Section 2️⃣: Backend Testing
-## ⚪ ️2.1 Enrich your testing portfolio: Look beyond unit tests and the pyramid
+## ⚪ ️2.1 당신의 테스트 포트폴리오를 풍부하게 하십시오: 단위 테스트와 피라미드를 넘어서세요.
-:white_check_mark: **Do:** The [testing pyramid](https://martinfowler.com/bliki/TestPyramid.html), though 10> years old, is a great and relevant model that suggests three testing types and influences most developers’ testing strategy. At the same time, more than a handful of shiny new testing techniques emerged and are hiding in the shadows of the testing pyramid. Given all the dramatic changes that we’ve seen in the recent 10 years (Microservices, cloud, serverless), is it even possible that one quite-old model will suit *all* types of applications? shouldn’t the testing world consider welcoming new testing techniques?
+:white_check_mark: **이렇게 해라:** 10년이 넘은 모델인 [테스트 피라미드](https://martinfowler.com/bliki/TestPyramid.html)는 세 가지 테스트 유형을 제시하고 대다수 개발자의 테스트 전략에 영향을 주는 훌륭한 모델입니다. 동시에, 몇 가지 반짝이는 새로운 테스트 기술들이 등장하였지만 모두 테스트 피라미드의 그림자 뒤로 사라졌습니다. 우리가 최근 10년간 보아 온 극적인 기술의 변화들(Microservices, cloud, serverless)을 고려할 때, 아주 오래된 모델 하나가 *모든* 어플리케이션 유형에 적합하다는 것이 가능한가요? 테스트 세계도 새로운 테스트 기술을 받아들이는 것을 고려해야 하지 않을까요?
-Don’t get me wrong, in 2019 the testing pyramid, TDD and unit tests are still a powerful technique and are probably the best match for many applications. Only like any other model, despite its usefulness, [it must be wrong sometimes](https://en.wikipedia.org/wiki/All_models_are_wrong). For example, consider an IOT application that ingests many events into a message-bus like Kafka/RabbitMQ, which then flow into some data-warehouse and are eventually queried by some analytics UI. Should we really spend 50% of our testing budget on writing unit tests for an application that is integration-centric and has almost no logic? As the diversity of application types increase (bots, crypto, Alexa-skills) greater are the chances to find scenarios where the testing pyramid is not the best match.
+오해는 하지 마세요. 2019년에도 테스트 피라미드, TDD, 단위 테스트는 여전히 강력한 기술이고 아마도 많은 어플리케이션에 가장 어울리는 기술입니다. 다른 모델과 마찬가지로, 테스트 피라미드는 유용하지만 [그것이 항상 맞는 것은 아닙니다](https://en.wikipedia.org/wiki/All_models_are_wrong). 예를 들어, 어떤 IOT 어플리케이션을 생각해 봅시다. 이 어플리케이션은 다수의 이벤트를 Kafka/RabbitMQ 같은 메세지 버스로 보내고 다시 데이터 웨어하우스로 흘려보냅니다. 그리고 이 데이터들은 어떤 분석 UI에서 조회됩니다. 우리는 정말 우리의 테스트 예산의 50%를 통합 중심적(integration-centric)이고 로직이 거의 없는 어플리케이션의 단위 테스트를 작성하는데 할애해야 할까요? 어플리케이션 유형들이 다양해질 수록(bots, crypto, Alexa-skills) 테스트 피라미드가 적합하지 않은 시나리오들을 발견할 가능성이 커집니다.
-It’s time to enrich your testing portfolio and become familiar with more testing types (the next bullets suggest few ideas), mind models like the testing pyramid but also match testing types to real-world problems that you’re facing (‘Hey, our API is broken, let’s write consumer-driven contract testing!’), diversify your tests like an investor that build a portfolio based on risk analysis — assess where problems might arise and match some prevention measures to mitigate those potential risks
+지금이 당신의 테스트 포트폴리오를 넓히고 더 많은 테스트 유형들에 익숙해질 시간입니다. (다음 항목들에서 몇 가지 아이디어들을 제안합니다.) 테스트 피라미드 같은 모델들도 염두에 둘 뿐만 아니라 당신이 직면하고 있는 현실 세계의 문제들에 적합한 테스트 유형들을 찾으세요. ("우리 API 깨졌어. Consumer-driven contract 테스트 작성하자!" 처럼요.) 위험성 분석을 기반으로 포트폴리오를 구축하는 투자자처럼 당신의 테스트를 다양화하세요 - 문제가 발생할 수 있는 부분을 가늠하고 잠재적 위험성을 줄일 수 있는 예방 방법을 찾으세요.
-A word of caution: the TDD argument in the software world takes a typical false-dichotomy face, some preach to use it everywhere, others think it’s the devil. Everyone who speaks in absolutes is wrong :]
+주의 사항 : 소프트웨어 세계에서의 TDD 논쟁은 전형적인 잘못된 이분법입니다. 어떤 사람들은 TDD를 모든 곳에 적용하라고 주장하지만, 다른 일부는 TDD를 악마라고 생각합니다. 절대적으로 한쪽만 주장하는 사람들은 모두 틀렸습니다 :]
-❌ **Otherwise:** You’re going to miss some tools with amazing ROI, some like Fuzz, lint, and mutation can provide value in 10 minutes
-
+❌ **그렇지 않으면:** 당신은 굉장한 RIO를 주는 몇 가지 툴들을 놓칠 것입니다. Fuzz, lint, mutation 테스트들은 단 10분만에 당신에게 가치를 제공할 수 있습니다.
-✏ Code Examples
+✏ 코드 예제
-### :clap: Doing It Right Example: Cindy Sridharan suggests a rich testing portfolio in her amazing post ‘Testing Microservices — the sane way’
-
+### :clap: 올바른 예: Cindy Sridharan은 그녀의 훌륭한 글 ‘Testing Microservices — the sane way’에서 풍부한 테스트 포트폴리오를 제안합니다.
-☺️Example: [YouTube: “Beyond Unit Tests: 5 Shiny Node.JS Test Types (2018)” (Yoni Goldberg)](https://www.youtube.com/watch?v=-2zP494wdUY&feature=youtu.be)
+예제: [YouTube: “Beyond Unit Tests: 5 Shiny Node.JS Test Types (2018)” (Yoni Goldberg)](https://www.youtube.com/watch?v=-2zP494wdUY&feature=youtu.be)
From fd9ce8fd91a4d14e4278c9dd4a616f828bb126d8 Mon Sep 17 00:00:00 2001
From: Rain Byun
Date: Sat, 14 Sep 2019 01:18:42 +0900
Subject: [PATCH 066/502] Translate into Korean 1.12
- Fix typo.
---
readme.kr.md | 23 +++++++++--------------
1 file changed, 9 insertions(+), 14 deletions(-)
diff --git a/readme.kr.md b/readme.kr.md
index 02b5d02a..74855db3 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -656,21 +656,21 @@ describe('주문 서비스', function() {
-## ⚪ ️1.12 Other generic good testing hygiene
-:white_check_mark: **Do:** This post is focused on testing advice that is related to, or at least can be exemplified with Node JS. This bullet, however, groups few non-Node related tips that are well-known
+## ⚪ ️ 1.12 일반적인 좋은 테스트 기법들
+
+:white_check_mark: **이렇게 해라:** 이 글은 Node.js와 관련이 있거나 최소한 Node.js로 예를 들 수 있는 테스트 조언에 중점을 두고 있습니다. 그러나 이 항목은 Node.js와 관련은 없지만 잘 알려진 팁들을 모아 놓았습니다.
-Learn and practice [TDD principles](https://www.sm-cloud.com/book-review-test-driven-development-by-example-a-tldr/) — they are extremely valuable for many but don’t get intimidated if they don’t fit your style, you’re not the only one. Consider writing the tests before the code in a [red-green-refactor style](https://blog.cleancoder.com/uncle-bob/2014/12/17/TheCyclesOfTDD.html), ensure each test checks exactly one thing, when you find a bug — before fixing write a test that will detect this bug in the future, let each test fail at least once before turning green, start a module by writing a quick and simplistic code that satsifies the test - then refactor gradually and take it to a production grade level, avoid any dependency on the environment (paths, OS, etc)
-
+[TDD 원칙](https://www.sm-cloud.com/book-review-test-driven-development-by-example-a-tldr/)을 배우고 연습하십시오 - 많은 사람들에게 매우 가치가 있지만, 자신의 스타일에 맞지 않더라도 주눅들 필요는 없습니다. 당신만 그런 것이 아닙니다. [실패-성공-리팩토링 스타일](https://blog.cleancoder.com/uncle-bob/2014/12/17/TheCyclesOfTDD.html)로 코드 작성 전에 테스트를 작성하는 것을 고려하십시오. 각 테스트가 정확히 한 가지만 확인하도록 하십시오. 버그를 발견하면 수정하기 전에 앞으로 이 버그를 발견할 테스트를 작성하십시오. 테스트가 성공하기 전에 각 테스트가 한 번 이상 실패하도록 하십시오. 테스트를 만족시키는 간단한 코드를 작성하여 빠르게 모듈을 시작하고, 점진적으로 리팩토링하여 프로덕션 수준으로 가져가십시오. 환경(경로, OS 등)에 대한 종속성을 피하십시오.
+
-❌ **Otherwise:** You‘ll miss pearls of wisdom that were collected for decades
+❌ **그렇지 않으면:** 수십 년 동안 수집 된 아주 소중한 조언을 놓치게 될 것입니다.
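+
+As a rough illustration of the red-green-refactor cycle mentioned above, here is a minimal sketch (the `addProduct` function is a hypothetical placeholder, not code from the original guide):
+
+```javascript
+// Red: write a failing test first (addProduct does not handle negative prices yet)
+test("When the price is negative, then the product is rejected", () => {
+  expect(addProduct("iPhone", -5)).toBe(false);
+});
+
+// Green: write the simplest code that satisfies the test
+function addProduct(name, price) {
+  if (price < 0) return false; // just enough logic to turn the test green
+  return true;
+}
+
+// Refactor: only now clean up and harden gradually, keeping the tests green
+```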
+# 섹션 2️⃣: 백엔드 테스트
-# Section 2️⃣: Backend Testing
-
-## ⚪ ️2.1 당신의 테스트 포트폴리오를 풍부하게 하십시오: 단위 테스트와 피라미드를 넘어서세요.
+## ⚪ ️ 2.1 당신의 테스트 포트폴리오를 풍부하게 하십시오: 단위 테스트와 피라미드를 넘어서세요.
:white_check_mark: **이렇게 해라:** 10년이 넘은 모델인 [테스트 피라미드](https://martinfowler.com/bliki/TestPyramid.html)는 세 가지 테스트 유형을 제시하고 대다수 개발자의 테스트 전략에 영향을 주는 훌륭한 모델입니다. 동시에, 몇 가지 반짝이는 새로운 테스트 기술들이 등장하였지만 모두 테스트 피라미드의 그림자 뒤로 사라졌습니다. 우리가 최근 10년간 보아 온 극적인 기술의 변화들(Microservices, cloud, serverless)을 고려할 때, 아주 오래된 모델 하나가 *모든* 어플리케이션 유형에 적합하다는 것이 가능한가요? 테스트 세계도 새로운 테스트 기술을 받아들이는 것을 고려해야 하지 않을까요?
@@ -682,8 +682,7 @@ Learn and practice [TDD principles](https://www.sm-cloud.com/book-review-test-dr
-
-❌ **그렇지 않으면:** 당신은 굉장한 RIO를 주는 몇 가지 툴들을 놓칠 것입니다. Fuzz, lint, mutation 테스트들은 단 10분만에 당신에게 가치를 제공할 수 있습니다.
+❌ **그렇지 않으면:** 당신은 굉장한 ROI를 주는 몇 가지 툴들을 놓칠 것입니다. Fuzz, lint, mutation 테스트들은 단 10분만에 당신에게 가치를 제공할 수 있습니다.
@@ -699,12 +698,8 @@ Learn and practice [TDD principles](https://www.sm-cloud.com/book-review-test-dr
-
-
-
-
## ⚪ ️2.2 Component testing might be your best affair
From 3ef90bd495584c05ad402d141ff31351b004d3a8 Mon Sep 17 00:00:00 2001
From: Aaron <42848750+aaronshivers@users.noreply.github.com>
Date: Sat, 14 Sep 2019 21:31:45 -0500
Subject: [PATCH 067/502] fixed typo in 1.4
---
readme.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/readme.md b/readme.md
index c03fdfa2..59995f31 100644
--- a/readme.md
+++ b/readme.md
@@ -272,7 +272,7 @@ it("When asking for an admin, ensure only ordered admins in results" , () => {
## ⚪ ️ 1.4 Stick to black-box testing: Test only public methods
-:white_check_mark: **Do:** Testing the internals brings huge overhead for almost nothing. If your code/API deliver the right results, should you really invest your next 3 hours in testing HOW it worked internally and then maintain these fragile tests? Whenever a public behavior is checked, the private implementation is also implicitly tested and your tests will break only if there is a certain problem (e.g. wrong output). This approach is also referred to as `behavioral testing`. On the other side, should you test the internals (white box approach) — your focus shifts from planning the component outcome to nitty-gritty details and your test might break because of minor code refactors although the results are fine - this dramatically increases the maintenance burden
+:white_check_mark: **Do:** Testing the internals brings huge overhead for almost nothing. If your code/API delivers the right results, should you really invest your next 3 hours in testing HOW it worked internally and then maintain these fragile tests? Whenever a public behavior is checked, the private implementation is also implicitly tested and your tests will break only if there is a certain problem (e.g. wrong output). This approach is also referred to as `behavioral testing`. On the other side, should you test the internals (white box approach) — your focus shifts from planning the component outcome to nitty-gritty details and your test might break because of minor code refactors although the results are fine - this dramatically increases the maintenance burden
From 04090966421af5c1faf2bb0ac8b463a0d9798073 Mon Sep 17 00:00:00 2001
From: leo lee
Date: Sun, 15 Sep 2019 17:00:37 +0900
Subject: [PATCH 068/502] - Translation into Korean(3.1, 3.2)
---
readme.kr.md | 46 +++++++++++++++++++++++-----------------------
1 file changed, 23 insertions(+), 23 deletions(-)
diff --git a/readme.kr.md b/readme.kr.md
index 74855db3..6ea44346 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -917,32 +917,32 @@ it("When updating site name, get successful confirmation", async () => {
-# Section 3️⃣: Frontend Testing
+# 섹션 3️⃣: 프론트엔드 테스트
-## ⚪ ️ 3.1. Separate UI from functionality
+## ⚪ ️ 3.1. 기능으로부터 화면을 분리하십시오
-:white_check_mark: **Do:** When focusing on testing component logic, UI details become a noise that should be extracted, so your tests can focus on pure data. Practically, extract the desired data from the markup in an abstract way that is not too coupled to the graphic implementation, assert only on pure data (vs HTML/CSS graphic details) and disable animations that slow down. You might get tempted to avoid rendering and test only the back part of the UI (e.g. services, actions, store) but this will result in fictional tests that don't resemble the reality and won't reveal cases where the right data doesn't even arrive in the UI
+:white_check_mark: **이렇게 해라:** 컴포넌트 로직을 테스트할 때, 화면의 세부사항은 걸러내야 할 노이즈가 됩니다. 그것을 제외함으로써 당신의 테스트는 순수한 데이터에 집중할 수 있습니다. 실제로, 그래픽 구현에 너무 결합되지 않는 추상적인 방법을 통해 원하는 데이터를 마크업으로부터 추출하고, 순수한 데이터만 검증하고(HTML/CSS 화면 세부사항이 아닌), 테스트를 느리게 만드는 애니메이션은 비활성화하십시오. 당신은 렌더링을 피하고 오직 화면의 뒷부분(서비스, 액션, 스토어 등)만을 테스트하려고 할 수도 있습니다. 하지만 그렇게 하면 실제와 닮지 않은 가짜 테스트가 되어, 올바른 데이터가 화면에 도달조차 하지 않는 경우를 발견하지 못할 것입니다.
-❌ **Otherwise:** The pure calculated data of your test might be ready in 10ms, but then the whole test will last 500ms (100 tests = 1 min) due to some fancy and irrelevant animation
+❌ **그렇지 않으면:** 테스트의 순수하게 계산된 데이터는 10ms 안에 준비될 수도 있지만, 화려하고 불필요한 애니메이션 때문에 전체 테스트는 500ms(100개 테스트 = 1분) 동안 지속될 것입니다.
-✏ Code Examples
+✏ 코드 예제
-### :clap: Doing It Right Example: Separating out the UI details
+### :clap: 올바른 예: 화면의 세부사항을 빼내는 것
```javascript
-test('When users-list is flagged to show only VIP, should display only VIP members', () => {
+test('오직 VIP를 보기위해 사용자목록을 표시했을때, 오직 VIP 멤버들만 보여져야 한다', () => {
// Arrange
const allUsers = [
{ id: 1, name: 'Yoni Goldberg', vip: false },
@@ -952,19 +952,19 @@ test('When users-list is flagged to show only VIP, should display only VIP membe
// Act
const { getAllByTestId } = render();
- // Assert - Extract the data from the UI first
+ // Assert - 우선 화면으로부터 데이터를 추출
const allRenderedUsers = getAllByTestId('user').map(uiElement => uiElement.textContent);
const allRealVIPUsers = allUsers.filter((user) => user.vip).map((user) => user.name);
- expect(allRenderedUsers).toEqual(allRealVIPUsers); //compare data with data, no UI here
+ expect(allRenderedUsers).toEqual(allRealVIPUsers); // 화면이 아닌 데이터를 비교
});
```
-### :thumbsdown: Anti Pattern Example: Assertion mix UI details and data
+### :thumbsdown: 잘못된 예: 화면 세부사항들과 데이터를 섞어서 검증
```javascript
-test('When flagging to show only VIP, should display only VIP members', () => {
+test('오직 VIP를 보기위해 사용자목록을 표시했을때, 오직 VIP 멤버들만 보여져야 한다', () => {
// Arrange
const allUsers = [
{id: 1, name: 'Yoni Goldberg', vip: false },
@@ -974,7 +974,7 @@ test('When flagging to show only VIP, should display only VIP members', () => {
// Act
const { getAllByTestId } = render();
- // Assert - Mix UI & data in assertion
+ // Assert - 화면과 데이터를 섞어서 검증
expect(getAllByTestId('user')).toEqual('[John Doe]');
});
@@ -988,21 +988,21 @@ test('When flagging to show only VIP, should display only VIP members', () => {
-## ⚪ ️ 3.2 Query HTML elements based on attributes that are unlikely to change
+## ⚪ ️ 3.2 잘 변하지 않는 속성에 기반해서 HTML 엘리먼트를 찾으십시오
-:white_check_mark: **Do:** Query HTML elements based on attributes that are likely to survive graphic changes unlike CSS selectors and like form labels. If the designated element doesn't have such attributes, create a dedicated test attribute like 'test-id-submit-button'. Going this route not only ensures that your functional/logic tests never break because of look & feel changes but also it becomes clear to the entire team that this element and attribute are utilized by tests and shouldn't get removed
+:white_check_mark: **이렇게 해라:** CSS 셀렉터와 달리, 폼 레이블처럼 그래픽 변경에도 살아남을 가능성이 높은 속성들을 기반으로 HTML 엘리먼트를 찾으십시오. 만약 대상 엘리먼트가 이와 같은 속성을 가지고 있지 않다면, 'test-id-submit-button'과 같이 테스트 전용 속성을 만드십시오. 이 방법은 당신의 기능/로직 테스트가 룩앤필 변경 때문에 절대 깨지지 않을 것을 보장할 뿐만 아니라, 이 엘리먼트와 속성이 테스트에 의해 사용되므로 제거되어서는 안 된다는 것을 팀 전체에게 명확하게 합니다.
-❌ **Otherwise:** You want to test the login functionality that spans many components, logic and services, everything is set up perfectly - stubs, spies, Ajax calls are isolated. All seems perfect. Then the test fails because the designer changed the div CSS class from 'thick-border' to 'thin-border'
+❌ **그렇지 않으면:** 당신은 로그인 기능을 테스트하기를 원합니다. 이 기능은 많은 컴포넌트들, 로직 그리고 서비스들에 걸쳐져 있고 모든 것은 완벽하게 준비되어 있습니다 - 스텁, 스파이, Ajax 호출은 격리되어져 있습니다. 모든것은 완벽한 것 처럼 보입니다. 그렇지만, 이 테스트는 디자이너에 의해 div 클래스 이름이 'thick-border' 에서 'thin-border'로 바뀌었기 때문에 실패합니다.
-✏ Code Examples
+✏ 코드 예제
-### :clap: Doing It Right Example: Querying an element using a dedicated attrbiute for testing
+### :clap: 올바른 예: 테스트 전용 속성을 사용해서 엘리먼트를 찾기
@@ -1017,8 +1017,8 @@ test('When flagging to show only VIP, should display only VIP members', () => {
```
```javascript
-// this example is using react-testing-library
- test('Whenever no data is passed to metric, show 0 as default', () => {
+// react-testing-library를 사용한 예제
+ test('metric에 데이터가 전달되지 않으면, 0을 기본값으로 보여준다', () => {
// Arrange
const metricValue = undefined;
@@ -1032,15 +1032,15 @@ test('When flagging to show only VIP, should display only VIP members', () => {
-### :thumbsdown: Anti-Pattern Example: Relying on CSS attributes
+### :thumbsdown: 잘못된 예: CSS 속성에 의존
```html
-{value}
+{value}
```
```javascript
-// this exammple is using enzyme
-test('Whenever no data is passed, error metric shows zero', () => {
+// enzyme을 사용한 예제
+test('데이터가 전달되지 않으면, 0을 보여준다', () => {
// ...
expect(wrapper.find("[className='d-flex-column']").text()).toBe("0");
From 4e82e7084e8a2b3a3ecd97a7ccbcef62978b2b37 Mon Sep 17 00:00:00 2001
From: sury
Date: Wed, 18 Sep 2019 18:33:16 +0900
Subject: [PATCH 069/502] Translation into Korean (2.2,2.3)
---
readme.kr.md | 26 +++++++++++++-------------
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/readme.kr.md b/readme.kr.md
index 6ea44346..47f0dde0 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -702,24 +702,23 @@ describe('주문 서비스', function() {
-## ⚪ ️2.2 Component testing might be your best affair
+## ⚪ ️2.2 컴포넌트 테스트가 최선의 방법일 수 있다.
-:white_check_mark: **Do:** Each unit test covers a tiny portion of the application and it’s expensive to cover the whole, whereas end-to-end testing easily covers a lot of ground but is flaky and slower, why not apply a balanced approach and write tests that are bigger than unit tests but smaller than end-to-end testing? Component testing is the unsung song of the testing world — they provide the best from both worlds: reasonable performance and a possibility to apply TDD patterns + realistic and great coverage.
+:white_check_mark: **이렇게 해라:** 각각의 단위 테스트는 어플리케이션의 매우 작은 부분만을 커버하고 전체를 모두 커버하기에는 비용이 많이 듭니다. 반면에, end-to-end 테스트는 간단하게 많은 부분을 커버할 수 있지만 불안정하고 더 느립니다. 그렇다면 균형 잡힌 접근법을 적용하여 단위 테스트보다는 크지만 end-to-end 테스트보다는 작은 테스트를 작성하는 것은 어떨까요? 컴포넌트 테스트는 테스트 세계에서 잘 알려지지 않은 방법입니다. - 컴포넌트 테스트는 다음의 두 가지 이점을 모두 제공합니다: 합리적인 성능과 TDD 패턴을 적용할 수 있는 가능성 + 현실적이면서 훌륭한 커버리지
-Component tests focus on the Microservice ‘unit’, they work against the API, don’t mock anything which belongs to the Microservice itself (e.g. real DB, or at least the in-memory version of that DB) but stub anything that is external like calls to other Microservices. By doing so, we test what we deploy, approach the app from outwards to inwards and gain great confidence in a reasonable amount of time.
+컴포넌트 테스트는 마이크로 서비스 '단위'에 중점을 두고 API에 대하여 동작합니다. 마이크로서비스 그 자체에 속한 것들 (예를들면, 실제 DB 또는 해당 DB의 인-메모리 버전)은 모킹(Mock)하지 않고, 다른 마이크로서비스 호출과 같은 외부적인 것은 스텁(Stub)합니다. 그렇게 함으로써 우리는 우리가 배포하는 것을 테스트하고 어플리케이션의 바깥쪽에서 안쪽으로 접근하며, 적당한 시간 안에서 큰 자신감을 얻을 수 있습니다.
-❌ **Otherwise:** You may spend long days on writing unit tests to find out that you got only 20% system coverage
-
+❌ **그렇지 않으면:** 시스템 커버리지가 20%에 불과하다는 것을 깨닫기까지 단위 테스트를 작성하는 데 오랜 시간이 걸릴 수 있습니다.
-✏ Code Examples
+✏ 코드 예제
-### :clap: Doing It Right Example: Supertest allows approaching Express API in-process (fast and cover many layers)
+### :clap: 올바른 예: Supertest를 통해 프로세스 내 Express API에 접근할 수 있습니다. (빠르고 다양한 계층을 커버함)
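+
+The original bullet shows this example as an image; a minimal sketch of the idea (the module path, route and payload fields are illustrative assumptions) could look like:
+
+```javascript
+// In-process API testing with Supertest: no network, yet many layers are covered
+const request = require("supertest");
+const app = require("../app"); // the Express app, exported without calling listen()
+
+test("When adding a valid order, then get back an HTTP 200 approval", async () => {
+  const response = await request(app)
+    .post("/order")
+    .send({ mode: "approved", externalIdentifier: "id-123" });
+
+  // Assert on the observable contract, not on internals
+  expect(response.status).toBe(200);
+  expect(response.body.mode).toBe("approved");
+});
+```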
@@ -730,22 +729,23 @@ Component tests focus on the Microservice ‘unit’, they work against the API,
-## ⚪ ️2.3 Ensure new releases don’t break the API using
+## ⚪ ️2.3 신규 릴리즈가 API를 깨뜨리지 않도록 하십시오.
-:white_check_mark: **Do:** So your Microservice has multiple clients, and you run multiple versions of the service for compatibility reasons (keeping everyone happy). Then you change some field and ‘boom!’, some important client who relies on this field is angry. This is the Catch-22 of the integration world: It’s very challenging for the server side to consider all the multiple client expectations — On the other hand, the clients can’t perform any testing because the server controls the release dates. [Consumer-driven contracts and the framework PACT](https://docs.pact.io/) were born to formalize this process with a very disruptive approach — not the server defines the test plan of itself rather the client defines the tests of the… server! PACT can record the client expectation and put in a shared location, “broker”, so the server can pull the expectations and run on every build using PACT library to detect broken contracts — a client expectation that is not met. By doing so, all the server-client API mismatches are caught early during build/CI and might save you a great deal of frustration
-
+:white_check_mark: **이렇게 해라:** 당신의 마이크로서비스는 다수의 클라이언트를 가지고 있고 호환성의 이유로 여러 버전의 서비스를 운영합니다 (모든 사람을 만족시키기 위해서). 그런 상황에서 당신이 일부 필드를 변경하면 이 필드를 믿고 사용하던 일부 중요한 클라이언트는 화가 날 것입니다. 이것은 통합(integration) 세계에서 해결하기 어려운 진퇴양난에 놓인 문제입니다: 서버 사이드가 여러 클라이언트들의 모든 기댓값을 고려하는 것은 매우 어려운 일입니다. - 반면에, 서버가 릴리즈 날짜를 결정하기 때문에 클라이언트는 어떠한 테스트도 수행할 수 없습니다.
+[소비자 주도 계약 테스트(Consumer-driven contracts)와 PACT 프레임워크](https://docs.pact.io/)는 매우 파괴적인 방법으로 이러한 프로세스를 표준화하기 위해 나타났습니다. - 서버가 서버의 테스트 계획을 결정하지 않고, 클라이언트가 서버의 테스트를 결정합니다! PACT는 클라이언트의 기댓값을 기록하여 "브로커"라는 공유된 위치에 올려둘 수 있습니다. 그러면 서버는 그 기댓값을 당겨 받을 수 있고 빌드할 때마다 PACT 라이브러리를 사용하여 깨진 계약(contract - 충족되지 않은 클라이언트의 기댓값)을 감지할 수 있습니다. 이렇게 함으로써, 모든 서버-클라이언트 API 간 일치하지 않은 것들을 빌드/CI 환경에서 조기에 잡을 수 있고 당신의 큰 절망감을 줄여줄 수 있을 것입니다.
+
-❌ **Otherwise:** The alternatives are exhausting manual testing or deployment fear
+❌ **그렇지 않으면:** 대안은 지치는 수동 테스트나 배포에 대한 두려움을 안고 가는 것뿐입니다.
-✏ Code Examples
+✏ 코드 예제
-### :clap: Doing It Right Example:
+### :clap: 올바른 예:
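+
+The original bullet shows the example as an image; a consumer-side sketch with pact-js could look like this (service names, port and endpoint are illustrative assumptions):
+
+```javascript
+// Consumer-driven contract sketch using @pact-foundation/pact
+const { Pact } = require("@pact-foundation/pact");
+const path = require("path");
+const fetch = require("node-fetch");
+const { expect } = require("chai");
+
+const provider = new Pact({
+  consumer: "ProductsCatalogUI",
+  provider: "ProductsAPI",
+  port: 1234,
+  dir: path.resolve(process.cwd(), "pacts"), // generated contracts, ready to publish to a broker
+});
+
+describe("Products API contract", () => {
+  before(() => provider.setup());
+  after(() => provider.finalize());
+
+  it("returns the product when it exists", async () => {
+    await provider.addInteraction({
+      state: "a product with id 1 exists",
+      uponReceiving: "a request for product 1",
+      withRequest: { method: "GET", path: "/products/1" },
+      willRespondWith: {
+        status: 200,
+        headers: { "Content-Type": "application/json" },
+        body: { id: 1, name: "iPhone" },
+      },
+    });
+
+    const response = await fetch("http://localhost:1234/products/1");
+    const product = await response.json();
+    expect(product.name).to.equal("iPhone");
+
+    await provider.verify(); // fails if the recorded expectation was not met
+  });
+});
+```
+The provider side can then pull this contract from the broker on every build and verify it, catching server-client API mismatches early in CI.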
From 4c0528389eb6742ca13d44301d664f6623487751 Mon Sep 17 00:00:00 2001
From: Iago Cavalcante
Date: Wed, 18 Sep 2019 14:29:09 -0300
Subject: [PATCH 070/502] begin translation in section 0 and section 1
---
readme-pt-br.md | 76 ++++++++++++++++++++++++-------------------------
1 file changed, 38 insertions(+), 38 deletions(-)
diff --git a/readme-pt-br.md b/readme-pt-br.md
index ca49910c..0b6c8bc2 100644
--- a/readme-pt-br.md
+++ b/readme-pt-br.md
@@ -35,81 +35,81 @@ Comece entendendo as práticas de teste onipresentes que são a base para qualqu
Um único conselho que inspira todos os outros (1 marcador especial)
-#### [`Section 1: The Test Anatomy`](#section-1-the-test-anatomy-1)
+#### [`Seção 1: A Anatomia do Teste`](#section-1-the-test-anatomy-1)
-The foundation - structuring clean tests (12 bullets)
+A fundação - estruturando testes limpos (12 marcadores)
-#### [`Section 2: Backend`](#section-2️⃣-backend-testing)
+#### [`Seção 2: Backend`](#section-2️⃣-backend-testing)
-Writing backend and Microservices tests efficiently (8 bullets)
+Escrevendo testes de back-end e microsserviços com eficiência (8 marcadores)
-#### [`Section 3: Frontend`](#section-3️⃣-frontend-testing)
+#### [`Seção 3: Frontend`](#section-3️⃣-frontend-testing)
-Writing tests for web UI including component and E2E tests (11 bullets)
+Escrevendo testes para interface do usuário da web, incluindo testes de componentes e E2E (11 marcadores)
-#### [`Section 4: Measuring Tests Effectiveness`](#section-4️⃣-measuring-test-effectiveness)
+#### [`Seção 4: Medindo a Eficácia dos Testes`](#section-4️⃣-measuring-test-effectiveness)
-Watching the watchman - measuring test quality (4 bullets)
+Observando o vigia - medindo a qualidade do teste (4 marcadores)
-#### [`Section 5: Continuous Integration`](#section-5️⃣-ci-and-other-quality-measures)
+#### [`Seção 5: Integração Contínua`](#section-5️⃣-ci-and-other-quality-measures)
-Guidelines for CI in the JS world (9 bullets)
+Diretrizes para CI no mundo JS (9 marcadores)
-# Section 0️⃣: The Golden Rule
+# Seção 0️⃣: A Regra de Ouro
-## ⚪️ 0. The Golden Rule: Design for lean testing
+## ⚪️ 0. A Regra de Ouro: Design para testes enxutos
-:white_check_mark: **Do:**
-Testing code is not like production-code - design it to be dead-simple, short, abstraction-free, flat, delightful to work with, lean. One should look at a test and get the intent instantly.
+:white_check_mark: **Faça:**
+O código de teste não é como o código de produção - projete-o para ser simples, curto, sem abstrações, plano, agradável de se trabalhar, enxuto. Deve-se olhar para um teste e obter a intenção instantaneamente.
-Our minds are full with the main production code, we don't have 'headspace' for additional complexity. Should we try to squeeze yet another challenging code into our poor brain it will slow the team down which works against the reason we do testing. Practically this is where many teams just abandon testing.
-
-The tests are an opportunity for something else - a friendly and smiley assistant, one that it's delightful to work with and delivers great value for such a small investment. Science tells we have two brain systems: system 1 which is used for effortless activities like driving a car on an empty road and system 2 which is meant for complex and conscious operations like solving a math equation. Design your test for system 1, when looking at test code it should *feel* as easy as modifying an HTML document and not like solving 2X(17 × 24).
+Nossas mentes estão cheias com o código principal de produção, não temos 'espaço de sobra' para complexidade adicional. Se tentarmos espremer outro código desafiador em nosso cérebro fraco, a equipe ficará mais lenta, o que vai de encontro com a razão pela qual fazemos os testes. Praticamente é aqui que muitas equipes abandonam os testes.
+
+Os testes são uma oportunidade para outra coisa - um assistente amigável e sorridente, que é agradável de trabalhar e oferece grande valor para um investimento tão pequeno. A ciência diz que temos dois sistemas cerebrais: o sistema 1, usado para atividades sem esforço, como dirigir um carro em uma estrada vazia, e o sistema 2, destinado a operações complexas e conscientes, como resolver uma equação matemática. Projete seu teste para o sistema 1: ao analisar o código de teste, ele deve *parecer* tão fácil quanto modificar um documento HTML, e não como resolver uma equação do tipo 2X(17 × 24).
-This can be achieved by selectively cherry-picking techniques, tools and test targets that are cost-effective and provide great ROI. Test only as much as needed, strive to keep it nimble, sometimes it's even worth dropping some tests and trade reliability for agility and simplicity.
+Isso pode ser alcançado selecionando a dedo técnicas, ferramentas e alvos de teste que são econômicos e proporcionam um ótimo ROI. Teste apenas o necessário, esforce-se para mantê-lo ágil; às vezes vale até a pena abandonar alguns testes e trocar confiabilidade por agilidade e simplicidade.

-Most of the advice below are derivatives of this principle.
+A maioria dos conselhos abaixo são derivados desse princípio.
-### Ready to start?
+### Pronto para começar?
-# Section 1: The Test Anatomy
+# Seção 1: A Anatomia do Teste
-## ⚪ ️ 1.1 Include 3 parts in each test name
+## ⚪ ️ 1.1 Inclua 3 partes em cada nome de teste
-:white_check_mark: **Do:** A test report should tell whether the current application revision satisfies the requirements for the people who are not necessarily familiar with the code: the tester, the DevOps engineer who is deploying and the future you two years from now. This can be achieved best if the tests speak at the requirements level and include 3 parts:
+:white_check_mark: **Faça:** Um relatório de teste deve informar se a revisão atual do aplicativo atende aos requisitos para as pessoas que não estão necessariamente familiarizadas com o código: o testador, o engenheiro DevOps que está implantando e você daqui a dois anos. Isso pode ser melhor alcançado se os testes falarem no nível de requisitos e incluírem 3 partes:
-(1) What is being tested? For example, the ProductsService.addNewProduct method
+(1) O que está sendo testado? Por exemplo, o método ProductsService.addNewProduct
-(2) Under what circumstances and scenario? For example, no price is passed to the method
+(2) Sob que circunstâncias e cenário? Por exemplo, nenhum preço é passado para o método
-(3) What is the expected result? For example, the new product is not approved
+(3) Qual é o resultado esperado? Por exemplo, o novo produto não é aprovado
-❌ **Otherwise:** A deployment just failed, a test named “Add product” failed. Does this tell you what exactly is malfunctioning?
+❌ **De outra forma:** Uma implantação acabou de falhar, um teste chamado "Adicionar produto" falhou. Isso diz o que exatamente está com defeito?
-**👇 Note:** Each bullet has code examples and sometime also an image illustration. Click to expand
-✏ Code Examples
+**👇 Nota:** Cada marcador possui exemplos de código e, às vezes, também uma ilustração. Clique para expandir
+✏ Códigos de Exemplo
-### :clap: Doing It Right Example: A test name that constitutes 3 parts
+### :clap: Exemplo correto: um nome de teste que constitui 3 partes
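The example code itself is elided by the surrounding hunk; a minimal sketch of what a 3-part test name looks like in practice (assuming Mocha/Chai and a hypothetical ProductService):

```javascript
// 1. unit under test
describe('Products Service', function() {
  describe('Add new product', function() {
    // 2. scenario and 3. expectation
    it('When no price is specified, then the product status is pending approval', () => {
      const newProduct = new ProductService().add({ name: 'Hoodie' }); // no price on purpose
      expect(newProduct.status).to.equal('pendingApproval');
    });
  });
});
```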

@@ -129,32 +129,32 @@ describe('Products Service', function() {
```
-### :clap: Doing It Right Example: A test name that constitutes 3 parts
+### :clap: Exemplo correto: um nome de teste que constitui 3 partes

-## ⚪ ️ 1.2 Structure tests by the AAA pattern
+## ⚪ ️ 1.2 Estruture os testes pelo padrão AAA
-:white_check_mark: **Do:** Structure your tests with 3 well-separated sections Arrange, Act & Assert (AAA). Following this structure guarantees that the reader spends no brain CPU on understanding the test plan:
+:white_check_mark: **Faça:** Estruture seus testes com 3 seções bem separadas: Arrange (Organizar), Act (Agir) e Assert (Afirmar), o padrão AAA. Seguir essa estrutura garante que o leitor não gaste CPU do cérebro na compreensão do plano de teste:
-1st A - Arrange: All the setup code to bring the system to the scenario the test aims to simulate. This might include instantiating the unit under test constructor, adding DB records, mocking/stubbing on objects and any other preparation code
+1º A - Arrange (Organizar): todo o código de configuração para levar o sistema ao cenário que o teste pretende simular. Isso pode incluir instanciar o construtor da unidade sob teste, adicionar registros de banco de dados, mockar/stubbar objetos e qualquer outro código de preparação
-2nd A - Act: Execute the unit under test. Usually 1 line of code
+2º A - Act (Agir): Execute a unidade sob teste. Geralmente 1 linha de código
-3rd A - Assert: Ensure that the received value satisfies the expectation. Usually 1 line of code
+3º A - Assert (Afirmar): Garanta que o valor recebido satisfaça a expectativa. Geralmente 1 linha de código
-❌ **Otherwise:** Not only you spend long daily hours on understanding the main code, now also what should have been the simple part of the day (testing) stretches your brain
+❌ **De outra forma:** Você não gasta apenas longas horas diárias para entender o código principal; agora também o que deveria ser a parte simples do dia (testar) força seu cérebro
-✏ Code Examples
+✏ Códigos de Exemplo
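The patch stops right after this line; a minimal AAA sketch (assuming Jest with Sinon and hypothetical dataAccess/customerClassifier modules) showing the three well-separated sections:

```javascript
const sinon = require('sinon');
const dataAccess = require('./data-access'); // hypothetical module
const customerClassifier = require('./customer-classifier'); // hypothetical module

describe('Customer classifier', () => {
  test('When customer spent more than 500$, should be classified as premium', () => {
    // Arrange: bring the system to the scenario under test
    const customerToClassify = { spent: 505, joined: new Date(), id: 1 };
    sinon.stub(dataAccess, 'getCustomer').returns({ id: 1, classification: 'regular' });

    // Act: execute the unit under test, usually one line
    const receivedClassification = customerClassifier.classifyCustomer(customerToClassify);

    // Assert: check the expectation, usually one line
    expect(receivedClassification).toMatch('premium');
  });
});
```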
From 4bc7fed98cdbb0fd10f63d9718f2ad50de114cdc Mon Sep 17 00:00:00 2001
From: sury
Date: Thu, 19 Sep 2019 13:25:51 +0900
Subject: [PATCH 071/502] Translation into Korean(2.4,2.5)
---
readme.kr.md | 26 +++++++++++++-------------
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/readme.kr.md b/readme.kr.md
index 47f0dde0..374e163b 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -760,32 +760,32 @@ describe('주문 서비스', function() {
-## ⚪ ️ 2.4 Test your middlewares in isolation
+## ⚪ ️ 2.4 당신의 미들웨어를 독립적으로 테스트하십시오.
-:white_check_mark: **Do:** Many avoid Middleware testing because they represent a small portion of the system and require a live Express server. Both reasons are wrong — Middlewares are small but affect all or most of the requests and can be tested easily as pure functions that get {req,res} JS objects. To test a middleware function one should just invoke it and spy ([using Sinon for example](https://www.npmjs.com/package/sinon)) on the interaction with the {req,res} objects to ensure the function performed the right action. The library [node-mock-http](https://www.npmjs.com/package/node-mocks-http) takes it even further and factors the {req,res} objects along with spying on their behavior. For example, it can assert whether the http status that was set on the res object matches the expectation (See example below)
+:white_check_mark: **이렇게 해라:** 많은 사람들은 미들웨어(Middleware) 테스트를 피합니다. 왜냐하면 미들웨어 테스트는 시스템의 작은 부분일 뿐이고 라이브 Express 서버가 필요하기 때문입니다. 하지만 두 가지 이유 모두 틀렸습니다. - 미들웨어는 작지만 모든 요청 또는 대부분의 요청에 영향을 미치고, {req,res} JS 객체를 받는 순수한 함수로 쉽게 테스트할 수 있기 때문입니다. 미들웨어 함수를 테스트하기 위해서는 단지 함수를 호출하고, 함수가 올바르게 동작했는지 확인하기 위해 {req,res} 객체와의 인터랙션을 스파이(spy)([예를 들어 Sinon을 사용](https://www.npmjs.com/package/sinon))하면 됩니다. 라이브러리 [node-mocks-http](https://www.npmjs.com/package/node-mocks-http)는 더 나아가서 {req,res} 객체를 만들어 주고 그 행위에 대한 스파이까지 지원합니다. 예를 들어, response 객체에 설정된 http 상태가 기대했던 값과 일치하는지 확인(assert)할 수 있습니다. (아래 예제를 보세요)
-❌ **Otherwise:** A bug in Express middleware === a bug in all or most requests
+❌ **그렇지 않으면:** Express 미들웨어에서의 버그 === 모든 요청 또는 대부분의 요청에서의 버그
-✏ Code Examples
+✏ 코드 예제
-### :clap:Doing It Right Example: Testing middleware in isolation without issuing network calls and waking-up the entire Express machine
+### :clap:올바른 예: 네트워크 호출 없이 전체 Express 시스템도 깨우지 않으면서 미들웨어를 독립적으로 테스트

```javascript
-//the middleware we want to test
+//테스트하고 싶은 미들웨어
const unitUnderTest = require('./middleware')
const httpMocks = require('node-mocks-http');
-//Jest syntax, equivelant to describe() & it() in Mocha
-test('A request without authentication header, should return http status 403', () => {
+//Jest 문법으로 Mocha의 describe() & it()과 동일
+test('헤더에 인증정보가 없는 요청은, http status 403을 리턴해야한다.', () => {
const request = httpMocks.createRequest({
method: 'GET',
url: '/user/42',
@@ -807,24 +807,24 @@ test('A request without authentication header, should return http status 403', (
-## ⚪ ️2.5 Measure and refactor using static analysis tools
-:white_check_mark: **Do:** Using static analysis tools helps by giving objective ways to improve code quality and keep your code maintainable. You can add static analysis tools to your CI build to abort when it finds code smells. Its main selling points over plain linting are the ability to inspect quality in the context of multiple files (e.g. detect duplications), perform advanced analysis (e.g. code complexity) and follow the history and progress of code issues. Two examples of tools you can use are [Sonarqube](https://www.sonarqube.org/) (2,600+ [stars](https://github.com/SonarSource/sonarqube)) and [Code Climate](https://codeclimate.com/) (1,500+ [stars](https://github.com/codeclimate/codeclimate))
+## ⚪ ️2.5 정적 분석 도구를 사용하여 측정하고 리팩토링 하십시오.
+:white_check_mark: **이렇게 해라:** 정적 분석 도구를 사용하면 코드 품질을 개선하고 코드를 유지보수 가능하게 유지하는 객관적인 방법을 얻을 수 있습니다. 정적 분석 도구를 당신의 CI 빌드에 추가하여 코드 냄새(code smell)가 발견되면 빌드가 중단되도록 할 수 있습니다. 정적 분석 도구가 일반적인 린트(lint) 도구보다 더 좋은 점은 여러 파일들의 컨텍스트 안에서 품질을 검사하고(예: 중복 탐지), 고급 분석(예: 코드 복잡도)을 할 수 있으며 코드 이슈에 대한 히스토리와 진행 상황을 추적할 수 있다는 것입니다. 사용할 수 있는 정적 분석 도구 두 가지는 [Sonarqube](https://www.sonarqube.org/) (2,600+ [stars](https://github.com/SonarSource/sonarqube))와 [Code Climate](https://codeclimate.com/) (1,500+ [stars](https://github.com/codeclimate/codeclimate))입니다.
Credit: [Keith Holliday](https://github.com/TheHollidayInn)
-❌ **Otherwise:** With poor code quality, bugs and performance will always be an issue that no shiny new library or state of the art features can fix
+❌ **그렇지 않으면:** 코드 품질이 좋지 않으면 버그와 성능은 빛나는 새 라이브러리나 최신 기능으로 해결할 수 없는 문제가 될 것입니다.
-✏ Code Examples
+✏ 코드 예제
-### :clap: Doing It Right Example: CodeClimate, a commercial tool that can identify complex methods:
+### :clap: 올바른 예: 복잡도가 높은 함수를 찾아내는 상용 도구인 CodeClimate:
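The screenshot that followed this heading is not reproduced in this extract; as a complementary sketch, this is roughly how SonarQube analysis can be wired into a build step (assuming the `sonarqube-scanner` npm package and a server running at localhost:9000; the project key is hypothetical):

```javascript
// analyze.js - run with `node analyze.js`, e.g. as a CI step
const scanner = require('sonarqube-scanner');

scanner(
  {
    serverUrl: 'http://localhost:9000',
    options: {
      'sonar.projectKey': 'my-service', // hypothetical project key
      'sonar.sources': 'src',
      'sonar.tests': 'test',
    },
  },
  () => process.exit(), // callback invoked when the analysis is done
);
```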

From 754094868211ff1ec1a970b893e4aed1dc7532e1 Mon Sep 17 00:00:00 2001
From: Rain Byun
Date: Fri, 6 Sep 2019 00:30:37 +0900
Subject: [PATCH 072/502] Translate into Korean 1.5
- Fix typo.
---
readme.kr.md | 39 ++++++++++++++++++++-------------------
1 file changed, 20 insertions(+), 19 deletions(-)
diff --git a/readme.kr.md b/readme.kr.md
index 6577d1e0..be8617d7 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -69,7 +69,7 @@ JavaScript 및 Node.js에 대한 A부터 Z까지의 믿음직한 가이드입니
테스트는 친절하고 웃는 동료와 함께 일하는 것이 즐거울 수 있는 기회이고, 적은 투자로 큰 가치를 제공하는 것입니다. 과학은 우리에게 두 개의 뇌 시스템이 있다고 말합니다. 빈 도로에서 자동차를 운전하는 등의 간편한 활동에 사용되는 시스템 1, 그리고 수학 방정식을 푸는 것과 같이 복잡하고 의식적인 연산을 위한 시스템 2. 테스트 코드를 볼 때 수학 문제를 푸는 것 같은게 아닌, HTML 문서를 수정하는 것만 큼 쉬워야하는 시스템 1에 맞게 테스트를 설계하십시오.
-선택적인 체리픽 기술, 툴 그리고 비용-효율적이고 뛰어난 ROI를 제공하는 테스트 대상 선정으로 목적을 이러한 달성할 수 있습니다. 필요한 만큼의 테스트, 융통성 있게 유자하려는 노력, 때로는 애자일함과 단순성을 위해 일부 테스트와 신뢰성을 포기하는 것도 가치가 있습니다.
+선택적인 체리픽 기술, 툴 그리고 비용-효율적이고 뛰어난 ROI를 제공하는 테스트 대상 선정으로 이러한 목적을 달성할 수 있습니다. 필요한 만큼의 테스트, 융통성 있게 유지하려는 노력, 때로는 애자일함과 단순성을 위해 일부 테스트와 신뢰성을 포기하는 것도 가치가 있습니다.

@@ -295,54 +295,55 @@ it("화이트박스 테스트: 내부 method가 VAT 0을 받으면 0을 반환
-## ⚪ ️ ️1.5 Choose the right test doubles: Avoid mocks in favor of stubs and spies
+## ⚪ ️ 1.5 올바른 테스트 더블 선택: Mock을 피하고 Stub과 Spy를 사용하십시오.
-:white_check_mark: **Do:** Test doubles are a necessary evil because they are coupled to the application internals, yet some provide an immense value ([Read here a reminder about test doubles: mocks vs stubs vs spies](https://martinfowler.com/articles/mocksArentStubs.html)).
+:white_check_mark: **이렇게 해라:** 테스트 더블은 어플리케이션 내부에 연결되어 있기 때문에 필요악이지만, 일부는 엄청난 가치를 제공합니다([테스트 더블에 대한 복습: mocks vs stubs vs spies](https://martinfowler.com/articles/mocksArentStubs.html)를 읽어보세요).
-Before using test doubles, ask a very simple question: Do I use it to test functionality that appears, or could appear, in the requirements document? If no, it’s a smell of white-box testing.
+테스트 더블을 사용하기 전에 간단한 질문: 요구사항 문서에 있거나 있을 수 있는 기능을 테스트하는 데 테스트 더블을 사용합니까? 만약 아니라면 화이트박스 테스트 낌새가 보입니다.
-For example, if you want to test what your app behaves reasonably when the payment service is down, you might stub the payment service and trigger some ‘No Response’ return to ensure that the unit under test returns the right value. This checks our application behavior/response/outcome under certain scenarios. You might also use a spy to assert that an email was sent when that service is down — this is again a behavioral check which is likely to appear in a requirements doc (“Send an email if payment couldn’t be saved”). On the flip side, if you mock the Payment service and ensure that it was called with the right JavaScript types — then your test is focused on internal things that got nothing with the application functionality and are likely to change frequently
+예를 들어, 결제 서비스가 중단되었을 때 앱이 적절하게 작동하는 것을 테스트하려는 경우, 테스트 중인 단위가 올바른 값을 반환하는지 확인하기 위해 결제 서비스를 stub하고 '응답 없음' 반환을 트리거할 수 있습니다.
+이것은 특정 시나리오에서 애플리케이션의 동작/응답/결과를 확인합니다. 그리고 spy를 사용하여 해당 서비스가 중단되었을 때 메일이 보내지는지를 assert 할 수 있습니다. 이것 역시 요구사항 문서에 있을 수 있는 행동에 대한 점검입니다("결제가 저장되지 않으면 메일을 보낸다"). 반대로, 결제 서비스를 mock 하고 올바른 JavaScript 타입으로 호출되었는지를 확인한다면 - 당신의 테스트는 애플리케이션 기능과는 무관하고 자주 변경될 수 있는 내부 구현에 초점을 둔 것입니다.
-
-❌ **Otherwise:** Any refactoring of code mandates searching for all the mocks in the code and updating accordingly. Tests become a burden rather than a helpful friend
+❌ **그렇지 않으면:** 코드를 리팩토링할 때마다 모든 mock을 찾아서 수정해야 합니다. 테스트가 도움이 아닌 부담이 됩니다.
-✏ Code Examples
+✏ 코드 예제
-### :thumbsdown: Anti-pattern example: Mocks focus on the internals
+### :thumbsdown: 올바르지 않은 예: 내부에 초점을 둔 mock
+

+
```javascript
-it("When a valid product is about to be deleted, ensure data access DAL was called once, with the right product and right config", async () => {
- //Assume we already added a product
- const dataAccessMock = sinon.mock(DAL);
- //hmmm BAD: testing the internals is actually our main goal here, not just a side-effect
+it("유효한 제품을 삭제하려고 할 때, 올바른 제품과 올바른 구성 정보로 데이터 액세스 DAL을 한 번 호출했는지 확인한다", async () => {
+ // 이미 제품을 추가했다고 가정
+ const dataAccessMock = sinon.mock(DAL);
+ // 좋지 않음: 여기서는 내부를 테스트하는 것이 단순한 side-effect가 아니라 사실상 주요 목적입니다.
dataAccessMock.expects("deleteProduct").once().withArgs(DBConfig, theProductWeJustAdded, true, false);
new ProductService().deletePrice(theProductWeJustAdded);
dataAccessMock.verify();
});
```
+
-### :clap:Doing It Right Example: spies are focused on testing the requirements but as a side-effect are unavoidably touching to the internals
+### :clap:올바른 예: spy는 요구사항을 테스트하는 데 초점을 두고 있지만, 내부를 건드리는 side-effect는 피할 수 없습니다.
```javascript
-it("When a valid product is about to be deleted, ensure an email is sent", async () => {
- //Assume we already added here a product
+it("유효한 제품을 삭제하려고 할 때, 메일을 보낸다", async () => {
+ // 이미 제품을 추가했다고 가정
const spy = sinon.spy(Emailer.prototype, "sendEmail");
new ProductService().deletePrice(theProductWeJustAdded);
- //hmmm OK: we deal with internals? Yes, but as a side effect of testing the requirements (sending an email)
+ // 좋음: 우리는 내부를 다루는가? 그렇다, 그러나 요구사항(이메일을 보낸다)에 대한 테스트의 side-effect이다.
});
```
-
-
## ⚪ ️1.6 Don’t “foo”, use realistic input data
From 4d9363138b7178402f2cec94321e7da62914b4f6 Mon Sep 17 00:00:00 2001
From: devori
Date: Fri, 6 Sep 2019 15:20:35 +0900
Subject: [PATCH 073/502] Translation-ko_KR(1.6)
---
readme.kr.md | 31 +++++++++++++++----------------
1 file changed, 15 insertions(+), 16 deletions(-)
diff --git a/readme.kr.md b/readme.kr.md
index be8617d7..3c61d2cd 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -346,22 +346,22 @@ it("유효한 제품을 삭제하려고 할 때, 메일을 보낸다", async ()
-## ⚪ ️1.6 Don’t “foo”, use realistic input data
+## ⚪ ️1.6 의미없는 인풋 데이터를 사용하지 말고, 실제와 같은 인풋 데이터를 사용해라
-:white_check_mark: **Do:** Often production bugs are revealed under some very specific and surprising input — the more realistic the test input is, the greater the chances are to catch bugs early. Use dedicated libraries like [Faker](https://www.npmjs.com/package/faker) to generate pseudo-real data that resembles the variety and form of production data. For example, such libraries can generate realistic phone numbers, usernames, credit card, company names, and even ‘lorem ipsum’ text. You may also create some tests (on top of unit tests, not instead) that randomize fakers data to stretch your unit under test or even import real data from your production environment. Want to take it to the next level? see next bullet (property-based testing).
+:white_check_mark: **이렇게 해라:** 흔히 제품의 버그들은 매우 특수한 인풋 데이터를 통해 나타납니다 - 테스트 인풋이 현실적일수록 버그를 조기에 발견할 가능성이 높아집니다. 실제 데이터와 다양성 및 형태가 유사한 데이터를 생성해 주는 [Faker](https://www.npmjs.com/package/faker) 같은 전용 라이브러리들을 사용하십시오. 이런 라이브러리들은 실제 같은 전화번호, 사용자 이름, 신용카드, 회사명, 심지어 'lorem ipsum' 같은 문자 등을 생성할 수 있습니다. 당신은 (단위 테스트를 대체하는 것이 아니라 그 위에) faker 데이터를 무작위화하여 테스트 대상 단위를 확장하는 테스트를 만들거나, 심지어 실제 운영 환경에서 실제 데이터를 임포트할 수도 있습니다. 다음 단계로 나아가고 싶으십니까? 그렇다면 다음 항목을 보십시오 (property-based testing).
-❌ **Otherwise:** All your development testing will falsely seem green when you use synthetic inputs like “Foo” but then production might turn red when a hacker passes-in a nasty string like “@3e2ddsf . ##’ 1 fdsfds . fds432 AAAA”
+❌ **그렇지 않다면:** "Foo"와 같은 인위적인 인풋을 사용하면 당신의 모든 개발 테스트가 통과한 것처럼 보이지만, 실제 환경에서는 해커가 “@3e2ddsf . ##’ 1 fdsfds . fds432 AAAA” 같은 지저분한 문자열을 전달해 실패할 수도 있습니다.
-✏ Code Examples
+✏ 코드 예제
-### :thumbsdown: Anti-Pattern Example: A test suite that passes due to non-realistic data
+### :thumbsdown: 올바르지 않은 예: 현실적이지 않은 데이터 때문에 통과하는 테스트

@@ -369,34 +369,33 @@ it("유효한 제품을 삭제하려고 할 때, 메일을 보낸다", async ()
```javascript
const addProduct = (name, price) =>{
- const productNameRegexNoSpace = /^\S*$/;//no white-space allowd
+ const productNameRegexNoSpace = /^\S*$/;// 공백은 허용되지 않음
if(!productNameRegexNoSpace.test(name))
- return false;//this path never reached due to dull input
+ return false;//무의미한 인풋 때문에 이 경로에는 절대 도달하지 않는다
//some logic here
return true;
};
-test("Wrong: When adding new product with valid properties, get successful confirmation", async () => {
- //The string "Foo" which is used in all tests never triggers a false result
+test("잘못된 예제: 유효한 속성과 함께 제품을 추가한다면, 성공을 얻는다.", async () => {
+ //모든 테스트에서 false 가 리턴되지 않는 "Foo" 인풋을 사용
const addProductResult = addProduct("Foo", 5);
expect(addProductResult).toBe(true);
- //Positive-false: the operation succeeded because we never tried with long
- //product name including spaces
+ //거짓된 성공: 공백을 포함하는 문자열을 사용하지 않았기 때문에 테스트는 성공한다.
});
```
-### :clap:Doing It Right Example: Randomizing realistic input
+### :clap:올바른 예: 무작위한 현실적인 인풋Randomizing realistic input
```javascript
-it("Better: When adding new valid product, get successful confirmation", async () => {
+it("더 나은 것: 유효한 제품이 추가된다면, 성공을 얻는다.", async () => {
const addProductResult = addProduct(faker.commerce.productName(), faker.random.number());
- //Generated random input: {'Sleek Cotton Computer', 85481}
+ //생성된 무작위 인풋: {'Sleek Cotton Computer', 85481}
expect(addProductResult).to.be.true;
- //Test failed, the random input triggered some path we never planned for.
- //We discovered a bug early!
+ //테스트는 실패한다, 무작위 인풋은 우리가 계획하지 않은 일이 일어나도록 만든다.
+ //우리는 조기에 버그를 발견했다!
});
```
From aa318282f343f6ee5ce983d28a17be8c7b79e255 Mon Sep 17 00:00:00 2001
From: devori
Date: Fri, 6 Sep 2019 15:23:04 +0900
Subject: [PATCH 074/502] fix typo
remove english sentence that translated into ko-KR
---
readme.kr.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/readme.kr.md b/readme.kr.md
index 3c61d2cd..927f5871 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -388,7 +388,7 @@ test("잘못된 예제: 유효한 속성과 함께 제품을 추가한다면,
```
-### :clap:올바른 예: 무작위한 현실적인 인풋Randomizing realistic input
+### :clap:올바른 예: 무작위한 현실적인 인풋
```javascript
it("더 나은 것: 유효한 제품이 추가된다면, 성공을 얻는다.", async () => {
const addProductResult = addProduct(faker.commerce.productName(), faker.random.number());
From 2998176bc61468fc93e64191a88ed9316ee25475 Mon Sep 17 00:00:00 2001
From: sury
Date: Fri, 6 Sep 2019 15:40:28 +0900
Subject: [PATCH 075/502] translation 1.7 ko/kr
---
readme.kr.md | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/readme.kr.md b/readme.kr.md
index 927f5871..d8c03f0d 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -406,13 +406,15 @@ it("더 나은 것: 유효한 제품이 추가된다면, 성공을 얻는다.",
-## ⚪ ️ 1.7 Test many input combinations using Property-based testing
+## ⚪ ️ 1.7 프로퍼티 기반(Property-based) 테스트를 통해 다양한 인풋 값 조합으로 테스를 하십시오.
-:white_check_mark: **Do:** Typically we choose a few input samples for each test. Even when the input format resembles real-world data (see bullet ‘Don’t foo’), we cover only a few input combinations (method(‘’, true, 1), method(“string” , false” , 0)), However, in production, an API that is called with 5 parameters can be invoked with thousands of different permutations, one of them might render our process down ([see Fuzz Testing](https://en.wikipedia.org/wiki/Fuzzing)). What if you could write a single test that sends 1000 permutations of different inputs automatically and catches for which input our code fails to return the right response? Property-based testing is a technique that does exactly that: by sending all the possible input combinations to your unit under test it increases the serendipity of finding a bug. For example, given a method — addNewProduct(id, name, isDiscount) — the supporting libraries will call this method with many combinations of (number, string, boolean) like (1, “iPhone”, false), (2, “Galaxy”, true). You can run property-based testing using your favorite test runner (Mocha, Jest, etc) using libraries like [js-verify](https://github.com/jsverify/jsverify) or [testcheck](https://github.com/leebyron/testcheck-js) (much better documentation). Update: Nicolas Dubien suggests in the comments below to [checkout fast-check](https://github.com/dubzzz/fast-check#readme) which seems to offer some additional features and also to be actively maintained
+:white_check_mark: **이렇게 해라:** 우리는 일반적으로 적은 수의 인풋 샘플 데이터를 가지고 테스트를 합니다. 심지어 인풋 데이터 형식이 실제 데이터와 비슷할 때에도 다음과 같이 제한된 인풋 조합으로만 테스트를 커버합니다.(method(‘’, true, 1), method(“string” , false” , 0)) 하지만, 운영시에는 5개의 파라미터를 가지는 API는 수 천 개의 다른 조합의 파라미터로 호출 될 수 있고, 이 중 하나가 우리의 시스템을 다운시킬 수도 있습니다. 그렇다면 만약 1000 가지 조합의 인풋값을 자동으로 생성하고 올바른 응답을 반환하지 못하는 인풋값을 찾아내는 단일 테스트를 작성할 수 있다면 어떨까요?
+프로퍼티 기반 테스트는 유닛 테스트에 모든 가능한 인풋 조합을 사용하여 생각하지 못 한 버그를 찾을 확률을 높여줍니다. 예를들어, 다음의 메소드가 주어졌을 때 — addNewProduct(id, name, isDiscount) — 프로퍼티 기반 테스트 라이브러리들은 다양한 파라미터 (number, string, boolean) 조합으로 - (1, “iPhone”, false), (2, “Galaxy”, true) - 이 메소드를 호출합니다. [js-verify](https://github.com/jsverify/jsverify) 나 [testcheck](https://github.com/leebyron/testcheck-js) (much better documentation) 같은 라이브러리를 지원하는 테스트 러너들 (Mocha, Jest, etc) 중 당신이 가장 선호하는 방법을 통해 프로퍼티 기반 테스트를 할 수 있습니다.
+업데이트: Nicolas Dubien이 코멘트를 통해 부가적인 기능들을 더 제공하고 활발하게 유지보수되고 있는 라이브러리 [fast-check](https://github.com/dubzzz/fast-check#readme)를 추천해 주었습니다.
-❌ **Otherwise:** Unconsciously, you choose the test inputs that cover only code paths that work well. Unfortunately, this decreases the efficiency of testing as a vehicle to expose bugs
+❌ **그렇지 않으면:** 무의식적으로 당신은 코드가 잘 동작하는 경로만 커버하는 테스트 인풋을 선택할 것입니다. 불행하게도 이러한 방식은 버그를 드러내는 도구로서의 테스트 효율성을 떨어뜨릴 것입니다.
@@ -421,7 +423,7 @@ it("더 나은 것: 유효한 제품이 추가된다면, 성공을 얻는다.",
-### :clap: Doing It Right Example: Testing many input permutations with “mocha-testcheck”
+### :clap: 올바른 예: “mocha-testcheck”를 사용하여 다양한 인풋 조합으로 테스트 하십시오.

@@ -432,7 +434,7 @@ const {expect} = require('chai');
describe('Product service', () => {
describe('Adding new', () => {
- //this will run 100 times with different random properties
+ //서로 다른 무작위 값으로 100회 호출됩니다.
check.it('Add new product with random yet valid properties, always successful',
gen.int, gen.string, (id, name) => {
expect(addNewProduct(id, name).status).to.equal('approved');
From f4a29a1a736fb827619ce739852fb808db4bebf5 Mon Sep 17 00:00:00 2001
From: sury
Date: Fri, 6 Sep 2019 15:46:03 +0900
Subject: [PATCH 076/502] fix 1.7 tranlation to Korean
---
readme.kr.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/readme.kr.md b/readme.kr.md
index d8c03f0d..3ea34a81 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -419,7 +419,7 @@ it("더 나은 것: 유효한 제품이 추가된다면, 성공을 얻는다.",
-✏ Code Examples
+✏ 코드 예제
From 41d444b468fb224bb42e5a7997a566522b91b1e3 Mon Sep 17 00:00:00 2001
From: sury
Date: Fri, 6 Sep 2019 16:01:06 +0900
Subject: [PATCH 077/502] =?UTF-8?q?fix=20used=20word=20(=EB=8B=A8=EC=9D=BC?=
=?UTF-8?q?=20->=20=EB=8B=A8=EC=9C=84)?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
readme.kr.md | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/readme.kr.md b/readme.kr.md
index 3ea34a81..317d1671 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -406,10 +406,10 @@ it("더 나은 것: 유효한 제품이 추가된다면, 성공을 얻는다.",
-## ⚪ ️ 1.7 프로퍼티 기반(Property-based) 테스트를 통해 다양한 인풋 값 조합으로 테스를 하십시오.
+## ⚪ ️ 1.7 프로퍼티 기반(Property-based) 테스트를 통해 다양한 인풋 값 조합으로 테스트를 하십시오.
-:white_check_mark: **이렇게 해라:** 우리는 일반적으로 적은 수의 인풋 샘플 데이터를 가지고 테스트를 합니다. 심지어 인풋 데이터 형식이 실제 데이터와 비슷할 때에도 다음과 같이 제한된 인풋 조합으로만 테스트를 커버합니다.(method(‘’, true, 1), method(“string” , false” , 0)) 하지만, 운영시에는 5개의 파라미터를 가지는 API는 수 천 개의 다른 조합의 파라미터로 호출 될 수 있고, 이 중 하나가 우리의 시스템을 다운시킬 수도 있습니다. 그렇다면 만약 1000 가지 조합의 인풋값을 자동으로 생성하고 올바른 응답을 반환하지 못하는 인풋값을 찾아내는 단일 테스트를 작성할 수 있다면 어떨까요?
-프로퍼티 기반 테스트는 유닛 테스트에 모든 가능한 인풋 조합을 사용하여 생각하지 못 한 버그를 찾을 확률을 높여줍니다. 예를들어, 다음의 메소드가 주어졌을 때 — addNewProduct(id, name, isDiscount) — 프로퍼티 기반 테스트 라이브러리들은 다양한 파라미터 (number, string, boolean) 조합으로 - (1, “iPhone”, false), (2, “Galaxy”, true) - 이 메소드를 호출합니다. [js-verify](https://github.com/jsverify/jsverify) 나 [testcheck](https://github.com/leebyron/testcheck-js) (much better documentation) 같은 라이브러리를 지원하는 테스트 러너들 (Mocha, Jest, etc) 중 당신이 가장 선호하는 방법을 통해 프로퍼티 기반 테스트를 할 수 있습니다.
+:white_check_mark: **이렇게 해라:** 우리는 일반적으로 적은 수의 인풋 샘플 데이터를 가지고 테스트를 합니다. 심지어 인풋 데이터 형식이 실제 데이터와 비슷할 때에도 다음과 같이 제한된 인풋 조합으로만 테스트를 커버합니다 (method(‘’, true, 1), method(“string”, false, 0)). 하지만, 운영 시에는 5개의 파라미터를 가지는 API가 수천 개의 서로 다른 조합의 파라미터로 호출될 수 있고, 이 중 하나가 우리의 시스템을 다운시킬 수도 있습니다 ([Fuzz Testing 참고](https://en.wikipedia.org/wiki/Fuzzing)). 그렇다면 만약 1,000가지 조합의 인풋값을 자동으로 생성하고 올바른 응답을 반환하지 못하는 인풋값을 찾아내는 단위 테스트를 작성할 수 있다면 어떨까요?
+프로퍼티 기반 테스트는 테스트 대상 단위에 가능한 모든 인풋 조합을 보내 생각하지 못한 버그를 찾을 확률을 높여줍니다. 예를 들어, addNewProduct(id, name, isDiscount) 메소드가 주어졌을 때, 프로퍼티 기반 테스트 라이브러리들은 (1, “iPhone”, false), (2, “Galaxy”, true)와 같은 다양한 (number, string, boolean) 조합으로 이 메소드를 호출합니다. [js-verify](https://github.com/jsverify/jsverify)나 [testcheck](https://github.com/leebyron/testcheck-js)(문서화가 훨씬 잘 되어 있음) 같은 라이브러리를 사용하여, 당신이 가장 선호하는 테스트 러너(Mocha, Jest 등)에서 프로퍼티 기반 테스트를 실행할 수 있습니다.
업데이트: Nicolas Dubien이 코멘트를 통해 부가적인 기능들을 더 제공하고 활발하게 유지보수되고 있는 라이브러리 [fast-check](https://github.com/dubzzz/fast-check#readme)를 추천해 주었습니다.
From 19a263f893dfd1bb3ecbf3295410eab9d996b0f4 Mon Sep 17 00:00:00 2001
From: Rain Byun
Date: Sat, 7 Sep 2019 16:59:39 +0900
Subject: [PATCH 078/502] GPG signature test.
---
readme.kr.md | 18 +++++-------------
1 file changed, 5 insertions(+), 13 deletions(-)
diff --git a/readme.kr.md b/readme.kr.md
index 317d1671..33ea8c9c 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -346,15 +346,14 @@ it("유효한 제품을 삭제하려고 할 때, 메일을 보낸다", async ()
-## ⚪ ️1.6 의미없는 인풋 데이터를 사용하지 말고, 실제와 같은 인풋 데이터를 사용해라
+## ⚪ ️ 1.6 의미 없는 인풋 데이터를 사용하지 말고, 실제와 같은 인풋 데이터를 사용하십시오
:white_check_mark: **이렇게 해라:** 흔히 제품의 버그들은 매우 특수한 인풋 데이터를 통해 나타납니다 - 테스트 인풋이 현실적일수록 버그를 조기에 발견할 가능성이 높아집니다. 실제 데이터와 다양성 및 형태가 유사한 데이터를 생성해 주는 [Faker](https://www.npmjs.com/package/faker) 같은 전용 라이브러리들을 사용하십시오. 이런 라이브러리들은 실제 같은 전화번호, 사용자 이름, 신용카드, 회사명, 심지어 'lorem ipsum' 같은 문자 등을 생성할 수 있습니다. 당신은 (단위 테스트를 대체하는 것이 아니라 그 위에) faker 데이터를 무작위화하여 테스트 대상 단위를 확장하는 테스트를 만들거나, 심지어 실제 운영 환경에서 실제 데이터를 임포트할 수도 있습니다. 다음 단계로 나아가고 싶으십니까? 그렇다면 다음 항목을 보십시오 (property-based testing).
-
+
❌ **그렇지 않다면:** "Foo"와 같은 인위적인 인풋을 사용하면 당신의 모든 개발 테스트가 통과한 것처럼 보이지만, 실제 환경에서는 해커가 “@3e2ddsf . ##’ 1 fdsfds . fds432 AAAA” 같은 지저분한 문자열을 전달해 실패할 수도 있습니다.
-
✏ 코드 예제
@@ -384,11 +383,12 @@ test("잘못된 예제: 유효한 속성과 함께 제품을 추가한다면,
expect(addProductResult).toBe(true);
//거짓된 성공: 공백을 포함하는 문자열을 사용하지 않았기 때문에 테스트는 성공한다.
});
-
```
+
### :clap:올바른 예: 무작위한 현실적인 인풋
+
```javascript
it("더 나은 것: 유효한 제품이 추가된다면, 성공을 얻는다.", async () => {
const addProductResult = addProduct(faker.commerce.productName(), faker.random.number());
@@ -401,9 +401,6 @@ it("더 나은 것: 유효한 제품이 추가된다면, 성공을 얻는다.",
-
-
-
## ⚪ ️ 1.7 프로퍼티 기반(Property-based) 테스트를 통해 다양한 인풋 값 조합으로 테스트를 하십시오.
@@ -411,12 +408,11 @@ it("더 나은 것: 유효한 제품이 추가된다면, 성공을 얻는다.",
:white_check_mark: **이렇게 해라:** 우리는 일반적으로 적은 수의 인풋 샘플 데이터를 가지고 테스트를 합니다. 심지어 인풋 데이터 형식이 실제 데이터와 비슷할 때에도 다음과 같이 제한된 인풋 조합으로만 테스트를 커버합니다 (method(‘’, true, 1), method(“string”, false, 0)). 하지만, 운영 시에는 5개의 파라미터를 가지는 API가 수천 개의 서로 다른 조합의 파라미터로 호출될 수 있고, 이 중 하나가 우리의 시스템을 다운시킬 수도 있습니다 ([Fuzz Testing 참고](https://en.wikipedia.org/wiki/Fuzzing)). 그렇다면 만약 1,000가지 조합의 인풋값을 자동으로 생성하고 올바른 응답을 반환하지 못하는 인풋값을 찾아내는 단위 테스트를 작성할 수 있다면 어떨까요?
프로퍼티 기반 테스트는 테스트 대상 단위에 가능한 모든 인풋 조합을 보내 생각하지 못한 버그를 찾을 확률을 높여줍니다. 예를 들어, addNewProduct(id, name, isDiscount) 메소드가 주어졌을 때, 프로퍼티 기반 테스트 라이브러리들은 (1, “iPhone”, false), (2, “Galaxy”, true)와 같은 다양한 (number, string, boolean) 조합으로 이 메소드를 호출합니다. [js-verify](https://github.com/jsverify/jsverify)나 [testcheck](https://github.com/leebyron/testcheck-js)(문서화가 훨씬 잘 되어 있음) 같은 라이브러리를 사용하여, 당신이 가장 선호하는 테스트 러너(Mocha, Jest 등)에서 프로퍼티 기반 테스트를 실행할 수 있습니다.
업데이트: Nicolas Dubien이 코멘트를 통해 부가적인 기능들을 더 제공하고 활발하게 유지보수되고 있는 라이브러리 [fast-check](https://github.com/dubzzz/fast-check#readme)를 추천해 주었습니다.
-
+
❌ **그렇지 않으면:** 무의식적으로 당신은 코드가 잘 동작하는 경로만 커버하는 테스트 인풋을 선택할 것입니다. 불행하게도 이러한 방식은 버그를 드러내는 도구로서의 테스트 효율성을 떨어뜨릴 것입니다.
-
✏ 코드 예제
@@ -441,14 +437,10 @@ describe('Product service', () => {
});
})
});
-
```
-
-
-
## ⚪ ️ 1.8 If needed, use only short & inline snapshots
From f127f4cec5699ec610b47d4e56ad13741c28f027 Mon Sep 17 00:00:00 2001
From: Rain Byun
Date: Mon, 9 Sep 2019 00:09:05 +0900
Subject: [PATCH 079/502] Translate into Korean 1.8
---
readme.kr.md | 28 +++++++++++++++-------------
1 file changed, 15 insertions(+), 13 deletions(-)
diff --git a/readme.kr.md b/readme.kr.md
index 33ea8c9c..a79db949 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -443,30 +443,31 @@ describe('Product service', () => {
-## ⚪ ️ 1.8 If needed, use only short & inline snapshots
+## ⚪ ️ 1.8 필요한 경우 짧거나 인라인 스냅샷만 사용하십시오.
-:white_check_mark: **Do:** When there is a need for [snapshot testing](https://jestjs.io/docs/en/snapshot-testing), use only short and focused snapshots (i.e. 3-7 lines) that are included as part of the test ([Inline Snapshot](https://jestjs.io/docs/en/snapshot-testing#inline-snapshots)) and not within external files. Keeping this guideline will ensure your tests remain self-explanatory and less fragile.
+:white_check_mark: **이렇게 해라:** [스냅샷 테스트](https://jestjs.io/docs/en/snapshot-testing)가 필요한 경우 외부 파일이 아닌 테스트의 일부 ([인라인 스냅샷](https://jestjs.io/docs/en/snapshot-testing#inline-snapshots))에 포함 된 짧고 집중된 스냅샷(3~7 라인)만 사용하십시오. 이 지침을 따르면 따로 설명이 필요없고 잘 깨지지 않는 테스트가 됩니다.
-On the other hand, ‘classic snapshots’ tutorials and tools encourage to store big files (e.g. component rendering markup, API JSON result) over some external medium and ensure each time when the test run to compare the received result with the saved version. This, for example, can implicitly couple our test to 1000 lines with 3000 data values that the test writer never read and reasoned about. Why is this wrong? By doing so, there are 1000 reasons for your test to fail - it’s enough for a single line to change for the snapshot to get invalid and this is likely to happen a lot. How frequently? for every space, comment or minor CSS/HTML change. Not only this, the test name wouldn’t give a clue about the failure as it just checks that 1000 lines didn’t change, also it encourages to the test writer to accept as the desired true a long document he couldn’t inspect and verify. All of these are symptoms of obscure and eager test that is not focused and aims to achieve too much
+반면에, '고전적인 스냅샷' 튜토리얼 및 도구는 외부에 큰 파일(예: 컴포넌트 렌더링 마크업, API JSON 결과)을 저장하고, 테스트를 실행할 때마다 수신된 결과를 저장된 버전과 비교하기를 권장합니다. 예를 들어, 이것은 1,000 라인(테스트 작성자가 절대 읽지 않고 추론하지 않을 3,000개의 데이터 값을 가진)의 코드를 우리 테스트에 암묵적으로 연결할 수 있습니다. 왜 이것이 잘못되었을까요? 이렇게 하면 테스트가 실패할 1,000가지 이유가 생깁니다. 한 줄만 변경되어도 스냅샷이 유효하지 않게 되고, 이런 일이 일어날 가능성이 높습니다. 얼마나 자주? 모든 공백, 주석 혹은 사소한 CSS/HTML 변경마다. 뿐만 아니라 테스트 이름은 1,000 라인이 변경되지 않았는지를 확인할 뿐이기 때문에 실패에 대한 단서를 제공하지 않습니다. 또한 테스트 작성자가 검사하고 확인할 수 없는 긴 문서를 원하는 결과로 받아들이게끔 합니다. 이 모든 것은 초점이 맞지 않고 너무 많은 것을 달성하려는, 모호하고 욕심 많은 테스트의 증상입니다.
+
+긴 외부 스냅샷이 허용되는 경우가 거의 없다는 점은 주목할 가치가 있습니다 - 데이터가 아닌 스키마를 assert 할 때(값 추출 및 필드에 집중) 또는 수신된 문서가 거의 변경되지 않는 경우
-It’s worth noting that there are few cases where long & external snapshots are acceptable - when asserting on schema and not data (extracting out values and focusing on fields) or when the received document rarely changes
-❌ **Otherwise:** A UI test fails. The code seems right, the screen renders perfect pixels, what happened? your snapshot testing just found a difference from the origin document to current received one - a single space character was added to the markdown...
+❌ **그렇지 않다면:** UI 테스트가 실패합니다. 코드는 문제없어 보이고 화면도 완벽한 픽셀을 렌더링합니다. 무슨 일이 일어난 걸까요? 스냅샷 테스트가 원본 문서와 현재 수신된 문서의 차이점을 발견했습니다. 빈칸 하나가 마크다운에 추가되었습니다...
-✏ Code Examples
+✏ 코드 예제
-### :thumbsdown: Anti-Pattern Example: Coupling our test to unseen 2000 lines of code
+### :thumbsdown: 올바르지 않은 예: 보이지 않는 2,000 라인의 코드를 우리 테스트에 연결

```javascript
-it('TestJavaScript.com is renderd correctly', () => {
+it('TestJavaScript.com이 올바르게 렌더링된다.', () => {
//Arrange
@@ -477,16 +478,18 @@ const receivedPage = renderer
//Assert
expect(receivedPage).toMatchSnapshot();
-//We now implicitly maintain a 2000 lines long document
-//every additional line break or comment - will break this test
+// 이제 2,000 라인의 문서를 암묵적으로 유지합니다.
+// 모든 줄바꿈 또는 주석이 테스트를 망가뜨립니다.
});
```
+
-### :clap: Doing It Right Example: Expectations are visible and focused
+### :clap: 올바른 예: expectation이 잘 보이고 집중된다.
+
```javascript
-it('When visiting TestJavaScript.com home page, a menu is displayed', () => {
+it('TestJavaScript.com 홈페이지를 방문하면 메뉴가 보인다.', () => {
//Arrange
//Act
@@ -509,7 +512,6 @@ expect(menu).toMatchInlineSnapshot(`
-
## ⚪ ️1.9 Avoid global test fixtures and seeds, add data per-test
From d0616c6f28a29e58466104d6853328f38937535a Mon Sep 17 00:00:00 2001
From: Rain Byun
Date: Mon, 9 Sep 2019 09:45:12 +0900
Subject: [PATCH 080/502] Translate into Korean 1.9
---
readme.kr.md | 55 +++++++++++++++++++++++++---------------------------
1 file changed, 26 insertions(+), 29 deletions(-)
diff --git a/readme.kr.md b/readme.kr.md
index a79db949..e9de9fbd 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -99,9 +99,9 @@ JavaScript 및 Node.js에 대한 A부터 Z까지의 믿음직한 가이드입니
-**👇 주의:** 각 글에는 코드 예제가 있으며 때로는 이미지도 있습니다. 클릭하여 확장
+**👇 주의:** 각 글에는 예제 코드가 있으며 때로는 이미지도 있습니다. 클릭하여 확장
-✏ 코드 예제
+✏ 예제 코드
@@ -149,7 +149,7 @@ describe('제품 서비스', function() {
-✏ 코드 예제
+✏ 예제 코드
@@ -203,7 +203,7 @@ test('프리미엄으로 분류해야 합니다.', () => {
-✏ 코드 예제
+✏ 예제 코드
 
-✏ 코드 예제
+✏ 예제 코드
@@ -415,7 +415,7 @@ it("더 나은 것: 유효한 제품이 추가된다면, 성공을 얻는다.",
-✏ 코드 예제
+✏ 예제 코드
@@ -457,7 +457,7 @@ describe('Product service', () => {
-✏ 코드 예제
+✏ 예제 코드
@@ -514,51 +514,50 @@ expect(menu).toMatchInlineSnapshot(`
-## ⚪ ️1.9 Avoid global test fixtures and seeds, add data per-test
-
-:white_check_mark: **Do:** Going by the golden rule (bullet 0), each test should add and act on its own set of DB rows to prevent coupling and easily reason about the test flow. In reality, this is often violated by testers who seed the DB with data before running the tests ([also known as ‘test fixture’](https://en.wikipedia.org/wiki/Test_fixture)) for the sake of performance improvement. While performance is indeed a valid concern — it can be mitigated (see “Component testing” bullet), however, test complexity is a much painful sorrow that should govern other considerations most of the time. Practically, make each test case explicitly add the DB records it needs and act only on those records. If performance becomes a critical concern — a balanced compromise might come in the form of seeding the only suite of tests that are not mutating data (e.g. queries)
-
+## ⚪ ️ 1.9 글로벌 테스트 픽스쳐와 시드를 피하고, 테스트별로 데이터를 추가하십시오.
+:white_check_mark: **이렇게 해라:** 황금률(섹션 0)에 따르면, 각 테스트는 커플링을 방지하고 테스트 흐름을 쉽게 추론할 수 있도록 자체 DB 데이터를 추가하고 그 데이터에 대해서만 동작해야 합니다. 실제로는 성능 향상을 위해 테스트를 실행하기 전에 DB에 데이터를 미리 채워 넣음으로써(['테스트 픽스쳐'라고도 합니다](https://en.wikipedia.org/wiki/Test_fixture)) 이를 위반하는 테스터들이 많습니다. 성능은 실제로 유효한 문제이지만 완화될 수 있습니다(2.2 컴포넌트 테스트 참고). 그러나 테스트 복잡성은 훨씬 더 고통스러운 문제이므로 대부분의 경우 다른 고려사항보다 우선해야 합니다. 실용적으로는, 각 테스트가 필요한 DB 레코드를 명시적으로 추가하고 해당 레코드에 대해서만 동작하도록 만드십시오. 성능이 중요한 문제가 되는 경우 - 데이터를 변경하지 않는 테스트 모음(예: 쿼리)에 대해서만 데이터를 미리 준비하는 형태로 균형 잡힌 타협을 할 수 있습니다.
-❌ **Otherwise:** Few tests fail, a deployment is aborted, our team is going to spend precious time now, do we have a bug? let’s investigate, oh no — it seems that two tests were mutating the same seed data
+
+❌ **그렇지 않으면:** 몇몇 테스트가 실패하고, 배포가 중단되어 팀원들이 귀중한 시간을 소비할 것입니다. 버그가 있습니까? 조사해 보니 '아니요' - 두 테스트가 동일한 시드 데이터를 변경한 것으로 보입니다.
-✏ Code Examples
+✏ 예제 코드
-### :thumbsdown: Anti Pattern Example: tests are not independent and rely on some global hook to feed global DB data
+### :thumbsdown: 올바르지 않은 예: 테스트는 독립적이지 않으며 글로벌 훅에 의한 DB 데이터에 의존

```javascript
before(() => {
- //adding sites and admins data to our DB. Where is the data? outside. At some external json or migration framework
+ // 사이트 및 관리자 데이터를 DB에 추가. 데이터는 어디에 있습니까? 외부에. 외부 JSON 또는 마이그레이션 프레임워크에
await DB.AddSeedDataFromJson('seed.json');
});
-it("When updating site name, get successful confirmation", async () => {
- //I know that site name "portal" exists - I saw it in the seed files
+it("사이트 이름을 업데이트 할 때, 성공을 확인한다.", async () => {
+ // 사이트 이름 "portal"이 존재한다는 것을 알고있습니다. 시드파일에서 봤습니다.
const siteToUpdate = await SiteService.getSiteByName("Portal");
const updateNameResult = await SiteService.changeName(siteToUpdate, "newName");
expect(updateNameResult).to.be(true);
});
-it("When querying by site name, get the right site", async () => {
- //I know that site name "portal" exists - I saw it in the seed files
+it("사이트 이름을 쿼리할 때, 올바른 사이트 이름을 얻는다.", async () => {
+ // 사이트 이름 "portal"이 존재한다는 것을 알고있습니다. 시드파일에서 봤습니다.
const siteToCheck = await SiteService.getSiteByName("Portal");
- expect(siteToCheck.name).to.be.equal("Portal"); //Failure! The previous test change the name :[
+ expect(siteToCheck.name).to.be.equal("Portal"); // 실패! 이전 테스트에서 이름이 변경되었습니다. ㅠㅠ
});
-
```
+
-### :clap: Doing It Right Example: We can stay within the test, each test acts on its own set of data
+### :clap: 올바른 예: 우리는 테스트 내부에만 머물 수 있으며, 각 테스트는 자체 데이터 세트에서 동작합니다.
```javascript
-it("When updating site name, get successful confirmation", async () => {
- //test is adding a fresh new records and acting on the records only
+it("사이트 이름을 업데이트 할 때, 성공을 확인한다.", async () => {
+ // 테스트는 새로운 레코드를 새로 추가하고 해당 레코드에 대해서만 동작합니다.
const siteUnderTest = await SiteService.addSite({
name: "siteForUpdateTest"
});
@@ -567,13 +566,11 @@ it("When updating site name, get successful confirmation", async () => {
expect(updateNameResult).to.be(true);
});
-
```
-
-
+
## ⚪ ️ 1.10 Don’t catch errors, expect them
:white_check_mark: **Do:** When trying to assert that some input triggers an error, it might look right to use try-catch-finally and asserts that the catch clause was entered. The result is an awkward and verbose test case (example below) that hides the simple test intent and the result expectations
From ee492a27515188967ed2618c19e1579421efa8e8 Mon Sep 17 00:00:00 2001
From: Rain Byun
Date: Wed, 11 Sep 2019 00:24:47 +0900
Subject: [PATCH 081/502] Translate into Korean 1.10 to 1.11
---
readme.kr.md | 76 +++++++++++++++++++++++-----------------------------
1 file changed, 33 insertions(+), 43 deletions(-)
diff --git a/readme.kr.md b/readme.kr.md
index e9de9fbd..fce412a0 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -572,98 +572,88 @@ it("사이트 이름을 업데이트 할 때, 성공을 확인한다.", async ()
-## ⚪ ️ 1.10 Don’t catch errors, expect them
-:white_check_mark: **Do:** When trying to assert that some input triggers an error, it might look right to use try-catch-finally and asserts that the catch clause was entered. The result is an awkward and verbose test case (example below) that hides the simple test intent and the result expectations
+## ⚪ ️ 1.10 오류를 catch 하지 말고 expect 하십시오.
-A more elegant alternative is the using the one-line dedicated Chai assertion: expect(method).to.throw (or in Jest: expect(method).toThrow()). It’s absolutely mandatory to also ensure the exception contains a property that tells the error type, otherwise given just a generic error the application won’t be able to do much rather than show a disappointing message to the user
-
+:white_check_mark: **이렇게 해라:** 오류를 발생시키는 입력값을 assert 할 때, try-catch-finally를 사용하고 catch 블럭에서 assert 하는게 맞아 보일수도 있습니다. 아래 예는 테스트 의도와 결과 expectation을 숨기는 어색하고 장황한 테스트 사례입니다.
+보다 우아한 대안은 한 줄짜리 전용 Chai assertion을 사용하는 것입니다: expect(method).to.throw (혹은 Jest에서는: expect(method).toThrow()). 또한 오류 유형을 알려주는 속성이 예외에 반드시 포함되도록 해야 합니다. 그렇지 않고 일반적인 오류만 주어지면 애플리케이션은 사용자에게 실망스러운 메시지를 표시하는 것밖에 할 수 없습니다.
-❌ **Otherwise:** It will be challenging to infer from the test reports (e.g. CI reports) what went wrong
+
+❌ **그렇지 않으면:** 무엇이 잘못되었는지 테스트 보고서(예: CI 보고서)에서 추론하기 어려울 것입니다.
-✏ Code Examples
+✏ 예제 코드
-### :thumbsdown: Anti-pattern Example: A long test case that tries to assert the existence of error with try-catch
+### :thumbsdown: 올바르지 않은 예: try-catch로 오류가 존재한다고 assert 하는 긴 테스트 사례

```javascript
-it("When no product name, it throws error 400", async() => {
-let errorWeExceptFor = null;
-try {
- const result = await addNewProduct({name:'nest'});}
-catch (error) {
- expect(error.code).to.equal('InvalidInput');
- errorWeExceptFor = error;
-}
-expect(errorWeExceptFor).not.to.be.null;
-//if this assertion fails, the tests results/reports will only show
-//that some value is null, there won't be a word about a missing Exception
+it("제품명이 없으면 400 오류를 던진다.", async() => {
+ let errorWeExceptFor = null;
+ try {
+ const result = await addNewProduct({name:'nest'});}
+ catch (error) {
+ expect(error.code).to.equal('InvalidInput');
+ errorWeExceptFor = error;
+ }
+ expect(errorWeExceptFor).not.to.be.null;
+ // 이 assertion이 실패하면, 테스트 결과/보고서는 어떤 값이 null이라는 것만 보여줄 뿐,
+ // 누락된 예외(Exception)에 대해서는 한마디도 하지 않습니다.
});
-
```
+
-### :clap: Doing It Right Example: A human-readable expectation that could be understood easily, maybe even by QA or technical PM
+### :clap: 올바른 예: QA나 PM이라도 쉽게 이해할 수 있고 읽기 쉬운 expectation
```javascript
-it.only("When no product name, it throws error 400", async() => {
+it.only("제품명이 없으면 400 오류를 던진다.", async() => {
expect(addNewProduct).to.eventually.throw(AppError).with.property('code', "InvalidInput");
});
-
```
-
-
-
-## ⚪ ️ 1.11 Tag your tests
-
-:white_check_mark: **Do:** Different tests must run on different scenarios: quick smoke, IO-less, tests should run when a developer saves or commits a file, full end-to-end tests usually run when a new pull request is submitted, etc. This can be achieved by tagging tests with keywords like #cold #api #sanity so you can grep with your testing harness and invoke the desired subset. For example, this is how you would invoke only the sanity test group with Mocha: mocha — grep ‘sanity’
-
+## ⚪ ️ 1.11 테스트에 태깅하십시오.
+:white_check_mark: **이렇게 해라:** 서로 다른 테스트는 서로 다른 시나리오에서 실행되어야 합니다: 개발자가 파일을 저장하거나 커밋할 때는 빠르고 IO가 없는 테스트를 실행해야 하고, 전체 end-to-end 테스트는 일반적으로 새로운 Pull Request가 제출되었을 때 실행되는 식입니다. 이런 경우 #cold #api #sanity 같은 키워드로 테스트에 태깅해 두면 테스트 하네스에서 grep으로 원하는 하위 집합만 호출할 수 있습니다. 예) Mocha를 이용해서 sanity 테스트 그룹만 실행하는 방법: mocha --grep 'sanity'
-❌ **Otherwise:** Running all the tests, including tests that perform dozens of DB queries, any time a developer makes a small change can be extremely slow and keeps developers away from running tests
+
+❌ **그렇지 않으면:** 개발자가 작은 변경을 할 때마다 수십 개의 DB 쿼리를 수행하는 테스트를 포함한 모든 테스트를 실행한다면, 속도가 매우 느려져 개발자가 테스트를 실행하지 않게 만들 것입니다.
-✏ Code Examples
+✏ 예제 코드
-### :clap: Doing It Right Example: Tagging tests as ‘#cold-test’ allows the test runner to execute only fast tests (Cold===quick tests that are doing no IO and can be executed frequently even as the developer is typing)
+### :clap: 올바른 예: 테스트를 '#cold-test'로 태깅하면 테스트 러너가 빠른 테스트만 실행할 수 있습니다(cold === IO를 수행하지 않아 개발자가 코딩하는 중에도 자주 실행할 수 있는 빠른 테스트).

+
```javascript
-//this test is fast (no DB) and we're tagging it correspondigly
-//now the user/CI can run it frequently
-describe('Order service', function() {
- describe('Add new order #cold-test #sanity', function() {
- test('Scenario - no currency was supplied. Expectation - Use the default currency #sanity', function() {
- //code logic here
+// 이 테스트는 빠르므로(DB 없음) 그에 맞게 태그를 지정했습니다. 이제 사용자/CI가 자주 실행할 수 있습니다.
+describe('주문 서비스', function() {
+ describe('새 주문 추가 #cold-test #sanity', function() {
+ test('시나리오 - 통화가 제공되지 않음. 기대 결과 - 기본 통화 사용 #sanity', function() {
+ // code logic here
});
});
});
-
-
```
-
-
-
## ⚪ ️1.12 Other generic good testing hygiene
From 5c8c6524836cdedbb09a8fdb04f0ef5b862c9399 Mon Sep 17 00:00:00 2001
From: sury
Date: Thu, 12 Sep 2019 11:50:06 +0900
Subject: [PATCH 082/502] translate into Korean 2.1
---
readme.kr.md | 20 +++++++++-----------
1 file changed, 9 insertions(+), 11 deletions(-)
diff --git a/readme.kr.md b/readme.kr.md
index fce412a0..02b5d02a 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -670,32 +670,30 @@ Learn and practice [TDD principles](https://www.sm-cloud.com/book-review-test-dr
# Section 2️⃣: Backend Testing
-## ⚪ ️2.1 Enrich your testing portfolio: Look beyond unit tests and the pyramid
+## ⚪ ️2.1 당신의 테스트 포트폴리오를 풍부하게 하십시오: 단위 테스트와 피라미드를 넘어서세요.
-:white_check_mark: **Do:** The [testing pyramid](https://martinfowler.com/bliki/TestPyramid.html), though 10> years old, is a great and relevant model that suggests three testing types and influences most developers’ testing strategy. At the same time, more than a handful of shiny new testing techniques emerged and are hiding in the shadows of the testing pyramid. Given all the dramatic changes that we’ve seen in the recent 10 years (Microservices, cloud, serverless), is it even possible that one quite-old model will suit *all* types of applications? shouldn’t the testing world consider welcoming new testing techniques?
+:white_check_mark: **이렇게 해라:** 10년이 넘은 모델인 [테스트 피라미드](https://martinfowler.com/bliki/TestPyramid.html)는 세 가지 테스트 유형을 제시하고 대다수 개발자의 테스트 전략에 영향을 주는 훌륭한 모델입니다. 동시에, 몇 가지 반짝이는 새로운 테스트 기술들이 등장하였지만 모두 테스트 피라미드의 그림자 뒤에 가려져 있습니다. 우리가 최근 10년간 보아 온 극적인 기술의 변화들(Microservices, cloud, serverless)을 고려할 때, 아주 오래된 모델 하나가 *모든* 어플리케이션 유형에 적합하다는 것이 가능한가요? 테스트 세계도 새로운 테스트 기술을 받아들이는 것을 고려해야 하지 않을까요?
-Don’t get me wrong, in 2019 the testing pyramid, TDD and unit tests are still a powerful technique and are probably the best match for many applications. Only like any other model, despite its usefulness, [it must be wrong sometimes](https://en.wikipedia.org/wiki/All_models_are_wrong). For example, consider an IOT application that ingests many events into a message-bus like Kafka/RabbitMQ, which then flow into some data-warehouse and are eventually queried by some analytics UI. Should we really spend 50% of our testing budget on writing unit tests for an application that is integration-centric and has almost no logic? As the diversity of application types increase (bots, crypto, Alexa-skills) greater are the chances to find scenarios where the testing pyramid is not the best match.
+오해는 하지 마세요. 2019년에도 테스트 피라미드, TDD, 단위 테스트는 여전히 강력한 기술이고 아마도 많은 어플리케이션에 가장 잘 어울립니다. 다른 모델과 마찬가지로, 테스트 피라미드는 유용하지만 [그것이 항상 맞는 것은 아닙니다](https://en.wikipedia.org/wiki/All_models_are_wrong). 예를 들어, 어떤 IOT 어플리케이션을 생각해 봅시다. 이 어플리케이션은 다수의 이벤트를 Kafka/RabbitMQ 같은 메세지 버스로 보내고 다시 데이터 웨어하우스로 흘려보냅니다. 그리고 이 데이터들은 어떤 분석 UI에서 조회됩니다. 우리는 정말 테스트 예산의 50%를 통합 중심적(integration-centric)이고 로직이 거의 없는 어플리케이션의 단위 테스트를 작성하는 데 할애해야 할까요? 어플리케이션 유형들이 다양해질수록(bots, crypto, Alexa-skills) 테스트 피라미드가 적합하지 않은 시나리오들을 발견할 가능성이 커집니다.
-It’s time to enrich your testing portfolio and become familiar with more testing types (the next bullets suggest few ideas), mind models like the testing pyramid but also match testing types to real-world problems that you’re facing (‘Hey, our API is broken, let’s write consumer-driven contract testing!’), diversify your tests like an investor that build a portfolio based on risk analysis — assess where problems might arise and match some prevention measures to mitigate those potential risks
+지금이 당신의 테스트 포트폴리오를 넓히고 더 많은 테스트 유형들에 익숙해질 시간입니다. (다음 항목들에서 몇 가지 아이디어를 제안합니다.) 테스트 피라미드 같은 모델들을 염두에 두되, 당신이 직면하고 있는 현실 세계의 문제들에 적합한 테스트 유형들을 찾으세요. ("우리 API 깨졌어. Consumer-driven contract 테스트 작성하자!" 처럼요.) 위험성 분석을 기반으로 포트폴리오를 구축하는 투자자처럼 당신의 테스트를 다양화하세요 - 문제가 발생할 수 있는 부분을 가늠하고 잠재적 위험성을 줄일 수 있는 예방 방법을 찾으세요.
-A word of caution: the TDD argument in the software world takes a typical false-dichotomy face, some preach to use it everywhere, others think it’s the devil. Everyone who speaks in absolutes is wrong :]
+주의 사항 : 소프트웨어 세계에서의 TDD 논쟁은 전형적인 잘못된 이분법입니다. 어떤 사람들은 TDD를 모든 곳에 적용하라고 주장하지만, 다른 일부는 TDD를 악마라고 생각합니다. 절대적으로 한쪽만 주장하는 사람들은 모두 틀렸습니다 :]
-❌ **Otherwise:** You’re going to miss some tools with amazing ROI, some like Fuzz, lint, and mutation can provide value in 10 minutes
-
+❌ **그렇지 않으면:** 당신은 굉장한 RIO를 주는 몇 가지 툴들을 놓칠 것입니다. Fuzz, lint, mutation 테스트들은 단 10분만에 당신에게 가치를 제공할 수 있습니다.
-✏ Code Examples
+✏ 코드 예제
-### :clap: Doing It Right Example: Cindy Sridharan suggests a rich testing portfolio in her amazing post ‘Testing Microservices — the sane way’
-
+### :clap: 올바른 예: Cindy Sridharan은 그녀의 훌륭한 글 ‘Testing Microservices — the sane way’에서 풍부한 테스트 포트폴리오를 제안합니다. 
-☺️Example: [YouTube: “Beyond Unit Tests: 5 Shiny Node.JS Test Types (2018)” (Yoni Goldberg)](https://www.youtube.com/watch?v=-2zP494wdUY&feature=youtu.be)
+예제: [YouTube: “Beyond Unit Tests: 5 Shiny Node.JS Test Types (2018)” (Yoni Goldberg)](https://www.youtube.com/watch?v=-2zP494wdUY&feature=youtu.be)
From 323f6df95c84c5a99f71c3412f475109c00b6615 Mon Sep 17 00:00:00 2001
From: Rain Byun
Date: Sat, 14 Sep 2019 01:18:42 +0900
Subject: [PATCH 083/502] Translate into Korean 1.12
- Fix typo.
---
readme.kr.md | 23 +++++++++--------------
1 file changed, 9 insertions(+), 14 deletions(-)
diff --git a/readme.kr.md b/readme.kr.md
index 02b5d02a..74855db3 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -656,21 +656,21 @@ describe('주문 서비스', function() {
-## ⚪ ️1.12 Other generic good testing hygiene
-:white_check_mark: **Do:** This post is focused on testing advice that is related to, or at least can be exemplified with Node JS. This bullet, however, groups few non-Node related tips that are well-known
+## ⚪ ️ 1.12 일반적인 좋은 테스트 기법들
+
+:white_check_mark: **이렇게 해라:** 이 글은 Node.js와 관련이 있거나 최소한 Node.js로 예를 들 수 있는 테스트 조언에 중점을 두고 있습니다. 그러나 이 항목은 Node.js와는 무관하지만 잘 알려진 몇 가지 팁들을 모았습니다.
-Learn and practice [TDD principles](https://www.sm-cloud.com/book-review-test-driven-development-by-example-a-tldr/) — they are extremely valuable for many but don’t get intimidated if they don’t fit your style, you’re not the only one. Consider writing the tests before the code in a [red-green-refactor style](https://blog.cleancoder.com/uncle-bob/2014/12/17/TheCyclesOfTDD.html), ensure each test checks exactly one thing, when you find a bug — before fixing write a test that will detect this bug in the future, let each test fail at least once before turning green, start a module by writing a quick and simplistic code that satsifies the test - then refactor gradually and take it to a production grade level, avoid any dependency on the environment (paths, OS, etc)
-
+[TDD 원칙](https://www.sm-cloud.com/book-review-test-driven-development-by-example-a-tldr/)을 배우고 연습하십시오 - 많은 사람들에게 매우 가치가 있지만, 자신의 스타일에 맞지 않더라도 주눅 들 필요는 없습니다. 당신만 그런 것이 아닙니다. [실패-성공-리팩토링 스타일](https://blog.cleancoder.com/uncle-bob/2014/12/17/TheCyclesOfTDD.html)로 코드 작성 전에 테스트를 작성하는 것을 고려하십시오. 각 테스트가 정확히 한 가지만 확인하도록 하십시오. 버그를 발견하면, 수정하기 전에 앞으로 이 버그를 발견할 테스트를 먼저 작성하십시오. 테스트가 성공하기 전에 각 테스트가 한 번 이상 실패하도록 하십시오. 테스트를 만족시키는 빠르고 단순한 코드로 모듈을 시작하고, 점진적으로 리팩토링하여 프로덕션 등급 수준으로 가져가십시오. 환경(경로, OS 등)에 대한 종속성을 피하십시오.
+
-❌ **Otherwise:** You‘ll miss pearls of wisdom that were collected for decades
+❌ **그렇지 않으면:** 수십 년 동안 수집 된 아주 소중한 조언을 놓치게 될 것입니다.
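As a small illustration of the red-green-refactor cycle mentioned above, here is a sketch in Jest, assuming a hypothetical isPremium function:

```javascript
// Red: write the test first; it fails because the logic doesn't exist yet
test('When customer spent more than 500$, should be classified as premium', () => {
  expect(isPremium({ spent: 505 })).toBe(true);
});

// Green: the quickest, simplest code that satisfies the test
function isPremium(customer) {
  return customer.spent > 500;
}

// Refactor: gradually take it to production grade while the test stays green,
// e.g. name the threshold and tolerate missing fields
const PREMIUM_THRESHOLD = 500;
function isPremiumRefactored(customer = {}) {
  return (customer.spent || 0) > PREMIUM_THRESHOLD;
}
```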
+# 섹션 2️⃣: 백엔드 테스트
-# Section 2️⃣: Backend Testing
-
-## ⚪ ️2.1 당신의 테스트 포트폴리오를 풍부하게 하십시오: 단위 테스트와 피라미드를 넘어서세요.
+## ⚪ ️ 2.1 당신의 테스트 포트폴리오를 풍부하게 하십시오: 단위 테스트와 피라미드를 넘어서세요.
:white_check_mark: **이렇게 해라:** 10년이 넘은 모델인 [테스트 피라미드](https://martinfowler.com/bliki/TestPyramid.html)는 세 가지 테스트 유형을 제시하고 대다수 개발자의 테스트 전략에 영향을 주는 훌륭한 모델입니다. 동시에, 몇 가지 반짝이는 새로운 테스트 기술들이 등장하였지만 모두 테스트 피라미드의 그림자 뒤에 가려져 있습니다. 우리가 최근 10년간 보아 온 극적인 기술의 변화들(Microservices, cloud, serverless)을 고려할 때, 아주 오래된 모델 하나가 *모든* 어플리케이션 유형에 적합하다는 것이 가능한가요? 테스트 세계도 새로운 테스트 기술을 받아들이는 것을 고려해야 하지 않을까요?
@@ -682,8 +682,7 @@ Learn and practice [TDD principles](https://www.sm-cloud.com/book-review-test-dr
-
-❌ **그렇지 않으면:** 당신은 굉장한 RIO를 주는 몇 가지 툴들을 놓칠 것입니다. Fuzz, lint, mutation 테스트들은 단 10분만에 당신에게 가치를 제공할 수 있습니다.
+❌ **그렇지 않으면:** 당신은 굉장한 ROI를 주는 몇 가지 툴들을 놓칠 것입니다. Fuzz, lint, mutation 테스트들은 단 10분만에 당신에게 가치를 제공할 수 있습니다.
@@ -699,12 +698,8 @@ Learn and practice [TDD principles](https://www.sm-cloud.com/book-review-test-dr

-
-
-
-
## ⚪ ️2.2 Component testing might be your best affair
From d67d24ea5ea22e73965f3144704a87c89fc80192 Mon Sep 17 00:00:00 2001
From: leo lee
Date: Sun, 15 Sep 2019 17:00:37 +0900
Subject: [PATCH 084/502] - Translation into Korean(3.1, 3.2)
---
readme.kr.md | 46 +++++++++++++++++++++++-----------------------
1 file changed, 23 insertions(+), 23 deletions(-)
diff --git a/readme.kr.md b/readme.kr.md
index 74855db3..6ea44346 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -917,32 +917,32 @@ it("When updating site name, get successful confirmation", async () => {
-# Section 3️⃣: Frontend Testing
+# 섹션 3️⃣: 프론트엔드 테스트
-## ⚪ ️ 3.1. Separate UI from functionality
+## ⚪ ️ 3.1. 기능으로부터 화면을 분리하십시오
-:white_check_mark: **Do:** When focusing on testing component logic, UI details become a noise that should be extracted, so your tests can focus on pure data. Practically, extract the desired data from the markup in an abstract way that is not too coupled to the graphic implementation, assert only on pure data (vs HTML/CSS graphic details) and disable animations that slow down. You might get tempted to avoid rendering and test only the back part of the UI (e.g. services, actions, store) but this will result in fictional tests that don't resemble the reality and won't reveal cases where the right data doesn't even arrive in the UI
+:white_check_mark: **이렇게 해라:** 컴포넌트 로직 테스트에 집중할 때, 화면의 세부사항들은 걸러내야 할 노이즈가 됩니다. 그것을 걸러냄으로써 당신의 테스트는 순수한 데이터에 집중할 수 있습니다. 실용적으로는, 그래픽 구현에 너무 결합되지 않는 추상적인 방법으로 원하는 데이터를 마크업에서 추출하고, (HTML/CSS 화면 세부사항이 아닌) 순수한 데이터만 검증하며, 테스트를 느리게 만드는 애니메이션은 비활성화하십시오. 렌더링을 피하고 화면의 뒷부분(서비스, 액션, 스토어 등)만 테스트하고 싶은 유혹이 생길 수 있습니다. 하지만 이는 실제와 닮지 않은 가짜 테스트로 이어져, 올바른 데이터가 화면에 도달조차 하지 않는 경우를 드러내지 못합니다.
-❌ **Otherwise:** The pure calculated data of your test might be ready in 10ms, but then the whole test will last 500ms (100 tests = 1 min) due to some fancy and irrelevant animation
+❌ **그렇지 않으면:** 당신의 테스트의 순수하게 계산된 데이터는 10ms 내에 준비될 수도 있지만, 화려하고 불필요한 애니메이션 때문에 전체 테스트는 500ms(100 테스트 = 1분) 동안 지속될 것입니다.
-✏ Code Examples
+✏ 코드 예제
-### :clap: Doing It Right Example: Separating out the UI details
+### :clap: 올바른 예: 화면의 세부사항을 빼내는 것
 
```javascript
-test('When users-list is flagged to show only VIP, should display only VIP members', () => {
+test('오직 VIP만 보도록 사용자 목록을 표시했을 때, VIP 멤버들만 보여야 한다', () => {
// Arrange
const allUsers = [
{ id: 1, name: 'Yoni Goldberg', vip: false },
@@ -952,19 +952,19 @@ test('When users-list is flagged to show only VIP, should display only VIP membe
// Act
const { getAllByTestId } = render(<UsersList users={allUsers} showOnlyVip={true} />);
- // Assert - Extract the data from the UI first
+ // Assert - 우선 화면으로부터 데이터를 추출
const allRenderedUsers = getAllByTestId('user').map(uiElement => uiElement.textContent);
const allRealVIPUsers = allUsers.filter((user) => user.vip).map((user) => user.name);
- expect(allRenderedUsers).toEqual(allRealVIPUsers); //compare data with data, no UI here
+ expect(allRenderedUsers).toEqual(allRealVIPUsers); // 화면(UI)이 아닌 데이터와 데이터를 비교
});
```
-### :thumbsdown: Anti Pattern Example: Assertion mix UI details and data
+### :thumbsdown: 잘못된 예: 화면 세부사항들과 데이터를 섞어서 검증
```javascript
-test('When flagging to show only VIP, should display only VIP members', () => {
+test('사용자 목록이 VIP만 표시하도록 설정되었을 때, VIP 멤버들만 보여야 한다', () => {
// Arrange
const allUsers = [
{id: 1, name: 'Yoni Goldberg', vip: false },
@@ -974,7 +974,7 @@ test('When flagging to show only VIP, should display only VIP members', () => {
// Act
const { getAllByTestId } = render(<UsersList users={allUsers} showOnlyVip={true} />);
- // Assert - Mix UI & data in assertion
+ // Assert - 화면과 데이터를 섞어서 검증
expect(getAllByTestId('user')).toEqual('[<span>John Doe</span>]');
});
@@ -988,21 +988,21 @@ test('When flagging to show only VIP, should display only VIP members', () => {
-## ⚪ ️ 3.2 Query HTML elements based on attributes that are unlikely to change
+## ⚪ ️ 3.2 잘 변하지 않는 속성들에 기반해서 HTML 엘리먼트들을 찾으십시오
-:white_check_mark: **Do:** Query HTML elements based on attributes that are likely to survive graphic changes unlike CSS selectors and like form labels. If the designated element doesn't have such attributes, create a dedicated test attribute like 'test-id-submit-button'. Going this route not only ensures that your functional/logic tests never break because of look & feel changes but also it becomes clear to the entire team that this element and attribute are utilized by tests and shouldn't get removed
+:white_check_mark: **이렇게 해라:** CSS 셀렉터와 달리, 양식 레이블처럼 그래픽 변경에도 살아남을 가능성이 높은 속성들을 기반으로 HTML 엘리먼트들을 찾으십시오. 만약 지정된 엘리먼트에 그런 속성이 없다면, 'test-id-submit-button'과 같은 테스트 전용 속성을 만드십시오. 이 방법은 당신의 기능/로직 테스트들이 룩앤필 변경 때문에 절대 깨지지 않음을 보장할 뿐만 아니라, 이 엘리먼트와 속성이 테스트에서 사용되므로 제거되어서는 안 된다는 것을 팀 전체에게 명확하게 알려줍니다.
-❌ **Otherwise:** You want to test the login functionality that spans many components, logic and services, everything is set up perfectly - stubs, spies, Ajax calls are isolated. All seems perfect. Then the test fails because the designer changed the div CSS class from 'thick-border' to 'thin-border'
+❌ **그렇지 않으면:** 당신은 로그인 기능을 테스트하기를 원합니다. 이 기능은 많은 컴포넌트들, 로직 그리고 서비스들에 걸쳐져 있고 모든 것은 완벽하게 준비되어 있습니다 - 스텁, 스파이, Ajax 호출은 격리되어져 있습니다. 모든것은 완벽한 것 처럼 보입니다. 그렇지만, 이 테스트는 디자이너에 의해 div 클래스 이름이 'thick-border' 에서 'thin-border'로 바뀌었기 때문에 실패합니다.
-✏ Code Examples
+✏ 코드 예제
-### :clap: Doing It Right Example: Querying an element using a dedicated attrbiute for testing
+### :clap: 올바른 예: 테스트 전용 속성을 사용해서 엘리먼트를 찾기

@@ -1017,8 +1017,8 @@ test('When flagging to show only VIP, should display only VIP members', () => {
```
```javascript
-// this example is using react-testing-library
- test('Whenever no data is passed to metric, show 0 as default', () => {
+// react-testing-library를 사용한 예제
+ test('metric에 데이터가 전달되지 않으면, 0을 기본값으로 보여준다', () => {
// Arrange
const metricValue = undefined;
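  // Act - 이하는 diff에서 잘려 나간 부분을 추정해 보완한 스케치입니다 (dashboardMetric, errorsLabel은 가정)
  const { getByTestId } = render(<dashboardMetric value={metricValue} />);
  // Assert - 테스트 전용 속성(data-testid)으로 찾은 엘리먼트를 검증
  expect(getByTestId('errorsLabel').textContent).toBe('0');
 });
```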
@@ -1032,15 +1032,15 @@ test('When flagging to show only VIP, should display only VIP members', () => {
-### :thumbsdown: Anti-Pattern Example: Relying on CSS attributes
+### :thumbsdown: 잘못된 예: CSS 속성에 의존
```html
<span id="metric" className="d-flex-column">{value}</span>
```
```javascript
-// this exammple is using enzyme
-test('Whenever no data is passed, error metric shows zero', () => {
+// enzyme을 사용한 예제
+test('데이터가 전달되지 않으면, 0을 보여준다', () => {
// ...
expect(wrapper.find("[className='d-flex-column']").text()).toBe("0");
From 76f62965c38da95de5fb0006c47290571c51ed7b Mon Sep 17 00:00:00 2001
From: sury
Date: Wed, 18 Sep 2019 18:33:16 +0900
Subject: [PATCH 085/502] Translation into Korean (2.2,2.3)
---
readme.kr.md | 26 +++++++++++++-------------
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/readme.kr.md b/readme.kr.md
index 6ea44346..47f0dde0 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -702,24 +702,23 @@ describe('주문 서비스', function() {
-## ⚪ ️2.2 Component testing might be your best affair
+## ⚪ ️2.2 컴포넌트 테스트가 최선의 방법일 수 있다.
-:white_check_mark: **Do:** Each unit test covers a tiny portion of the application and it’s expensive to cover the whole, whereas end-to-end testing easily covers a lot of ground but is flaky and slower, why not apply a balanced approach and write tests that are bigger than unit tests but smaller than end-to-end testing? Component testing is the unsung song of the testing world — they provide the best from both worlds: reasonable performance and a possibility to apply TDD patterns + realistic and great coverage.
+:white_check_mark: **이렇게 해라:** 각각의 단위 테스트는 어플리케이션의 매우 작은 부분만을 커버하고 전체를 모두 커버하기에는 비용이 많이 듭니다. 반면에, end-to-end 테스트는 쉽게 많은 부분을 커버할 수 있지만 불안정(flaky)하고 더 느립니다. 그렇다면 균형 잡힌 접근법을 적용하여 단위 테스트보다는 크지만 end-to-end 테스트보다는 작은 테스트를 작성하는 것은 어떨까요? 컴포넌트 테스트는 테스트 세계에서 잘 알려지지 않은 방법입니다. - 컴포넌트 테스트는 다음의 두 가지 이점을 모두 제공합니다: 합리적인 성능과 TDD 패턴을 적용할 수 있는 가능성 + 현실적이면서 훌륭한 커버리지
-Component tests focus on the Microservice ‘unit’, they work against the API, don’t mock anything which belongs to the Microservice itself (e.g. real DB, or at least the in-memory version of that DB) but stub anything that is external like calls to other Microservices. By doing so, we test what we deploy, approach the app from outwards to inwards and gain great confidence in a reasonable amount of time.
+컴포넌트 테스트는 마이크로 서비스 '단위'에 중점을 두고 API에 대하여 동작합니다. 마이크로서비스 그 자체에 속한 것들 (예를들면, 실제 DB 또는 해당 DB의 인-메모리 버전)은 모킹(Mock)하지 않고, 다른 마이크로서비스 호출과 같은 외부적인 것은 스텁(Stub)합니다. 그렇게 함으로써 우리는 우리가 배포하는 것을 테스트하고 어플리케이션의 바깥쪽에서 안쪽으로 접근하며, 적당한 시간 안에서 큰 자신감을 얻을 수 있습니다.
-❌ **Otherwise:** You may spend long days on writing unit tests to find out that you got only 20% system coverage
-
+❌ **그렇지 않으면:** 시스템 커버리지가 20%에 불과하다는 것을 깨닫기까지 단위 테스트를 작성하는 데 오랜 시간이 걸릴 수 있습니다.
-✏ Code Examples
+✏ 코드 예제
-### :clap: Doing It Right Example: Supertest allows approaching Express API in-process (fast and cover many layers)
+### :clap: 올바른 예: Supertest를 통해 프로세스 내 Express API에 접근할 수 있습니다. (빠르고 다양한 계층을 커버함)

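아래는 이 예제가 보여주려는 패턴을 나타낸 최소한의 스케치입니다. supertest로 네트워크 호출 없이 프로세스 내에서 Express 앱을 직접 호출합니다. (app 모듈과 /order 경로는 설명을 위한 가정입니다.)

```javascript
const request = require('supertest');
const app = require('../app'); // 가정: listen하지 않고 Express 앱을 export하는 모듈

it('잘못된 주문을 추가하면, 400 에러를 받는다', async () => {
  // Arrange
  const invalidOrder = { productId: undefined };
  // Act - 실제 네트워크 없이 프로세스 내에서 API 계층부터 그 아래 계층까지 관통
  const response = await request(app).post('/order').send(invalidOrder);
  // Assert
  expect(response.status).toBe(400);
});
```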
@@ -730,22 +729,23 @@ Component tests focus on the Microservice ‘unit’, they work against the API,
-## ⚪ ️2.3 Ensure new releases don’t break the API using
+## ⚪ ️2.3 신규 릴리즈가 API를 깨뜨리지 않도록 하십시오.
-:white_check_mark: **Do:** So your Microservice has multiple clients, and you run multiple versions of the service for compatibility reasons (keeping everyone happy). Then you change some field and ‘boom!’, some important client who relies on this field is angry. This is the Catch-22 of the integration world: It’s very challenging for the server side to consider all the multiple client expectations — On the other hand, the clients can’t perform any testing because the server controls the release dates. [Consumer-driven contracts and the framework PACT](https://docs.pact.io/) were born to formalize this process with a very disruptive approach — not the server defines the test plan of itself rather the client defines the tests of the… server! PACT can record the client expectation and put in a shared location, “broker”, so the server can pull the expectations and run on every build using PACT library to detect broken contracts — a client expectation that is not met. By doing so, all the server-client API mismatches are caught early during build/CI and might save you a great deal of frustration
-
+:white_check_mark: **이렇게 해라:** 당신의 마이크로서비스는 다수의 클라이언트를 가지고 있고 호환성의 이유로 여러 버전의 서비스를 운영합니다 (모든 사람을 만족시키기 위해서). 그런 상황에서 당신이 일부 필드를 변경하면 이 필드를 믿고 사용하던 일부 중요한 클라이언트는 화가 날 것입니다. 이것은 통합(integration) 세계에서 해결하기 어려운 진퇴양난에 놓인 문제입니다: 서버 사이드가 여러 클라이언트들의 모든 기댓값을 고려하는 것은 매우 어려운 일입니다. - 반면에, 서버가 릴리즈 날짜를 결정하기 때문에 클라이언트는 어떠한 테스트도 수행할 수 없습니다.
+[소비자 주도 계약 테스트(Consumer-driven contracts)와 PACT 프레임워크](https://docs.pact.io/)는 매우 파괴적인 방법으로 이러한 프로세스를 표준화하기 위해 나타났습니다. - 서버가 서버의 테스트 계획을 결정하지 않고, 클라이언트가 서버의 테스트를 결정합니다! PACT는 클라이언트의 기댓값을 기록하여 "브로커"라는 공유된 위치에 올려둘 수 있습니다. 그러면 서버는 그 기댓값을 당겨 받을 수 있고 빌드할 때마다 PACT 라이브러리를 사용하여 깨진 계약(contract - 충족되지 않은 클라이언트의 기댓값)을 감지할 수 있습니다. 이렇게 함으로써, 모든 서버-클라이언트 API 간 일치하지 않은 것들을 빌드/CI 환경에서 조기에 잡을 수 있고 당신의 큰 절망감을 줄여줄 수 있을 것입니다.
+
-❌ **Otherwise:** The alternatives are exhausting manual testing or deployment fear
+❌ **그렇지 않으면:** 대안은 지치도록 수동 테스트를 반복하거나 배포에 대한 두려움을 안고 가는 것뿐입니다.
-✏ Code Examples
+✏ 코드 예제
-### :clap: Doing It Right Example:
+### :clap: 올바른 예:

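아래는 PACT의 소비자 측 테스트가 기댓값을 기록하는 방식을 보여주는 최소한의 스케치입니다. (OrderClient/OrderService 등의 이름, orderClient 모듈, 상호작용 내용은 설명을 위한 가정입니다.)

```javascript
const path = require('path');
const { Pact } = require('@pact-foundation/pact');

// 소비자(클라이언트) 측: 기댓값을 기록하는 Pact mock 서버
const provider = new Pact({
  consumer: 'OrderClient',
  provider: 'OrderService',
  port: 1234,
  dir: path.resolve(process.cwd(), 'pacts'), // 생성된 계약(pact) 파일의 저장 위치
});

describe('주문 API 계약', () => {
  before(() => provider.setup());
  after(() => provider.finalize()); // 계약 파일 생성 - 이후 브로커에 게시하여 서버가 검증

  it('주문 목록을 요청하면, 주문 배열을 받는다', async () => {
    await provider.addInteraction({
      state: '주문이 존재함',
      uponReceiving: '주문 목록 요청',
      withRequest: { method: 'GET', path: '/orders' },
      willRespondWith: {
        status: 200,
        headers: { 'Content-Type': 'application/json' },
        body: [{ id: 1, status: 'pending' }],
      },
    });

    await orderClient.getOrders(); // 가정: mock 서버(http://localhost:1234)를 호출하는 테스트 대상 클라이언트
    await provider.verify(); // 기록된 기댓값이 모두 충족되었는지 확인
  });
});
```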
From 4ba6c308e26f5543b8617024bf0f5c9cee1dc951 Mon Sep 17 00:00:00 2001
From: sury
Date: Thu, 19 Sep 2019 13:25:51 +0900
Subject: [PATCH 086/502] Translation into Korean(2.4,2.5)
---
readme.kr.md | 26 +++++++++++++-------------
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/readme.kr.md b/readme.kr.md
index 47f0dde0..374e163b 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -760,32 +760,32 @@ describe('주문 서비스', function() {
-## ⚪ ️ 2.4 Test your middlewares in isolation
+## ⚪ ️ 2.4 당신의 미들웨어를 독립적으로 테스트 하십시오.
-:white_check_mark: **Do:** Many avoid Middleware testing because they represent a small portion of the system and require a live Express server. Both reasons are wrong — Middlewares are small but affect all or most of the requests and can be tested easily as pure functions that get {req,res} JS objects. To test a middleware function one should just invoke it and spy ([using Sinon for example](https://www.npmjs.com/package/sinon)) on the interaction with the {req,res} objects to ensure the function performed the right action. The library [node-mock-http](https://www.npmjs.com/package/node-mocks-http) takes it even further and factors the {req,res} objects along with spying on their behavior. For example, it can assert whether the http status that was set on the res object matches the expectation (See example below)
+:white_check_mark: **이렇게 해라:** 많은 사람들은 미들웨어(Middleware) 테스트를 피합니다. 미들웨어는 시스템의 작은 부분일 뿐이고 라이브 Express 서버가 필요하다고 생각하기 때문입니다. 하지만 두 가지 이유 모두 틀렸습니다. - 미들웨어는 작지만 모든 요청 또는 대부분의 요청에 영향을 미치고, {req,res} JS 객체를 받는 순수한 함수로 쉽게 테스트할 수 있습니다. 미들웨어 함수를 테스트하려면 함수를 호출하고, 함수가 올바르게 동작했는지 확인하기 위해 {req,res} 객체와의 인터랙션을 스파이(spy)([예를 들어 Sinon을 사용](https://www.npmjs.com/package/sinon))하면 됩니다. [node-mocks-http](https://www.npmjs.com/package/node-mocks-http) 라이브러리는 여기서 더 나아가 {req,res} 객체를 만들어 주면서 그 행위까지 스파이할 수 있게 해줍니다. 예를 들어, res 객체에 설정된 http 상태가 기댓값과 일치하는지 확인(assert)할 수 있습니다. (아래 예제를 보세요)
-❌ **Otherwise:** A bug in Express middleware === a bug in all or most requests
+❌ **그렇지 않으면:** Express 미들웨어에서의 버그 === 모든 요청 또는 대부분의 요청에서의 버그
-✏ Code Examples
+✏ 코드 예제
-### :clap:Doing It Right Example: Testing middleware in isolation without issuing network calls and waking-up the entire Express machine
+### :clap:올바른 예: 네트워크 호출 없이 전체 Express 시스템도 깨우지 않으면서 미들웨어를 독립적으로 테스트

```javascript
-//the middleware we want to test
+//테스트하고 싶은 미들웨어
const unitUnderTest = require('./middleware')
const httpMocks = require('node-mocks-http');
-//Jest syntax, equivelant to describe() & it() in Mocha
-test('A request without authentication header, should return http status 403', () => {
+//Jest 문법으로 Mocha의 describe() & it()과 동일
+test('헤더에 인증정보가 없는 요청은, http status 403을 리턴해야한다.', () => {
const request = httpMocks.createRequest({
method: 'GET',
url: '/user/42',
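headers: {
authentication: '' // 인증 헤더 없음
}
});
// 이하는 diff에서 잘려 나간 부분을 추정해 보완한 스케치입니다
const response = httpMocks.createResponse();
unitUnderTest(request, response); // 서버 없이 미들웨어를 순수 함수처럼 직접 호출
expect(response.statusCode).toBe(403); // res에 설정된 http status를 검증
});
```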
@@ -807,24 +807,24 @@ test('A request without authentication header, should return http status 403', (
-## ⚪ ️2.5 Measure and refactor using static analysis tools
-:white_check_mark: **Do:** Using static analysis tools helps by giving objective ways to improve code quality and keep your code maintainable. You can add static analysis tools to your CI build to abort when it finds code smells. Its main selling points over plain linting are the ability to inspect quality in the context of multiple files (e.g. detect duplications), perform advanced analysis (e.g. code complexity) and follow the history and progress of code issues. Two examples of tools you can use are [Sonarqube](https://www.sonarqube.org/) (2,600+ [stars](https://github.com/SonarSource/sonarqube)) and [Code Climate](https://codeclimate.com/) (1,500+ [stars](https://github.com/codeclimate/codeclimate))
+## ⚪ ️2.5 정적 분석 도구를 사용하여 측정하고 리팩토링 하십시오.
+:white_check_mark: **이렇게 해라:** 정적 분석 도구를 사용하면 코드 품질을 개선하고 코드를 유지보수 가능하게 지키는 객관적인 방법을 얻을 수 있습니다. 정적 분석 도구를 당신의 CI 빌드에 추가하여 코드 냄새(code smell)가 발견되면 빌드가 중단되도록 할 수 있습니다. 일반적인 린트(lint) 도구 대비 주요 장점은 여러 파일에 걸친 컨텍스트에서 품질을 검사하고(예: 중복 탐지), 고급 분석(예: 코드 복잡도)을 수행하며, 코드 이슈의 히스토리와 진행 상황을 추적할 수 있다는 것입니다. 사용할 수 있는 도구의 두 가지 예는 [Sonarqube](https://www.sonarqube.org/) (2,600+ [stars](https://github.com/SonarSource/sonarqube))와 [Code Climate](https://codeclimate.com/) (1,500+ [stars](https://github.com/codeclimate/codeclimate))입니다.
Credit: [Keith Holliday](https://github.com/TheHollidayInn)
-❌ **Otherwise:** With poor code quality, bugs and performance will always be an issue that no shiny new library or state of the art features can fix
+❌ **그렇지 않으면:** 코드 품질이 좋지 않으면 버그와 성능은 빛나는 새 라이브러리나 최신 기능으로 해결할 수 없는 문제가 될 것입니다.
-✏ Code Examples
+✏ 코드 예제
-### :clap: Doing It Right Example: CodeClimate, a commercial tool that can identify complex methods:
+### :clap: 올바른 예: 복잡도가 높은 함수를 찾아내는 상용 도구인 CodeClimate:

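정적 분석을 CI에 연결하는 방법의 한 예로, sonarqube-scanner npm 패키지를 사용하는 최소한의 스케치를 덧붙입니다. (서버 URL과 프로젝트 키는 설명을 위한 가정입니다.)

```javascript
// analyze.js - CI 빌드 단계에서 실행
const scanner = require('sonarqube-scanner');

scanner(
  {
    serverUrl: 'https://sonarqube.example.com', // 가정: 팀의 SonarQube 서버
    options: {
      'sonar.projectKey': 'my-service',
      'sonar.sources': 'src',
      'sonar.tests': 'test',
    },
  },
  () => process.exit() // 분석 제출 후 종료 - 품질 게이트 실패 시 CI가 빌드를 중단하도록 구성
);
```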
From bbfe0d6ead21d5a99a6f215bedc6dc44d1be694a Mon Sep 17 00:00:00 2001
From: sury05
Date: Mon, 23 Sep 2019 19:57:48 +0900
Subject: [PATCH 087/502] Translation into Korean(2.6, 2.7)
---
readme.kr.md | 44 +++++++++++++++++++++-----------------------
1 file changed, 21 insertions(+), 23 deletions(-)
diff --git a/readme.kr.md b/readme.kr.md
index 374e163b..6ad3c9d2 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -676,7 +676,7 @@ describe('주문 서비스', function() {
오해는 하지 마세요. 2019 테스트 피라미드에서 TDD와 단위 테스트는 여전히 강력한 기술이고 아마도 많은 어플리케이션에 가장 어울리는 기술입니다. 다른 모델과 마찬가지로, 테스트 피라미드는 유용하지만 [그것이 항상 맞는 것은 아닙니다](https://en.wikipedia.org/wiki/All_models_are_wrong). 예를 들어, 어떤 IOT 어플리케이션을 생각해 봅시다. 이 어플리케이션은 다수의 이벤트를 Kafka/RabbitMQ 같은 메세지 버스로 보내고 다시 데이터 웨어하우스로 흘려보냅니다. 그리고 이 데이터들은 어떤 분석 UI에서 조회됩니다. 우리는 정말 우리의 테스트 예산의 50%를 통합 중심적(integration-centric)이고 로직이 거의 없는 어플리케이션의 단위 테스트를 작성하는데 할애해야 할까요? 어플리케이션 유형들이 다양해질 수록(bots, crypto, Alexa-skills) 테스트 피라미드가 적합하지 않은 시나리오들을 발견할 가능성이 커집니다.
-지금이 당신의 테스트 포트폴리오를 넓히고 더 많은 테스트 유형들에 익숙해질 시간입니다. (다음 총알에서 몇 가지 아이디어들을 제안합니다.) 테스트 피라미드 같은 모델들도 염두에 둘 뿐만 아니라 당신이 직면하고 있는 현실 세계의 문제들에 적합한 테스트 유형들을 찾으세요. ("우리 API 깨졌어. Consumer-driven contract 테스트 작성하자!" 처럼요.) 위험성 분석을 기반으로 포르폴리오를 구축하는 투자자처럼 당신의 테스트를 다양화하세요 - 문제가 발생할 수 있는 부분을 가늠하고 잠재적 위험성을 줄일 수 있는 예방 방법을 찾으세요.
+지금이 당신의 테스트 포트폴리오를 넓히고 더 많은 테스트 유형들에 익숙해질 시간입니다. (다음 항목에서 몇 가지 아이디어들을 제안합니다.) 테스트 피라미드 같은 모델들도 염두에 둘 뿐만 아니라 당신이 직면하고 있는 현실 세계의 문제들에 적합한 테스트 유형들을 찾으세요. ("우리 API 깨졌어. Consumer-driven contract 테스트 작성하자!" 처럼요.) 위험성 분석을 기반으로 포트폴리오를 구축하는 투자자처럼 당신의 테스트를 다양화하세요 - 문제가 발생할 수 있는 부분을 가늠하고 잠재적 위험성을 줄일 수 있는 예방 방법을 찾으세요.
주의 사항 : 소프트웨어 세계에서의 TDD 논쟁은 전형적인 잘못된 이분법입니다. 어떤 사람들은 TDD를 모든 곳에 적용하라고 주장하지만, 다른 일부는 TDD를 악마라고 생각합니다. 절대적으로 한쪽만 주장하는 사람들은 모두 틀렸습니다 :]
@@ -838,72 +838,70 @@ Credit::
-## ⚪ ️2.7 Avoid global test fixtures and seeds, add data per-test
-
-:white_check_mark: **Do:** Going by the golden rule (bullet 0), each test should add and act on its own set of DB rows to prevent coupling and easily reason about the test flow. In reality, this is often violated by testers who seed the DB with data before running the tests (also known as ‘test fixture’) for the sake of performance improvement. While performance is indeed a valid concern — it can be mitigated (see “Component testing” bullet), however, test complexity is a much painful sorrow that should govern other considerations most of the time. Practically, make each test case explicitly add the DB records it needs and act only on those records. If performance becomes a critical concern — a balanced compromise might come in the form of seeding the only suite of tests that are not mutating data (e.g. queries)
+## ⚪ ️2.7 글로벌한 초기 테스트 데이터 집합을 만들지 말고 각 테스트마다 데이터를 추가하십시오.
+:white_check_mark: **이렇게 해라:** 황금률(섹션 0)에 따르면 각 테스트는 커플링을 방지하고 테스트 흐름을 쉽게 추론할 수 있도록 자신만의 DB 데이터를 추가하고 그 데이터로만 테스트해야 합니다. 하지만 현실에서는 성능 향상을 위해 테스트 실행 전에 초기 데이터를 DB에 채워 넣는(‘test fixture’라고 알려져 있음) 테스터들에 의해 이 규칙이 종종 깨지곤 합니다. 성능은 실제로 타당한 걱정거리입니다. - 이 문제는 완화될 수 있습니다 ('컴포넌트 테스트' 항목을 보세요). 하지만 테스트 복잡성은 대부분의 경우 다른 고려사항들보다 우선해야 할 훨씬 더 고통스러운 문제입니다. 실질적으로, 각 테스트 케이스가 필요한 DB 레코드를 명시적으로 추가하고 그 레코드만 가지고 동작하게 하세요. 만약 성능이 정말 중요한 문제라면 - 데이터를 변경하지 않는 테스트 묶음(예: 쿼리)에 대해서만 초기 데이터를 채우는 형태로 균형 있게 타협할 수 있습니다.
-❌ **Otherwise:** Few tests fail, a deployment is aborted, our team is going to spend precious time now, do we have a bug? let’s investigate, oh no — it seems that two tests were mutating the same seed data
-
+❌ **그렇지 않으면:** 테스트가 실패하고 배포는 중단되어 팀원들은 지금 소중한 시간을 할애해야 합니다. 버그가 있습니까? 찾아봅시다, 오 이런 - 두 개의 테스트가 동일한 테스트 데이터(seed data)를 변경한 것으로 보입니다.
-✏ Code Examples
+✏ 코드 예제
-### :thumbsdown: Anti Pattern Example: tests are not independent and rely on some global hook to feed global DB data
+### :thumbsdown: 올바르지 않은 예: 테스트는 독립적이지 않고 테스트마다 글로벌 DB 데이터를 사용하도록 훅이 걸려있습니다.

```javascript
before(async () => {
- //adding sites and admins data to our DB. Where is the data? outside. At some external json or migration framework
+ // DB에 사이트와 어드민 데이터를 추가합니다. 데이터는 어디에 있나요? 외부에 있습니다. 외부 json 파일이나 마이그레이션 프레임워크에 있습니다.
await DB.AddSeedDataFromJson('seed.json');
});
-it("When updating site name, get successful confirmation", async () => {
- //I know that site name "portal" exists - I saw it in the seed files
+it("사이트 이름을 변경하면, 성공 결과값을 받아온다", async () => {
+ //"portal"이라는 이름의 사이트가 있다는 것을 알고 있습니다. - 씨드 파일에서 봤습니다.
const siteToUpdate = await SiteService.getSiteByName("Portal");
const updateNameResult = await SiteService.changeName(siteToUpdate, "newName");
expect(updateNameResult).to.be(true);
});
-it("When querying by site name, get the right site", async () => {
- //I know that site name "portal" exists - I saw it in the seed files
+it("사이트 이름으로 조회했을때, 해당 사이트를 가져온다", async () => {
+ //"portal"이라는 이름의 사이트가 있다는 것을 알고 있습니다. - 씨드 파일에서 봤습니다.
const siteToCheck = await SiteService.getSiteByName("Portal");
- expect(siteToCheck.name).to.be.equal("Portal"); //Failure! The previous test change the name :[
+ expect(siteToCheck.name).to.be.equal("Portal"); //실패! 이전 테스트에서 이름이 변경되었습니다 :[
});
```
-### :clap: Doing It Right Example: We can stay within the test, each test acts on its own set of data
+### :clap: 올바른 예: 테스트 안에서만 머물며 각 테스트는 자신의 데이터 세트 안에서만 동작합니다.
```javascript
-it("When updating site name, get successful confirmation", async () => {
- //test is adding a fresh new records and acting on the records only
+it("사이트 이름을 변경하면, 성공 결과값을 받아온다", async () => {
+ //테스트는 새로운 신규 레코드를 추가하고 그 레코드를 가지고 동작합니다.
const siteUnderTest = await SiteService.addSite({
name: "siteForUpdateTest"
});
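  // 이하는 diff에서 잘려 나간 부분을 추정해 보완한 스케치입니다
  const updateNameResult = await SiteService.changeName(siteUnderTest, 'newName');
  expect(updateNameResult).to.be(true); // 이 테스트가 직접 추가한 레코드만 검증
});
```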
From 6509c23243fce1fb05bf3710797b887bbc6de87c Mon Sep 17 00:00:00 2001
From: sury05
Date: Tue, 24 Sep 2019 14:55:41 +0900
Subject: [PATCH 088/502] Translation into Korean (4.1)
---
readme.kr.md | 18 ++++++++----------
1 file changed, 8 insertions(+), 10 deletions(-)
diff --git a/readme.kr.md b/readme.kr.md
index 6ad3c9d2..826d20c0 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -1537,34 +1537,32 @@ cy.eyesCheckWindow('mark as completed');
-# Section 4️⃣: Measuring Test Effectiveness
+# Section 4️⃣: 테스트 효과 측정
-## ⚪ ️ 4.1 Get enough coverage for being confident, ~80% seems to be the lucky number
+## ⚪ ️ 4.1 자신감을 갖기에 충분한 커버리지를 확보하십시오. ~80%가 이상적인 것 같습니다.
-:white_check_mark: **Do:** The purpose of testing is to get enough confidence for moving fast, obviously the more code is tested the more confident the team can be. Coverage is a measure of how many code lines (and branches, statements, etc) are being reached by the tests. So how much is enough? 10–30% is obviously too low to get any sense about the build correctness, on the other side 100% is very expensive and might shift your focus from the critical paths to the exotic corners of the code. The long answer is that it depends on many factors like the type of application — if you’re building the next generation of Airbus A380 than 100% is a must, for a cartoon pictures website 50% might be too much. Although most of the testing enthusiasts claim that the right coverage threshold is contextual, most of them also mention the number 80% as a thumb of a rule ([Fowler: “in the upper 80s or 90s”](https://martinfowler.com/bliki/TestCoverage.html)) that presumably should satisfy most of the applications.
-
-Implementation tips: You may want to configure your continuous integration (CI) to have a coverage threshold ([Jest link](https://jestjs.io/docs/en/configuration.html#collectcoverage-boolean)) and stop a build that doesn’t stand to this standard (it’s also possible to configure threshold per component, see code example below). On top of this, consider detecting build coverage decrease (when a newly committed code has less coverage) — this will push developers raising or at least preserving the amount of tested code. All that said, coverage is only one measure, a quantitative based one, that is not enough to tell the robustness of your testing. And it can also be fooled as illustrated in the next bullets
+:white_check_mark: **이렇게 해라:** 테스트의 목적은 빠르게 움직일 수 있는 충분한 자신감을 얻는 것입니다. 분명히 더 많은 코드가 테스트될수록 팀은 더 자신감을 가질 수 있습니다. 커버리지는 얼마나 많은 코드 라인(그리고 브랜치, 구문(statements) 등)이 테스트에 의해 도달되는지에 대한 지표입니다. 그렇다면 어느 정도가 충분할까요? 10–30%는 빌드 정확성을 판단하기에는 분명히 너무 낮습니다. 반면에 100%는 비용이 매우 많이 들고 당신의 관심을 중요한 경로가 아닌 코드의 지엽적인 구석들로 옮겨버릴지도 모릅니다. 긴 답변은, 적정 수치는 어플리케이션 유형과 같은 다양한 요소들에 따라 달라진다는 것입니다. - 만약 당신이 Airbus A380의 차세대 버전을 만든다면 100%는 필수지만, 만화 그림 웹사이트라면 50%도 과할 수 있습니다. 비록 테스트에 열성인 대부분의 사람들은 적절한 커버리지 임계값이 상황에 따라 다르다고 주장하지만, 그들 중 대부분은 경험 법칙으로 대다수의 어플리케이션을 만족시킬 숫자인 80%([마틴 파울러: “in the upper 80s or 90s”](https://martinfowler.com/bliki/TestCoverage.html))를 언급합니다.
+구현 팁: 당신의 CI 환경에서 커버리지 임계치를 설정하고([Jest 링크](https://jestjs.io/docs/en/configuration.html#collectcoverage-boolean)) 그 기준에 미치지 못하면 빌드를 멈추도록 하고 싶을 것입니다. (컴포넌트 당 임계치를 설정하는 것도 가능합니다. 아래 예제 코드를 보세요.) 여기에 더해, 빌드 커버리지 감소(새로 커밋된 코드의 커버리지가 더 낮을 때)를 감지하는 것도 고려해 보세요. - 이렇게 함으로써 개발자들이 테스트된 코드의 양을 늘리거나 적어도 유지하도록 독려할 수 있습니다. 그렇긴 하지만, 커버리지는 하나의 양적 지표일 뿐 테스트의 견고성을 말해주기에는 충분하지 않습니다. 그리고 다음 항목들에 나와 있는 것처럼 커버리지는 속을 수도 있습니다.
-❌ **Otherwise:** Confidence and numbers go hand in hand, without really knowing that you tested most of the system — there will also be some fear. and fear will slow you down
-
+❌ **그렇지 않으면:** 자신감과 수치는 함께 갑니다. 시스템의 대부분을 테스트했다는 것을 정말로 알지 못하면 두려움이 생길 것입니다. 그리고 두려움은 당신을 느리게 만들 것입니다.
-✏ Code Examples
+✏ 코드 예제
-### :clap: Example: A typical coverage report
+### :clap: 예제: 일반적인 커버리지 보고서

-### :clap: Doing It Right Example: Setting up coverage per component (using Jest)
+### :clap: 올바른 예: 컴포넌트 당 커버리지를 설정하십시오. (Jest를 사용하여)

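아래는 컴포넌트(디렉토리) 당 커버리지 임계치가 jest.config.js에서 어떤 모습인지 보여주는 최소한의 스케치입니다. (경로와 수치는 설명을 위한 가정입니다.)

```javascript
// jest.config.js - 전역 임계치와 컴포넌트별 임계치를 함께 설정
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: { branches: 80, functions: 80, lines: 80, statements: 80 },
    // 결제처럼 중요한 컴포넌트에는 더 높은 기준을 요구 (경로는 가정)
    './src/components/payment/': { lines: 95 },
  },
};
```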
From 052aebf3c906b7aa2e7de4a4bc5f0e6c6aace157 Mon Sep 17 00:00:00 2001
From: sury05
Date: Wed, 25 Sep 2019 11:25:58 +0900
Subject: [PATCH 089/502] Translation into Korean (4.2)
---
readme.kr.md | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/readme.kr.md b/readme.kr.md
index 826d20c0..3062f4c4 100644
--- a/readme.kr.md
+++ b/readme.kr.md
@@ -1575,22 +1575,22 @@ cy.eyesCheckWindow('mark as completed');
-## ⚪ ️ 4.2 Inspect coverage reports to detect untested areas and other oddities
+## ⚪ ️ 4.2 커버리지 리포트를 확인하여 테스트 되지 않은 부분과 기타 이상한 점들을 감지하십시오.
-:white_check_mark: **Do:** Some issues sneak just under the radar and are really hard to find using traditional tools. These are not really bugs but more of surprising application behavior that might have a severe impact. For example, often some code areas are never or rarely being invoked — you thought that the ‘PricingCalculator’ class is always setting the product price but it turns out it is actually never invoked although we have 10000 products in DB and many sales… Code coverage reports help you realize whether the application behaves the way you believe it does. Other than that, it can also highlight which types of code is not tested — being informed that 80% of the code is tested doesn’t tell whether the critical parts are covered. Generating reports is easy — just run your app in production or during testing with coverage tracking and then see colorful reports that highlight how frequent each code area is invoked. If you take your time to glimpse into this data — you might find some gotchas
+:white_check_mark: **이렇게 해라:** 일부 문제들은 레이더망 아래로 숨어버려 기존의 툴들을 사용하여 찾기 매우 어렵습니다. 이것들은 실제로 버그는 아니지만 심각한 영향을 줄 수 있는 생각지 못한 어플리케이션 동작들입니다. 예를 들어, 일부 코드 영역은 절대 또는 거의 호출되지 않습니다. - ‘PricingCalculator’라는 상품 가격을 설정하는 클래스가 있다고 생각해 보세요. DB에 10000개의 상품이 있고 판매도 많지만 이 클래스는 실제로 절대 호출되지 않는 것으로 밝혀졌습니다... 코드 커버리지 리포트를 통해 어플리케이션이 당신이 믿는 대로 동작하는지 확인할 수 있습니다. 그 외에도 리포트는 어떤 코드들이 테스트되지 않았는지를 강조해서 보여줄 수도 있습니다. - 코드의 80%가 테스트 되었다는 알림이 중요한 부분이 커버되었는지에 대한 여부를 나타내진 않습니다. 리포트를 만드는 것은 쉽습니다. - 운영 또는 테스트를 할 때 커버리지 트래킹을 하면서 어플리케이션을 실행하세요. 그러고 나서 각 코드 영역이 얼마나 자주 호출됐는지를 나타내는 형형색색의 리포트를 보세요. 잠깐 시간을 내서 이 데이터들을 보면 몇 가지 문제점들을 발견하게 될 수도 있습니다.
-❌ **Otherwise:** If you don’t know which parts of your code are left un-tested, you don’t know where the issues might come from
-
+❌ **그렇지 않으면:** 어떤 코드가 테스트되지 않았는지 알 수 없으면 문제의 원인도 알 수 없습니다.
-✏ Code Examples
+✏ 코드 예제
-### :thumbsdown: Anti-Pattern Example: What’s wrong with this coverage report? based on a real-world scenario where we tracked our application usage in QA and find out interesting login patterns (Hint: the amount of login failures is non-proportional, something is clearly wrong. Finally it turned out that some frontend bug keeps hitting the backend login API)
+### :thumbsdown: 올바르지 않은 예: 이 커버리지 리포트에는 어떤 문제가 있나요? 현실 세계 시나리오로 QA에서 어플리케이션 사용을 추적했고 흥미로운 로그인 패턴을 찾았습니다. (힌트: 로그인 실패 횟수가 비례하지 않습니다. 분명히 무언가 잘못되었습니다.) 마침내 일부 프론트엔드 버그가 백엔드 로그인 API를 계속 호출하고 있다는 것이 밝혀졌습니다.
+

From 1a6af40b199a4484a9bc9b65fcbc6b3d5db39fac Mon Sep 17 00:00:00 2001
From: NoriSte
Date: Tue, 1 Oct 2019 09:59:16 +0200
Subject: [PATCH 090/502] Optimize the image
---
assets/headspace.png | Bin 426823 -> 333277 bytes
1 file changed, 0 insertions(+), 0 deletions(-)
diff --git a/assets/headspace.png b/assets/headspace.png
index 324d820417b82f9ce22b5ed31b7fb1f303e0c9a4..83ed85d7e1d523a8188e9c3366ff500c4dda77ec 100644
GIT binary patch
(base85-encoded binary delta for assets/headspace.png omitted)