## Intent

Handle costly remote service calls in such a way that the failure of a single service/component
cannot bring the whole application down, and we can reconnect to the service as soon as possible.

## Explanation

Real world example

> Imagine a web application that has both local files/images and remote database entries to serve.
> The database might not be responding due to a variety of reasons, so if the application keeps
> trying to read from the database using multiple threads/processes, soon all of them will hang,
> causing our entire web application to crash. We should be able to detect this situation and show
> the user an appropriate message so that they can explore other parts of the app unaffected by
> the database failure.

In plain words

> Circuit Breaker allows graceful handling of failed remote services. It's especially useful when
> all parts of our application are highly decoupled from each other, and failure of one component
> doesn't mean the other parts will stop working.

Wikipedia says

> Circuit breaker is a design pattern used in modern software development. It is used to detect
> failures and encapsulates the logic of preventing a failure from constantly recurring, during
> maintenance, temporary external system failure or unexpected system difficulties.
## Programmatic Example

So, how does this all come together? With the above example in mind we will imitate the
functionality in a simple example. A monitoring service mimics the web app and makes both local and
remote calls.
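
The monitoring service's code is elided here; as a rough, self-contained sketch of the idea (the
class and method names below are illustrative assumptions, not the repository's exact code), the
remote call is injected so it can later be routed through a circuit breaker:

```java
import java.util.function.Supplier;

// Simplified sketch of a monitoring service: local resources are served
// directly, while the remote (costly) call is injected so a circuit breaker
// can wrap it. Names are illustrative, not the repository's exact code.
public class MonitoringService {

  private final Supplier<String> remoteCallViaCircuitBreaker;

  public MonitoringService(Supplier<String> remoteCallViaCircuitBreaker) {
    this.remoteCallViaCircuitBreaker = remoteCallViaCircuitBreaker;
  }

  // Local resources are fetched directly, with no circuit breaker.
  public String localResourceResponse() {
    return "Local Service is working";
  }

  // Remote calls go through whatever the injected supplier does,
  // e.g. a circuit breaker wrapping the real remote service.
  public String remoteResourceResponse() {
    return remoteCallViaCircuitBreaker.get();
  }

  public static void main(String[] args) {
    MonitoringService service =
        new MonitoringService(() -> "Remote service is working");
    System.out.println(service.localResourceResponse());
    System.out.println(service.remoteResourceResponse());
  }
}
```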

As can be seen, it makes the call to get local resources directly, but it wraps the call to the
remote (costly) service in a circuit breaker object, which prevents faults as follows:
```java
public class CircuitBreaker {
  // ... (implementation elided)
}
```

How does the above pattern prevent failures? Let's understand via this finite state machine
implemented by it.

- We initialize the Circuit Breaker object with certain parameters: `timeout`, `failureThreshold` and `retryTimePeriod`, which help determine how resilient the API is.
- Initially, we are in the `closed` state and no remote calls to the API have occurred.
- Every time the call succeeds, we reset the state to as it was in the beginning.
- If the number of failures crosses a certain threshold, we move to the `open` state, which acts just like an open circuit and prevents remote service calls from being made, thus saving resources. (Here, we return the response called `stale response from API`.)
- Once we exceed the retry timeout period, we move to the `half-open` state and make another call to the remote service to check if the service is working so that we can serve fresh content. A failure sets it back to the `open` state and another attempt is made after the retry timeout period, while a success sets it to the `closed` state so that everything starts working normally again.
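
The state transitions above can be sketched as a minimal, self-contained simplification (field and
method names here are illustrative assumptions, not the repository's exact code):

```java
import java.util.function.Supplier;

// Minimal sketch of the circuit breaker state machine described above.
// Structure and names are illustrative, not the repository's exact code.
public class SimpleCircuitBreaker {

  enum State { CLOSED, OPEN, HALF_OPEN }

  private final int failureThreshold;
  private final long retryTimePeriodMillis;
  private int failureCount = 0;
  private long lastFailureTime = 0;
  private State state = State.CLOSED;

  public SimpleCircuitBreaker(int failureThreshold, long retryTimePeriodMillis) {
    this.failureThreshold = failureThreshold;
    this.retryTimePeriodMillis = retryTimePeriodMillis;
  }

  public String call(Supplier<String> remoteCall) {
    if (state == State.OPEN) {
      if (System.currentTimeMillis() - lastFailureTime > retryTimePeriodMillis) {
        state = State.HALF_OPEN;              // allow one trial call
      } else {
        return "stale response from API";     // short-circuit: save resources
      }
    }
    try {
      String response = remoteCall.get();
      failureCount = 0;                       // success: reset to CLOSED
      state = State.CLOSED;
      return response;
    } catch (RuntimeException e) {
      failureCount++;
      lastFailureTime = System.currentTimeMillis();
      if (state == State.HALF_OPEN || failureCount >= failureThreshold) {
        state = State.OPEN;                   // trip the breaker
      }
      return "stale response from API";
    }
  }

  public State getState() {
    return state;
  }

  public static void main(String[] args) {
    SimpleCircuitBreaker breaker = new SimpleCircuitBreaker(2, 1000);
    Supplier<String> failing = () -> { throw new RuntimeException("remote down"); };
    breaker.call(failing);
    breaker.call(failing);                    // second failure trips the breaker
    System.out.println(breaker.getState());
    System.out.println(breaker.call(failing));
  }
}
```

In this sketch the breaker trips after `failureThreshold` consecutive failures and returns the
stale response until `retryTimePeriodMillis` has elapsed, at which point a single trial call
decides whether to close the circuit again.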
## Class diagram
## Applicability

Use the Circuit Breaker pattern when

- Building a fault-tolerant application where failure of some services shouldn't bring the entire application down.
- Building a continuously running (always-on) application, so that its components can be upgraded without shutting it down entirely.