Commit a4f2d14

Update README.md
1 parent 96c16a8 commit a4f2d14


circuit-breaker/README.md

+31 −16

## Intent

Handle costly remote service calls in such a way that the failure of a single service/component
cannot bring the whole application down, and we can reconnect to the service as soon as possible.

## Explanation

Real world example

> Imagine a web application that has both local files/images and remote database entries to serve.
> The database might not be responding due to a variety of reasons, so if the application keeps
> trying to read from the database using multiple threads/processes, soon all of them will hang,
> causing our entire web application to crash. We should be able to detect this situation and show
> the user an appropriate message so that he/she can explore other parts of the app unaffected by
> the database failure.

In plain words

> Circuit Breaker allows graceful handling of failed remote services. It's especially useful when
> all parts of our application are highly decoupled from each other, and failure of one component
> doesn't mean the other parts will stop working.

Wikipedia says

> Circuit breaker is a design pattern used in modern software development. It is used to detect
> failures and encapsulates the logic of preventing a failure from constantly recurring, during
> maintenance, temporary external system failure or unexpected system difficulties.

## Programmatic Example

So, how does this all come together? With the above example in mind, we will imitate the
functionality in a simple example. A monitoring service mimics the web app and makes both local and
remote calls.

The service architecture is as follows:

![alt text](./etc/ServiceDiagram.PNG "Service Diagram")

In terms of code, the end user application is:

```java
public class App {
  // ...
}
```
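As a rough sketch of how such an end-user application might wire the pieces together (the constructor arguments and the method names `localResourceResponse` and `remoteResourceResponse` are illustrative assumptions, not necessarily the names used in this project):

```java
// Illustrative sketch only -- names and parameters are assumptions.
public class App {

  public static void main(String[] args) {
    // Breaker guarding the remote (database) call: 2s call timeout,
    // 1 tolerated failure, ~2s (in nanoseconds) before a retry is attempted.
    CircuitBreaker circuitBreaker = new CircuitBreaker(2000, 1, 2_000_000_000L);
    MonitoringService monitoringService = new MonitoringService(circuitBreaker);

    // Local resources are always served directly.
    System.out.println(monitoringService.localResourceResponse());

    // Remote resources go through the circuit breaker; if the database is down,
    // the user still gets a quick (stale) response instead of a hanging call.
    System.out.println(monitoringService.remoteResourceResponse());
  }
}
```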

The monitoring service:

```java
public class MonitoringService {
  // ...
}
```
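A minimal sketch of what such a monitoring service might look like, assuming a `CircuitBreaker` with an `attemptRequest` method (both names are illustrative, not necessarily this project's API):

```java
// Illustrative sketch only -- method names and the breaker API are assumptions.
public class MonitoringService {

  private final CircuitBreaker circuitBreaker;

  public MonitoringService(CircuitBreaker circuitBreaker) {
    this.circuitBreaker = circuitBreaker;
  }

  // Local resources (files, images) are fetched directly -- no breaker involved.
  public String localResourceResponse() {
    return "Local Service is working";
  }

  // The costly remote (database) call is routed through the circuit breaker,
  // which decides whether to attempt it or to return a stale response instead.
  public String remoteResourceResponse() {
    return circuitBreaker.attemptRequest(() -> "Remote database is working");
  }
}
```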
As can be seen, it makes the call to get local resources directly, but it wraps the call to the
remote (costly) service in a circuit breaker object, which prevents faults as follows:

```java
public class CircuitBreaker {
  // ...
}
```

How does the above pattern prevent failures? Let's take a look at the finite state machine it
implements.

![alt text](./etc/StateDiagram.PNG "State Diagram")

- We initialize the Circuit Breaker object with certain parameters: `timeout`, `failureThreshold` and `retryTimePeriod`, which help determine how resilient the API is.
- Initially, we are in the `closed` state and no remote calls to the API have occurred.
- Every time the call succeeds, we reset the state to what it was in the beginning.
- If the number of failures crosses a certain threshold, we move to the `open` state, which acts just like an open circuit and prevents remote service calls from being made, thus saving resources. (Here, we return the response called `stale response from API`.)
- Once we exceed the retry timeout period, we move to the `half-open` state and make another call to the remote service to check if the service is working so that we can serve fresh content. A failure sets it back to the `open` state and another attempt is made after the retry timeout period, while a success sets it to the `closed` state so that everything starts working normally again (see the sketch after this list).

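To make these transitions concrete, here is a minimal sketch of the state logic described above; the field and method names (`state`, `failureCount`, `evaluateState`, `attemptRequest`) are assumptions for illustration, not this project's exact implementation.

```java
import java.util.function.Supplier;

// Illustrative sketch only -- the real implementation in this project may differ.
public class CircuitBreaker {

  enum State { CLOSED, OPEN, HALF_OPEN }

  private final long timeout;          // call timeout parameter (not enforced in this sketch)
  private final int failureThreshold;  // failures tolerated before the circuit opens
  private final long retryTimePeriod;  // nanoseconds to stay open before probing again
  private int failureCount = 0;
  private long lastFailureTime = 0;
  private State state = State.CLOSED;

  public CircuitBreaker(long timeout, int failureThreshold, long retryTimePeriod) {
    this.timeout = timeout;
    this.failureThreshold = failureThreshold;
    this.retryTimePeriod = retryTimePeriod;
  }

  private void evaluateState() {
    if (failureCount >= failureThreshold) {
      // Too many failures: stay open until the retry period elapses, then allow one probe.
      state = (System.nanoTime() - lastFailureTime) > retryTimePeriod
          ? State.HALF_OPEN
          : State.OPEN;
    } else {
      state = State.CLOSED;
    }
  }

  public String attemptRequest(Supplier<String> remoteCall) {
    evaluateState();
    if (state == State.OPEN) {
      // Open circuit: skip the remote call entirely and serve a cached/stale response.
      return "stale response from API";
    }
    try {
      // Closed or half-open: try the real call.
      String response = remoteCall.get();
      failureCount = 0;                 // success resets the breaker to closed
      state = State.CLOSED;
      return response;
    } catch (RuntimeException e) {
      failureCount++;                   // failure counts toward the threshold and restarts the timer
      lastFailureTime = System.nanoTime();
      return "stale response from API";
    }
  }
}
```

Note how a success in the `half-open` probe immediately closes the circuit and resets the failure count, while a failure re-opens it and restarts the retry timer.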
## Class diagram
![alt text](./etc/circuit-breaker.urm.png "Circuit Breaker class diagram")

## Applicability

Use the Circuit Breaker pattern when

- Building a fault-tolerant application where failure of some services shouldn't bring the entire application down.
- Building a continuously running (always-on) application, so that its components can be upgraded without shutting it down entirely.

## Related Patterns
