
Commit 7e4c8f9

Edited 020_Distributed_Cluster/20_Add_failover.asciidoc with Atlas code editor
1 parent ec5b1d1 commit 7e4c8f9

1 file changed: +15 -17 lines changed

020_Distributed_Cluster/20_Add_failover.asciidoc
@@ -1,41 +1,39 @@
-=== Add failover
+=== Add Failover
 
-Running a single node means that you have a single point of failure -- there
-is no redundancy.((("failover", "adding"))) Fortunately all we need to do to protect ourselves from data
+Running a single node means that you have a single point of failure--there
+is no redundancy.((("failover", "adding"))) Fortunately, all we need to do to protect ourselves from data
 loss is to start another node.
 
-.Starting a second node
+.Starting a Second Node
 ***************************************
 
-To test out what happens when you add a second((("nodes", "starting a second node"))) node, you can start a new node
+To test what happens when you add a second((("nodes", "starting a second node"))) node, you can start a new node
 in exactly the same way as you started the first one (see
-<<running-elasticsearch>>), and from the same directory -- multiple nodes can
+<<running-elasticsearch>>), and from the same directory. Multiple nodes can
 share the same directory.
 
 As long as the second node has the same `cluster.name` as the first node (see
 the `./config/elasticsearch.yml` file), it should automatically discover and
 join the cluster run by the first node. If it doesn't, check the logs to find
 out what went wrong. It may be that multicast is disabled on your network, or
-there is a firewall preventing your nodes from communicating.
+that a firewall is preventing your nodes from communicating.
 
 ***************************************
 
 If we start a second node, our cluster would look like <<cluster-two-nodes>>.
 
 [[cluster-two-nodes]]
-.A two-node cluster -- all primary and replica shards are allocated
+.A two-node cluster--all primary and replica shards are allocated
 image::images/elas_0203.png["A two-node cluster"]
 
-The((("clusters", "two-node cluster"))) second node has joined the cluster and three _replica shards_ have ((("replica shards", "allocated to second node")))been
-allocated to it -- one for each primary shard. That means that we can lose
-either node and all of our data will be intact.
+The((("clusters", "two-node cluster"))) second node has joined the cluster, and three _replica shards_ have ((("replica shards", "allocated to second node")))been
+allocated to it--one for each primary shard. That means that we can lose
+either node, and all of our data will be intact.
 
-Any newly indexed document will first be stored on a primary shard, then
-copied in parallel to the associated replica shard(s). This ensures that our
-document can be retrieved from a primary shard or from any of its replicas.
+Any newly indexed document will first be stored on a primary shard, and then copied in parallel to the associated replica shard(s). This ensures that our document can be retrieved from a primary shard or from any of its replicas.
 
-The `cluster-health` now ((("cluster health", "checking after adding second node")))shows a status of `green`, which means that all 6
-shards (all 3 primary shards and all 3 replica shards) are active:
+The `cluster-health` now ((("cluster health", "checking after adding second node")))shows a status of `green`, which means that all six
+shards (all three primary shards and all three replica shards) are active:
 
 [source,js]
 --------------------------------------------------
@@ -54,4 +52,4 @@ shards (all 3 primary shards and all 3 replica shards) are active:
 --------------------------------------------------
 <1> Cluster `status` is `green`.
 
-Our cluster is not only fully functional but also _always available_.
+Our cluster is not only fully functional, but also _always available_.
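As a quick sketch of what the "Starting a Second Node" sidebar in the diff describes (assuming the 1.x-era layout the book uses, with the config file at `./config/elasticsearch.yml`; the cluster name below is illustrative, not one taken from the text): both nodes read the same `cluster.name` setting, and the second node is started the same way as the first, for example by running `./bin/elasticsearch` again from the same directory.

[source,yaml]
--------------------------------------------------
# ./config/elasticsearch.yml -- identical on both nodes so that they
# discover each other and form one cluster; the name itself is an
# illustrative choice, not one used by the book
cluster.name: my_test_cluster
--------------------------------------------------

If the names differ, the second node will form its own one-node cluster instead of joining the first.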
