
Commit d25f1b0

Made the increasing-replicas example easier to understand
Closes #136
1 parent: aa29969

1 file changed (+4 -4 lines)

020_Distributed_Cluster/30_Scale_more.asciidoc (+4 -4)
@@ -27,9 +27,9 @@ PUT /blogs/_settings
 image::images/02-05_replicas.png["A three-node cluster with two replica shards"]
 
 As can be seen in <<cluster-three-nodes-two-replicas>>, the `blogs` index now
-has 9 shards: 3 primaries and 6 replicas. If we were to add another three
-nodes to our 6 node cluster, we would again have one shard per node, and our
-cluster would be able to handle *50%* more search requests than before.
+has 9 shards: 3 primaries and 6 replicas. This means that we can scale out to
+a total of 9 nodes, again with one shard per node. This would allow us to
+*triple* search performance compared to our original three node cluster.
 
 [NOTE]
 ===================================================
@@ -39,7 +39,7 @@ increase our performance at all because each shard has access to a smaller
 fraction of its node's resources. You need to add hardware to increase
 throughput.
 
-But these extra replicas do mean that we have more redundancy. With the node
+But these extra replicas do mean that we have more redundancy: with the node
 configuration above, we can now afford to lose two nodes without losing any
 data.
 
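For reference, the hunk header above points at the `PUT /blogs/_settings` request that sits a few lines earlier in 30_Scale_more.asciidoc but is not included in this diff. The 9-shard layout the new wording describes (3 primaries, 6 replicas) would come from raising the replica count to 2; a minimal sketch of such a request, assuming the book's usual console syntax, is:

# assumed request body; the actual settings snippet is not shown in this diff
PUT /blogs/_settings
{
   "number_of_replicas" : 2
}

With 3 primary shards and `number_of_replicas` set to 2, the index holds 3 x (1 + 2) = 9 shards in total, which is why a 9-node cluster ends up with exactly one shard per node.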