
Commit fcdf493

Typos, rewording
1 parent bf21de0 commit fcdf493

File tree

1 file changed (+9 −8 lines)


510_Deployment/50_heap.asciidoc

+9 −8
@@ -122,7 +122,7 @@ $ JAVA_HOME=`/usr/libexec/java_home -v 1.8` java -Xmx32767m -XX:+PrintFlagsFinal
      bool UseCompressedOops = false
 ----
 
-The morale of the story is that the exact cutoff to leverage compressed oops
+The moral of the story is that the exact cutoff to leverage compressed oops
 varies from JVM to JVM, so take caution when taking examples from elsewhere and
 be sure to check your system with your configuration and JVM.
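
For convenience, the check that the reworded sentence points to can be run directly against whatever JVM you deploy on. A minimal sketch, assuming the macOS-style `/usr/libexec/java_home` shown in the hunk header (adjust the path, Java version, and `-Xmx` value for your own system); the exact cutoff varies by JVM, which is the point of the paragraph:

----
# Print the JVM's resolved flags at a given max heap size and check whether
# compressed oops are still enabled; lower -Xmx step by step until the flag
# flips back to true to find your JVM's own cutoff.
$ JAVA_HOME=`/usr/libexec/java_home -v 1.8` \
    java -Xmx32767m -XX:+PrintFlagsFinal 2> /dev/null | grep UseCompressedOops
     bool UseCompressedOops                         = false
----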

@@ -147,22 +147,23 @@ of RAM.
 
 First, we would recommend avoiding such large machines (see <<hardware>>).
 
-But if you already have the machines, you have two practical options:
+But if you already have the machines, you have three practical options:
 
 - Are you doing mostly full-text search? Consider giving 4-32 GB to Elasticsearch
 and letting Lucene use the rest of memory via the OS filesystem cache. All that
 memory will cache segments and lead to blisteringly fast full-text search.
 
 - Are you doing a lot of sorting/aggregations? Are most of your aggregations on numerics,
-dates, geo_points and `not_analyzed` strings? You're in luck! Give Elasticsearch
-somewhere from 4-32 GB of memory and leave the rest for the OS to cache doc values
-in memory.
+dates, geo_points and `not_analyzed` strings? You're in luck, your aggregations will be done on
+memory-friendly doc values! Give Elasticsearch somewhere from 4-32 GB of memory and leave the
+rest for the OS to cache doc values in memory.
 
 - Are you doing a lot of sorting/aggregations on analyzed strings (e.g. for word-tags,
 or SigTerms, etc)? Unfortunately that means you'll need fielddata, which means you
-need heap space. Instead of one node with more than 512 GB of RAM, consider running two or
-more nodes on a single machine. Still adhere to the 50% rule, though. So if your
-machine has 128 GB of RAM, run two nodes, each with just under 32 GB. This means that less
+need heap space. Instead of one node with a huge amount of RAM, consider running two or
+more nodes on a single machine. Still adhere to the 50% rule, though.
++
+So if your machine has 128 GB of RAM, run two nodes each with just under 32 GB. This means that less
 than 64 GB will be used for heaps, and more than 64 GB will be left over for Lucene.
 +
 If you choose this option, set `cluster.routing.allocation.same_shard.host: true`
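
To make the arithmetic in the new wording concrete, the two-node layout on a 128 GB machine might look roughly like the sketch below. This is illustrative only: the node names are made up, and `ES_HEAP_SIZE` is assumed to be the heap knob for Elasticsearch releases of this book's era; the one setting taken directly from the text is `cluster.routing.allocation.same_shard.host`.

----
# elasticsearch.yml for the first of two nodes on one 128 GB machine
# (node names are illustrative):
node.name: node-1            # second node: node-2
# Keep primary and replica copies of a shard off the same physical host:
cluster.routing.allocation.same_shard.host: true

# Start each node with just under 32 GB of heap (assumed ES_HEAP_SIZE knob),
# so the two heaps use less than 64 GB and more than 64 GB is left over for
# Lucene and the OS filesystem cache:
ES_HEAP_SIZE=31g ./bin/elasticsearch
----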
