@@ -122,7 +122,7 @@ $ JAVA_HOME=`/usr/libexec/java_home -v 1.8` java -Xmx32767m -XX:+PrintFlagsFinal
     bool UseCompressedOops = false
 ----

-The morale of the story is that the exact cutoff to leverage compressed oops
+The moral of the story is that the exact cutoff to leverage compressed oops
 varies from JVM to JVM, so take caution when taking examples from elsewhere and
 be sure to check your system with your configuration and JVM.

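Note on the paragraph above: one way to see where the cutoff falls on your own JVM is to repeat the check from the hunk header at a few heap sizes. This is only a sketch, assuming a HotSpot `java` on the `PATH` (the `-XX:+PrintFlagsFinal` flag and `UseCompressedOops` name are standard HotSpot; the heap sizes listed are illustrative):

----
# Sketch: probe several heap sizes to see where this JVM disables compressed oops.
# Assumes a HotSpot `java` on the PATH; the sizes below are only examples.
for heap in 31g 32000m 32767m 32g 34g; do
    echo "Heap $heap:"
    java -Xmx$heap -XX:+PrintFlagsFinal -version 2> /dev/null | grep UseCompressedOops
done
----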
@@ -147,22 +147,23 @@ of RAM.

 First, we would recommend avoiding such large machines (see <<hardware>>).

-But if you already have the machines, you have two practical options:
+But if you already have the machines, you have three practical options:

 - Are you doing mostly full-text search? Consider giving 4-32 GB to Elasticsearch
 and letting Lucene use the rest of memory via the OS filesystem cache. All that
 memory will cache segments and lead to blisteringly fast full-text search.

 - Are you doing a lot of sorting/aggregations? Are most of your aggregations on numerics,
-dates, geo_points and `not_analyzed` strings? You're in luck! Give Elasticsearch
-somewhere from 4-32 GB of memory and leave the rest for the OS to cache doc values
-in memory.
+dates, geo_points and `not_analyzed` strings? You're in luck, your aggregations will be done on
+memory-friendly doc values! Give Elasticsearch somewhere from 4-32 GB of memory and leave the
+rest for the OS to cache doc values in memory.

 - Are you doing a lot of sorting/aggregations on analyzed strings (e.g. for word-tags,
 or SigTerms, etc)? Unfortunately that means you'll need fielddata, which means you
-need heap space. Instead of one node with more than 512 GB of RAM, consider running two or
-more nodes on a single machine. Still adhere to the 50% rule, though. So if your
-machine has 128 GB of RAM, run two nodes, each with just under 32 GB. This means that less
+need heap space. Instead of one node with a huge amount of RAM, consider running two or
+more nodes on a single machine. Still adhere to the 50% rule, though.
++
+So if your machine has 128 GB of RAM, run two nodes each with just under 32 GB. This means that less
 than 64 GB will be used for heaps, and more than 64 GB will be left over for Lucene.
-
++
 If you choose this option, set `cluster.routing.allocation.same_shard.host: true`
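Note on the two-nodes-per-machine option added here: a minimal sketch of what this could look like, assuming the 1.x-era `ES_HEAP_SIZE` environment variable and `-Des.` command-line settings this book targets (node names and the 31g figure are illustrative, not prescribed):

----
# Sketch only: two nodes on one 128 GB machine, each with just under 32 GB of heap.
# Node names are hypothetical; adjust paths and ports for your installation.
ES_HEAP_SIZE=31g ./bin/elasticsearch -d -Des.node.name=node_a
ES_HEAP_SIZE=31g ./bin/elasticsearch -d -Des.node.name=node_b

# And in config/elasticsearch.yml, as the text above says, so a primary and its
# replica never land on the same physical host:
#
#   cluster.routing.allocation.same_shard.host: true
----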