This repository was archived by the owner on Sep 21, 2021. It is now read-only.

Commit 104dcd3

Author: troutman_margaret@yahoo.com
Commit message: Edited 510_Deployment/50_heap.asciidoc with Atlas code editor
1 parent: b0d5fb6

File tree

1 file changed: +6, -6 lines


510_Deployment/50_heap.asciidoc (+6, -6)
@@ -1,11 +1,11 @@
 [[heap-sizing]]
 === Heap: Sizing and Swapping
 
-The default installation of Elasticsearch is configured with a 1gb heap. ((("deployment", "heap, sizing and swapping"))) For
+The default installation of Elasticsearch is configured with a 1gb heap. ((("deployment", "heap, sizing and swapping")))((("heap", "sizing and setting"))) For
 just about every deployment, this number is far too small. If you are using the
 default heap values, your cluster is probably configured incorrectly.
 
-There are two ways to change the heap size in Elasticsearch.((("heap size, setting"))) The easiest is to
+There are two ways to change the heap size in Elasticsearch. The easiest is to
 set an environment variable called `ES_HEAP_SIZE`.((("ES_HEAP_SIZE environment variable"))) When the server process
 starts, it will read this environment variable and set the heap accordingly.
 As an example, you can set it via the command line with:
@@ -30,9 +30,9 @@ explicit `-Xmx` and `-Xms` values.
 
 ==== Give half your memory to Lucene
 
-A common problem is configuring a heap that is _too_ large. You have a 64gb
+A common problem is configuring a heap that is _too_ large. ((("heap", "sizing and setting", "giving half your memory to Lucene"))) You have a 64gb
 machine...and by golly, you want to give Elasticsearch all 64gb of memory. More
-is better!((("memory", "allocating for Lucene")))
+is better!
 
 Heap is definitely important to Elasticsearch. It is used by many in-memory data
 structures to provide fast operation. But with that said, there is another major
@@ -53,7 +53,7 @@ gobble up whatever is leftover.
 
 [[compressed_oops]]
 ==== Don't cross 32gb!
-There is another reason to not allocate enormous heaps to Elasticsearch. As it turns((("32gb Heap boundary")))
+There is another reason to not allocate enormous heaps to Elasticsearch. As it turns((("heap", "sizing and setting", "32gb heap boundary")))((("32gb Heap boundary")))
 out, the JVM uses a trick to compress object pointers when heaps are less than
 ~32gb.
 
@@ -112,7 +112,7 @@ to the same physical machine (since this would remove the benefits of replica HA
 
 ==== Swapping is the death of performance
 
-It should be obvious,((("memory", "swapping as the death of performance")))((("swapping, the death of performance"))) but it bears spelling out clearly: swapping main memory
+It should be obvious,((("heap", "sizing and setting", "swapping, death of performance")))((("memory", "swapping as the death of performance")))((("swapping, the death of performance"))) but it bears spelling out clearly: swapping main memory
 to disk will _crush_ server performance. Think about it...an in-memory operation
 is one that needs to execute quickly.
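For reference while reading the first hunk: the two ways of setting the heap that the changed paragraph describes can be sketched in shell. This is a sketch assuming the 1.x-era `bin/elasticsearch` launcher and an illustrative `10g` value; pick a size appropriate to your machine.

```shell
# Option 1: set the ES_HEAP_SIZE environment variable; the startup
# script reads it and sizes the heap accordingly.
export ES_HEAP_SIZE=10g
echo "ES_HEAP_SIZE=$ES_HEAP_SIZE"

# Option 2 (shown commented, since it actually starts the server):
# pass explicit JVM options, with -Xms equal to -Xmx so the heap
# never has to resize at runtime.
#   ./bin/elasticsearch -Xmx10g -Xms10g
```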

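The ~32gb boundary in the third hunk comes from compressed object pointers, and it can be checked against a local JVM. A sketch assuming `java` is on the `PATH`: `-XX:+PrintFlagsFinal` dumps the JVM's final flag values, and `UseCompressedOops` reads `true` below the boundary and `false` above it.

```shell
# Dump final JVM flags for a given -Xmx and inspect UseCompressedOops;
# with a small heap such as 512m it should report "= true".
if command -v java >/dev/null 2>&1; then
  java -Xmx512m -XX:+PrintFlagsFinal -version 2>/dev/null | grep UseCompressedOops
fi
```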
0 commit comments
