This repository was archived by the owner on Sep 21, 2021. It is now read-only.
510_Deployment/50_heap.asciidoc (+6 -6)
@@ -1,11 +1,11 @@
 [[heap-sizing]]
 === Heap: Sizing and Swapping
 
-The default installation of Elasticsearch is configured with a 1gb heap. ((("deployment", "heap, sizing and swapping"))) For
+The default installation of Elasticsearch is configured with a 1gb heap. ((("deployment", "heap, sizing and swapping")))((("heap", "sizing and setting"))) For
 just about every deployment, this number is far too small. If you are using the
 default heap values, your cluster is probably configured incorrectly.
 
-There are two ways to change the heap size in Elasticsearch.((("heap size, setting"))) The easiest is to
+There are two ways to change the heap size in Elasticsearch. The easiest is to
 set an environment variable called `ES_HEAP_SIZE`.((("ES_HEAP_SIZE environment variable"))) When the server process
 starts, it will read this environment variable and set the heap accordingly.
 As an example, you can set it via the command line with:
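(The command itself falls just outside this hunk. Going by the surrounding text, and assuming the stock `bin/elasticsearch` launcher, a typical invocation would look like:)

[source,bash]
----
ES_HEAP_SIZE=10g ./bin/elasticsearch
----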
@@ -30,9 +30,9 @@ explicit `-Xmx` and `-Xms` values.
 
 ==== Give half your memory to Lucene
 
-A common problem is configuring a heap that is _too_ large. You have a 64gb
+A common problem is configuring a heap that is _too_ large. ((("heap", "sizing and setting", "giving half your memory to Lucene"))) You have a 64gb
 machine...and by golly, you want to give Elasticsearch all 64gb of memory. More
-is better!((("memory", "allocating for Lucene")))
+is better!
 
 Heap is definitely important to Elasticsearch. It is used by many in-memory data
 structures to provide fast operation. But with that said, there is another major
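(As an aside, not part of the diff: the rule of thumb this section builds toward is to give at most half of physical RAM to the heap, leaving the rest for Lucene's use of the OS file-system cache. A sketch for the 64gb machine above:)

[source,bash]
----
# Sketch: cap the heap at roughly half of RAM (and below the ~32gb
# boundary discussed next); the remainder is left to the OS page cache,
# which Lucene relies on heavily for segment files.
ES_HEAP_SIZE=31g ./bin/elasticsearch
----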
@@ -53,7 +53,7 @@ gobble up whatever is leftover.
 
 [[compressed_oops]]
 ==== Don't cross 32gb!
-There is another reason to not allocate enormous heaps to Elasticsearch. As it turns((("32gb Heap boundary")))
+There is another reason to not allocate enormous heaps to Elasticsearch. As it turns((("heap", "sizing and setting", "32gb heap boundary")))((("32gb Heap boundary")))
 out, the JVM uses a trick to compress object pointers when heaps are less than
 ~32gb.
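(A quick way to verify the boundary on your own machine, assuming a HotSpot JVM; this is an illustration, not part of the diff:)

[source,bash]
----
# Prints whether the JVM can use compressed object pointers ("oops")
# at the requested heap size. Just below ~32gb it reports true;
# past the cutoff it flips to false and every pointer doubles in size.
java -Xmx31g -XX:+PrintFlagsFinal -version | grep UseCompressedOops
----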
@@ -112,7 +112,7 @@ to the same physical machine (since this would remove the benefits of replica HA
 
 ==== Swapping is the death of performance
 
-It should be obvious,((("memory", "swapping as the death of performance")))((("swapping, the death of performance"))) but it bears spelling out clearly: swapping main memory
+It should be obvious,((("heap", "sizing and setting", "swapping, death of performance")))((("memory", "swapping as the death of performance")))((("swapping, the death of performance"))) but it bears spelling out clearly: swapping main memory
 to disk will _crush_ server performance. Think about it...an in-memory operation
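(The usual mitigations, summarized here as a sketch rather than quoted from the file: disable or discourage swap at the OS level, or lock the heap in RAM.)

[source,bash]
----
sudo swapoff -a              # disable swap entirely for the running system
sudo sysctl vm.swappiness=1  # or keep swap but make the kernel very
                             # reluctant to use it

# Alternatively, lock the Elasticsearch heap in memory via
# elasticsearch.yml (the setting name in the 1.x era this book covers):
#   bootstrap.mlockall: true
----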