510_Deployment/50_heap.asciidoc  (+11 -12)
@@ -52,10 +52,9 @@ heap, while leaving the other 50% free. It won't go unused; Lucene will happily
 gobble up whatever is left over.
 
 [[compressed_oops]]
-==== Don't Cross 32 GB!
+==== Don't Cross 30.5 GB!
 There is another reason to not allocate enormous heaps to Elasticsearch. As it turns((("heap", "sizing and setting", "32gb heap boundary")))((("32gb Heap boundary")))
-out, the JVM uses a trick to compress object pointers when heaps are less than
-~32 GB.
+out, the JVM uses a trick to compress object pointers when heaps are 30.5 GB or less.
 
 In Java, all objects are allocated on the heap and referenced by a pointer.
 Ordinary object pointers (OOP) point at these objects, and are traditionally
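
A minimal way to verify the cutoff on a particular JVM, assuming a HotSpot-based `java` on the PATH, is to ask it for its final flag values and check `UseCompressedOops`; the exact threshold varies slightly between JVM builds, which is one reason to treat 30.5 GB as a conservative figure:

$ java -Xmx30500m -XX:+PrintFlagsFinal -version 2>/dev/null | grep UseCompressedOops
$ java -Xmx32600m -XX:+PrintFlagsFinal -version 2>/dev/null | grep UseCompressedOops

The first heap size should leave the flag reported as true; the second is past the limit on most builds, and the flag falls back to false.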
@@ -75,36 +74,36 @@ reference four billion _objects_, rather than four billion bytes. Ultimately, t
 means the heap can grow to around 32 GB of physical size while still using a 32-bit
 pointer.
 
-Once you cross that magical ~30–32 GB boundary, the pointers switch back to
+Once you cross that magical 30.5 GB boundary, the pointers switch back to
 ordinary object pointers. The size of each pointer grows, more CPU-memory
 bandwidth is used, and you effectively lose memory. In fact, it takes until around
-40–50 GB of allocated heap before you have the same _effective_ memory of a 32 GB
+40–50 GB of allocated heap before you have the same _effective_ memory of a 30.5 GB
 heap using compressed oops.
 
 The moral of the story is this: even when you have memory to spare, try to avoid
-crossing the 32 GB heap boundary. It wastes memory, reduces CPU performance, and
+crossing the 30.5 GB heap boundary. It wastes memory, reduces CPU performance, and
 makes the GC struggle with large heaps.
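
If you follow that advice on the Elasticsearch releases this book covers, a simple way to keep the heap under the boundary is the `ES_HEAP_SIZE` environment variable; the 30500m below is an illustrative value just under 30.5 GB, a sketch rather than a prescribed setting:

$ export ES_HEAP_SIZE=30500m    # sets -Xms and -Xmx to the same value, below the compressed-oops cutoff
$ ./bin/elasticsearch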
 
 [role="pagebreak-before"]
 .I Have a Machine with 1 TB RAM!
 ****
-The 32 GB line is fairly important. So what do you do when your machine has a lot
-of memory? It is becoming increasingly common to see super-servers with 300–500 GB
+The 30.5 GB line is fairly important. So what do you do when your machine has a lot
+of memory? It is becoming increasingly common to see super-servers with 512–768 GB
 of RAM.
 
 First, we would recommend avoiding such large machines (see <<hardware>>).
 
 But if you already have the machines, you have two practical options:
 
-- Are you doing mostly full-text search? Consider giving 32 GB to Elasticsearch
+- Are you doing mostly full-text search? Consider giving 30.5 GB to Elasticsearch
 and letting Lucene use the rest of memory via the OS filesystem cache. All that
 memory will cache segments and lead to blisteringly fast full-text search.
 
 - Are you doing a lot of sorting/aggregations? You'll likely want that memory
-in the heap then. Instead of one node with 32 GB+ of RAM, consider running two or
+in the heap then. Instead of one node with more than 31.5 GB of RAM, consider running two or
 more nodes on a single machine. Still adhere to the 50% rule, though. So if your
-machine has 128 GB of RAM, run two nodes, each with 32 GB. This means 64 GB will be
-used for heaps, and 64 will be left over for Lucene.
+machine has 128 GB of RAM, run two nodes, each with 30.5 GB. This means 61 GB will be
+used for heaps, and 67 will be left over for Lucene.
 
 If you choose this option, set `cluster.routing.allocation.same_shard.host: true`
 in your config. This will prevent a primary and a replica shard from colocating
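
A minimal sketch of that last setting, assuming the default `config/elasticsearch.yml` location relative to the Elasticsearch home directory (adjust the path for your installation):

$ echo 'cluster.routing.allocation.same_shard.host: true' >> config/elasticsearch.yml

With that line in place on each node, two nodes on the 128 GB machine above, each started with `ES_HEAP_SIZE=30500m`, give roughly the 61 GB of heap and 67 GB of leftover filesystem cache the paragraph describes.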