
Commit 4cdd2d8

Updated 32GB JVM limit to new 30.5GB limit per advice from Oracle

1 parent 80d4b30

File tree: 1 file changed (+11, −12 lines)

510_Deployment/50_heap.asciidoc (+11, −12)
@@ -52,10 +52,9 @@ heap, while leaving the other 50% free. It won't go unused; Lucene will happily
 gobble up whatever is left over.
 
 [[compressed_oops]]
-==== Don't Cross 32 GB!
+==== Don't Cross 30.5 GB!
 There is another reason to not allocate enormous heaps to Elasticsearch. As it turns((("heap", "sizing and setting", "32gb heap boundary")))((("32gb Heap boundary")))
-out, the JVM uses a trick to compress object pointers when heaps are less than
-~32 GB.
+out, the JVM uses a trick to compress object pointers when heaps are 30.5 GB or less.
 
 In Java, all objects are allocated on the heap and referenced by a pointer.
 Ordinary object pointers (OOP) point at these objects, and are traditionally
@@ -75,36 +74,36 @@ reference four billion _objects_, rather than four billion bytes. Ultimately, this
 means the heap can grow to around 32 GB of physical size while still using a 32-bit
 pointer.
 
-Once you cross that magical ~30–32 GB boundary, the pointers switch back to
+Once you cross that magical 30.5 GB boundary, the pointers switch back to
 ordinary object pointers. The size of each pointer grows, more CPU-memory
 bandwidth is used, and you effectively lose memory. In fact, it takes until around
-40–50 GB of allocated heap before you have the same _effective_ memory of a 32 GB
+40–50 GB of allocated heap before you have the same _effective_ memory of a 30.5 GB
 heap using compressed oops.
 
 The moral of the story is this: even when you have memory to spare, try to avoid
-crossing the 32 GB heap boundary. It wastes memory, reduces CPU performance, and
+crossing the 30.5 GB heap boundary. It wastes memory, reduces CPU performance, and
 makes the GC struggle with large heaps.
 
 [role="pagebreak-before"]
 .I Have a Machine with 1 TB RAM!
 ****
-The 32 GB line is fairly important. So what do you do when your machine has a lot
-of memory? It is becoming increasingly common to see super-servers with 300–500 GB
+The 30.5 GB line is fairly important. So what do you do when your machine has a lot
+of memory? It is becoming increasingly common to see super-servers with 512–768 GB
 of RAM.
 
 First, we would recommend avoiding such large machines (see <<hardware>>).
 
 But if you already have the machines, you have two practical options:
 
-- Are you doing mostly full-text search? Consider giving 32 GB to Elasticsearch
+- Are you doing mostly full-text search? Consider giving 30.5 GB to Elasticsearch
 and letting Lucene use the rest of memory via the OS filesystem cache. All that
 memory will cache segments and lead to blisteringly fast full-text search.
 
 - Are you doing a lot of sorting/aggregations? You'll likely want that memory
-in the heap then. Instead of one node with 32 GB+ of RAM, consider running two or
+in the heap then. Instead of one node with more than 31.5 GB of RAM, consider running two or
 more nodes on a single machine. Still adhere to the 50% rule, though. So if your
-machine has 128 GB of RAM, run two nodes, each with 32 GB. This means 64 GB will be
-used for heaps, and 64 will be left over for Lucene.
+machine has 128 GB of RAM, run two nodes, each with 30.5 GB. This means 61 GB will be
+used for heaps, and 67 will be left over for Lucene.
 +
 If you choose this option, set `cluster.routing.allocation.same_shard.host: true`
 in your config. This will prevent a primary and a replica shard from colocating
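The 30.5 GB figure in this commit is a conservative rule of thumb rather than a hard constant; the exact compressed-oops cutoff varies by JVM version, vendor, and platform. A quick way to check whether a candidate heap size still gets compressed oops, using HotSpot's standard `-XX:+PrintFlagsFinal` diagnostic (the `30500m` value is just an illustrative size safely under the line):

[source,sh]
----
# Print the flag values the JVM would actually run with at this heap size,
# then look for the compressed-oops flag. `true` means the trick is in effect.
java -Xmx30500m -XX:+PrintFlagsFinal -version | grep UseCompressedOops
----

If this reports `UseCompressedOops` as `false` at your chosen `-Xmx`, lower the heap until it flips back to `true`.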

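For the two-nodes-per-machine option at the end of the diff, a minimal sketch of the setup on a 128 GB machine, assuming the 1.x-era `ES_HEAP_SIZE` environment variable this book's deployment chapter uses elsewhere:

[source,sh]
----
# elasticsearch.yml on the shared machine, so a primary and its replica
# never end up on the same physical host:
#
#   cluster.routing.allocation.same_shard.host: true

# Start two nodes, each with a 30.5 GB heap: 61 GB total for heaps,
# leaving 67 GB of the 128 GB for Lucene via the filesystem cache.
ES_HEAP_SIZE=30500m ./bin/elasticsearch -d
ES_HEAP_SIZE=30500m ./bin/elasticsearch -d
----

Running two nodes from one install directory works in this era but carries caveats (shared data paths, auto-incremented ports); a separate config per node keeps things simpler.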