020_Distributed_Cluster/20_Add_failover.asciidoc (+3 -4)
@@ -13,11 +13,10 @@ in exactly the same way as you started the first one (see
 share the same directory.
 
 When you run a second node on the same machine, it automatically discovers
-and joins the cluster as long as it has the same `cluster.name` as the first node (see
-the `./config/elasticsearch.yml` file). However, for nodes running on different machines
+and joins the cluster as long as it has the same `cluster.name` as the first node.
+However, for nodes running on different machines
 to join the same cluster, you need to configure a list of unicast hosts the nodes can contact
-to join the cluster. For more information about how Elasticsearch nodes find eachother, see https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery-zen.html[Zen Discovery]
-in the Elasticsearch Reference.
+to join the cluster. For more information, see <<unicast, Prefer Unicast over Multicast>>.
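
A minimal sketch of the unicast configuration this passage refers to, assuming the Zen discovery setting used by Elasticsearch in this edition's era (host names and addresses are illustrative, not from this commit):

[source,yaml]
--------------------------
# config/elasticsearch.yml
cluster.name: my_cluster  # must be identical on every node in the cluster
discovery.zen.ping.unicast.hosts: ["10.0.0.1", "10.0.0.2:9301"]  # seed nodes to contact when joining
--------------------------

The hosts listed here are used only for discovery; once a node has joined, it learns the rest of the cluster state from the elected master.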
400_Relationships/26_Concurrency_solutions.asciidoc (+31 -23)
@@ -81,10 +81,9 @@ parallelism by making our locking more fine-grained.
 ==== Document Locking
 
 Instead of locking the whole filesystem, we could lock individual documents
-by using the same technique as previously described.((("locking", "document locking")))((("document locking"))) A process could use a
-<<scan-scroll,scan-and-scroll>> request to retrieve the IDs of all documents
-that would be affected by the change, and would need to create a lock file for
-each of them:
+by using the same technique as previously described.((("locking", "document locking")))((("document locking")))
+We can use a <<scroll,scrolled search>> to retrieve all documents that would be affected by the change and
+create a lock file for each one:
 
 [source,json]
 --------------------------
@@ -93,7 +92,6 @@ PUT /fs/lock/_bulk
 { "process_id": 123 } <2>
 { "create": { "_id": 2}}
 { "process_id": 123 }
-...
 --------------------------
 <1> The ID of the `lock` document would be the same as the ID of the file
 that should be locked.
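
The `create` action in the bulk request above succeeds only if no document with that ID already exists, which is what makes it usable as a lock. A minimal single-document sketch of the same technique, using the index and type from the surrounding examples (not part of this commit):

[source,json]
--------------------------
PUT /fs/lock/1/_create <1>
{ "process_id": 123 }
--------------------------
<1> If a lock document with ID `1` already exists, this request fails with a `409 Conflict` instead of silently overwriting another process's lock.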
@@ -135,41 +133,51 @@ POST /fs/lock/1/_update
 }
 --------------------------
 
-If the document doesn't already exist, the `upsert` document will be inserted--much the same as the `create` request we used previously. However, if the
-document _does_ exist, the script will look at the `process_id` stored in the
-document. If it is the same as ours, it aborts the update (`noop`) and
-returns success. If it is different, the `assert false` throws an exception
-and we know that the lock has failed.
+If the document doesn't already exist, the `upsert` document is inserted--much
+the same as the previous `create` request. However, if the
+document _does_ exist, the script looks at the `process_id` stored in the
+document. If the `process_id` matches, no update is performed (`noop`) but the
+script returns successfully. If it is different, `assert false` throws an exception
+and you know that the lock has failed.
+
+Once all locks have been successfully created, you can proceed with your changes.
+
+Afterward, you must release all of the locks, which you can do by
+retrieving all of the locked documents and performing a bulk delete:
 
-Once all locks have been successfully created, the rename operation can begin.
-Afterward, we must release((("delete-by-query request"))) all of the locks, which we can do with a
-`delete-by-query` request:
 
 [source,json]
 --------------------------
 POST /fs/_refresh <1>
 
-DELETE /fs/lock/_query
+GET /fs/lock/_search?scroll=1m <2>
 {
-  "query": {
-    "term": {
-      "process_id": 123
+  "sort" : ["_doc"],
+  "query": {
+    "match" : {
+      "process_id" : 123
+    }
   }
-  }
 }
+
+PUT /fs/lock/_bulk
+{ "delete": { "_id": 1}}
+{ "delete": { "_id": 2}}
 --------------------------
 <1> The `refresh` call ensures that all `lock` documents are visible to
-the `delete-by-query` request.
+the search request.
+<2> You can use a <<scan-scroll,`scroll`>> query when you need to retrieve large
+numbers of results with a single search request.
 
 Document-level locking enables fine-grained access control, but creating lock
-files for millions of documents can be expensive. In certain scenarios, such
-as this example with directory trees, it is possible to achieve fine-grained
-locking with much less work.
+files for millions of documents can be expensive. In some cases,
+you can achieve fine-grained locking with much less work, as shown in the
+following directory tree scenario.
 
 [[tree-locking]]
 ==== Tree Locking
 
-Rather than locking every involved document, as in the previous option, we
+Rather than locking every involved document as in the previous example, we
 could lock just part of the directory tree.((("locking", "tree locking"))) We will need exclusive access
 to the file or directory that we want to rename, which can be achieved with an
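
The request whose closing lines open the hunk above is the scripted lock acquisition that the revised prose describes. A sketch of what the full request plausibly looks like, assuming the Groovy-style scripting used in this edition (the script body is a reconstruction of the behavior described, not text from this commit):

[source,json]
--------------------------
POST /fs/lock/1/_update
{
  "upsert": { "process_id": 123 }, <1>
  "script": "if ( ctx._source.process_id != process_id ) { assert false }; ctx.op = 'noop';", <2>
  "params": { "process_id": 123 }
}
--------------------------
<1> If no lock document exists yet, it is created with our `process_id`, acquiring the lock.
<2> If the document exists and belongs to another process, `assert false` throws an exception and the update fails; if it is our own lock, `ctx.op = 'noop'` returns success without changing anything.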