
Commit dbd159c

Fixed build errors.

1 parent baecf3d

File tree: 8 files changed (+12, -11 lines)


060_Distributed_Search/10_Fetch_phase.asciidoc (+1, -1)

@@ -58,6 +58,6 @@ after page until your servers crumble at the knees.
 
 If you _do_ need to fetch large numbers of docs from your cluster, you can
 do so efficiently by disabling sorting with the `scroll` query,
-which we discuss <<scan-scroll,later in this chapter>>.
+which we discuss <<scroll,later in this chapter>>.
 
 ****
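
For reference, a scroll request with sorting disabled typically looks like the sketch below. The index name `old_index`, the `1m` keep-alive, and the batch size are illustrative assumptions, not part of this commit; the point is the `sort` on `_doc`.

------------------------------
GET /old_index/_search?scroll=1m <1>
{
    "query": { "match_all": {}},
    "sort" : [ "_doc" ], <2>
    "size" : 1000
}
------------------------------
<1> Keep the scroll context alive for one minute between batches.
<2> Sorting on `_doc` returns documents in the cheapest possible order, effectively disabling scoring and sorting.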

070_Index_Mgmt/50_Reindexing.asciidoc (+1, -1)

@@ -15,7 +15,7 @@ whole document available to you in Elasticsearch itself. You don't have to
 rebuild your index from the database, which is usually much slower.
 
 To reindex all of the documents from the old index efficiently, use
-<<scan-scroll,_scroll_>> to retrieve batches((("using in reindexing documents"))) of documents from the old index,
+<<scroll,_scroll_>> to retrieve batches((("using in reindexing documents"))) of documents from the old index,
 and the <<bulk,`bulk` API>> to push them into the new index.
 
 .Reindexing in Batches
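
A minimal sketch of the `bulk` half of that loop is shown below; the index name `new_index`, the type `doc`, the IDs, and the document bodies are placeholders, not text from the book.

------------------------------
POST /new_index/doc/_bulk
{ "index": { "_id": "1" }}
{ "title": "first document fetched from the old index" }
{ "index": { "_id": "2" }}
{ "title": "second document fetched from the old index" }
------------------------------

Each batch returned by the scroll request is rewritten into these action/source line pairs and pushed until the scroll returns no more hits.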

300_Aggregations/20_basic_example.asciidoc (+4, -4)

@@ -49,9 +49,9 @@ using a simple aggregation. We will do this using a `terms` bucket:
 GET /cars/transactions/_search
 {
     "size" : 0,
-    "aggs" : {
-        "popular_colors" : {
-            "terms" : {
+    "aggs" : { <1>
+        "popular_colors" : { <2>
+            "terms" : { <3>
                 "field" : "color"
             }
         }
@@ -62,7 +62,7 @@ GET /cars/transactions/_search
 
 <1> Aggregations are placed under the ((("aggregations", "aggs parameter")))top-level `aggs` parameter (the longer `aggregations`
 will also work if you prefer that).
-<2> We then name the aggregation whatever we want: `colors`, in this example
+<2> We then name the aggregation whatever we want: `popular_colors`, in this example
 <3> Finally, we define a single bucket of type `terms`.
 
 Aggregations are executed in the context of search results,((("searching", "aggregations executed in context of search results"))) which means it is
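
As callout <1> above notes, the longer `aggregations` keyword is interchangeable with `aggs`; a quick sketch of the same request using it (same illustrative index and field):

------------------------------
GET /cars/transactions/_search
{
    "size" : 0,
    "aggregations" : {
        "popular_colors" : {
            "terms" : { "field" : "color" }
        }
    }
}
------------------------------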

400_Relationships/25_Concurrency.asciidoc (+2, -2)

@@ -11,7 +11,7 @@ The more important issue is that, if the user were to change his name, all
 of his blog posts would need to be updated too. Fortunately, users don't
 often change names. Even if they did, it is unlikely that a user would have
 written more than a few thousand blog posts, so updating blog posts with
-the <<scan-scroll,`scroll`>> and <<bulk,`bulk`>> APIs would take less than a
+the <<scroll,`scroll`>> and <<bulk,`bulk`>> APIs would take less than a
 second.
 
 However, let's consider a more complex scenario in which changes are common, far
@@ -182,7 +182,7 @@ PUT /fs/file/1?version=2 <1>
 We can even rename a directory, but this means updating all of the files that
 exist anywhere in the path hierarchy beneath that directory. This may be
 quick or slow, depending on how many files need to be updated. All we would
-need to do is to use <<scan-scroll,`scroll`>> to retrieve all the
+need to do is to use <<scroll,`scroll`>> to retrieve all the
 files, and the <<bulk,`bulk` API>> to update them. The process isn't
 atomic, but all files will quickly move to their new home.
 
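
One of those `bulk` update batches might look like the sketch below; the `fs` index and `file` type come from the hunk above, while the IDs, the `path` field, and its values are illustrative placeholders.

------------------------------
POST /fs/file/_bulk
{ "update": { "_id": "1" }}
{ "doc": { "path": "/renamed-directory/report.txt" }}
{ "update": { "_id": "2" }}
{ "doc": { "path": "/renamed-directory/notes.txt" }}
------------------------------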

400_Relationships/26_Concurrency_solutions.asciidoc (+1, -1)

@@ -166,7 +166,7 @@ PUT /fs/lock/_bulk
 --------------------------
 <1> The `refresh` call ensures that all `lock` documents are visible to
 the search request.
-<2> You can use a <<scan-scroll,`scroll`>> query when you need to retrieve large
+<2> You can use a <<scroll,`scroll`>> query when you need to retrieve large
 numbers of results with a single search request.
 
 Document-level locking enables fine-grained access control, but creating lock
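
Once such a scroll search has been started, later batches are pulled from the scroll endpoint; a minimal sketch, with the scroll ID left as a placeholder:

------------------------------
GET /_search/scroll
{
    "scroll"    : "1m",
    "scroll_id" : "PASTE_THE_ID_RETURNED_BY_THE_PREVIOUS_REQUEST"
}
------------------------------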

410_Scaling/45_Index_per_timeframe.asciidoc (+1, -1)

@@ -29,7 +29,7 @@ data.
 
 If we were to have one big index for documents of this type, we would soon run
 out of space. Logging events just keep on coming, without pause or
-interruption. We could delete the old events with a <<scan-scroll,`scroll`>>
+interruption. We could delete the old events with a <<scroll,`scroll`>>
 query and bulk delete, but this approach is _very inefficient_. When you delete a
 document, it is only _marked_ as deleted (see <<deletes-and-updates>>). It won't
 be physically deleted until the segment containing it is merged away.
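
The alternative this chapter builds toward is the index-per-timeframe pattern, where old events are removed by dropping an entire index at once, which removes the underlying segments instead of leaving per-document delete markers behind. A one-line sketch with an illustrative index name:

------------------------------
DELETE /logs_2014-09
------------------------------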

410_Scaling/75_One_big_user.asciidoc (+1, -1)

@@ -23,7 +23,7 @@ PUT /baking_v1
 ------------------------------
 
 The next step is to migrate the data from the shared index into the dedicated
-index, which can be done using a <<scan-scroll, `scroll`>> query and the
+index, which can be done using a <<scroll, `scroll`>> query and the
 <<bulk,`bulk` API>>. As soon as the migration is finished, the index alias
 can be updated to point to the new index:
 
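
Updating the alias after the migration would typically use the `_aliases` endpoint; a sketch under the assumption that the alias is called `baking` and the shared index is called `forums` (only `baking_v1` appears in the hunk above):

------------------------------
POST /_aliases
{
    "actions": [
        { "remove": { "alias": "baking", "index": "forums"    }},
        { "add":    { "alias": "baking", "index": "baking_v1" }}
    ]
}
------------------------------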

510_Deployment/40_config.asciidoc (+1)

@@ -1,3 +1,4 @@
+[[important-configuration-changes]]
 === Important Configuration Changes
 Elasticsearch ships with _very good_ defaults,((("deployment", "configuration changes, important")))((("configuration changes, important"))) especially when it comes to performance-
 related settings and options. When in doubt, just leave
