From dc75d8834128d163192a50e1b97236ffc50dc3a7 Mon Sep 17 00:00:00 2001 From: Clinton Gormley Date: Tue, 19 Apr 2016 09:58:07 +0200 Subject: [PATCH 01/88] Removed out-of-date warnings --- book-docinfo.xml | 5 ----- 1 file changed, 5 deletions(-) diff --git a/book-docinfo.xml b/book-docinfo.xml index 07834aade..23ba2ad9b 100644 --- a/book-docinfo.xml +++ b/book-docinfo.xml @@ -1,5 +1,3 @@ -PLEASE NOTE:
We are working on updating this book for the latest version. Some content might be out of date.?> - This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License. @@ -15,9 +13,6 @@ Elasticsearch: The Definitive Guide, Second Edition - - We are working on updating this book for the latest version. Some content might be out of date. - If you would like to purchase an eBook or printed version of this book once it is complete, you can do so from O'Reilly Media: From 8532073c70ee1642e2c08224734f180e9fe1fa42 Mon Sep 17 00:00:00 2001 From: sploiselle Date: Mon, 23 May 2016 08:26:53 -0700 Subject: [PATCH 02/88] Resolve error in "More-Complicated Searches" example (#533) The current on-page example in "More-Complicated Searches" produces an error (included below) using Elasticsearch 2.3.2. To resolve this, moving code from SENSE example (which works) onto the page. _Error from current on-page example:_ ```javascript { "error": { "root_cause": [ { "type": "query_parsing_exception", "reason": "Failed to parse", "index": "megacorp" } ], "type": "search_phase_execution_exception", "reason": "all shards failed", "phase": "query_fetch", "grouped": true, "failed_shards": [ { "shard": 0, "index": "megacorp", "node": "nodeIDHash", "reason": { "type": "query_parsing_exception", "reason": "Failed to parse", "index": "megacorp", "caused_by": { "type": "json_parse_exception", "reason": "Unexpected character (':' (code 58)): was expecting comma to separate ARRAY entries\n at [Source: [B@2cc82fb6; line: 5, column: 26]" } } } ] }, "status": 400 } ``` --- 010_Intro/30_Tutorial_Search.asciidoc | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/010_Intro/30_Tutorial_Search.asciidoc b/010_Intro/30_Tutorial_Search.asciidoc index f9717422d..a07b63636 100644 --- a/010_Intro/30_Tutorial_Search.asciidoc +++ b/010_Intro/30_Tutorial_Search.asciidoc @@ -209,15 +209,15 @@ which allows us to execute structured searches 
efficiently: GET /megacorp/employee/_search { "query" : { - "bool": { - "must": [ - "match" : { - "last_name" : "smith" <1> - } - ], - "filter": { + "filtered" : { + "filter" : { "range" : { - "age" : { "gt" : 30 } <2> + "age" : { "gt" : 30 } <1> + } + }, + "query" : { + "match" : { + "last_name" : "smith" <2> } } } @@ -226,9 +226,9 @@ GET /megacorp/employee/_search -------------------------------------------------- // SENSE: 010_Intro/30_Query_DSL.json -<1> This portion of the query is the((("match queries"))) same `match` _query_ that we used before. -<2> This portion of the query is a `range` _filter_, which((("range filters"))) will find all ages +<1> This portion of the query is a `range` _filter_, which((("range filters"))) will find all ages older than 30—`gt` stands for _greater than_. +<2> This portion of the query is the((("match queries"))) same `match` _query_ that we used before. Don't worry about the syntax too much for now; we will cover it in great From 2cdc7ea71f367bd8551b303bfd7356c8c5be3969 Mon Sep 17 00:00:00 2001 From: ericamick Date: Tue, 24 May 2016 09:53:27 -0400 Subject: [PATCH 03/88] Update 50_Analysis_chain.asciidoc (#541) --- 260_Synonyms/50_Analysis_chain.asciidoc | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/260_Synonyms/50_Analysis_chain.asciidoc b/260_Synonyms/50_Analysis_chain.asciidoc index e7b642366..66ea49fb8 100644 --- a/260_Synonyms/50_Analysis_chain.asciidoc +++ b/260_Synonyms/50_Analysis_chain.asciidoc @@ -39,7 +39,7 @@ stemmer, and to list just the root words that would be emitted by the stemmer: Normally, synonym filters are placed after the `lowercase` token filter and so all synonyms are ((("synonyms", "and the analysis chain", "case-sensitive synonyms")))((("case-sensitive synonyms")))written in lowercase, but sometimes that can lead to odd conflations. For instance, a `CAT` scan and a `cat` are quite different, as -are `PET` (positron emmision tomography) and a `pet`. 
For that matter, the +are `PET` (positron emission tomography) and a `pet`. For that matter, the surname `Little` is distinct from the adjective `little` (although if a sentence starts with the adjective, it will be uppercased anyway). @@ -49,7 +49,7 @@ that your synonym rules would need to list all of the case variations that you want to match (for example, `Little,LITTLE,little`). Instead of that, you could have two synonym filters: one to catch the case-sensitive -synonyms and one for all the case-insentive synonyms. For instance, the +synonyms and one for all the case-insensitive synonyms. For instance, the case-sensitive rules could look like this: "CAT,CAT scan => cat_scan" @@ -57,7 +57,7 @@ case-sensitive rules could look like this: "Johnny Little,J Little => johnny_little" "Johnny Small,J Small => johnny_small" -And the case-insentive rules could look like this: +And the case-insensitive rules could look like this: "cat => cat,pet" "dog => dog,pet" From 8006304bf5dcb887ce8de739c6f979576456e932 Mon Sep 17 00:00:00 2001 From: Colin Clay Date: Tue, 31 May 2016 13:35:32 -0700 Subject: [PATCH 04/88] Improper word choice. (#548) --- 510_Deployment/50_heap.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/510_Deployment/50_heap.asciidoc b/510_Deployment/50_heap.asciidoc index ca6a9b2ff..4bb0102f4 100644 --- a/510_Deployment/50_heap.asciidoc +++ b/510_Deployment/50_heap.asciidoc @@ -122,7 +122,7 @@ $ JAVA_HOME=`/usr/libexec/java_home -v 1.8` java -Xmx32767m -XX:+PrintFlagsFinal bool UseCompressedOops = false ---- -The morale of the story is that the exact cutoff to leverage compressed oops +The moral of the story is that the exact cutoff to leverage compressed oops varies from JVM to JVM, so take caution when taking examples from elsewhere and be sure to check your system with your configuration and JVM. 
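The compressed-oops cutoff discussed in the heap patch above sits near 32 GB for a simple arithmetic reason, which the following Python sketch makes explicit. This is illustrative only: the 8-byte object alignment is the HotSpot default (an assumption here, tunable via `-XX:ObjectAlignmentInBytes`), and real JVMs reserve part of this range, which is exactly why the text advises probing your own JVM with `-XX:+PrintFlagsFinal`.

```python
# Compressed oops store object references as 32-bit offsets that the JVM
# scales by its object alignment (8 bytes by default in HotSpot), so the
# theoretical ceiling is 2^32 * 8 bytes of addressable heap.
OOP_BITS = 32
OBJECT_ALIGNMENT = 8  # bytes; HotSpot default, tunable via -XX:ObjectAlignmentInBytes

max_heap_bytes = (2 ** OOP_BITS) * OBJECT_ALIGNMENT
print(max_heap_bytes // 2 ** 30, "GB addressable with compressed oops")  # → 32 GB
```

In practice the usable cutoff lands slightly below this theoretical 32 GB and varies between JVM builds, so the figure above is a sanity check, not a configuration value.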
From 4b79848fe2d9c384d18e7c39d537f670bc6a4a78 Mon Sep 17 00:00:00 2001 From: Brian Atwood Date: Tue, 31 May 2016 15:37:37 -0500 Subject: [PATCH 05/88] Fix typo (#544) --- 110_Multi_Field_Search/05_Multiple_query_strings.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/110_Multi_Field_Search/05_Multiple_query_strings.asciidoc b/110_Multi_Field_Search/05_Multiple_query_strings.asciidoc index ef2f0c54e..322a9c964 100644 --- a/110_Multi_Field_Search/05_Multiple_query_strings.asciidoc +++ b/110_Multi_Field_Search/05_Multiple_query_strings.asciidoc @@ -73,7 +73,7 @@ would have reduced the contribution of the title and author clauses to one-quart It is likely that an even one-third split between clauses is not what we need for the preceding query. ((("multifield search", "multiple query strings", "prioritizing query clauses")))((("bool query", "prioritizing clauses"))) Probably we're more interested in the title and author -clauses then we are in the translator clauses. We need to tune the query to +clauses than we are in the translator clauses. We need to tune the query to make the title and author clauses relatively more important. The simplest weapon in our tuning arsenal is the `boost` parameter. To From c5ce311429aef17f30ef054193668897808ef4e1 Mon Sep 17 00:00:00 2001 From: Natthakit Susanthitanon Date: Wed, 1 Jun 2016 03:38:05 +0700 Subject: [PATCH 06/88] Fix typo in 40_bitsets.asciidoc (#542) --- 080_Structured_Search/40_bitsets.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/080_Structured_Search/40_bitsets.asciidoc b/080_Structured_Search/40_bitsets.asciidoc index 1522a9efd..38a690bac 100644 --- a/080_Structured_Search/40_bitsets.asciidoc +++ b/080_Structured_Search/40_bitsets.asciidoc @@ -81,7 +81,7 @@ that was cacheable. This often meant the system cached bitsets too aggressively and performance suffered due to thrashing the cache. 
In addition, many filters are very fast to evaluate, but substantially slower to cache (and reuse from cache). These filters don't make sense to cache, since you'd be better off just re-executing -the fitler again. +the filter again. Inspecting the inverted index is very fast and most query components are rare. Consider a `term` filter on a `"user_id"` field: if you have millions of users, From bdea4d2592e074175618b1f926c35a98e4160b41 Mon Sep 17 00:00:00 2001 From: ericamick Date: Tue, 31 May 2016 16:40:21 -0400 Subject: [PATCH 07/88] Update 20_hardware.asciidoc (#523) --- 510_Deployment/20_hardware.asciidoc | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/510_Deployment/20_hardware.asciidoc b/510_Deployment/20_hardware.asciidoc index acc588466..ec3954462 100644 --- a/510_Deployment/20_hardware.asciidoc +++ b/510_Deployment/20_hardware.asciidoc @@ -2,7 +2,7 @@ === Hardware If you've been following the normal development path, you've probably been playing((("deployment", "hardware")))((("hardware"))) -with Elasticsearch on your laptop or on a small cluster of machines laying around. +with Elasticsearch on your laptop or on a small cluster of machines lying around. But when it comes time to deploy Elasticsearch to production, there are a few recommendations that you should consider. Nothing is a hard-and-fast rule; Elasticsearch is used for a wide range of tasks and on a bewildering array of @@ -27,8 +27,7 @@ discuss in <>. Most Elasticsearch deployments tend to be rather light on CPU requirements. As such,((("CPUs (central processing units)")))((("hardware", "CPUs"))) the exact processor setup matters less than the other resources. You should -choose a modern processor with multiple cores. Common clusters utilize two to eight -core machines. +choose a modern processor with multiple cores. Common clusters utilize two- to eight-core machines. If you need to choose between faster CPUs or more cores, choose more cores. 
The extra concurrency that multiple cores offers will far outweigh a slightly faster From feeedc27c199efc2ee2114edd1d9b27b92a90100 Mon Sep 17 00:00:00 2001 From: ericamick Date: Tue, 31 May 2016 16:40:45 -0400 Subject: [PATCH 08/88] Update 40_other_stats.asciidoc (#522) --- 500_Cluster_Admin/40_other_stats.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/500_Cluster_Admin/40_other_stats.asciidoc b/500_Cluster_Admin/40_other_stats.asciidoc index 6224aee64..4d2a120c4 100644 --- a/500_Cluster_Admin/40_other_stats.asciidoc +++ b/500_Cluster_Admin/40_other_stats.asciidoc @@ -42,7 +42,7 @@ GET _all/_stats <3> ---- <1> Stats for `my_index`. <2> Stats for multiple indices can be requested by separating their names with a comma. -<3> Stats indices can be requested using the special `_all` index name. +<3> Stats for all indices can be requested using the special `_all` index name. The stats returned will be familar to the `node-stats` output: `search` `fetch` `get` `index` `bulk` `segment counts` and so forth From 7667469a539d2a834f9f5b646c3c689aba7488db Mon Sep 17 00:00:00 2001 From: ericamick Date: Tue, 31 May 2016 16:43:50 -0400 Subject: [PATCH 09/88] Update 20_health.asciidoc (#521) --- 500_Cluster_Admin/20_health.asciidoc | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/500_Cluster_Admin/20_health.asciidoc b/500_Cluster_Admin/20_health.asciidoc index 1adf814f0..2b5e636b6 100644 --- a/500_Cluster_Admin/20_health.asciidoc +++ b/500_Cluster_Admin/20_health.asciidoc @@ -50,7 +50,7 @@ high availability is compromised to some degree. If _more_ shards disappear, yo might lose data. Think of `yellow` as a warning that should prompt investigation. `red`:: - At least one primary shard (and all of its replicas) are missing. This means + At least one primary shard (and all of its replicas) is missing. This means that you are missing data: searches will return partial results, and indexing into that shard will return an exception. 
@@ -205,7 +205,7 @@ This is important for automated scripts and tests. If you create an index, Elasticsearch must broadcast the change in cluster state to all nodes. Those nodes must initialize those new shards, and then respond to the -master that the shards are `Started`. This process is fast, but because network +master that the shards are `Started`. This process is fast, but because of network latency may take 10–20ms. If you have an automated script that (a) creates an index and then (b) immediately From 11ad2a5d5658bb0675d3e624dd46d9101de76b2e Mon Sep 17 00:00:00 2001 From: romainsalles Date: Tue, 31 May 2016 22:44:54 +0200 Subject: [PATCH 10/88] Add missing "inside" word (#515) --- 070_Index_Mgmt/25_Mappings.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/070_Index_Mgmt/25_Mappings.asciidoc b/070_Index_Mgmt/25_Mappings.asciidoc index d9c6464d2..f9897072f 100644 --- a/070_Index_Mgmt/25_Mappings.asciidoc +++ b/070_Index_Mgmt/25_Mappings.asciidoc @@ -154,5 +154,5 @@ In summary: - **Good:** `kitchen` and `lawn-care` types inside the `products` index, because the two types are essentially the same schema -- **Bad:** `products` and `logs` types the `data` index, because the two types are +- **Bad:** `products` and `logs` types inside the `data` index, because the two types are mutually exclusive. Separate these into their own indices. From c7913f87d03b2a5601e0ff438d2ebc099978304f Mon Sep 17 00:00:00 2001 From: "Md.Abdulla-Al-Sun" Date: Wed, 1 Jun 2016 02:46:19 +0600 Subject: [PATCH 11/88] Update the misplacement of Comma (#524) In my sense, the comma should be after the closing inverted comma. 
--- 120_Proximity_Matching/05_Phrase_matching.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/120_Proximity_Matching/05_Phrase_matching.asciidoc b/120_Proximity_Matching/05_Phrase_matching.asciidoc index 645f8aedb..2d0af4695 100644 --- a/120_Proximity_Matching/05_Phrase_matching.asciidoc +++ b/120_Proximity_Matching/05_Phrase_matching.asciidoc @@ -95,7 +95,7 @@ all the words in exactly the order specified, with no words in-between. ==== What Is a Phrase -For a document to be considered a((("match_phrase query", "documents matching a phrase")))((("phrase matching", "criteria for matching documents"))) match for the phrase ``quick brown fox,'' the following must be true: +For a document to be considered a((("match_phrase query", "documents matching a phrase")))((("phrase matching", "criteria for matching documents"))) match for the phrase ``quick brown fox'', the following must be true: * `quick`, `brown`, and `fox` must all appear in the field. From e53cae33c13eec86293748358f614438f5e94b8f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Rafa=C5=82=20Bigaj?= <4rafalbigaj@gmail.com> Date: Tue, 31 May 2016 22:46:36 +0200 Subject: [PATCH 12/88] Colon added before code snippet (#516) Hope it helps :) --- 056_Sorting/88_String_sorting.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/056_Sorting/88_String_sorting.asciidoc b/056_Sorting/88_String_sorting.asciidoc index f35a59058..db220ea1b 100644 --- a/056_Sorting/88_String_sorting.asciidoc +++ b/056_Sorting/88_String_sorting.asciidoc @@ -22,7 +22,7 @@ and one that is `not_analyzed` for sorting. But storing the same string twice in the `_source` field is waste of space. What we really want to do is to pass in a _single field_ but to _index it in two different ways_. 

All of the _core_ field types (strings, numbers, Booleans, dates) accept a `fields` parameter ((("mapping (types)", "transforming simple mapping to multifield mapping")))((("types", "core simple field types", "accepting fields parameter")))((("fields parameter")))((("multifield mapping")))that allows you to transform a -simple mapping like +simple mapping like: [source,js] -------------------------------------------------- From e0b5241e7aa27c5d1a3229c5cfafd05194659e7e Mon Sep 17 00:00:00 2001 From: Prashant Tiwari Date: Wed, 1 Jun 2016 02:17:50 +0530 Subject: [PATCH 13/88] Fix token positions (#513) --- 240_Stopwords/20_Using_stopwords.asciidoc | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/240_Stopwords/20_Using_stopwords.asciidoc b/240_Stopwords/20_Using_stopwords.asciidoc index 4fa4e438a..3c3fd47f2 100644 --- a/240_Stopwords/20_Using_stopwords.asciidoc +++ b/240_Stopwords/20_Using_stopwords.asciidoc @@ -70,14 +70,14 @@ The quick and the dead "start_offset": 4, "end_offset": 9, "type": "<ALPHANUM>", - "position": 2 <1> + "position": 1 <1> }, { "token": "dead", "start_offset": 18, "end_offset": 22, "type": "<ALPHANUM>", - "position": 5 <1> + "position": 4 <1> } ] } From 981db26d62ef1e3964a7cd7f52d9c0f638d03c10 Mon Sep 17 00:00:00 2001 From: gopimanikandan Date: Wed, 1 Jun 2016 02:28:30 +0530 Subject: [PATCH 14/88] Update 60_restore.asciidoc (#496) The command in the documentation is not working as expected; it returns an error.
I have updated the command in the documentation after verifying that it works. --- 520_Post_Deployment/60_restore.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/520_Post_Deployment/60_restore.asciidoc b/520_Post_Deployment/60_restore.asciidoc index f0ef88b6d..a4dd37f45 100644 --- a/520_Post_Deployment/60_restore.asciidoc +++ b/520_Post_Deployment/60_restore.asciidoc @@ -66,7 +66,7 @@ The API can be invoked for the specific indices that you are recovering: [source,js] ---- -GET /_recovery/restored_index_3 +GET restored_index_3/_recovery ---- Or for all indices in your cluster, which may include other shards moving around, From ec0f3ea84515802a6ecbdd202cbd37e3e8426bc3 Mon Sep 17 00:00:00 2001 From: ericamick Date: Tue, 31 May 2016 17:02:55 -0400 Subject: [PATCH 15/88] Update 10_Intro.asciidoc (#507) --- 400_Relationships/10_Intro.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/400_Relationships/10_Intro.asciidoc b/400_Relationships/10_Intro.asciidoc index e9461a449..438c2d8ac 100644 --- a/400_Relationships/10_Intro.asciidoc +++ b/400_Relationships/10_Intro.asciidoc @@ -25,7 +25,7 @@ surprise to you--to manage((("relational databases", "managing relationships"))) entities. But relational ((("ACID transactions")))databases do have their limitations, besides their poor support -for full-text search. Joining entities at query time is expensive--more +for full-text search. Joining entities at query time is expensive--the more joins that are required, the more expensive the query. Performing joins between entities that live on different hardware is so expensive that it is just not practical. This places a limit on the amount of data that can be From b61f7d6dcec659f0e8af1f25289044a3eebbaca9 Mon Sep 17 00:00:00 2001 From: Aaron Johnson Date: Tue, 31 May 2016 17:03:54 -0400 Subject: [PATCH 16/88] Fix drunken indentation. (#497) * Fix drunken indentation. * Fix more drunken indentation.
--- 402_Nested/32_Nested_query.asciidoc | 52 +++++++++++++++++++++++------ 1 file changed, 42 insertions(+), 10 deletions(-) diff --git a/402_Nested/32_Nested_query.asciidoc b/402_Nested/32_Nested_query.asciidoc index 54c7d7702..d680ceb7a 100644 --- a/402_Nested/32_Nested_query.asciidoc +++ b/402_Nested/32_Nested_query.asciidoc @@ -12,17 +12,32 @@ GET /my_index/blogpost/_search "query": { "bool": { "must": [ - { "match": { "title": "eggs" }}, <1> + { + "match": { + "title": "eggs" <1> + } + }, { "nested": { "path": "comments", <2> "query": { "bool": { "must": [ <3> - { "match": { "comments.name": "john" }}, - { "match": { "comments.age": 28 }} + { + "match": { + "comments.name": "john" + } + }, + { + "match": { + "comments.age": 28 + } + } ] - }}}} + } + } + } + } ] }}} -------------------------- @@ -58,20 +73,37 @@ GET /my_index/blogpost/_search "query": { "bool": { "must": [ - { "match": { "title": "eggs" }}, + { + "match": { + "title": "eggs" + } + }, { "nested": { - "path": "comments", + "path": "comments", "score_mode": "max", <1> "query": { "bool": { "must": [ - { "match": { "comments.name": "john" }}, - { "match": { "comments.age": 28 }} + { + "match": { + "comments.name": "john" + } + }, + { + "match": { + "comments.age": 28 + } + } ] - }}}} + } + } + } + } ] -}}} + } + } +} -------------------------- <1> Give the root document the `_score` from the best-matching nested document. 
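The reindented `nested` query above uses `score_mode: max`. As a rough mental model of that parameter — an illustrative Python sketch, not Elasticsearch's scoring code — the scores of the matching nested comment documents are folded into the root document's `_score` according to the chosen mode:

```python
def combine_nested_scores(scores, score_mode="avg"):
    """Fold matching nested-document scores into one root-document score (toy model)."""
    if not scores:
        return 0.0
    if score_mode == "max":
        return max(scores)           # score_mode: max — best-matching nested doc wins
    if score_mode == "sum":
        return sum(scores)           # score_mode: sum — accumulate all matches
    if score_mode == "avg":
        return sum(scores) / len(scores)  # default — average of matching nested docs
    raise ValueError(f"unknown score_mode: {score_mode}")

print(combine_nested_scores([0.2, 0.8, 0.5], "max"))  # → 0.8
```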
From bb59dcd6f00bcf17b952847837524afa0e2371a9 Mon Sep 17 00:00:00 2001 From: Tobias Feldhaus Date: Fri, 3 Jun 2016 16:58:11 +0200 Subject: [PATCH 17/88] Update 40_bitsets.asciidoc (#514) From 388cb2f7794c76e042276147960e133187cb2500 Mon Sep 17 00:00:00 2001 From: Zachary Tong Date: Fri, 3 Jun 2016 13:37:10 -0400 Subject: [PATCH 18/88] Use new body specification for analyze API Closes #510 Closes #473 Closes #433 Closes #511 --- 052_Mapping_Analysis/40_Analysis.asciidoc | 7 +++++-- 052_Mapping_Analysis/45_Mapping.asciidoc | 12 +++++++----- 080_Structured_Search/05_term.asciidoc | 9 +++++++-- snippets/052_Mapping_Analysis/40_Analyze.json | 6 +++++- snippets/052_Mapping_Analysis/45_Mapping.json | 12 ++++++++++-- snippets/080_Structured_Search/05_Term_number.json | 7 +++++++ 6 files changed, 41 insertions(+), 12 deletions(-) diff --git a/052_Mapping_Analysis/40_Analysis.asciidoc b/052_Mapping_Analysis/40_Analysis.asciidoc index 2fd738a3c..3244139bf 100644 --- a/052_Mapping_Analysis/40_Analysis.asciidoc +++ b/052_Mapping_Analysis/40_Analysis.asciidoc @@ -159,8 +159,11 @@ parameters, and the text to analyze in the body: [source,js] -------------------------------------------------- -GET /_analyze?analyzer=standard -Text to analyze +GET /_analyze +{ + "analyzer": "standard", + "text": "Text to analyze" +} -------------------------------------------------- // SENSE: 052_Mapping_Analysis/40_Analyze.json diff --git a/052_Mapping_Analysis/45_Mapping.asciidoc b/052_Mapping_Analysis/45_Mapping.asciidoc index 3d0dbaa75..8408ce78b 100644 --- a/052_Mapping_Analysis/45_Mapping.asciidoc +++ b/052_Mapping_Analysis/45_Mapping.asciidoc @@ -144,10 +144,10 @@ can contain one of three values: `analyzed`:: First analyze the string and then index it. In other words, index this field as full text. - `not_analyzed`:: + `not_analyzed`:: Index this field, so it is searchable, but index the value exactly as specified. Do not analyze it. - `no`:: + `no`:: Don't index this field at all. 
This field will not be searchable. The default value of `index` for a `string` field is `analyzed`. If we @@ -204,7 +204,7 @@ for an existing type) later, using the `/_mapping` endpoint. ================================================ Although you can _add_ to an existing mapping, you can't _change_ existing field mappings. If a mapping already exists for a field, data from that -field has probably been indexed. If you were to change the field mapping, +field has probably been indexed. If you were to change the field mapping, the indexed data would be wrong and would not be properly searchable. ================================================ @@ -278,13 +278,15 @@ name. Compare the output of these two requests: [source,js] -------------------------------------------------- -GET /gb/_analyze?field=tweet +GET /gb/_analyze { + "field": "tweet", "text": "Black-cats" <1> } -GET /gb/_analyze?field=tag +GET /gb/_analyze { + "field": "tag", "text": "Black-cats" <1> } -------------------------------------------------- diff --git a/080_Structured_Search/05_term.asciidoc b/080_Structured_Search/05_term.asciidoc index 170ed3181..b65350536 100644 --- a/080_Structured_Search/05_term.asciidoc +++ b/080_Structured_Search/05_term.asciidoc @@ -147,9 +147,14 @@ can see that our UPC has been tokenized into smaller tokens: [source,js] -------------------------------------------------- -GET /my_store/_analyze?field=productID -XHDK-A-1293-#fJ3 +GET /my_store/_analyze +{ + "field": "productID", + "text": "XHDK-A-1293-#fJ3" +} -------------------------------------------------- +// SENSE: 080_Structured_Search/05_Term_text.json + [source,js] -------------------------------------------------- { diff --git a/snippets/052_Mapping_Analysis/40_Analyze.json b/snippets/052_Mapping_Analysis/40_Analyze.json index e2043871d..1e48df8d5 100644 --- a/snippets/052_Mapping_Analysis/40_Analyze.json +++ b/snippets/052_Mapping_Analysis/40_Analyze.json @@ -1,2 +1,6 @@ # Analyze the `text` with the `standard`
analyzer -GET /_analyze?analyzer=standard&text=Text to analyze +GET /_analyze +{ + "analyzer": "standard", + "text": "Text to analyze" +} diff --git a/snippets/052_Mapping_Analysis/45_Mapping.json b/snippets/052_Mapping_Analysis/45_Mapping.json index 683c73403..6e1ac8b3c 100644 --- a/snippets/052_Mapping_Analysis/45_Mapping.json +++ b/snippets/052_Mapping_Analysis/45_Mapping.json @@ -40,7 +40,15 @@ PUT /gb/_mapping/tweet GET /gb/_mapping/tweet # Test the analyzer for the `tweet` field -GET /gb/_analyze?field=tweet&text=Black-cats +GET /gb/_analyze +{ + "field": "tweet", + "text": "Black-cats" +} # Test the analyzer for the `tag` field -GET /gb/_analyze?field=tag&text=Black-cats \ No newline at end of file +GET /gb/_analyze +{ + "field": "tag", + "text": "Black-cats" +} diff --git a/snippets/080_Structured_Search/05_Term_number.json b/snippets/080_Structured_Search/05_Term_number.json index 25a3b7f99..d718770e2 100644 --- a/snippets/080_Structured_Search/05_Term_number.json +++ b/snippets/080_Structured_Search/05_Term_number.json @@ -26,6 +26,13 @@ GET /my_store/products/_search } } +# Check the analyzed tokens +GET /my_store/_analyze +{ + "field": "productID", + "text": "XHDK-A-1293-#fJ3" +} + # Same as above, without the `match_all` query GET /my_store/products/_search { From d982cd4ec9c493c3a42e1b76ce26a91f5d569f81 Mon Sep 17 00:00:00 2001 From: rabu3082 Date: Fri, 3 Jun 2016 19:50:02 +0200 Subject: [PATCH 19/88] corrects syntax for testing an analyzer (#505) TODO: the json for Sense has to be corrected as well (it says: "child \"uri\" fails because [\"uri\" must be a valid uri]"") --- 130_Partial_Matching/35_Search_as_you_type.asciidoc | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/130_Partial_Matching/35_Search_as_you_type.asciidoc b/130_Partial_Matching/35_Search_as_you_type.asciidoc index aa110867b..96485ebc5 100644 --- a/130_Partial_Matching/35_Search_as_you_type.asciidoc +++ b/130_Partial_Matching/35_Search_as_you_type.asciidoc @@ 
-93,7 +93,9 @@ the `analyze` API: [source,js] -------------------------------------------------- GET /my_index/_analyze?analyzer=autocomplete -quick brown +{ + "text": "quick brown" +} -------------------------------------------------- // SENSE: 130_Partial_Matching/35_Search_as_you_type.json From 496cd6f8d6c3673ade07178dc85fa45df0525e98 Mon Sep 17 00:00:00 2001 From: rabu3082 Date: Fri, 3 Jun 2016 19:51:15 +0200 Subject: [PATCH 20/88] corrects syntax for testing an analyzer in json for Sense (#506) at the moment, you are confronted with { "statusCode": 400, "error": "Bad Request", "message": "child \"uri\" fails because [\"uri\" must be a valid uri]", "validation": { "source": "query", "keys": [ "uri" ] } } when sending the request. --- snippets/130_Partial_Matching/35_Search_as_you_type.json | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/snippets/130_Partial_Matching/35_Search_as_you_type.json b/snippets/130_Partial_Matching/35_Search_as_you_type.json index ee8c86f11..6a39e14ed 100644 --- a/snippets/130_Partial_Matching/35_Search_as_you_type.json +++ b/snippets/130_Partial_Matching/35_Search_as_you_type.json @@ -31,7 +31,10 @@ PUT /my_index } # Test the autocomplete analyzer -GET /my_index/_analyze?analyzer=autocomplete&text=quick brown +GET /my_index/_analyze?analyzer=autocomplete +{ + "text": "quick brown" +} # Map the `name` field to use the `autocomplete` analyzer PUT /my_index/_mapping/mytype From 0b9c250f43c2bc8456ea248e76ffb39924209b19 Mon Sep 17 00:00:00 2001 From: Zachary Tong Date: Fri, 3 Jun 2016 13:52:35 -0400 Subject: [PATCH 21/88] Move analyzer into request body --- 130_Partial_Matching/35_Search_as_you_type.asciidoc | 6 +++--- snippets/130_Partial_Matching/35_Search_as_you_type.json | 3 ++- 2 files changed, 5 insertions(+), 4 deletions(-) diff --git a/130_Partial_Matching/35_Search_as_you_type.asciidoc b/130_Partial_Matching/35_Search_as_you_type.asciidoc index 96485ebc5..027cfb460 100644 --- 
a/130_Partial_Matching/35_Search_as_you_type.asciidoc +++ b/130_Partial_Matching/35_Search_as_you_type.asciidoc @@ -92,9 +92,10 @@ the `analyze` API: [source,js] -------------------------------------------------- -GET /my_index/_analyze?analyzer=autocomplete +GET /my_index/_analyze { - "text": "quick brown" + "analyzer": "autocomplete", + "text": "quick brown" } -------------------------------------------------- // SENSE: 130_Partial_Matching/35_Search_as_you_type.json @@ -358,4 +359,3 @@ This example uses the `keyword` tokenizer to convert the postcode string into a to turn postcodes into edge n-grams. <2> The `postcode_search` analyzer would treat search terms as if they were `not_analyzed`. - diff --git a/snippets/130_Partial_Matching/35_Search_as_you_type.json b/snippets/130_Partial_Matching/35_Search_as_you_type.json index 6a39e14ed..8af6e7f06 100644 --- a/snippets/130_Partial_Matching/35_Search_as_you_type.json +++ b/snippets/130_Partial_Matching/35_Search_as_you_type.json @@ -31,8 +31,9 @@ PUT /my_index } # Test the autocomplete analyzer -GET /my_index/_analyze?analyzer=autocomplete +GET /my_index/_analyze { + "analyzer": "autocomplete", "text": "quick brown" } From 268b91dcbfb62b020479c092480888d9d3e92834 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=C5=9E=C3=BCkr=C3=BC=20BEZEN?= Date: Fri, 3 Jun 2016 20:54:10 +0300 Subject: [PATCH 22/88] Update 40_bitsets.asciidoc (#527) Wrong order of letters, corrected. From 73ee7ff9287e8ef45469273fa8e3a1fd50f90e70 Mon Sep 17 00:00:00 2001 From: Sumit Gupta Date: Fri, 3 Jun 2016 23:25:10 +0530 Subject: [PATCH 23/88] Update 62_Geo_distance_agg.asciidoc (#546) Query breaking due to a comma in the geo_bounding_box lat value. I replaced the comma with a decimal point in the geo_bounding_box lat value.
--- 330_Geo_aggs/62_Geo_distance_agg.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/330_Geo_aggs/62_Geo_distance_agg.asciidoc b/330_Geo_aggs/62_Geo_distance_agg.asciidoc index c9e838ef9..f5c50ea29 100644 --- a/330_Geo_aggs/62_Geo_distance_agg.asciidoc +++ b/330_Geo_aggs/62_Geo_distance_agg.asciidoc @@ -21,7 +21,7 @@ GET /attractions/restaurant/_search "geo_bounding_box": { "location": { <2> "top_left": { - "lat": 40,8, + "lat": 40.8, "lon": -74.1 }, "bottom_right": { From 721a83c6f72ff3cb81771d286811fcd42e7723e1 Mon Sep 17 00:00:00 2001 From: Zachary Tong Date: Fri, 3 Jun 2016 13:59:32 -0400 Subject: [PATCH 24/88] Use new body specification for analyze API Related to #509 --- .../30_Controlling_analysis.asciidoc | 14 ++++++++++---- snippets/100_Full_Text_Search/30_Analysis.json | 12 ++++++++++-- 2 files changed, 20 insertions(+), 6 deletions(-) diff --git a/100_Full_Text_Search/30_Controlling_analysis.asciidoc b/100_Full_Text_Search/30_Controlling_analysis.asciidoc index d5a2091d1..fffd6bb93 100644 --- a/100_Full_Text_Search/30_Controlling_analysis.asciidoc +++ b/100_Full_Text_Search/30_Controlling_analysis.asciidoc @@ -34,11 +34,17 @@ analyzed at index time by using the `analyze` API to analyze the word `Foxes`: [source,js] -------------------------------------------------- -GET /my_index/_analyze?field=my_type.title <1> -Foxes +GET /my_index/_analyze +{ + "field": "my_type.title", <1> + "text": "Foxes" +} -GET /my_index/_analyze?field=my_type.english_title <2> -Foxes +GET /my_index/_analyze +{ + "field": "my_type.english_title", <2> + "text": "Foxes" +} -------------------------------------------------- // SENSE: 100_Full_Text_Search/30_Analysis.json diff --git a/snippets/100_Full_Text_Search/30_Analysis.json b/snippets/100_Full_Text_Search/30_Analysis.json index 76e316e2f..2a692c217 100644 --- a/snippets/100_Full_Text_Search/30_Analysis.json +++ b/snippets/100_Full_Text_Search/30_Analysis.json @@ -22,10 +22,18 @@ PUT /my_index } # 
Test the analysis of the `title` field -GET /my_index/_analyze?field=my_type.title&text=Foxes +GET /my_index/_analyze +{ + "field": "my_type.title", <1> + "text": "Foxes" +} # Test the analysis of the `english_title` field -GET /my_index/_analyze?field=my_type.english_title&text=Foxes +GET /my_index/_analyze +{ + "field": "my_type.english_title", <2> + "text": "Foxes" +} # Get query explanation for `title` vs `english_title` GET /my_index/my_type/_validate/query?explain From e430c6d608904b2b5ad1dd9131e0798da83c0755 Mon Sep 17 00:00:00 2001 From: Zachary Tong Date: Fri, 3 Jun 2016 14:01:49 -0400 Subject: [PATCH 25/88] Remove unnecessary braces Closes #440 --- 080_Structured_Search/30_existsmissing.asciidoc | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/080_Structured_Search/30_existsmissing.asciidoc b/080_Structured_Search/30_existsmissing.asciidoc index 9a9a1a8da..f3ad31570 100644 --- a/080_Structured_Search/30_existsmissing.asciidoc +++ b/080_Structured_Search/30_existsmissing.asciidoc @@ -247,8 +247,8 @@ is really executed as { "bool": { "should": [ - { "exists": { "field": { "name.first" }}}, - { "exists": { "field": { "name.last" }}} + { "exists": { "field": "name.first" }}, + { "exists": { "field": "name.last" }} ] } } From 06c76953f4f976cf1c97e8201039369bbfc01a33 Mon Sep 17 00:00:00 2001 From: Zachary Tong Date: Fri, 3 Jun 2016 14:03:42 -0400 Subject: [PATCH 26/88] Remove stray comma Closes #438 --- 100_Full_Text_Search/15_Combining_queries.asciidoc | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/100_Full_Text_Search/15_Combining_queries.asciidoc b/100_Full_Text_Search/15_Combining_queries.asciidoc index 20f8b2fc4..ee02fadba 100644 --- a/100_Full_Text_Search/15_Combining_queries.asciidoc +++ b/100_Full_Text_Search/15_Combining_queries.asciidoc @@ -1,7 +1,7 @@ [[bool-query]] === Combining Queries -In <> we discussed how to((("full text search", "combining queries"))), use the `bool` filter to combine +In <> we 
discussed how to((("full text search", "combining queries"))) use the `bool` filter to combine multiple filter clauses with `and`, `or`, and `not` logic. In query land, the `bool` query does a similar job but with one important difference. @@ -107,4 +107,3 @@ The results would include only documents whose `title` field contains `"brown" AND "fox"`, `"brown" AND "dog"`, or `"fox" AND "dog"`. If a document contains all three, it would be considered more relevant than those that contain just two of the three. - From 40f77159b4192bb568168c0bf6100a75e2c6e9a9 Mon Sep 17 00:00:00 2001 From: Zachary Tong Date: Fri, 3 Jun 2016 14:17:41 -0400 Subject: [PATCH 27/88] "found" not "exists" Closes #361 --- 030_Data/15_Get.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/030_Data/15_Get.asciidoc b/030_Data/15_Get.asciidoc index 26b7ded68..3258046e9 100644 --- a/030_Data/15_Get.asciidoc +++ b/030_Data/15_Get.asciidoc @@ -93,7 +93,7 @@ filtered out the `date` field: "_type" : "blog", "_id" : "123", "_version" : 1, - "exists" : true, + "found" : true, "_source" : { "title": "My first blog entry" , "text": "Just trying this out..." From 827fc699d95ad25120d525f94eee91add4842e2b Mon Sep 17 00:00:00 2001 From: Zachary Tong Date: Fri, 3 Jun 2016 14:22:25 -0400 Subject: [PATCH 28/88] Add semi-colon Closes #329 --- 230_Stemming/00_Intro.asciidoc | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/230_Stemming/00_Intro.asciidoc b/230_Stemming/00_Intro.asciidoc index c4f06931f..78be00e07 100644 --- a/230_Stemming/00_Intro.asciidoc +++ b/230_Stemming/00_Intro.asciidoc @@ -36,7 +36,7 @@ and overstemming. _Understemming_ is the failure to reduce words with the same meaning to the same root. For example, `jumped` and `jumps` may be reduced to `jump`, while -`jumping` may be reduced to `jumpi`. Understemming reduces retrieval +`jumping` may be reduced to `jumpi`. Understemming reduces retrieval; relevant documents are not returned. 
_Overstemming_ is the failure to keep two words with distinct meanings separate. @@ -69,6 +69,3 @@ First we will discuss the two classes of stemmers available in Elasticsearch choose the right stemmer for your needs in <>. Finally, we will discuss options for tailoring stemming in <> and <>. - - - From e302256fba1c154c7d9366e32b01647166546bdf Mon Sep 17 00:00:00 2001 From: jasiustasiu Date: Fri, 3 Jun 2016 20:36:42 +0200 Subject: [PATCH 29/88] SQL example for finding distinct counts corrected (#306) --- 300_Aggregations/60_cardinality.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/300_Aggregations/60_cardinality.asciidoc b/300_Aggregations/60_cardinality.asciidoc index e92aef310..0ec11c9b1 100644 --- a/300_Aggregations/60_cardinality.asciidoc +++ b/300_Aggregations/60_cardinality.asciidoc @@ -7,7 +7,7 @@ _unique_ count. ((("unique counts"))) You may be familiar with the SQL version: [source, sql] -------- -SELECT DISTINCT(color) +SELECT COUNT(DISTINCT color) FROM cars -------- From 9fa63fdd9272c995e7a4b2def5a094374506f829 Mon Sep 17 00:00:00 2001 From: Peter Dyson Date: Thu, 7 Jul 2016 12:34:01 +1000 Subject: [PATCH 30/88] Mentioning order of preferred Java versions. --- 510_Deployment/30_other.asciidoc | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/510_Deployment/30_other.asciidoc b/510_Deployment/30_other.asciidoc index 5e0ab281e..be1a9b1f9 100644 --- a/510_Deployment/30_other.asciidoc +++ b/510_Deployment/30_other.asciidoc @@ -8,7 +8,9 @@ tests from Lucene often expose bugs in the JVM itself. These bugs range from mild annoyances to serious segfaults, so it is best to use the latest version of the JVM where possible. -Java 7 is strongly preferred over Java 6. Either Oracle or OpenJDK are acceptable. They are comparable in performance and stability. +Java 8 is preferred over Java 7 and both Java 8/Java 7 are strongly preferred over Java 6. + +Either Oracle or OpenJDK are acceptable. 
They are comparable in performance and stability. If your application is written in Java and you are using the transport client or node client, make sure the JVM running your application is identical to the From 6d7ae03b4d0fd762fa6f0e8bb6b8f2a2557a3f2a Mon Sep 17 00:00:00 2001 From: Volodymyr Sorokin Date: Mon, 18 Jul 2016 19:15:13 +0300 Subject: [PATCH 31/88] Removed obsolete 'replication' param from docs (#569) --- 040_Distributed_CRUD/25_Partial_updates.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/040_Distributed_CRUD/25_Partial_updates.asciidoc b/040_Distributed_CRUD/25_Partial_updates.asciidoc index f201c783c..0c91285ed 100644 --- a/040_Distributed_CRUD/25_Partial_updates.asciidoc +++ b/040_Distributed_CRUD/25_Partial_updates.asciidoc @@ -24,7 +24,7 @@ document: `Node 3` reports success to the coordinating node, which reports success to the client. -The `update` API also accepts the `routing`, `replication`, `consistency`, and +The `update` API also accepts the `routing`, `consistency`, and `timeout` parameters that are explained in <>. 
.Document-Based Replication From 10cc25dd931c83438b4fe968bc01e3c74c601493 Mon Sep 17 00:00:00 2001 From: Scs Date: Tue, 19 Jul 2016 00:22:54 +0800 Subject: [PATCH 32/88] update example code (#564) --- 260_Synonyms/20_Using_synonyms.asciidoc | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/260_Synonyms/20_Using_synonyms.asciidoc b/260_Synonyms/20_Using_synonyms.asciidoc index 3c89c7c58..83aab95a2 100644 --- a/260_Synonyms/20_Using_synonyms.asciidoc +++ b/260_Synonyms/20_Using_synonyms.asciidoc @@ -52,8 +52,11 @@ Testing our analyzer with the `analyze` API shows the following: [source,json] ------------------------------------- -GET /my_index/_analyze?analyzer=my_synonyms -Elizabeth is the English queen +GET /my_index/_analyze +{ + "analyzer" : "my_synonyms", + "text" : "Elizabeth is the English queen" +} ------------------------------------- [source,text] From ec302c328926e0d4a393cd896f2e6285ff2f6ff4 Mon Sep 17 00:00:00 2001 From: Scs Date: Tue, 19 Jul 2016 00:23:11 +0800 Subject: [PATCH 33/88] fix a spelling mistake (#563) --- 260_Synonyms/70_Symbol_synonyms.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/260_Synonyms/70_Symbol_synonyms.asciidoc b/260_Synonyms/70_Symbol_synonyms.asciidoc index e347858bd..e5ea10dfc 100644 --- a/260_Synonyms/70_Symbol_synonyms.asciidoc +++ b/260_Synonyms/70_Symbol_synonyms.asciidoc @@ -7,7 +7,7 @@ string aliases used to represent symbols that would otherwise be removed during tokenization. While most punctuation is seldom important for full-text search, character -combinations like emoticons((("emoticons"))) may be very signficant, even changing the meaning +combinations like emoticons((("emoticons"))) may be very significant, even changing the meaning of the text.
Compare these: [role="pagebreak-before"] From b302143817f6a61fa788208020a9c21ae1e9118e Mon Sep 17 00:00:00 2001 From: Mono Lin Date: Tue, 19 Jul 2016 00:26:31 +0800 Subject: [PATCH 34/88] Update 45_Mapping.asciidoc (#562) Missed a comma in the JSON --- 052_Mapping_Analysis/45_Mapping.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/052_Mapping_Analysis/45_Mapping.asciidoc b/052_Mapping_Analysis/45_Mapping.asciidoc index 8408ce78b..81c781f4a 100644 --- a/052_Mapping_Analysis/45_Mapping.asciidoc +++ b/052_Mapping_Analysis/45_Mapping.asciidoc @@ -280,7 +280,7 @@ name. Compare the output of these two requests: -------------------------------------------------- GET /gb/_analyze { - "field": "tweet" + "field": "tweet", "text": "Black-cats" <1> } From 6010dde92142b3468143d858303824ea3c7dded8 Mon Sep 17 00:00:00 2001 From: tpetrytsyn Date: Mon, 18 Jul 2016 19:33:16 +0300 Subject: [PATCH 35/88] Update 30_Controlling_analysis.asciidoc (#559) "my_type" prefixes in fields are wrong.
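Many of the snippet fixes in this series make the same mechanical change to the `_analyze` API: move parameters such as `analyzer`, `field`, and `text` out of the query string and into a JSON request body. The rewrite is regular enough to script; this rough standard-library sketch (the helper is illustrative, not an official tool, and assumes single-valued, unencoded parameters as in the book's snippets):

```python
import json
from urllib.parse import urlsplit, parse_qsl

def body_form(url):
    """Rewrite GET /idx/_analyze?k=v&... into (path, JSON body)."""
    parts = urlsplit(url)
    body = dict(parse_qsl(parts.query))  # each k=v pair becomes a body field
    return parts.path, json.dumps(body)

path, body = body_form("/spanish_docs/_analyze?analyzer=es_std&text=El veloz zorro")
print(path)
print(body)
```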
--- 100_Full_Text_Search/30_Controlling_analysis.asciidoc | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/100_Full_Text_Search/30_Controlling_analysis.asciidoc b/100_Full_Text_Search/30_Controlling_analysis.asciidoc index fffd6bb93..eb70afeeb 100644 --- a/100_Full_Text_Search/30_Controlling_analysis.asciidoc +++ b/100_Full_Text_Search/30_Controlling_analysis.asciidoc @@ -36,13 +36,13 @@ analyzed at index time by using the `analyze` API to analyze the word `Foxes`: -------------------------------------------------- GET /my_index/_analyze { - "field": "my_type.title", <1> + "field": "title", <1> "text": "Foxes" } GET /my_index/_analyze { - "field": "my_type.english_title", <2> + "field": "english_title", <2> "text": "Foxes" } -------------------------------------------------- From 40b760a14b50e67f0b90fd78428929df39ead9f9 Mon Sep 17 00:00:00 2001 From: Jakob Reiter Date: Mon, 18 Jul 2016 18:34:04 +0200 Subject: [PATCH 36/88] Changed "field": "employee.hobby" to "field": "hobby" (#558) Changed "field": "employee.hobby" to "field": "hobby", otherwise the hobbies are not returned. 
--- 404_Parent_Child/60_Children_agg.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/404_Parent_Child/60_Children_agg.asciidoc b/404_Parent_Child/60_Children_agg.asciidoc index 1d9accc59..6af80f0ec 100644 --- a/404_Parent_Child/60_Children_agg.asciidoc +++ b/404_Parent_Child/60_Children_agg.asciidoc @@ -27,7 +27,7 @@ GET /company/branch/_search "aggs": { "hobby": { "terms": { <3> - "field": "employee.hobby" + "field": "hobby" } } } From 43e1c190acf1a99e3458b794e2fb6d2a26ba7e33 Mon Sep 17 00:00:00 2001 From: Lars Andersen Date: Mon, 18 Jul 2016 18:55:15 +0200 Subject: [PATCH 37/88] Add missing word (#557) --- 110_Multi_Field_Search/40_Field_centric.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/110_Multi_Field_Search/40_Field_centric.asciidoc b/110_Multi_Field_Search/40_Field_centric.asciidoc index f1fd1466b..64a80eaab 100644 --- a/110_Multi_Field_Search/40_Field_centric.asciidoc +++ b/110_Multi_Field_Search/40_Field_centric.asciidoc @@ -3,7 +3,7 @@ All three of the preceding problems stem from ((("field-centric queries")))((("multifield search", "field-centric queries, problems with")))((("most fields queries", "problems with field-centric queries")))`most_fields` being _field-centric_ rather than _term-centric_: it looks for the most matching -_fields_, when really what we're interested is the most matching _terms_. +_fields_, when really what we're interested in is the most matching _terms_. NOTE: The `best_fields` type is also field-centric((("best fields queries", "problems with field-centric queries"))) and suffers from similar problems. 
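The field-centric versus term-centric distinction that this wording fix touches can be made concrete with a toy counter. This is an illustration of the idea only, not Lucene's actual scoring: a field-centric score rewards matching in many fields, a term-centric score rewards covering many distinct query terms.

```python
# Toy illustration (not real Lucene scoring).
query = ["peter", "smith"]

def fields_matched(doc):
    # field-centric: how many fields contain at least one query term?
    return sum(any(t in value.lower() for t in query) for value in doc.values())

def terms_matched(doc):
    # term-centric: how many distinct query terms appear anywhere?
    text = " ".join(doc.values()).lower()
    return sum(t in text for t in query)

doc_a = {"first_name": "Will Smith", "last_name": "Smith"}   # one term, two fields
doc_b = {"first_name": "Peter Smith", "last_name": "Jones"}  # two terms, one field

print(fields_matched(doc_a), terms_matched(doc_a))  # 2 1
print(fields_matched(doc_b), terms_matched(doc_b))  # 1 2
```

A field-centric `most_fields` prefers `doc_a`; the document we usually want, `doc_b`, which covers both terms, is the term-centric winner.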
From 75f12b0fea5436d2d6204f900b839f794e56be85 Mon Sep 17 00:00:00 2001 From: Stephen K Hess Date: Mon, 18 Jul 2016 12:56:52 -0400 Subject: [PATCH 38/88] Fixed wording (#553) * Fixed wording * Combined 2 sentences for brevity --- 054_Query_DSL/65_Queries_vs_filters.asciidoc | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/054_Query_DSL/65_Queries_vs_filters.asciidoc b/054_Query_DSL/65_Queries_vs_filters.asciidoc index 981465b03..11f7eda88 100644 --- a/054_Query_DSL/65_Queries_vs_filters.asciidoc +++ b/054_Query_DSL/65_Queries_vs_filters.asciidoc @@ -16,8 +16,8 @@ The answer is always a simple, binary yes|no. * Is the `lat_lon` field within `10km` of a specified point? When used in a _querying context_, the query becomes a "scoring" query. Similar to -its non-scoring sibling, this determines if a document matches. But it also determines -how _well_ does the document matches. +its non-scoring sibling, this determines _if_ a document matches and +how _well_ the document matches. A typical use for a query is to find documents: From 70bea604d19a030ba611fc73628ccee188ad0999 Mon Sep 17 00:00:00 2001 From: Peter Dyson Date: Tue, 19 Jul 2016 05:52:47 -0700 Subject: [PATCH 39/88] Improved clarity and java 6 no longer supported. (#565) --- 510_Deployment/30_other.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/510_Deployment/30_other.asciidoc b/510_Deployment/30_other.asciidoc index be1a9b1f9..a7a744a61 100644 --- a/510_Deployment/30_other.asciidoc +++ b/510_Deployment/30_other.asciidoc @@ -8,7 +8,7 @@ tests from Lucene often expose bugs in the JVM itself. These bugs range from mild annoyances to serious segfaults, so it is best to use the latest version of the JVM where possible. -Java 8 is preferred over Java 7 and both Java 8/Java 7 are strongly preferred over Java 6. +Java 8 is preferred over Java 7. Java 6 is no longer supported. Either Oracle or OpenJDK are acceptable. They are comparable in performance and stability. 
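A startup check along the lines of the version policy above is easy to sketch. Pre-Java-9 JVMs report versions in the legacy `1.x` scheme (`1.7.0_79`), so the major version needs a small parse. The helper and cutoffs below are illustrative only, not part of Elasticsearch:

```python
# Parse "java.version"-style strings; pre-9 JVMs report "1.8.0_91" etc.
def java_major(version):
    parts = version.split(".")
    return int(parts[1]) if parts[0] == "1" and len(parts) > 1 else int(parts[0])

def acceptable(version):
    # Java 8 preferred, Java 7 still acceptable, Java 6 unsupported.
    return java_major(version) >= 7

print([acceptable(v) for v in ("1.6.0_45", "1.7.0_79", "1.8.0_91")])  # [False, True, True]
```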
From 787b88513afb8e0ed34488666821ee1be6032b4f Mon Sep 17 00:00:00 2001 From: Adrien Grand Date: Mon, 25 Jul 2016 21:24:31 +0200 Subject: [PATCH 40/88] Fix description of the `timeout` parameter. (#574) Closes #536 --- .../15_Search_options.asciidoc | 28 ++++++++----------- 1 file changed, 12 insertions(+), 16 deletions(-) diff --git a/060_Distributed_Search/15_Search_options.asciidoc b/060_Distributed_Search/15_Search_options.asciidoc index af8237a20..2d9e4ec37 100644 --- a/060_Distributed_Search/15_Search_options.asciidoc +++ b/060_Distributed_Search/15_Search_options.asciidoc @@ -33,34 +33,30 @@ like the user's session ID. ==== timeout -By default, the coordinating node waits((("search options", "timeout"))) to receive a response from all shards. +By default, shards process all the data they have before returning a response to +the coordinating node, which will in turn merge these responses to build the +final response. + +This means that the time it takes to run a search request is the sum of the time +it takes to process the slowest shard and the time it takes to merge responses. If one node is having trouble, it could slow down the response to all search requests. -The `timeout` parameter tells((("timeout parameter"))) the coordinating node how long it should wait -before giving up and just returning the results that it already has. It can be -better to return some results than none at all. +The `timeout` parameter tells((("timeout parameter"))) shards how long they +are allowed to process data before returning a response to the coordinating +node. If there was not enough time to process all data, results for this shard +will be partial, even possibly empty. 
-The response to a search request will indicate whether the search timed out and -how many shards responded successfully: +The response to a search request will indicate whether any shards returned a +partial response with the `timed_out` property: [source,js] -------------------------------------------------- ... "timed_out": true, <1> - "_shards": { - "total": 5, - "successful": 4, - "failed": 1 <2> - }, ... -------------------------------------------------- <1> The search request timed out. -<2> One shard out of five failed to respond in time. - -If all copies of a shard fail for other reasons--perhaps because of a -hardware failure--this will also be reflected in the `_shards` section of -the response. [[search-routing]] ==== routing From f3b857c4eafc174469e6bc97de26f2923ff61e87 Mon Sep 17 00:00:00 2001 From: Zachary Tong Date: Mon, 25 Jul 2016 15:33:48 -0400 Subject: [PATCH 41/88] Add brief warning about timeout best-effort --- .../15_Search_options.asciidoc | 20 +++++++++++++++++-- 1 file changed, 18 insertions(+), 2 deletions(-) diff --git a/060_Distributed_Search/15_Search_options.asciidoc b/060_Distributed_Search/15_Search_options.asciidoc index 2d9e4ec37..bd2a4d8b9 100644 --- a/060_Distributed_Search/15_Search_options.asciidoc +++ b/060_Distributed_Search/15_Search_options.asciidoc @@ -18,7 +18,7 @@ the _bouncing results_ problem.((("bouncing results problem"))) .Bouncing Results **** -Imagine that you are sorting your results by a `timestamp` field, and +Imagine that you are sorting your results by a `timestamp` field, and two documents have the same timestamp. Because search requests are round-robined between all available shard copies, these two documents may be returned in one order when the request is served by the primary, and in @@ -58,6 +58,22 @@ partial response with the `timed_out` property: -------------------------------------------------- <1> The search request timed out. 
+[WARNING] +==== +It's important to know that the timeout is still a best-effort operation; it's +possible for the query to surpass the allotted timeout. There are two reasons for +this behavior: + +1. Timeout checks are performed on a per-document basis. However, some query types +have a significant amount of work that must be performed *before* documents are evaluated. +This "setup" phase does not consult the timeout, and so very long setup times can cause +the overall latency to shoot past the timeout. +2. Because the timeout is checked once per document, a very long query can execute on a single +document and it won't time out until the next document is evaluated. This also means +poorly written scripts (e.g. ones with infinite loops) will be allowed to execute +forever. +==== + [[search-routing]] ==== routing @@ -79,7 +95,7 @@ discuss it in detail in <>. ==== search_type The default search type is `query_then_fetch` ((("query_then_fetch search type")))((("search options", "search_type")))((("search_type"))). In some cases, you might want to explicitly set the `search_type` -to `dfs_query_then_fetch` to improve the accuracy of relevance scoring: +to `dfs_query_then_fetch` to improve the accuracy of relevance scoring: [source,js] -------------------------------------------------- From fd06bf81c53e3c707422f7b33ab907580656ba2b Mon Sep 17 00:00:00 2001 From: Jason Tedor Date: Tue, 2 Aug 2016 12:40:12 -0400 Subject: [PATCH 42/88] Correct heap size configuration This commit corrects the heap size configuration via JVM flags. The syntax previously displayed was accepted in the 1.x series of Elasticsearch, but is not accepted in the 2.x series of Elasticsearch.
Relates #579 --- 510_Deployment/50_heap.asciidoc | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/510_Deployment/50_heap.asciidoc b/510_Deployment/50_heap.asciidoc index 4bb0102f4..cd6fa644e 100644 --- a/510_Deployment/50_heap.asciidoc +++ b/510_Deployment/50_heap.asciidoc @@ -15,12 +15,12 @@ As an example, you can set it via the command line as follows: export ES_HEAP_SIZE=10g ---- -Alternatively, you can pass in the heap size via a command-line argument when starting +Alternatively, you can pass in the heap size via JVM flags when starting the process, if that is easier for your setup: [source,bash] ---- -./bin/elasticsearch -Xmx10g -Xms10g <1> +ES_JAVA_OPTS="-Xms10g -Xmx10g" ./bin/elasticsearch <1> ---- <1> Ensure that the min (`Xms`) and max (`Xmx`) sizes are the same to prevent the heap from resizing at runtime, a very costly process. From 19a487529d980339cdecbe5c8bf1272b589c6774 Mon Sep 17 00:00:00 2001 From: msalistra Date: Wed, 17 Aug 2016 16:51:37 +0300 Subject: [PATCH 43/88] Comma instead of dot in lat field in query example (#585) There is a mistake in the query example: "lat" can't be 40,8. Fixed 40,8 -> 40.8 --- 330_Geo_aggs/66_Geo_bounds_agg.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/330_Geo_aggs/66_Geo_bounds_agg.asciidoc b/330_Geo_aggs/66_Geo_bounds_agg.asciidoc index 471d78553..19035ebc1 100644 --- a/330_Geo_aggs/66_Geo_bounds_agg.asciidoc +++ b/330_Geo_aggs/66_Geo_bounds_agg.asciidoc @@ -21,7 +21,7 @@ GET /attractions/restaurant/_search "geo_bounding_box": { "location": { "top_left": { - "lat": 40,8, + "lat": 40.8, "lon": -74.1 }, "bottom_right": { From 039190b2f12866f01204bac1fa48dd6285a67ca9 Mon Sep 17 00:00:00 2001 From: msalistra Date: Wed, 17 Aug 2016 16:51:52 +0300 Subject: [PATCH 44/88] Another comma instead of dot in lat field in query example (#586) There is a mistake in the query example: "lat" can't be 40,8. Fixed 40,8 -> 40.8 --- 330_Geo_aggs/66_Geo_bounds_agg.asciidoc | 2 +- 1 file
changed, 1 insertion(+), 1 deletion(-) diff --git a/330_Geo_aggs/66_Geo_bounds_agg.asciidoc b/330_Geo_aggs/66_Geo_bounds_agg.asciidoc index 19035ebc1..70937c689 100644 --- a/330_Geo_aggs/66_Geo_bounds_agg.asciidoc +++ b/330_Geo_aggs/66_Geo_bounds_agg.asciidoc @@ -86,7 +86,7 @@ GET /attractions/restaurant/_search "geo_bounding_box": { "location": { "top_left": { - "lat": 40,8, + "lat": 40.8, "lon": -74.1 }, "bottom_right": { From 880c645989b3934b02442285da590d16ca1c89a0 Mon Sep 17 00:00:00 2001 From: Clinton Gormley Date: Thu, 18 Aug 2016 12:46:01 +0200 Subject: [PATCH 45/88] Update 60_file_descriptors.asciidoc max_file_descriptors is now found in node stats, not nodes info --- 510_Deployment/60_file_descriptors.asciidoc | 43 ++++++++++++--------- 1 file changed, 24 insertions(+), 19 deletions(-) diff --git a/510_Deployment/60_file_descriptors.asciidoc b/510_Deployment/60_file_descriptors.asciidoc index 51b2a7c6f..41a675086 100644 --- a/510_Deployment/60_file_descriptors.asciidoc +++ b/510_Deployment/60_file_descriptors.asciidoc @@ -19,27 +19,32 @@ have enough file descriptors: [source,js] ---- -GET /_nodes/process - { - "cluster_name": "elasticsearch__zach", - "nodes": { - "TGn9iO2_QQKb0kavcLbnDw": { - "name": "Zach", - "transport_address": "inet[/192.168.1.131:9300]", - "host": "zacharys-air", - "ip": "192.168.1.131", - "version": "2.0.0-SNAPSHOT", - "build": "612f461", - "http_address": "inet[/192.168.1.131:9200]", - "process": { - "refresh_interval_in_millis": 1000, - "id": 19808, - "max_file_descriptors": 64000, <1> - "mlockall": true - } + "cluster_name": "elasticsearch", + "nodes": { + "nLd81iLsRcqmah-cuHAbaQ": { + "timestamp": 1471516160318, + "name": "Marsha Rosenberg", + "transport_address": "127.0.0.1:9300", + "host": "127.0.0.1", + "ip": [ + "127.0.0.1:9300", + "NONE" + ], + "process": { + "timestamp": 1471516160318, + "open_file_descriptors": 155, + "max_file_descriptors": 10240, <1> + "cpu": { + "percent": 0, + "total_in_millis": 25084 + }, + "mem": { 
+ "total_virtual_in_bytes": 5221900288 + } } - } + } + } } ---- <1> The `max_file_descriptors` field shows the number of available descriptors that From 4bebdeefb288b4035cf7fc4e209cc254f2e57196 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Gabriel=20Rodr=C3=ADguez=20Alberich?= Date: Thu, 6 Oct 2016 17:54:35 +0200 Subject: [PATCH 46/88] Correct the query that would match (#605) The query that would match the document with a `comments` field of type `object` needs fully qualified field names `comments.name` and `comments.age`. --- 402_Nested/30_Nested_objects.asciidoc | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/402_Nested/30_Nested_objects.asciidoc b/402_Nested/30_Nested_objects.asciidoc index be4392d8b..327d03748 100644 --- a/402_Nested/30_Nested_objects.asciidoc +++ b/402_Nested/30_Nested_objects.asciidoc @@ -47,8 +47,8 @@ GET /_search "query": { "bool": { "must": [ - { "match": { "name": "Alice" }}, - { "match": { "age": 28 }} <1> + { "match": { "comments.name": "Alice" }}, + { "match": { "comments.age": 28 }} <1> ] } } From 342c67e34c0b3040c35c6a2dc1c2e8d88e8189cc Mon Sep 17 00:00:00 2001 From: Anatolii Stepaniuk Date: Thu, 6 Oct 2016 18:56:46 +0300 Subject: [PATCH 47/88] fixed typo (#600) --- 310_Geopoints/50_Sorting_by_distance.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/310_Geopoints/50_Sorting_by_distance.asciidoc b/310_Geopoints/50_Sorting_by_distance.asciidoc index 8d37804f5..ce09e8b84 100644 --- a/310_Geopoints/50_Sorting_by_distance.asciidoc +++ b/310_Geopoints/50_Sorting_by_distance.asciidoc @@ -17,7 +17,7 @@ GET /attractions/restaurant/_search "type": "indexed", "location": { "top_left": { - "lat": 40,8, + "lat": 40.8, "lon": -74.0 }, "bottom_right": { From b78bf0923e225cf57dc24a654ca92e4015fe76e3 Mon Sep 17 00:00:00 2001 From: Hsu Chen-Wei Date: Thu, 6 Oct 2016 10:57:25 -0500 Subject: [PATCH 48/88] Fix typo (#594) Fix typo --- 070_Index_Mgmt/25_Mappings.asciidoc | 2 +- 1 file changed, 1 insertion(+), 
1 deletion(-) diff --git a/070_Index_Mgmt/25_Mappings.asciidoc b/070_Index_Mgmt/25_Mappings.asciidoc index f9897072f..1d91511f5 100644 --- a/070_Index_Mgmt/25_Mappings.asciidoc +++ b/070_Index_Mgmt/25_Mappings.asciidoc @@ -153,6 +153,6 @@ problems. In these cases, it's much better to utilize two independent indices. In summary: - **Good:** `kitchen` and `lawn-care` types inside the `products` index, because -the two types are essentially the same schema +the two types are essentially the same schema. - **Bad:** `products` and `logs` types inside the `data` index, because the two types are mutually exclusive. Separate these into their own indices. From c0c62a8dc3355662b82f9408239e32fe2fc826a6 Mon Sep 17 00:00:00 2001 From: Hsu Chen-Wei Date: Thu, 6 Oct 2016 10:58:47 -0500 Subject: [PATCH 49/88] Fix text is missing (#593) Fix text is missing error --- 070_Index_Mgmt/20_Custom_Analyzers.asciidoc | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/070_Index_Mgmt/20_Custom_Analyzers.asciidoc b/070_Index_Mgmt/20_Custom_Analyzers.asciidoc index e930833c2..5e9d7e486 100644 --- a/070_Index_Mgmt/20_Custom_Analyzers.asciidoc +++ b/070_Index_Mgmt/20_Custom_Analyzers.asciidoc @@ -171,7 +171,9 @@ After creating the index, use the `analyze` API to((("analyzers", "testing using [source,js] -------------------------------------------------- GET /my_index/_analyze?analyzer=my_analyzer -The quick & brown fox +{ + "text": "The quick & brown fox" +} -------------------------------------------------- // SENSE: 070_Index_Mgmt/20_Custom_analyzer.json From ac2193408760ba8c083a9e40f5a83cc7ad3e1872 Mon Sep 17 00:00:00 2001 From: Zachary Tong Date: Thu, 6 Oct 2016 12:02:13 -0400 Subject: [PATCH 50/88] Remove analyzer in URL, update snippet --- 070_Index_Mgmt/20_Custom_Analyzers.asciidoc | 5 +++-- snippets/070_Index_Mgmt/20_Custom_analyzer.json | 6 +++++- 2 files changed, 8 insertions(+), 3 deletions(-) diff --git a/070_Index_Mgmt/20_Custom_Analyzers.asciidoc 
b/070_Index_Mgmt/20_Custom_Analyzers.asciidoc index 5e9d7e486..bf7c6ee11 100644 --- a/070_Index_Mgmt/20_Custom_Analyzers.asciidoc +++ b/070_Index_Mgmt/20_Custom_Analyzers.asciidoc @@ -170,9 +170,10 @@ After creating the index, use the `analyze` API to((("analyzers", "testing using [source,js] -------------------------------------------------- -GET /my_index/_analyze?analyzer=my_analyzer +GET /my_index/_analyze { - "text": "The quick & brown fox" + "text": "The quick & brown fox", + "analyzer": "my_analyzer" } -------------------------------------------------- // SENSE: 070_Index_Mgmt/20_Custom_analyzer.json diff --git a/snippets/070_Index_Mgmt/20_Custom_analyzer.json b/snippets/070_Index_Mgmt/20_Custom_analyzer.json index 2f11e6ba6..04202c1d3 100644 --- a/snippets/070_Index_Mgmt/20_Custom_analyzer.json +++ b/snippets/070_Index_Mgmt/20_Custom_analyzer.json @@ -42,7 +42,11 @@ PUT /my_index } # Test out the new analyzer -GET /my_index/_analyze?analyzer=my_analyzer&text=The quick %26 brown fox +GET /my_index/_analyze +{ + "text": "The quick & brown fox", + "analyzer": "my_analyzer" +} # Apply "my_analyzer" to the `title` field PUT /my_index/_mapping/my_type From ea96e5f452005f349a57479a7b4cc47eaee9912d Mon Sep 17 00:00:00 2001 From: Hsu Chen-Wei Date: Thu, 6 Oct 2016 11:04:00 -0500 Subject: [PATCH 51/88] Fix text is missing (#592) Fix text is missing error --- 070_Index_Mgmt/15_Configure_Analyzer.asciidoc | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/070_Index_Mgmt/15_Configure_Analyzer.asciidoc b/070_Index_Mgmt/15_Configure_Analyzer.asciidoc index eb6617435..51293db15 100644 --- a/070_Index_Mgmt/15_Configure_Analyzer.asciidoc +++ b/070_Index_Mgmt/15_Configure_Analyzer.asciidoc @@ -54,7 +54,9 @@ specify the index name: [source,js] -------------------------------------------------- GET /spanish_docs/_analyze?analyzer=es_std -El veloz zorro marrón +{ + "text":"El veloz zorro marrón" +} -------------------------------------------------- // SENSE: 
070_Index_Mgmt/15_Configure_Analyzer.json From bd4900cdf2c7815b40e7fa10d634902e885fb8e1 Mon Sep 17 00:00:00 2001 From: Zachary Tong Date: Thu, 6 Oct 2016 12:05:04 -0400 Subject: [PATCH 52/88] Remove analyzer in URL, update snippet --- 070_Index_Mgmt/15_Configure_Analyzer.asciidoc | 4 ++-- snippets/070_Index_Mgmt/15_Configure_Analyzer.json | 7 +++++-- 2 files changed, 7 insertions(+), 4 deletions(-) diff --git a/070_Index_Mgmt/15_Configure_Analyzer.asciidoc b/070_Index_Mgmt/15_Configure_Analyzer.asciidoc index 51293db15..ad89d0b20 100644 --- a/070_Index_Mgmt/15_Configure_Analyzer.asciidoc +++ b/070_Index_Mgmt/15_Configure_Analyzer.asciidoc @@ -53,8 +53,9 @@ specify the index name: [source,js] -------------------------------------------------- -GET /spanish_docs/_analyze?analyzer=es_std +GET /spanish_docs/_analyze { + "analyzer": "es_std", "text":"El veloz zorro marrón" } -------------------------------------------------- @@ -73,4 +74,3 @@ removed correctly: ] } -------------------------------------------------- - diff --git a/snippets/070_Index_Mgmt/15_Configure_Analyzer.json b/snippets/070_Index_Mgmt/15_Configure_Analyzer.json index 40aa2b996..6af3cd3c1 100644 --- a/snippets/070_Index_Mgmt/15_Configure_Analyzer.json +++ b/snippets/070_Index_Mgmt/15_Configure_Analyzer.json @@ -17,5 +17,8 @@ PUT /spanish_docs } # Test out the new analyzer -GET /spanish_docs/_analyze?analyzer=es_std&text=El veloz zorro marrón - +GET /spanish_docs/_analyze +{ + "analyzer": "es_std", + "text":"El veloz zorro marrón" +} From 2901e4ad2ebd2945b789b5af5384913cbd5adab1 Mon Sep 17 00:00:00 2001 From: tpetrytsyn Date: Thu, 25 Aug 2016 09:12:08 +0300 Subject: [PATCH 53/88] Deprecated "filtered" query replaced by "bool" In the current example the "filtered" query is used, which has been deprecated since Elasticsearch 2.0.0-beta1. I replaced it with the proper "bool" query.
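The rewrite in this patch generalizes: a deprecated `filtered` query maps onto `bool` by moving the `query` clause into `must` and keeping `filter` as-is. A minimal sketch of that transformation (illustrative only; it handles the flat case shown in the book and ignores nested `filtered` queries and other edge cases):

```python
def filtered_to_bool(body):
    """Rewrite {"query": {"filtered": {...}}} into the 2.x bool form."""
    inner = body["query"]["filtered"]
    bool_clause = {}
    if "query" in inner:
        bool_clause["must"] = inner["query"]    # scoring clause
    if "filter" in inner:
        bool_clause["filter"] = inner["filter"]  # non-scoring clause
    return {"query": {"bool": bool_clause}}

old = {
    "query": {
        "filtered": {
            "filter": {"range": {"age": {"gt": 30}}},
            "query": {"match": {"last_name": "smith"}},
        }
    }
}
new = filtered_to_bool(old)
```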
--- 010_Intro/30_Tutorial_Search.asciidoc | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/010_Intro/30_Tutorial_Search.asciidoc b/010_Intro/30_Tutorial_Search.asciidoc index a07b63636..b4be0b182 100644 --- a/010_Intro/30_Tutorial_Search.asciidoc +++ b/010_Intro/30_Tutorial_Search.asciidoc @@ -209,15 +209,15 @@ which allows us to execute structured searches efficiently: GET /megacorp/employee/_search { "query" : { - "filtered" : { - "filter" : { - "range" : { - "age" : { "gt" : 30 } <1> + "bool" : { + "must" : { + "match" : { + "last_name" : "smith" <1> } }, - "query" : { - "match" : { - "last_name" : "smith" <2> + "filter" : { + "range" : { + "age" : { "gt" : 30 } <2> } } } @@ -226,9 +226,9 @@ GET /megacorp/employee/_search -------------------------------------------------- // SENSE: 010_Intro/30_Query_DSL.json -<1> This portion of the query is a `range` _filter_, which((("range filters"))) will find all ages +<1> This portion of the query is the((("match queries"))) same `match` _query_ that we used before. +<2> This portion of the query is a `range` _filter_, which((("range filters"))) will find all ages older than 30—`gt` stands for _greater than_. -<2> This portion of the query is the((("match queries"))) same `match` _query_ that we used before. 
Don't worry about the syntax too much for now; we will cover it in great From bbe54f7af9b8b33ebc5cf472d1a9308085f85cb6 Mon Sep 17 00:00:00 2001 From: kingrhoton Date: Thu, 6 Oct 2016 09:16:12 -0700 Subject: [PATCH 54/88] make text consistent with example (#575) --- 030_Data/15_Get.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/030_Data/15_Get.asciidoc b/030_Data/15_Get.asciidoc index 3258046e9..eee8d283f 100644 --- a/030_Data/15_Get.asciidoc +++ b/030_Data/15_Get.asciidoc @@ -72,7 +72,7 @@ Content-Length: 83 ==== Retrieving Part of a Document By default, a `GET` request((("documents", "retrieving part of"))) will return the whole document, as stored in the -`_source` field. But perhaps all you are interested in is the `title` field. +`_source` field. But perhaps all you are interested in are the `title` and `text` fields. Individual fields can be ((("fields", "returning individual document fields")))((("_source field", sortas="source field")))requested by using the `_source` parameter. Multiple fields can be specified in a comma-separated list: From 4c862bc9498671851afb3950bad6b63a75212937 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?V=C3=A1clav=20Make=C5=A1?= Date: Thu, 6 Oct 2016 18:20:05 +0200 Subject: [PATCH 55/88] Added information about deprecated warmers in 2.3 (#571) --- 300_Aggregations/115_eager.asciidoc | 2 ++ 1 file changed, 2 insertions(+) diff --git a/300_Aggregations/115_eager.asciidoc b/300_Aggregations/115_eager.asciidoc index 9927833e9..d293b5b07 100644 --- a/300_Aggregations/115_eager.asciidoc +++ b/300_Aggregations/115_eager.asciidoc @@ -207,6 +207,8 @@ also reduce CPU usage, as you will need to rebuild global ordinals less often. [[index-warmers]] ==== Index Warmers +deprecated[2.3.0,Thanks to disk-based norms and doc values, warmers don't have use-cases anymore] + Finally, we come to _index warmers_. Warmers((("index warmers"))) predate eager fielddata loading and eager global ordinals, but they still serve a purpose. 
An index warmer allows you to specify a query and aggregations that should be run before a new From 735492598ff68b8dc595fdba93981a9f5e279414 Mon Sep 17 00:00:00 2001 From: Rudolph Gottesheim Date: Thu, 6 Oct 2016 21:54:41 +0200 Subject: [PATCH 56/88] Fix the position of a comma (#607) --- Preface.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Preface.asciidoc b/Preface.asciidoc index bfb9bdd24..b75ae19dc 100644 --- a/Preface.asciidoc +++ b/Preface.asciidoc @@ -97,7 +97,7 @@ to dispel the magic--instead of hoping that the black box will do what you want, understanding gives you certainty and clarity. This is a definitive guide: we help you not only to get started with -Elasticsearch, but also to tackle the deeper more, interesting topics. These include <>, <>, +Elasticsearch, but also to tackle the deeper, more interesting topics. These include <>, <>, <>, and <>, which are not essential reading but do give you a solid understanding of the internals. From 3e38c9de43d8aa393c0dac97ff16655aea38ecf0 Mon Sep 17 00:00:00 2001 From: Mohammad Reza Kamalifard Date: Thu, 13 Oct 2016 17:33:11 +0330 Subject: [PATCH 57/88] Update 45_Distributed.asciidoc Fix a typo in text --- 010_Intro/45_Distributed.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/010_Intro/45_Distributed.asciidoc b/010_Intro/45_Distributed.asciidoc index 96f3ce298..d71a0f2f5 100644 --- a/010_Intro/45_Distributed.asciidoc +++ b/010_Intro/45_Distributed.asciidoc @@ -40,6 +40,6 @@ handles document storage (<>), executes distributed search These chapters are not required reading--you can use Elasticsearch without understanding these internals--but they will provide insight that will make -your knowledge of Elasticsearch more complete. Feel free to skim them and +your knowledge of Elasticsearch more complete. Feel free to skip them and revisit at a later point when you need a more complete understanding. 
From 62c884e39b5604fcc8360a8e56c3fbf814ddd6c2 Mon Sep 17 00:00:00 2001 From: Clinton Gormley Date: Mon, 17 Oct 2016 11:53:02 +0200 Subject: [PATCH 58/88] Fixed doc version link --- book.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/book.asciidoc b/book.asciidoc index 42993718c..9a0083add 100644 --- a/book.asciidoc +++ b/book.asciidoc @@ -1,6 +1,6 @@ :bookseries: animal :es_build: 1 -:ref: https://www.elastic.co/guide/en/elasticsearch/reference/current +:ref: https://www.elastic.co/guide/en/elasticsearch/reference/2.4 = Elasticsearch: The Definitive Guide From 64e0fd7211d3be8b46c1b5142762cb3d67da841b Mon Sep 17 00:00:00 2001 From: Mikhail Khludnev Date: Mon, 24 Oct 2016 16:42:35 +0300 Subject: [PATCH 59/88] fix Elastisearch typo (#613) --- 240_Stopwords/10_Intro.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/240_Stopwords/10_Intro.asciidoc b/240_Stopwords/10_Intro.asciidoc index d07fbae6b..ad173c906 100644 --- a/240_Stopwords/10_Intro.asciidoc +++ b/240_Stopwords/10_Intro.asciidoc @@ -73,7 +73,7 @@ prevents us from doing the following: The primary advantage of removing stopwords is performance. Imagine that we search an index with one million documents for the word `fox`. Perhaps `fox` -appears in only 20 of them, which means that Elastisearch has to calculate the +appears in only 20 of them, which means that Elasticsearch has to calculate the relevance `_score` for 20 documents in order to return the top 10. Now, we change that to a search for `the OR fox`. 
The word `the` probably occurs in almost all the documents, which means that Elasticsearch has to calculate From 83e5993ae4ff882d2716c9433882bad7dcb6b304 Mon Sep 17 00:00:00 2001 From: Michael Date: Tue, 25 Oct 2016 13:44:02 -0700 Subject: [PATCH 60/88] Correct typo "Marvel lets your" => "Marvel lets you" --- 500_Cluster_Admin/15_marvel.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/500_Cluster_Admin/15_marvel.asciidoc b/500_Cluster_Admin/15_marvel.asciidoc index bca6bfab0..399f41f64 100644 --- a/500_Cluster_Admin/15_marvel.asciidoc +++ b/500_Cluster_Admin/15_marvel.asciidoc @@ -15,7 +15,7 @@ behavior over time, which makes it easy to spot trends. As your cluster grows, the output from the stats APIs can get truly hairy. Once you have a dozen nodes, let alone a hundred, reading through stacks of JSON -becomes very tedious. Marvel lets your explore the data interactively and +becomes very tedious. Marvel lets you explore the data interactively and makes it easy to zero in on what's going on with particular nodes or indices. 
Marvel uses the same stats APIs that are available to you--it does not expose From b64dd16f307baa81623cb404f5d09e18b1285d55 Mon Sep 17 00:00:00 2001 From: Thiago Souza Date: Fri, 28 Oct 2016 17:38:29 -0200 Subject: [PATCH 61/88] pt_br typo It's "Clube dA Luta" not "Clube dE Luta" --- 200_Language_intro/50_One_language_per_field.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/200_Language_intro/50_One_language_per_field.asciidoc b/200_Language_intro/50_One_language_per_field.asciidoc index ba4ebf9ec..4c358a47c 100644 --- a/200_Language_intro/50_One_language_per_field.asciidoc +++ b/200_Language_intro/50_One_language_per_field.asciidoc @@ -10,7 +10,7 @@ reasonable approach is to keep all translations in the same document: -------------------------------------------------- { "title": "Fight club", - "title_br": "Clube de Luta", + "title_br": "Clube da Luta", "title_cz": "Klub rváčů", "title_en": "Fight club", "title_es": "El club de la lucha", From 47cb4ee5bc1527c68f4cbdefd8c00034371b08b1 Mon Sep 17 00:00:00 2001 From: Srikanta Patanjali Date: Fri, 2 Dec 2016 12:00:08 +0100 Subject: [PATCH 62/88] Corrected the syntax of the vm.swappiness command for Linux On Linux, while changing the value of "swappiness" using sysctl, I observed that the space before and after the equals sign "=" resulted in the malformed error below: ---- root@xx-yy:/home/abc# sysctl vm.swappiness = 1 vm.swappiness = 60 sysctl: malformed setting "=" sysctl: cannot stat /proc/sys/1: No such file or directory ---- Hence I have removed the space, and now the command (vm.swappiness=1) can be copied and executed without triggering the malformed-setting error --- 510_Deployment/50_heap.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/510_Deployment/50_heap.asciidoc b/510_Deployment/50_heap.asciidoc index cd6fa644e..ca6c08570 100644 --- a/510_Deployment/50_heap.asciidoc +++ b/510_Deployment/50_heap.asciidoc @@ -200,7 +200,7 @@ For most Linux systems, this is configured 
using the `sysctl` value: [source,bash] ---- -vm.swappiness = 1 <1> +vm.swappiness=1 <1> ---- <1> A `swappiness` of `1` is better than `0`, since on some kernel versions a `swappiness` of `0` can invoke the OOM-killer. From 06f97ab0e2e810c006595cf2f25f6fefe77e3cd0 Mon Sep 17 00:00:00 2001 From: dibbdob Date: Tue, 11 Apr 2017 11:53:26 +0100 Subject: [PATCH 63/88] Complicated -> complex in aggregations tutorial 'Complicated' suggests something is difficult to understand - contradicting the rest of the sentence. In this case, 'Complex' seems more appropriate. --- 010_Intro/35_Tutorial_Aggregations.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/010_Intro/35_Tutorial_Aggregations.asciidoc b/010_Intro/35_Tutorial_Aggregations.asciidoc index 47429874c..4890b231f 100644 --- a/010_Intro/35_Tutorial_Aggregations.asciidoc +++ b/010_Intro/35_Tutorial_Aggregations.asciidoc @@ -114,7 +114,7 @@ GET /megacorp/employee/_search -------------------------------------------------- // SENSE: 010_Intro/35_Aggregations.json -The aggregations that we get back are a bit more complicated, but still fairly +The aggregations that we get back are a bit more complex, but still fairly easy to understand: [source,js] From 5f0583408a6373d9e14cff9474c96078bae8c9ea Mon Sep 17 00:00:00 2001 From: Rob Moore Date: Tue, 11 Apr 2017 12:02:50 +0100 Subject: [PATCH 64/88] Use possessive its --- 080_Structured_Search/40_bitsets.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/080_Structured_Search/40_bitsets.asciidoc b/080_Structured_Search/40_bitsets.asciidoc index 38a690bac..a193e4d2e 100644 --- a/080_Structured_Search/40_bitsets.asciidoc +++ b/080_Structured_Search/40_bitsets.asciidoc @@ -24,7 +24,7 @@ search requests. It is not dependent on the "context" of the surrounding query. This allows caching to accelerate the most frequently used portions of your queries, without wasting overhead on the less frequent / more volatile portions. 
-Similarly, if a single search request reuses the same non-scoring query, it's +Similarly, if a single search request reuses the same non-scoring query, its cached bitset can be reused for all instances inside the single search request. Let's look at this example query, which looks for emails that are either of the following: From 0eefcfbd28ca030e0c2274cf0973bd99fcdc3564 Mon Sep 17 00:00:00 2001 From: Catherine Snow Date: Tue, 11 Apr 2017 08:27:40 -0400 Subject: [PATCH 65/88] Update text for grammar (#662) It's not that I'm an anti-descriptivist, but we have many ways to express this sentiment while actually begging the question really only has the one. --- 300_Aggregations/95_analyzed_vs_not.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/300_Aggregations/95_analyzed_vs_not.asciidoc b/300_Aggregations/95_analyzed_vs_not.asciidoc index 088a15d5b..9278b6faf 100644 --- a/300_Aggregations/95_analyzed_vs_not.asciidoc +++ b/300_Aggregations/95_analyzed_vs_not.asciidoc @@ -3,7 +3,7 @@ === Aggregations and Analysis Some aggregations, such as the `terms` bucket, operate((("analysis", "aggregations and")))((("aggregations", "and analysis"))) on string fields. And -string fields may be either `analyzed` or `not_analyzed`, which begs the question: +string fields may be either `analyzed` or `not_analyzed`, which raises the question: how does analysis affect aggregations?((("strings", "analyzed or not_analyzed string fields")))((("not_analyzed fields")))((("analyzed fields"))) The answer is "a lot," for two reasons: analysis affects the tokens used in the aggregation, From 038060563945d631300f35355434b7b5a8639bdb Mon Sep 17 00:00:00 2001 From: Robert Date: Mon, 23 May 2016 17:12:46 +0200 Subject: [PATCH 66/88] Aggregation response has wrong name (#540) Query defines aggregation with name *popular_colors* but response uses *colors*. 
--- 300_Aggregations/20_basic_example.asciidoc | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/300_Aggregations/20_basic_example.asciidoc b/300_Aggregations/20_basic_example.asciidoc index af0627226..2c3281000 100644 --- a/300_Aggregations/20_basic_example.asciidoc +++ b/300_Aggregations/20_basic_example.asciidoc @@ -99,7 +99,7 @@ Let's execute that aggregation and take a look at the results: "hits": [] <1> }, "aggregations": { - "colors": { <2> + "popular_colors": { <2> "buckets": [ { "key": "red", <3> @@ -119,7 +119,7 @@ Let's execute that aggregation and take a look at the results: } -------------------------------------------------- <1> No search hits are returned because we set the `size` parameter -<2> Our `colors` aggregation is returned as part of the `aggregations` field. +<2> Our `popular_colors` aggregation is returned as part of the `aggregations` field. <3> The `key` to each bucket corresponds to a unique term found in the `color` field. It also always includes `doc_count`, which tells us the number of docs containing the term. <4> The count of each bucket represents the number of documents with this color. From de6194c473e4680872b7b2c4afe4f7c9ec9eb9f8 Mon Sep 17 00:00:00 2001 From: Edgar Post Date: Tue, 11 Apr 2017 14:35:15 +0200 Subject: [PATCH 67/88] Correct wrong use of possessive 'its' (#646) --- 080_Structured_Search/05_term.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/080_Structured_Search/05_term.asciidoc b/080_Structured_Search/05_term.asciidoc index b65350536..0ed13335d 100644 --- a/080_Structured_Search/05_term.asciidoc +++ b/080_Structured_Search/05_term.asciidoc @@ -302,7 +302,7 @@ bitset is iterated on first (since it excludes the largest number of documents). 4. _Increment the usage counter_. 
+ -Elasticsearch can cache non-scoring queries for faster access, but its silly to +Elasticsearch can cache non-scoring queries for faster access, but it's silly to cache something that is used only rarely. Non-scoring queries are already quite fast due to the inverted index, so we only want to cache queries we _know_ will be used again in the future to prevent resource wastage. From eb3e172d06f3dd3499d5ec93780147a285e4e02e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Paulius=20Aleksi=C5=ABnas?= Date: Tue, 11 Apr 2017 15:38:32 +0300 Subject: [PATCH 68/88] Fix typo (#630) --- 520_Post_Deployment/50_backup.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/520_Post_Deployment/50_backup.asciidoc b/520_Post_Deployment/50_backup.asciidoc index 7f7d63f69..b7b92efb7 100644 --- a/520_Post_Deployment/50_backup.asciidoc +++ b/520_Post_Deployment/50_backup.asciidoc @@ -137,7 +137,7 @@ Once you start accumulating snapshots in your repository, you may forget the det relating to each--particularly when the snapshots are named based on time demarcations (for example, `backup_2014_10_28`). -To obtain information about a single snapshot, simply issue a `GET` reguest against +To obtain information about a single snapshot, simply issue a `GET` request against the repo and snapshot name: [source,js] From f6ea23aadd05ca8a58c93c809712a0394cfd7cb9 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Paulius=20Aleksi=C5=ABnas?= Date: Tue, 11 Apr 2017 15:41:36 +0300 Subject: [PATCH 69/88] Fix typo (#628) --- 320_Geohashes/40_Geohashes.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/320_Geohashes/40_Geohashes.asciidoc b/320_Geohashes/40_Geohashes.asciidoc index e756ab090..1080d8c14 100644 --- a/320_Geohashes/40_Geohashes.asciidoc +++ b/320_Geohashes/40_Geohashes.asciidoc @@ -7,7 +7,7 @@ URL-friendly way of specifying geolocations, but geohashes have turned out to be a useful way of indexing geo-points and geo-shapes in databases. 
Geohashes divide the world into a grid of 32 cells--4 rows and 8 columns--each represented by a letter or number. The `g` cell covers half of -Greenland, all of Iceland, and most of Great Britian. Each cell can be further +Greenland, all of Iceland, and most of Great Britain. Each cell can be further divided into another 32 cells, which can be divided into another 32 cells, and so on. The `gc` cell covers Ireland and England, `gcp` covers most of London and part of Southern England, and `gcpuuz94k` is the entrance to From c80d6b2d30cc48683322ca7b5a34969e1845695e Mon Sep 17 00:00:00 2001 From: Daniel Mitterdorfer Date: Tue, 11 Apr 2017 15:09:13 +0200 Subject: [PATCH 70/88] Remove duplicate double-colon Closes #357 --- 270_Fuzzy_matching/20_Fuzziness.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/270_Fuzzy_matching/20_Fuzziness.asciidoc b/270_Fuzzy_matching/20_Fuzziness.asciidoc index 4a6048493..5a7051bfe 100644 --- a/270_Fuzzy_matching/20_Fuzziness.asciidoc +++ b/270_Fuzzy_matching/20_Fuzziness.asciidoc @@ -13,7 +13,7 @@ one word into the other. 
He proposed three types of one-character edits: * _Insertion_ of a new character: sic -> sic_k_ -* _Deletion_ of a character:: b_l_ack -> back +* _Deletion_ of a character: b_l_ack -> back http://en.wikipedia.org/wiki/Frederick_J._Damerau[Frederick Damerau] later expanded these operations ((("Damerau, Frederick J.")))to include one more: From 0b125258ae4c4e5bb686a19679bae1ebde87c264 Mon Sep 17 00:00:00 2001 From: Daniel Mitterdorfer Date: Tue, 11 Apr 2017 09:39:15 +0200 Subject: [PATCH 71/88] Ignore the .idea directory --- .gitignore | 2 ++ 1 file changed, 2 insertions(+) diff --git a/.gitignore b/.gitignore index 5a114f0b9..1c363e9f3 100644 --- a/.gitignore +++ b/.gitignore @@ -6,3 +6,5 @@ book.html .settings .DS_Store + +.idea \ No newline at end of file From d5292c1354162be2326046dd3b529f74855c3718 Mon Sep 17 00:00:00 2001 From: Daniel Mitterdorfer Date: Tue, 11 Apr 2017 16:04:52 +0200 Subject: [PATCH 72/88] Add initial readme and contribution guide --- CONTRIBUTING.md | 68 +++++++++++++++++++++++++++++++++++++++++++++++++ README.md | 41 +++++++++++++++++++++++++++++ 2 files changed, 109 insertions(+) create mode 100644 CONTRIBUTING.md create mode 100644 README.md diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md new file mode 100644 index 000000000..b494673e5 --- /dev/null +++ b/CONTRIBUTING.md @@ -0,0 +1,68 @@ +## Contributing to the Definitive Guide + +### Contributing documentation changes + +If you have a change that you would like to contribute, please find or open an +issue about it first. Talk about what you would like to do. It may be that +somebody is already working on it, or that there are particular issues that +you should know about before doing the change. + +The process for contributing to any of the [Elastic repositories](https://github.com/elastic/) +is similar. Details can be found below. + +### Fork and clone the repository + +You will need to fork the main repository and clone it to your local machine. 
+See the respective [Github help page](https://help.github.com/articles/fork-a-repo) +for help. + +### Submitting your changes + +Once your changes and tests are ready to submit for review: + +1. Test your changes + + [Build the complete book locally](https://github.com/elastic/elasticsearch-definitive-guide) + and check and correct any errors that you encounter. + +2. Sign the Contributor License Agreement + + Please make sure you have signed our [Contributor License Agreement](https://www.elastic.co/contributor-agreement/). + We are not asking you to assign copyright to us, but to give us the right + to distribute your code without restriction. We ask this of all + contributors in order to assure our users of the origin and continuing + existence of the code. You only need to sign the CLA once. + +3. Rebase your changes + + Update your local repository with the most recent code from the main + repository, and rebase your branch on top of the latest `master` branch. + We prefer your initial changes to be squashed into a single commit. Later, + if we ask you to make changes, add them as separate commits. This makes + them easier to review. As a final step before merging we will either ask + you to squash all commits yourself or we'll do it for you. + + +4. Submit a pull request + + Push your local changes to your forked copy of the repository and + [submit a pull request](https://help.github.com/articles/using-pull-requests). + In the pull request, choose a title which sums up the changes that you + have made, and in the body provide more details about what your changes do. + Also mention the number of the issue where discussion has taken place, + e.g. "Closes #123". + +Then sit back and wait. There will probably be discussion about the pull +request and, if any changes are needed, we would love to work with you to get +your pull request merged. + +Please adhere to the general guideline that you should never force push +to a publicly shared branch. 
Once you have opened your pull request, you +should consider your branch publicly shared. Instead of force pushing +you can just add incremental commits; this is generally easier on your +reviewers. If you need to pick up changes from master, you can merge +master into your branch. A reviewer might ask you to rebase a +long-running pull request in which case force pushing is okay for that +request. Note that squashing at the end of the review process should +also not be done, that can be done when the pull request is [integrated +via GitHub](https://github.com/blog/2141-squash-your-commits). \ No newline at end of file diff --git a/README.md b/README.md new file mode 100644 index 000000000..ca4944704 --- /dev/null +++ b/README.md @@ -0,0 +1,41 @@ +# The Definitive Guide to Elasticsearch + +This repository contains the sources to the "Definitive Guide to Elasticsearch" which you can [read online](https://www.elastic.co/guide/en/elasticsearch/guide/current/index.html). + +## Building the Definitive Guide + +In order to build this project, we rely on our [docs infrastructure](https://github.com/elastic/docs). + +To build the HTML of the complete project, run the following commands: + +``` +# clone this repo +git clone git@github.com:elastic/elasticsearch-definitive-guide.git +# clone the docs build infrastructure +git clone git@github.com:elastic/docs.git +# Build HTML and open a browser +cd elasticsearch-definitive-guide +../docs/build_docs.pl --doc book.asciidoc --open +``` + +This assumes that you have all necessary prerequisites installed. For a more complete reference, please see refer to the [README in the docs repo](https://github.com/elastic/docs). + +The Definitive Guide is written in Asciidoc and the docs repo also contains a [short Asciidoc guide](https://github.com/elastic/docs#asciidoc-guide). 
+ +## Supported versions + +The Definitive Guide is available for multiple versions of Elasticsearch: + +* The [branch `1.x`](https://github.com/elastic/elasticsearch-definitive-guide/tree/1.x) applies to Elasticsearch 1.x +* The [branch `2.x`](https://github.com/elastic/elasticsearch-definitive-guide/tree/2.x) applies to Elasticsearch 2.x +* The [branch `master`](https://github.com/elastic/elasticsearch-definitive-guide/tree/2.x) applies to master branch of Elasticsearch (the current development version) + +## Contributing + +Before contributing a change please read our [contribution guide](CONTRIBUTING.md). + +## License + +This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License. + +See http://creativecommons.org/licenses/by-nc-nd/3.0/ for the full text of the License. \ No newline at end of file From 8593bc3d51c818c21ed403ece1d5362b1c639f7d Mon Sep 17 00:00:00 2001 From: Daniel Mitterdorfer Date: Tue, 11 Apr 2017 16:14:44 +0200 Subject: [PATCH 73/88] Correct links and small typo in README/CONTRIBUTING --- CONTRIBUTING.md | 2 +- README.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index b494673e5..827c03b97 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -22,7 +22,7 @@ Once your changes and tests are ready to submit for review: 1. Test your changes - [Build the complete book locally](https://github.com/elastic/elasticsearch-definitive-guide) + [Build the complete book locally](https://github.com/elastic/elasticsearch-definitive-guide#building-the-definitive-guide) and check and correct any errors that you encounter. 2. Sign the Contributor License Agreement diff --git a/README.md b/README.md index ca4944704..0880c5dae 100644 --- a/README.md +++ b/README.md @@ -18,7 +18,7 @@ cd elasticsearch-definitive-guide ../docs/build_docs.pl --doc book.asciidoc --open ``` -This assumes that you have all necessary prerequisites installed. 
For a more complete reference, please see refer to the [README in the docs repo](https://github.com/elastic/docs). +This assumes that you have all necessary prerequisites installed. For a more complete reference, please refer to the [README in the docs repo](https://github.com/elastic/docs). The Definitive Guide is written in Asciidoc and the docs repo also contains a [short Asciidoc guide](https://github.com/elastic/docs#asciidoc-guide). From 61641bb3ef0d7f08613f68d2be9a21413fd8b5d1 Mon Sep 17 00:00:00 2001 From: bharath-elastic Date: Fri, 19 May 2017 16:05:33 -0400 Subject: [PATCH 74/88] code block to add tags to list code block to add tags to list has been updated to work with current version of painless. --- 030_Data/45_Partial_update.asciidoc | 16 +++++++++------- 1 file changed, 9 insertions(+), 7 deletions(-) diff --git a/030_Data/45_Partial_update.asciidoc b/030_Data/45_Partial_update.asciidoc index dc3cf6d0f..a484a2685 100644 --- a/030_Data/45_Partial_update.asciidoc +++ b/030_Data/45_Partial_update.asciidoc @@ -133,14 +133,16 @@ another tag: [source,js] -------------------------------------------------- -POST /website/blog/1/_update +POST website/blog/1/_update { - "script" : "ctx._source.tags+=new_tag", - "params" : { - "new_tag" : "search" - } -} --------------------------------------------------- + "script": { + "lang": "painless", + "inline": "ctx._source.tags.add(params.tags)", + "params": { + "tags": "search" + } + } +}-------------------------------------------------- // SENSE: 030_Data/45_Partial_update.json From fdb32cb8b0c02797add040f00f973f9e3be68d5b Mon Sep 17 00:00:00 2001 From: debadair Date: Fri, 19 May 2017 16:38:30 -0700 Subject: [PATCH 75/88] Added missing line break in example to fix end block error. 
--- 030_Data/45_Partial_update.asciidoc | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/030_Data/45_Partial_update.asciidoc b/030_Data/45_Partial_update.asciidoc index a484a2685..817c50ada 100644 --- a/030_Data/45_Partial_update.asciidoc +++ b/030_Data/45_Partial_update.asciidoc @@ -142,7 +142,8 @@ POST website/blog/1/_update "tags": "search" } } -}-------------------------------------------------- +} +-------------------------------------------------- // SENSE: 030_Data/45_Partial_update.json From 7e5445df6d4267ddd62cac92c40de2a7988d5162 Mon Sep 17 00:00:00 2001 From: Clinton Gormley Date: Tue, 14 Nov 2017 15:57:16 +0100 Subject: [PATCH 76/88] Changed link from postings highlighter to unified highlighter --- 240_Stopwords/50_Phrase_queries.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/240_Stopwords/50_Phrase_queries.asciidoc b/240_Stopwords/50_Phrase_queries.asciidoc index 47e4a1065..8d89005d4 100644 --- a/240_Stopwords/50_Phrase_queries.asciidoc +++ b/240_Stopwords/50_Phrase_queries.asciidoc @@ -98,7 +98,7 @@ in the index for each field.((("fields", "index options"))) Valid values are as Store `docs`, `freqs`, `positions`, and the start and end character offsets of each term in the original string. This information is used by the - http://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-highlighting.html#postings-highlighter[`postings` highlighter] + https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-highlighting.html#_unified_highlighter[`unified` highlighter] but is disabled by default. You can set `index_options` on fields added at index creation time, or when From 396799a63224230ac545e08a0dc30597588e4e0a Mon Sep 17 00:00:00 2001 From: Deb Adair Date: Tue, 14 Nov 2017 08:06:59 -0800 Subject: [PATCH 77/88] Fixed cross doc link. 
--- 240_Stopwords/50_Phrase_queries.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/240_Stopwords/50_Phrase_queries.asciidoc b/240_Stopwords/50_Phrase_queries.asciidoc index 8d89005d4..ed73f3cb0 100644 --- a/240_Stopwords/50_Phrase_queries.asciidoc +++ b/240_Stopwords/50_Phrase_queries.asciidoc @@ -98,7 +98,7 @@ in the index for each field.((("fields", "index options"))) Valid values are as Store `docs`, `freqs`, `positions`, and the start and end character offsets of each term in the original string. This information is used by the - https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-highlighting.html#_unified_highlighter[`unified` highlighter] + https://www.elastic.co/guide/en/elasticsearch/reference/1.7/search-request-highlighting.html#_unified_highlighter[`unified` highlighter] but is disabled by default. You can set `index_options` on fields added at index creation time, or when From 2f005330dbe47e7b3c15f1cc264b517d88564f38 Mon Sep 17 00:00:00 2001 From: Deb Adair Date: Tue, 14 Nov 2017 08:38:07 -0800 Subject: [PATCH 78/88] Cross doc link fix --- 240_Stopwords/50_Phrase_queries.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/240_Stopwords/50_Phrase_queries.asciidoc b/240_Stopwords/50_Phrase_queries.asciidoc index ed73f3cb0..2b256985c 100644 --- a/240_Stopwords/50_Phrase_queries.asciidoc +++ b/240_Stopwords/50_Phrase_queries.asciidoc @@ -98,7 +98,7 @@ in the index for each field.((("fields", "index options"))) Valid values are as Store `docs`, `freqs`, `positions`, and the start and end character offsets of each term in the original string. This information is used by the - https://www.elastic.co/guide/en/elasticsearch/reference/1.7/search-request-highlighting.html#_unified_highlighter[`unified` highlighter] + https://www.elastic.co/guide/en/elasticsearch/reference/5.6/search-request-highlighting.html#_unified_highlighter[`unified` highlighter] but is disabled by default. 
You can set `index_options` on fields added at index creation time, or when From bfa933b0a319ed93670df04cfa1da1fe30322d18 Mon Sep 17 00:00:00 2001 From: Peter Dyson Date: Fri, 19 Jan 2018 13:49:30 +1000 Subject: [PATCH 79/88] Add synced flush step to rolling restart guide The unstable branch of the docs already has this step, but the branch listed as "2.x (current)" on the docs site hasn't had this step backported to it yet. --- 520_Post_Deployment/40_rolling_restart.asciidoc | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-) diff --git a/520_Post_Deployment/40_rolling_restart.asciidoc b/520_Post_Deployment/40_rolling_restart.asciidoc index 1aa93dc4f..2e76c8ac8 100644 --- a/520_Post_Deployment/40_rolling_restart.asciidoc +++ b/520_Post_Deployment/40_rolling_restart.asciidoc @@ -20,8 +20,15 @@ What we want to do is tell Elasticsearch to hold off on rebalancing, because we have more knowledge about the state of the cluster due to external factors. The procedure is as follows: -1. If possible, stop indexing new data. This is not always possible, but will -help speed up recovery time. +1. If possible, stop indexing new data and perform a synced flush. This is not +always possible, but will help speed up recovery time. A synced flush request is +a “best effort” operation. It will fail if there are any pending indexing +operations, but it is safe to reissue the request multiple times if necessary. ++ +[source,js] +---- +POST /_flush/synced +---- 2. Disable shard allocation. This prevents Elasticsearch from rebalancing missing shards until you tell it otherwise. If you know the maintenance window will be From 87829445942ad5765dd7c8a2419efc8334448354 Mon Sep 17 00:00:00 2001 From: Deb Adair Date: Mon, 25 Jun 2018 15:43:16 -0700 Subject: [PATCH 80/88] Added version header and linked to ES ref. 
--- page_header.html | 4 ++++ 1 file changed, 4 insertions(+) create mode 100644 page_header.html diff --git a/page_header.html b/page_header.html new file mode 100644 index 000000000..96c1ad48a --- /dev/null +++ b/page_header.html @@ -0,0 +1,4 @@ +This information applies to version 2.x of Elasticsearch. For the +most up to date information, see the current version of the + +Elasticsearch Reference. From f4cecaea4a5bfb8e285539baeb8e90e0318bd6d0 Mon Sep 17 00:00:00 2001 From: James Rodewig Date: Tue, 9 Apr 2019 17:24:48 -0400 Subject: [PATCH 81/88] [DOCS] Fix broken link for 7.0 release --- 510_Deployment/40_config.asciidoc | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/510_Deployment/40_config.asciidoc b/510_Deployment/40_config.asciidoc index 94ea5404b..dda8bcf7a 100644 --- a/510_Deployment/40_config.asciidoc +++ b/510_Deployment/40_config.asciidoc @@ -262,6 +262,6 @@ This setting is configured in `elasticsearch.yml`: discovery.zen.ping.unicast.hosts: ["host1", "host2:port"] ---- -For more information about how Elasticsearch nodes find eachother, see -https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery-zen.html[Zen Discovery] +For more information about how Elasticsearch nodes find each other, see +{ref}/modules-discovery.html[Discovery and cluster formation] in the Elasticsearch Reference. From becc13f7c2bb53b900dc3859062d33e9c56ca800 Mon Sep 17 00:00:00 2001 From: Nik Everett Date: Tue, 23 Apr 2019 17:34:59 -0400 Subject: [PATCH 82/88] Cleanup list in preparation for moving to asciidoctor Asciidoctor likes the list shaped like this.
--- .../15_Create_index_delete.asciidoc | 19 ++++++------------- 1 file changed, 6 insertions(+), 13 deletions(-) diff --git a/040_Distributed_CRUD/15_Create_index_delete.asciidoc b/040_Distributed_CRUD/15_Create_index_delete.asciidoc index 954be723d..ab35e8616 100644 --- a/040_Distributed_CRUD/15_Create_index_delete.asciidoc +++ b/040_Distributed_CRUD/15_Create_index_delete.asciidoc @@ -32,44 +32,37 @@ this process, possibly increasing performance at the cost of data security. These options are seldom used because Elasticsearch is already fast, but they are explained here for the sake of completeness: --- - `consistency`:: + --- By default, the primary shard((("consistency request parameter")))((("quorum"))) requires a _quorum_, or majority, of shard copies (where a shard copy can be a primary or a replica shard) to be available before even attempting a write operation. This is to prevent writing data to the ``wrong side'' of a network partition. A quorum is defined as follows: - ++ int( (primary + number_of_replicas) / 2 ) + 1 - ++ The allowed values for `consistency` are `one` (just the primary shard), `all` (the primary and all replicas), or the default `quorum`, or majority, of shard copies. - ++ Note that the `number_of_replicas` is the number of replicas _specified_ in the index settings, not the number of replicas that are currently active. If you have specified that an index should have three replicas, a quorum would be as follows: - ++ int( (primary + 3 replicas) / 2 ) + 1 = 3 - ++ But if you start only two nodes, there will be insufficient active shard copies to satisfy the quorum, and you will be unable to index or delete any documents. --- - `timeout`:: - ++ What happens if insufficient shard copies are available? Elasticsearch waits, in the hope that more shards will appear. By default, it will wait up to 1 minute. 
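The quorum formula in the `consistency` description above is simple enough to check directly. This sketch just evaluates the stated arithmetic (an illustration, not Elasticsearch code):

```python
def write_quorum(number_of_replicas):
    """int( (primary + number_of_replicas) / 2 ) + 1, with one primary."""
    return int((1 + number_of_replicas) / 2) + 1

# An index configured with three replicas needs three active shard
# copies, which is why two nodes are not enough to index or delete:
print(write_quorum(3))  # 3
# The default of one replica needs two active copies:
print(write_quorum(1))  # 2
```

Note that the formula counts the replicas _specified_ in the index settings, not the replicas currently allocated, which is exactly the trap the text describes.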
If you need to, you can use the `timeout` parameter((("timeout parameter"))) to make it abort sooner: `100` is 100 milliseconds, and `30s` is 30 seconds. --- - [NOTE] =================================================== A new index has `1` replica by default, which means that two active shard From a14616088fda0cac9b3d5c59cf7a433cb7e60066 Mon Sep 17 00:00:00 2001 From: Nik Everett Date: Thu, 10 Oct 2019 12:06:41 -0400 Subject: [PATCH 83/88] Fix snippets One bad path and one missing file. This fixes the bad path and replaces the missing file with `// AUTOSENSE` rather than overriding the file. --- 054_Query_DSL/75_Combining_queries_together.asciidoc | 2 +- 300_Aggregations/65_percentiles.asciidoc | 8 ++++---- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/054_Query_DSL/75_Combining_queries_together.asciidoc b/054_Query_DSL/75_Combining_queries_together.asciidoc index bbba76b25..f784965e3 100644 --- a/054_Query_DSL/75_Combining_queries_together.asciidoc +++ b/054_Query_DSL/75_Combining_queries_together.asciidoc @@ -145,7 +145,7 @@ will be identical, but it may aid in query simplicity/clarity. } } -------------------------------------------------- -// SENSE: 054_Query_DSL/70_bool_query.json +// SENSE: 054_Query_DSL/70_Bool_query.json <1> A `term` query is placed inside the `constant_score`, converting it to a non-scoring filter. This method can be used in place of a `bool` query which only diff --git a/300_Aggregations/65_percentiles.asciidoc b/300_Aggregations/65_percentiles.asciidoc index 5ad642b21..fd3cd216e 100644 --- a/300_Aggregations/65_percentiles.asciidoc +++ b/300_Aggregations/65_percentiles.asciidoc @@ -76,7 +76,7 @@ POST /website/logs/_bulk { "index": {}} { "latency" : 319, "zone" : "EU", "timestamp" : "2014-10-29" } ---- -// SENSE: 300_Aggregations/65_percentiles.json +// AUTOSENSE This data contains three values: a latency, a data center zone, and a date timestamp. 
Let's run +percentiles+ over the whole dataset to get a feel for @@ -101,7 +101,7 @@ GET /website/logs/_search } } ---- -// SENSE: 300_Aggregations/65_percentiles.json +// AUTOSENSE <1> The `percentiles` metric is applied to the +latency+ field. <2> For comparison, we also execute an `avg` metric on the same field. @@ -163,7 +163,7 @@ GET /website/logs/_search } } ---- -// SENSE: 300_Aggregations/65_percentiles.json +// AUTOSENSE <1> First we separate our latencies into buckets, depending on their zone. <2> Then we calculate the percentiles per zone. <3> The +percents+ parameter accepts an array of percentiles that we want returned, @@ -254,7 +254,7 @@ GET /website/logs/_search } } ---- -// SENSE: 300_Aggregations/65_percentiles.json +// AUTOSENSE <1> The `percentile_ranks` metric accepts an array of values that you want ranks for. After running this aggregation, we get two values back: From 372ff18e877b9053204db649b35420cc29285a58 Mon Sep 17 00:00:00 2001 From: Nik Everett Date: Thu, 17 Oct 2019 15:51:54 -0400 Subject: [PATCH 84/88] Add title-separator It prevents `The Definitive Guide` from becoming the subtitle in Asciidoctor. --- book.asciidoc | 1 + 1 file changed, 1 insertion(+) diff --git a/book.asciidoc b/book.asciidoc index 9a0083add..042d36b10 100644 --- a/book.asciidoc +++ b/book.asciidoc @@ -1,3 +1,4 @@ +:title-separator: | :bookseries: animal :es_build: 1 :ref: https://www.elastic.co/guide/en/elasticsearch/reference/2.4 From bf105feec55e7e41ebf6e21130e1e2900fa1374a Mon Sep 17 00:00:00 2001 From: Nik Everett Date: Thu, 17 Oct 2019 16:52:02 -0400 Subject: [PATCH 85/88] Lock page names Asciidoctor derives these page names differently and we'd prefer not to move them. So this locks them to the name that AsciiDoc gave them.
--- 300_Aggregations/35_date_histogram.asciidoc | 1 + 510_Deployment/45_dont_touch.asciidoc | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/300_Aggregations/35_date_histogram.asciidoc b/300_Aggregations/35_date_histogram.asciidoc index e0271acc9..641f2cd42 100644 --- a/300_Aggregations/35_date_histogram.asciidoc +++ b/300_Aggregations/35_date_histogram.asciidoc @@ -275,6 +275,7 @@ total sale price, and a bar chart for each individual make (per quarter), as sho .Sales per quarter, with distribution per make image::images/elas_29in02.png["Sales per quarter, with distribution per make"] +[[_the_sky_8217_s_the_limit]] === The Sky's the Limit These were obviously simple examples, but the sky really is the limit diff --git a/510_Deployment/45_dont_touch.asciidoc b/510_Deployment/45_dont_touch.asciidoc index 37506390f..7e04cd6cd 100644 --- a/510_Deployment/45_dont_touch.asciidoc +++ b/510_Deployment/45_dont_touch.asciidoc @@ -1,4 +1,4 @@ - +[[_don_8217_t_touch_these_settings]] === Don't Touch These Settings! There are a few hotspots in Elasticsearch that people just can't seem to avoid From 73686f3f559c9ad0eb94c76944f41f96657d8f89 Mon Sep 17 00:00:00 2001 From: Nik Everett Date: Thu, 17 Oct 2019 17:26:44 -0400 Subject: [PATCH 86/88] Fix deprecation warning in asciidoctor It needs quotes around the second parameter because it contains a `,`. --- 300_Aggregations/115_eager.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/300_Aggregations/115_eager.asciidoc b/300_Aggregations/115_eager.asciidoc index d293b5b07..501157c8f 100644 --- a/300_Aggregations/115_eager.asciidoc +++ b/300_Aggregations/115_eager.asciidoc @@ -207,7 +207,7 @@ also reduce CPU usage, as you will need to rebuild global ordinals less often. 
[[index-warmers]] ==== Index Warmers -deprecated[2.3.0,Thanks to disk-based norms and doc values, warmers don't have use-cases anymore] +deprecated[2.3.0, "Thanks to disk-based norms and doc values, warmers don't have use-cases anymore."] Finally, we come to _index warmers_. Warmers((("index warmers"))) predate eager fielddata loading and eager global ordinals, but they still serve a purpose. An index warmer From a7abd15483ab7244d5c42af93be58d7f6b9bf99e Mon Sep 17 00:00:00 2001 From: Nik Everett Date: Mon, 16 Dec 2019 16:08:33 -0500 Subject: [PATCH 87/88] Add extra title page This makes it compatible with `--direct_html`. --- book-extra-title-page.html | 62 ++++++++++++++++++++++++++++++++++++++ 1 file changed, 62 insertions(+) create mode 100644 book-extra-title-page.html diff --git a/book-extra-title-page.html b/book-extra-title-page.html new file mode 100644 index 000000000..22aaf3511 --- /dev/null +++ b/book-extra-title-page.html @@ -0,0 +1,62 @@ +
+
+

+ + Clinton + + + Gormley + +

+
+
+

+ + Zachary + + + Tong + +

+
+
+ +
+ +
+ +
+
+

+ + Abstract + +

+

+ If you would like to purchase an eBook or printed version of this book once it is complete, you can do so from O'Reilly Media: + + Buy this book from O'Reilly Media + +

+

+ We welcome feedback – if you spot any errors or would like to suggest improvements, please + + open an issue + + on the GitHub repo. +

+
\ No newline at end of file From 07acfc9d998108ec22d9d36b4777d228de64fe0c Mon Sep 17 00:00:00 2001 From: James Rodewig <40268737+jrodewig@users.noreply.github.com> Date: Fri, 17 Sep 2021 16:00:28 -0400 Subject: [PATCH 88/88] The Definitive Guide is no longer maintained We no longer maintain the Definitive Guide or this repo. These docs only cover the 1.x and 2.x versions of Elasticsearch, which have passed their EOL dates. Those interested in the latest info should use the [current Elasticsearch docs][0] instead. Changes: * Updates the page header and README to clearly state the docs are no longer maintained. * Updates the contribution guidelines to discourage pull requests and issues. * Removes a section of the title page for contributions. [0]: https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html --- CONTRIBUTING.md | 68 -------------------------------------- README.md | 21 ++++++++---- book-extra-title-page.html | 11 ++---- page_header.html | 15 ++++++--- 4 files changed, 27 insertions(+), 88 deletions(-) delete mode 100644 CONTRIBUTING.md diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md deleted file mode 100644 index 827c03b97..000000000 --- a/CONTRIBUTING.md +++ /dev/null @@ -1,68 +0,0 @@ -## Contributing to the Definitive Guide - -### Contributing documentation changes - -If you have a change that you would like to contribute, please find or open an -issue about it first. Talk about what you would like to do. It may be that -somebody is already working on it, or that there are particular issues that -you should know about before doing the change. - -The process for contributing to any of the [Elastic repositories](https://github.com/elastic/) -is similar. Details can be found below. - -### Fork and clone the repository - -You will need to fork the main repository and clone it to your local machine. -See the respective [Github help page](https://help.github.com/articles/fork-a-repo) -for help.
- -### Submitting your changes - -Once your changes and tests are ready to submit for review: - -1. Test your changes - - [Build the complete book locally](https://github.com/elastic/elasticsearch-definitive-guide#building-the-definitive-guide) - and check and correct any errors that you encounter. - -2. Sign the Contributor License Agreement - - Please make sure you have signed our [Contributor License Agreement](https://www.elastic.co/contributor-agreement/). - We are not asking you to assign copyright to us, but to give us the right - to distribute your code without restriction. We ask this of all - contributors in order to assure our users of the origin and continuing - existence of the code. You only need to sign the CLA once. - -3. Rebase your changes - - Update your local repository with the most recent code from the main - repository, and rebase your branch on top of the latest `master` branch. - We prefer your initial changes to be squashed into a single commit. Later, - if we ask you to make changes, add them as separate commits. This makes - them easier to review. As a final step before merging we will either ask - you to squash all commits yourself or we'll do it for you. - - -4. Submit a pull request - - Push your local changes to your forked copy of the repository and - [submit a pull request](https://help.github.com/articles/using-pull-requests). - In the pull request, choose a title which sums up the changes that you - have made, and in the body provide more details about what your changes do. - Also mention the number of the issue where discussion has taken place, - e.g. "Closes #123". - -Then sit back and wait. There will probably be discussion about the pull -request and, if any changes are needed, we would love to work with you to get -your pull request merged. - -Please adhere to the general guideline that you should never force push -to a publicly shared branch. 
Once you have opened your pull request, you -should consider your branch publicly shared. Instead of force pushing -you can just add incremental commits; this is generally easier on your -reviewers. If you need to pick up changes from master, you can merge -master into your branch. A reviewer might ask you to rebase a -long-running pull request in which case force pushing is okay for that -request. Note that squashing at the end of the review process should -also not be done, that can be done when the pull request is [integrated -via GitHub](https://github.com/blog/2141-squash-your-commits). \ No newline at end of file diff --git a/README.md b/README.md index 0880c5dae..893b1d7ce 100644 --- a/README.md +++ b/README.md @@ -1,7 +1,11 @@ -# The Definitive Guide to Elasticsearch +# Elasticsearch: The Definitive Guide + +This repository contains the source for the legacy [Elasticsearch: The Definitive Guide](https://www.elastic.co/guide/en/elasticsearch/guide/current/index.html) +documentation and is no longer maintained. For the latest information, see the +current +Elasticsearch documentation. -This repository contains the sources to the "Definitive Guide to Elasticsearch" which you can [read online](https://www.elastic.co/guide/en/elasticsearch/guide/current/index.html). - ## Building the Definitive Guide In order to build this project, we rely on our [docs infrastructure](https://github.com/elastic/docs). 
@@ -26,13 +30,16 @@ The Definitive Guide is written in Asciidoc and the docs repo also contains a [s The Definitive Guide is available for multiple versions of Elasticsearch: -* The [branch `1.x`](https://github.com/elastic/elasticsearch-definitive-guide/tree/1.x) applies to Elasticsearch 1.x -* The [branch `2.x`](https://github.com/elastic/elasticsearch-definitive-guide/tree/2.x) applies to Elasticsearch 2.x -* The [branch `master`](https://github.com/elastic/elasticsearch-definitive-guide/tree/2.x) applies to master branch of Elasticsearch (the current development version) +* The [`1.x` branch](https://github.com/elastic/elasticsearch-definitive-guide/tree/1.x) applies to Elasticsearch 1.x +* The [`2.x` and `master` branches](https://github.com/elastic/elasticsearch-definitive-guide/tree/2.x) apply to Elasticsearch 2.x ## Contributing -Before contributing a change please read our [contribution guide](CONTRIBUTING.md). +This repository is no longer maintained. Pull requests and issues will not be +addressed. + +To contribute to the current Elasticsearch docs, refer to the [Elasticsearch +repository](https://github.com/elastic/elasticsearch/). ## License diff --git a/book-extra-title-page.html b/book-extra-title-page.html index 22aaf3511..d27cc141c 100644 --- a/book-extra-title-page.html +++ b/book-extra-title-page.html @@ -47,16 +47,9 @@

- If you would like to purchase an eBook or printed version of this book once it is complete, you can do so from O'Reilly Media: + If you would like to purchase an eBook or printed version of this book, you can do so from O'Reilly Media: Buy this book from O'Reilly Media

-

- We welcome feedback – if you spot any errors or would like to suggest improvements, please - - open an issue - - on the GitHub repo. -

-

\ No newline at end of file + diff --git a/page_header.html b/page_header.html index 96c1ad48a..d7a85f0e0 100644 --- a/page_header.html +++ b/page_header.html @@ -1,4 +1,11 @@ -This information applies to version 2.x of Elasticsearch. For the -most up to date information, see the current version of the - -Elasticsearch Reference. +

+ WARNING: The 2.x versions of Elasticsearch have passed their + EOL dates. If you are running + a 2.x version, we strongly advise you to upgrade. +

+

+ This documentation is no longer maintained and may be removed. For the latest + information, see the current + Elasticsearch documentation. +