[Transport] Use elastic-transport-go as transport (#375)
* Initial commit to use estransport as an external dependency
* Adapt import name to transport package change
* Rename estransport to elastictransport
* Update transport reference in makefile
* Update reference to elastic-transport-go
* Apply external transport to examples
* Rename transport in readme files
* Add go.sum for elastic-transport
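Since the transport now lives in its own module, the dependency is declared in `go.mod` (with its checksums in the new `go.sum`). A sketch of the require directive, using the module path from the elastic-transport-go repository; the version shown is a placeholder, not the one pinned by this PR:

```
// go.mod — the transport is now an external module rather than an
// internal package (version below is illustrative only)
require github.com/elastic/elastic-transport-go/v8 v8.0.0
```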
README.md (+2 -2)

````diff
@@ -84,7 +84,7 @@ go run main.go
 ## Usage
 
-The `elasticsearch` package ties together two separate packages for calling the Elasticsearch APIs and transferring data over HTTP: `esapi` and `estransport`, respectively.
+The `elasticsearch` package ties together two separate packages for calling the Elasticsearch APIs and transferring data over HTTP: `esapi` and `elastictransport`, respectively.
 
 Use the `elasticsearch.NewDefaultClient()` function to create the client with the default settings.
@@ -351,7 +351,7 @@ func main() {
 As you see in the example above, the `esapi` package allows to call the Elasticsearch APIs in two distinct ways: either by creating a struct, such as `IndexRequest`, and calling its `Do()` method by passing it a context and the client, or by calling the `Search()` function on the client directly, using the option functions such as `WithIndex()`. See more information and examples in the
-The `estransport` package handles the transfer of data to and from Elasticsearch, including retrying failed requests, keeping a connection pool, discovering cluster nodes and logging.
+The `elastictransport` package handles the transfer of data to and from Elasticsearch, including retrying failed requests, keeping a connection pool, discovering cluster nodes and logging.
 
 Read more about the client internals and usage in the following blog posts:
````
````diff
-The example intentionally doesn't use any abstractions or helper functions, to
-demonstrate the low-level mechanics of working with the Bulk API:
+The example intentionally doesn't use any abstractions or helper functions, to demonstrate the low-level mechanics of
+working with the Bulk API:
 
 * iterating over a slice of data and preparing the `meta`/`data` pairs,
 * filling a buffer with the payload until the configured threshold for a single batch is reached,
@@ -29,7 +29,8 @@ go run default.go -count=100000 -batch=25000
 ## `indexer.go`
 
-The [`indexer.go`](indexer.go) example demonstrates how to use the [`esutil.BulkIndexer`](../esutil/bulk_indexer.go) helper for efficient indexing in parallel.
+The [`indexer.go`](indexer.go) example demonstrates how to use the [`esutil.BulkIndexer`](../esutil/bulk_indexer.go)
````
_examples/bulk/benchmarks/README.md (+15 -7)

````diff
@@ -1,6 +1,8 @@
 # Bulk Indexer Benchmarks
 
-The [`benchmarks.go`](benchmarks.go) file executes end-to-end benchmarks for `esutil.NewBulkIndexer`. It allows to configure indexer parameters, index settings, number of runs. See `go run benchmarks.go --help` for an overview of configuration options:
+The [`benchmarks.go`](benchmarks.go) file executes end-to-end benchmarks for `esutil.NewBulkIndexer`. It allows to
+configure indexer parameters, index settings, number of runs. See `go run benchmarks.go --help` for an overview of
+configuration options:
 
 ```
 go run benchmarks.go --help
@@ -67,7 +69,8 @@ docs/sec: min [279,173] max [289,351] mean [286,987]
 ## HTTP Log Event
 
-The [`httplog`](data/httplog/document.json) dataset uses a bigger document (2.5K), corresponding to a log event gathered by [Filebeat](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-nginx.html) from Nginx.
+The [`httplog`](data/httplog/document.json) dataset uses a bigger document (2.5K), corresponding to a log event gathered
+by [Filebeat](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-nginx.html) from Nginx.
 
 ```
 ELASTICSEARCH_URL=http://server:9200 go run benchmarks.go --dataset=httplog --count=1_000_000 --flush=3MB --shards=5 --replicas=0 --fasthttp=true --easyjson=true
@@ -89,7 +92,8 @@ docs/sec: min [50,165] max [53,072] mean [52,011]
 ## Mock Server
 
-The `--mockserver` flag allows to run the benchmark against a "mock server", in this case Nginx, to understand a theoretical performance of the client, without the overhead of a real Elasticsearch cluster.
+The `--mockserver` flag allows to run the benchmark against a "mock server", in this case Nginx, to understand a
+theoretical performance of the client, without the overhead of a real Elasticsearch cluster.
 
 ```
 ELASTICSEARCH_URL=http://server:8000 go run benchmarks.go --dataset=small --count=1_000_000 --flush=2MB --warmup=0 --mockserver
@@ -117,8 +121,12 @@ the size and structure of your data, the index settings and mappings, the cluster
 The benchmarks have been run in the following environment:
 
 * OS: Ubuntu 18.04.4 LTS (5.0.0-1031-gcp)
-* Client: A `n2-standard-8`[GCP instance](https://cloud.google.com/compute/docs/machine-types#n2_machine_types) (8 vCPUs/32GB RAM)
-* Server: A `n2-standard-16`[GCP instance](https://cloud.google.com/compute/docs/machine-types#n2_machine_types) (16 vCPUs/64GB RAM)
-* Disk: A [local SSD](https://cloud.google.com/compute/docs/disks#localssds) formatted as `ext4` on NVMe interface for Elasticsearch data
-* A single-node Elasticsearch cluster, `7.6.0`, [default distribution](https://www.elastic.co/downloads/elasticsearch), installed from a TAR, with 4GB locked for heap
+* Client: A `n2-standard-8`[GCP instance](https://cloud.google.com/compute/docs/machine-types#n2_machine_types) (8
+vCPUs/32GB RAM)
+* Server: A `n2-standard-16`[GCP instance](https://cloud.google.com/compute/docs/machine-types#n2_machine_types) (16
+vCPUs/64GB RAM)
+* Disk: A [local SSD](https://cloud.google.com/compute/docs/disks#localssds) formatted as `ext4` on NVMe interface for
+Elasticsearch data
+* A single-node Elasticsearch cluster, `7.6.0`, [default distribution](https://www.elastic.co/downloads/elasticsearch),
+installed from a TAR, with 4GB locked for heap
 * Nginx 1.17.8 with [`nginx.conf`](etc/nginx.conf)
````