
Commit d590aae

Merge pull request #245 from chskcau/patch-1
Documentation fixes
2 parents fca155b + 7f172c2

1 file changed, +13 −13 lines changed

docs/src/index.md (+13 −13)
@@ -233,29 +233,29 @@ julia> sum(A) == sum(DA)
 false
 ```
 
-The ultimate ordering of operations will be dependent on how the Array is distributed.
+The ultimate ordering of operations will be dependent on how the `Array` is distributed.
 
 
 
-Garbage Collection and DArrays
+Garbage Collection and `DArray`s
 ------------------------------
 
-When a DArray is constructed (typically on the master process), the returned DArray objects stores information on how the
-array is distributed, which processor holds which indices and so on. When the DArray object
+When a `DArray` is constructed (typically on the master process), the returned `DArray` objects stores information on how the
+array is distributed, which processor holds which indices and so on. When the `DArray` object
 on the master process is garbage collected, all participating workers are notified and
-localparts of the DArray freed on each worker.
+localparts of the `DArray` freed on each worker.
 
-Since the size of the DArray object itself is small, a problem arises as `gc` on the master faces no memory pressure to
-collect the DArray immediately. This results in a delay of the memory being released on the participating workers.
+Since the size of the `DArray` object itself is small, a problem arises as `gc` on the master faces no memory pressure to
+collect the `DArray` immediately. This results in a delay of the memory being released on the participating workers.
 
 Therefore it is highly recommended to explicitly call `close(d::DArray)` as soon as user code
 has finished working with the distributed array.
 
-It is also important to note that the localparts of the DArray is collected from all participating workers
-when the DArray object on the process creating the DArray is collected. It is therefore important to maintain
-a reference to a DArray object on the creating process for as long as it is being computed upon.
+It is also important to note that the localparts of the `DArray` is collected from all participating workers
+when the `DArray` object on the process creating the `DArray` is collected. It is therefore important to maintain
+a reference to a `DArray` object on the creating process for as long as it is being computed upon.
 
-`d_closeall()` is another useful function to manage distributed memory. It releases all darrays created from
+`d_closeall()` is another useful function to manage distributed memory. It releases all `DArrays` created from
 the calling process, including any temporaries created during computation.
 
 
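The cleanup pattern documented in the hunk above is easy to illustrate. A minimal sketch (not part of the commit; the worker count and array size are arbitrary, and it assumes a current Julia with the `Distributed` standard library):

```julia
using Distributed
addprocs(4)                    # arbitrary worker count, for illustration only
@everywhere using DistributedArrays

d = dzeros(10_000, 10_000)     # DArray constructed on the master process
total = sum(d)                 # ... work with the distributed array ...

# Eagerly free the localparts on all participating workers instead of waiting
# for gc() on the master, which feels no pressure from the tiny DArray descriptor.
close(d)

# Or release every DArray created from the calling process,
# including any temporaries produced during computation.
d_closeall()
```

Calling `close` as soon as the work is done frees the localparts immediately; `d_closeall()` is the broader option when many temporaries have accumulated.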
@@ -275,7 +275,7 @@ Argument `data` if supplied is distributed over the `pids`. `length(data)` must
 If the multiple is 1, returns a `DArray{T,1,T}` where T is `eltype(data)`. If the multiple is greater than 1,
 returns a `DArray{T,1,Array{T,1}}`, i.e., it is equivalent to calling `distribute(data)`.
 
-`gather{T}(d::DArray{T,1,T})` returns an Array{T,1} consisting of all distributed elements of `d`
+`gather{T}(d::DArray{T,1,T})` returns an `Array{T,1}` consisting of all distributed elements of `d`.
 
 Given a `DArray{T,1,T}` object `d`, `d[:L]` returns the localpart on a worker. `d[i]` returns the `localpart`
 on the ith worker that `d` is distributed over.
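A short sketch of the `gather` and indexing behaviour described in this hunk (not part of the commit). It assumes the `ddata(; data=..., pids=...)` constructor that the surrounding documentation in index.md describes, that `gather` is in scope as documented, and the element values are made up:

```julia
using Distributed
addprocs(4)
@everywhere using DistributedArrays

# length(data) equals the number of workers (a multiple of 1),
# so this yields a DArray{Int,1,Int}: one element per worker.
d = ddata(data = [10, 20, 30, 40])

d[2]                                          # element held by the 2nd worker d is distributed over
remotecall_fetch(() -> d[:L], workers()[1])   # d[:L] evaluated on a worker returns its localpart
gather(d)                                     # Array{Int,1} of all distributed elements
```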
@@ -284,7 +284,7 @@ on the ith worker that `d` is distributed over.
 
 SPMD Mode (An MPI Style SPMD mode with MPI like primitives, requires Julia 0.6)
 -------------------------------------------------------------------------------
-SPMD, i.e., a Single Program Multiple Data mode is implemented by submodule `DistributedArrays.SPMD`. In this mode the same function is executed in parallel on all participating nodes. This is a typical style of MPI programs where the same program is executed on all processors. A basic subset of MPI-like primitives are currently supported. As a programming model it should be familiar to folks with an MPI background.
+SPMD, i.e., a Single Program Multiple Data mode, is implemented by submodule `DistributedArrays.SPMD`. In this mode the same function is executed in parallel on all participating nodes. This is a typical style of MPI programs where the same program is executed on all processors. A basic subset of MPI-like primitives are currently supported. As a programming model it should be familiar to folks with an MPI background.
 
 The same block of code is executed concurrently on all workers using the `spmd` function.
 
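Since this hunk only adjusts punctuation, the `spmd` usage it describes is unchanged. A minimal sketch (not part of the commit; the function name `hello_spmd` is made up):

```julia
using Distributed
addprocs(4)
@everywhere using DistributedArrays
@everywhere using DistributedArrays.SPMD

# This function body is executed concurrently on every participating worker.
@everywhere function hello_spmd(greeting)
    println(greeting, " from process ", myid())
end

# MPI-style SPMD: run the same block of code on all workers.
spmd(hello_spmd, "hello"; pids = workers())
```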