developers/compute-to-data-v2.0/free-compute-to-data-flow.md
The end user selects the free resources in the consumer tool and fills in the job duration they consider necessary for the algorithm execution.

### Free start compute

#### Nonce & signature check

The consumer tool calls the ocean.js method `freeStartCompute`, which sends a **POST /freeCompute** request to the Ocean Node. Within Ocean Node, the nonce and signature provided by ocean.js are checked. If the nonce or signature is invalid, the node returns __500, 'Invalid nonce or signature, unable to proceed.'__
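
As an illustration, here is a minimal sketch of such a check using `ethers` (v6); the exact signed payload and the nonce bookkeeping inside Ocean Node are assumptions, not the actual implementation:

```typescript
import { ethers } from 'ethers'

// Hedged sketch: verify that the signature over (consumerAddress + nonce)
// was produced by the consumer's key. The signed payload format and the
// strictly-increasing nonce scheme are assumptions for illustration.
function isValidNonceAndSignature(
  consumerAddress: string,
  nonce: string,
  signature: string,
  lastSeenNonce: number
): boolean {
  // Reject replayed or stale nonces
  if (Number(nonce) <= lastSeenNonce) return false

  // Recover the signer of the message and compare with the consumer address
  const recovered = ethers.verifyMessage(consumerAddress + nonce, signature)
  return recovered.toLowerCase() === consumerAddress.toLowerCase()
}
```
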

#### Credentials check

If the node has `POLICY_SERVER_URL` configured and the DDO contains credentials, the credentials check is performed by the `Policy Server`; otherwise the node performs the credentials check on the consumer address itself. In case of failure, the node returns __403, 'Error: Access to asset ${ddo.id} was denied'__ to the consumer tool.
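
A sketch of this branching is shown below; the DDO shape and the injected checker signatures are illustrative assumptions, not Ocean Node's internals:

```typescript
// Hedged sketch of the credentials branching described above.
interface Ddo {
  id: string
  credentials?: unknown
}

async function checkCredentials(
  ddo: Ddo,
  consumerAddress: string,
  // Injected checkers: the real node wires these to the Policy Server client
  // and to its own local allow/deny logic (names here are assumptions).
  policyServerCheck: (url: string, ddo: Ddo, consumer: string) => Promise<boolean>,
  localCheck: (ddo: Ddo, consumer: string) => boolean
): Promise<void> {
  const policyServerUrl = process.env.POLICY_SERVER_URL
  const allowed =
    policyServerUrl && ddo.credentials
      ? await policyServerCheck(policyServerUrl, ddo, consumerAddress)
      : localCheck(ddo, consumerAddress)

  if (!allowed) {
    // Mirrors the documented 403 response body
    throw new Error(`Error: Access to asset ${ddo.id} was denied`)
  }
}
```
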
After these checks pass, a job ID is generated and the C2D engine is called for the actual algorithm execution.

#### C2D Docker Engine

The only supported engine for start compute (free and paid) is the Docker engine.
The C2D Docker engine class executes the following steps:

1. The C2D Engine validates the Docker image by checking the manifest's operating system and architecture against those of the environment platform. The manifest is retrieved from the `tag` or `image digest hash` using the Docker SDK (see the sketch after this list). If validation fails, the node throws the error __Unable to validate docker image__ within the engine and creation of the job stops.
2. Creates the folders for the datasets, the algorithm, and the results of the algorithm execution, such as `/data/inputs`, `/data/transformations`, `/data/ddos`, `/data/outputs`.
3. Saves the job structure into the `SQLite` database.
4. Starts monitoring the job execution and keeps journaling the job's lifecycle status in the `SQLite` database.
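
For step 1, a minimal sketch of the platform validation is given below, assuming the `dockerode` Docker SDK and a locally available image; the real engine's manifest retrieval may differ:

```typescript
import Docker from 'dockerode'

// Hedged sketch: compare the image's OS and CPU architecture with the
// environment platform. Assumes the image can be inspected locally.
async function validateDockerImage(
  imageNameOrDigest: string,
  envPlatform: { os: string; architecture: string }
): Promise<void> {
  const docker = new Docker()
  const info = await docker.getImage(imageNameOrDigest).inspect()
  if (info.Os !== envPlatform.os || info.Architecture !== envPlatform.architecture) {
    // Mirrors the documented failure: creation of the job stops here
    throw new Error('Unable to validate docker image')
  }
}
```
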
Whenever a job has started, an internal loop that monitors all new jobs is triggered. This loop drives the lifecycle of a compute job execution.

**Lifecycle of a job according to statuses:**

`JobStarted` -> `PullImage` or `PullImageFailed` -> `ConfiguringVolumes` or `VolumeCreationFailed` -> `Provisioning` or `ContainerCreationFailed` -> `RunningAlgorithm` or `AlgorithmFailed` -> `PublishingResults` or `ResultsUploadFailed`
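
These statuses, paired as the success/failure outcomes of each phase, could be captured as a type (a sketch; the engine's actual enum may use different values):

```typescript
// Lifecycle statuses exactly as listed above
type C2DJobStatus =
  | 'JobStarted'
  | 'PullImage'
  | 'PullImageFailed'
  | 'ConfiguringVolumes'
  | 'VolumeCreationFailed'
  | 'Provisioning'
  | 'ContainerCreationFailed'
  | 'RunningAlgorithm'
  | 'AlgorithmFailed'
  | 'PublishingResults'
  | 'ResultsUploadFailed'
```
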
**Sequence of steps for the internal loop** (a condensed sketch follows below):

1. Pull the Docker image for the algorithm. On failure, the loop throws an error back to the consumer tool and updates the job status in the `SQLite` database to `PullImageFailed`.
2. Configure the volumes for the dedicated algorithm container. On failure, the loop throws an error back to the consumer tool and updates the job status in the `SQLite` database to `VolumeCreationFailed`.
3. Create the Docker container for the algorithm. On failure, the loop throws an error back to the consumer tool and updates the job status in the `SQLite` database to `ContainerCreationFailed`.
4. Trigger the algorithm execution on the dedicated container. If the algorithm fails, the loop throws an error back to the consumer tool and updates the job status in the `SQLite` database to `AlgorithmFailed`.
5. Publish the results from `/data/outputs`, whether or not the algorithm execution was successful. On failure, the loop throws an error back to the consumer tool and updates the job status in the `SQLite` database to `ResultsUploadFailed`.

If the publishing step completes successfully, the container and volumes are deleted together with the folders for datasets, algorithms and results.
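
A condensed sketch of that sequence, reusing the `C2DJobStatus` type above; the `JobSteps` helpers standing in for the engine's real functions are illustrative assumptions:

```typescript
// Illustrative interface -- the real engine's internals differ.
interface JobSteps {
  pullImage(): Promise<void>
  configureVolumes(): Promise<void>
  createContainer(): Promise<string> // returns the container id
  runAlgorithm(containerId: string): Promise<void>
  publishResults(): Promise<void>
  updateStatus(status: C2DJobStatus): Promise<void> // persists to SQLite
}

async function runComputeJob(steps: JobSteps): Promise<void> {
  // Run one phase: record the in-progress status, and on failure persist
  // the failure status before propagating the error to the consumer tool.
  const phase = async <T>(
    action: () => Promise<T>,
    running: C2DJobStatus,
    failed: C2DJobStatus
  ): Promise<T> => {
    await steps.updateStatus(running)
    try {
      return await action()
    } catch (err) {
      await steps.updateStatus(failed)
      throw err
    }
  }

  await phase(() => steps.pullImage(), 'PullImage', 'PullImageFailed')
  await phase(() => steps.configureVolumes(), 'ConfiguringVolumes', 'VolumeCreationFailed')
  const containerId = await phase(
    () => steps.createContainer(),
    'Provisioning',
    'ContainerCreationFailed'
  )
  await phase(() => steps.runAlgorithm(containerId), 'RunningAlgorithm', 'AlgorithmFailed')
  // Note: per the docs, results are published even when the algorithm fails;
  // only the happy path is sketched here for brevity.
  await phase(() => steps.publishResults(), 'PublishingResults', 'ResultsUploadFailed')
}
```
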
**Observation**: If a job exceeds its specified duration, the C2DEngine internal loop terminates the container and volumes allocated for the algorithm's execution and sets the job to `PublishingResults`, i.e. it performs a forced cleanup of the job setup.

### Get job status

To display progress to the end user, the consumer tool requests the job status from the node at **GET /compute** with the job ID, via the ocean.js method `computeStatus`.
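
For example, a consumer tool could poll the endpoint like this (a sketch assuming Node.js 18+ with a global `fetch`; the query parameter name and response shape are assumptions, not the documented API surface):

```typescript
// Hedged sketch of polling GET /compute for a job's status
async function getJobStatus(nodeUrl: string, jobId: string): Promise<unknown> {
  const res = await fetch(`${nodeUrl}/compute?jobId=${encodeURIComponent(jobId)}`)
  if (!res.ok) {
    // Node-side failures are surfaced to the end user by the consumer tool
    throw new Error(`Status request failed: ${res.status} ${res.statusText}`)
  }
  return res.json()
}
```
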
In case of a request failure on the node side, the error is returned to the consumer tool and displayed to the end user.

### Retrieve compute job results

If the compute job status is `PublishingResults`, the consumer tool calls the ocean.js `computeResult` method, which requests the node endpoint `GET /computeResult`. The node returns the results content to ocean.js, and ocean.js generates a downloadable URL to pass on to the consumer tool.

In case of a request failure on the node side, the error is returned to the consumer tool and displayed to the end user.

Once the consumer tool receives the downloadable URL, it fetches the BLOB content from it and stores it in the results folder path specified by the end user.
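
A sketch of that final consumer-tool step (assuming Node.js 18+ with a global `fetch`; the output file name is an assumption):

```typescript
import { writeFile } from 'node:fs/promises'
import { join } from 'node:path'

// Fetch the result content from the downloadable URL produced by ocean.js
// and store it in the end user's chosen results folder.
async function saveResults(downloadUrl: string, resultsDir: string): Promise<string> {
  const res = await fetch(downloadUrl)
  if (!res.ok) {
    throw new Error(`Result download failed: ${res.status} ${res.statusText}`)
  }
  const blob = await res.blob()
  const outPath = join(resultsDir, 'compute-results') // file name is an assumption
  await writeFile(outPath, Buffer.from(await blob.arrayBuffer()))
  return outPath
}
```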
0 commit comments