Iterations; Time; logname2.train.key1; logname2.train.key2; ...
...
```

## Deployment

Barista also supports deploying your network automatically. This is handy if you are not only interested in the training results themselves, but also want to use the final net elsewhere, e.g. inside an external application. Since such use usually involves inference only, some parts of the created graph are no longer needed or have to be altered.

To start the deployment process in Barista, use the corresponding entry in the top menu bar.

Now you need to specify two things:

- a path pointing to an existing folder on your machine
- a snapshot that has been created inside your Barista project (or that was imported by copying the file into one of the session folders)

The chosen snapshot determines which version of your network is deployed. The given path is used as the destination for all generated files. These include a copy of the snapshot's caffemodel file, which contains all trained weights, as well as a prototxt file containing a modified definition of your network's architecture.
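
For illustration, such a modified network definition might look roughly like the following sketch (layer names and dimensions are made up, not produced by Barista): the former data layer has been replaced by an Input layer with a static shape, and a Softmax produces the final predictions.

```
name: "deployed_net"
layer {
  name: "data"
  type: "Input"
  top: "data"
  # static input shape: batch size, channels, height, width
  input_param { shape: { dim: 1 dim: 1 dim: 28 dim: 28 } }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "data"
  top: "ip1"
  inner_product_param { num_output: 10 }
}
layer {
  name: "prob"
  type: "Softmax"
  bottom: "ip1"
  top: "prob"
}
```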

Before exporting those files, the following steps are performed automatically:

- All data layers are removed.
- All layers requiring labeled data (in particular all loss layers, accuracy calculations, etc.) are removed as well.
- New input layers with static input dimensions are added instead.
- The input dimensions are determined automatically, as long as the former data layers referenced a valid data source.
- Blob connections are set automatically.
- Usually, only one input layer will be added. Note, however, that the number of newly added input layers might be lower than the number of previously removed data layers: only one input layer is created per unique data blob name (multiple data layers might have shared the same blob name), and an input layer is only created if at least one other layer actually uses the provided data.
- A new Softmax layer is appended, but only if none exists yet and a SoftmaxWithLoss layer used to be included.
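
The steps above can be sketched in plain Python. This is a hypothetical, simplified model — layers are plain dicts, only a few layer types are covered, and it is not Barista's actual implementation:

```python
# Hypothetical sketch of the deployment rules above; not Barista's actual code.
# Layers are simplified to plain dicts with "name", "type", "top", "bottom".

DATA_TYPES = {"Data", "HDF5Data", "MemoryData", "ImageData"}
LABEL_TYPES = {"SoftmaxWithLoss", "EuclideanLoss", "Accuracy"}

def deploy(layers):
    had_softmax_loss = any(l["type"] == "SoftmaxWithLoss" for l in layers)
    data_tops = {t for l in layers if l["type"] in DATA_TYPES for t in l["top"]}
    # Drop data layers and layers requiring labeled data.
    kept = [l for l in layers
            if l["type"] not in DATA_TYPES and l["type"] not in LABEL_TYPES]
    # Add one Input layer per unique data blob that is still consumed.
    used = {b for l in kept for b in l.get("bottom", [])}
    inputs = [{"name": blob, "type": "Input", "top": [blob]}
              for blob in sorted(data_tops & used)]
    result = inputs + kept
    # Append a Softmax if a SoftmaxWithLoss existed and no Softmax is present
    # (for simplicity it is attached to the last remaining layer's top blob).
    if had_softmax_loss and all(l["type"] != "Softmax" for l in result):
        result.append({"name": "prob", "type": "Softmax",
                       "bottom": [result[-1]["top"][0]], "top": ["prob"]})
    return result

# Example: a minimal training net with one data layer and one loss layer.
train_net = [
    {"name": "mnist", "type": "Data", "top": ["data", "label"]},
    {"name": "ip1", "type": "InnerProduct", "bottom": ["data"], "top": ["ip1"]},
    {"name": "loss", "type": "SoftmaxWithLoss",
     "bottom": ["ip1", "label"], "top": ["loss"]},
]
deployed = deploy(train_net)
```

In this example the data layer producing `data` and `label` is replaced by a single Input layer for `data` (no layer consumes `label` after the loss layer is gone), and a Softmax is appended in place of the removed SoftmaxWithLoss.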

The rules described above are based on the information provided on the following caffe wiki page: https://github.com/BVLC/caffe/wiki/Using-a-Trained-Network:-Deploy

Finally, some restrictions apply to the deployment process:

- No data layer may have more than two top blobs (a general restriction of caffe); a violation raises a warning.
- At least the label blob must follow the naming convention, i.e. it must always be called "label"; otherwise a warning is raised.
- The shape of a data layer can only be determined automatically if the layer type is either "Data" (LMDB or LEVELDB) or "HDF5Data". For other types, a warning will inform the user about the necessary manual changes.
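
If the shape could not be inferred, the required manual change would be to fill in the dimensions of the generated Input layer in the exported prototxt file by hand, e.g. (dimensions hypothetical):

```
layer {
  name: "data"
  type: "Input"
  top: "data"
  # batch size, channels, height, width - fill in the shape your net expects
  input_param { shape: { dim: 1 dim: 3 dim: 224 dim: 224 } }
}
```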