Title: Deploy 'TensorFlow' Models
Description: Tools to deploy 'TensorFlow' <https://www.tensorflow.org/> models across multiple services. Currently, it provides a local server for testing 'cloudml' compatible services.
Authors: Javier Luraschi [aut, ctb], Daniel Falbel [cre, ctb], RStudio [cph]
Maintainer: Daniel Falbel <[email protected]>
License: Apache License 2.0
Version: 0.6.1
Built: 2024-11-01 03:24:10 UTC
Source: https://github.com/cran/tfdeploy
Loads a SavedModel using the given TensorFlow session and returns the model's graph.
load_savedmodel(sess = NULL, model_dir = NULL)
sess: The TensorFlow session.

model_dir: The path to the exported model, as a string. Defaults to a "savedmodel" path or the latest training run.
Loading a model improves performance over multiple predict_savedmodel() calls.
See also: export_savedmodel(), predict_savedmodel()
## Not run: 
# start session
sess <- tensorflow::tf$Session()

# preload an existing model into a TensorFlow session
graph <- tfdeploy::load_savedmodel(
  sess,
  system.file("models/tensorflow-mnist", package = "tfdeploy")
)

# perform prediction based on a pre-loaded model
tfdeploy::predict_savedmodel(
  list(rep(9, 784)),
  graph
)

# close session
sess$close()
## End(Not run)
Runs a prediction over a saved model file, web API or graph object.
predict_savedmodel(instances, model, ...)
instances: A list of prediction instances to be passed as input tensors to the service. Even for single predictions, a list with one entry is expected.

model: The model as a local path, a REST URL, or a graph object. A local path can be exported using export_savedmodel(), a REST URL can be created using serve_savedmodel(), and a graph object can be loaded using load_savedmodel().

...: Additional arguments passed to the implementation-specific methods; see the method signatures below for the options each accepts.
See also: export_savedmodel(), serve_savedmodel(), load_savedmodel()
## Not run: 
# perform prediction based on an existing model
tfdeploy::predict_savedmodel(
  list(rep(9, 784)),
  system.file("models/tensorflow-mnist", package = "tfdeploy")
)
## End(Not run)
Performs a prediction using a locally exported SavedModel.
## S3 method for class 'export_prediction'
predict_savedmodel(instances, model, signature_name = "serving_default", ...)
instances: A list of prediction instances to be passed as input tensors to the service. Even for single predictions, a list with one entry is expected.

model: The model as a local path, a REST URL, or a graph object. A local path can be exported using export_savedmodel(), a REST URL can be created using serve_savedmodel(), and a graph object can be loaded using load_savedmodel().

signature_name: The named entry point to use in the model for prediction.

...: Additional arguments passed on from predict_savedmodel().
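A minimal sketch of a call to this method, reusing the MNIST model bundled with the package and passing the documented signature_name default explicitly:

## Not run: 
# predict from a locally exported SavedModel, naming the entry point
tfdeploy::predict_savedmodel(
  list(rep(9, 784)),
  system.file("models/tensorflow-mnist", package = "tfdeploy"),
  signature_name = "serving_default"
)
## End(Not run)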
Performs a prediction using a SavedModel already loaded using load_savedmodel().
## S3 method for class 'graph_prediction'
predict_savedmodel(instances, model, sess, signature_name = "serving_default", ...)
instances: A list of prediction instances to be passed as input tensors to the service. Even for single predictions, a list with one entry is expected.

model: The model as a local path, a REST URL, or a graph object. A local path can be exported using export_savedmodel(), a REST URL can be created using serve_savedmodel(), and a graph object can be loaded using load_savedmodel().

sess: The active TensorFlow session.

signature_name: The named entry point to use in the model for prediction.

...: Additional arguments passed on from predict_savedmodel().
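A minimal sketch of a call to this method, following the load_savedmodel() example above: the bundled MNIST model is preloaded into a session and the resulting graph is passed as the model:

## Not run: 
# preload the bundled model, then predict against the graph object
sess <- tensorflow::tf$Session()
graph <- tfdeploy::load_savedmodel(
  sess,
  system.file("models/tensorflow-mnist", package = "tfdeploy")
)

tfdeploy::predict_savedmodel(
  list(rep(9, 784)),
  graph
)

# close session
sess$close()
## End(Not run)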
Performs a prediction using a web API that serves a SavedModel.
## S3 method for class 'webapi_prediction'
predict_savedmodel(instances, model, ...)
instances: A list of prediction instances to be passed as input tensors to the service. Even for single predictions, a list with one entry is expected.

model: The model as a local path, a REST URL, or a graph object. A local path can be exported using export_savedmodel(), a REST URL can be created using serve_savedmodel(), and a graph object can be loaded using load_savedmodel().

...: Additional arguments passed on from predict_savedmodel().
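A minimal sketch of a call to this method, assuming a server started locally with serve_savedmodel() on its default host and port; the endpoint path shown is an assumption, not taken from this manual, and may need adjusting for your server:

## Not run: 
# predict over HTTP against a locally served model;
# the endpoint path is an assumption, adjust it to your deployment
tfdeploy::predict_savedmodel(
  list(rep(9, 784)),
  "http://127.0.0.1:8089/serving_default/predict"
)
## End(Not run)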
Serve a TensorFlow SavedModel as a local web API.
serve_savedmodel(model_dir, host = "127.0.0.1", port = 8089, daemonized = FALSE, browse = !daemonized)
model_dir: The path to the exported model, as a string.

host: Address used to serve the model, as a string.

port: Port used to serve the model, as a number.

daemonized: Run the 'httpuv' server daemonized so that interactive R sessions are not blocked while handling requests. To terminate a daemonized server, call 'httpuv::stopDaemonizedServer()' with the handle returned from this call.

browse: Launch a browser with the serving landing page?
## Not run: 
# serve an existing model over a web interface
tfdeploy::serve_savedmodel(
  system.file("models/tensorflow-mnist", package = "tfdeploy")
)
## End(Not run)
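A minimal sketch of a daemonized server, stopped via the returned handle as described under the daemonized argument:

## Not run: 
# serve the bundled model without blocking the R session
handle <- tfdeploy::serve_savedmodel(
  system.file("models/tensorflow-mnist", package = "tfdeploy"),
  daemonized = TRUE
)

# ... issue prediction requests against http://127.0.0.1:8089 ...

# stop the daemonized server
httpuv::stopDaemonizedServer(handle)
## End(Not run)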