Once you have a fully built model container, deploying it as
a REST service that serves predictions on the web is as
simple as calling model$predict
on each incoming single-row data point representing
a customer, lead, email, or whatever else you were modeling.
The current release of the modeling engine does not include deployment tools out of the box, but it should be straightforward for any engineering team to build some using HTTP server packages like microserver or Rserve.
See the Syberia roadmap to learn when deployment tools will be released into the modeling engine (or beat us to the punch!).
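To make the idea concrete, here is a minimal sketch of what such a service might look like. It uses the lower-level httpuv package (the engine beneath most R web frameworks) rather than microserver or Rserve, and it assumes the model container has been serialized to disk and exposes model$predict as described above; the file path, port, and JSON request format are illustrative assumptions, not part of the modeling engine.

```r
# A hedged sketch of a prediction endpoint, not an official deployment tool.
library(httpuv)
library(jsonlite)

# Assumption: the built model container was saved with saveRDS and exposes $predict.
model <- readRDS("model.rds")

httpuv::runServer("0.0.0.0", 8000, list(
  call = function(req) {
    # Parse the incoming single-row data point from the JSON request body.
    body <- rawToChar(req$rook.input$read())
    row  <- as.data.frame(jsonlite::fromJSON(body), stringsAsFactors = FALSE)

    # Score the row with the model container's predict method.
    score <- model$predict(row)

    list(
      status  = 200L,
      headers = list("Content-Type" = "application/json"),
      body    = jsonlite::toJSON(list(score = score), auto_unbox = TRUE)
    )
  }
))
```

A client would then POST a JSON object of feature values to port 8000 and receive the prediction back as JSON; swapping httpuv for microserver or Rserve changes the plumbing but not the core pattern of wrapping model$predict in a request handler.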