More complex resources are often built from simpler resources. It is the responsibility of the dependency_tracker to determine whether any of a resource's dependencies have been modified.
dependency_tracker(object, ..., dependency_tracker.return = "object")
object | The active_resource. See active_resource. |
---|---|
... | Additional parameters to pass to the next layer in the resource parsing tower. |
dependency_tracker.return | What to return in this layer of the parsing tower. The default choice, "object", returns the parsed resource. |
The dependency_tracker is very good at its job and can track arbitrarily nested dependencies (for example, if resource "foo" needs resource "bar", which needs resource "baz", and so on). But beware! The dependency_tracker won't tolerate circular dependencies with anything except tears of anguish.
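To make the nesting concrete, here is a minimal sketch of such a chain. The file paths and resource names below are hypothetical, invented for illustration:

```r
# Hypothetical project layout: each resource pulls in the next via resource().

#=== /dir/foo.R ===
paste("foo needs", resource("bar"))

#=== /dir/bar.R ===
paste("bar needs", resource("baz"))

#=== /dir/baz.R ===
"baz"
```

When "foo" is executed, the dependency_tracker records "bar" and, recursively, "baz" as its dependencies, so a later modification to baz.R counts as a modification of "foo". If baz.R in turn called resource("foo"), that would be a circular dependency, with the consequences described above.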
The local any_dependencies_modified is injected by the dependency_tracker for use in the preprocessor or parser of a resource. Note that this is based on the dependencies recorded the last time the resource was executed, since it is impossible to know a priori what the dependencies will be prior to sourcing the resource's file.
The local dependencies, a character vector of (recursive) dependencies, is also injected.
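As a sketch of how the injected dependencies local might be inspected (this assumes, by analogy with any_dependencies_modified in the example below on this page, that the dependency_tracker injects it by matching the parameter name in a preprocessor):

```r
# Hypothetical sketch: log the injected `dependencies` local from a
# preprocessor. Assumes a director project rooted at /dir.
d <- director("/dir")
d$register_preprocessor("runners/", function(source, dependencies) {
  # `dependencies` is a character vector of the resource's (recursive)
  # dependencies, as recorded the last time the resource was executed.
  message("Known dependencies: ", paste(dependencies, collapse = ", "))
  source() # Source the resource's file as usual.
})
```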
The local any_dependencies_modified rarely needs to be used by a preprocessor or parser. You should usually use resource caching instead.
The parameters must be named object and ... due to this method's inclusion in a tower.
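For illustration, a layer in a parsing tower follows the same shape as dependency_tracker's signature: it receives the active resource as object, forwards ... to the next layer, and may accept its own layer-specific options. A hedged sketch of a do-nothing layer follows; the yield() helper is an assumption here, following the tower convention of delegating to the next layer:

```r
# Hypothetical sketch of a tower layer. The first two parameters must be
# named `object` and `...` so the tower machinery can thread the active
# resource and additional arguments through each layer. `yield()` (assumed,
# per the tower convention) invokes the next layer down.
noop_layer <- function(object, ..., noop_layer.verbose = FALSE) {
  if (isTRUE(noop_layer.verbose)) {
    message("Passing resource through unchanged.")
  }
  yield() # Delegate to the next layer in the parsing tower.
}
```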
active_resource, resource_caching, tower
not_run({
  # Imagine we are constructing a stagerunner from a sequence of functions.
  # However, some of those functions have been built by other resources.
  # Imagine the following structure.
  # (See github.com/robertzk/stagerunner for an explanation of stagerunners.)

  #=== /dir/runners/project1.R ===
  list(
    "import data"  = resource("importers/db"),   # These are some functions
    "munge data"   = resource("mungers/impute"), # built by the user
    "create model" = resource("models/lm"),      # that live in other
    "export model" = resource("exporters/file")  # files.
  )

  #=== /dir/importers/db.R ===
  conn <- resource("connections/dev") # A list representing a connection
                                      # setting to a development database.
  DBI::dbReadTable(conn, "some_table")

  #=== /dir/connections/dev.R

  #=== R console ===
  d <- director("/dir") # Create a director object.
  d$register_preprocessor("runners/",
    function(director, source, any_dependencies_modified) {
      # `any_dependencies_modified` has been set by the dependency_tracker to
      # TRUE or FALSE according as /dir/runners/project1.R *or any of its
      # dependencies* has been modified.
      if (any_dependencies_modified ||
          is.null(runner <- director$cache_get("last_runner"))) {
        # Construct a new stageRunner, since a dependency has been modified.
        source()
      } else {
        runner
      }
    })

  d$register_parser("runners/", function(output) {
    # If it is a stageRunner, it must have been retrieved from the cache.
    if (stagerunner::is.stageRunner(output)) {
      return(output)
    }
    runner <- stagerunner::stageRunner$new(new.env(), output)
    # Cache the runner so it is available in the preprocessor next time.
    # As long as the /dir/runners/project1.R file remains untouched, we will
    # not have to bother re-sourcing the file and hence reconstructing the
    # stageRunner.
    director$cache_set("last_runner", runner)
    runner
  })

  sr  <- d$resource("runners/project1") # A fresh new stageRunner!
  sr2 <- d$resource("runners/project1") # Same one, since it used the cache.
  stopifnot(identical(sr, sr2))

  # We can use base::Sys.setFileTime to pretend like we updated the
  # modified time of the /dir/connections/dev.R file, triggering
  # `any_dependencies_modified = TRUE`.
  Sys.setFileTime(file.path(d$root(), "connections", "dev.R"),
                  Sys.time() - as.difftime(1, units = "mins"))

  sr3 <- d$resource("runners/project1") # Now it re-builds the runner.
  stopifnot(!identical(sr, sr3)) # A new runner!
})