Mastering Asynchronous Programming in R Shiny: A Comprehensive Guide

As Shiny developers, we’re always on a quest to create web applications that not only meet but exceed user expectations. Speed, responsiveness, and efficiency are the cornerstones of a successful Shiny app. In this blog series, we’re going to delve into a critical aspect of achieving these goals: asynchronous programming. In this comprehensive guide to async programming in R Shiny, you’ll learn what async programming is, which packages are available for Shiny, and how to turn your synchronous app into an asynchronous one in just a couple of steps!

In this blog, the focus is on bidding farewell to the frustrating era of unnecessary waiting and mastering asynchronous programming in R Shiny. Our journey will be structured into several parts:

Introduction to Asynchronous Programming: The concept of asynchronous programming and how it can contribute to responsiveness and speed.

Implementing Asynchronous Programming in Shiny: Moving beyond theory, we’ll explore practical techniques and packages to implement asynchronous programming within Shiny applications, enhancing their overall performance.

Comparing Methods: No discussion is complete without a comparison of different methods and tools available for asynchronous programming in Shiny. We’ll weigh the pros and cons to help you make informed choices.

Why are efficiency and speed important?

Let’s start with the basics. Why do we care about an efficient and fast app?

Nobody likes slow applications. In today’s fast-paced digital landscape, users expect nothing less than instant loading times and seamless performance. Slow, unresponsive apps can lead to frustration and ultimately drive users away. But we’re not here to rely solely on gut feelings. Let’s back it up with some cold, hard facts:

  • If a website or app takes longer than 3 seconds to load, a staggering 53% of mobile users will abandon it. [1]
  • Extend that wait to 5 seconds, and you’re looking at a jaw-dropping 90% abandonment rate. [1]
  • Even a mere 1-second delay in page load time can result in a 7% reduction in conversions, a 16% dip in user satisfaction, and an 11% decrease in page views. [2]
 

These statistics paint a clear picture: application speed and efficiency matter more than ever. It’s not just about delivering a pleasant user experience; it’s about retaining your audience, maximizing revenue, and minimizing costs. Even if your Shiny apps cater to internal stakeholders, remember that application performance issues can have far-reaching financial implications.

Consider this: 84% of users are inclined to abandon a website or app if they encounter slow load times or performance issues. Additionally, 37% of users are willing to switch to a competitor’s website if they experience poor performance. [3] In other words, inefficient apps simply lose users, and with them, their impact.

Now, you might wonder whether these principles apply to you, especially if your primary role isn’t software development. You just want to make a nice dashboard or app. However, here’s the key takeaway: when you build a Shiny app, you’re not just a data scientist, analyst, or manager. You’re a software developer. Regardless of your job title, you bear the responsibility of adhering to software development best practices and ensuring your apps run efficiently and smoothly.

There are various ways to make your Shiny app more efficient. You can think about caching, lazy loading, or simply writing more efficient code. But asynchronous programming belongs in that toolkit too! It allows you to perform multiple tasks simultaneously, optimizing resource usage and reducing waiting times – a true game-changer in some cases.

In fact, I’d go so far as to argue that asynchronous programming isn’t just a valuable skill; it’s an essential one. It’s a skill that can take your applications to the next level, especially when you consider its prevalence in other software platforms like Vue.js, Node.js, and Python.

Alright, now that we understand why efficiency and speed are important, it’s time to dive into the heart of the matter: asynchronous programming. What is it, and how can it transform your Shiny applications?

What is async programming?

In order to understand how async programming can help you, you need to understand how Shiny works. Essentially, when a user interacts with your Shiny application through a web browser, a sequence of events unfolds.

Here’s the breakdown: when a user triggers an input or makes changes in your app, the web browser sends an HTTP request to the server. That server is where the R session that serves our Shiny app runs, and it’s responsible for processing the request. Once the server completes its task, it sends a response back to the web browser, which renders the content as HTML, CSS, and JavaScript. In essence, everything your app does is executed within that R session on the server.

Now, here’s an essential point to grasp: by default, our Shiny applications run in a single-threaded manner. In other words, an R session processes tasks sequentially, one after the other. You’ve likely experienced this if you run R code in your console. It’s a step-by-step process, and R doesn’t start on the next line of code until it has finished what it’s currently busy with.

This default R behavior has implications for the user experience. When a user makes requests, they are processed one after the other in a queue. As you can imagine, this sequential execution naturally leads to some waiting time for users. Our R sessions and, consequently, our Shiny apps can only do one thing at a time.

So, let’s visualize this scenario. Suppose you have four tasks that need to be performed within your Shiny application, each taking a few seconds to complete. If you execute all these tasks sequentially (synchronously), the total time elapsed after completing these four tasks is the sum of their durations: a cumulative 17 seconds.

In this synchronous execution model, even if one task only takes 4 seconds, the waiting time for users can quickly add up because everything happens in a strict sequence. This waiting time grows more significant when multiple users share the same session. Depending on your app’s setup and deployment, you might allow multiple connections per session. In this case, if the first user initiates a task that takes 5 seconds and, at the same time, another user triggers a task that takes 4 seconds, the second user has to wait 5 seconds before their task even begins, and then another 4 seconds for their task to complete. A total of 9 seconds.

Here’s where asynchronous programming comes into play. When we execute tasks asynchronously, we can perform them simultaneously. This approach drastically reduces the total time needed to complete all four tasks to 5 seconds, which is essentially the duration of the longest individual task. The power of asynchronous programming especially becomes evident when you have multiple users. With asynchronous execution, it doesn’t matter when users click buttons or launch computations; tasks start immediately, and therefore finish sooner.

In the example we discussed earlier, even if the second user launches a task at the same time as the first, it gets completed in 4 seconds, not 9. With async task execution, we eliminate unnecessary waiting times for users.
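To make that arithmetic concrete, here’s a minimal sketch outside of Shiny, using the future package (which we’ll meet again below). The task durations are hypothetical and chosen to add up to the 17 seconds from the example; the asynchronous variant assumes at least four worker sessions are available:

library(future)
plan(multisession)  # run futures in separate background R sessions

slow_task <- function(seconds) {
  Sys.sleep(seconds)  # stand-in for a long computation
  seconds
}

durations <- c(5, 4, 4, 4)  # four tasks, 17 seconds in total

# Synchronous: tasks run one after the other, roughly 17 seconds in total
system.time(lapply(durations, slow_task))

# Asynchronous: tasks run concurrently, roughly 5 seconds in total
# (the duration of the longest individual task)
system.time({
  tasks <- lapply(durations, function(s) future(slow_task(s)))
  lapply(tasks, value)  # collect all results
})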

When to use async programming?

Now that we’ve laid the groundwork for asynchronous programming, let’s explore when and why you should consider asynchronous programming in your Shiny applications.

I/O-Bound Needs (Input/Output Operations):

  • One of the most common scenarios where asynchronous programming shines is when your application has I/O-bound needs.
  • I/O-bound tasks involve input/output operations that inherently come with waiting times. For instance, you might be requesting data from a database, reading large files, loading CSV datasets, calling an API, or generating (R markdown/Quarto) reports.

 

CPU-Bound Needs (Computationally Intensive Tasks):

  • Asynchronous programming isn’t solely reserved for I/O-bound tasks; it can also be valuable when dealing with CPU-bound needs.
  • CPU-bound tasks are those that demand significant computational resources, such as constructing complex machine learning models that require substantial processing power for predictions.
  • While traditional wisdom might lead you towards multi-threading in CPU-bound scenarios (to maximize CPU utilization), asynchronous programming still has its role to play. You can use asynchronous programming to orchestrate tasks efficiently and make sure the user can still use the app while they wait for the results. The computationally intensive task then becomes non-blocking. (A toy sketch of both categories follows below.)

 

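To make the distinction tangible, here’s a short, purely illustrative sketch; the function names, file path, and workload are hypothetical:

# I/O-bound: the R process mostly sits idle, waiting on disk, network, or a database
load_big_csv <- function(path) {
  read.csv(path)  # waiting on disk I/O for a large file
}

# CPU-bound: the R process itself is busy crunching numbers
simulate_pi <- function(n = 1e7) {
  hits <- sum(runif(n)^2 + runif(n)^2 < 1)
  4 * hits / n
}

# Both are good candidates for async execution: offload them so the main
# R session stays free to respond to other user interactions.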
Phew. Lots of info! On to the million-dollar question: “How do you implement asynchronous programming in the context of Shiny?” 👇

Async programming in Shiny

There are multiple ways to implement async programming in Shiny, and in this blog we’ll go over some of them: promises, callr, coro, mirai, and crew. Note that there are more ways to achieve async programming, so feel free to explore other alternatives as well!

promises

promises is perhaps the first package you will encounter when searching for async programming in Shiny. It provides an API designed with Shiny in mind and is typically used together with the future package and its functions to run tasks in the background.

A promise can be thought of as a placeholder for the result of an asynchronous task. It’s a representation of a future value that will be available at some point in the application’s execution. A promise object is the specific construct that encapsulates this promise.

Here’s how it works: when an asynchronous task is initiated, a promise object is created to manage it. This promise object serves as a container for the eventual outcome of the task, whether it’s a successful result or an error. It allows the rest of the application to continue running without being blocked by the task.

The key feature of promise objects is their ability to “resolve” once the asynchronous operation completes. This means that when the task finishes, the promise object captures its result. This result can then be used to update the application’s user interface or perform other actions based on the outcome.
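Outside of Shiny, that flow looks roughly like this. A minimal sketch (the example task and handlers are made up for illustration):

library(promises)
library(future)
plan(multisession)

# Launch the asynchronous task; this returns a promise immediately
p <- future_promise({
  Sys.sleep(2)  # stand-in for a long computation
  head(iris)
})

# Register what should happen once the promise resolves (or fails)
then(
  p,
  onFulfilled = function(value) print(value),
  onRejected  = function(err) message("Task failed: ", conditionMessage(err))
)

# The main R session is not blocked in the meantime
print("This prints right away, before the promise resolves")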

There are two ways to go about asynchronicity: inner-session and cross-session. Very simply put, inner-session means the task is not blocking for the current user, and cross-session means it is not blocking for other users served by that same R session. Cross-session asynchronicity is important when you allow multiple connections (aka users) on one R session that’s running your Shiny app. Below you can find a simple example using promises (and future) in Shiny that demonstrates both types of asynchronicity:

library(shiny)
library(promises)
library(future)

# Options for asynchronous strategies: multisession,
# multicore (not Windows/RStudio), cluster
plan(multisession)

ui <- fluidPage(
  titlePanel("Using promises in Shiny (inner-session)"),

  p("Note that clicking 'Start Expensive Job' does not seem
    like a revolution, but the long computation (our complex Sys.sleep() 😉)
    is not blocking for the current or other users in the same session.
    This is both cross-session AND inner-session asynchronicity."),

  selectInput(
    inputId = "rows",
    label = "Number of rows",
    choices = c(10, 50, 100, 150),
    selected = 10
  ),

  actionButton(inputId = "start_job",
               label = "Start Expensive Job",
               icon = icon("bolt")),

  tableOutput("result_table")
)

server <- function(input, output, session) {

  # initiate reactive values
  table_dt <- reactiveVal(NULL)

  # start expensive calculation
  observeEvent(input$start_job, {

    print("Starting job now")

    # update actionButton to show we are busy
    updateActionButton(inputId = "start_job",
                       label = "Start Expensive Job",
                       icon = icon("sync", class = "fa-spin"))

    # reactive values and reactive expressions cannot be read from
    # within a future, therefore, you need to read any reactive
    # values/expressions in advance of launching the future
    rows <- as.numeric(input$rows)

    future_promise({
      # long computation
      Sys.sleep(5)
      # take the first `rows` rows of the iris dataset
      head(iris, rows)
    }) %...>%
      table_dt()

    print("This will execute immediately,
          even though our promise isn't resolved yet")

  })

  # Display the table data
  output$result_table <- renderTable({

    table_dt()

  })

  observe({

    req(table_dt())

    # update actionButton to show data is available and
    # we're ready for another calculation
    updateActionButton(inputId = "start_job",
                       label = "Start Expensive Job",
                       icon = icon("bolt"))

  })

}

shinyApp(ui = ui, server = server)

On GitHub you can also find an example that demonstrates cross-session asynchronicity only.
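For reference, the cross-session-only pattern essentially boils down to returning the promise straight from the render function. A minimal sketch, reusing the inputs from the app above:

# Inside the server function: because the promise is returned directly from
# renderTable(), this user's output still waits for the result, but the R
# process is free to serve other sessions in the meantime (cross-session only).
output$result_table <- renderTable({
  rows <- as.numeric(input$rows)  # read reactives before launching the future
  future_promise({
    Sys.sleep(5)      # long computation
    head(iris, rows)
  })
})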

callr

The callr package does what its name suggests: it calls R from R. With callr you can spin up another R session that runs in the background, separately from your main R session. It’s perfect for asynchronous task execution: if you have lengthy or heavy computations, you can simply send them to another R session that will execute the work while the main R session remains free.

It works a bit differently compared to promises. While promises are smart enough to resolve “automatically”, with callr we have to check “manually” whether our background session has finished its task, and then collect the results. On the other hand, callr gives you much more control over the background sessions. It’s also easier to understand what is going on, because everything is very explicit.

When working with background processes, it’s good to realise that these are separate R sessions that don’t know anything about the main R session. This means that you need to provide all the libraries, functions, and data they need to run standalone. Below is a simple example that demonstrates how to use callr in Shiny:

library(shiny)
library(callr)

ui <- fluidPage(
  titlePanel("Using callR in Shiny"),
  actionButton("start_job", "Start Expensive Job"),
  tableOutput("result_table")
)

server <- function(input, output, session) {
  
  # initiate reactive values
  bg_proc <- reactiveVal(NULL)
  check_finished <- reactiveVal(FALSE)
  table_dt <- reactiveVal(NULL)
  
  # set whatever arguments you want to use
  some_argument <- "virginica"
  
  # callr demonstration
  observeEvent(input$start_job, {
    
    p <-
      
      r_bg(
        
        func =
          function(my_argument) {
            
            # long computation
            Sys.sleep(10)
            
            # using your supplied argument to demonstrate how to use arguments in background process
            iris <- subset(iris, Species == my_argument)
            
            # the result
            return(iris)
            
          },
        
        supervise = TRUE, args = list(my_argument = some_argument)
        
      )
    
    # update reactive vals
    bg_proc(p)
    check_finished(TRUE)
    table_dt(NULL)
  })
  
  # this part can be useful if you want to update your UI during the process
  # think about doing an expensive calculation and showing preliminary results
  observe({
    
    req(check_finished())
    
    invalidateLater(millis = 1000)
    
    # do something while waiting
    print(paste0("Still busy at ", Sys.time()))
    
    p <- isolate(bg_proc())
    
    # whenever the background job is finished the value of is_alive() will be FALSE
    if (p$is_alive() == FALSE) {
      
      print("Finished!")
      
      check_finished(FALSE)
      bg_proc(NULL)
      
      # update the table data with results
      # (do not nest setting `output` directly in observe methods)
      table_dt(p$get_result())
      
    }
    
  })

  # Display the table data
  output$result_table <- renderTable(table_dt())

}

shinyApp(ui = ui, server = server)

coro

coro implements coroutines for R. And it’s not well known in the Shiny community!

Coroutines are a mechanism dating back to the 1960s, offering a distinctive way of handling asynchronous programming. The concept revolves around the use of suspension points, suspendable functions, and continuations as first-class citizens in a language.

Throughout the years, several programming languages have evolved their own versions of the implementation. For example, in languages like Python and C#, coroutines are first-class citizens and can be used without an external library. In C#, there’s support for the yield statement, like in Python, but also for async and await calls.

Coroutines are essentially functions that can be paused and resumed at certain points. A normal function you just call, and it runs until it’s finished. A coroutine you create first and then push forward by calling resume() repeatedly. resume() will just run the coroutine body until the next suspension point is reached.

The ‘yield’ statement in Python is a good example. It creates a generator, a kind of coroutine. When the ‘yield’ is encountered, the current state of the function is saved and control is returned to the calling function. The calling function can then transfer execution back to the yielding function; its state is restored to the point where the ‘yield’ was encountered, and execution continues.
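coro brings the same idea to R. Here’s a minimal generator sketch (not Shiny-specific) to show the suspend-and-resume behaviour:

library(coro)

# A generator factory: the function body can suspend at yield() and resume later
counter <- generator(function(from, to) {
  for (i in from:to) {
    yield(i)  # suspend here and hand the value back to the caller
  }
})

count_to_three <- counter(1, 3)
count_to_three()  # 1  (runs until the first yield)
count_to_three()  # 2  (resumes where it left off)
count_to_three()  # 3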

Long story short: this mechanism is useful for writing functions that use asynchronous I/O. And if you want to do it in Shiny, there’s of course an example!

library(shiny)
library(coro)
library(httr)
library(promises)
library(later)

ui <- fluidPage(
  titlePanel("Using co-routines to run jobs concurrently in Shiny"),

  p("This demo demonstrates the coro package. It provides the async() function.
    Async() allows cooperative concurrency. You can run multiple async() 
    functions at the same time (so concurrently) and they are all managed by a
    scheduler in the background. Async() is cooperative because it decides
    whether or not it has to await a result. Await() basically suspends async()
    and gives control back to the scheduler. The scheduler constantly
    monitors whether async() operations are ready to make progress or not.
    In the meantime, your Shiny app will just keep running and is not
    unnecessarily waiting ✨. If you would run this code synchronously,
    it would have taken at least 6 + 5 = 11 seconds. Now it runs
    two jobs concurrently, getting it down to 6 seconds."),

  actionButton(inputId = "start_job",
               label = "Start Expensive Job",
               icon = icon("bolt")),

  br(),

  h4(strong("Result of first async operation:")),

  tableOutput("table_1"),

  h4(strong("Result of second async operation:")),

  tableOutput("table_2")

)

server <- function(input, output) {

  # initiate reactive values
  table_dt <- reactiveVal(NULL)
  times <- reactiveValues(start = NULL,
                          end = NULL)

  # this is the asynchronous function that gets some data
  async_data <- async(function(dataset, seconds) {

    print(paste("Getting ", dataset))

    # await takes an awaitable value, like a promise
    # we use resolve to satisfy a promises after some seconds using later
    # later() simply executes something after a delay, mimicking
    # a long computation
    await(
      promise(function(resolve, reject) {
        later(~resolve(NULL), delay = seconds)
      })
    )

    head(get(dataset), 10)

  })

  # start expensive calculation
  observeEvent(input$start_job, {

    print("Starting job now")

    times$start <- Sys.time()

    # you can also use promise_race() to wait for the first promise object
    # to be fulfilled instead of waiting for all promise objects to be
    # fulfilled, like promise_all() does.
    # This will require more code re-writing than simply changing the
    # function though.
    promise_all(p1 = async_data(dataset = "mtcars",
                                seconds = 6),
                p2 = async_data(dataset = "iris",
                                seconds = 5)) %...>%
      table_dt()

  })

  # Calculate time that passed
  observe({

    req(table_dt())

    times$end <- Sys.time()
    total_time <- times$end - isolate(times$start)

    print(paste("Total time elapsed:", total_time, "seconds"))

  })

  # Display the table data
  output$table_1 <- renderTable({

    table_dt()$p1

  })

  # Display the table data
  output$table_2 <- renderTable({

    table_dt()$p2

  })

}

shinyApp(ui = ui, server = server)

mirai

mirai is a hidden gem. The strength of mirai lies in its simplicity: it’s lightweight, minimalistic, and simple to use. The name of the package is Japanese for ‘future’.

It’s lightweight because it has a tiny, pure R code base that relies solely on nanonext, an NNG (Nanomsg Next Gen) messaging library for R.

Just like callr, this package helps you execute tasks somewhere else (on “workers”). An interesting feature of mirai is that you can set up persistent background processes (daemons), both locally and remotely, with TLS-secured connections. Another bonus is that a mirai can be used interchangeably with a promise in Shiny.
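Outside of Shiny, the core workflow looks roughly like this; a minimal sketch (details may differ slightly between mirai versions):

library(mirai)

daemons(2)  # start two persistent background processes

# Send work to a daemon; this returns immediately with a 'mirai' object
m <- mirai(
  {
    Sys.sleep(2)  # stand-in for a long computation
    x + y
  },
  x = 1, y = 2    # data passed along to the worker
)

unresolved(m)  # TRUE while the task is still running

call_mirai(m)  # block until the result is available
m$data         # 3

daemons(0)     # shut the daemons down again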

It’s not easy to get started with mirai in Shiny, because of the limited documentation and lack of examples. And it’s also not recommended because there’s something much better that we will talk about in the next section… But this example gets you started with mirai:

library(shiny)
library(promises)
library(mirai)
library(mirai.promises)

# Function to retrieve stock data
run_task <- function(symbol, start_date, end_date) {

  # simulate long retrieval time
  Sys.sleep(5)

  # get stock data
  url <- paste0("https://query1.finance.yahoo.com/v8/finance/chart/", symbol, "?period1=", 
                as.numeric(as.POSIXct(start_date)), "&period2=", as.numeric(as.POSIXct(end_date)), 
                "&interval=1d")

  response <- GET(url)
  json_data <- fromJSON(content(response, as = "text"))
  prices <- json_data$chart$result$indicators$quote[[1]]$close[[1]]
  dates <- as.Date(as.POSIXct(json_data$chart$result$timestamp[[1]], origin = "1970-01-01"))
  
  stock <- data.frame(Date = dates, Close = prices, stringsAsFactors = FALSE)
  
  ggplot(stock, aes(x = Date, y = Close)) +
    geom_line(color = "steelblue") +
    labs(x = "Date", y = "Closing Price") +
    ggtitle(paste("Stock Data for", symbol)) +
    theme_minimal()
  
}

# set 4 persistent workers
# if you would use url = ... you can set a remote worker
daemons(n = 4L)

ui <- fluidPage(
  
  titlePanel("Async programming in Shiny: calling an API asynchronously and get AEX stock data"),
  sidebarLayout(
    sidebarPanel(
      selectizeInput("company", 
                     "Select Company (max 2 supported)", 
                     choices = c("ADYEN.AS", "ASML.AS", "UNA.AS", "HEIA.AS", "INGA.AS", "PHIA.AS", "ABN.AS", "KPN.AS"),
                     selected = c("ADYEN.AS", "ASML.AS"),
                     multiple = TRUE,
                     options = list(maxItems = 2)),
      dateRangeInput("dates", "Select Date Range", start = Sys.Date() - 365, end = Sys.Date()),
      actionButton("task", "Get stock data (5 seconds each)")
    ),
    mainPanel(
      fluidRow(
        plotOutput("stock_plot1")
      ),
      fluidRow(
        plotOutput("stock_plot2")
      )
    )
  )
)

server <- function(input, output, session) {
  
  # reactive values
  mirai_args <- reactiveValues(args1 = NULL,
                               args2 = NULL)
  
  # check daemon status
  print(daemons())
  
  # button to submit a task
  observeEvent(input$task, {
    
    req(input$company)
    
    # create arguments list dynamically
    for (i in 1:length(input$company)) {
      
      symbol <- input$company[i] 
      
      print(symbol)
      
      args <- list(run_task = run_task, 
                   symbol = symbol,
                   start_date = input$dates[1],
                   end_date = input$dates[2]
      )
      
      mirai_args[[paste0("args", i)]] <- args
      
    }
    
  })
  
  # Note: this code is not dynamic and would need more work
  # put req() outside renderPlot(), otherwise mirai.promises doesn't work properly
  observe({
    
    req(mirai_args$args1)
    
    output$stock_plot1 <- renderPlot(
      mirai(
        {
          library("ggplot2")
          library("jsonlite")
          library("httr")
          run_task(symbol, start_date, end_date)
        },
        .args = mirai_args$args1
      ) %...>%
        plot()
    )
    
  })
  
  observe({
    
    req(mirai_args$args2)
    
    output$stock_plot2 <- renderPlot(
      mirai(
        {
          library("ggplot2")
          library("jsonlite")
          library("httr")
          run_task(symbol, start_date, end_date)
        },
        .args = mirai_args$args2
      ) %...>%
        plot()
    )
    
  })
  
  # reset daemons on stop
  onStop(function() daemons(0))  
  
}

shinyApp(ui = ui, server = server)

crew

As promised (pun intended 😜): something much better on top of mirai! 

Let’s go back to mirai first. There are two ways to add mirai to your Shiny app:

  1. using mirai.promises
  2. using crew

 

The first option allows you to use promises and mirai interchangeably. The second option goes a bit further and gives you much greater control over how tasks are executed: you can choose local or remote workers, set persistent workers, and constantly monitor activity. That’s why, after working with mirai for a bit, I find the crew approach more appealing; there’s simply more functionality, and the interface works nicely.

So how does it compare with, for example, promises? While a promises solution keeps the main process free and responsive, it only adds real value when there are a few big blocking operations. If you have a lot of operations that are each a little slow, there’s not much you will gain from it. As the crew developers state: “By design, [promises] does not scale to the prodigious quantities of heavy tasks that industrial enterprise-level apps aim to farm out in production-level pipelines. By contrast, crew scales out easily, and its controller asynchronously manages all the tasks from a single convenient place.”
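Before the full Shiny app, here’s a minimal, non-Shiny sketch of that controller workflow; the worker count and the task are made up for illustration:

library(crew)

# Start a controller with two local background workers
controller <- crew_controller_local(workers = 2)
controller$start()

# Push a task: the command runs on a worker while the main session stays free
controller$push(
  name = "example",
  command = {
    Sys.sleep(2)   # stand-in for a long computation
    Sys.getpid()   # prove the work happened in another process
  }
)

# Later: wait for the task to finish and collect the result
controller$wait(mode = "all")
task <- controller$pop()
task$result[[1]]

controller$terminate()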

The API is very intuitive, as you can see in the below example:

library(crew)
library(shiny)
library(ggplot2)
library(httr)
library(jsonlite)

# Function to retrieve stock data
run_task <- function(symbol, start_date, end_date) {
  
  # simulate long retrieval time
  Sys.sleep(5)
  
  # get stock data
  url <- paste0("https://query1.finance.yahoo.com/v8/finance/chart/", symbol, "?period1=", 
                as.numeric(as.POSIXct(start_date)), "&period2=", as.numeric(as.POSIXct(end_date)), 
                "&interval=1d")
  
  response <- GET(url)
  json_data <- fromJSON(content(response, as = "text"))
  prices <- json_data$chart$result$indicators$quote[[1]]$close[[1]]
  dates <- as.Date(as.POSIXct(json_data$chart$result$timestamp[[1]], origin = "1970-01-01"))
  
  stock <- data.frame(Date = dates, Close = prices, stringsAsFactors = FALSE)
  
  ggplot(stock, aes(x = Date, y = Close)) +
    geom_line(color = "steelblue") +
    labs(x = "Date", y = "Closing Price") +
    ggtitle(paste("Stock Data for", symbol)) +
    theme_minimal()
  
}

status_message <- function(n) {
  if (n > 0) {
    paste(format(Sys.time()), "tasks in progress ⏳ :", n)
  } else {
    "All tasks completed 🚀"
  }
}

ui <- fluidPage(
  
  titlePanel("Async programming in Shiny: calling an API asynchronously and get AEX stock data"),
  sidebarLayout(
    sidebarPanel(
      selectInput("company", "Select Company", choices = c("ADYEN.AS", "ASML.AS", "UNA.AS", "HEIA.AS", "INGA.AS", "RDSA.AS", "PHIA.AS", "ABN.AS", "KPN.AS")),
      dateRangeInput("dates", "Select Date Range", start = Sys.Date() - 365, end = Sys.Date()),
      actionButton("task", "Get stock data (5 seconds)")
    ),
    mainPanel(
      textOutput("status"),
      plotOutput("stock_plot")
    )
  )
  
)

server <- function(input, output, session) {
  # reactive values and outputs
  reactive_result <- reactiveVal(ggplot())
  reactive_status <- reactiveVal("No task submitted yet")
  reactive_poll <- reactiveVal(FALSE)
  output$stock_plot <- renderPlot(reactive_result())
  output$status <- renderText(reactive_status())
  
  # crew controller
  controller <- crew_controller_local(workers = 4, seconds_idle = 10)
  controller$start()
  
  # make sure to terminate the controller on stop
  onStop(function() controller$terminate())
  
  # button to submit a task
  observeEvent(input$task, {
    controller$push(
      command = run_task(symbol, start_date, end_date),
      # pass the function to the workers, and arguments needed
      data = list(run_task = run_task,
                  symbol = input$company,
                  start_date = input$dates[1],
                  end_date = input$dates[2]), 
      packages = c("httr", "jsonlite", "ggplot2")
    )
    reactive_poll(TRUE)
    
  })
  
  # event loop to collect finished tasks
  observe({
    req(reactive_poll())
    invalidateLater(millis = 100)
    result <- controller$pop()$result
    
    if (!is.null(result)) {
      reactive_result(result[[1]])
      print(controller$summary()) # get a summary of workers
    }
    
    reactive_status(status_message(n = sum(controller$schedule$summary())))
    reactive_poll(controller$nonempty())
  })
}

shinyApp(ui = ui, server = server)

Making the right choice - a comparison of methods

So many options! What is the right choice?

As with everything: context matters. There’s no right or wrong choice. Some packages are more intuitive to use, have more options, or have more documentation.

Personally, I really like crew and callr. For me, they make much more sense than promises, and they demystify the “magic” that happens when executing tasks asynchronously. That being said, the documentation for these packages is quite thin. While I’m aiming to change that, you might be more attracted to promises for that reason. Below I listed some differences and similarities between all the packages referenced in this blog:

Note that promises can achieve inner-session asynchronicity, but out of the box promises was designed to handle cross-session asynchronicity only. It therefore requires a small “workaround”: write the result to a reactive value from within the promise chain instead of returning the promise from the render function, as demonstrated in the first promises example above.

async_shiny on GitHub

Async programming can sound very daunting, but by providing specific Shiny examples my aim is to shorten your learning curve. We’ve covered a lot of examples in this blog, but you can never have enough: all apps are different, all use cases are different, and that calls for different solutions. That’s why I made a GitHub repository called async_shiny that contains these and more examples of implementing asynchronous programming in Shiny ✨.

So go ahead and master async programming in Shiny!

Watch "Mastering Asynchronous Programming in Shiny"

You can re-watch the keynote talk from ShinyConf2023 on the Appsilon YouTube channel 🎥

The slides of the presentation are also available.

Resources 📚
