Implementing reliable applications using durable functions
One of the most common ways to process data quickly is parallel processing. Its main advantage is that we get the desired output quickly, with the work divided across the sub-threads that are spawned. This can be achieved in multiple ways using different technologies. However, a common challenge with these approaches is that if something goes wrong in the middle of a sub-thread, it's not easy to self-heal and resume from where the work stopped.
In this recipe, we'll implement a simple way of executing multiple instances of a function in parallel using durable functions, for the following scenario.
Assume that we have five customers (with IDs 1, 2, 3, 4, and 5, respectively) who need to generate 50,000 barcodes. Generating the barcodes would take a long time because image-processing tasks are involved. One simple way to process this request quickly is to use asynchronous...
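Before walking through the recipe steps, here is a minimal sketch of the fan-out/fan-in pattern that durable functions provide for exactly this kind of scenario, written with the Python Durable Functions SDK. The activity name GenerateBarcodes and the orchestrator shape are illustrative assumptions, not the recipe's actual code (the recipe may well use a different language); the point is that the orchestrator starts one activity per customer and the runtime checkpoints progress, so a failed activity can be retried or resumed without redoing the completed ones.

```python
import azure.durable_functions as df


def orchestrator_function(context: df.DurableOrchestrationContext):
    # Fan out: schedule one activity per customer so barcode generation
    # for all five customers runs in parallel.
    customer_ids = [1, 2, 3, 4, 5]
    tasks = [
        context.call_activity("GenerateBarcodes", customer_id)  # hypothetical activity
        for customer_id in customer_ids
    ]

    # Fan in: wait until every activity has completed. The durable runtime
    # checkpoints orchestration state, so progress survives restarts.
    results = yield context.task_all(tasks)
    return results


main = df.Orchestrator.create(orchestrator_function)
```

In an actual function app, GenerateBarcodes would be a separate activity function (with its own function.json binding) that performs the image-processing work for a single customer; the orchestrator above only coordinates those activities.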