I have some questions about using the `torch.multiprocessing` module. Say I have a `torch.nn.Module` called `model` and I call `model.share_memory()` on it.
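For concreteness, here is a minimal sketch of the setup I mean; `Net` is just a placeholder module I made up for illustration:

```python
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        return self.fc(x)

model = Net()
model.share_memory()  # moves the parameters (and buffers) into shared memory
```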
What happens if two processes call `forward()`, i.e. `model(input)`, at the same time? Is it safe, or should I use a lock to make sure `model` is not accessed by more than one process at a time?
Similarly, what happens if two or more processes each have their own optimizer over `model.parameters()` and they call `optimizer.step()` at the same time?
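Continuing the sketch above, the pattern I have in mind looks roughly like this Hogwild-style loop (the worker body, loss, data, and hyperparameters are all made up for illustration):

```python
import torch
import torch.nn.functional as F
import torch.multiprocessing as mp
import torch.optim as optim

def train(shared_model):
    # Each worker builds its *own* optimizer over the *shared* parameters;
    # the .grad tensors are created lazily per process, so gradients stay local.
    optimizer = optim.SGD(shared_model.parameters(), lr=0.01)
    for _ in range(100):
        x = torch.randn(4, 10)
        target = torch.randint(0, 2, (4,))
        optimizer.zero_grad()
        output = shared_model(x)  # forward() may run concurrently in other workers
        loss = F.cross_entropy(output, target)
        loss.backward()
        optimizer.step()  # writes to the shared parameters without any lock

if __name__ == "__main__":
    workers = [mp.Process(target=train, args=(model,)) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```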
I ask because I often see `optimizer.step()` called on shared models without any locking (e.g. in RL implementations of A3C or ACER), and I wonder whether that is actually safe.