Locking Guidelines

In my last post (Lock Up Unlocked), I gave you some basic guidelines for locking resources in multi-threaded programs.

Basic guidelines for locking.

  • Avoid simultaneously locking multiple shared resources in a single thread.

  • If multiple shared resources must be locked simultaneously, endeavor to have all threads lock the resources in the same order and unlock them in reverse order.

  • If they can't be unlocked in reverse order, then endeavor to unlock them in the same order and do not lock an earlier resource until all later locks are released.

  • If you can't follow these guidelines, then time out.



In today's post, I am going to elaborate a little on each principle. But first, let's discuss "resources". Many of you can quickly name the commonly understood resources. Things like memory, CPU, and network jump right out, but that's because we are so used to looking at Task Manager and Resource Monitor. Those categories are also a little too broad: when it comes to synchronization and locking, programmers are concerned with specific areas of memory, not "memory" as that big fuzzy thing we don't even manage ourselves anymore. So shared memory locations are one resource, and a communications port is another.

But look at it another way. ANYTHING that would cause the thread to wait suggests a resource. Yes, we insert our own locks to make the thread wait for our resources, but there are many operations the thread will perform that could cause it to wait. Even a "sleep" operation is a wait for the resource of elapsed time. Or maybe you put your thread to sleep to yield the processor resource to another thread. Reading a file, allocating memory, or reading a communications port could all result in a wait. Writing a file, sending an event, or throwing an exception could result in a wait as well. With C# it has become easy to hook just about everything, and with our hooks we could feasibly insert waits, or blocking calls, to obtain resources our event handler needs. In the last post, I pointed out that one of the locked resources in the program was the GUI's event loop. Since every wait is a kind of "lock", it behooves us to know when we are likely to block if we are to accurately apply the "basic guidelines for locking". Now, let's take a look at them.
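
First, though, here is a minimal sketch (the file name and timing are hypothetical) of how three very ordinary calls each make the thread wait, even though only one of them looks like a lock:

    using System.IO;
    using System.Threading;

    class HiddenWaits
    {
        private static readonly object _gate = new object();

        public void Run()
        {
            Thread.Sleep(100);                           // waits on the resource of elapsed time
            string text = File.ReadAllText("data.txt");  // may wait on the disk and the file system
            lock (_gate)                                 // waits on a lock we inserted ourselves
            {
                // ... use the shared resource guarded by _gate ...
            }
        }
    }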

Avoid simultaneously locking multiple shared resources in a single thread.
This one is almost too easy. If a thread only ever holds one lock at a time, it can never be holding a lock some other thread needs while it waits for a different lock. Therefore, it is never in the way of another thread. It may compete with other threads, but that's really the name of the game in thread synchronization.
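
As a rough illustration (the resource names below are made up for the example), a thread that honors this guideline never holds more than one lock at a time:

    class SingleLockWorker
    {
        // Hypothetical shared resources, each guarded by its own lock object.
        private static readonly object _queueLock = new object();
        private static readonly object _logLock = new object();

        public void Process(string item)
        {
            // Hold the queue lock only long enough to touch the queue...
            lock (_queueLock)
            {
                // ... enqueue or dequeue the item ...
            }

            // ...and only then, with no other lock held, take the log lock.
            lock (_logLock)
            {
                // ... write to the shared log ...
            }
        }
    }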

If multiple shared resources must be locked simultaneously, endeavor to have all threads lock the resources in the same order and unlock them in reverse order.
It can easily be shown that if all threads honor an agreed-upon sequence of resource locking, deadlock is prevented: since no thread can lock resource RB without first locking RA, the holder of the lock on RA can be sure that no thread holding a lock on RB is waiting on him. This principle becomes particularly important when working with semaphores that allow several threads to access the resource (RA) at once, typically for read-only access. If write access (RB) is needed, read access must first be obtained, preserving the order of locks.
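
Here is a minimal sketch of that ordering, with hypothetical lock objects standing in for RA and RB:

    class OrderedLocking
    {
        // Every thread agrees to take RA before RB.
        private static readonly object _lockRA = new object();
        private static readonly object _lockRB = new object();

        public void UpdateBoth()
        {
            lock (_lockRA)          // 1. lock RA first
            {
                lock (_lockRB)      // 2. then lock RB
                {
                    // ... work that needs both resources ...
                }                   // 3. RB is released first (reverse order)
            }                       // 4. RA is released last
        }
    }

.NET's ReaderWriterLockSlim captures the read-before-write case directly: a thread takes the read side with EnterUpgradeableReadLock() and, only while holding it, calls EnterWriteLock(), so the read lock is always acquired before the write lock.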

If they (the resources) can't be unlocked in reverse order, then endeavor to unlock them in the same order and do not lock an earlier resource until all later locks are released.
This approach really only works for applications that move through a predefined sequence of states. If the thread never moves backwards through the states, it can release its earlier locks as it goes. Think of a person moving from station to station through a sequence of operations. If the thread must back up, it can't release the earlier locks; it must unlock in reverse order.
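
A hand-over-hand sketch of the station-to-station idea (the station locks are hypothetical, and try/finally is omitted for brevity):

    using System.Threading;

    class StationPipeline
    {
        // A thread moves strictly forward and never returns to an earlier station,
        // so it can release earlier locks as it goes.
        private static readonly object _station1 = new object();
        private static readonly object _station2 = new object();

        public void MoveThrough()
        {
            Monitor.Enter(_station1);
            // ... work at station 1 ...

            Monitor.Enter(_station2);   // reserve the next station before leaving this one
            Monitor.Exit(_station1);    // unlock in the same order the locks were taken

            // ... work at station 2; station 1 is already free for the thread behind us ...
            Monitor.Exit(_station2);
        }
    }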

If you can't follow these guidelines, then time out.
One of the important skills of multi-threaded programming is writing your wait call in such a way as to handle a timeout gracefully. It's even more important when you know you've violated one of the guidelines and will therefore likely deadlock without a timeout. Examine Microsoft's reader-writer lock class and its example to see how an attempt is made to gain a write lock, with a timeout coded in just in case the write lock can't be obtained.
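
As one way to code such a timeout, here is a sketch that assumes the newer ReaderWriterLockSlim class rather than whichever class Microsoft's example used; the two-second limit is arbitrary:

    using System;
    using System.Threading;

    class TimedWriter
    {
        private static readonly ReaderWriterLockSlim _rwLock = new ReaderWriterLockSlim();

        public bool TryUpdate()
        {
            // Attempt the write lock, but give up after two seconds instead of deadlocking.
            if (!_rwLock.TryEnterWriteLock(TimeSpan.FromSeconds(2)))
            {
                // Handle the timeout gracefully: log it, retry later, or report the failure.
                return false;
            }

            try
            {
                // ... update the shared resource ...
                return true;
            }
            finally
            {
                _rwLock.ExitWriteLock();
            }
        }
    }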

We started this series with a simple deadlock on what looked like a critical section. "Locking" and its twin brother "waiting" reach a whole lot further than critical sections. But how, then, was the deadlock solved? Yes, we could have just removed the locks, but that would have been no fun. The deadlock was prevented when the child thread chose not to lock RB while holding the lock on RA. BeginInvoke() is analogous to the post-event call of the old days: it does not wait for the GUI thread to service the event. The child thread releases the lock (RA), then tries to lock only RB by calling EndInvoke() to wait for completion of the event. Since the first guideline is not violated, there is no deadlock.
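
A minimal sketch of that pattern, assuming a WinForms form and using made-up names for the shared state, might look like this:

    using System;
    using System.Windows.Forms;

    class ChildWorker
    {
        // Hypothetical shared state (RA); the form's event loop plays the role of RB.
        private static readonly object _lockRA = new object();
        private readonly Form _form;

        public ChildWorker(Form form) { _form = form; }

        public void DoWork()
        {
            IAsyncResult pending;

            lock (_lockRA)
            {
                // ... update the shared state ...

                // Post the GUI update without waiting for it; we still hold only RA here.
                pending = _form.BeginInvoke(new Action(() => _form.Text = "updated"));
            }   // RA is released before we ever wait on the GUI thread

            // Now wait on RB (the GUI event loop) while holding no other lock.
            _form.EndInvoke(pending);
        }
    }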

That's all for today.
