
C# Events and Thread Safety

I frequently hear/read the following advice:

Always make a copy of an event before you check it for null and fire it. This will eliminate a potential problem with threading where the event becomes null in the gap between where you check for null and where you fire the event:

// Copy the event delegate before checking/calling
EventHandler copy = TheEvent;

if (copy != null)
    copy(this, EventArgs.Empty); // Call any handlers on the copied list

Updated: I thought from reading about optimizations that this might also require the event member to be volatile, but Jon Skeet states in his answer that the CLR doesn't optimize away the copy.

But meanwhile, in order for this issue to even occur, another thread must have done something like this:

// Better delist from event - don't want our handler called from now on:
otherObject.TheEvent -= OnTheEvent;
// Good, now we can be certain that OnTheEvent will not run...

The actual sequence might be this mixture:

// Copy the event delegate before checking/calling
EventHandler copy = TheEvent;

// Better delist from event - don't want our handler called from now on:
otherObject.TheEvent -= OnTheEvent;    
// Good, now we can be certain that OnTheEvent will not run...

if (copy != null)
    copy(this, EventArgs.Empty); // Call any handlers on the copied list

The point being that OnTheEvent runs after the author has unsubscribed, and yet they just unsubscribed specifically to avoid that happening. Surely what is really needed is a custom event implementation with appropriate synchronisation in the add and remove accessors. And in addition there is the problem of possible deadlocks if a lock is held while an event is fired.

So is this Cargo Cult Programming? It seems that way - a lot of people must be taking this step to protect their code from multiple threads, when in reality it seems to me that events require much more care than this before they can be used as part of a multi-threaded design. Consequently, people who are not taking that additional care might as well ignore this advice - it simply isn't an issue for single-threaded programs, and in fact, given the absence of volatile in most online example code, the advice may be having no effect at all.

(And isn't it a lot simpler to just assign the empty delegate { } on the member declaration so that you never need to check for null in the first place?)
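For concreteness, the empty-delegate pattern I mean is just this (a sketch):

// Initialise the event with a no-op handler so the delegate is never null.
public event EventHandler TheEvent = delegate { };

protected virtual void OnTheEvent()
{
    // No null check and no local copy needed: the invocation list always
    // contains at least the empty delegate.
    TheEvent(this, EventArgs.Empty);
}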

Updated: In case it wasn't clear, I did grasp the intention of the advice - to avoid a null reference exception under all circumstances. My point is that this particular null reference exception can only occur if another thread is delisting from the event, and the only reason for doing that is to ensure that no further calls will be received via that event, which clearly is NOT achieved by this technique. You'd be concealing a race condition - it would be better to reveal it! That null exception helps to detect an abuse of your component. If you want your component to be protected from abuse, you could follow the example of WPF - store the thread ID in your constructor and then throw an exception if another thread tries to interact directly with your component. Or else implement a truly thread-safe component (not an easy task).
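A minimal sketch of that WPF-style thread-affinity check (my own illustration, not WPF's actual implementation):

using System;
using System.Threading;

class SingleThreadedComponent
{
    // Capture the id of the thread that constructed the component.
    private readonly int ownerThreadId = Thread.CurrentThread.ManagedThreadId;

    private void VerifyAccess()
    {
        if (Thread.CurrentThread.ManagedThreadId != ownerThreadId)
            throw new InvalidOperationException(
                "This component must be used from the thread that created it.");
    }

    public void DoSomething()
    {
        VerifyAccess();   // reveal abuse from other threads instead of concealing it
        // ... normal single-threaded work ...
    }
}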

So I contend that merely doing this copy/check idiom is cargo cult programming, adding mess and noise to your code. To actually protect against other threads requires a lot more work.

Update in response to Eric Lippert's blog posts:

So there's a major thing I'd missed about event handlers: "event handlers are required to be robust in the face of being called even after the event has been unsubscribed", and obviously therefore we only need to care about the possibility of the event delegate being null. Is that requirement on event handlers documented anywhere?

And so: "There are other ways to solve this problem; for example, initializing the handler to have an empty action that is never removed. But doing a null check is the standard pattern."

So the one remaining fragment of my question is, why is explicit-null-check the "standard pattern"? The alternative, assigning the empty delegate, requires only = delegate {} to be added to the event declaration, and this eliminates those little piles of stinky ceremony from every place where the event is raised. It would be easy to make sure that the empty delegate is cheap to instantiate. Or am I still missing something?

Surely it must be that (as Jon Skeet suggested) this is just .NET 1.x advice that hasn't died out, as it should have done in 2005?

UPDATE

As of C# 6, the answer to this question is:

SomeEvent?.Invoke(this, e);

This question came up in an internal discussion a while back; I've been intending to blog this for some time now. My post on the subject is here: Events and Races
Stephen Cleary has a CodeProject article which examines this issue, and he concludes a general purpose, "thread-safe" solution does not exist. Basically it's up to the event invoker to ensure the delegate is not null, and up to the event handler to be able to handle being invoked after it's been unsubscribed.
@rkagerer - actually the second issue sometimes has to be dealt with by the event handler even if threads are not involved. Can happen if one event handler tells another handler to unsubscribe from the event currently being handled, but that 2nd subscriber will then receive the event anyway (because it unsubscribed while handling was in progress).
Adding a subscription to an event with zero subscribers, removing the only subscription for the event, invoking an event with zero subscribers, and invoking an event with exactly one subscriber, are all much faster operations than add/remove/invoke scenarios involving other numbers of subscribers. Adding a dummy delegate slows down the common case. The real problem with C# is that its creators decided to have EventName(arguments) invoke the event's delegate unconditionally, rather than having it only invoke the delegate if non-null (do nothing if null).
I moved the answer from the top to the bottom of the question. Ideally the question should not include an answer IMHO.

Jon Skeet

The JIT isn't allowed to perform the optimization you're talking about in the first part, because of the condition. I know this was raised as a spectre a while ago, but it's not valid. (I checked it with either Joe Duffy or Vance Morrison a while ago; I can't remember which.)

Without the volatile modifier it's possible that the local copy taken will be out of date, but that's all. It won't cause a NullReferenceException.

And yes, there's certainly a race condition - but there always will be. Suppose we just change the code to:

TheEvent(this, EventArgs.Empty);

Now suppose that the invocation list for that delegate has 1000 entries. It's perfectly possible that the action at the start of the list will have executed before another thread unsubscribes a handler near the end of the list. However, that handler will still be executed because it'll be a new list. (Delegates are immutable.) As far as I can see this is unavoidable.
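A small single-threaded demonstration of that point (my example, not part of the original answer): handler A unsubscribes handler B during the raise, yet B still runs, because the list being invoked is a snapshot.

using System;

class Program
{
    static event EventHandler TheEvent;

    static void Main()
    {
        EventHandler b = (s, e) => Console.WriteLine("B still ran");
        TheEvent += (s, e) => TheEvent -= b;   // handler A removes B
        TheEvent += b;

        TheEvent(null, EventArgs.Empty);       // prints "B still ran"
        // A second raise would not call B, because the newly built list omits it.
    }
}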

Using an empty delegate certainly avoids the nullity check, but doesn't fix the race condition. It also doesn't guarantee that you always "see" the latest value of the variable.


Joe Duffy's "Concurrent Programming on Windows" covers the JIT optimization and memory model aspect of the question; see code.logos.com/blog/2008/11/events_and_threads_part_4.html
I've accepted this based on the comment about the "standard" advice being pre-C#2, and I'm not hearing anyone contradicting that. Unless it is really expensive to instantiate your event args, just put '= delegate {}' on the end of your event declaration and then call your events directly as if they are methods; never assign null to them. (The other stuff I brought in about ensuring a handler is not called after delisting, that was all irrelevant, and is impossible to ensure, even for single threaded code, e.g. if handler 1 asks handler 2 to delist, handler 2 will still get called next.)
The only problem case (as always) is structs, where you cannot ensure that they will be instantiated with anything other than null values in their members. But structs suck.
About empty delegate, see also this question: stackoverflow.com/questions/170907/….
@Tony: There's still fundamentally a race condition between something subscribing/unsubscribing and the delegate being executed. Your code (having just browsed through it briefly) reduces that race condition by allowing subscription/unsubscription to take effect while it's being raised, but I suspect that in most cases where the normal behaviour isn't good enough, this isn't either.
JP Alioto

I see a lot of people going toward the extension method of doing this ...

public static class Extensions   
{   
  public static void Raise<T>(this EventHandler<T> handler, 
    object sender, T args) where T : EventArgs   
  {   
    if (handler != null) handler(sender, args);   
  }   
}

That gives you nicer syntax to raise the event ...

MyEvent.Raise( this, new MyEventArgs() );

And also does away with the local copy since it is captured at method call time.


I like the syntax, but let's be clear... it doesn't solve the problem with a stale handler getting invoked even after it has been unregistered. This only solves the null dereference problem. While I like the syntax, I question whether it is really any better than: public event EventHandler MyEvent = delegate {}; ... MyEvent(this, new MyEventArgs()); This is also a very low-friction solution that I like for its simplicity.
@Simon I see different people saying different things on this. I've tested it and what I've done indicates to me that this does handle the null handler issue. Even if the original Sink unregisters from the Event after the handler != null check, the Event is still raised and no exception is thrown.
+1. I just wrote this method myself, starting thinking about thread-safety, did some research and stumbled upon this question.
How can this be called from VB.NET? Or does 'RaiseEvent' already cater for multi-threading scenarios?
Daniel Fortunov

"Why is explicit-null-check the 'standard pattern'?"

I suspect the reason for this might be that the null-check is more performant.

If you always subscribe an empty delegate to your events when they are created, there will be some overheads:

Cost of constructing the empty delegate.

Cost of constructing a delegate chain to contain it.

Cost of invoking the pointless delegate every single time the event is raised.

(Note that UI controls often have a large number of events, most of which are never subscribed to. Having to create a dummy subscriber to each event and then invoke it would likely be a significant performance hit.)
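For reference, the three raise patterns compared below presumably look something like this; the method names come from the benchmark output, but the bodies are my own sketch of what is being measured, not the author's code:

public event EventHandler NonThreadSafeEvent;
public event EventHandler ClassicNullCheckedEvent;
public event EventHandler PreInitializedEvent = delegate { };

void OnNonThreadSafeEvent()
{
    // Raise directly; throws NullReferenceException if there are no subscribers.
    NonThreadSafeEvent(this, EventArgs.Empty);
}

void OnClassicNullCheckedEvent()
{
    // The copy-then-null-check idiom from the question.
    EventHandler copy = ClassicNullCheckedEvent;
    if (copy != null) copy(this, EventArgs.Empty);
}

void OnPreInitializedEvent()
{
    // No check needed: the empty delegate keeps the invocation list non-null.
    PreInitializedEvent(this, EventArgs.Empty);
}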

I did some cursory performance testing to see the impact of the subscribe-empty-delegate approach, and here are my results:

Executing 50000000 iterations . . .
OnNonThreadSafeEvent took:      432ms
OnClassicNullCheckedEvent took: 490ms
OnPreInitializedEvent took:     614ms <--
Subscribing an empty delegate to each event . . .
Executing 50000000 iterations . . .
OnNonThreadSafeEvent took:      674ms
OnClassicNullCheckedEvent took: 674ms
OnPreInitializedEvent took:     2041ms <--
Subscribing another empty delegate to each event . . .
Executing 50000000 iterations . . .
OnNonThreadSafeEvent took:      2011ms
OnClassicNullCheckedEvent took: 2061ms
OnPreInitializedEvent took:     2246ms <--
Done

Note that for the case of zero or one subscribers (common for UI controls, where events are plentiful), the event pre-initialised with an empty delegate is notably slower (over 50 million iterations...)

For more information and source code, visit this blog post on .NET Event invocation thread safety that I published just the day before this question was asked (!)

(My test set-up may be flawed so feel free to download the source code and inspect it yourself. Any feedback is much appreciated.)


I think you make the key point in your blog post: there is no need to worry about the performance implications until it is a bottleneck. Why let the ugly way be the recommended way? If we wanted premature optimisation instead of clarity, we'd be using assembler - so my question remains, and I think the likely answer is that the advice simply predates anonymous delegates and it takes a long time for human culture to shift old advice, like in the famous "pot roast story".
And your figures prove the point very well: the overhead comes down to just two and a half NANOSECONDS(!!!) per event raised (pre-init vs. classic-null). This would be undetectable in almost all apps with real work to do, but given that the vast majority of event usage is in GUI frameworks, you'd have to compare this with the cost of repainting parts of a screen in Winforms, etc., so it becomes even more invisible in the deluge of real CPU work and waiting around for resources. You get +1 from me for the hard work, anyway. :)
@DanielEarwicker said it right, you have moved me to be a believer in the public event WrapperDoneHandler OnWrapperDone =(x,y)=> {}; model.
It might also be good to time a Delegate.Combine/Delegate.Remove pair in cases where the event has zero, one, or two subscribers; if one repeatedly adds and removes the same delegate instance, the cost differences among the cases will be especially pronounced since Combine has fast special-case behavior when one of the arguments is null (just return the other), and Remove is very fast when the two arguments are equal (just return null).
Chuck Savage

I truly enjoyed this read - not! Even though I need it to work with the C# feature called events!

Why not fix this in the compiler? I know there are MS people who read these posts, so please don't flame this!

1 - the Null issue) Why not make events be .Empty instead of null in the first place? How many lines of code would be saved on null checks, or on having to stick a = delegate {} onto the declaration? Let the compiler handle the Empty case, i.e. do nothing! If it matters to the creator of the event, they can check for .Empty and do whatever they care with it! Otherwise all the null checks / delegate adds are hacks around the problem!

Honestly I'm tired of having to do this with every event - aka boilerplate code!

public event Action<thisClass, string> Some;
protected virtual void DoSomeEvent(string someValue)
{
  var e = Some; // avoid race condition here! 
  if(null != e) // avoid null condition here! 
     e(this, someValue);
}

2 - the race condition issue) I read Eric's blog post, and I agree that the H (handler) should handle when it dereferences itself, but cannot the event be made immutable/thread safe? I.e., set a lock flag on its creation, so that whenever it is called, it locks all subscribing and un-subscribing to it while it's executing?

Conclusion,

Are not modern day languages supposed to solve problems like these for us?


Agreed, there should be better support for this in the compiler. Until then, I created a PostSharp aspect which does this in a post-compile step. :)
Having thread subscription/unsubscription requests block while waiting for arbitrary outside code to complete is far worse than having subscribers receive events after subscriptions are canceled, especially since the latter "problem" can be solved easily by simply having event handlers check a flag to see if they're still interested in receiving their event, but deadlocks resulting from the former design may be intractable.
@supercat. Imo, the "far worse" comment is pretty application dependent. Who wouldn't want very strict locking without extra flags when it's an option? A deadlock should only occur if the event handling thread is waiting on yet another thread (that is subscribing/unsubscribing) since locks are same-thread re-entrant and a subscribe/unsubscribe within the original event handler would not be blocked. If there is cross-thread waiting as part of an event handler, that would be a part of the design I'd prefer to rework. I'm coming from a server-side app angle that has predictable patterns.
@crokusek: The analysis required to prove that a system is free of deadlock is easy if there would be no cycles in a directed graph connecting each lock to all of the locks that might be needed while it's held [the lack of cycles proves the system deadlock-free]. Allowing arbitrary code to be invoked while a lock is held will create an edge in the "might-be-needed" graph from that lock to any lock which arbitrary code might acquire (not quite every lock in the system, but not far from it). The consequent existence of cycles wouldn't imply that deadlock would occur, but...
...would greatly increase the level of analysis necessary to prove that it couldn't.
Izaak van Dongen

With C# 6 and above, the code can be simplified using the new ?. operator, as in:

TheEvent?.Invoke(this, EventArgs.Empty);

Here is the MSDN documentation.


alyx

According to Jeffrey Richter in the book CLR via C#, the correct method is:

// Copy a reference to the delegate field now into a temporary field for thread safety
EventHandler<EventArgs> temp =
    Interlocked.CompareExchange(ref NewMail, null, null);
// If any methods registered interest with our event, notify them
if (temp != null) temp(this, e);

Because it forces a reference copy. For more information, see his Event section in the book.
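For what it's worth, on .NET 4.5 and later the same "force a fresh read of the field" effect can be had with Volatile.Read. A minimal sketch, assuming the same NewMail event field as in the book's example:

// Requires: using System.Threading;
// Reads the delegate field with volatile semantics, without the interlocked round-trip.
EventHandler<EventArgs> temp = Volatile.Read(ref NewMail);
if (temp != null) temp(this, e);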


Maybe I'm missing something, but Interlocked.CompareExchange throws NullReferenceException if its first argument is null, which is exactly what we want to avoid. msdn.microsoft.com/en-us/library/bb297966.aspx
Interlocked.CompareExchange would fail if it were somehow passed a null ref, but that's not the same thing as being passed a ref to a storage location (e.g. NewMail) which exists and which initially holds a null reference.
Ash

I've been using this design pattern to ensure that event handlers aren't executed after they're unsubscribed. It's working pretty well so far, although I haven't tried any performance profiling.

private readonly object eventMutex = new object();

private event EventHandler _onEvent = null;

public event EventHandler OnEvent
{
  add
  {
    lock(eventMutex)
    {
      _onEvent += value;
    }
  }

  remove
  {
    lock(eventMutex)
    {
      _onEvent -= value;
    }
  }

}

private void HandleEvent(EventArgs args)
{
  lock(eventMutex)
  {
    if (_onEvent != null)
      _onEvent(this, args);
  }
}

I'm mostly working with Mono for Android these days, and Android doesn't seem to like it when you try to update a View after its Activity has been sent to the background.


Actually, I see someone else is using a very similar pattern here: stackoverflow.com/questions/3668953/…
dss539

This practice is not about enforcing a certain order of operations. It's actually about avoiding a null reference exception.

The reasoning behind people caring about the null reference exception and not the race condition would require some deep psychological research. I think it has something to do with the fact that fixing the null reference problem is much easier. Once that is fixed, they hang a big "Mission Accomplished" banner on their code and unzip their flight suit.

Note: fixing the race condition probably involves using a synchronization flag to track whether the handler should run
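A minimal sketch of that flag idea (my illustration, not part of the original answer): the subscriber tolerates being invoked after unsubscribing by checking a flag before doing any work.

using System;

class Publisher
{
    public event EventHandler SomethingHappened = delegate { };
    public void Raise() { SomethingHappened(this, EventArgs.Empty); }
}

class Subscriber
{
    private volatile bool unsubscribed;
    private readonly Publisher publisher;

    public Subscriber(Publisher p)
    {
        publisher = p;
        publisher.SomethingHappened += OnSomethingHappened;
    }

    public void Unsubscribe()
    {
        unsubscribed = true;   // takes effect even if a raise is already in flight
        publisher.SomethingHappened -= OnSomethingHappened;
    }

    private void OnSomethingHappened(object sender, EventArgs e)
    {
        if (unsubscribed) return;   // robust against "late" invocations
        // ... real work here ...
    }
}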


I'm not asking for a solution to that problem. I'm wondering why there is widespread advice to spray extra code-mess around event firing, when it only avoids a null exception when there is a hard-to-detect race condition, which will still exist.
Well that was my point. They don't care about the race condition. They only care about the null reference exception. I will edit that into my answer.
And my point is: why would it make sense to care about the null reference exception and yet not care about the race condition?
A properly-written event handler should be prepared to handle the fact that any particular request to raise an event whose processing might overlap a request to add or remove it, may or may not end up raising the event which was being added or removed. The reason programmers don't care about the race condition is that in properly-written code it won't matter who wins.
@dss539: While one could design an event framework which would block unsubscription requests until after pending event invocations had finished, such a design would make it impossible for any event (even something like an Unload event) to safely cancel an object's subscriptions to other events. Nasty. Better is to simply say that event unsubscription requests will cause events to be unsubscribed "eventually", and that event subscribers should check when they are invoked whether there's anything useful for them to do.
Levi

So I'm a little late to the party here. :)

As for the use of null rather than the null object pattern to represent events with no subscribers, consider this scenario. You need to invoke an event, but constructing the object (EventArgs) is non-trivial, and in the common case your event has no subscribers. It would be beneficial to you if you could optimize your code to check to see if you had any subscribers at all before you committed processing effort to constructing the arguments and invoking the event.

With this in mind, a solution is to say "well, zero subscribers is represented by null." Then simply perform the null check before performing your expensive operation. I suppose another way of doing this would have been to have a Count property on the Delegate type, so you'd only perform the expensive operation if myDelegate.Count > 0. Using a Count property is a somewhat nice pattern that solves the original problem of allowing optimization, and it also has the nice property of being able to be invoked without causing a NullReferenceException.
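As a concrete illustration of the optimization being described, a sketch with hypothetical names (not Levi's code):

using System;

// Hypothetical argument type whose construction is assumed to be expensive.
class ReportGeneratedEventArgs : EventArgs
{
    public string Report { get; private set; }
    public ReportGeneratedEventArgs(string report) { Report = report; }
}

class ReportSource
{
    public event EventHandler<ReportGeneratedEventArgs> ReportGenerated;

    protected virtual void OnReportGenerated()
    {
        EventHandler<ReportGeneratedEventArgs> handler = ReportGenerated;
        if (handler == null)
            return;   // zero subscribers (represented by null): skip the expensive work

        string report = BuildFullReport();   // only pay this cost when someone is listening
        handler(this, new ReportGeneratedEventArgs(report));
    }

    private string BuildFullReport() { return "..."; }   // stand-in for costly construction
}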

Keep in mind, though, that since delegates are reference types, they are allowed to be null. Perhaps there was simply no good way of hiding this fact under the covers and supporting only the null object pattern for events, so the alternative may have been forcing developers to check both for null and for zero subscribers. That would be even uglier than the current situation.

Note: This is pure speculation. I'm not involved with the .NET languages or CLR.


I assume you mean "the use of empty delegate rather than..." You can already do what you suggest, with an event initialized to the empty delegate. The test (MyEvent.GetInvocationList().Length == 1) will be true if the initial empty delegate is the only thing on the list. There still would be no need to make a copy first. Though I think the case you describe would be extremely rare anyway.
I think we're conflating the ideas of delegates and events here. If I have an event Foo on my class, then when external users call MyType.Foo += / -=, they're actually calling add_Foo() and remove_Foo() methods. However, when I reference Foo from within the class where it's defined, I'm actually referencing the underlying delegate directly, not the add_Foo() and remove_Foo() methods. And with the existence of types like EventHandlerList, there's nothing mandating that the delegate and the event even be in the same place. This is what I meant by the "Keep in mind" paragraph in my response.
(cont.) I admit that this is a confusing design, but the alternative may have been worse. Since in the end all you have is a delegate - you might reference the underlying delegate directly, you might get it from a collection, you might instantiate it on the fly - it may have been technically infeasible to support anything other than the "check for null" pattern.
As we're talking about firing the event, I can't see why the add/remove accessors are important here.
@Levi: I really dislike the way C# handles events. If I had my druthers, the delegate would have been given a different name from the event. From outside a class, the only permissible operations on an event name would be += and -=. Within the class, the permissible operations would also include invocation (with built-in null check), testing against null, or setting to null. For anything else, one would have to use the delegate whose name would be the event name with some particular prefix or suffix.
Jason Coyne

For single-threaded applications, you are correct, this is not an issue.

However, if you are making a component that exposes events, there is no guarantee that a consumer of your component is not going to use it from multiple threads, in which case you need to prepare for the worst.

Using the empty delegate does solve the problem, but also causes a performance hit on every call to the event, and could possibly have GC implications.

You are right that the consumer tried to unsubscribe in order for this to happen, but if they made it past the temp copy, then consider the message already in transit.

If you don't use the temporary variable, and don't use the empty delegate, and someone unsubscribes, you get a null reference exception, which is fatal, so I think the cost is worth it.


Greg D

I've never really considered this to be much of an issue because I generally only protect against this sort of potential threading badness in static methods (etc) on my reusable components, and I don't make static events.

Am I doing it wrong?


If you allocate an instance of a class with mutable state (fields that change their values) and then let several threads access the same instance simultaneously, without using locking to protect those fields from being modified by two threads at the same time, you're probably doing it wrong. If all your threads have their own separate instances (sharing nothing) or all your objects are immutable (once allocated, their fields' values never change) then you're probably okay.
My general approach is to leave synchronization up to the caller except in static methods. If I'm the caller, then I'll synchronize at that higher level. (with the exception of an object whose sole purpose is handling synchronized access, of course. :) )
@GregD that depends on how complex the method is and what data it uses. If it affects internal members, and you decide to run in a threaded/Tasked state, you are in for a lot of hurt
Triynko

Wire all your events at construction and leave them alone. The design of the Delegate class cannot possibly handle any other usage correctly, as I will explain in the final paragraph of this post.

First of all, there's no point in trying to intercept an event notification when your event handlers must already make a synchronized decision about whether/how to respond to the notification.

Anything that may be notified, should be notified. If your event handlers are properly handling the notifications (i.e. they have access to an authoritative application state and respond only when appropriate), then it will be fine to notify them at any time and trust they will respond properly.

The only time a handler shouldn't be notified that an event has occurred, is if the event in fact hasn't occurred! So if you don't want a handler to be notified, stop generating the events (i.e. disable the control or whatever is responsible for detecting and bringing the event into existence in the first place).

Honestly, I think the Delegate class is unsalvageable. The merger/transition to a MulticastDelegate was a huge mistake, because it effectively changed the (useful) definition of an event from something that happens at a single instant in time, to something that happens over a timespan. Such a change requires a synchronization mechanism that can logically collapse it back into a single instant, but the MulticastDelegate lacks any such mechanism. Synchronization should encompass the entire timespan or instant the event takes place, so that once an application makes the synchronized decision to begin handling an event, it finishes handling it completely (transactionally). With the black box that is the MulticastDelegate/Delegate hybrid class, this is near impossible, so adhere to using a single-subscriber and/or implement your own kind of MulticastDelegate that has a synchronization handle that can be taken out while the handler chain is being used/modified. I'm recommending this, because the alternative would be to implement synchronization/transactional-integrity redundantly in all your handlers, which would be ridiculously/unnecessarily complex.


[1] There is no useful event handler that happens at "a single instant in time". All operations have a timespan. Any single handler can have a non-trivial sequence of steps to perform. Supporting a list of handlers changes nothing.
[2] Holding a lock while an event is fired is total madness. It leads inevitably to deadlock. The source takes out lock A, fires the event, the sink takes out lock B, now two locks are held. What if some operation in another thread results in the locks being taken out in the reverse order? How can such deadly combinations be ruled out when responsibility for locking is divided between separately designed/tested components (which is the whole point of events)?
[3] None of these issues in any way reduce the all-pervasive usefulness of normal multicast delegates/events in single-threaded composition of components, especially in GUI frameworks. This use case covers the vast majority of uses of events. Using events in a free-threaded way is of questionable value; this does not in any way invalidate their design or their obvious usefulness in the contexts where they make sense.
[4] Threads + synchronous events is essentially a red herring. Queued asynchronous communication is the way to go.
[1] I wasn't referring to measured time...I was talking about atomic operations, which logically occur instantaneously... and by that I mean nothing else involving the same resources they're using can change while the event is happening because it's serialized with a lock.
ollo

Please take a look here: http://www.danielfortunov.com/software/%24daniel_fortunovs_adventures_in_software_development/2009/04/23/net_event_invocation_thread_safety This is the correct solution and should always be used instead of all other workarounds.

“You can ensure that the internal invocation list always has at least one member by initializing it with a do-nothing anonymous method. Because no external party can have a reference to the anonymous method, no external party can remove the method, so the delegate will never be null” — Programming .NET Components, 2nd Edition, by Juval Löwy

public static event EventHandler<EventArgs> PreInitializedEvent = delegate { };  

public static void OnPreInitializedEvent(EventArgs e)  
{  
    // No check required - event will never be null because  
    // we have subscribed an empty anonymous delegate which  
    // can never be unsubscribed. (But causes some overhead.)  
    PreInitializedEvent(null, e);  
}  

Community

I don't believe the question is constrained to the C# "event" type. Removing that restriction, why not re-invent the wheel a bit and do something along these lines?

Raise event thread safely - best practice

Ability to sub/unsubscribe from any thread while within a raise (race condition removed)

Operator overloads for += and -= at the class level.

Generic caller-defined delegate


Tony

Thanks for a useful discussion. I was working on this problem recently and made the following class, which is a bit slower but allows you to avoid calls to disposed objects.

The main point here is that the invocation list can be modified even while the event is being raised.

/// <summary>
/// Thread safe event invoker
/// </summary>
public sealed class ThreadSafeEventInvoker
{
    /// <summary>
    /// Dictionary of delegates
    /// </summary>
    readonly ConcurrentDictionary<Delegate, DelegateHolder> delegates = new ConcurrentDictionary<Delegate, DelegateHolder>();

    /// <summary>
    /// List of delegates to be called; we need it because it is relatively easy to implement a loop with list
    /// modification inside of it
    /// </summary>
    readonly LinkedList<DelegateHolder> delegatesList = new LinkedList<DelegateHolder>();

    /// <summary>
    /// locker for delegates list
    /// </summary>
    private readonly ReaderWriterLockSlim listLocker = new ReaderWriterLockSlim();

    /// <summary>
    /// Add delegate to list
    /// </summary>
    /// <param name="value"></param>
    public void Add(Delegate value)
    {
        var holder = new DelegateHolder(value);
        if (!delegates.TryAdd(value, holder)) return;

        listLocker.EnterWriteLock();
        delegatesList.AddLast(holder);
        listLocker.ExitWriteLock();
    }

    /// <summary>
    /// Remove delegate from list
    /// </summary>
    /// <param name="value"></param>
    public void Remove(Delegate value)
    {
        DelegateHolder holder;
        if (!delegates.TryRemove(value, out holder)) return;

        Monitor.Enter(holder);
        holder.IsDeleted = true;
        Monitor.Exit(holder);
    }

    /// <summary>
    /// Raise an event
    /// </summary>
    /// <param name="args"></param>
    public void Raise(params object[] args)
    {
        DelegateHolder holder = null;

        try
        {
            // get root element
            listLocker.EnterReadLock();
            var cursor = delegatesList.First;
            listLocker.ExitReadLock();

            while (cursor != null)
            {
                // get its value and a next node
                listLocker.EnterReadLock();
                holder = cursor.Value;
                var next = cursor.Next;
                listLocker.ExitReadLock();

                // lock holder and invoke if it is not removed
                Monitor.Enter(holder);
                if (!holder.IsDeleted)
                    holder.Action.DynamicInvoke(args);
                else if (!holder.IsDeletedFromList)
                {
                    listLocker.EnterWriteLock();
                    delegatesList.Remove(cursor);
                    holder.IsDeletedFromList = true;
                    listLocker.ExitWriteLock();
                }
                Monitor.Exit(holder);

                cursor = next;
            }
        }
        catch
        {
            // clean up
            if (listLocker.IsReadLockHeld)
                listLocker.ExitReadLock();
            if (listLocker.IsWriteLockHeld)
                listLocker.ExitWriteLock();
            if (holder != null && Monitor.IsEntered(holder))
                Monitor.Exit(holder);

            throw;
        }
    }

    /// <summary>
    /// helper class
    /// </summary>
    class DelegateHolder
    {
        /// <summary>
        /// delegate to call
        /// </summary>
        public Delegate Action { get; private set; }

        /// <summary>
        /// flag shows if this delegate was removed from the list of calls
        /// </summary>
        public bool IsDeleted { get; set; }

        /// <summary>
        /// flag shows if this instance was removed from all lists
        /// </summary>
        public bool IsDeletedFromList { get; set; }

        /// <summary>
        /// Constructor
        /// </summary>
        /// <param name="d"></param>
        public DelegateHolder(Delegate d)
        {
            Action = d;
        }
    }
}

And the usage is:

    private readonly ThreadSafeEventInvoker someEventWrapper = new ThreadSafeEventInvoker();
    public event Action SomeEvent
    {
        add { someEventWrapper.Add(value); }
        remove { someEventWrapper.Remove(value); }
    }

    public void RaiseSomeEvent()
    {
        someEventWrapper.Raise();
    }

Test

I tested it in the following manner. I have a thread which creates and destroys objects like this:

var objects = Enumerable.Range(0, 1000).Select(x => new Bar(foo)).ToList();
Thread.Sleep(10);
objects.ForEach(x => x.Dispose());

In a Bar (a listener object) constructor I subscribe to SomeEvent (which is implemented as shown above) and unsubscribe in Dispose:

    public Bar(Foo foo)
    {
        this.foo = foo;
        foo.SomeEvent += Handler;
    }

    public void Handler()
    {
        if (disposed)
            Console.WriteLine("Handler is called after object was disposed!");
    }

    public void Dispose()
    {
        foo.SomeEvent -= Handler;
        disposed = true;
    }

I also have a couple of threads which raise the event in a loop.

All these actions are performed simultaneously: many listeners are created and destroyed while the event is being fired at the same time.

If there were a race condition I would see a message in the console, but it remains empty. If I use plain CLR events instead, the console fills with warning messages. So I conclude that it is possible to implement thread-safe events in C#.

What do you think?


Looks good enough to me. Although I think it might be possible (in theory) for disposed = true to happen before foo.SomeEvent -= Handler in your test application, producing a false positive. But apart from that, there's a few things you might want to change. You really want to use try ... finally for the locks - this will help you make this not just thread-safe, but also abort-safe. Not to mention that you could then get rid of that silly try catch. And you're not checking the delegate passed in Add/Remove - it could be null (you should throw straight away in Add/Remove).
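To make that concrete, here is a minimal sketch of the try/finally shape the comment suggests, applied to one of the read-lock sections in Raise (my illustration, reusing the listLocker field from the answer above):

DelegateHolder holder;
LinkedListNode<DelegateHolder> next;

listLocker.EnterReadLock();
try
{
    holder = cursor.Value;
    next = cursor.Next;
}
finally
{
    // Released even if the body throws or the thread is aborted.
    listLocker.ExitReadLock();
}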