
Should I have a separate assembly for interfaces?

We currently have quite a few classes in a project, and each of those classes implements an interface, mostly for DI reasons.

Now, my personal feeling is that these interfaces should be put into a separate namespace within the same assembly (so we have a MyCompany.CoolApp.DataAccess assembly, and within that there's an Interfaces namespace giving MyCompany.CoolApp.DataAccess.Interfaces).

However, somebody has suggested that these interfaces should actually be in their own assembly. And my question is: are they right? I can see that there are some benefits (e.g. other projects will only need to consume the interface assembly), but at the end of the day all of these assemblies are going to need to be loaded anyway. It also seems to me that there could be a slightly more complex deployment issue, as Visual Studio will not automatically pull the implementing assembly into the target's bin folder.

Are there best practice guidelines for this?

EDIT:

To make my point a little clearer: We already separate UI, DataAccess, DataModel and other things into different assemblies. We can also currently swap out our implementation with a different implementation without any pain, as we map the implementing class to the interface using Unity (IOC framework). I should point out that we never write two implementations of the same interface, except for reasons of polymorphism and creating mocks for unit testing. So we don't currently "swap out" an implementation except in unit tests.

The only downside I see of having the interface in the same assembly as the implementation is that the whole assembly, including the unused implementation, will still be loaded.

I can, however, see the point that having them in a different assembly means that developers won't accidentally "new" up the implementing class rather than having it created via the IoC container.

One point I haven't understood from the answers is the deployment issue. If I am only depending on the interface assemblies, I'll have something like the following structure:

MyCompany.MyApplication.WebUI
    References:
        MyCompany.MyApplication.Controllers.Interfaces
        MyCompany.MyApplication.Bindings.Interfaces
        etc...

When I build this, the assemblies that are automatically put into the bin folder are just those interface assemblies. However, my type mappings in Unity map the different interfaces to their actual implementations. How do the assemblies that contain my implementations end up in the bin folder?
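To make the scenario concrete, here is a minimal sketch of the kind of Unity wiring I mean; IHomeController and HomeController are hypothetical names, not our real types:

    // Type names here are hypothetical. Registering the concrete type like this
    // requires the composition-root project (the WebUI) to reference the implementation
    // assembly at compile time, which is also what copies it into the bin folder.
    using Microsoft.Practices.Unity;

    public interface IHomeController { }               // lives in ...Controllers.Interfaces
    public class HomeController : IHomeController { }  // lives in a hypothetical ...Controllers assembly

    public static class Bootstrapper
    {
        public static IUnityContainer Configure()
        {
            var container = new UnityContainer();
            container.RegisterType<IHomeController, HomeController>();
            return container;
        }
    }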


Adam Houldsworth

The usual (expected?) practice is to place them in their own assembly, because then a given project consuming those interfaces doesn't require a hard reference to the implementation of those interfaces. In theory it means you can swap out the implementation with little or no pain.

That said, I can't remember when I last did this; to @David_001's point, this isn't necessarily "usual". We tend to have our interfaces in-line with an implementation, our most common use for the interfaces being testing.

I think there are different stances to take depending on what you are producing. I tend to produce LOB applications, which need to interoperate internally with other applications and teams, so there are some stakeholders to the public API of any given app. However, this is not as extreme as producing a library or framework for many unknown clients, where the public API suddenly becomes more important.

In a deployment scenario, if you changed the implementation you could in theory just deploy that single DLL - thus leaving, say, the UI and interface DLLs alone. If you compiled your interfaces and implementation together, you might then need to redeploy the UI DLL...

Another benefit is a clean segregation of your code - having an interfaces (or shared library) DLL explicitly states to anyone on the development team where to place new types, etc. I'm no longer counting this as a benefit, as we haven't had any issues not doing it this way; the public contract is still easily found regardless of where the interfaces are placed.

I don't know if there are best practices for or against; the important thing, arguably, is that in code you are always consuming the interfaces and never letting any code leak into using the implementation.
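To make that concrete, here is a minimal sketch of what "consuming only the interface" looks like; the repository and order types are purely illustrative:

    // Hypothetical types, for illustration only.
    public class Order { }

    public interface IOrderRepository
    {
        void Save(Order order);
    }

    // The consumer depends only on the interface, never on a concrete repository,
    // so it needs no reference to the assembly that holds the implementation.
    public class OrderService
    {
        private readonly IOrderRepository _orders;

        public OrderService(IOrderRepository orders) // supplied by the container
        {
            _orders = orders;
        }

        public void Submit(Order order)
        {
            _orders.Save(order); // talks to the abstraction only
        }
    }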


Would you say it's a good idea to have an interface assembly for every implementation assembly? Doesn't maintaining it get very cumbersome if you have many layers since you're essentially doubling the assembly count?
@nphx A more pragmatic approach is what I take. I only use interfaces to abstract an API over something. It happens a lot for testing so I can decompose dependencies and use something like mocking or stubbing instead. Not everything gets an interface. The important concept is a "public contract" formed by a type exposing accessible members to other logical tiers.
If I am a library consumer and the library owner publishes an implementation patch, I don't quite see the benefit of updating CodeImp.dll (keeping ICode.dll) over just updating Code.dll. "A given project consuming those interfaces doesn't require a hard reference to the implementation of those interfaces": if you provide me a library, how can I use your code without referencing your implementation? There must be at least a factory to new up your implementation so I can use your business logic. If I want to "consume" your library, I need a factory that instantiates your implementation.
Noctis

The answers so far seem to say that putting the interfaces in their own assembly is the "usual" practice. I don't agree with putting unrelated interfaces into one "shared" common assembly, so this would imply I would need one interface assembly for each "implementation" assembly.

However, thinking about it further, I can't think of many real-world examples of this practice (e.g. do log4net or NUnit provide public interface assemblies so that consumers can then decide on different implementations? If so, what other implementation of NUnit could I use?). After spending ages searching Google, I've found a number of resources.

Does having separate assemblies imply loose coupling? The following suggest no:
http://www.theserverside.net/tt/articles/showarticle.tss?id=ControllingDependencies
http://codebetter.com/blogs/jeremy.miller/archive/2008/09/30/separate-assemblies-loose-coupling.aspx

The general consensus I could find from googling was that fewer assemblies are better, unless there's a really good reason to add new ones. See also this: http://www.cauldwell.net/patrick/blog/ThisIBelieveTheDeveloperEdition.aspx As I am not producing public APIs, and I'm already putting interfaces into their own namespaces, it makes sense not to blindly create new assemblies. The benefits of this approach seem to outweigh the potential benefits of adding more assemblies (where I'm unlikely to ever actually reap those benefits).


I hate to accept my own answer, but as the other answers (currently) don't address my specific scenario, or give references for further info...
I don't think you're representing the other answers entirely fairly when proving your point, given that you have accepted your own answer. BTW, log4net and NUnit are not applications; they are libraries. I don't think that's too subtle a point.
I know this is old, but I'd like to point out that System.Data follows a similar pattern: chiefly, it defines the interfaces for talking to databases, like IDbConnection, IDbCommand, etc., whereas the database-specific assemblies define the concrete implementations.
They should absolutely be in their own assembly, and here's the proof. DLLs have a logical dependency graph that cannot contain cycles. As a result, you will have lower-level DLLs, such as those for DTOs, which can be referenced by assemblies higher up, such as business logic assemblies. If you were to put an ISystemLogger interface in a BLL layer, you would not be able to use it to log system errors (e.g. in a Dispose method, which should not throw exceptions) via dependency injection in a lower layer, unless you reference the business logic DLL, which might be impossible.
Therefore, in order to take advantage of dependency injection, the interfaces need to be accessible on their own, or at least at the lowest point in the dependency graph. No such limitation applies to the actual implementation; therefore, there's a very strong case for the interfaces to be in an assembly separate from their implementation. Furthermore, having them in a separate assembly does make swapping out implementations easier, because you only have to swap out the implementation DLL.
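A rough sketch of that layering, with illustrative assembly names and members:

    // Interfaces.dll: lowest layer, no dependencies.
    public interface ISystemLogger
    {
        void LogError(string message);
    }

    // Dto.dll (low layer): references only Interfaces.dll, yet can still log via DI.
    public class UnitOfWork : System.IDisposable
    {
        private readonly ISystemLogger _logger;

        public UnitOfWork(ISystemLogger logger)
        {
            _logger = logger; // injected; no reference to the business logic DLL needed
        }

        public void Dispose()
        {
            // Dispose must not throw, so failures are only logged.
            _logger.LogError("Rollback failed during dispose");
        }
    }

    // BusinessLogic.dll (higher layer): provides the concrete implementation.
    public class FileSystemLogger : ISystemLogger
    {
        public void LogError(string message)
        {
            System.IO.File.AppendAllText("errors.log", message + System.Environment.NewLine);
        }
    }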
Tim Lloyd

The pattern I follow for what I call shared types (and I too use DI) is to have a separate assembly which contains the following for application-level concepts (rather than common concepts, which go into common assemblies):

• Shared interfaces
• DTOs
• Exceptions

In this way, dependencies between clients and core application libraries can be managed, as clients cannot take a dependency on a concrete implementation, either directly or as an unintended consequence of adding a direct assembly reference and then accessing any old public type.
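For illustration, such a shared-types assembly might contain nothing more than types like these (the names are made up):

    // MyApp.SharedTypes.dll: the only assembly client modules reference directly.
    public interface ICustomerService               // shared interface
    {
        CustomerDto GetCustomer(int id);
    }

    public class CustomerDto                        // DTO: plain data, no behaviour
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class CustomerNotFoundException : System.Exception   // shared exception
    {
        public CustomerNotFoundException(int id)
            : base("Customer " + id + " was not found") { }
    }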

I then have a runtime type design where I set up my DI container at application start, or the start of a suite of unit tests. In this way there is a clear separation between implementations and how I can vary them via DI. My client modules never have a direct reference to the actual core libraries, only the "SharedTypes" library.

The key to my design is having a common runtime concept for clients (be it a WPF application or NUnit) that sets up the required dependencies, i.e. concrete implementations or some sort of mocks/stubs.

If the above shared types are not factored out, and clients instead add a reference to the assembly with the concrete implementation, then it is very easy for clients to use the concrete implementations rather than the interfaces, in both obvious and non-obvious ways. It's very easy to end up with over-coupling over time, which is near impossible to sort out without a great deal of effort and, more importantly, time.

Update

To clarify, here is an example of how the dependencies end up in the target application.

In my situation I have a WPF client application. I use Prism and Unity (for DI) where importantly, Prism is used for application composition.

With Prism your application assembly is just a Shell, actual implementations of functionality reside in "Module" assemblies (you can have a separate assembly for each conceptual Module, but this is not a requirement, I have one Modules assembly ATM). It is the responsibility of the shell to load the Modules - the composition of these Modules is the application. The Modules use the SharedTypes assembly, but the shell references the concrete assemblies. The runtime type design I discussed is responsible for initializing dependencies, and this is done in the Shell.

In this way module assemblies which have all the functionality do not depend on concrete implementations. They are loaded by the shell which sorts the dependencies out. The shell references the concrete assemblies, and this is how they get in the bin directory.

Dependency Sketch:

Shell.dll <-- Application
  --ModuleA.dll
  --ModuleB.dll
  --SharedTypes.dll
  --Core.dll
  --Common.dll + Unity.dll <-- RuntimeDI

ModuleA.dll
  --SharedTypes.dll
  --Common.dll + Unity.dll <-- RuntimeDI

ModuleB.dll
  --SharedTypes.dll
  --Common.dll + Unity.dll <-- RuntimeDI

SharedTypes.dll
  --...

This model implies a single assembly with many interfaces defined in it, which may represent functionally different things (from my example, you'll have some interfaces which relate to Controllers, some interfaces which relate to DataAccess, etc.). Within that "shared" assembly, some interfaces may rely on other interfaces. If you're creating a new application and want to reuse your data access (for example), you'll need to include a reference to that shared assembly. How do you then know what other assemblies and types you need to map through Unity?
@David_001 The example is not meant to cover every conceivable permutation of this pattern; that would be impossible. It can be adapted to circumstances. The key thing is it separates modules from their dependencies on concrete implementations.
This simply doesn't work. Having a lib for all Exception types is as arbitrary as having a library for all classes that begin with E. Imagine you actually did this. This library would soon need to reference a whole bunch of unrelated packages, because types beginning with E could span all manner of uses, involving all manner of other types. Okay, so let's narrow it and say it's for classes beginning with "Excepti"; is that okay now? It's starting to make nonsense :) The best advice is to never create a lib project until you have a real need to reuse a type.
@LukePuplett I am not suggesting a separate assembly for Exception types; I'm suggesting a separate assembly for shared types, of which one example is Exceptions, and in particular Exception types that need to be shared.
@TimLloyd I see what you mean. My bad. I think because these concepts are strongly called out under bullets 1, 2 and 3 I thought you were placing each in its own library, which many people do :-/ (it won't let me undo my downvote, I'm sorry about that).
Luke Puplett

I agree with the ticked answer. Good for you, David. In fact, I was relieved to see the answer; I thought I was going mad.

I see this interesting "pens in a pen pot" pattern in enterprise C# freelance jobs all the time, where people follow the convention of the crowd, the team must conform, and not conforming is seen as making trouble.

The other craziness is the one-namespace-per-assembly nonsense. So you get a SomeBank.SomeApp.Interfaces namespace and everything is in it.

For me, it means types are scattered across namespaces, and assemblies containing a whole slew of stuff I don't care about have to be referenced all over the place.

As for interfaces, I don't even use interfaces in my private apps; DI works on types: concrete classes with virtuals, base classes, or interfaces. I choose accordingly and place types in DLLs according to what they do.

I have never had a problem with DI or swapping logic later.
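As a rough sketch of what I mean, using Unity and purely hypothetical types: the container can build a concrete class directly, and tests can still substitute behaviour through the virtual member.

    using Microsoft.Practices.Unity;

    // A concrete type with a virtual member; no interface required.
    public class PriceCalculator
    {
        public virtual decimal Calculate(decimal amount) { return amount * 1.2m; }
    }

    public class CheckoutService
    {
        private readonly PriceCalculator _calculator;

        public CheckoutService(PriceCalculator calculator) { _calculator = calculator; }

        public decimal Total(decimal amount) { return _calculator.Calculate(amount); }
    }

    public static class Composition
    {
        public static CheckoutService Build()
        {
            // Unity resolves unregistered concrete types via constructor injection;
            // a test can register a subclass (or a mock overriding the virtual) instead.
            var container = new UnityContainer();
            return container.Resolve<CheckoutService>();
        }
    }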

• .NET assemblies are a unit of security, API scope and deployment, and are independent of namespaces.

• If two assemblies depend on each other, then they cannot be deployed and versioned separately and should be merged.

• Having many DLLs often means making lots of stuff public such that it’s hard to tell the actual public API from the type members that had to be made public because they were arbitrarily put in their own assembly.

• Does code outside of my DLL ever need to use my type?

• Start conservative; I can usually easily move a type out a layer, it’s a bit harder the other way.

• Could I neatly package up my feature area or framework into a NuGet package such that it is completely optional and versionable, like any other package?

• Do my types align to the delivery of a feature and could they be placed in a feature namespace?

• Many real libraries and frameworks are branded, making them easy to discuss, and they don't burn up namespace names that imply a particular use or are ambiguous. Could I 'brandify' the components of my app using 'code names' like Steelcore, instead of generic, clichéd and confusing terms, errm, 'Services'?

Edit

This is one of the misunderstood things I see in development today. It's so bad.

You have an API, so put all its types within the single API project. Move them out only when you have a need to share/reuse them. When you move them out, move them straight to a NuGet package with a clear name that carries the intent and focus of the package. If you're struggling for a name and considering "Common", it's probably because you're creating a dumping ground.

You should factor your NuGet packages into a family of related packages. Your "core" package should have minimal dependencies on other packages. The types inside are related by usage and depend on each other.

You then create a new package for the more specialised types and subtypes that require additional sets of dependencies; more clearly: you split a library by its external dependencies, not by the kind of type or whether it's an interface or an exception.

https://i.stack.imgur.com/tAUyP.jpg

So you might stick all your types in a single big library, but some more specialised types (the coloured spots) depend on certain external libs, so now your library needs to pull in all those dependencies. That's unnecessary; you should instead break out those types into further specialised libraries that do take the dependencies they need.

Types in package A and B can belong to the same namespace. Referencing A brings in one set of types and then optionally referencing B supplements the namespace with a bunch more.

That's it.

Luke


William Baker Morrison

Recently, I have come to always advocate separating interfaces from implementation.

Even if someone on the team says "99% of the implementation of these interfaces will never change", never say never.

Splitting libraries recently saved us a lot of refactoring when moving a large project from EntityFramework to EntityFrameworkCore. We just changed the implementation in the 10 projects that used EntityFramework and went to drink coffee.


Migration from EF to EF Core is a poor example, as these libraries are quite compatible and similar to each other.
Talha Yousuf

I'm looking at System.Data.dll (4.0) in Object Browser and I can see that it is self-contained, holding not just interfaces but all the instrumental classes like DataSet, DataTable, DataRow, DataColumn, etc. Moreover, skimming over the list of namespaces it contains (System.Data, System.Data.Common, System.Configuration and System.Xml) suggests, first, having interfaces contained in their own assemblies with all relevant and required code held together, and second, more importantly, re-using the same namespaces across the overall application (or framework) to segregate classes virtually as well.


Frank B

I know this thread is really, really old, but I have a thought on this that I want to put out there.

I get the assembly for "reuse". But can't we go a step further for our particular SOLUTION?

If we have an assembly in our SOLUTION that has the appropriate interfaces in it, we can build that assembly and use it wherever it makes sense including reuse.

But, for other projects within OUR SAME solution, why not simply add the interface file by LINK to the other projects that need the interface defined?

By doing that, deployment for your particular project will be simplified (you don't need to deploy the interface assembly). On the other hand, if you want to reuse the interface in a different solution of yours, you have the choice of copying the interface file or simply referencing the interface assembly.

Seems like the best of both worlds. We have the choice of how to get the interface, and it is still version controlled.

Frank


I don't think I understand your answer. Do you suggest 'copying' the interface file instead of referencing an assembly? Adding it as a link also effectively results in a copy. That way it is not the same interface; it's another interface with the same name and the same implementation.