
Understanding when to use stateful services and when to rely on external persistence in Azure Service Fabric

I'm spending my evenings evaluating Azure Service Fabric as a replacement for our current WebApps/CloudServices stack, and I feel a little unsure about how to decide when services/actors with state should be stateful actors, and when they should be stateless actors with externally persisted state (Azure SQL, Azure Storage, DocumentDB). I know this is a fairly new product (to the general public at least), so there probably aren't many established best practices for this yet, but I've read through most of the documentation made available by Microsoft without finding a definite answer.

The current problem domain I'm approaching is our event store; parts of our applications are based on event sourcing and CQRS, and I'm evaluating how to move this event store over to the Service Fabric platform. The event store is going to contain a lot of time-series data, and as it's our only source of truth for the data persisted there, it must be consistent, replicated and stored in some form of durable storage.

One way I have considered doing this is with a stateful "EventStream" actor; each instance of an aggregate using event sourcing stores its events within an isolated stream. This means the stateful actor could keep track of all the events for its own stream, and I'd have met my requirements for how the data is stored (transactional, replicated and durable). However, some streams may grow very large (hundreds of thousands, if not millions, of events), and this is where I'm starting to get unsure. Having an actor with a large amount of state will, I imagine, impact the performance of the system when these large data models need to be serialized to or deserialized from disk.
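As a language-agnostic sketch of the per-aggregate "EventStream" idea (class and method names are hypothetical, not a Service Fabric API), each stream is an append-only event list with a version number used for optimistic concurrency. The unbounded growth of `events` is exactly the serialization concern raised above:

```python
# Minimal sketch of a per-aggregate event stream (names hypothetical).
class EventStream:
    def __init__(self, stream_id):
        self.stream_id = stream_id
        self.events = []  # grows unbounded -- the concern raised above

    def append(self, event, expected_version):
        # Optimistic concurrency: reject writes made against stale state.
        if expected_version != len(self.events):
            raise ValueError("concurrency conflict")
        self.events.append(event)
        return len(self.events)  # new stream version

    def read(self, from_version=0):
        return self.events[from_version:]

stream = EventStream("order-42")
stream.append({"type": "OrderCreated"}, expected_version=0)
stream.append({"type": "ItemAdded", "sku": "A1"}, expected_version=1)
```

In an actor-backed variant, `events` would live in the actor's state and be serialized on every save, which is why stream size matters.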

Another option is to keep these actors stateless, and have them just read their data from some external storage like Azure SQL - or just go with stateless services instead of actors.

Basically, when is the amount of state for an actor/service "too much" and you should start considering other ways of handling state?

Also, this section in the Service Fabric Actors design pattern: Some anti-patterns documentation leaves me a little puzzled:

Treat Azure Service Fabric Actors as a transactional system. Azure Service Fabric Actors is not a two phase commit-based system offering ACID. If we do not implement the optional persistence, and the machine the actor is running on dies, its current state will go with it. The actor will be coming up on another node very fast, but unless we have implemented the backing persistence, the state will be gone. However, between leveraging retries, duplicate filtering, and/or idempotent design, you can achieve a high level of reliability and consistency.

What does "if we do not implement the optional persistence" indicate here? I was under the impression that as long as your transaction modifying the state succeeded, your data was persisted to durable storage and replicated to at least a subset of the replicas. This paragraph leaves me wondering if there are situations where state within my actors/services will get lost, and whether this is something I need to handle myself. The impression I got from the stateful model in other parts of the documentation seems to contradict this statement.
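The "retries, duplicate filtering, and/or idempotent design" mentioned in the quoted note can be sketched independently of Service Fabric: give every command a unique id and record processed ids, so a retried command is applied at most once (all names here are illustrative):

```python
# Sketch of duplicate filtering for at-most-once command application.
class IdempotentHandler:
    def __init__(self):
        self.processed = set()  # command ids already applied
        self.balance = 0

    def handle_deposit(self, command_id, amount):
        if command_id in self.processed:
            return self.balance  # duplicate delivery: do not apply twice
        self.balance += amount
        self.processed.add(command_id)
        return self.balance

h = IdempotentHandler()
h.handle_deposit("cmd-1", 100)
h.handle_deposit("cmd-1", 100)  # a retry of the same command is a no-op
```

With this in place, a caller can safely retry after a node failure without risking double application.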

Do you have an update on this? I find myself in a very similar position and wonder what you ended up doing. Cheers.
@Trond Nordheim Did you decide on a solution using SF? I have a similar requirement and would be interested to hear what you decided on.
We are on the same path and have the same doubts. We tested many .NET event sourcing tools (MementoFX, NCQRS, etc.) but we are not super happy with them (maybe it's our fault), so we are still looking for alternatives.

clca

One option you have is to keep 'some' of the state in the actor (say, what could be considered hot data that needs to be quickly available) and store everything else on a 'traditional' storage infrastructure such as SQL Azure, DocDB, etc. It is difficult to give a general rule about how much local state is too much, but it may help to think in terms of hot vs. cold data. Reliable Actors also offer the ability to customize the state provider, so you can consider implementing a customized one (by implementing IActorStateProvider) with the specific policies you need to be more efficient given your requirements in terms of amount of data, latency, reliability and so on (note: documentation is still very minimal on the StateProvider interface, but we can publish some sample code if this is something you want to pursue).
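The hot/cold split suggested here can be sketched as follows; this is an illustrative model only (the `HOT_LIMIT`, the class, and the dict standing in for SQL Azure / DocumentDB are all assumptions, not Service Fabric constructs). The actor keeps a small recent window in its own state and evicts older entries to external storage:

```python
# Hypothetical hot/cold split: recent entries in actor state, rest external.
HOT_LIMIT = 3  # illustrative cap on in-actor state

class HotColdActorState:
    def __init__(self, cold_store):
        self.hot = []            # recent events, kept in the actor
        self.cold = cold_store   # stand-in for durable external storage

    def append(self, event):
        self.hot.append(event)
        if len(self.hot) > HOT_LIMIT:
            # Evict the oldest entry to cold storage.
            self.cold.setdefault("events", []).append(self.hot.pop(0))

    def all_events(self):
        # Full history = cold prefix + hot suffix.
        return self.cold.get("events", []) + self.hot

cold = {}
state = HotColdActorState(cold)
for i in range(5):
    state.append(i)
```

The actor's serialized state stays bounded by `HOT_LIMIT`, while reads that need the full history pay the cost of hitting the external store.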

About the anti-patterns: the note is more about implementing transactions across multiple actors. Reliable Actors provide full guarantees about the reliability of data within the boundaries of a single actor. Because of the distributed and loosely coupled nature of the actor model, implementing transactions that involve multiple actors is not a trivial task. If 'distributed' transactions are a strong requirement, the Reliable Services programming model is probably a better fit.
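To illustrate why cross-actor transactions are awkward (this is a generic sketch, not Service Fabric code): each actor only guarantees atomicity of its own state, so an operation spanning two actors typically needs compensation (a saga-style undo) rather than a two-phase commit:

```python
# Two "account actors"; each method is atomic only within its own actor.
class AccountActor:
    def __init__(self, balance):
        self.balance = balance

    def debit(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

    def credit(self, amount):
        self.balance += amount

def transfer(src, dst, amount):
    src.debit(amount)        # step 1: atomic within src only
    try:
        dst.credit(amount)   # step 2: may fail independently of step 1
    except Exception:
        src.credit(amount)   # compensate: undo step 1
        raise

a, b = AccountActor(100), AccountActor(0)
transfer(a, b, 40)
```

In contrast, Reliable Services' reliable collections let multiple state changes within one partition commit under a single transaction, which is why they fit this requirement better.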


Thanks for the response, that makes some sense approaching it on that level - at least in situations like this where the data sets you work on can grow beyond control. Any sample code you have in the pipeline would be greatly appreciated; we're investigating Service Fabric thoroughly against our current use case to figure out how and whether we should use it, so it would be interesting to see more examples out there.
quote @clca > "(note: documentation is still very minimal on the StateProvider interface but we can publish some sample code if this is something you want to pursue)". We are also very interested in any code you can share; it would be great to take inspiration from it. Thanks in advance.
ruffin

I know this has been answered, but recently found myself in the same predicament with a CQRS/ES system and here's how I went about it:

Each aggregate was an actor with only the current state stored in it. On a command, the aggregate would effect a state change and raise an event. The events themselves were stored in DocDb. On activation, AggregateActor instances read events from DocDb, if available, to recreate their state. This is obviously only performed once per actor activation, and it took care of the case where an actor instance is migrated from one node to another.
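This answer's approach can be sketched as follows (all names are hypothetical; a dict stands in for DocDb): the actor holds only current state, every command appends an event to the external store, and activation replays the stored events once to rebuild state:

```python
# Sketch of an event-sourced aggregate rehydrated on activation.
class CounterAggregate:
    def __init__(self, event_store):
        self.event_store = event_store  # stand-in for DocDb
        self.count = 0                  # current state only

    def on_activate(self, aggregate_id):
        # Replay persisted events once, on activation / after node migration.
        for event in self.event_store.get(aggregate_id, []):
            self.apply(event)

    def apply(self, event):
        if event["type"] == "Incremented":
            self.count += event["by"]

    def increment(self, aggregate_id, by):
        event = {"type": "Incremented", "by": by}
        self.apply(event)  # mutate in-memory state
        self.event_store.setdefault(aggregate_id, []).append(event)  # persist

store = {"agg-1": [{"type": "Incremented", "by": 2}]}
actor = CounterAggregate(store)
actor.on_activate("agg-1")
actor.increment("agg-1", 3)
```

The replay cost is paid once per activation, so long-lived streams may still warrant snapshots of current state alongside the events.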


Phillip Ngan

To answer @Trond's secondary question, which is: "What does 'if we do not implement the optional persistence' indicate here?"

An actor is always a stateful service, and its state can be configured, using an attribute on the actor class, to operate in one of three modes:

Persisted. The state is replicated to all replica instances, and it is also written to disk. Thus the state is maintained even if all replicas are shut down.

Volatile. The state is replicated to all replica instances, but kept in memory only. This means that as long as one replica instance is alive, the state is maintained; when all replicas are shut down, the state is lost and cannot be recovered after they restart.

No persistence. The state is neither replicated to other replica instances nor written to disk. This provides the least state protection.

A full discussion of the topic can be found in the Microsoft documentation.

