Expiring records and notifications

Hello,
I am looking to relocate some functionality into an existing microservice and was hoping for a little guidance. Currently, in our monolith, there is a daily job that sends out notifications for company documents that are near or past expiration. Two users belonging to the same company might receive different content based on their roles. My current thinking is that the simplest approach might be to publish events along the lines of Document(Almost)Expired from the monolith; the microservice would then listen for those events and ultimately send the message. However, it also seems like I could instead listen for when a document is created or its expiration date is changed, and keep track of that myself. Currently, a system-wide setting sends a message 7 days before expiration, on the day itself, and 7 days after (i.e. -7, 0, +7), but we want to make it more of a per-user setting. I’m not quite sure what the best approach is, but feel free to elaborate as little or as much as you prefer. Thanks!

Hi Brian, it sounds like a use case for the deadline manager. With a deadline manager, you can schedule a deadline via the document aggregate. If the expiry date is changed, you can easily reschedule the deadline.
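A rough sketch of what that could look like, assuming Axon 4 with an event-sourced aggregate (all command, event, and field names here are placeholders, not your actual model):

```java
import java.time.Instant;

import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.deadline.DeadlineManager;
import org.axonframework.deadline.annotation.DeadlineHandler;
import org.axonframework.eventsourcing.EventSourcingHandler;
import org.axonframework.modelling.command.AggregateIdentifier;
import org.axonframework.modelling.command.TargetAggregateIdentifier;
import org.axonframework.spring.stereotype.Aggregate;

import static org.axonframework.modelling.command.AggregateLifecycle.apply;

// Placeholder commands and events (one type per file in real code).
record CreateDocumentCommand(@TargetAggregateIdentifier String documentId, Instant expirationDate) {}
record ChangeExpirationDateCommand(@TargetAggregateIdentifier String documentId, Instant newExpirationDate) {}
record DocumentCreatedEvent(String documentId, Instant expirationDate, String scheduleId) {}
record ExpirationDateChangedEvent(String documentId, Instant newExpirationDate, String scheduleId) {}
record DocumentExpiredEvent(String documentId) {}

@Aggregate
public class Document {

    private static final String EXPIRY_DEADLINE = "documentExpired";

    @AggregateIdentifier
    private String documentId;
    private String expiryScheduleId;

    protected Document() {
        // Required by Axon for event-sourced reconstruction.
    }

    @CommandHandler
    public Document(CreateDocumentCommand cmd, DeadlineManager deadlineManager) {
        String scheduleId = deadlineManager.schedule(cmd.expirationDate(), EXPIRY_DEADLINE);
        apply(new DocumentCreatedEvent(cmd.documentId(), cmd.expirationDate(), scheduleId));
    }

    @CommandHandler
    public void handle(ChangeExpirationDateCommand cmd, DeadlineManager deadlineManager) {
        // Cancel the old deadline before scheduling one for the new date.
        deadlineManager.cancelSchedule(EXPIRY_DEADLINE, expiryScheduleId);
        String scheduleId = deadlineManager.schedule(cmd.newExpirationDate(), EXPIRY_DEADLINE);
        apply(new ExpirationDateChangedEvent(documentId, cmd.newExpirationDate(), scheduleId));
    }

    @DeadlineHandler(deadlineName = EXPIRY_DEADLINE)
    public void onExpiry() {
        apply(new DocumentExpiredEvent(documentId));
    }

    @EventSourcingHandler
    public void on(DocumentCreatedEvent event) {
        this.documentId = event.documentId();
        this.expiryScheduleId = event.scheduleId();
    }

    @EventSourcingHandler
    public void on(ExpirationDateChangedEvent event) {
        this.expiryScheduleId = event.scheduleId();
    }
}
```

The schedule id travels in the events so that, after rehydration, the aggregate still knows which deadline to cancel when rescheduling.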

Sending the notifications can be done via an event processor, which could also maintain a projection of personal preferences describing if, when, and how people want to receive the notification.
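For example, something along these lines (the repository, sender, and event names are all placeholders, and DocumentExpiredEvent is the one from the sketch above):

```java
import java.util.List;

import org.axonframework.config.ProcessingGroup;
import org.axonframework.eventhandling.EventHandler;

// Placeholder types for the sketch.
record Preference(String userId, String channel, int daysBefore) {}
record NotificationPreferenceChangedEvent(String userId, String channel, int daysBefore) {}

interface PreferenceRepository {
    void save(Preference preference);
    List<Preference> findUsersFor(String documentId);
}

interface NotificationSender {
    void send(String userId, String channel, String documentId);
}

@ProcessingGroup("expiry-notifications")
public class ExpiryNotificationHandler {

    private final PreferenceRepository preferences;
    private final NotificationSender sender;

    public ExpiryNotificationHandler(PreferenceRepository preferences, NotificationSender sender) {
        this.preferences = preferences;
        this.sender = sender;
    }

    @EventHandler
    public void on(NotificationPreferenceChangedEvent event) {
        // Projection side: store if, when, and how the user wants to be notified.
        preferences.save(new Preference(event.userId(), event.channel(), event.daysBefore()));
    }

    @EventHandler
    public void on(DocumentExpiredEvent event) {
        // Notification side: notify everyone who cares about this document,
        // each via their preferred channel.
        preferences.findUsersFor(event.documentId())
                   .forEach(p -> sender.send(p.userId(), p.channel(), event.documentId()));
    }
}
```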

Hello Gerard,
Thanks for responding. Deadlines definitely came to mind. Currently, the document is not an aggregate, as it is legacy code. My understanding (based on an old video made by Allard) is that the migration process involves publishing events from the legacy code. If something has changed in that regard, please let me know. Publishing an event when the expiration date changes sounds feasible. However, how might I go about initializing things? E.g. do I publish some sort of “DocumentExpirationDatesExported” event with all the data? Or do I publish a DocumentUploaded event for each one? Just to clarify, each notification will contain the list of documents, as opposed to one notification per document, so I don’t know how I might go about queueing things up before sending the notification. Do I need to send some sort of “Done” signal?

I would expect at least three types of event from the old system: that a new document was created, with an expiration date and some details; that the expiration date was changed; and that the document was deleted. The creation event could also be sent for each existing document when you start moving towards a more event-driven system.
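In code, the minimum could look something like this (field names are guesses):

```java
import java.time.Instant;

// Three minimal events the monolith could publish.
record DocumentCreatedEvent(String documentId, String companyId, Instant expirationDate, String title) {}
record DocumentExpirationDateChangedEvent(String documentId, Instant newExpirationDate) {}
record DocumentDeletedEvent(String documentId) {}
```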

If you don’t want to send the actual notification for each document, there are several ways. For example, you could keep a list of the documents that expired and use a scheduled task, for example using the Spring @Scheduled annotation, to send the notifications. You could also have the task just create an event, and use that event as a trigger to send the notifications. It depends a lot on how and when you want to send the notifications, and how flexible you want to be to change that later.
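For the trigger-event variant, a sketch (remember @EnableScheduling on a configuration class; the event name is a placeholder):

```java
import java.time.Instant;

import org.axonframework.eventhandling.gateway.EventGateway;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Publishes a trigger event once a day; an event processor reacts to it by
// collecting the expired documents and sending the notifications.
@Component
public class DailyNotificationTrigger {

    private final EventGateway eventGateway;

    public DailyNotificationTrigger(EventGateway eventGateway) {
        this.eventGateway = eventGateway;
    }

    @Scheduled(cron = "0 0 6 * * *") // every day at 06:00
    public void trigger() {
        eventGateway.publish(new DailyNotificationTriggerEvent(Instant.now()));
    }

    public record DailyNotificationTriggerEvent(Instant triggeredAt) {}
}
```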

Hello Gerard,
How might I go about initializing the document data in the other app? Do I publish some kind of event for each document (~60k docs), or can I publish some kind of “bulk” event? In either case, how might I go about publishing the events? Do I need to execute some sort of one-time job from the monolith? Also, a big reason we can’t currently scale out the monolith is the scheduled tasks: neither instance is aware of the other, so they each execute the job, which results in duplicate work. Can I schedule the job from the microservice in such a way that it only executes once across all instances?

Publishing it in bulk would become too big, and you need an event per aggregate instance anyway, so one per Document instance.

For scheduling tasks without triggering them on each instance, you should use something like Quartz, Db-scheduler, or JobRunr. Those are also used for implementations of deadlines.
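With JobRunr, for example, a recurring job is registered under a fixed id and stored in a shared database, so only one instance executes it. A sketch (the service is a placeholder):

```java
import org.jobrunr.scheduling.BackgroundJob;
import org.jobrunr.scheduling.cron.Cron;

public class RecurringJobs {

    // JobRunr stores recurring jobs by id in a shared database, so registering
    // this from every instance is idempotent and only one instance runs it.
    public static void register(ExpiryNotificationService service) {
        BackgroundJob.scheduleRecurrently(
                "daily-expiry-notifications", Cron.daily(6), service::sendDailyNotifications);
    }

    // Placeholder for whatever kicks off the daily notification flow.
    public interface ExpiryNotificationService {
        void sendDailyNotifications();
    }
}
```

Quartz (with a clustered JDBC job store) and Db-scheduler (with its shared task table) give you the same single-execution guarantee.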

Hello Gerard,
Currently, I have set up a scheduled task that sends a command to kick off this process. I basically just copy-pasted the code into an aggregate command handler and then ironed out the errors. The purpose of the process is to first find all the documents based on their expiration date and then work out who should be notified. The end result is more or less a map where the user is the key and the values are the documents. Once I have that, it seems like I should be issuing commands to create a notification aggregate. However, my understanding is that issuing a command from a command handler is an anti-pattern. Off-hand, I wonder if I should publish some kind of “job finished” event that has all the info collated, and then have a saga listening for that event and issuing the “Send Notification” commands? Please advise. Thanks!

Kicking off is a one-time thing, right? At least, I assume that in the future you’ll create the ‘document’ aggregate immediately. For such a thing it’s likely best to just run it once, and not use a command, also because it will likely take some time.

You could maybe have a specific command to create the migration events, using the ALWAYS creation policy. That way, any aggregate that was already created will not be changed.
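Roughly like this, inside the future ‘document’ aggregate (the command and event names are placeholders):

```java
import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.modelling.command.AggregateCreationPolicy;
import org.axonframework.modelling.command.CreationPolicy;

import static org.axonframework.modelling.command.AggregateLifecycle.apply;

// A dedicated migration handler on the Document aggregate.
@CommandHandler
@CreationPolicy(AggregateCreationPolicy.ALWAYS)
public void handle(ImportLegacyDocumentCommand cmd) {
    // ALWAYS means a fresh instance handles this command. If events already
    // exist for this identifier, the event store should reject the append,
    // so an aggregate that was already migrated stays untouched.
    apply(new DocumentCreatedEvent(cmd.documentId(), cmd.expirationDate()));
}
```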

The alerts are daily. Just to recap a little: the document creation etc. is currently handled by the monolith and isn’t an actual aggregate… yet. However, I’m going to publish the events (e.g. DocumentCreated, DocumentExpirationDateChanged, etc.) and then maintain a slimmed-down read model in the other app to determine which documents are expired and who needs to be notified. The document expiration alerts are a daily job where a user receives an email with a subject like “Documents expiring in 7 days” and a list of all the documents.

I was researching scheduled jobs in a CQRS environment, and the suggested approach was to send a command from a scheduled task, which makes sense, IMO. I named the command GenerateExpiredDocumentNotifications (subject to change) and set up an aggregate to handle the command. That handler is where I link the docs and users together. In the legacy code, it is also where the notification records are generated. That’s where I’m at now. It seems like I should just publish those results in some kind of event, which in turn triggers a series of commands to create the notification aggregates.

As of now, I’m trying to implement this whole thing without changing the current process too drastically and without too much complexity, to get management on board with my vision. If you think I’m going about this the wrong way, don’t hesitate to say so. Please advise. Thanks!

It feels like you are making it too complex. If there isn’t really an aggregate to send the command to, it seems better to create the events directly from a scheduled task. Although using an actual aggregate does mean you could easily check whether you need to create the events again.

So you could have that aggregate ‘end’ with a GeneratedExpiredDocumentNotificationsEvent. The notifications themselves could be non-domain events, sent directly via the event bus/gateway.
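A sketch of that last part (event and field names are placeholders): a handler picks up the collated result and publishes one plain notification event per user, with no notification aggregate involved.

```java
import java.util.List;
import java.util.Map;

import org.axonframework.config.ProcessingGroup;
import org.axonframework.eventhandling.EventHandler;
import org.axonframework.eventhandling.gateway.EventGateway;

// Placeholder event types for the sketch.
record GeneratedExpiredDocumentNotificationsEvent(Map<String, List<String>> documentsByUser) {}
record UserNotificationRequestedEvent(String userId, List<String> documentIds) {}

@ProcessingGroup("expiry-notifications")
public class NotificationDispatcher {

    private final EventGateway eventGateway;

    public NotificationDispatcher(EventGateway eventGateway) {
        this.eventGateway = eventGateway;
    }

    @EventHandler
    public void on(GeneratedExpiredDocumentNotificationsEvent event) {
        // One plain (non-domain) event per user, carrying that user's documents.
        event.documentsByUser().forEach((userId, documents) ->
                eventGateway.publish(new UserNotificationRequestedEvent(userId, documents)));
    }
}
```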

To me, an ‘ideal’ solution would no longer use batching, but would use deadlines on the ‘document’ aggregates, generating one notification when triggered. Handling the actual notifications could still be a ‘batch’ job using an event processor, with some trigger based on a scheduled task. I’m not sure what’s currently there, and thus what the clearest path towards it is.