I prefer to let any 'long-running' component, like a Change Detection Connector, time out occasionally. Since your solution keeps an audit log, this gives you a way to monitor that it is alive and well. Of course, if your solution can tolerate small gaps in change detection, having the task stop occasionally lets you drop a timestamp into your audit log to indicate when it stopped, as well as another when it is restarted. I've worked with a couple of monitors that used this as a heartbeat for the solution. Allowing an AL to run indefinitely means you have to add monitorability some other way - and since the change detection mechanisms are 'black box' loops that monopolize processing during the wait, you have to do this using additional ALs.
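The stop/start-as-heartbeat idea can be sketched as below. The helper names are my own invention (not TDI API); in practice you would emit the heartbeat line from the AL's Prolog/Epilog hooks into your existing audit log:

```javascript
// Sketch of treating stop/start audit entries as a heartbeat.
// These functions are hypothetical stand-ins, not TDI/SDI API.
function heartbeatLine(event, clientId) {
  // e.g. "2024-01-01T04:00:00.000Z [stark] AL restarted"
  return new Date().toISOString() + " [" + clientId + "] AL " + event;
}

// A monitor then parses the newest heartbeat's timestamp and alerts
// when it is older than the expected stop/restart cycle.
function isStale(lastBeatMs, maxAgeMs, nowMs) {
  return (nowMs - lastBeatMs) > maxAgeMs;
}
```

The monitor never has to talk to the AL itself - it just watches the age of the newest heartbeat line.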
Another advantage of stopping and restarting the ALs is that you can avoid connection timeouts - whether in the connected systems themselves or in intervening tech, like firewalls. Long periods of inactivity can cause tunnels to close, session tokens to expire or connections to be dropped. Re-establishing these periodically - and ideally on a shorter cycle than the timeout settings themselves - means that you don't have to dig into TDI's (SDI's) Connection Lost and Failover functionality, which, although powerful, will still mean more configuration.
When it comes to keeping ALs afloat, I prefer to use Schedulers. For something like Change Detection I would use a Keep Alive Scheduler. For periodic tasks, like exports to BI or scanning for file uploads, you set the Scheduler to use a Schedule. The latter acts like a crontab entry - and in fact, you could put the mask itself in your properties file so it is easy to tune.
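For example - key names here are invented, adjust to your own conventions - the crontab-style masks could live alongside the rest of a tenant's settings in the properties file:

```properties
# minute hour day-of-month month day-of-week (crontab-style mask)
stark.export.schedule=0 4 * * *
stark.filescan.schedule=*/15 * * * *
```

Tuning a schedule then becomes a properties change rather than a Config edit.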
Finally, you mentioned that when an AL is started it is given a client/tenant id which is used to read the correct properties. I am not sure how you implemented this, but my approach is to create my own getProperty() function in a library Script (Resources > Scripts) which I have preloaded for each AL. In addition to the property name, I also pass in some context info - like the client id, or server instance name, or whatever. My properties have these values encoded in their names in the property file:
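For illustration only (keys and values are made up, borrowing 'stark' from the per-context example below as a client id), the property names carry the context as a prefix:

```properties
# <clientId>.<property> - one shared file, names prefixed per tenant
stark.domino.server=dom01.stark.example
stark.domino.port=1352
wayne.domino.server=dom01.wayne.example
wayne.domino.port=1352
```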
My getProperty() method uses the context info to build the extended property name, fetches the property and returns it. In those projects where the preference was one set of properties for each context - e.g. stark.properties - getProperty() read in the relevant properties file (in our case, each time) based on the context information. To get the built-in handling of property encryption/decryption, we used PropertyStore functions (found in the Java Docs) for this.
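A minimal sketch of that helper, assuming prefix-encoded names; the props object here is just a stand-in for the property store, which a real library Script would query instead (e.g. via the PropertyStore API):

```javascript
// Sketch of a context-aware getProperty(). The "props" object stands
// in for the TDI/SDI property store; values are invented examples.
var props = {
  "stark.domino.server": "dom01.stark.example",
  "wayne.domino.server": "dom01.wayne.example",
  "domino.port": "1352"  // un-prefixed default shared by all tenants
};

function getProperty(context, name) {
  // Build the extended property name from the context info
  // passed in when the AL was started
  var value = props[context + "." + name];
  if (value == null) {
    value = props[name];  // fall back to an un-prefixed default
  }
  return value;
}
```

So getProperty("stark", "domino.server") resolves the tenant-specific value, while anything not prefixed for that tenant falls through to a shared default.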
If the purpose of starting the ALs via command line is to pass in the client id, then perhaps this is your simplest approach going forward. If you were to use a Scheduler then you would need to prepare your AL by defining Operations - again, more configuration - whereas passing in the client id when launching the AL is simpler and more flexible.
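For reference, a per-tenant launch might look roughly like the following - the file names and the client-id mechanism are placeholders (shown here as an environment variable read by the AL's Prolog), while -c and -r are the standard server flags for the Config file and the AL to run:

```shell
rem One invocation per tenant; substitute your own way of
rem handing the client id to the AL.
set CLIENT_ID=stark
ibmdisrv -c "C:\sdi\configs\changedetect.xml" -r "DominoChangeDetect"
```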
Ok, I've rambled long enough. Let me know if you want to talk about any of these items :)
Post by Jared Roberts
Hiya folks... seeking some ideas/experience for running change detection connectors in Production.
What are the best configurations to run an AL that monitors changes in Domino Directory? I'm looking for the best way to run the AL, but with some resilience.
- Do I run the AL with a timeout of a specific period (like 23h 55m) and manually close the AL after that? And restart it on a Task schedule every 24h?
- Do I set the timeout to 0 and run forever and manually recycle it once a week/month?
- What if the server is rebooted or crashes? Should I just enable a startup script to launch the ALs for each customer?
stuff like that... any advice/gotchas are appreciated.
- multi-tenant cloud platform.
- 1 SDI Server on Windows.
- 1 AssemblyLine that runs in 10-15 separate threads at any given time (called from command line)
- up until now AL has been running on a schedule with a regular Notes User connector - now switching to Change Detection.
- AL connects to various customer Domino servers to monitor changes and pushes data to central LDAP/user management environment.
- AL is configured to pass in customer ID when run - this ID references properties files for specific customer config. (eg. Domino server connection parameters)
- custom logging is configured to log changes for each customer