Many times clients will ask you to develop custom logging systems for business purposes. One such example is logging object changes into a database table, so that supervising users can analyse and monitor those changes. However, doing this manually is not an option, since it would create multiple points of possible failure and contradict the DRY principle. Triggering database saves at each crucial point is not an option either, since you would probably end up with import errors in your models.py file. This article presents a database logging solution inspired by the Observer design pattern, which uses Django signals sent from various sources in a unified fashion.
Let’s start with a possible use case. Our application uses Finite State Machines (FSMs) to handle the changes of a user’s state, which can be invited, active, locked or deactivated. Some of the transitions are performed by the users themselves, such as activation from the invited state, or locking after entering wrong credentials too many times. Other changes may be done via an admin interface, such as activating, deactivating and unlocking a user. Finally, the superadmin, using Django Admin views, can override any changes and set a user’s state at will, regardless of the Finite State Machine’s rules. The goal is that every time a user’s state changes, we get a new row in our database table (user_state_audit) which records details about the user who was changed and the user who initiated the change. For more information on implementing Finite State Machines in such projects, check out our previous article on the subject.
Our model will look something like this:
class UserStateAudit(models.Model):
    username = models.TextField(null=False, blank=False)
    initiated_by = models.TextField(null=False, blank=False)
    start_state = models.TextField(null=False, blank=False)
    end_state = models.TextField(null=False, blank=False)
    datetime = models.DateTimeField(auto_now_add=True, null=True, blank=True)
    ip_address = models.TextField()
This model is fairly straightforward: it holds the changed user’s username, starting state and ending state, along with the initiator’s username and IP address. It also logs the date and time when the change occurred.
We can now think about the minimal set of arguments our signal needs in order to log the changes in the database. We chose the changed user, the old state and the new state. In our signals.py file we add:
from django.dispatch import Signal

# Note: providing_args was deprecated in Django 3.0 and removed in 4.0;
# on newer versions, simply use Signal().
state_audit_signal = Signal(providing_args=["user", "old_state", "new_state"])
And we tie it to the FSM using the state_change function, which is triggered automatically whenever the user’s state changes from inside the FSM. Don’t worry if your project is not using FSMs to handle user states. You can always trigger a similar function manually or send the signal from another place. Here the event e holds the old and the new state, but you can send those manually.
from myapp.signals import state_audit_signal

[...]

class UserFSM(models.Model):
    user = models.OneToOneField(User, related_name='fsm')
    current_state = models.CharField(max_length=32, null=False, blank=False, default='invited')

    [...]

    def state_change(self, e):
        state_audit_signal.send(
            sender=self.__class__,
            user=self.user,
            old_state=e.src,
            new_state=e.dst
        )
        self.current_state = e.dst
        self.save()
Hooray! Our signal is now being sent whenever our user is changed through the FSM! But we’re still not doing anything useful with it. We also have to create the database record, and to handle the case where the superadmin disregards the FSM and forces the change from outside it. We tackle the former first.
In signals_processing.py we add the logic for creating the new UserStateAudit object which will be saved in the database. Does something seem fishy? Do we have too few parameters to our signal and can’t complete the puzzle for our new object? Actually, no. We have enough parameters and more than enough tricks up our sleeves. Upon receiving the signal, we use the parameters provided to get the changed user’s information, and django-crequest to find out more about the user who initiated the change.
from django.dispatch import receiver
from crequest.middleware import CrequestMiddleware

from myapp.models import UserStateAudit
from myapp.signals import state_audit_signal


@receiver(state_audit_signal)
def user_change_state_signal(sender, **kwargs):
    current_request = CrequestMiddleware.get_request()
    # The signal sends the User instance itself, so no extra lookup is needed.
    user = kwargs['user']
    old_state = kwargs['old_state']
    new_state = kwargs['new_state']

    initiated_by = current_request.user.username if current_request else 'CLI'
    ip = get_client_ip(current_request) if current_request else 'CLI'

    # create() already saves the object, so no separate save() call is needed.
    UserStateAudit.objects.create(
        username=user.username,
        initiated_by=initiated_by,
        start_state=old_state,
        end_state=new_state,
        ip_address=ip
    )
The django-crequest library adds a middleware that makes the current request available outside of view functions. Note that, since we can make these changes from the command line as well, using the Django shell (python manage.py shell), in those cases the current request will be None, and we fill in the initiated_by and ip fields with the string 'CLI'. You can choose not to log these instances at all, depending on your project’s requirements.
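The get_client_ip helper used in the receiver is not part of Django or django-crequest, so you have to supply it yourself. A minimal sketch, assuming the usual REMOTE_ADDR / X-Forwarded-For conventions, could look like this:

```python
from types import SimpleNamespace


def get_client_ip(request):
    # Prefer the X-Forwarded-For header (set by proxies and load balancers);
    # it may hold a comma-separated chain whose first entry is the client.
    forwarded = request.META.get('HTTP_X_FORWARDED_FOR')
    if forwarded:
        return forwarded.split(',')[0].strip()
    return request.META.get('REMOTE_ADDR', '')


# Quick demonstration with stand-in request objects:
proxied = SimpleNamespace(META={'HTTP_X_FORWARDED_FOR': '203.0.113.7, 10.0.0.1'})
direct = SimpleNamespace(META={'REMOTE_ADDR': '203.0.113.9'})
```

Keep in mind that X-Forwarded-For can be spoofed by clients unless a trusted proxy overwrites it, so treat the logged value as informational rather than authoritative.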
Our app now has the capability of automatically adding the corresponding record in the database whenever a user is changed through the FSM mechanism. To recap: we send a signal whenever the FSM state is changed, our signal is captured and, using the parameters and the current request, it creates and saves a UserStateAudit object with the proper information. Now we still have one loose end to tie up: if the superadmin uses Django Admin views to directly change a user’s state in the database, the state_change function will never be called, since doing so does not go through the FSM but bypasses it for direct user state editing, which is of course something only a superadmin should be allowed to do. The good part is we don’t need to alter the signal receiver function. However, we do need to send the signal from there as well. In admin.py we customise our UserFSM form like this:
class UserFsmForm(ModelForm):
    [...]

    def clean(self):
        cleaned_data = super(UserFsmForm, self).clean()
        if self.has_changed():
            if self.instance.current_state != cleaned_data['current_state']:
                state_audit_signal.send(
                    sender=self.__class__,
                    user=self.instance.user,
                    old_state=self.instance.current_state,
                    new_state=cleaned_data['current_state']
                )
        return cleaned_data

    class Meta:
        model = UserFSM
The clean function is called whenever our forms are submitted and their input needs validation and/or extra logic. We only trigger the signal sending if the user in question was changed and its new state is different from the old one. That’s it. Now the same signal processing logic will be applied when we change the user’s state directly from Django Admin. We don’t need to modify anything else in our signal definition or receiver function, so the code is DRY.
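One gotcha worth keeping in mind: Django only registers @receiver-decorated functions in modules that are actually imported at startup, so signals_processing.py must be imported somewhere. A common way to do this (a sketch, assuming the app is called myapp and an apps.py module following Django's AppConfig convention) is:

```python
# myapp/apps.py -- hypothetical app configuration; the names are assumptions.
from django.apps import AppConfig


class MyAppConfig(AppConfig):
    name = 'myapp'

    def ready(self):
        # Importing the module registers the @receiver-decorated functions.
        from myapp import signals_processing  # noqa: F401
```

If the receiver never fires, a missing import like this is the first thing to check.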
So there you have it! User state changes are now logged in the audit table every time they happen, regardless of their source. Have you ever had to implement custom database logging? How did you approach the problem? Let us know in the comment section below. We hope this article will help you deal with such cases in the future in a clean and DRY way.