Lately I’ve had the chance to work on a large-ish server application written in RoR. I went from “Wow look at all these cool conventions, I know exactly where everything needs to go!” to “err.. how the fuck do I scale this?” and “This is not where this should go!” in 8 weeks. And this is (partly) why:
(* Yes, this post’s title is a total clickbait. I don’t actually hate Rails; I just think it promotes a lot of bad practices.)
I’ve never personally used the ActiveRecord pattern in previous projects; I always felt this would mix up domain concerns with persistence concerns, and create a bit of a mess.
Well guess what, I was right.
In the specific project I was working on, the code for domain classes would typically consist of 70% business logic and 30% stuff to do with DB access (scopes, usually, as well as querying / fetching strategies). That in itself is a pretty big warning sign that one class is doing too much.
The arguments as to why ActiveRecord in general is a bad idea are well documented; I’ll briefly recap here:
It breaks the Single Responsibility Principle
A model class’s responsibility is to encapsulate business rules and logic. It’s not responsible for communicating with data storage.
As I mentioned before, a considerable amount of code in our project’s domain classes was dedicated to things like querying, which are not business logic.
This causes domain classes to be bloated, and hard to decouple.
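To illustrate what decoupling could look like (a hypothetical sketch of my own, not code from the project): persistence moves out of the domain class into a separate repository object. In plain Ruby, with an in-memory Hash standing in for the database:

```ruby
# Hypothetical sketch: Employee holds only business rules;
# EmployeeRepository owns storage. All names are made up for illustration.
class Employee
  attr_reader :id, :name, :salary

  def initialize(id:, name:, salary:)
    @id, @name, @salary = id, name, salary
  end

  # Pure business logic -- no knowledge of any database.
  def raise_salary(percent)
    @salary += (@salary * percent / 100.0).round
  end
end

class EmployeeRepository
  def initialize(store = {}) # a Hash stands in for the DB
    @store = store
  end

  def save(employee)
    @store[employee.id] = employee
  end

  def find(id)
    @store.fetch(id)
  end
end

repo = EmployeeRepository.new
repo.save(Employee.new(id: 1, name: "Ada", salary: 1000))
repo.find(1).raise_salary(10)
repo.find(1).salary # => 1100
```

Swap the Hash for a real database adapter and `Employee` never has to know about it.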
It breaks the Interface Segregation Principle
Have you ever, while debugging, listed the methods of one of your domain objects? Were you able to find the ones that you defined yourself? I wasn’t.
Because they’re buried somewhere underneath the endless `ActiveRecord::Base` methods — `ActiveRecord::Base` instances have over 100 methods, most of them public.
ISP tells us that we should aspire to have small, easy-to-understand interfaces for our classes. Dozens of public methods on my classes is another indication that they’re doing waaaay too much.
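You can see the scale of the problem from a plain Ruby session. The base class below is a stand-in I made up for `ActiveRecord::Base`, but the introspection trick is real: `instance_methods(false)` lists only the methods a class defines itself, which is how you dig your own methods out from under the inherited pile:

```ruby
# Stand-in for ActiveRecord::Base: a base class that contributes a
# large inherited interface (the real one adds 100+ public methods).
class FakeBase
  100.times { |i| define_method("storage_method_#{i}") { } }
end

class Employee < FakeBase
  def full_name; end
  def salary;    end
end

Employee.new.methods.size        # well over 100
Employee.instance_methods(false) # only the two methods we defined
```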
Its database abstraction is leaky
Abstracting-away the database is notoriously hard. And I believe that ActiveRecord doesn’t do a particularly good job of this.
As noted before, `ActiveRecord::Base` pollutes your public interface with a plethora of storage-related methods. This makes it very easy for a developer to make the mistake of using one of these very storage-specific methods (e.g. `column_for_attribute`) inside a controller action, for example. Methods like `update_attribute` indicate that the using code knows a little too much about the underlying persistence layer.
If there was one thing I knew about Rails before having written a single line of ruby code, it was that everything in Rails is unit-tested. Rails promotes good testing practices. Hooray!
So, obviously, one of the first things I read about concerning RoR development was how to test:
> Testing support was woven into the Rails fabric from the beginning.
Right on! Finally, somebody gets it right!
> Just about every Rails application interacts heavily with a database and, as a result, your tests will need a database to interact with as well. To write efficient tests, you’ll need to understand how to set up this database and populate it with sample data.

I must have misread this… let me check again…

> your tests will need a database
Yup. I need a bloody database to test my business logic.
So why is this so bad?
The meaning and intent behind unit tests is to test single units of code.
That means that if the test fails, there can only be one reason for it: the unit under test is broken.
This is why you fake everything external that the unit under test interacts with; you don’t want a bug in a dependency to cause your current unit test to fail.
For example, when testing the `total_salaries` method, you use fake `Employee` objects with a fake `salary` property that returns a predefined value. That way, if you get a wrong `total_salaries` value, you know for sure that the problem lies within the `Payroll` class, and nowhere else.
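In plain Ruby (no Rails, no real `Employee` class in sight) the idea looks like this; `Payroll` and the fake are hypothetical:

```ruby
# Hypothetical Payroll: sums the salaries of its employees.
class Payroll
  def initialize(employees)
    @employees = employees
  end

  def total_salaries
    @employees.sum(&:salary)
  end
end

# A fake Employee: just enough object to answer #salary.
FakeEmployee = Struct.new(:salary)

payroll = Payroll.new([FakeEmployee.new(100), FakeEmployee.new(250)])
payroll.total_salaries # => 350
```

If that assertion fails, the only possible culprit is `Payroll` itself.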
But with Rails testing, you’re not encouraged to fake anything. So if the `total_salaries` test fails, it can be because `Employee` is broken, or my database schema is wrong, or my database server is down, or even something as obscure as a child object of `Employee` missing a required attribute, so it can’t be persisted and an error is thrown.
This is not how a unit test is supposed to work.
(* Note that if you use hooks, such as `before_update` etc., it becomes even more horrible.)
Apart from the horribleness of making it harder for me to determine what went wrong when a test fails, this complete abomination of a testing strategy caused me some more hair-tearing moments:
- Our unit test suite took 18 minutes to run. Even when using a local, in-memory database.
- My tests failed because the database wasn’t initialized properly.
- My tests failed because the database wasn’t cleaned properly by previous tests (WTF?).
- My tests failed because a mail template was defined incorrectly.
Since sending an email was invoked in a `before_create` callback, failing to send the email caused the callback to fail, which caused `create` to fail, which meant that the record was not persisted, which meant that my test was fucked.
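That failure chain is easy to reproduce without Rails at all. Here’s a toy stand-in of my own making, whose `create` runs a before-create hook first, the way ActiveRecord does:

```ruby
# Toy stand-in for an ActiveRecord model with a before_create hook.
class ToyRecord
  attr_reader :persisted

  def initialize(&before_create)
    @before_create = before_create
    @persisted = false
  end

  def create
    @before_create.call # e.g. "send the welcome email"
    @persisted = true
  rescue StandardError
    false               # the hook raised, so nothing was saved
  end
end

broken = ToyRecord.new { raise "mail template is broken" }
broken.create    # => false
broken.persisted # => false -- the record dies for a mail-related reason
```

The record under test never exists, and the assertion that fails says nothing about mail templates.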
Too much magic
Magic is something that is inherent to any framework; by definition, if it does some sort of heavy lifting for you, some of it is going to be hidden from your view.
That’s doubly true for a framework which prefers “convention over configuration”, meaning: if you name your class / method the correct way, it’s going to be “magically” wired up for you.
This kind of magic is fine and acceptable. The magic that I have a problem with is Rails’ extensive usage of hooks (aka callbacks); be it in the controller (before / after action), or in the model (before / after create / update / delete…).
Using callbacks immediately makes your code harder to read:
With regular methods, it’s easy to determine when your code is being executed – you can see the method being called right there in your code.
With callbacks, it’s not obvious when your code is being invoked, and by whom.
I’ve had several instances of scratching my head, trying to figure out why a certain instance variable was initialized for one controller action and not for another, only to track it down to a problem with a `before_action` callback of a parent class.
The fact that ActiveRecord callbacks can’t easily be switched off is also a pain in the ass when testing, as I described previously.
Additionally, callbacks are, of course, very hard to test, since their logic is so tightly coupled to other things in the model, and you can’t trigger them in isolation, but only as a side effect of another action.
This is the reason why some Rubyists recognize that callbacks are at least problematic, if not to be avoided, while others prefer to implement Node’s Express in Ruby rather than use Rails controllers.
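The alternative those critics usually suggest is to make the side effect explicit: instead of a `before_create` hook, a plain object runs the steps in order, so each one is visible at the call site and testable with fakes. A made-up sketch:

```ruby
# Hypothetical service object: creating the record and sending the
# email are explicit, ordered steps -- no hidden callbacks.
class SignupService
  def initialize(repository, mailer)
    @repository, @mailer = repository, mailer
  end

  def call(user)
    @repository.save(user) # persist first...
    @mailer.welcome(user)  # ...then send the email, in plain sight
    user
  end
end

# In a test, both collaborators can be trivial fakes:
saved  = []
mailed = []
repository = Object.new.tap { |o| o.define_singleton_method(:save)    { |u| saved  << u } }
mailer     = Object.new.tap { |o| o.define_singleton_method(:welcome) { |u| mailed << u } }

SignupService.new(repository, mailer).call("alice")
saved  # => ["alice"]
mailed # => ["alice"]
```

A broken mailer now fails the mailer’s test, not the persistence test.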
I like the idea behind Rails. Convention over configuration is great, and I also totally subscribe to the notion that application developers should write more business-specific code, and less infrastructure code without any business value.
The problem with Rails isn’t that it’s an opinionated framework.
The problem is that its opinions are wrong:
- Tying you down to a single, err, “controversial” persistence mechanism is wrong.
- Making it impossible to do proper unit testing is wrong.
- Encouraging you to do things as side-effects rather than explicitly is wrong.
When it first launched, Rails was revolutionary: it was the first to offer such comprehensive guidelines, and support, to create your application in a standard way. However, it seems that today our standards of what is ‘correct’ or ‘recommended’ have changed, while Rails has stubbornly remained where it was 10 years ago.
You can still create a good application using Rails.
It’s just that it doesn’t allow you to create a great application.