Abstraction boundaries are optimization boundaries
The N+1 query problem occurs when your application code fetches a collection with one query and then sends one additional SQL query per element in that collection. The N follow-up queries are redundant; since all of the data is in the database already, a single query should be enough.
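To make the shape of the problem concrete, here is a minimal sketch in Haskell; `User`, `Post`, and `fetchUser` are hypothetical stand-ins for a data layer, and the SQL is printed rather than sent to a real database:

```haskell
-- Hypothetical data layer: each call to fetchUser is one round trip.
data User = User { userId :: Int, userName :: String } deriving Show
data Post = Post { postId :: Int, authorId :: Int }

-- Single-row lookup (stand-in: prints the SQL it would send).
fetchUser :: Int -> IO User
fetchUser uid = do
  putStrLn ("SELECT * FROM users WHERE id = " ++ show uid)
  pure (User uid ("user" ++ show uid))

-- The N+1 pattern: `posts` came from one query, and now we issue
-- one further query per post to resolve its author.
authorsOf :: [Post] -> IO [User]
authorsOf = mapM (fetchUser . authorId)

main :: IO ()
main = authorsOf [Post 1 10, Post 2 11, Post 3 12] >>= print
```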
This problem is usually caused by a leaky abstraction: the ORM, or whatever database abstraction you are using, only sees one single-row fetch at a time and has no view of the loop around it, so it can’t automatically merge the N queries into one. The usual solution is to move the abstraction boundary down and explicitly tell the ORM that you will need to fetch a set of rows in bulk, rather than repeatedly fetching single rows.
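In the sketch above (reusing `User` and `Post`, with `fetchUsersByIds` again an assumed name), the bulk version replaces the per-row lookup with a single set-based query:

```haskell
import Data.List (intercalate)

-- Hypothetical bulk lookup: one IN query for the whole id set.
fetchUsersByIds :: [Int] -> IO [User]
fetchUsersByIds uids = do
  putStrLn ("SELECT * FROM users WHERE id IN ("
            ++ intercalate ", " (map show uids) ++ ")")
  pure [User uid ("user" ++ show uid) | uid <- uids]

-- Same result as authorsOf, but a single round trip regardless of N.
authorsOf' :: [Post] -> IO [User]
authorsOf' posts = fetchUsersByIds (map authorId posts)
```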
But what if we could do it the other way around? Can we solve the problem by moving the abstraction boundary up instead?
The problem in the ORM example is that the compiler doesn’t know what the ORM does, so it can’t optimize it; to the compiler, the ORM is just another piece of userland code. But what if we raise the abstraction boundary and make the ORM part of the language? Then we could formulate rewrite rules for its operations, allowing the compiler to, e.g., merge the N queries into a single query.
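If the fetch operations were visible to the compiler, that merge could be expressed as a rewrite rule. A hypothetical rule, written in GHC’s RULES syntax against the sketch above, might look like the following; treat it as notation for the idea rather than a working optimization, since in practice `mapM` would likely be inlined before such a rule could fire:

```haskell
-- Hypothetical: N single-row fetches rewrite to one bulk fetch.
{-# RULES
"fetchUser/mapM"  forall ids.  mapM fetchUser ids = fetchUsersByIds ids
  #-}
```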
There are other examples of this: Haskell lets libraries declare rewrite rules via the RULES pragma; among other things, this is used to fuse away intermediate lists and arrays (foldr/build fusion in base, stream fusion in libraries like vector). However, this only works because Haskell is declarative and pure: the low-level operational semantics, such as evaluation order, are abstracted away from the programmer, and are therefore fair game for the optimizer.
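For a flavor of what these pragmas look like, here is the classic map/map rule from the GHC user’s guide; the rules actually shipped in base are phrased in terms of build/foldr and staged across compiler phases, so this is the simplified textbook form:

```haskell
-- Two list traversals collapse into one. The compiler applies this
-- left-to-right wherever the pattern matches, trusting the programmer
-- that both sides are semantically equal.
{-# RULES
"map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
  #-}
```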
I think there’s an interesting pattern here: by raising the abstraction boundary, we have also raised the optimization boundary.