Use Postgres Multiple Schema Database in Rails

Postgres provides a very interesting feature called "Schema" in addition to the "normal" database features; it adds an extra layer between a database and its tables. With schemas, you can have tables with the same name in one database, as long as they live in different schemas.
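For example, the following is valid inside a single database (tenant_a and tenant_b are hypothetical names used only for illustration):

Same table name in two schemas
CREATE SCHEMA tenant_a;
CREATE SCHEMA tenant_b;
-- Two tables named "posts" coexist, one per schema;
-- fully qualified names keep them apart.
CREATE TABLE tenant_a.posts (id serial PRIMARY KEY, title text);
CREATE TABLE tenant_b.posts (id serial PRIMARY KEY, title text);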
To me, "schema" is not a good name! I think "table-space" or even "namespace" would have been better. In fact, a number of people agree that schema is not a good name:

“Schema” is such a terrible name for this feature. When most people hear the term “schema” they think of a data definition of some sort. This is not what PostgreSQL schemas are. I’m sure the PostgreSQL devs had their reasons, but I really wish they would have named it more appropriately. “Namespaces” would have been apropos.

Anywho, the easiest way for me to describe PostgreSQL schemas (besides telling you that they are, indeed, namespaces for tables) is to relate them to the UNIX execution path. When you run a UNIX command without specifying its absolute path, your shell will work its way down the $PATH until it finds an executable of the same name.


And you can find more here.

And there is a popular routine: using Postgres schemas for sub-domains. For example, suppose you're a BBS provider and you rent your BBS app to different organizations. Each organization wants to have its own BBS app instance running independently; most importantly, its data should be stored in a separate space and be accessible from its own domain name. But to you, for administration purposes, they should all share the same backend management console.
In this case, the best way to solve the problem is to store the data owned by each subsystem in its own schema, but store all the administration data in a single schema, or even in the public schema.
The same guy, Jerod, has a post describing how to build this kind of system in detail. There are a bunch of posts describing how to build systems like this, which can easily be found by googling.
And there is even a Ruby gem called apartment, from Brad Robertson, that supports this kind of system.

This idea looks fancy, but beware: unless you are 10000% certain that the sub-systems will keep their independent status, without any collaboration, forever,
sooner or later you will find the "fancy idea" has become a horrible idea.
As time goes by, more and more features will be requested that require collaboration between the sub-systems: a unified authentication mechanism, so a user can log in once and switch between the different systems easily, or a unified statistics graph across all sub-systems that the administrator asks for.
All these requirements boil down to cross-schema queries! And to be honest, cross-schema queries can be really painful in some cases!
They bring trouble to all aspects of your system: data migration, test data generation, etc.
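To make "cross-schema query" concrete, here is the shape such a statement typically takes (schema and table names are hypothetical): every schema has to be enumerated by hand, and the list must be kept up to date as tenants come and go.

A cross-schema statistics query
SELECT 'tenant_a' AS tenant, COUNT(*) AS orders
FROM tenant_a.orders
UNION ALL
SELECT 'tenant_b', COUNT(*)
FROM tenant_b.orders;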

That's exactly what happened in my current project!
My current project is a Rails 3 project; the codebase is brand new but built on a legacy multiple-schema Postgres database. And for some reason,
we must keep the multiple-schema design unchanged, while our goal is to unify the separated subsystems into a more closely collaborating system.

ActiveRecord in Rails has no native support for this fancy feature, which means you will meet problems during migrations, or even when preparing test data with factory-girl.

Postgres allows you to locate a table in a specific schema with a fully qualified name like <schema name>.<table name>.<column name>. The schema name is optional; when you omit it, Postgres searches for the table in a file-system-path-like order called the "search path".

And you can set and query current search path with Postgres SQL statements:

Query and Set search_path
SHOW search_path;
SET search_path TO <new_search_path>;
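By default the search path is "$user", public, so unqualified table names resolve to the public schema (unless a schema matching your user name exists). After a SET, unqualified names resolve against the new path; for instance (tenant_a is a hypothetical schema):

search_path in action
SHOW search_path;                     -- "$user", public  (the default)
SET search_path TO tenant_a, public;
SELECT * FROM posts;                  -- now resolves to tenant_a.posts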

ActiveRecord won't add the fully qualified schema name in front of the table name when it translates ARel into SQL statements, so the only way to support a multiple-schema database is through the search_path.

Basically, it is a very natural idea to use the following Ruby code to make ActiveRecord query different schemas:

Select Schema
def add_schema_to_search_path(schema)
  ActiveRecord::Base.connection.execute "SET search_path TO #{schema}, public;"
end

def restore_search_path
  ActiveRecord::Base.connection.execute "SET search_path TO public;"
end

These two methods work perfectly when querying things from the database. But sooner or later, you will run into big trouble when you try to write data into the database.

In a db migration, or when using factory_girl to generate test fixtures, you may find that the data you insert into different schemas all ends up in the first non-public schema. But all the queries still work perfectly!

We found this problem occurs when the following conditions are all satisfied:

  1. The queries happen inside a transaction.
  2. You insert data into multiple non-public schemas.
  3. You use the SET search_path TO SQL statement to switch between schemas, rather than explicitly using fully qualified table names.

And the most interesting things are that:

  1. All the SELECT queries are executed against the correct schemas.
  2. If you use SHOW search_path; to query the current search path, you get the correct value.
  3. All data is inserted into the first non-public schema that you actually wrote data into. This means that if you insert data into the public schema, nothing goes wrong; and if you switch to a non-public schema but don't actually insert any rows, it isn't impacted either. (A minimal reproduction is sketched below.)
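Putting the conditions together, the failing pattern looks roughly like this (the schema names and the User model are hypothetical):

Reproducing the problem
ActiveRecord::Base.transaction do
  ActiveRecord::Base.connection.execute "SET search_path TO tenant_a, public;"
  User.create! :name => "alice"   # lands in tenant_a.users, as expected

  ActiveRecord::Base.connection.execute "SET search_path TO tenant_b, public;"
  User.create! :name => "bob"     # silently lands in tenant_a.users too!
end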

To solve this problem, I spent 2 nights and 2 days digging into the source code of the ActiveRecord gem and the pg gem (the Postgres database adapter).
And finally I solved the problem by using an attribute on PostgreSQLAdapter.

Basically, instead of using raw SQL queries, you should use PostgreSQLAdapter#schema_search_path and PostgreSQLAdapter#schema_search_path= to get and set the search path.
And if you dig into the source code, you will find that the two methods do exactly the same thing as we did, except that the setter also assigns one more instance variable, @schema_search_path.

methods on PostgreSQLAdapter
# Sets the schema search path to a string of comma-separated schema names.
# Names beginning with $ have to be quoted (e.g. $user => '$user').
# See: http://www.postgresql.org/docs/current/static/ddl-schemas.html
#
# This should be not be called manually but set in database.yml.
def schema_search_path=(schema_csv)
  if schema_csv
    execute("SET search_path TO #{schema_csv}", 'SCHEMA')
    @schema_search_path = schema_csv
  end
end

# Returns the active schema search path.
def schema_search_path
  @schema_search_path ||= query('SHOW search_path', 'SCHEMA')[0][0]
end

The most interesting thing is that if you search for references to @schema_search_path, you will find it is only used as a local cache of the current search_path in the adapter: it is initialized from the query SHOW search_path; when it is nil, and then keeps that value as the cache!
This implementation is buggy and causes the problems described before!

If we use a raw SQL query to set the search path rather than calling schema_search_path=, @schema_search_path is not set at the same time, so it remains nil. Then, when a transaction or some other object in ActiveRecord calls schema_search_path to get the current search path, @schema_search_path is nil the first time and gets initialized from the query SHOW search_path; after that it never changes, because that query is never executed again.
As a result, the schema switch succeeds the first time, but fails on every switch that follows.

This means that, at the current stage, if you want to change the search_path, the only correct way is to use PostgreSQLAdapter#schema_search_path=, and PLEASE PLEASE ignore the warning "This should be not be called manually but set in database.yml." in the source code! It is a really misleading message!
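In other words, route every switch through the adapter accessor so the cache stays in sync. Something along these lines (schema names hypothetical):

Switching schemas the safe way
conn = ActiveRecord::Base.connection
conn.schema_search_path = "tenant_a, public"  # updates @schema_search_path too
# ... write data into tenant_a ...
conn.schema_search_path = "public"            # restore afterwards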

I understand the current implementation is there for performance reasons, but caching a value is absolutely not a good idea when you cannot keep it in sync, and that sync is critical in some cases.
I'm planning to fix this issue in the Rails codebase and create a pull request to the Rails maintainers. I wish they would accept this fix, or at least change the warning message.

And instead of using the out-dated and mysterious PgTools mentioned in a lot of posts (I saw a lot of people mention this class, but I cannot find it anywhere, even on Google or GitHub; it is a real mystery), I created a new utility module called MultiSchema.

You can use it as a utility class in the old-fashioned procedural way:

Procedural usage
MultiSchema.with_in_schemas :except => :public do
  # Play around the data in one schema
end

Or you can use it in a DSL-like way:

DSL usage
class SomeMigration < ActiveRecord::Migration
  include MultiSchema

  def change
    with_in_schemas :except => :public do
      # Play around the data in one schema
    end
  end
end

The with_in_schemas method accepts both symbols and strings, and you can pass a single value, an array, or a hash to it.

  • with_in_schemas yields all user schemas in the database
  • with_in_schemas :only => %w(schema1 schema2) yields only the given schemas.
  • with_in_schemas :except => %w(schema1 schema2) yields all user schemas except the given ones.
  • with_in_schemas :except => [:public] is equivalent to with_in_schemas :except => ['public']
  • with_in_schemas :only => [:public] is equivalent to with_in_schemas :only => :public and to with_in_schemas :public
  • with_in_schemas :except => [:public] is equivalent to with_in_schemas :except => :public
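For the curious, here is a minimal sketch of how such a helper can be built on top of schema_search_path= (this is my illustration of the idea, not necessarily the actual source of MultiSchema):

A sketch of MultiSchema
module MultiSchema
  module_function

  # Yield once per selected user schema, switching search_path around the block.
  def with_in_schemas(options = {})
    opts   = options.is_a?(Hash) ? options : { :only => options }
    only   = Array(opts[:only]).map(&:to_s)
    except = Array(opts[:except]).map(&:to_s)
    conn   = ActiveRecord::Base.connection

    schemas = user_schemas
    schemas &= only unless only.empty?
    schemas -= except

    original = conn.schema_search_path
    begin
      schemas.each do |schema|
        conn.schema_search_path = schema  # keeps the adapter's cache in sync
        yield schema
      end
    ensure
      conn.schema_search_path = original
    end
  end

  # Every schema except Postgres' internal ones.
  def user_schemas
    ActiveRecord::Base.connection.select_values(
      "SELECT nspname FROM pg_namespace " \
      "WHERE nspname !~ '^pg_' AND nspname <> 'information_schema'"
    )
  end
end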

The "Cursed" NULL in postgres

Comparison to NULL

In Postgres, NULL is treated as a special value that is not equal to any other value, which means the expression NULL = NULL does not yield true (it yields NULL, which a WHERE clause treats as false).
This can be verified with the following query:

SELECT NULL
SELECT n
FROM unnest(ARRAY[NULL,1,2,3,4,5]) n
WHERE n = NULL;

The query returns an empty set, because no element equals NULL, not even NULL itself.
If you think this experiment is not convincing enough, you can try this:

CASE NULL
SELECT
  n,
  CASE WHEN n = NULL THEN 'NULL' ELSE 'NOT NULL' END
FROM unnest(ARRAY[NULL,1,2,3,4,5]) n;

For the NULL element, this query yields ", NOT NULL", which means NULL does not even equal NULL.

To test whether or not a value is NULL, you should use IS NULL or IS NOT NULL.

So if you replace n = NULL with n IS NULL in the previous 2 statements, you will get the expected results:

SELECT NULL
SELECT
  n,
  CASE WHEN n IS NULL THEN 'NULL' ELSE 'NOT NULL' END
FROM unnest(ARRAY[NULL,1,2,3,4,5]) n
WHERE n IS NULL;

NULL in Crosstab

In most cases, the special value NULL doesn't hurt much, since we can always adjust our expressions to work around it. But if you're using table functions to create a pivot table and there are NULLs in your columns, you will find NULL is a cursed value that brings you a lot of trouble.

Postgres provides the tablefunc extension, which contains a series of functions called "crosstab#". With these functions, you can convert a set of rows into a pivot table.

The crosstab function accepts 2 SQL queries. The first query should yield 3 columns: the row in the pivot table, the column in the pivot table, and the value in the pivot table.
The second query should yield a series of values that defines the columns of the pivot table.
The crosstab function groups the rows yielded by the 1st query by their 1st column. Then it maps each row in a group to a column by comparing the row's 2nd column against the values generated by the 2nd query; if the values are equal, the row's 3rd column is placed in the pivot column defined by that value.
It is a very convenient feature provided by Postgres, and it works perfectly in most cases. A minimal example is sketched below.
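For instance, assuming a simple sales table with period, rating, and volume columns (all names here are hypothetical), the two queries fit together like this:

crosstab with two queries
CREATE EXTENSION IF NOT EXISTS tablefunc;

SELECT *
FROM crosstab(
  -- 1st query: row, column, value
  'SELECT period, rating, SUM(volume) FROM sales
   GROUP BY period, rating ORDER BY period',
  -- 2nd query: the values that become the pivot columns
  'SELECT DISTINCT rating FROM sales ORDER BY 1'
) AS pivot(period text, rating_1 bigint, rating_2 bigint);
-- the AS clause assumes exactly two distinct ratings exist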

But in our case, we had NULL values in the 2nd column of the 1st query, which means we had NULL among the pivot table columns!

And crosstab decides which column a value should be placed in by checking whether the column values are equal!

And NULL never equals NULL!

BAM!!!

As a result, the NULL column in the pivot table is always empty!

CASE WHEN

To solve the problem, we play a little trick in the first query: we translate all the NULLs into a "normal" value.
Here is our 1st query; we want a pivot table with period as the row axis, rating as the column axis, and order volume as the content.

Original 1st query
SELECT
  Periods.period,
  Profiles.rating,
  SUM(Orders.volume)
FROM
  Orders
  LEFT OUTER JOIN Periods ON (Periods.id = Orders.period_id)
  LEFT OUTER JOIN Profiles ON (Profiles.id = Orders.profile_id)
GROUP BY
  Periods.period,
  Profiles.rating
ORDER BY
  Periods.period,
  Profiles.rating

We get NULL in Profiles.rating, so we translate NULL into 0 for rating. To achieve this we can use a CASE WHEN expression.

Original 1st query with CASE WHEN
SELECT
  Periods.period,
  CASE WHEN Profiles.rating IS NULL THEN 0 ELSE Profiles.rating END,
  SUM(Orders.volume)
FROM
  Orders
  LEFT OUTER JOIN Periods ON (Periods.id = Orders.period_id)
  LEFT OUTER JOIN Profiles ON (Profiles.id = Orders.profile_id)
GROUP BY
  Periods.period,
  Profiles.rating
ORDER BY
  Periods.period,
  Profiles.rating

COALESCE

This solution works fine, but from my personal perspective, I don't like it, because it repeats the expression and is not concise. Luckily, the value we have to deal with is the special value NULL, and Postgres provides a group of functions specifically for dealing with NULL.

What we want is the function COALESCE, which accepts a list of values as arguments and returns the first non-null value.
So we can simplify our statement with this super function:

Original 1st query with COALESCE
SELECT
  Periods.period,
  COALESCE(Profiles.rating, 0),
  SUM(Orders.volume)
FROM
  Orders
  LEFT OUTER JOIN Periods ON (Periods.id = Orders.period_id)
  LEFT OUTER JOIN Profiles ON (Profiles.id = Orders.profile_id)
GROUP BY
  Periods.period,
  Profiles.rating
ORDER BY
  Periods.period,
  Profiles.rating

In this statement, if the rating is not null, COALESCE returns the actual value; if the rating is NULL, COALESCE moves on to the next value, 0, which is guaranteed to be non-null.

Besides the COALESCE function, there is another function called NULLIF, which might mislead you down a totally wrong path, just as it did me.
According to the Postgres documentation, the function behaves in a way opposite to what you might expect from its name:

The NULLIF function returns a null value if value1 equals value2; otherwise it returns value1. This can be used to perform the inverse operation of the COALESCE
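A quick illustration of that behavior, and of how it can undo our COALESCE trick:

NULLIF in action
SELECT NULLIF(1, 1);  -- => NULL  (the two values are equal)
SELECT NULLIF(1, 2);  -- => 1     (not equal, so value1 comes back)
-- e.g. NULLIF(COALESCE(Profiles.rating, 0), 0) turns the 0 back into NULL.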