Embed CodePen snippet in Hexo

CodePen is a service that provides HTML, JavaScript, and CSS live showcases. It is another clone of Js Fiddle, but with a cooler UI and better support.

Both CodePen and Js Fiddle provide embeddable widgets that allow users to embed their code into blogs or articles.

Here is an example Pen from CodePen:

And this one is from Js Fiddle:

Hexo has a built-in Js Fiddle plug-in that allows writers to embed code from Js Fiddle, which was probably ported from Octopress.
But for CodePen, there is no such thing.

So I created hexo-tag-codepen, which provides a similar syntax to the built-in Js Fiddle plug-in:

{% codepen userId|anonymous|anon slugHash theme [defaultTab [height [width]]] %}
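
For example, to embed an anonymous Pen with the dark theme, showing the result tab at a height of 300 pixels (the slug hash below is a made-up placeholder), you could write:

{% codepen anonymous LmxWpq dark result 300 %}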

Now you can embed Pens from CodePen in your Hexo blog. Enjoy.

For details, check out the hexo-tag-codepen documentation.

JavaScript Prototype Chain Mutator

In the JavaScript world, JSON serialization is widely used. When fetching data from a server via Ajax, the data is usually represented in JSON; and when loading configuration or data from a file in a Node.js application, it is usually in JSON format as well.

JSON serialization is powerful and convenient, but it has a limitation. For security and other reasons, behavior and type information are forbidden in JSON: function members are removed when stringifying a JavaScript object, and functions are not allowed in JSON at all.

Compared to YAML in Ruby, this limitation is inconvenient when writing JavaScript applications. For example, to consume the JSON data fetched via Ajax from the server, I really wish I could invoke methods on the deserialized model.

Here is a simple example:

Ideal World
class Rect
  constructor: (width, height) ->
    @width = width if width?
    @height = height if height?
  area: ->
    @width * @height

$.get '/rect/latest', (rectJSON) ->
  rect = JSON.parse(rectJSON)
  console.log rect.area() # This doesn't work: rect is a plain object

The code doesn't work because rect is a plain object, which doesn't contain any behavior. Some people call such a rect a DTO (Data Transfer Object), or a POJO (Plain Old Java Object), concepts borrowed from the Java world. Here we call it a DTO.

There are various approaches to add behavior to a DTO, such as creating a behavior wrapper around the DTO, or creating a new model with behavior and copying all the data from the DTO into the model. These practices are borrowed from the Java world, or the traditional Object Oriented world.

In fact, in JavaScript there is a better and smarter way to achieve this: object mutation, altering an object's prototype chain on the fly to convert the object into an instance of a specific type. The process is really similar to biological genetic mutation, converting one species into another by altering its genes, so I borrowed the term mutation.

With this idea, we can achieve the following:

Mutate rect with Mutator
class Rect
  constructor: (width, height) ->
    @width = width if width?
    @height = height if height?
  area: ->
    @width * @height

$.get '/rect/latest', (rectJSON) ->
  rect = JSON.parse(rectJSON)
  mutate(rect, Rect)
  console.log rect.area()

The key to implementing the mutate function is to simulate the behavior of the new operator: altering object.__proto__ and applying the constructor to the instance. For more detail, check out the mutator library, which is available as both an NPM package and a Bower package.
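
To illustrate the idea, here is a minimal plain-JavaScript sketch of the mutation technique; it is not the actual code of the mutator library, just the core trick it is built on:

// Point the object's prototype at the target type, then re-run the
// constructor against the existing instance to simulate `new`.
function mutate(obj, Type) {
  var args = Array.prototype.slice.call(arguments, 2);
  obj.__proto__ = Type.prototype; // alter the prototype chain on the fly
  Type.apply(obj, args);          // apply the constructor to the instance
  return obj;
}

function Rect(width, height) {
  if (width != null) this.width = width;
  if (height != null) this.height = height;
}
Rect.prototype.area = function () {
  return this.width * this.height;
};

var rect = JSON.parse('{"width": 3, "height": 4}'); // a plain DTO
mutate(rect, Rect);
console.log(rect.area());          // => 12
console.log(rect instanceof Rect); // => true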

When implementing the mutator, in IE, again, the evil IE, the idea doesn't work. Before IE 11, the prototype chain of an instance is not accessible: there is nothing equivalent to object.__proto__ in IE 10 and prior. The closest workaround is a hard copy of all the members, but it still fails type checks and some dynamic usages.

Background

object.__proto__ was a Mozilla "private" implementation until EcmaScript 6.
It is interesting that most JavaScript engines support it, except IE's.
Luckily, IE 11 introduced some EcmaScript 6 features, and object.__proto__ is one of them.

Process.nextTick Implementation in Browser

Recursion is a common trick that is often used in JavaScript programming, but unbounded recursion causes stack overflow errors.
Some languages resolve this issue with automatic tail call optimization, but in JavaScript we need to take care of it on our own.

To solve the issue, Node.js provides the utility function process.nextTick, which ensures the given code is invoked after the current function has returned.
In the browser there is no standard approach, so workarounds are needed.

Thanks to Roman Shtylman (@defunctzombie), who created node-process for Browserify, which simulates the Node.js API in the browser environment.
Here is his implementation:

node-process

process.nextTick shim
process.nextTick = (function () {
    var canSetImmediate = typeof window !== 'undefined'
        && window.setImmediate;
    var canPost = typeof window !== 'undefined'
        && window.postMessage && window.addEventListener;

    if (canSetImmediate) {
        return function (f) { return window.setImmediate(f) };
    }

    if (canPost) {
        var queue = [];
        window.addEventListener('message', function (ev) {
            var source = ev.source;
            if ((source === window || source === null) && ev.data === 'process-tick') {
                ev.stopPropagation();
                if (queue.length > 0) {
                    var fn = queue.shift();
                    fn();
                }
            }
        }, true);

        return function nextTick(fn) {
            queue.push(fn);
            window.postMessage('process-tick', '*');
        };
    }

    return function nextTick(fn) {
        setTimeout(fn, 0);
    };
})();
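
As a usage sketch (the countdown example is hypothetical, not part of node-process), here is how nextTick turns a stack-overflowing recursion into a safe one:

// Naive version: deep recursion, overflows the stack for large n.
function countdown(n, done) {
  if (n === 0) return done();
  countdown(n - 1, done);
}

// Deferred version: each step runs after the current call returns,
// so the stack never grows.
function safeCountdown(n, done) {
  if (n === 0) return done();
  process.nextTick(function () {
    safeCountdown(n - 1, done);
  });
}

safeCountdown(100000, function () {
  console.log('done, no stack overflow');
});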

Here are some comments on the implementation.

setTimeout

To simulate the nextTick behavior, setTimeout(fn, 0) is a well-known and easy-to-adopt approach. The problem is that setTimeout does heavy work internally; calling it in a loop causes significant performance issues. So we should use a cheaper approach when possible.

setImmediate

There is a function called setImmediate, which behaves quite similarly to nextTick but with a few differences when dealing with IO. In the browser environment there is no IO concern, so we can safely replace nextTick with it.

Immediates are queued in the order created, and are popped off the queue once per loop iteration. This is different from process.nextTick which will execute process.maxTickDepth queued callbacks per iteration. setImmediate will yield to the event loop after firing a queued callback to make sure I/O is not being starved. While order is preserved for execution, other I/O events may fire between any two scheduled immediate callbacks.

— setImmediate(callback, [arg], [...]), Node.js documentation

The setImmediate function is a perfect replacement for nextTick, but it is not supported by all browsers: only IE 10 and Node.js 0.10+ support it. Chrome, Firefox, Opera, and all mobile browsers don't.

Note: This method is not expected to become standard, and is only implemented by recent builds of Internet Explorer and Node.js 0.10+. It meets resistance both from Gecko (Firefox) and Webkit (Google/Apple).

— window.setImmediate, MDN

window.postMessage

window.postMessage enables developers to access the message queue of the browser. With some additional code, we can simulate the nextTick behavior on top of the message queue. It works in most modern browsers, except IE 8, where the API is implemented synchronously, introducing an extra level of stack push, so it cannot be used to simulate nextTick.

Overall, there is no perfect workaround for the nextTick issue for now. All the solutions have different limitations; we can only hope the issue will be resolved in a future ECMAScript standard.

Mac OS X case-insensitive file system pitfall

I was working on the YouTube video playback feature for LiveHall last night, got it working on my local dev box, which runs Mac OS X, and then deployed the code to Heroku without any problem.

But this morning, when I demonstrated the new feature, I got a server error! It said one of the four JavaScript files was missing, so the Jade template failed to render.

This was a very weird issue. I tried the same data on my local dev box once again, and it worked perfectly, yet it kept failing on production! I used the Heroku Toolbelt to run ls on production, and found all four CoffeeScript files were there.
Then I tried to force Heroku to redeploy the app with git push --force, but the issue remained.
I even dove into the dependency packages connect-assets and snockets, but still found nothing useful.

Same code, same data, but different result! Very odd issue!

After half an hour fighting against the code, I suddenly noticed that the file is named RevealJSPresenter.coffee, with a capital "S", while I reference it as #= require ./presenter/RevealJsPresenter, with a lowercase "s"!

And snockets depends on the OS to locate files. My local environment is a Mac, and although OS X allows the user to explicitly format HFS+ in case-sensitive mode, it is case insensitive by default, so snockets could locate the file even though the case was wrong.
But the app was deployed to Heroku, which, I assume, runs Linux, whose file system is case sensitive, so snockets failed to locate the file.

To fix the bug, I renamed the file in RubyMine, then tried to commit in the terminal.
But when committing I hit another interesting issue: git said no file had changed and refused to commit.
It is the same problem again; because the file system is case insensitive, git cannot detect the rename.

This problem is more common when coding on Windows while CI or production runs on Linux. The common solution is to pull the code in a case-sensitive environment, rename the file there, and commit it.

But I found an easier way to do it:

Use git mv in the terminal to rename the file, which forces git to track the rename.

Or

Most Git GUIs are able to track file-name case changes, so you can commit the change with such a tool, for example RubyMine or SourceTree.

Manage configuration in Rails way on node.js by using inheritance

An application usually needs to run in different environments. To manage the differences between environments, we usually introduce the concept of environment-specific configuration.
A Rails application provides three environments by default: the well-known development, test, and production.
We can use the environment variable RAILS_ENV to tell Rails which environment to load; if RAILS_ENV is not provided, Rails loads the app in the development environment by default.

This approach is very convenient, so we want to apply it everywhere. But in Node.js, Express doesn't provide any configuration management, so we need to build the feature ourselves.

The environment management usually provide the following functionalities:

  • Allow us to provide some configuration values as defaults, which are loaded in all environments; we usually call this common.
  • Load environment-specific configuration according to an environment variable, overriding values in common where necessary.

Rails uses YAML to hold these configurations, which is concise yet powerful enough for the purpose. And YAML provides an inheritance mechanism out of the box, so duplication can be reduced through inheritance:

Inheritance in Rails YAML Configuration
development: &defaults
  adapter: mysql
  encoding: utf8
  database: sample_app_development
  username: root

test:
  <<: *defaults
  database: sample_app_test

cucumber:
  <<: *defaults
  database: sample_app_cucumber

production:
  <<: *defaults
  database: sample_app_production
  username: sample_app
  password: secret_word
  host: ec2-10-18-1-115.us-west-2.compute.amazonaws.com

Following the same approach in Express and Node.js, we would prefer JSON over YAML, since JSON is natively supported by JavaScript.
But to me, JSON isn't the best option; it has some disadvantages:

  • JSON syntax is not concise enough
  • Matching brackets and appending commas to line ends are distractions
  • Lack of flexibility

As an answer to these issues, I chose CoffeeScript instead of JSON.
Coffee is concise, and, similar to YAML, it uses indentation to indicate nesting. Coffee is also executable, which gives the configuration a lot of flexibility; we can even shape it into a Domain Specific Language.

To do it, we need to solve 4 problems:

  1. Allow developers to declare a default configuration.
  2. Load the environment-specific configuration in addition to the default one.
  3. Let the specific configuration override values in the default one.
  4. Keep the code concise, clean, and reading-friendly.

Inspired by the YAML solution, I worked out my first solution:

Configuration in coffee script
_ = require('underscore')

config = {}

config['common'] =
  adapter: "mysql"
  encoding: "utf8"
  database: "sample_app_development"
  username: "root"

config['development'] = {}

config['test'] =
  database: "sample_app_test"

config['cucumber'] =
  database: "sample_app_cucumber"

config['production'] =
  database: "sample_app_production"
  username: "sample_app"
  password: "secret_word"
  host: "ec2-10-18-1-115.us-west-2.compute.amazonaws.com"

_.extend exports, config.common

specificConfig = config[process.env.NODE_ENV ? 'development']
if specificConfig?
  _.extend exports, specificConfig

YAML is a data-centric language, so its inheritance is more like mixing in another piece of data. So I used underscore to mix the specific configuration into the default one, overriding the overlapping values.

But if we jump out of YAML's box and think about JavaScript itself: JavaScript is a prototype-based language, which means it already provides an overriding mechanism natively. Each object inherits and overrides values from its prototype.
So I worked out the second solution:

Prototype based Configuration
config = {}

config['common'] =
  adapter: "mysql"
  encoding: "utf8"
  database: "sample_app_development"
  username: "root"

config['development'] = {}
config['development'].__proto__ = config['common']

config['test'] =
  __proto__: config['common']
  database: "sample_app_test"

config['cucumber'] =
  __proto__: config['test']
  database: "sample_app_cucumber"

config['production'] =
  __proto__: config['common']
  database: "sample_app_production"
  username: "sample_app"
  password: "secret_word"
  host: "ec2-10-18-1-115.us-west-2.compute.amazonaws.com"

process.env.NODE_ENV = process.env.NODE_ENV?.toLowerCase() ? 'development'
module.exports = config[process.env.NODE_ENV]

This approach works, but it looks kind of ugly. Since we're using coffee, which provides syntax sugar for classes and class inheritance, we can have a third version:

Class based configuration
process.env.NODE_ENV = process.env.NODE_ENV?.toLowerCase() ? 'development'

class Config
  adapter: "mysql"
  encoding: "utf8"
  database: "sample_app_development"
  username: "root"

class Config.development extends Config

class Config.test extends Config
  database: "sample_app_test"

class Config.cucumber extends Config
  database: "sample_app_cucumber"

class Config.production extends Config
  database: "sample_app_production"
  username: "sample_app"
  password: "secret_word"
  host: "ec2-10-18-1-115.us-west-2.compute.amazonaws.com"

module.exports = new Config[process.env.NODE_ENV]()

Now the code looks clean, and we can go a step further if necessary: separate the configurations into individual files and require them by file name:

Configuration split into files
# config/config.coffee
configName = process.env.NODE_ENV = process.env.NODE_ENV?.toLowerCase() ? 'development'
SpecificConfig = require("./envs/#{configName}")
module.exports = new SpecificConfig()

# config/envs/common.coffee
class Common
  adapter: "mysql"
  encoding: "utf8"
  database: "sample_app_development"
  username: "root"
module.exports = Common

# config/envs/development.coffee
Common = require('./common')
class Development extends Common
module.exports = Development

# config/envs/test.coffee
Common = require('./common')
class Test extends Common
  database: "sample_app_test"
module.exports = Test

# config/envs/cucumber.coffee
Test = require('./test')
class Cucumber extends Test
  database: "sample_app_cucumber"
module.exports = Cucumber

# config/envs/production.coffee
Common = require('./common')
class Production extends Common
  database: "sample_app_production"
  username: "sample_app"
  password: "secret_word"
  host: "ec2-10-18-1-115.us-west-2.compute.amazonaws.com"
module.exports = Production
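
As a usage sketch (assuming the coffee files are compiled to JavaScript or coffee-script/register is loaded, and assuming this project layout), application code can then consume the configuration like this:

process.env.NODE_ENV = 'test';
var config = require('./config/config');

console.log(config.adapter);  // "mysql" -- inherited from Common
console.log(config.database); // "sample_app_test" -- overridden by Test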

Pitfall in node crypto and base64 encoding

Today, we found a huge pitfall in the Node.js crypto module: Decipher has a potential problem when processing Base64 encoding.

We're building a RESTful web service on Node.js, which talks to some other services implemented in Ruby.

Ruby

In Ruby, we use the standard Base64 module to handle Base64 encoding.

Base64#encode64 has a very interesting feature:
it adds a line break (\n) to the output every 60 characters. This makes the output look pretty and human friendly:

Ruby Base64 Block
MSwyLDMsNCw1LDYsNyw4LDksMTAsMTEsMTIsMTMsMTQsMTUsMTYsMTcsMTgs
MTksMjAsMjEsMjIsMjMsMjQsMjUsMjYsMjcsMjgsMjksMzAsMzEsMzIsMzMs
MzQsMzUsMzYsMzcsMzgsMzksNDAsNDEsNDIsNDMsNDQsNDUsNDYsNDcsNDgs
NDksNTAsNTEsNTIsNTMsNTQsNTUsNTYsNTcsNTgsNTksNjAsNjEsNjIsNjMs
NjQsNjUsNjYsNjcsNjgsNjksNzAsNzEsNzIsNzMsNzQsNzUsNzYsNzcsNzgs
NzksODAsODEsODIsODMsODQsODUsODYsODcsODgsODksOTAsOTEsOTIsOTMs
OTQsOTUsOTYsOTcsOTgsOTksMTAw

Base64#decode64 ignores line breaks (\n) when parsing base64-encoded data, so the line breaks won't pollute the data.

Node.js

Node.js treats base64 as one of the five standard encodings (ascii, utf8, base64, binary, hex). Ideally, data or strings can be transcoded between these encodings without loss.

The Buffer class is the simplest way to transcode data:

Base64 Encoder in Node.js
Base64 =
  encode64: (text) ->
    new Buffer(text, 'utf8').toString('base64')

  decode64: (base64) ->
    new Buffer(base64, 'base64').toString('utf8')

Although the encode64 function in Node.js doesn't add line breaks to its output, the decode64 function does ignore line breaks when parsing. This is consistent with Ruby's Base64 behavior, so we can use this decode64 function to decode data from Ruby.
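
A quick sketch verifying that behavior (the "hello" sample data is made up):

var withBreaks = 'aGVs\nbG8='; // base64 of "hello" with a line break injected
var clean = 'aGVsbG8=';

console.log(new Buffer(withBreaks, 'base64').toString('utf8')); // "hello"
console.log(new Buffer(clean, 'base64').toString('utf8'));      // "hello"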

Since base64 is one of the standard encodings, and some Node.js APIs accept an encoding for input and output, ideally we can fold base64 encoding and decoding into the data processing itself.
It seems Node.js is more convenient than Ruby when dealing with Base64.

For example, we can combine reading a file and base64 encoding its content into one operation by passing the encoding to the readFileSync API:

Read File Content as Base64
fs = require('fs')

fileName = './binary.dat' # this file contains binary data
base64 = fs.readFileSync(fileName, 'base64') # the file content is now base64 encoded

It looks like we can always use this trick to avoid manual base64 encoding and decoding whenever an API takes an encoding parameter. But actually that is not true! There is a BIG pitfall here!

In our real case, we use the crypto module to decrypt a JSON document that was encrypted and base64 encoded by Ruby:

Base64 Decode and Decrypt
crypto = require('crypto')

parse = (data, algorithm, key, iv) ->
  decipher = crypto.createDecipheriv(algorithm, key, iv)
  # Set input encoding to 'base64' to ask the API to base64 decode the input before decrypting
  decrypted = decipher.update(data, 'base64', 'utf8')
  decrypted += decipher.final('utf8')
  JSON.parse(decrypted)
Manual Base64 Decoding
crypto = require('crypto')

parse = (data, algorithm, key, iv) ->
  decipher = crypto.createDecipheriv(algorithm, key, iv)
  binary = new Buffer(data, 'base64') # manually base64 decode; Buffer ignores line breaks
  decrypted = decipher.update(binary, 'binary', 'utf8') # set input encoding to 'binary'
  decrypted += decipher.final('utf8')
  JSON.parse(decrypted)

The two implementations are very similar, except that the second base64 decodes the data manually with Buffer. Ideally they should behave identically. But in fact, they do NOT!

The first implementation throws "TypeError: DecipherFinal fail".
The reason is that the shortcut way doesn't ignore the line breaks, but Buffer does! So in the first implementation, the data is polluted by the line breaks.

Conclusion

Be careful when you ask an API to base64 decode data by setting the encoding argument to 'base64': its behavior is inconsistent with the Buffer class.

I'm not sure whether it is a Node.js bug or by design, but it is indeed a pitfall that hides deep and is extremely hard to figure out, since encrypted binary is hard for a human to read, and debugging across two languages is also hard.

Pitfall in fs.watch: fs.watch fails when switching from TextMate to RubyMine

I'm writing a Cake script that helps me build the growlStyle bundle,
and I wish the script could watch the source files and rebuild when a file changes.
So I wrote the following code:

Watching code change
fs = require('fs')

files = fs.readdirSync getLocalPath('source')
for file in files
  fs.watch file, ->
    console.log "File changed, rebuilding..."
    build()

The code works when I edit with TextMate, but fails when I use RubyMine!

Super weird!

After half an hour of debugging, I observed the following interesting phenomena:

  • Given I'm using TextMate
    When I change the file the 1st time
    Then a 'change' event is captured
    When I change the file the 2nd time
    Then a 'change' event is captured
    When I change the file the 3rd time
    Then a 'change' event is captured

  • Given I'm using RubyMine
    When I change the file the 1st time
    Then a 'rename' event is captured
    When I change the file the 2nd time
    Then no event is captured
    When I change the file the 3rd time
    Then no event is captured

From the result we can easily see that the script fails because the 'change' event is not triggered as expected when using RubyMine.
The reason for RubyMine's "weird" behavior might be that RubyMine wants to preserve file integrity, so it "writes" the file in an atomic way, as follows:

  1. RubyMine writes the file content to a temp file
  2. RubyMine removes the original file
  3. RubyMine renames the temp file to the original file name

This workflow ensures the content is either fully written or not written at all. So, in a word, RubyMine does not actually write to the file; it replaces the original file with another one, and the original is removed or moved to some special location.

On the other hand, according to the Node.js documentation for fs.watch, node uses kqueue on Mac to implement it.
And according to the kqueue documentation, it uses the file descriptor as the identifier, and a file descriptor is bound to the file itself rather than to its path. So when the file is renamed, kqueue keeps tracking it under the new name; that's why we lose the status of the file after the first 'rename' event.
In our case, however, we wish to identify the file by its path rather than by its file descriptor.

To solve this issue, we have 2 potential solutions:

  1. Apply fs.watch to the directory that holds the source file, in addition to the source file itself.
    When the file is updated directly, as TextMate does, the watcher on the file raises the 'change' event.
    When the file is updated atomically, as RubyMine does, the watcher on the directory raises two 'rename' events.
    So theoretically, we could track the change of the file no matter how it is updated.

  2. Use the old-fashioned fs.watchFile function, which tracks changes with fs.stat.
    Compared to fs.watch, fs.watchFile is less efficient because of its polling mechanism, but it does track the file by name rather than by file descriptor, so it won't be fooled by the fancy atomic writing.

Obviously, the first solution looks better than the second, because it uses events rather than old-fashioned polling. Even the documentation of fs.watchFile says to use fs.watch instead of fs.watchFile when possible.

But in practice it is painful to write such code, since the 'rename' event on the directory is not only triggered by file updates; it can also be triggered by adding or removing files.

Also, the 'rename' event is triggered twice when a file is updated. Obviously we cannot rebuild when the first 'rename' event fires, or the build might fail because the file is momentarily absent, and we would trigger two builds within a very short period of time.

So in fact, to solve our problem, the polling fs.watchFile is more useful; its old-fashioned design protects it from being fooled by the "fancy" atomic file writing.

So finally, we got the following code:

fs.watchFile
runInWatch = (options, task) ->
  return task(options) unless options.watch
  console.info "INFO: Watching..."
  files = fs.readdirSync getLocalPath('source')
  console.log "Tracking files:"
  for file in files
    do (file) -> # capture `file` per iteration for the callbacks below
      console.log "#{file}"
      fs.watchFile getLocalPath('source', file), (current, previous) ->
        unless current.mtime.getTime() == previous.mtime.getTime()
          console.log "#{file} Changed..."
          task(options)

HINT: Be careful about the differences between fs.watch and fs.watchFile:

  • The meaning of the filename parameter
    fs.watch accepts a relative path such as 'source.jade' as well as a full path such as '/path/to/source.jade'; fs.watchFile only accepts the full path '/path/to/source.jade'.
  • Callback invocation condition
    fs.watch invokes the callback when the file is renamed or changed; fs.watchFile invokes the callback when the file is accessed, including both writes and reads.
    So you need to compare the mtime of the stat objects; the file has changed when the mtime changed (see the sketch below).
  • Response time
    fs.watch uses events, which capture the change almost in realtime; fs.watchFile uses polling, so the notification may be deferred for a while. By default, the delay can be up to 5s.
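
A minimal sketch contrasting the two APIs ('/path/to/source.jade' is just a placeholder path):

var fs = require('fs');

// fs.watch: event based, fires 'rename' or 'change' almost in realtime
fs.watch('/path/to/source.jade', function (event, filename) {
  console.log(event, filename);
});

// fs.watchFile: polling based, fires on any access,
// so compare mtime to detect a real change
fs.watchFile('/path/to/source.jade', function (current, previous) {
  if (current.mtime.getTime() !== previous.mtime.getTime()) {
    console.log('content changed');
  }
});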

exports vs module.exports in node.js

I was confused about how the require function works in Node.js for a long time. I found that when I require a module, sometimes I get the object I want, but sometimes I just get an empty object, which gave me the impression that we cannot export an object by assigning it to exports, yet somehow we can export a function by assignment.

Today, I re-read the document, and I finally understood how I had misunderstood the require mechanism.

I clearly remember this sentence in the doc:

In particular module.exports is the same as the exports object.

So I believed that exports was just a shortcut alias to module.exports, and that we could use one instead of the other without worrying about any difference between the two.
But this understanding proved to be wrong: exports and module.exports are different.

Today I found this in the doc:

The exports object is created by the Module system. Sometimes this is not acceptable, many want their module to be an instance of some class. To do this assign the desired export object to module.exports.

So it says that module.exports is different from exports, and if you export something by assignment, you need to assign it to module.exports.

Let’s try to understand these sentences deeper by code examples.

Regarding the sentence

The exports object is created by the Module system.

the words "created by" actually mean that when Node.js loads a JavaScript file, before executing any line of your code, the module system executes the following for you:

var exports = module.exports

So the actual interface of Node.js's module system is the module object, and the actual exported object is module.exports, not exports.
And exports is just a normal variable with no "magic" in it: if you assign something to it, the variable is simply rebound.

That's why I failed to get the exported object when I assigned it to the exports variable.
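
A minimal sketch of what happens (greeter.js is a hypothetical module):

// greeter.js
exports = function () { console.log('hi'); };
// Only the local variable `exports` is rebound;
// module.exports still points to the original empty object.

// main.js
var greeter = require('./greeter');
console.log(typeof greeter); // "object", not "function" -- the export was lost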

So to export some object as a whole, we should always assign it to module.exports.
At the same time, unless there is a good reason not to, we'd better keep the convention that exports is a shortcut alias to module.exports, so we should also assign module.exports back to exports.

In conclusion, to export something in Node.js by assignment, we should always follow this pattern:

exports = module.exports = {
  ...
}

A way to expose singleton object and its constructor in node.js

In the Node.js world, we usually encapsulate a service into a module, which means the module needs to export the façade of the service. In most cases the service can be a singleton: all apps use the same service instance.

But in some rare cases, people might want to create several instances of the service, which means the module also needs to export the service constructor.

A very natural idea is to export the default service and expose the constructor as a method of the default instance, so we could consume the service this way:

Ideal Usage
var defaultService = require('service');
var anotherService = defaultService.newService();

So we need to write the module in this way:

Ideal Export
function Service() { }

module.exports = new Service();
module.exports.newService = Service;

But for some reason (or so I thought; see the note below), Node.js doesn't allow a module to expose an object by assigning it to module.exports.
To export a whole object, it seems necessary to copy all the members of the object onto module.exports, which drives out all kinds of tricky code.

Note: I misunderstood how node.js require works, and HERE is the right understanding. Even though I misunderstood the mechanism, the conclusion of this post still holds: exporting a function is the more convenient way to export both the default instance and the constructor.

And things become much worse when there are back references from object properties to the object itself.
To solve this problem gracefully, we need to change our mind.
Since exporting an object has proved tricky, can we expose the constructor instead?

The answer is yes. Node.js does allow us to assign a function to module.exports to export that function.
So we get this code:

Export Constructor
function Service() { }

module.exports = Service;

Then we can create a service instance this way:

Create Service
var Service = require('service');
var aService = new Service();

As you see, since what we exported is the constructor, we need to create an instance manually before using it. Another problem is that we lose the shared instance between module users, and sharing the same service instance between users is a common requirement.

How do we solve this? As we know, a function is also a kind of object in JavaScript, so we could add a member to the constructor, say default, which holds the shared instance of the service.

This solution works, but not gracefully. A crazy but fancy idea: can we transform the constructor itself into a kind of singleton instance?! That means you could do this:

Export Singleton
var defaultService = require('service');
defaultService.foo();

var anotherService = defaultService();
anotherService.foo();

Does the code style look familiar? Yes: jQuery, and many other well-designed JS libraries, work this way.
So our idea is feasible, but how?

Great thanks to JavaScript's prototype system (or maybe SELF's prototype system, to be more accurate), we can simply make a service instance the constructor's prototype:

Actual Export
function Service() {
  // Return a fresh instance when called without `new`,
  // so that `defaultService()` in the usage above works
  if (!(this instanceof Service)) return new Service();
}
module.exports = Service;
Service.__proto__ = new Service();

It sounds crazy, but it works, and gracefully! That's the beauty of JavaScript.

Make javascript node.tmbundle work with TextMate under node.js 0.6.5

I downloaded the TextMate bundle for Node.js, but the bundle doesn't work properly.
When I pressed cmd+R to run a JavaScript file, it reported that it cannot get the variable "TM_FILE" from undefined,
and the console output contained a warning that the module "sys" has been renamed to "util".

To fix these two issues, some modifications to the command script are needed:

  1. Open the Bundle Editor in TextMate, and edit the command script of "Run File or Spec" under the "JavaScript Node" category.
  2. Change var sys = require("sys"); to var sys = require("util"); to fix the warning.
  3. Replace all instances of process.ENV. with process.env. (see the sketch below).
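
Here is a hypothetical before/after of the kind of change described; the actual command script in the bundle is longer, and these lines are only illustrative:

// Before:
var sys = require("sys");       // triggers the "sys is renamed to util" warning
var file = process.ENV.TM_FILE; // fails: process.ENV is undefined

// After:
var sys = require("util");
var file = process.env.TM_FILE;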

After the modification, close the Bundle Editor and ask TextMate to reload all the bundles.
The command will now work perfectly.


There is another trick: since TextMate already ships a bundle called JavaScript, this Node.js bundle isn't activated when you edit a .js file.
You need to press ctrl+alt+n to activate the bundle manually.

This can be fixed by changing the scope selector of all the snippets and commands from "source.js.node" to "source.js".