Complex Value Array in Stylus

Stylus is an awesome CSS pre-processor, which provides a much more concise syntax and more powerful features than its competitors, such as LESS or SCSS.

But with more and more features added to Stylus, its syntax has become somewhat overloaded, and pitfalls come up.

I wanted to declare an array of values for the box-shadow property, so that I could reference them by index:

drop-shadows = [
  0 2px 10px 0 rgba(0, 0, 0, 0.16),
  0 6px 20px 0 rgba(0, 0, 0, 0.19),
  0 17px 50px 0 rgba(0, 0, 0, 0.19),
  0 25px 55px 0 rgba(0, 0, 0, 0.21),
  0 40px 77px 0 rgba(0, 0, 0, 0.22)
]
drop-shadow(n)
  box-shadow drop-shadows[n]
for i in (1..5)
  .drop-shadow-{i}
    drop-shadow(i)

And I expected it to generate:

.drop-shadow-1 {
  box-shadow: 0 2px 10px 0 rgba(0, 0, 0, 0.16);
}
.drop-shadow-2 {
  box-shadow: 0 6px 20px 0 rgba(0, 0, 0, 0.19);
}
.drop-shadow-3 {
  box-shadow: 0 17px 50px 0 rgba(0, 0, 0, 0.19);
}
.drop-shadow-4 {
  box-shadow: 0 25px 55px 0 rgba(0, 0, 0, 0.21);
}
.drop-shadow-5 {
  box-shadow: 0 40px 77px 0 rgba(0, 0, 0, 0.22);
}

But I found there is no such thing as an Array in Stylus!
There is only Hash, and a Hash doesn't accept a number as a key.
So I came up with something like this:

drop-shadows = {
  '1': 0 2px 10px 0 rgba(0, 0, 0, 0.16),
  '2': 0 6px 20px 0 rgba(0, 0, 0, 0.19),
  '3': 0 17px 50px 0 rgba(0, 0, 0, 0.19),
  '4': 0 25px 55px 0 rgba(0, 0, 0, 0.21),
  '5': 0 40px 77px 0 rgba(0, 0, 0, 0.22)
}
drop-shadow(n)
  box-shadow drop-shadows['' + n]
for i in (1..5)
  .drop-shadow-{i}
    drop-shadow(i)

This piece of code has a bunch of pitfalls:

  1. A Hash doesn't accept a number as a key, so 1: 0 2px 10px 0 rgba(0, 0, 0, 0.16) causes a compile error.
  2. '1' != 1, so drop-shadows[1] returns null.
  3. There is no type conversion function in Stylus, so I used the same trick as in JavaScript: '' + n converts n into a string.

Then I found that Stylus provides something called a List, which is pretty much the same as an array in other languages, except for the syntax.

drop-shadows = 0 2px 10px 0 rgba(0, 0, 0, 0.16),
  0 6px 20px 0 rgba(0, 0, 0, 0.19),
  0 17px 50px 0 rgba(0, 0, 0, 0.19),
  0 25px 55px 0 rgba(0, 0, 0, 0.21),
  0 40px 77px 0 rgba(0, 0, 0, 0.22)
drop-shadow(n)
  box-shadow drop-shadows[n - 1] // list indices are zero-based
for i in (1..5)
  .drop-shadow-{i}
    drop-shadow(i)

So no brackets or parentheses are needed. Note that list indices are zero-based, hence the drop-shadows[n - 1] above.

Page renders improperly in IE before the developer tools are opened

Today I found a super annoying issue with IE. Our website works perfectly in every browser except IE; the page isn't rendered properly in IE 9. Well, that part is common, it is the nature of IE. The mysterious part is that once you have opened the developer tools (even just once), opening or refreshing the page makes the problem magically disappear!

In other words, opening the developer tools changes the browser's behavior! What the hell! You know something is wrong, but to see the error message you have to open the developer tools, and once you open them the bug is gone. DEAD END!

Because I could not open the developer tools, I had to debug with alert. It was a really horrible experience, like inspecting a nuclear reaction with a plain optical magnifier, or fixing a high-tech spacecraft with stones and clubs.

Since it is a client-rich page, a lot of JavaScript is involved, so I could not go through the scripts line by line. Instead I had to make an assumption to explain the phenomena I spotted, validate it with experiments, and then correct or extend the assumption according to the validation result.

During the process I invalidated a couple of assumptions, some of which seemed very close to the "right answer", such as "some script is loaded and executed before its dependencies, and the developer tools load all the scripts first because they display all of them".

After spending a couple of hours on it, I set my eye on a line of code I would never have suspected: console.warn.

Code that breaks the page rendering
console.warn('__proto__ is not supported by current browser, fallback to hard-copy approach');

It displays a warning message on the console when a workaround is applied. But a tricky fact about IE 9 is that console isn't available until the developer tools are opened (MSDN reference here)!

The fact that console is not available until the developer tools are opened really blew my mind! (Maybe that is because I have little experience working with IE.) As a Chrome user, I take console as the universal logging facility for JavaScript. But in IE, according to the documentation, code should check for the existence of console every time it prints a log.

There is another pitfall here, and I actually saw someone post it as an answer on StackOverflow:

Bad polyfill implementation
if (typeof console == "undefined") {
    this.console = { log: function (msg) { alert(msg); } };
}

We usually access the console as console.log, which makes console feel like a global object. But console is actually a member of window, so its full name is window.console. When console exists, we can of course reference it as console. But if it doesn't exist, the bare identifier console raises a script error (a typeof check or an access via window.console is safe, a direct reference is not). So the following code doesn't work:

Pitfalls in console existence check
if (typeof console === 'undefined') { // Safe: typeof never throws, even for undeclared names
    // console is missing here, so we would need to install a fallback
}
if (console != null) { // Breaks the script: bare reference to the undeclared 'console'
    console.log('Never got executed');
}
if (console) { // Breaks the script as well
    console.log('Never got executed');
}
if (window.console) { // Safe: property access on window never throws
    console.log('this works!');
}

To avoid the console issue, a polyfill can be very useful. Here is a great implementation available as a bower package: console-polyfill
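
If you only need to keep stray logging calls from blowing up in IE 9, a hand-rolled guard is enough. Below is a minimal sketch of the idea (it is not the console-polyfill package itself): it stubs out the common console methods with no-ops when the real console is missing.

Minimal no-op console guard (sketch)
(function (global) {
    if (global.console) { return; }  // a real console exists, nothing to do
    var noop = function () {};
    var methods = ['log', 'info', 'warn', 'error', 'debug'];
    var fake = {};
    for (var i = 0; i < methods.length; i++) {
        fake[methods[i]] = noop;     // every method becomes a harmless no-op
    }
    global.console = fake;
}(window));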

JavaScript Prototype Chain Mutator

In the JavaScript world, JSON serialization is widely used. When fetching data from a server via Ajax, the data is usually represented as JSON; when loading configuration or data from a file in a Node.js application, it is usually in JSON format as well.

JSON serialization is powerful and convenient, but it has a limitation. For security and other reasons, behavior and type information are not allowed in JSON: function members are dropped when a JavaScript object is stringified, and functions cannot be expressed in JSON at all.

Compared with YAML in Ruby, which can carry type information, this limitation is inconvenient when writing a JavaScript application. For example, to consume JSON data fetched from the server via Ajax, I really wish I could invoke methods on the deserialized model.

Here is a simple example:

Ideal World
class Rect
  constructor: (width, height) ->
    @width = width if width?
    @height = height if height?
  area: ->
    @width * @height
$.get '/rect/latest', (rectJSON) ->
  rect = JSON.parse(rectJSON)
  console.log rect.area() # This doesn't work, because rect is a plain object

The code doesn't work because rect is a plain object, which doesn't contain any behavior. Some people call such a rect a DTO (Data Transfer Object) or a POJO (Plain Old Java Object), concepts borrowed from the Java world. Here we call it a DTO.

There are various approaches to adding behavior to a DTO, such as creating a behavior wrapper around it, or creating a new model with behavior and copying all the data from the DTO into the model. These practices are borrowed from the Java world, or more generally the traditional object-oriented world.

In JavaScript, there is a better and smarter way to achieve this: object mutation, altering the object's prototype chain on the fly to turn a plain object into an instance of a specific type. The process is quite similar to genetic mutation in biology, turning one species into another by altering its genes, which is why I borrowed the term mutation.

With this idea, we can write:

Mutate rect with Mutator
class Rect
  constructor: (width, height) ->
    @width = width if width?
    @height = height if height?
  area: ->
    @width * @height
$.get '/rect/latest', (rectJSON) ->
  rect = JSON.parse(rectJSON)
  mutate(rect, Rect)
  console.log rect.area()

The key to implementing the mutate function is to simulate the behavior of the new operator: alter object.__proto__ and apply the constructor to the instance! For more detail, check out the mutator library, which is available as both an NPM package and a Bower package.
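
To make the idea concrete, here is a minimal sketch of such a mutate function (it is not the actual source of the mutator library): repoint the object's prototype, then re-run the constructor against the existing data.

A minimal mutate sketch
function mutate(obj, Type) {
    if (Object.setPrototypeOf) {
        Object.setPrototypeOf(obj, Type.prototype); // standardized in ECMAScript 6
    } else {
        obj.__proto__ = Type.prototype;             // legacy engines; not available in IE 10 and older
    }
    Type.call(obj); // apply the constructor to the existing instance (Rect tolerates missing arguments)
    return obj;
}

// Usage: turn parsed JSON into something that behaves like a Rect instance.
var rect = JSON.parse('{"width": 3, "height": 4}');
mutate(rect, Rect);        // assumes a Rect constructor like the one defined above
console.log(rect.area());  // 12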

When implementing the mutator I found that in IE, again, the evil IE, the idea doesn't work. Before IE 11, the prototype chain of an existing instance is not accessible: there is nothing equivalent to object.__proto__ in IE 10 and earlier. The closest workaround is a hard copy of all the members, but that still fails type checks and some dynamic usages.

Background

object.__proto__ was a Mozilla-specific extension until it was standardized in ECMAScript 6.
It is interesting that most JavaScript engines support it, except IE.
Luckily, IE 11 introduced some ECMAScript 6 features, and object.__proto__ is one of them.

Converting between HTML5 data-attribute hyphenated names and JavaScript camel-case names

I found a bug in widget.coffee today. To fix the issue, I needed to convert between HTML5 data-attribute names and JavaScript function names, e.g. between data-action-handler and actionHandler.

Taking the jQuery implementation as a reference, I came up with a few utility functions for the conversion:

NameConversion
Utils =
  hyphenToCamelCase: (hyphen) -> # Convert 'action-handler' to 'actionHandler'
    hyphen.replace /-([a-z])/g, (match, letter) ->
      letter.toUpperCase()
  camelCaseToHyphen: (camelCase) -> # Convert 'actionHandler' to 'action-handler'
    camelCase.replace(/([A-Z])/g, '-$1').toLowerCase()
  attributeToCamelCase: (attribute) -> # Convert 'data-action-handler' or 'action-handler' to 'actionHandler'
    Utils.hyphenToCamelCase attribute.replace(/^(data-)?(.*)/, '$2')
  camelCaseToAttribute: (camelCase) -> # Convert 'actionHandler' to 'data-action-handler'
    'data-' + Utils.camelCaseToHyphen(camelCase)

Here is a more solid JavaScript implementation based on the previous one.

A solid JavaScript version
var Utils = (function() {
    function hyphenToCamelCase(hyphen) {
        return hyphen.replace(/-([a-z])/g, function(match, letter) {
            return letter.toUpperCase();
        });
    }
    function camelCaseToHyphen(camelCase) {
        return camelCase.replace(/([A-Z])/g, '-$1').toLowerCase();
    }
    function attributeToCamelCase(attribute) {
        return hyphenToCamelCase(attribute.replace(/^(data-)?(.*)/, '$2'));
    }
    function camelCaseToAttribute(camelCase) {
        return 'data-' + camelCaseToHyphen(camelCase);
    }
    return {
        hyphenToCamelCase: hyphenToCamelCase,
        camelCaseToHyphen: camelCaseToHyphen,
        attributeToCamelCase: attributeToCamelCase,
        camelCaseToAttribute: camelCaseToAttribute
    };
})();
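
For reference, here is how the helpers above behave on the example names:

console.log(Utils.hyphenToCamelCase('action-handler'));         // "actionHandler"
console.log(Utils.camelCaseToHyphen('actionHandler'));          // "action-handler"
console.log(Utils.attributeToCamelCase('data-action-handler')); // "actionHandler"
console.log(Utils.camelCaseToAttribute('actionHandler'));       // "data-action-handler"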

Remove Bower from your build script

The mysterious broken build

This morning, our QA told us that knockout, a JavaScript library that we use in our web app, was missing on the staging environment. We checked the package she got from the CI server, and the library was indeed not included. But when we generated the package on our local dev box, knockout was included.

This was a big surprise, because we share the exact same build scripts and environment between dev boxes and CI agents, and because we manage the front-end dependencies with bower. In our gulp script, we ask bower to install the dependencies every time to make sure they are up to date.

The root cause of the broken build

After spending hours diagnosing the CI agents, we finally figured out the reason. It is a tricky story:

When the Knockout maintainers released the v3.1 bower package, they made a mistake in the bower.json config file, which packaged the spec folder instead of the dist folder. So the package was actually broken, because the main JavaScript file dist/knockout.js described in bower.json didn't exist.

Later, the engineers realized the mistake and fixed the issue by releasing a new package. Probably because they hadn't changed any script logic, they released the new package under the same version number, and that is the culprit that broke our builds.

We were unlucky enough that the broken package was downloaded on our CI server the first time our build script was executed there, and it was stored in the bower cache at that point.

Because of Bower's caching mechanism, the broken package keeps being used unless the version is bumped or the cache expires. That is why our build was broken on the CI server.

But on our dev box, for some reason, we had run bower cache clean, which invalidated the cache. That is why we could generate a good package locally.

This is a very tricky issue when using bower to manage dependencies. Although it is not completely our fault, it is close to the worst case we could face: the build broke silently, and there were no error logs or messages to help figure out the reason. (Well, we haven't had a chance to set up a smoke test for our app yet, so it is kind of our fault.)

We thought we had been careful by cleaning the bower_components folder every time, but that actually prevented us from figuring out the real cause.

After fixing this issue, I discussed it with my pair Rafa, and we came up with some practices that could help avoid this kind of problem:

Best practices

  • Avoid bower install or any equivalent step (such as gulp-bower, grunt-bower, etc.) in the build script
  • Check bower_components into the code repository, or download the dependencies from a self-managed repository for large projects.
  • When dependencies are changed, manually install them and make sure they’re good.

After doing this, our build script runs even faster, because we no longer check that all dependencies are up to date on every run. That is a bonus of removing bower install from the build script.

Some thoughts on the package system

Bower components are maintained by the community, and there is no strict quality control to ensure a package is bug-free or released in an appropriate way. So it is safer to check them manually and lock them down across environments.

This could be a common issue for any community-managed package system: not just Bower, but also Maven, Ruby Gems, Node.js packages, Python pip packages, NuGet packages, or even Docker images!

Process.nextTick Implementation in Browser

Recursion is a common technique in JavaScript programming, and deep recursion causes stack overflow errors.
Some languages resolve this issue with automatic tail call optimization, but in JavaScript we need to take care of it on our own.

To solve the issue, Node.js provides the utility function process.nextTick, which ensures that a given callback is invoked only after the current call stack has unwound.
In the browser there is no standard equivalent, so workarounds are needed.
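
As a rough illustration (not part of the node-process code below), here is how rescheduling each recursive step with nextTick avoids growing the stack. It assumes process.nextTick is available, natively in Node.js or via the shim shown below in a browser:

Deep recursion vs. nextTick
function countDownSync(n) {
    if (n === 0) return;
    countDownSync(n - 1);          // every call adds a stack frame
}

function countDownAsync(n) {
    if (n === 0) return;
    process.nextTick(function () { // the current stack unwinds before the next step runs
        countDownAsync(n - 1);
    });
}

// countDownSync(1000000);  // typically throws "Maximum call stack size exceeded"
countDownAsync(1000000);    // completes without growing the stack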

Thanks to Roman Shtylman (@defunctzombie), who created node-process for Browserify, which simulates the Node.js process API in a browser environment.
Here is his implementation:

node-process

process.nextTick implementation
process.nextTick = (function () {
    var canSetImmediate = typeof window !== 'undefined'
        && window.setImmediate;
    var canPost = typeof window !== 'undefined'
        && window.postMessage && window.addEventListener;
    if (canSetImmediate) {
        return function (f) { return window.setImmediate(f) };
    }
    if (canPost) {
        var queue = [];
        window.addEventListener('message', function (ev) {
            var source = ev.source;
            if ((source === window || source === null) && ev.data === 'process-tick') {
                ev.stopPropagation();
                if (queue.length > 0) {
                    var fn = queue.shift();
                    fn();
                }
            }
        }, true);
        return function nextTick(fn) {
            queue.push(fn);
            window.postMessage('process-tick', '*');
        };
    }
    return function nextTick(fn) {
        setTimeout(fn, 0);
    };
})();

Here are some comments on the implementation.

setTimeout

To simulate the nextTick behavior, setTimeout(fn, 0) is a well-known and easy-to-adopt approach. The problem is that setTimeout is relatively heavy; browsers clamp nested timeouts to a minimum delay of about 4 ms, so calling it in a loop causes a significant performance hit. We should use a cheaper approach when possible.

setImmediate

There is a function called setImmediate, which behaves quite similarly to nextTick, with a few differences in how it treats I/O. But in a browser environment there is no such I/O concern, so we can safely replace nextTick with it.

Immediates are queued in the order created, and are popped off the queue once per loop iteration. This is different from process.nextTick which will execute process.maxTickDepth queued callbacks per iteration. setImmediate will yield to the event loop after firing a queued callback to make sure I/O is not being starved. While order is preserved for execution, other I/O events may fire between any two scheduled immediate callbacks.

setImmediate(callback, [arg], [...]), from the Node.js documentation

The setImmediate function is a perfect replacement for nextTick, but it is not supported by all browsers. Only IE 10 and Node.js 0.10+ support it; Chrome, Firefox, Opera, and the mobile browsers don't.

Note: This method is not expected to become standard, and is only implemented by recent builds of Internet Explorer and Node.js 0.10+. It meets resistance both from Gecko (Firefox) and Webkit (Google/Apple).

window.setImmediate, from MDN

window.postMessage

window.postMessage gives developers access to the browser's message queue. With a bit of additional code, we can simulate the nextTick behavior on top of it. It works in most modern browsers, except IE 8: there the API is implemented synchronously, which adds an extra stack frame, so it cannot be used to simulate nextTick.

Overall, there is no perfect workaround for the nextTick issue right now. All the solutions have their own limitations; we can only hope the issue gets resolved in a future ECMAScript standard.

Android Studio 0.6.1 SDK recognition issue when using Android SDK 19 and Gradle

A few days ago I upgraded my Android Studio to version 0.6.1 and migrated my Android project's build system from Maven to Gradle. Then the nightmare began!

Android Studio Version

It looks like there is an issue with Android Studio 0.6.1, which cannot recognize the jar files in Android SDK 19 (4.4 KitKat). As a consequence, none of the fundamental Android classes are recognized properly, which makes the IDE almost unusable.

Classes Not Recognized

After spending days googling and experimenting, I realized the issue is that Android Studio doesn't recognize the SDK 19 content properly.

Here is the content of Android SDK 19 that Android Studio 0.6.1 identified:

SDK in Android Studio

For comparison, here is the proper content of Android SDK 19 with the Google APIs:

SDK in IDEA

And here is the proper content of Android SDK 19 retrieved from the Maven repository:

Maven SDK

From the lists, you can easily see that the android.jar file is missing! That is why the classes are not recognized properly. Moreover, if you compare the list against JDK 1.6, you will find that most of the content is the same.

JDK

Ideally, fixing this issue should be quite easy: Android Studio provides a Project Settings dialog that allows developers to adjust the SDK configuration.

Project Settings Dialog:

Project Settings

But for Gradle projects, Android Studio displays a greatly simplified project settings dialog instead of the original one, which no longer allows developers to configure the SDK.

Gradle Project Settings Dialog:

Project Settings

For now, I have figured out several potential workarounds to this issue; I hope they help:

  1. Downgrade the SDK version from 19 to 18.
    If you don't really need SDK 19 features, downgrading the SDK version to 18 fixes the issue.
  2. Use IntelliJ IDEA instead of Android Studio.
    I encountered a different issue when using IDEA: it failed to sync the Gradle file.
  3. Use Maven or Ant instead of Gradle.
    Gradle is powerful, but there are too many environment issues when using it with IDEs… Maven is relatively more stable.

I haven't figured out a perfect solution to this issue; I just hope Google can fix it as soon as possible.

Is the Android API documentation on ConsumerIrManager lying?

I just found a shocking fact: the Android API documentation for the ConsumerIrManager.transmit method is wrong!

KitKat released its own infrared blaster API, which is incompatible with the legacy Samsung private API. So I was working on an Android infrared library that adapts automatically to both the Samsung private API and the official KitKat API.

After I finished the code according to the documentation, I found the app broke on my Galaxy Note 3 running KitKat, while it worked perfectly on Jelly Bean.

I also noticed that it takes longer to transmit the same sequence after switching to the new API. (When the IR blaster is working, the LED indicator on the phone turns blue, and the indicator stays blue significantly longer than before.) And my IRRecorder could no longer recognize the sequence sent by my phone.

After spending several hours, I figured out the reason: the pattern was encoded in the wrong way. But I'm pretty sure I strictly followed the API documentation.

So my conclusion is that the ConsumerIrManager implementation on the Samsung Note 3 differs from what is described in the Android API documentation. However, I'm not sure whether the Android documentation is lying or Samsung implemented the driver incorrectly.

Here are the technical details of the issue and its solution:

An IR command is transmitted by turning the IR blaster LED on and off for certain periods of time. So each IR command can be represented by a series of durations indicating how long the LED is on or off. The difference between the Samsung API and the KitKat API is how that time is measured.

carrierFrequency The IR carrier frequency in Hertz.
pattern The alternating on/off pattern in microseconds to transmit.

According to the Android Developer Reference, the time in KitKat is measured in microseconds.

But for Samsung, the time is measured in number of carrier cycles. Take NEC encoding as an example: the carrier frequency is 38 kHz, so the cycle time T ≈ 26 µs. A BIT_MARK is 21 cycles, which is a period of about 26 µs × 21 ≈ 546 µs.

So ideally, ignoring the lead-in and lead-out sequences, to send the code 0xA in NEC encoding, the Samsung API needs 21 60 21 21 21 60 21 21, while the KitKat API needs 560 1600 560 560 560 1600 560 560.

But according to my experience, the Android Developer Reference is wrong. Even in KitKat, the pattern is measured in number of cycles instead of microseconds!

So to fix the issue, you need a little math. Here is the conversion formula:

n = t / T = t * f / 1000000
n: the number of cycles
t: the time in microseconds
T: the cycle time in microseconds
f: the transmitting frequency in Hertz
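
As a sketch of that conversion (a hypothetical helper written in JavaScript purely to illustrate the arithmetic, not code from the library), converting a microsecond pattern into cycle counts looks like this:

Microseconds to cycles (sketch)
function microsToCycles(pattern, frequencyHz) {
    return pattern.map(function (micros) {
        // n = t * f / 1,000,000, rounded to the nearest whole cycle
        return Math.round(micros * frequencyHz / 1000000);
    });
}

// NEC example at 38 kHz:
console.log(microsToCycles([560, 1600, 560, 560, 560, 1600, 560, 560], 38000));
// [21, 61, 21, 21, 21, 61, 21, 21], close to the 21/60 cycle pattern above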

Use Jade as client-side template engine

Jade is a concise and powerful JavaScript HTML template engine. Thanks to its awesome syntax and feature set, it has almost become the default template engine for Node.js web servers.

Jade is well known as a server-side HTML template engine, but it can also be used as a client-side template engine, which few people know! To understand how, we should first look at how the Jade engine works.

When translating a jade file into HTML, the Jade engine actually performs two separate tasks: compiling and rendering.

Compiling

Compiling is an almost transparent step when rendering jade files directly into HTML, for example with the jade CLI tool, but it is actually the most important part of the translation.
Compiling translates the jade file into a JavaScript function, and during this process all the static content is converted.

Here is a simple example:

Jade template
doctype html
html(lang="en")
  head
    title Title
  body
    h1 Jade - node template engine
    #container.col
      p You are amazing
      p.
        Jade is a terse and simple
        templating language with a
        strong focus on performance
        and powerful features.
Compiled template
function template(locals) {
  var buf = [];
  var jade_mixins = {};
  buf.push('<!DOCTYPE html><html lang="en"><head><title>Title </title></head><body><h1>Jade - node template engine</h1><div id="container"class="col"> <p>You are amazing</p><p>Jade is a terse and simple\ntemplating language with a\nstrong focus on performance\nand powerful features.</p></div></body></html>');
  return buf.join("");
}

As you can see, the template is translated into a JavaScript function which contains all the HTML content. In this case, since we didn't introduce any interpolation, the HTML has been fully generated at compile time.

Things become more complicated when interpolation, each loops, or if statements are introduced.

Jade template with interpolation
doctype html
html(lang="en")
  head
    title =title
  body
    h1 Jade - node template engine
    #container.col
    ul
      each item in items
        li= item
    if usingJade
      p You are amazing
    else
      p Get it!
    p.
      Jade is a terse and simple
      templating language with a
      strong focus on performance
      and powerful features.
Compiled template with interpolation
function template(locals) {
  var buf = [];
  var jade_mixins = {};
  var locals_ = locals || {}, items = locals_.items, usingJade = locals_.usingJade;
  buf.push('<!DOCTYPE html><html lang="en"><head><title>=title </title></head><body><h1>Jade - node template engine</h1><div id="container"class="col"></div><ul>');
  (function() {
    var $$obj = items;
    if ("number" == typeof $$obj.length) {
      for (var $index = 0, $$l = $$obj.length; $index < $$l; $index++) {
        var item = $$obj[$index];
        buf.push("<li>" + jade.escape(null == (jade.interp = item) ? "" : jade.interp) + "</li>");
      }
    } else {
      var $$l = 0;
      for (var $index in $$obj) {
        $$l++;
        var item = $$obj[$index];
        buf.push("<li>" + jade.escape(null == (jade.interp = item) ? "" : jade.interp) + "</li>");
      }
    }
  }).call(this);
  buf.push("</ul>");
  if (usingJade) {
    buf.push("<p>You are amazing</p>");
  } else {
    buf.push("<p>Get it!</p>");
  }
  buf.push("<p>Jade is a terse and simple\ntemplating language with a\nstrong focus on performance\nand powerful features.</p></body></html>");
  return buf.join("");
}
Data for interpolation
{
  "title": "Jade Demo",
  "usingJade": true,
  "items": [
    "item1",
    "item2",
    "item3"
  ]
}
Output Html
<!DOCTYPE html>
<html lang="en">
  <head>
    <title>=title </title>
  </head>
  <body>
    <h1>Jade - node template engine</h1>
    <div id="container" class="col"></div>
    <ul>
      <li>item1</li>
      <li>item2</li>
      <li>item3</li>
    </ul>
    <p>You are amazing</p>
    <p>
      Jade is a terse and simple
      templating language with a
      strong focus on performance
      and powerful features.
    </p>
  </body>
</html>

Well, as you can see, the function has become quite a bit more complicated than before. It gets even more complicated when extends, include, or mixin are introduced; you can try it on your own.

Rendering

After compiling, the rendering step is quite simple: just invoke the compiled function, and the returned string is the rendered HTML. The only thing worth mentioning is that the interpolation data should be passed to the template function as locals.
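
For example, using the Jade JavaScript API (a minimal sketch, assuming the jade package is installed):

Compile then render
var jade = require('jade');

// Compiling returns a plain JavaScript function...
var template = jade.compile('p= message');

// ...and rendering is just calling it with the locals.
var html = template({ message: 'Hello from Jade' });
console.log(html); // <p>Hello from Jade</p>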

Using Jade as a front-end template engine

By now you have probably got the idea. To use jade as a front-end template engine, we compose the template in jade, compile it into a JavaScript file, and then invoke the resulting JavaScript function on the front end to achieve dynamic client-side rendering!

Since the Jade templates are precompiled on the server side, there is very little runtime effort when rendering them on the client side, which makes this a cheap solution when you have lots of templates.

To compile jade files into JavaScript instead of HTML, pass the -c or --client option to the jade CLI tool, or call jade.compile instead of jade.render when using the JavaScript API.

Configure Grunt

Since Grunt is popular in the Node.js world, we can also use Grunt to do this for us.
Basically, using Grunt for jade is straightforward, but it gets a little tricky when you want to compile the back-end templates into HTML while compiling the front-end templates into JavaScript.

I used a little trick to solve the issue: following the Rails convention, I prefix the front-end template files with an underscore.
So

/layouts/default.jade -> Layout file, extended by back-end/front-end templates, should not be compiled.
/views/settings/index.jade -> Back-end template, should be compiled into HTML
/views/settings/_item.jade -> Front-end template, should be compiled into JavaScript
Gruntfile.coffee
module.exports = (grunt) ->
  grunt.initConfig
    pkg: grunt.file.readJSON('package.json')
    jade:
      options:
        pretty: true
      compile:
        expand: true
        cwd: 'views'
        src: ['**/*.jade', '!**/_*.jade']
        dest: 'build/'
        ext: '.html'
      template:
        options:
          client: true
          namespace: 'Templates'
        expand: true
        cwd: 'views'
        src: ['**/_*.jade']
        dest: 'build/'
        ext: '.js'
  grunt.loadNpmTasks('grunt-contrib-jade')

I distinguish layouts from templates by file path, and front-end from back-end templates by prefix. The pattern !**/_*.jade excludes the front-end templates when compiling the back-end templates.

This approach should work fine in most cases, but if you are facing a more complicated situation that can't be handled with this trick, try defining your own convention and recognizing it with a custom filter function, as sketched below.
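
For instance, here is a rough sketch, written as a plain JavaScript Gruntfile, that categorizes templates with a hypothetical isFrontEndTemplate predicate instead of the underscore prefix. It relies on the filter property that Grunt's file globbing supports, which receives each matched source path:

Custom filter sketch (Gruntfile.js)
module.exports = function (grunt) {
  // Hypothetical convention: treat *.client.jade files as front-end templates.
  function isFrontEndTemplate(filepath) {
    return /\.client\.jade$/.test(filepath);
  }

  grunt.initConfig({
    jade: {
      options: { pretty: true },
      compile: {
        expand: true,
        cwd: 'views',
        src: ['**/*.jade'],
        dest: 'build/',
        ext: '.html',
        filter: function (filepath) { return !isFrontEndTemplate(filepath); }
      },
      template: {
        options: { client: true, namespace: 'Templates' },
        expand: true,
        cwd: 'views',
        src: ['**/*.jade'],
        dest: 'build/',
        ext: '.js',
        filter: isFrontEndTemplate
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-jade');
};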

Upgrading DSL from CoffeeScript to JSON: Part 1. Migrator

I've been working on Harvester-AdKiller version 2 recently. Version 2 dropped the idea of "Code as Configuration" because of the nature of Chrome extensions: recompiling and reloading the extension every time the configuration changes is a pain in the ass for me as a user.

For security reasons, Chrome extensions disable all JavaScript runtime evaluation features, such as eval or new Function('code'), so it becomes almost impossible to edit code as data and apply it on the fly.

Thanks to version 1, the features and the DSL have almost fully settled, with few updates needed in the near future. So I can use a less flexible language than CoffeeScript as the DSL.

Finally I decided to replace CoffeeScript with JSON, which can easily be edited and applied on the fly.

After introducing the JSON DSL, a migration system became important and urgent to enable DSL upgrades in the future. (This prediction proved solid: I already changed the JSON schema once today.) So I came up with a new migration system:

Upgrader
class Upgrader
  constructor: ->
    @execute()
  execute: =>
    console.log "[Upgrader] Current Version: #{Configuration.version}"
    migrationName = "#{Configuration.version}"
    migration = this[migrationName]
    unless migration?
      console.log '[Upgrader] Latest version, no migration needed.'
      return
    console.log "[Upgrader] Migration needed..."
    migration.call(this, @execute)
  'undefined': (done) ->
    console.log "[Upgrader] Load data from seed..."
    Configuration.resetDataFromSeed(done)
  '1.0': (done) ->
    console.log "[Upgrader] Migrating configuration schema from 1.0 to 1.1..."
    # Do the migration logic here
    done()

The Upgrader is instantiated when the extension starts, after Configuration, which holds the DSL data for runtime usage, has been initialized.

When the execute method is invoked, it checks the current version and looks for an upgrade method matching that version. If one exists, it triggers the migration; otherwise the migration chain is considered complete. Each time a migration finishes, it re-triggers execute for another round of checking.

Adding a migration for a specific schema version is quite simple: just declare a method named after the version number in the Upgrader, as the 1.0 method does. A sketch of such a migration follows below.
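
For instance, a hypothetical 1.1 migration could be attached like this (shown in plain JavaScript against the compiled class; the field names and the version bookkeeping are made up for illustration):

Hypothetical 1.1 migration (sketch)
// CoffeeScript instance methods live on the prototype, so a new migration
// step can be attached by version name. Everything below is illustrative.
Upgrader.prototype['1.1'] = function (done) {
  console.log('[Upgrader] Migrating configuration schema from 1.1 to 1.2...');
  Configuration.data.rules = Configuration.data.filters; // hypothetical schema rename
  delete Configuration.data.filters;
  Configuration.version = '1.2';                          // so the next execute() round moves on
  done();
};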

The 'undefined' method is a special migration, invoked when no previous configuration is found. In that case I initialize the configuration from a seed data JSON file, which is generated from the version 1 DSL.

The seed data generation is also an interesting topic; please refer to the next post in this series (Redefine DSL behavior) for details.