Atom Editor Background Image

Today I spent some time customizing the Atom user stylesheet to display a background image in the editor.
A background image is a feature I have long wanted in both TextMate and Sublime.
Now I finally have it in Atom.

Finding the class for each HTML component is a little bit tricky.
My theme is seti-ui.

Stylesheet to display background image

//.react is needed to avoid impact on the mini-editor
.editor.react {
  // Fix the color strip between the gutter and the editing area.
  background: #0e1112;

  .scroll-view {
    // Align the background image with the editing area.
    padding-left: 0;
    margin-left: 10px;

    // I put the background image under ~/.atom/images
    background: url('/Users/timnew/.atom/images/batou.jpg') no-repeat fixed;

    // Stretch the image.
    // I tried to set it together with the other background properties,
    // but it doesn't work. Looks like a bug in Chrome.
    background-size: cover;
  }

  .cursor-line {
    // Make the cursor line semi-transparent so the background is visible.
    // Adjust the alpha value to fit your taste.
    background: rgba(90, 90, 90, .4) !important;
  }

  .overlayer {
    // Make the overlayer semi-transparent so the background is visible.
    // Adjust the alpha value to fit your taste.
    background-color: rgba(21, 23, 24, .7) !important;
  }
}

Screenshots

Without Tree View

With Tree View

Atom Editor

Atom is a general-purpose text editor from the GitHub folks, powered by the much-loved Node.js and Chromium.

Some say Atom is just another clone of Sublime. Well, I don't quite agree with that; I'd rather say Atom learned a lot from Sublime. And I'd like to explain why.

I think Sublime itself is a great text editor: powerful to use, easy to learn, ready to be hacked, with a mature community, tons of plug-ins, and almost unlimited possibilities.

But on the other hand, I think Sublime sticks too closely to "text". Yes, it is a text editor, but that doesn't mean everything in the editor has to be text-based.
Implementing a custom UI in Sublime isn't an easy job. Because it is built on HTML-based technologies, Atom offers much richer capabilities: CSS3 effects, CSS3 animations, SVG, and a lot more.

Along with Brackets and C9, Atom is one of the important candidates for the next generation of editors. But among them, only Atom is a general-purpose editor.

Since Atom is still at an early stage, there are a number of flaws and issues in it. Beyond those bugs, the biggest problem with Atom is performance.
Loading a text file of several megabytes, or a document thousands of pages long, will kill the editor in no time. (Atom blocks users from loading files larger than 2 MB.)

But still, these issues won't stop me from loving it. As a Node.js and Ruby developer, I find it an awesome companion for my work. ❤️

Migrate Blog to Hexo

This post is a celebration of the migration from Octopress to Hexo.

Octopress has done an excellent job of filling the gap between Jekyll and a full-featured repo-based blog engine. But because of the tech stack it is built on, it isn't really a pleasant framework to use.

The first time I got pissed off by Octopress was at the end of 2012. I came up with the idea of rewriting it in Node.js, but I wasn't able to make it happen, because I was tied up with a new project assignment and didn't have much spare time for the blog engine.

To save effort, I began to customize Octopress by rewriting some code in Octopress and Jekyll, which started the long march of Octopress customization.

I did a number of customizations on Octopress, from ERB templates to Jekyll generators, from Rake scripts to TextMate bundles.

Before I switched to Sublime, I used TextMate for quite a long time. So I customized the Rake scripts and the TextMate bundle, which enabled me to invoke almost every Rake command in TextMate with a hotkey. I could even rename a blog post's file name according to the title in its front-matter without leaving TextMate. Besides the functionality, I also customized the templates and widgets a lot to get better visual effects and a better reading experience.

I've benefited from these customizations a lot. On the other hand, this deep customization blocked me from migrating to Hexo, a better alternative. Even though I found Hexo in early 2013 and believed it was a better blog platform, it was really costly for me to migrate the blog away from Octopress.

Luckily, after years of development, a bunch of tools and libraries have come up that minimize the gap between Octopress and Hexo.
After several days of effort, I finally retired the Octopress engine and completed the journey from Octopress to Hexo.

Page renders improperly in IE until the developer tools have been opened

Today I ran into a super annoying IE issue. Our website works perfectly in every browser except IE: the page isn't rendered properly in IE 9. Well, that part is common; it is the nature of IE. The mysterious part is that once you open (or have ever opened) the developer tools and then load or refresh the page, the problem magically disappears!

In other words, opening the developer tools changes the browser's behavior! What the hell! You know something is wrong, but to read the error message you have to open the developer tools, and once you open them, the bug is gone. Dead end!

Because I couldn't use the developer tools, I had to debug with alert. It was a really horrible experience, like inspecting a nuclear reaction with a plain optical magnifier, or fixing a high-tech spacecraft with stones and clubs.

Since it is a client-rich page, a lot of JavaScript is involved, so I couldn't go through the scripts line by line. Instead, I had to form an assumption that explained the phenomena I spotted, validate it with experiments, and then correct or extend the assumption according to the result.

During the process I invalidated a couple of assumptions, some of which seemed very close to the "right answer", such as "some script is loaded and executed before its dependencies, and the developer tools load all the scripts first because they display all the scripts".

After spending a couple of hours on it, I put an eye on a line of code I really didn't expect: console.warn.

Code breaks the page rendering
console.warn('__proto__ is not supported by current browser, fallback to hard-copy approach');

It displays a warning message in the console when a workaround is applied. But a tricky fact about IE 9 is that console isn't available until the developer tools have been opened (MSDN reference here)!

The fact that console is not available until the developer tools are opened really blew my mind (maybe because I have little experience working with IE). As a Chrome user, I take console for granted as the universal logging facility for JavaScript. But in IE, according to the documentation, the code should check for the existence of console every time it prints a log.

There is another pitfall here, and I saw someone actually post it as an answer on StackOverflow:

Bad polyfill implementation
if (typeof console == "undefined") {
  this.console = { log: function (msg) { alert(msg); } };
}

We usually access the console as console.log, which makes console feel like a globally accessible instance. But console is actually a member of window; its full name is window.console. When console exists, we can certainly reference it directly as console. But when it doesn't exist, evaluating the bare identifier console throws a ReferenceError (although a typeof console check is still safe). So the following code doesn't work as expected:

Pitfalls in console existence check
if (typeof console === 'undefined') { // The typeof check itself is safe...
  console.log('Never got executed');  // ...but this call still throws when console doesn't exist
}

if (console != null) { // Throws a ReferenceError when console doesn't exist
  console.log('Never got executed');
}

if (console) { // Throws a ReferenceError when console doesn't exist
  console.log('Never got executed');
}

if (window.console) { // Safe: window.console is simply undefined, no error
  console.log('this works!');
}

To avoid console issues, a polyfill can be very useful. Here is a great implementation available as a Bower package: console-polyfill
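
The core idea of such a polyfill is simple. Here is a minimal sketch (assuming no-op stubs are enough; the real console-polyfill package covers more methods and edge cases):

Minimal console shim sketch
(function (global) {
  // If a native console already exists (e.g. the developer tools are open), do nothing.
  if (global.console && global.console.log) return;
  var noop = function () {};
  // Stub the commonly used methods so calls like console.warn(...) never throw.
  global.console = { log: noop, info: noop, warn: noop, error: noop, debug: noop };
})(typeof window !== 'undefined' ? window : this);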

JavaScript Prototype Chain Mutator

In the JavaScript world, JSON serialization is widely used. When fetching data from the server via Ajax, the data is usually represented as JSON; when loading configuration or data from a file in a Node.js application, it is usually in JSON format as well.

JSON serialization is powerful and convenient, but it has a limitation: for security and other reasons, behavior and type information are not carried in JSON. Function members are dropped when a JavaScript object is stringified, and functions are not allowed in JSON at all.

Compared to YAML in Ruby, this limitation makes writing JavaScript applications less convenient. For example, to consume JSON data fetched from the server via Ajax, I really wish I could invoke methods on the deserialized model.

Here is a simple example:

Ideal World
class Rect
  constructor: (width, height) ->
    @width = width if width?
    @height = height if height?

  area: ->
    @width * @height

$.get '/rect/latest', (rectJSON) ->
  rect = JSON.parse(rectJSON)
  console.log rect.area() # This doesn't work because rect is a plain object

The code doesn't work because rect is a plain object, which doesn't contain any behavior. Some call such an object a DTO (Data Transfer Object) or POJO (Plain Old Java Object), concepts borrowed from the Java world. Here we call it a DTO.

There are various approaches to adding behavior to a DTO, such as creating a behavior wrapper around the DTO, or creating a new model with behavior and copying all the data from the DTO into it. These practices are borrowed from the Java world, or the traditional object-oriented world.

In fact, in JavaScript there is a better and smarter way to achieve this: object mutation, altering an object's prototype chain on the fly to convert it into an instance of a specific type. The process is quite similar to biological genetic mutation, converting one species into another by altering its genes, which is why I borrowed the term.

With this idea, we can achieve the following:

Mutate rect with Mutator
class Rect
  constructor: (width, height) ->
    @width = width if width?
    @height = height if height?

  area: ->
    @width * @height

$.get '/rect/latest', (rectJSON) ->
  rect = JSON.parse(rectJSON)
  mutate(rect, Rect)
  console.log rect.area()

The key to implementing the mutate function is to simulate the behavior of the new operator: alter object.__proto__ and apply the constructor to the instance. For more detail, check out the mutator library, which is available as both an NPM package and a Bower package.
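
Here is a minimal sketch of the idea in plain JavaScript (assuming a browser that exposes __proto__; the real mutator library handles more edge cases, such as constructors that require arguments):

Minimal mutate sketch
function mutate(obj, Type) {
  // Re-point the prototype chain so obj becomes an instance of Type...
  obj.__proto__ = Type.prototype; // Object.setPrototypeOf(obj, Type.prototype) in ES6
  // ...then apply the constructor so its initialization logic runs on the existing data.
  Type.call(obj);
  return obj;
}

// Rect here is the class defined above (CoffeeScript classes compile to plain constructors).
var rect = mutate(JSON.parse('{"width": 3, "height": 4}'), Rect);
console.log(rect instanceof Rect); // true
console.log(rect.area());          // 12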

When implementing the mutator, I found the idea doesn't work in IE (the evil IE, again). Before IE 11, the prototype chain of an instance is not accessible: there is nothing equivalent to object.__proto__ in IE 10 and earlier. The closest workaround is a hard copy of all the members, but that still fails type checks and some dynamic usage.

Background

object.__proto__ was a Mozilla "private" extension until ECMAScript 6.
It is interesting that most JavaScript engines support it, except IE.
Luckily, IE 11 implements some ECMAScript 6 features, and object.__proto__ is one of them.

Converting between HTML5 data-attribute hyphenated names and JavaScript camelCase names

I found a bug in widget.coffee today. To fix it, I need to convert between HTML5 data-attribute names and JavaScript camelCase names, e.g. between data-action-handler and actionHandler.

Taking the jQuery implementation as a reference, I came up with two pairs of utility functions for the conversion:

NameConversion
Utils =
  hyphenToCamelCase: (hyphen) -> # Convert 'action-handler' to 'actionHandler'
    hyphen.replace /-([a-z])/g, (match, letter) ->
      letter.toUpperCase()

  camelCaseToHyphen: (camelCase) -> # Convert 'actionHandler' to 'action-handler'
    camelCase.replace(/([A-Z])/g, '-$1').toLowerCase()

  attributeToCamelCase: (attribute) -> # Convert 'data-action-handler' or 'action-handler' to 'actionHandler'
    Utils.hyphenToCamelCase attribute.replace(/^(data-)?(.*)/, '$2')

  camelCaseToAttribute: (camelCase) -> # Convert 'actionHandler' to 'data-action-handler'
    'data-' + Utils.camelCaseToHyphen(camelCase)

Here is a more solid plain JavaScript implementation based on the previous one.

A solid JavaScript version
var Utils = (function() {
  function hyphenToCamelCase(hyphen) {
    return hyphen.replace(/-([a-z])/g, function(match, letter) {
      return letter.toUpperCase();
    });
  }

  function camelCaseToHyphen(camelCase) {
    return camelCase.replace(/([A-Z])/g, '-$1').toLowerCase();
  }

  function attributeToCamelCase(attribute) {
    return hyphenToCamelCase(attribute.replace(/^(data-)?(.*)/, '$2'));
  }

  function camelCaseToAttribute(camelCase) {
    return 'data-' + camelCaseToHyphen(camelCase);
  }

  return {
    hyphenToCamelCase: hyphenToCamelCase,
    camelCaseToHyphen: camelCaseToHyphen,
    attributeToCamelCase: attributeToCamelCase,
    camelCaseToAttribute: camelCaseToAttribute
  };
})();
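
A quick usage example with the Utils object above:

NameConversion usage
console.log(Utils.hyphenToCamelCase('action-handler'));         // 'actionHandler'
console.log(Utils.camelCaseToHyphen('actionHandler'));          // 'action-handler'
console.log(Utils.attributeToCamelCase('data-action-handler')); // 'actionHandler'
console.log(Utils.camelCaseToAttribute('actionHandler'));       // 'data-action-handler'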

Remove Bower from your build script

The mysterious broken build

This morning, our QA told us that Knockout, a JavaScript library we use in our web app, was missing in the staging environment. We checked the package she got from the CI server, and the library was indeed not included. But when we generated the package on our local dev boxes, Knockout was included.

This was a big surprise to us, because we share the exact same build scripts and environment between dev boxes and CI agents, and because we manage the front-end dependencies with Bower. In our gulp script, we ask Bower to install the dependencies on every build to make sure they are up to date.

The root cause of the broken build

After spending hours diagnosing the CI agents, we finally figured out the reason. It is a tricky story:

When the Knockout maintainers released the v3.1 Bower package, they made a mistake in the bower.json config file, packaging the spec folder instead of the dist folder. So the package was actually broken, because the main JavaScript file dist/knockout.js described in bower.json didn't exist.

Later, the engineers realized the mistake and fixed it by releasing a new package. Maybe they thought they hadn't changed any script logic, so they released the new package under the same version number, and that is the culprit that broke our builds.

We were unlucky enough that the broken package was downloaded on our CI server the first time our build script ran there, and the broken package was stored in the Bower cache at that point.

Because of Bower's cache mechanism, the broken package keeps being used until the version is bumped or the cache expires. This is why our build was broken on the CI server.

But on our dev boxes, for some reason, we had run bower cache clean, which invalidated the cache. This is why we could generate a good package locally.

It is a very tricky issue when using Bower to manage dependencies. Although it is not completely our fault, it is close to the worst case we could face: the build broke silently, and there were no error logs or messages to help figure out the reason. (Well, we hadn't got a chance to set up smoke tests for our app yet, so it is partly our fault.)

We thought we had been careful enough by cleaning the bower_components folder every time, but that actually prevented us from figuring out the real cause.

After fixing this issue, I discussed it with my pair Rafa, and we came up with some practices that could help avoid this kind of issue:

Best practices

  • Avoid bower install or any equivalent step (such as gulp-bower, grunt-bower, etc.) in the build script (see the sketch after this list)
  • Check bower_components into the code repository, or download the dependencies from a self-managed repository for large projects.
  • When dependencies are changed, install them manually and make sure they're good.
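
For illustration, this is the kind of step we removed from our gulpfile. It is only a sketch with hypothetical task names (using the gulp 3 API and the gulp-bower plugin), not our actual build script:

Removing the bower step from gulp
var gulp = require('gulp');
var bower = require('gulp-bower');

// Before: every build re-resolved front-end dependencies through bower.
gulp.task('bower', function () {
  return bower();
});

// After: the build task no longer depends on 'bower'; bower_components is
// checked in (or installed manually) and only updated when dependencies change.
gulp.task('build', ['compile'] /* was ['bower', 'compile'] */);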

As a bonus, after removing bower install from the build script, our build runs even faster, because we no longer check that every dependency is up to date on every build.

Some thoughts on the package system

Bower components are maintained by the community, and there is no strict quality control to ensure a package is bug-free or released in an appropriate way. So it is safer to check them manually and lock them down across environments.

This could be a common issue for any community-managed package system: not just Bower, but also Maven, RubyGems, npm, pip, NuGet, or even Docker images!

The battle between Game Developers and Game Hackers - Part 1: How HiddenInt works

Harvest: Massive Encounter is a very unique strategic tower defense game published by Oxeye Game Studio. The game is amazing, but I won’t focus on that. I will discuss something interesting I discovered when hacking the game.

By hacking the game, I wanted to lock down the amount of Mineral that I have in the game. Mineral is the key resource in the game, used to build and upgrade structures. Theoretically, locking down a value is easy: scan the memory for the specific number a few times to narrow down the list of potential memory addresses, try them one by one to figure out the proper address, then lock it down with the game hacking tool. It is a very standard approach, supported by most game hacking tools.

But in Harvest, the story is quite different. By searching for the mineral value, we can locate a specific address, but it is easy to find out that this value is only used for display rather than being the real game state data. In fact, Harvest uses a quite unique approach to protect its game state data! The Oxeye guys call it the Hidden Int.

Here is pseudo-code that explains how it works:

HiddenInt pseudo implementation
class HiddenInt {
  private int mask;
  private int hashedValue;
  private Random maskGenerator;

  public HiddenInt(int initialValue) {
    maskGenerator = new Random();
    setValue(initialValue);
  }

  public void setValue(int value) {
    this.mask = maskGenerator.next();
    this.hashedValue = value ^ this.mask; // ^ is the xor operator.
  }

  public int getValue() {
    int value = this.hashedValue ^ this.mask;
    setValue(value); // Generate a new mask and hashed value every time the value is read.
    return value;
  }

  public int modifyValue(int difference) {
    int value = getValue();
    value += difference;
    setValue(value);
    return value;
  }

  public ~HiddenInt() { // Destructor
    delete maskGenerator;
  }
}

So from the code you can see that the plain value is never stored in memory. Instead, it stores a "hashed" value, which is the plain value XORed with a randomly generated mask. And every time the value is read or written, the mask changes. This kills almost every memory-scanning feature in every game hacking tool! You cannot find the plain value in memory, so you have to use "fuzzy scan", which detects value changes instead of scanning for a specific value. But the data in memory keeps changing even when the value doesn't (I'll explain why later), so fuzzy scan doesn't work here either! That's a master kill!
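
To see why scanning fails, here is a small JavaScript sketch of the same idea (an illustration only, not code from the game): the same plain value is stored as completely different numbers every time.

XOR mask illustration
function hide(value) {
  var mask = (Math.random() * 0xFFFFFFFF) | 0; // random 32-bit mask
  return { mask: mask, hashedValue: value ^ mask };
}

var minerals = 500;
console.log(hide(minerals)); // a different mask and hashedValue every run
console.log(hide(minerals)); // different again, even though the plain value is still 500

var h = hide(minerals);
console.log(h.hashedValue ^ h.mask); // 500: recoverable only with the matching mask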

Besides keeping the mask changing, it also keeps reading the value out and writing it back, which defeats the "value freeze" feature of most game editing tools! Unless you can freeze the mask and the hashed value at exactly the same time, you corrupt the value. And since the reading and writing happen at a very high frequency (again, I'll explain why later), you would have to lock the value down at a very specific point, or the locked value will be overwritten immediately.

Furthermore, since this is a key game value that is displayed on the game UI all the time, getValue() is called every time the UI renders. A game UI usually renders at 60 fps or more, so the value is read 60 times per second and the mask changes 60 times per second! (This is why the value keeps changing at such a high frequency.)

So this is an almost invincible way to keep key game state data safe from common game hacking tools!

In the next post, I'll explain how to find a way to bypass this protection mechanism!

Process.nextTick Implementation in Browser

Recursion is a common technique in JavaScript programming, but deep or unbounded recursion causes stack overflow errors.
Some languages resolve this issue with automatic tail call optimization, but in JavaScript we need to take care of it ourselves.

To solve the issue, Node.js provides the utility function process.nextTick, which ensures specific code is invoked after the current call stack has unwound.
In the browser there is no standard equivalent, so workarounds are needed.
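
As a quick, contrived illustration (not from the original post), scheduling each recursive step on the next tick keeps the call stack from growing:

Breaking deep recursion with nextTick
// Plain recursion: count(100000) overflows the call stack in most engines.
function count(n) {
  if (n === 0) return;
  count(n - 1);
}

// With nextTick, each step runs on a fresh stack, so the recursion depth no longer matters.
function countAsync(n, done) {
  if (n === 0) return done();
  process.nextTick(function () {
    countAsync(n - 1, done);
  });
}

countAsync(100000, function () {
  console.log('done');
});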

Thanks to Roman Shtylman (@defunctzombie), who created node-process for Browserify, which simulates the Node.js process API in the browser environment.
Here is his implementation:

node-process

process.nextTick implementation in node-process
process.nextTick = (function () {
  var canSetImmediate = typeof window !== 'undefined'
    && window.setImmediate;
  var canPost = typeof window !== 'undefined'
    && window.postMessage && window.addEventListener;

  if (canSetImmediate) {
    return function (f) { return window.setImmediate(f) };
  }

  if (canPost) {
    var queue = [];
    window.addEventListener('message', function (ev) {
      var source = ev.source;
      if ((source === window || source === null) && ev.data === 'process-tick') {
        ev.stopPropagation();
        if (queue.length > 0) {
          var fn = queue.shift();
          fn();
        }
      }
    }, true);

    return function nextTick(fn) {
      queue.push(fn);
      window.postMessage('process-tick', '*');
    };
  }

  return function nextTick(fn) {
    setTimeout(fn, 0);
  };
})();

Here are some comments on the implementation.

setTimeout

To simulate the nextTick behavior, setTimeout(fn, 0) is a well-known and easy-to-adopt approach. The issue with this method is that setTimeout is a relatively heavy operation, and calling it in a loop causes significant performance issues. So we should use a cheaper approach when possible.

setImmediate

There is a function called setImmediate, which behaves quite similarly to nextTick but with a few differences when dealing with I/O. In the browser environment there is no such I/O concern, so we can safely replace nextTick with it.

Immediates are queued in the order created, and are popped off the queue once per loop iteration. This is different from process.nextTick which will execute process.maxTickDepth queued callbacks per iteration. setImmediate will yield to the event loop after firing a queued callback to make sure I/O is not being starved. While order is preserved for execution, other I/O events may fire between any two scheduled immediate callbacks.

setImmediate(callback, [arg], [...]), Node.js documentation

The setImmediate function is a perfect replacement for nextTick, but it is not supported by all browsers. Only IE 10 and Node.js 0.10+ support it; Chrome, Firefox, Opera, and mobile browsers don't.

Note: This method is not expected to become standard, and is only implemented by recent builds of Internet Explorer and Node.js 0.10+. It meets resistance both from Gecko (Firefox) and Webkit (Google/Apple).

window.setImmediate, MDN

window.postMessage

window.postMessage enables developers to access the message queue in the browser. With some additional code, we can simulate the nextTick behavior on top of the message queue. It works in most modern browsers, except IE 8: there the API is implemented synchronously, which introduces an extra level of stack push, so it cannot be used to simulate nextTick.

Overall, there is no perfect workaround for the nextTick issue right now. All the solutions have different limitations; we can only hope the issue will be resolved in a future ECMAScript standard.

Android Studio 0.6.1 SDK recognition issue when using Android SDK 19 and Gradle

A few days ago I upgraded my Android Studio to version 0.6.1 and migrated my Android project's build system from Maven to Gradle. Then the nightmare began!

Android Studio Version

It looks like there is an issue with Android Studio 0.6.1: it cannot recognize the jar files in Android SDK 19 (4.4 KitKat). As a consequence, none of the fundamental Android classes are recognized properly, which makes the IDE almost impossible to use.

Classes Not Recognized

After spending days googling and experimenting, I realized the issue is that Android Studio doesn't recognize the content of SDK 19 properly.

Here is the content of Android SDK 19 that Android Studio 0.6.1 identified:

SDK in Android Studio

For comparison, here is the proper content of Android SDK 19 with Google APIs:

SDK in IDEA

And here is the proper content of Android SDK 19 retrieved from the Maven repository:

Maven SDK

From the lists, you can easily see that the android.jar file is missing! That is why the classes are not recognized properly. Moreover, if you compare the list against JDK 1.6, you will find that most of the content is the same.

JDK

Ideally, fixing this issue should be quite easy: Android Studio provides a Project Settings dialog that allows developers to adjust the SDK configuration.

Project Settings Dialog:

Project Settings

But for Gradle projects, Android Studio displays a greatly simplified project settings dialog instead of the original one, which no longer allows developers to configure the SDK.

Gradle Project Settings Dialog:

Project Settings

So far, I have figured out several potential workarounds for this issue; I hope they help:

  1. Downgrade the SDK version from 19 to 18.
    If you don't really need SDK 19 features, downgrading the SDK version to 18 fixes the issue.
  2. Use IntelliJ IDEA instead of Android Studio.
    I encountered a different issue when using IDEA, though: it fails to sync the Gradle file.
  3. Use Maven or Ant instead of Gradle.
    Gradle is powerful, but there are too many environment issues when using it with IDEs… Maven is relatively more stable.

I haven't figured out a perfect solution to this issue yet; I just hope Google can fix it as soon as possible.