I’ve been working on an Android application recently. During development, our new app inherited a legacy SQLite database created by a WinCE application.
Android announces support for SQLite, so we didn’t worry too much about it. But when we tried to open the legacy database with SQLiteDatabase.openDatabase, it threw an exception in our faces!
After debugging, we found that opening a SQLite database with the SQLiteDatabase class requires the database to contain a special table called android_metadata, which holds the locale information; SQLiteDatabase throws an exception when the table cannot be found.
The table is created automatically when SQLiteDatabase creates the database, but if the app needs to open a database not created by Android, a small patch is needed.
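A minimal sketch of such a patch: run the following SQL against the legacy database once before opening it (the 'en_US' locale is an assumption; use whatever locale your app expects):

Patch the legacy database

CREATE TABLE IF NOT EXISTS android_metadata (locale TEXT DEFAULT 'en_US');
INSERT INTO android_metadata VALUES ('en_US');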
This is the 2nd post of the Node over Express series (the previous one is Configuration). In this post, I’d like to discuss a famous pain point in Node.js.
Pain Point
There is a well-known Lisp joke:
A top hacker successfully stole the last 100 lines of a top-secret program from the Pentagon. Because the program was written in Lisp, the stolen code was just closing brackets.
The joke is that there are too many brackets in Lisp. Node.js has a similar issue: there are too many requires. Open any Node.js file, and you will usually find several lines of require statements.
Due to Node’s sandbox model, the developer has to require resources time and time again in every file. It is not exciting to write or read lines of meaningless require statements. Worst of all, it can become a nightmare once a developer wishes to replace one library with another.
Rails Approaches
“Require hell” isn’t unique to Node.js; Ruby apps have it too. Rails has solved it gracefully: the developer barely needs to require anything manually in Rails.
There are 2 kinds of dependencies in a Rails app: external resources and internal resources.
External Resources
External resources are classes encapsulated in Ruby gems. In a Ruby application, the developer describes the dependencies in the Gemfile and loads them with Bundler. Some frameworks, such as Rails, have already integrated with Bundler; when using them, the developer doesn’t need to do anything manually, and all the dependencies are required automatically. For others, use bundle exec to create a Ruby runtime with all gems required.
Internal Resources
Internal resources are the classes declared in the app itself; they could be models, services or controllers. Rails uses Railtie to require them automatically. Each resource is loaded the first time it is used, so the requiring process is “lazy”. (In fact, this description isn’t entirely precise, because Rails behaves differently in the production environment: it loads all the classes during launch for performance reasons.)
Autoload in Node.js
Rails avoids “require hell” with these two “autoload” mechanisms. There are still debates about whether autoload is a good thing, but at least it frees the developer from dull dependency management and increases productivity. Developers love autoload in most cases.
So to avoid “require hell” in Node.js, I prefer an autoload mechanism. But because there are significant differences between the type systems of Node.js and Ruby, we cannot copy the mechanism from Ruby to Node as is. Before diving into the solution, we need to understand the differences first.
Node.js Module System
There are a number of similarities between Node.js and Ruby; things in Node.js usually have equivalents in Ruby. For example, a package in Node is similar to a gem in Ruby, npm corresponds to Gem and Bundler, and package.json takes the responsibility of Gemfile and Gemfile.lock. This similarity makes porting autoload from Ruby to Node feasible.
But besides the similarities, there are also significant differences. One of the major ones is the type system and module sandbox in Node.js, which work quite differently from the Ruby type system.
JavaScript isn’t a classical OO language, so it doesn’t have a real type system. All the “types” in JavaScript are actually functions, which are stored in local variables rather than in a type system. Node.js loads files into separate sandboxes, and all the local variables are isolated between files to avoid “global leaks”, a well-known, deep-seated bad part of JavaScript. As a result, a Node.js developer needs to require the types he uses again and again in every file.
In Ruby it is a lot better: with the help of the well-designed type system, types are shared across the whole runtime, and a developer only needs to require the types not yet loaded.
So Node.js programs contain many more require statements than Ruby programs, and due to the design of Node.js and JavaScript, the issue is harder to resolve.
Global Variable
In the browser, the JavaScript runtime other than Node, global variables are very common. Globals are easily abused, which brings global leaks to badly written JavaScript programs and drives thousands of developers up the wall. JavaScript developers are so scared of global leaks that they designed the strict isolation model in Node.js. To my understanding, the isolation does avoid global leaks effectively, but at the same time it brings tens of require statements to every file, which is also not acceptable.
In fact, with the help of JSLint, CoffeeScript and some other tools, developers can avoid global leaks easily. And global sharing isn’t the source of all evil: if abuse is avoided, I believe a reasonable level of global sharing can be useful and helpful. Actually, Node.js has a built-in global sharing mechanism.
To share values across files, there is a special variable named global, which can be accessed from every file and whose value is shared across files.
Besides sharing values around, global has another important feature: Node treats global as the default context, whose children can be referred to without naming it explicitly. So SomeType === global.SomeType.
With the help of global, we have a natural way to share types across files.
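For example (a minimal sketch; the file and type names are made up):

Sharing a type via global

# a.coffee
global.SomeType = class SomeType
  greet: -> console.log 'hello'

# b.coffee (no require needed; SomeType is resolved via the shared global context)
new SomeType().greet()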
JS Property
Rails’ autoload mechanism loads classes lazily: it only loads a class when it is used for the first time. It is a neat feature, and Rails achieves it by tracking the “uninitialized constant” exception. Tracking exceptions to implement a similar feature in Node.js is hardly feasible, so I chose a different approach: properties.
A property (attribute in Ruby) allows a method (function) to be invoked when a field of an object is accessed. Properties are a common feature in OO languages, but a “new” one to JavaScript: they are declared in the ECMAScript 5 standard, which lets developers define a property on an object with the API Object.defineProperty. With a property, we can hook a callback onto a type variable and require the type when it is first accessed, so the module won’t be required until it is used. On the other hand, Node’s require function has a built-in cache; it won’t load a file twice, but returns the value from its cache instead.
With properties, we make the autoload lazy!
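As a minimal sketch (the host and module names are illustrative), the idea looks like this:

Lazy require via a property

lazyHost = {}
Object.defineProperty lazyHost, 'User',
  get: -> require('./models/User') # runs on first access; require caches afterwards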
My Implementation
To make autoload work, we need to create a magic host object to hold the type variables; in my implementation, I call this magic object AutoLoader. We also need to require a bootstrap script when the app starts, which describes which types should be required and how. For example, mine begins with:
global.assets = {} # initialize this context for connect-assets helpers
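The rest of initEnvironment.coffee sets up the autoload hosts. A rough sketch, built on the AutoLoader and PathHelper APIs shown below (the directory names are illustrative):

initEnvironment.coffee (sketch)

createAutoLoader = require('./AutoLoader')
createPathHelper = require('./PathHelper')

# Consolidated path helper: each sub-directory becomes a child helper.
appPath = createPathHelper(__dirname, true)

global.Services = createAutoLoader(appPath.services())
global.Routes = createAutoLoader(appPath.routes())
global.Records = createAutoLoader(appPath.records())
global.Models = createAutoLoader(appPath.models())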
The script sets up the autoload hosts for all the services, routes, records and models of my app. Then we can reference the types as follows:
Sample Usage
Records.User.findById uid, (err, user) ->
  badge = new Models.Badge(badgeInfo)
  user.addBadge badge
  user.save()
The initEnvironment.coffee script uses 2 very important classes:
AutoLoader: the class that works as the type variable host. All the magic happens here.
PathHelper: the class that handles path combination.
The detailed implementation is here:
AutoLoader
_ = require('lodash')
path = require('path')
fs = require('fs')
createPathHelper = require('./PathHelper')

# Define a lazy property on the host object;
# the module is required on first access (and cached by require).
createLoaderMethod = (host, name, fullName) ->
  host.__names.push name
  Object.defineProperty host, name,
    get: ->
      require(fullName)

class AutoLoader
  constructor: (source) ->
    @__names = []
    for name, fullName of source
      extName = path.extname fullName
      createLoaderMethod(this, name, fullName) if require.extensions[extName]? or extName == ''

expandPath = (rootPath) ->
  createPathHelper(rootPath).toPathObject()

# Build a "name -> full path" map from a list of file paths.
buildSource = (items) ->
  result = {}
  for item in items
    extName = path.extname(item)
    name = path.basename(item, extName)
    result[name] = item
  result

# Accepts a root path, a list of paths, or a ready-made "name -> path" map.
createAutoLoader = (option) ->
  pathObj = switch typeof(option)
    when 'string'
      expandPath(option)
    when 'object'
      if option instanceof Array
        buildSource(option)
      else
        option
  new AutoLoader(pathObj)

createAutoLoader.AutoLoader = AutoLoader
exports = module.exports = createAutoLoader
PathHelper
_ = require('lodash')
fs = require('fs')
path = require('path')

createPathHelper = (rootPath, isConsolidated) ->
  rootPath = path.normalize rootPath

  # The helper itself is a function: it joins its arguments onto the root path.
  result = (args...) ->
    return rootPath if args.length == 0
    parts = _.flatten [rootPath, args]
    path.join.apply(this, parts)

  # Map every entry under the root to its full path, keyed by base name.
  result.toPathObject = ->
    self = result()
    files = fs.readdirSync(self)
    pathObj = {}
    for file in files
      fullName = path.join(self, file)
      extName = path.extname(file)
      name = path.basename(file, extName)
      pathObj[name] = fullName
    pathObj

  # Attach a child helper for every sub-directory.
  result.consolidate = ->
    pathObj = result.toPathObject()
    for name, fullName of pathObj
      stats = fs.statSync(fullName)
      result[name] = createPathHelper(fullName) if stats.isDirectory()
    result

  if isConsolidated
    result.consolidate()
  else
    result

exports = module.exports = createPathHelper
The code above is part of Node over Express; for the complete codebase, please check out the repo on GitHub.
Besides the content, I want to say thank you to my English teacher Marina Sarg, who helped me a lot with this series of blogs. Without her, this series wouldn’t exist. Marina, thank you very much.
I have been working on Node.js related projects for quite a while, and have built apps with Node both for clients and as personal projects, such as LiveHall and CiMonitor. I promised someone to share my experience with Node, and today I’ll begin to work on this. This is the first blog of the series.
Background
In this blog, I would like to talk about configuration in Node, a common problem we need to solve in our apps.
Problems related to configuration aren’t new, and there are dozens of mature solutions, but for Node.js apps there is still something worth discussing.
Perhaps configuration can be treated as a special kind of data. Usually developers prefer to use a data language to describe their configurations. Here are some examples:
.NET and Java developers usually use XML to describe their configuration
Ruby developers prefer YAML as the configuration language
JavaScript developers tend to use JSON
Data languages are convenient, because developers can easily build a DSL on top of them and then describe the configuration with that DSL. But is a data language the best option available? Is it really suitable in all scenarios?
Before answering these questions, I would like to say something about the problem we’re facing. There is one requirement common to all kinds of configuration solutions: default values and overriding.
For example, a web app uses port 80 by default, but in the development environment we prefer a port number above 1024, and 3000 is a popular choice. That means we need to provide 80 as the default value of the port, but override it with 3000 in the development environment.
Of the languages mentioned above, all except YAML (that is, XML and JSON) lack native support for inheritance and overriding, which means we need to implement the mechanism on our own. Taking JSON as an example, we might write the configuration this way:
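A sketch of such a file (the values are illustrative):

config.json (sketch)

{
  "default": {
    "port": 80,
    "redis": { "host": "localhost", "port": 6379 }
  },
  "development": {
    "port": 3000
  },
  "test": {
    "port": 3001
  },
  "production": {}
}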
The previous JSON snippet is a typical example of web app configuration: it has a default section providing the default values for all environments, and three sections for specific environments. To apply it correctly to our app, we need to load and parse the JSON file first, then load the values of the default section, and then override them with the values from the specific environment. In addition, we might wish to have validation that yields an error when the requested environment doesn’t exist.
This solution looks simple and seems to work, but when you apply this approach to a real-life app, you need to watch out for some pitfalls.
Issue 1: Confidential Values
In the real world, values in configuration can be sensitive and need to be kept confidential. It could be the credentials to access your database, or the key to decrypt the cookies. It may also be the private certificate that identifies and authenticates the app to other services. In these scenarios, you need to protect your configuration to avoid big trouble!
To solve the issue, you might think about adding a new feature that enables you to encrypt confidential values or to load them from a different, safe source. To achieve it, you might need to add another layer of DSL, which adds more complexity to your app and makes your code harder to debug and maintain.
Issue 2: Dynamic Data
As a solution to the first issue, one could store the environment-related but sensitive data in environment variables. The solution is simple and works perfectly, so I highly recommend it. However, it means you need the capability to load values not only from JSON directly but also from environment variables.
Sometimes, deploying your app to Heroku/Nodejitsu makes the case even trickier: the default values are provided in JSON directly, and some of them need to be overridden with values from environment variables, or vice versa. These tricky requirements can easily blow your mind and your code away, resulting in a complicated DSL design and hundreds of lines of implementation, just to load your configuration properly. Obviously not a good idea.
Issue 3: Complicated Inheritance Relationship
Scared by the cases above? No? Then how about complicated inheritance relationships between environments?
In some big and complicated web apps, there might be more than 3 basic environments, such as:
Development: for developers to develop the app locally
Test: for developers to run unit or functional tests locally, such as mocha tests
Regression: for developers or QAs to run regression tests, such as cucumber tests
Integration: for QAs or Ops to test the integration with other apps
Staging: for Ops and QAs to test the app in a production-like environment before it really goes live
Production: the environment serves your real users
…
When writing configurations for these environments, one might find there are only a few differences between them. To make life easier and avoid redundancy, introducing inheritance between configurations might be a good idea.
As a consequence, the whole configuration becomes a set of environments with complex inheritance relationships. And to support this kind of configuration inheritance, an even more complex DSL and hundreds of lines of code are needed.
Some Comments
My assumption above might seem a little too complex. To some people, it might be the “WORST CASE SCENARIO”, hard to come by. But in my experience, it is very common when building a real web app with Node. So if solving it isn’t too hard, it is better to take it seriously and solve it gracefully.
Ruby developers might think they’re lucky because YAML supports inheritance natively. But confidential data and dynamic data are still trouble.
My Solution
After learning a number of painful lessons, I figured out a simple but working solution: Configuration as Code, i.e. describe the configuration in the same language the business logic is written in!
Configuration as code isn’t a new concept, but it is extremely handy when used in Node applications! Let me explain why and how it works.
To protect confidential configuration values, one should store them in environment variables, which are accessible only on the specific server, and then load them from the environment variables as dynamic values.
Doing this in a data language such as XML, JSON or YAML is hard, but it becomes as easy as taking candy from a baby when it is done in the programming language the application itself is written in, such as Ruby or JavaScript.
As for configuration inheritance, OO languages already provide a very handy inheritance mechanism; why invent a new one instead of just using it? As for value overriding, OO programming tells us it is called polymorphism. The only difference from the typical scenario is that we override values instead of behaviors, but that isn’t an issue, because a value can be the result of a behavior, right?
Now I assume everyone has a pretty good idea of what I am saying. If so, the code below should be quite easy to understand; it is a standard Node.js file written in CoffeeScript:
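(The class definitions are a sketch; the concrete values are illustrative. The export line below completes the file.)

config.coffee (sketch)

class Config
  port: 80
  cookieSecret: process.env.COOKIE_SECRET # confidential value from the environment

class Config.development extends Config
  port: 3000

class Config.test extends Config.development
  port: 3001

class Config.production extends Config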
module.exports = new Config[process.env.NODE_ENV]()
See, with this approach one can describe the configuration easily and clearly in a few lines of code, with dynamic value loading, configuration inheritance and overriding all built in.
In fact, it might work even better than expected! Here are the additional free benefits:
Only one configuration file is needed when the app is deployed to the cloud, because all the host-specific configurations are usually provided via environment variables in a PaaS.
You can have some simple and straightforward logic in the configuration, which can be very useful, especially when there is some naming convention in the configuration. But complicated or tricky logic should be strictly avoided, because it hurts readability and maintainability.
It is easy to write tests for configurations to ensure the values are properly set, which can be very handy when there are complicated inheritance relationships between configurations, or some simple logic in the configuration.
It avoids instantiating and executing code unrelated to the current environment, which helps avoid the overhead of instantiating unused expensive resources, or errors caused by incompatibilities between environments.
You get a runtime error when the configuration for the requested environment doesn’t exist.
Besides the content, I want to say thank you to my English teacher Marina Sarg, who helped me a lot with this series of blogs. Without her, this series wouldn’t exist. Marina, thank you very much.
I usually use ^ and $ to verify user input. E.g., I use the following regexp to verify whether a user input is a valid Gmail email address:
Matching Gmail
^[a-zA-Z_\.]+@gmail.com$
But in fact it is potentially vulnerable! According to the Regexp documentation, ^ and $ match the beginning and the end of a line! So I can fall into a pitfall when a user tries to fool me with input like the following:
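An input along these lines (illustrative):

Malicious input

"evil@gmail.com\n<script>alert('pwned')</script>"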
Since there is a \n in the string, $ doesn’t match the real end of the string but the position before the \n, so the whole string is accepted as valid input when actually it isn’t!
To avoid this issue, we should stick to \A and \z, which literally mean the beginning and the end of the string!
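A fixed version of the Gmail pattern (a sketch; note that the dot in gmail.com should be escaped too):

Fixed pattern

/\A[a-zA-Z_\.]+@gmail\.com\z/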
JavaScript is famous for its lack of preciseness, so it keeps surprising developers and playing jokes on them by breaking common sense and instinct.
JavaScript doesn’t provide an integer type, but in daily life integers are sometimes necessary. So how can we truncate a float number into an integer in JavaScript? Some very common answers might be Math.floor, Math.round or even parseInt. But besides calling Math functions, is there any other answer?
The answer is bitwise operations. Amazing? Yes. Because bitwise operations only apply to integers, JavaScript converts the number into a 32-bit integer internally when a bitwise operation is applied, even though the result is still represented as a number.
Suppose value = 3.1415926 and we want integer to be the truncated value of value; then we have:
Trim Float Number
var value = 3.1415926;
var integer = Math.floor(value);
integer = Math.round(value);
integer = parseInt(value);
integer = ~~value; // Bitwise NOT
integer = value | 0; // Bitwise OR
integer = value << 0; // Left Shift
integer = value >> 0; // Sign-propagating Right Shift
integer = value >>> 0; // Zero-fill Right Shift
For more detailed information about bitwise operations in JavaScript, please check out the MDN documentation.
All the approaches listed above work, but with different performance, and with one caveat: for negative numbers, Math.floor rounds toward negative infinity while the bitwise operators truncate toward zero (and value >>> 0 gives wrong results for negative input); the bitwise tricks are also only reliable within the 32-bit integer range. According to the results from JsPerf, here are the approaches sorted by performance from best to worst:
integer = ~~value;
integer = value >>> 0; and integer = value << 0;
integer = Math.floor(value);
integer = value >> 0;
integer = value | 0;
integer = Math.round(value);
integer = parseInt(value);
NOTE: The test cases were run in Chrome 24.0.1312.57 on Mac OS X 10.8.2
Applications are usually required to run in different environments. To manage the differences between environments, we usually introduce the concept of environment-specific configuration. Rails by default provides 3 different environments: the well-known development, test and production. We can use the environment variable RAILS_ENV to tell Rails which environment to load; if RAILS_ENV is not provided, Rails loads the app in the development env by default.
This approach is very convenient, so we want to apply it everywhere. But in Node.js, Express doesn’t provide any configuration management, so we need to build the feature ourselves.
The environment management usually provides the following functionality:
Allow us to provide some configuration values as defaults, which will be loaded in all environments; usually we call this common.
Specific configuration will be loaded according to the environment variable, overriding some values in common if necessary.
Rails uses YAML to hold these configurations, which is concise yet powerful enough for this purpose. YAML provides an inheritance mechanism by default, so you can reduce duplication by using inheritance.
If we followed the same approach in Express and Node.js, we would prefer JSON to YAML, since it is supported natively by JavaScript. But to me, JSON isn’t the best option; it has some disadvantages:
JSON Syntax is not concise enough
Matching the brackets and appending commas to the line end are distractions
Lack of flexibility
As an answer to these issues, I chose CoffeeScript instead of JSON. Coffee is concise, and, similar to YAML, it uses indentation to indicate nesting. Coffee is also executable, which gives the configuration a lot of flexibility, so we can shape it like a small domain-specific language.
To do this, we need to solve 4 problems:
Allow the developer to declare a default configuration.
Load a specific configuration besides the default one.
The specific configuration can override values in the default one.
The code is concise, clean and reading-friendly.
Inspired by the YAML solution, I worked out my first solution:
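A sketch of that first attempt (the values are illustrative):

First solution (sketch)

_ = require('underscore')

common =
  port: 80

environments =
  development:
    port: 3000
  test:
    port: 3001
  production: {}

# Mix the environment-specific values over the common ones.
module.exports = _.extend {}, common, environments[process.env.NODE_ENV]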
YAML is a data-centric language, so its inheritance is more like “mixing in” another piece of data. So I used underscore to mix the specific configuration in over the default one, which overrides the overlapping values.
But if we jump out of YAML’s box and think about JavaScript itself: JavaScript is a prototypal language, which means it already provides an overriding mechanism natively; each object inherits and overrides the values of its prototype. So I worked out the 2nd solution:
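A sketch of the 2nd solution, using prototypal inheritance (values again illustrative):

Second solution (sketch)

common =
  port: 80

development = Object.create common
development.port = 3000

test = Object.create development
test.port = 3001

production = Object.create common

module.exports = {development, test, production}[process.env.NODE_ENV]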
This approach works, but looks kind of ugly. Since we’re using Coffee, which provides syntactic sugar for classes and class inheritance, we have the 3rd version:
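A sketch of the class-based version (the export line below completes it):

Third solution (sketch)

class Config
  port: 80

class Config.development extends Config
  port: 3000

class Config.test extends Config.development
  port: 3001

class Config.production extends Config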
module.exports = new Config[process.env.NODE_ENV]()
Now the code looks clean, and we can improve it a step further if necessary: we can separate the configurations into files and require the right one by file name:
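A sketch of the per-file layout (file names are illustrative): each environment file exports a class extending the default one, and an index file picks the right file by NODE_ENV:

config/index.coffee (sketch)

# config/development.coffee would contain:
#   module.exports = class Development extends require('./default')
#     port: 3000
module.exports = new (require("./#{process.env.NODE_ENV}"))()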
It is a very basic question, but solving it in a time-limited environment requires solid knowledge of algorithms, and the ability to use that knowledge flexibly. The problem I found with myself is that I know it, but I cannot use it as flexibly as my own hand.
The problem description:
Calculate the square root of a given number N.
N could be a decimal, such as 6.25 or 0.01.
The implementation is only allowed to use basic algebra operators such as +, -, *, /, <, >, etc.
Advanced functions such as Math.sqrt are not allowed.
As a TDDer, I’m used to writing a simple test structure to express the test cases:
Code Skeleton
def assert(expected, actual)
  if expected == actual
    puts "Passed"
  else
    puts "Failed"
    p expected
    p actual
  end
end

def sqrt(n)
end
After this, we can begin to write our 1st test case, covering the simplest scenario I can imagine. We assume:
n must be an integer
n must have an integer square root
1st test case
def assert(expected, actual)
  if expected == actual
    puts "Passed"
  else
    puts "Failed"
    p expected
    p actual
  end
end

def sqrt(n)
end

# Start from the easiest case, the integer square root.
assert 3, sqrt(9)
Run the code; if everything goes right, we get a Failed message as expected. Then we introduce our first implementation to make the failing test pass.
With the 2 additional assumptions of the 1st test case, we can easily figure out a simple solution: linearly search the integers from 1 to n for the root.
1st implementation
def assert(expected, actual)
  if expected == actual
    puts "Passed"
  else
    puts "Failed"
    p expected
    p actual
  end
end

def sqrt(n)
  (1..n).each do |x|
    return x if x * x == n
  end
end

# Start from the easiest case, the integer square root.
assert 3, sqrt(9)
So far so good. But there are 2 magic integers related to sqrt, 0 and 1, and it seems our function cannot handle them all correctly. So I want to improve the algorithm to deal with these special numbers: I added 2 test cases and improved the implementation:
sqrt of 0 and 1
def assert(expected, actual)
  if expected == actual
    puts "Passed"
  else
    puts "Failed"
    p expected
    p actual
  end
end

def sqrt(n)
  return 0 if n == 0
  (1..n).each do |x|
    return x if x * x == n
  end
end

# Start from the easiest case, the integer square root.
assert 3, sqrt(9)
# 2 corner cases
assert 1, sqrt(1)
assert 0, sqrt(0)
Now everything looks good except the performance. The time complexity of this algorithm is O(n), which is bad; I expect the complexity to be close to O(1), or at least O(log n).
How could we improve the performance? I once thought it was safe to shrink the range to (1..n/2), but in fact it doesn’t really improve the algorithm, which is still O(n) after the change, and it causes problems when dealing with the number 1, so I prefer to keep the range as is.
What we do in the sqrt function is really a search: we search between 1 and n for the number matching the condition. Obviously 1..n is an ascending series, and the mapping x -> x*x is monotonically increasing on it. So we can replace the linear search with a variant of binary search, which reduces the time complexity from O(n) to O(log n).
Binary Search
def assert(expected, actual)
  if expected == actual
    puts "Passed"
  else
    puts "Failed"
    p expected
    p actual
  end
end

def binary_search(goal, start, stop)
  mid = (stop - start) / 2 + start
  mid_square = mid * mid
  if mid_square == goal
    return mid
  elsif mid_square > goal
    return binary_search(goal, start, mid)
  else
    return binary_search(goal, mid, stop)
  end
end

def sqrt(n)
  return 0 if n == 0
  binary_search(n, 1, n)
end

# Start from the easiest case, the integer square root.
assert 3, sqrt(9)
# 2 corner cases
assert 1, sqrt(1)
assert 0, sqrt(0)
# 2 normal cases
assert 5, sqrt(25)
assert 9, sqrt(81)
After implementing the binary search, we find a very interesting phenomenon: we never restricted n to integers, and the function seems to have gained some capability to deal with float numbers?! So I tried adding 2 float number test cases:
Float number test cases
def assert(expected, actual)
  if expected == actual
    puts "Passed"
  else
    puts "Failed"
    p expected
    p actual
  end
end

def binary_search(goal, start, stop)
  mid = (stop - start) / 2 + start
  mid_square = mid * mid
  if mid_square == goal
    return mid
  elsif mid_square > goal
    return binary_search(goal, start, mid)
  else
    return binary_search(goal, mid, stop)
  end
end

def sqrt(n)
  return 0 if n == 0
  binary_search(n, 1, n)
end

# Start from the easiest case, the integer square root.
assert 3, sqrt(9)
# 2 corner cases
assert 1, sqrt(1)
assert 0, sqrt(0)
# 2 normal cases
assert 5, sqrt(25)
assert 9, sqrt(81)
# float number
assert 2.5, sqrt(6.25)
assert 1.5, sqrt(2.25)
Amazing, our code works fine! But I believe it is a fluke, since both 2.5 and 1.5 sit exactly halfway between two neighboring integers, and the code fails on generic float numbers. The problem we meet is a stack overflow: the binary search fails to hit exactly the number we expect, so the recursion never terminates. To solve the problem, we can replace the exact equality comparison with a small enough range. We introduce a constant EPSILON to describe the accuracy of the calculation.
Adjust precision
EPSILON = 100 * Float.const_get(:EPSILON)

def assert(expected, actual)
  if (expected - actual).abs < EPSILON
    puts "Passed"
  else
    puts "Failed"
    p expected
    p actual
  end
end

def binary_search(goal, start, stop)
  mid = (stop - start) / 2 + start
  mid_square = mid * mid
  if (mid_square - goal).abs < EPSILON
    return mid
  elsif mid_square > goal
    return binary_search(goal, start, mid)
  else
    return binary_search(goal, mid, stop)
  end
end

def sqrt(n)
  return 0 if n == 0
  binary_search(n, 1, n)
end

# Start from the easiest case, the integer square root.
assert 3, sqrt(9)
# 2 corner cases
assert 1, sqrt(1)
assert 0, sqrt(0)
# 2 normal cases
assert 5, sqrt(25)
assert 9, sqrt(81)
# float number
assert 2.5, sqrt(6.25)
assert 1.5, sqrt(2.25)
# float numbers not at 2^n
assert 3.3, sqrt(10.89)
assert 7.7, sqrt(59.29)
Now it looks like our code can calculate the square root of most numbers larger than 1, but it fails on numbers less than 1. The reason is that x * x < x when 0 < x < 1, while x * x > x when x > 1, which means we should search a different range for the numbers less than 1.
Float numbers < 1
EPSILON = 100 * Float.const_get(:EPSILON)

def assert(expected, actual)
  if (expected - actual).abs < (EPSILON * 10)
    puts "Passed"
  else
    puts "Failed"
    p expected
    p actual
  end
end

def binary_search(goal, start, stop)
  mid = (stop - start) / 2 + start
  mid_square = mid * mid
  if (mid_square - goal).abs < EPSILON
    return mid
  elsif mid_square > goal
    return binary_search(goal, start, mid)
  else
    return binary_search(goal, mid, stop)
  end
end

def sqrt(n)
  return 0 if n == 0
  if n == 1
    return 1
  elsif n > 1
    return binary_search(n, 1, n)
  else
    return binary_search(n, n, 1)
  end
end

puts "Start from the easiest case, the integer square root."
assert 3, sqrt(9)
puts "2 corner cases"
assert 1, sqrt(1)
assert 0, sqrt(0)
puts "2 normal cases"
assert 5, sqrt(25)
assert 9, sqrt(81)
puts "float number"
assert 2.5, sqrt(6.25)
assert 1.5, sqrt(2.25)
puts "float numbers not at 2^n"
assert 3.3, sqrt(10.89)
assert 7.7, sqrt(59.29)
puts "float number < 1"
assert 0.1, sqrt(0.01)
assert 0.02, sqrt(0.0004)
The algorithm is now pretty much what we want, but it still raises a stack overflow exception sometimes, and it wastes too many iterations on unnecessary precision. So maybe we can bring the algorithm arbitrarily close to O(1) by sacrificing some precision of the result: we set a limit on the maximum number of iterations we may take during the calculation. When the limit is reached, we break out of the iteration and return a less accurate number.
This is really useful when calculating irrational square roots, which have unlimited digits and no exact solution.
irrational square root
EPSILON = 10 * Float.const_get(:EPSILON)
DEPTH_LIMIT = 100

def assert(expected, actual)
  if (expected - actual).abs < (EPSILON * 10)
    puts "Passed"
  else
    puts "Failed"
    p expected
    p actual
  end
end

def binary_search(goal, start, stop, depth)
  mid = (stop - start) / 2 + start
  mid_square = mid * mid
  if (mid_square - goal).abs < EPSILON
    return mid
  else
    return mid if depth >= DEPTH_LIMIT
    if mid_square > goal
      return binary_search(goal, start, mid, depth + 1)
    else
      return binary_search(goal, mid, stop, depth + 1)
    end
  end
end

def sqrt(n)
  return 0 if n == 0
  n = n.to_f
  if n == 1
    return 1
  elsif n > 1
    return binary_search(n, 1, n, 0)
  else
    return binary_search(n, n, 1, 0)
  end
end

puts "Start from the easiest case, the integer square root."
assert 3, sqrt(9)
puts "2 corner cases"
assert 1, sqrt(1)
assert 0, sqrt(0)
puts "2 normal cases"
assert 5, sqrt(25)
assert 9, sqrt(81)
puts "float number"
assert 2.5, sqrt(6.25)
assert 1.5, sqrt(2.25)
puts "float numbers not at 2^n"
assert 3.3, sqrt(10.89)
assert 7.7, sqrt(59.29)
puts "float number < 1"
assert 0.1, sqrt(0.01)
assert 0.02, sqrt(0.0004)
puts "irrational root"
assert 1.414213562373095, sqrt(2)
assert 1.732050807568877, sqrt(3)
Besides the binary search approach, we can calculate the square root with Newton’s method, which refines the result with an iterative equation. Newton’s method has limited precision but very good performance; it is said that the famous fast inverse square root in the Quake engine uses a Newton’s method iteration to get good performance out of limited computing power. Here is an easy-to-understand document explaining how it works.
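A minimal sketch of Newton’s method in the same spirit (the iteration count is an arbitrary choice):

Newton's method (sketch)

def newton_sqrt(n, iterations = 20)
  return 0.0 if n == 0
  x = n.to_f
  iterations.times do
    # x' = x - f(x)/f'(x) for f(x) = x^2 - n, which simplifies to:
    x = (x + n / x) / 2.0
  end
  x
end

puts newton_sqrt(2) # ~1.4142135623730951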
Today we found a huge pitfall in the Node.js crypto module! Decipher has a potential problem when processing Base64-encoded input.
We’re building a RESTful web service based on Node.js, which talks to some other services implemented in Ruby.
Ruby
In Ruby, we use the standard Base64 module to handle Base64 encoding.
Base64#encode64 has a very interesting feature: it adds a line break (\n) to the output every 60 characters. This format makes the output look pretty and friendly for human reading:
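For example (a sketch):

Base64#encode64 in Ruby

require 'base64'
encoded = Base64.encode64('a' * 90)
# encoded == ("YWFh" * 15) + "\n" + ("YWFh" * 15) + "\n"
# i.e. a "\n" is inserted after every 60 characters of output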
Base64#decode64 ignores the line breaks (\n) when parsing base64-encoded data, so the line breaks won’t pollute the data.
Node.js
Node.js treats Base64 as one of its 5 standard encodings (ascii, utf8, base64, binary, hex). Ideally, data or strings can be transcoded between these encodings without data loss.
The Buffer class is the simplest way to transcode data:
Base64 Encoder in Node.js
Base64 =
  encode64: (text) ->
    new Buffer(text, 'utf8').toString('base64')
  decode64: (base64) ->
    new Buffer(base64, 'base64').toString('utf8')
Although the encode64 function in Node.js doesn’t add line breaks to the output, the decode64 function does ignore line breaks when parsing data. This is consistent with the behavior of the Ruby Base64 module, so we can use this decode64 function to decode data from Ruby.
Since base64 is one of the standard encodings, some of the Node.js APIs allow setting the encoding for input and output. So ideally, we can fold the base64 encoding and decoding into the data processing itself. It seems Node.js is more convenient than Ruby when dealing with Base64.
E.g. we can combine reading a file and base64-encoding its content into one operation by passing the encoding to the readFileSync API.
Read a File as Base64
fs = require('fs')
fileName = './binary.dat' # this file contains binary data
base64 = fs.readFileSync(fileName, 'base64') # file content is now base64 encoded
It looks like we can always use this trick to avoid manual base64 encoding and decoding whenever an API has an encoding parameter! But actually that is not true: there is a BIG pitfall here!
In our real case, we use the crypto module to decrypt a JSON document that was encrypted and then base64-encoded by Ruby:
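The first implementation lets the decipher base64-decode the input itself, by passing 'base64' as the input encoding (a sketch; the construction of the decipher is omitted here):

Shortcut decoding (sketch)

decrypted = decipher.update(data, 'base64', 'utf8') # ask Decipher to base64-decode the input
decrypted += decipher.final('utf8')
JSON.parse(decrypted)

The second implementation decodes the Base64 manually with a Buffer first: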
binary = new Buffer(data, 'base64') # manually base64-decode
decrypted = decipher.update(binary, 'binary', 'utf8') # set input encoding to 'binary'
decrypted += decipher.final('utf8')
JSON.parse(decrypted)
The 2 implementations are very similar, except that the second one base64-decodes the data manually using Buffer. Ideally they should be equivalent in behavior, but in fact they are NOT!
The first implementation throws “TypeError: DecipherFinal fail”. The reason is that the shortcut way doesn’t ignore the line breaks, but Buffer does!!! So in the first implementation, the data is polluted by the line breaks.
Conclusion
Be careful when you ask an API to base64-decode data by setting the encoding argument to ‘base64’: its behavior is inconsistent with the Buffer class.
I’m not sure whether this is a Node.js bug or by design, but it is indeed a pitfall that hides very deep and is usually extremely hard to figure out, since encrypted binary is hard for a human to read, and debugging across 2 languages is also kind of hard!
We encountered a very weird runtime error today, after migrating some data from a legacy database.
Because there was no change to the models, we just created the tables and copied the data from the legacy database directly. To ensure the migration didn’t break anything, we also wrote some migration tests to verify data integrity, and all the tests passed.
Everything looked perfect until the app went live: the app crashed occasionally when trying to create a new data record. Sometimes it worked fine, but sometimes we got an error saying “duplicate key value violates unique constraint ‘xxxxx_pkey’”.
It was weird, because we were really confident about our unit tests and migration tests; the problem should not be related to the migration or the logic.
After some manual tests, we found we also got the error when creating an entry with a raw SQL INSERT query. So it seemed to be a Postgres issue, and the problem was caused by the primary key, which is an auto-generated id.
Postgres uses sequences to generate auto-increment indexes. A sequence remembers the last index it generated and calculates the new index by adding 1. During the data migration, we copied the data rows from the legacy table into the new table, and to keep the relationships between records, we also copied the primary key of each row. As a result, although we had inserted a number of records into the table, the sequence bound to the primary key was never updated.
For example, we have inserted the following 3 entries:
{id: 1, name: ‘Jack’}
{id: 2, name: ‘Rose’}
{id: 4, name: ‘Hook’}
But because the ids were inserted explicitly, the sequence is still at 1. So when we execute the following SQL:
Insert entry
INSERT INTO users (name)
VALUES ('Robinhood');
The sequence generates 1 as the id, which conflicts with the entry {id: 1, name: 'Jack'}, and the database yields the “duplicate key” exception. But the ids are usually not continuous, because records get deleted, leaving “holes” among them; so our app can insert an entry successfully whenever the new entry falls into one of the holes, which explains why the crash was only occasional.
To solve the problem, we need to update the sequences of the table as well, including the primary key sequence. Postgres allows a sequence to be updated with the ALTER SEQUENCE command, so we can restart the sequence at a big enough integer:
Update Sequence
ALTER SEQUENCE users_id_seq RESTART 10000
A smarter way is to query the table for the maximum id and set the sequence to that number, so the next generated id is max + 1:
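For example (assuming the default sequence name users_id_seq):

Set sequence from max id

SELECT setval('users_id_seq', (SELECT MAX(id) FROM users));
-- nextval('users_id_seq') now returns MAX(id) + 1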