Asset path in Android Studio

According to the Android guideline documents, the assets folder is created automatically and is located under the root of the project folder.

So the assets folder should be located at <project root>/assets

But in my experience, that is not true, or is only partially true.

The path to the assets folder varies with the build system being used, which can be confusing and make things more complicated than they need to be.

ADT

For the traditional ADT build system, the assets folder is located at <project root>/assets

Gradle

The new Gradle build system changed the project structure definition, which is slightly different from the ADT one.

The Gradle build system treats the assets folder as part of the “source code”, so the assets folder should be located at <project root>/src/main/assets/.

For more detailed information, check out this document.

Android Studio

In Android Studio (this also applies to IntelliJ IDEA), things get a little more complicated: the assets path can be configured per project.

Android Studio stores this path in the module file (*.iml), which is an XML file. In that file, under the XPath /module/component[@name="FacetManager"]/facet[@type="android"]/configuration, there may be an <option> node named ASSETS_FOLDER_RELATIVE_PATH that describes the path.

<option name="ASSETS_FOLDER_RELATIVE_PATH" value="/assets" />

If the option element with that name doesn’t exist, create it manually.

Conclusion

  • Using Eclipse + ADT, place the assets at <project root>/assets

  • Using Android Studio (or IntelliJ) with Ant or Maven, place the assets at <project root>/assets and set ASSETS_FOLDER_RELATIVE_PATH to /assets

  • Using Android Studio with Gradle, place the assets at <project root>/src/main/assets/, set ASSETS_FOLDER_RELATIVE_PATH to /src/main/assets, and invoke mergeAssets during the build
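If you want Gradle to keep the ADT-style location (or any custom one), the source set can be redirected in build.gradle. A sketch, assuming the Android Gradle plugin (the exact DSL may vary between plugin versions):

```groovy
android {
    sourceSets {
        main {
            // Point the main source set's assets at the ADT-style location
            // instead of the default src/main/assets.
            assets.srcDirs = ['assets']
        }
    }
}
```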

Dynamically inflating UI in an Android app

There is a fascinating idea: inflate the UI from an Android layout XML downloaded from a server. According to the Android API, it looks quite feasible.

One of the LayoutInflater.inflate overloads accepts the layout XML as an XmlPullParser.

And an XmlPullParser can wrap an input stream, so the following code seems like it should work:

Inflate view on the fly
public class DynamicView extends FrameLayout {
    public DynamicView(Context context, InputStream layoutData) throws XmlPullParserException {
        super(context);
        createView(context, layoutData);
    }

    private void createView(Context context, InputStream layoutData) throws XmlPullParserException {
        LayoutInflater inflater = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);

        XmlPullParserFactory factory = XmlPullParserFactory.newInstance();
        factory.setNamespaceAware(true);
        XmlPullParser parser = factory.newPullParser();
        parser.setInput(layoutData, "UTF-8");

        inflater.inflate(parser, this, true);
    }
}

The code looks great and compiles fine, but when it executes, the inflater throws an exception.

According to the LayoutInflater documentation, this approach won’t work (at least for now, it won’t):

For performance reasons, view inflation relies heavily on pre-processing of XML files that is done at build time. Therefore, it is not currently possible to use LayoutInflater with an XmlPullParser over a plain XML file at runtime.

In fact, the Android build tools compile the layout XML files into a binary XML block, converting attributes into a special format. And in the Android SDK, LayoutInflater uses XmlResourceParser, created by XmlBlock, instead of a plain XmlPullParser.

XmlBlock is an internal class used by Resources to cache binary XML documents.

And there is no way to create an XmlResourceParser yourself, or otherwise inject custom behavior into this process. I assume this is closely tied to the Android resource and theming mechanism: there are many cross references between resources, and to make them work efficiently the Android runtime does a lot of work, such as caching and pre-processing. Overriding this behavior would require a fair amount of work and careful attention to performance, since inflation can happen quite often during navigation.
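By contrast, inflating from a compiled layout resource does work, because Resources hands back the XmlResourceParser that LayoutInflater expects. A sketch (R.layout.dynamic_view and parent are placeholder names for illustration):

```java
// A compiled layout can be inflated at runtime via its parser:
// getLayout() returns the XmlResourceParser backed by the binary XML block.
XmlResourceParser parser = context.getResources().getLayout(R.layout.dynamic_view);
View view = LayoutInflater.from(context).inflate(parser, parent, false);
```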

As a less fancy alternative, UI based on HTML hosted in a WebView could be considered.

Reuse Android built-in Bluetooth Device Picker

Most Android devices support Bluetooth, and most Android ROMs have a built-in Bluetooth device picker, which is available to other system apps for selecting a Bluetooth device. Theoretically, the Bluetooth device picker could be reused by any app, but for some reason the API is undocumented and not published.

It is still possible to reuse it, though, so I wrote the following code. But because it relies on an undocumented API, it is not guaranteed to work on all Android devices.

Code is available as Gist

In the code I use Android Annotations, but it should be easy to remove that dependency by adding a constructor to BluetoothDeviceManager that accepts a Context.
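The core of the approach can be sketched as follows. The action and extra strings come from the AOSP source; they are not part of the public SDK, so they may differ or be missing on some ROMs:

```java
// Undocumented intent actions from AOSP; treat them as assumptions, not API.
static final String ACTION_LAUNCH = "android.bluetooth.devicepicker.action.LAUNCH";
static final String ACTION_DEVICE_SELECTED = "android.bluetooth.devicepicker.action.DEVICE_SELECTED";

void launchDevicePicker(Context context) {
    // Register for the picker's reply broadcast before launching it.
    context.registerReceiver(new BroadcastReceiver() {
        @Override
        public void onReceive(Context ctx, Intent intent) {
            // EXTRA_DEVICE is public API and carries the selected device.
            BluetoothDevice device = intent.getParcelableExtra(BluetoothDevice.EXTRA_DEVICE);
            ctx.unregisterReceiver(this);
            // ... use the selected device ...
        }
    }, new IntentFilter(ACTION_DEVICE_SELECTED));

    context.startActivity(new Intent(ACTION_LAUNCH));
}
```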

Adjust file encoding in the Finder context menu: "GB2312 to UTF-8 with 1 click"

One of the most annoying things about the Mac is encoding, especially if you’re living in a non-Mac world.

Mac uses UTF-8 as the default encoding for text files, but Windows uses the local encoding, which changes with the OS language. For Chinese users, Windows uses GB2312 as the default encoding.

So movie subtitle files, song lyrics, plain-text novels, and code containing Chinese, downloaded from web sites or received from others, often cannot be read because of the wrong encoding.

So I really wished for an item in Finder’s context menu that adjusts the encoding of the selected files with one click.

Luckily, with the help of Ruby, an Automator workflow, and the Mac OS X services mechanism, it isn’t that hard.

Basically, OS X loads all the workflow files saved in ~/Library/Services/ and displays them in Finder’s context menu.

To build the service, work through the following steps:

1. To create a new service, just pick Service in Automator’s ‘create new document’ dialog.

2. Set service input as “files and folders from any application”.

3. Run a Ruby script to transcode the files

Add a “Run Shell Script” action to execute the following Ruby code, which transcodes the files passed to the service. (For more detail about how to embed Ruby in a workflow, check out Using RVMed Ruby in Mac Automator Workflow.)

Make sure the input is passed as arguments to the Ruby script.

Transcode the files
old_files = []

ARGV.each do |name|
  next unless File.file? name

  backup_name = name + '.old'
  File.rename name, backup_name

  source = File.open backup_name, 'r:GB2312:UTF-8'
  dest = File.open name, 'w'

  while line = source.gets
    dest.puts line
  end

  source.close
  dest.close

  puts name
  old_files << backup_name
end

ENV['Transcode_Backup_Files'] = old_files.join('|')

4. Display a growl message when processing is done

5. Prompt user whether to keep the backup files

I use an “Ask for Confirmation” action to ask whether the user wants to keep the backup files.
The workflow aborts if the user clicks “No”, so make sure you update the text on the buttons and that the labels end up on the right buttons.

6. Add script to remove backup files

Add another “Run Shell Script” action to execute another piece of Ruby code.

Remove backup files
if ENV['Transcode_Backup_Files']
  ENV['Transcode_Backup_Files'].split('|').each do |file|
    File.delete file
  end

  ENV.delete 'Transcode_Backup_Files'
end

7. Display a notification to tell the user that the backup files have been deleted

TIP: The transcoding Ruby script requires Ruby 1.9+, but Mac OS X provides Ruby 1.8.7 by default, which doesn’t support encodings. To interpret workflow-embedded code with Ruby 1.9+, refer to Using RVMed Ruby in Mac Automator Workflow

Using RVMed Ruby in Mac Automator Workflow

HINT This content is obsolete on OS X 10.9 Mavericks

Embed Ruby into Automator Workflow

An Automator workflow can execute Ruby code, but it is not that obvious if you don’t know about it.

To embed Ruby code into a workflow, create a “Run Shell Script” action first, then choose “/usr/bin/ruby” as the shell. The script in the text box will then be parsed and executed as Ruby code.

Ruby In Automator

So now you know how to embed Ruby into an Automator workflow.

Use RVM ruby instead of System Ruby

By default, Automator loads the system Ruby at /usr/bin/ruby, which is Ruby 1.8.7 without Bundler support. Most Ruby developers have installed some kind of Ruby version manager, such as RVM or rbenv. As for me, I use RVM. So I wished I could use an RVM-managed Ruby, say 1.9.3 or even 2.0 with Bundler support, rather than the system Ruby.

To use the RVMed Ruby, I tried several approaches, hacking different configurations and files. In the end, I made it work like this:

RVM provides a ruby executable at ~/.rvm/bin/ruby. On the other hand, /usr/bin/ruby is actually a symbolic link pointing to ‘/System/Library/Frameworks/Ruby.framework/Versions/Current/usr/bin/ruby’.

So all we need to do is replace the symbolic link with a new one pointing to ~/.rvm/bin/ruby.

Replace system ruby with RVMed ruby
sudo mv /usr/bin/ruby /usr/bin/system_ruby
sudo ln -s /Users/timnew/.rvm/bin/ruby /usr/bin/ruby

(You might need to replace /Users/timnew/.rvm/bin/ruby with the path to your own ruby executable.)

After doing this, you have the RVMed Ruby in your Automator workflows.

You can execute the following code in a workflow to verify it:

Test Ruby Version
puts RUBY_VERSION

If you did it correctly, you should see ‘1.9.3’ or whichever version of Ruby you have configured.

Arduino IDE 1.5.3 is too buggy to use, and workarounds

The Arduino IDE 1.5.3 introduced some new features, such as support for the latest board, the Yún, and new libraries and samples. But I found it too buggy to use.

1. Compiling code for the Arduino Nano fails because the mcu parameter passed to avrdude is missing.

Reason:
The issue seems to be caused by the new IDE merging the menu items for the Nano boards, but for some reason the configuration wasn’t updated accordingly.

Workaround:
Choose Arduino Duemilanove or Diecimila instead of Arduino Nano. The Nano uses the same chip as the Duemilanove (ATmega328) or Diecimila (ATmega168) but a different PCB design, so the binary should be compatible.

2. Compiling String(100, DEC) throws an ambiguous-match error

Reason:
The issue is caused by the API signature being updated to String(*, unsigned char), while the base constants are still declared as int.

Workaround:
Force-cast DEC, HEX, and BIN to byte instead of int.
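For example, the cast workaround looks like this (a sketch; byte is the Arduino typedef for uint8_t):

```cpp
// DEC is declared as an int constant; casting it to byte selects the
// String(value, unsigned char) overload unambiguously.
String s = String(100, (byte) DEC);
```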

Pitfall in the isEnum method in Java

I found a very interesting phenomenon in Java type reflection while building an Android app.
I was trying to build a common mechanism that serializes an enum into an integer when writing it into the database. To make it more flexible, I fetch the value type dynamically via reflection. So I had the following code to check whether the value to be written is an enumeration:

Code to check enumeration
CONVERTERS.add(new ValueConverter() {
    @Override
    public boolean match(Object value) {
        Class type = value.getClass();
        return type.isEnum(); // Doesn't work
    }

    @Override
    public String convert(Object value) {
        return String.valueOf(((Enum) value).ordinal());
    }
});

I built the converters into a chain of responsibility; a converter is applied only when its match method returns true.

In the converter, I check the type with the isEnum method, expecting it to return true when the value is an enumeration. But I later found that it doesn’t work as expected, and the behavior of this method is really confusing!

Here is how it works:

How isEnum works
public enum ServiceStatus {
    NOT_COVERED,
    PARTIAL,
    FULL { /* a constant with a class body */ }
}

assertThat(ServiceStatus.class.isEnum()).isTrue();
assertThat(ServiceStatus.FULL.getClass().isEnum()).isFalse();

Due to the implementation of Java enumerations, the definition of an enumeration value with a class body can be understood as the following code:

Java Enumeration pseudo-code
class ServiceStatus$2 extends ServiceStatus {
    // the body declared on FULL
}

public static final ServiceStatus FULL = new ServiceStatus$2();

So the value FULL has a different type from ServiceStatus as one might expect: the type of FULL is actually a subclass of ServiceStatus, and the enumeration value FULL is a singleton instance of that anonymous subclass.

And the most unexpected behavior is that isEnum only returns true on the enumeration class itself, not on its subclasses!

To resolve this issue gracefully, I changed my implementation a little bit. Here is the updated implementation:

Updated implementation
CONVERTERS.add(new ValueConverter() {
    @Override
    public boolean match(Object value) {
        Class type = value.getClass();
        return Enum.class.isAssignableFrom(type);
    }

    @Override
    public String convert(Object value) {
        return String.valueOf(((Enum) value).ordinal());
    }
});

I use isAssignableFrom to check whether the value’s class is a subclass of Enum, i.e. whether the value can be cast to Enum. This approach solved the issue gracefully.
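The behavior can be verified with plain Java. A sketch (Status and its constants are made up for illustration): only a constant that declares a class body gets its own anonymous subclass.

```java
// Demonstrates which classes isEnum() returns true for. WITH_BODY declares a
// class body, so the compiler generates an anonymous subclass for it.
enum Status {
    PLAIN,
    WITH_BODY {
        @Override
        public String toString() { return "with body"; }
    }
}

public class EnumPitfall {
    public static void main(String[] args) {
        System.out.println(Status.class.isEnum());                  // true
        System.out.println(Status.PLAIN.getClass().isEnum());      // true: no class body
        System.out.println(Status.WITH_BODY.getClass().isEnum());  // false: anonymous subclass
        System.out.println(Status.WITH_BODY.getDeclaringClass().isEnum());            // true
        System.out.println(Enum.class.isAssignableFrom(Status.WITH_BODY.getClass())); // true
    }
}
```

Note that Enum.getDeclaringClass() recovers the defining enum type for both kinds of constants, so it is another way to work around the pitfall.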

Android SQLite databases require a special table

I’ve been working on an Android application recently. During development, our new app inherited a legacy SQLite database created by a WinCE application.

Android claims that it supports SQLite databases, so we didn’t worry too much. But when we tried to open the legacy database with SQLiteDatabase.openDatabase, it threw an exception in our faces!

After some debugging, we found that opening a SQLite database with the SQLiteDatabase class requires the database to have a special table called android_metadata, which contains the locale information. SQLiteDatabase throws an exception when the table cannot be found.

The table is created automatically when SQLiteDatabase creates the database, but if the app needs to open a database not created by Android, a patch is needed.

Here is the script to patch the database:

Patch Database
CREATE TABLE android_metadata (locale TEXT);
INSERT INTO android_metadata VALUES ('en_US');
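The patch can also be applied from code the first time the app opens the legacy file. A sketch; my assumption here is that the NO_LOCALIZED_COLLATORS flag skips the locale setup that reads android_metadata, which is worth verifying on your target devices:

```java
// Open without localized collators so the missing android_metadata table is
// not consulted, then create it so subsequent normal opens succeed.
// Run this once; the INSERT is not guarded against duplicates.
SQLiteDatabase db = SQLiteDatabase.openDatabase(dbPath, null,
        SQLiteDatabase.OPEN_READWRITE | SQLiteDatabase.NO_LOCALIZED_COLLATORS);
db.execSQL("CREATE TABLE IF NOT EXISTS android_metadata (locale TEXT)");
db.execSQL("INSERT INTO android_metadata VALUES ('en_US')");
db.close();
```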

Node over Express - Autoload

Preface

This is the 2nd post of the Node over Express series (the previous one is Configuration). In this post, I’d like to discuss a famous pain point in Node.js.

Pain Point

There is a well-known Lisp joke:

A top hacker successfully stole the last 100 lines of a top-secret program from the Pentagon. Because the program was written in Lisp, the stolen code was just closing brackets.

The joke is that there are too many brackets in Lisp. Node.js has a similar issue: too many requires. Open any Node.js file, and you will usually find several lines of require.

Due to Node’s sandbox model, the developer has to require resources again and again in every file. It is not exciting to write or read lines of meaningless require statements. Worst of all, it can be a nightmare when a developer wants to replace one library with another.

Rails Approaches

“Require hell” isn’t unique to Node.js; Ruby apps have it too. Rails has solved it gracefully: the developer barely needs to require anything manually in Rails.

There are two kinds of dependencies in a Rails app: external resources and internal resources.

External Resources

External resources are classes encapsulated in Ruby gems. In a Ruby application, the developer describes the dependencies in a Gemfile and loads them with Bundler. Some frameworks, such as Rails, are already integrated with Bundler; when using them, the developer doesn’t need to do anything manually, and all the dependencies are required automatically. For the others, use bundle exec to create a Ruby runtime with all gems required.

Internal Resources

Internal resources are the classes declared in the app; they could be models, services, or controllers. Rails uses Railtie to require them automatically. A resource is loaded the first time it is used, so the requiring process is “lazy”. (In fact, this description isn’t quite precise, because Rails behaves differently in the production environment: it loads all the classes during launch for performance reasons.)

Autoload in Node.js

Rails avoids “require hell” with these two “autoload” mechanisms. There are still debates about whether autoload is good or not, but at the least, autoload frees the developer from dull dependency management and increases productivity. Developers love autoload in most cases.

So to avoid “require hell” in Node.js, I prefer an autoload mechanism. But because there are significant differences between the type systems of Node.js and Ruby, we cannot copy the mechanism from Ruby to Node as is. So before diving into the solution, we need to understand the differences first.

Node.js Module System

There are a number of similarities between Node.js and Ruby; things in Node.js usually have equivalents in Ruby. For example, a package in Node is similar to a gem in Ruby, npm corresponds to Gem and Bundler, and package.json takes on the responsibilities of Gemfile and Gemfile.lock. This similarity makes porting autoload from Ruby to Node feasible.

But there are also significant differences. One of the major ones is the type system and module sandbox in Node.js, which works quite differently from Ruby’s type system.

JavaScript isn’t a class-based OO language, so it doesn’t have a real type system. All the “types” in JavaScript are actually functions, which are stored in local variables rather than in a type system. Node.js loads files into separate sandboxes, and all local variables are isolated between files to avoid “global leaks”, a well-known, deep-seated bad part of JavaScript. As a result, a Node.js developer needs to require the types used again and again in every file.

In Ruby, it is a lot better: with the help of a well-designed type system, types are shared across the whole runtime, and a developer just needs to require the types not yet loaded.

So Node.js programs have many more require statements than Ruby ones, and due to the design of Node.js and JavaScript, the issue is harder to resolve.

Global Variable

In the browser, the JavaScript runtime other than Node, global variables are very common. Globals are easily abused, bringing global leaks to badly written JavaScript programs and driving thousands of developers up the wall. JavaScript developers are so scared of global leaks that they designed the strict isolation model in Node.js. To my mind, the isolation avoids global leaks effectively, but at the same time it brings tens of require statements to every file, which is also not acceptable.

In fact, with the help of JSLint, CoffeeScript, and some other tools, developers can avoid global leaks easily. And global sharing isn’t the source of all evil: if abuse is avoided, I believe a reasonable level of global sharing can be useful and helpful. Actually, Node.js has a built-in global sharing mechanism.

To share values across files, a special variable global is provided; it can be accessed in every file, and its value is shared across files.

Besides sharing values, global has another important feature: Node treats global as the default context, whose children you can refer to without naming it explicitly. So SomeType === global.SomeType.

With the help of global, we have a way to share types across files naturally.

JS Property

Rails’ autoload mechanism loads classes lazily: a class is loaded only when it is used for the first time. It is a neat feature, which Rails achieves by tracking the “uninitialized constant” exception. Implementing a similar feature in Node.js by tracking exceptions is hardly feasible, so I chose a different approach: properties.

A property (attribute in Ruby) lets a method (function) be invoked when a field of an object is accessed. Properties are a common feature in OO languages, but a “new” one for JavaScript: they are declared in the ECMAScript 5 standard, which lets developers declare a property on an object with the Object.defineProperty API. With a property, we can hook a callback onto a type variable and require the type when it is accessed, so a module won’t be required until it is used. On the other hand, Node’s require function has a built-in cache; it won’t load a file twice, but returns the value from its cache.

With property, we make the autoload lazy!

My Implementation

To make autoload work, we need to create a magic host object to hold the type variables; in my implementation, I call this magic object AutoLoader. We also need to require a bootstrap script when the app starts, which describes which types should be required and how.

Bootstrap Script: initEnvironment.coffee
global.createAutoLoader = require('./services/AutoLoader')
global.createPathHelper = require('./services/PathHelper')

global.rootPath = createPathHelper(__dirname, true)

global.Configuration = require(rootPath.config('configuration'))

global.Services = createAutoLoader rootPath.services()
global.Routes = createAutoLoader rootPath.routes()
global.Records = createAutoLoader rootPath.records()
global.Models = createAutoLoader rootPath.models()

global.assets = {} # initialize this context for connect-assets helpers

The script sets up the autoload hosts for all the services, routes, records, and models in my app. Then we can reference the types as follows:

Sample Usage
Records.User.findById uid, (err, user) ->
  badge = new Models.Badge(badgeInfo)
  user.addBadge badge
  user.save()

In the initEnvironment.coffee script, two important classes are used:

  • AutoLoader: The class that works as the type variable hosts. All the magic happens here.
  • PathHelper: The class used to handle the path combination issue.

The detailed implementation is here:

Autoload
_ = require('lodash')
path = require('path')
fs = require('fs')

createPathHelper = require('./PathHelper')

createLoaderMethod = (host, name, fullName) ->
  host.__names.push name
  Object.defineProperty host, name,
    get: ->
      require(fullName)

class AutoLoader
  constructor: (source) ->
    @__names = []
    for name, fullName of source
      extName = path.extname fullName
      createLoaderMethod(this, name, fullName) if require.extensions[extName]? or extName == ''

expandPath = (rootPath) ->
  createPathHelper(rootPath).toPathObject()

buildSource = (items) ->
  result = {}
  for item in items
    extName = path.extname(item)
    name = path.basename(item, extName)
    result[name] = item
  result

createAutoLoader = (option) ->
  pathObj = switch typeof(option)
    when 'string'
      expandPath(option)
    when 'object'
      if option instanceof Array
        buildSource(option)
      else
        option
  new AutoLoader(pathObj)

createAutoLoader.AutoLoader = AutoLoader

exports = module.exports = createAutoLoader

PathHelper
_ = require('lodash')
fs = require('fs')
path = require('path')

createPathHelper = (rootPath, isConsolidated) ->
  rootPath = path.normalize rootPath

  result = (args...) ->
    return rootPath if args.length == 0
    parts = _.flatten [rootPath, args]
    path.join.apply(this, parts)

  result.toPathObject = ->
    self = result()
    files = fs.readdirSync(self)
    pathObj = {}
    for file in files
      fullName = path.join(self, file)
      extName = path.extname(file)
      name = path.basename(file, extName)
      pathObj[name] = fullName
    pathObj

  result.consolidate = ->
    pathObj = result.toPathObject()
    for name, fullName of pathObj
      stats = fs.statSync(fullName)
      result[name] = createPathHelper(fullName) if stats.isDirectory()
    result

  if isConsolidated
    result.consolidate()
  else
    result

exports = module.exports = createPathHelper

The code above is part of Node over Express; to access the complete codebase, please check out the repo on GitHub.


Besides the content, I want to say thank you to my English teacher, Marina Sarg, who helped me a lot on this series of blogs. Without her, there wouldn’t be this series. Marina, thank you very much.