Friday, December 24, 2010

another free uml tool, bouml

Having tried metauml as my primary design utility recently, the experience wasn't as good as I expected. At the beginning, I thought a pure textual editing tool would let me focus more on the content and design, but the result was very frustrating: I had to pay more attention to organization and layout than to the design itself. metauml doesn't help organize elements at all; it leaves everything to me. I have to decide in advance where each element should be placed and instruct metauml to place it explicitly, in either absolute or relative coordinates. That seems fine for simple diagrams with very few elements, but as the number of elements grows, planning ahead becomes very hard. Once I make a mistake, the cost of changing it is huge; I may have to re-organize everything to keep the diagram clean.
I found a graphical tool, bouml, to be a great replacement. It's efficient and powerful. bouml saves everything as text, so it's also possible to put projects under version control. The best feature I've noticed so far is its ability to reverse engineer c++ code: by feeding it source code directories, it generates a full list of the classes and methods that exist in the code. It's very helpful for analyzing other people's projects.

Tuesday, December 21, 2010

common programming and debugging tools

To help me memorize commonly used programming and debugging tools on windows and linux, I created a wiki page here:

Sunday, December 19, 2010

port exosip to android

Porting exosip to the android platform isn't a difficult task because exosip doesn't rely on any special system calls that are unavailable on android. It only requires osip to compile, which can also be ported easily.
As an example, I created two applications that run against the exosip lib. One is a native application that runs in the android shell; the other is a java application that interacts with exosip through jni. The dependency relationship between the modules is:

The diagram below depicts the organization of the files. They're organized this way so that they can be compiled with the android ndk build system.
exosip_root (NDK_MODULE_PATH environment variable points here)
To comply with the ndk build system's requirements, we create a directory named jni under the sip_jni and sip_exe modules, and place the actual source files there. The and (optional) are placed in the jni directory as well. With the files organized this way, we can issue the ndk-build command right in the sip_jni or sip_exe directory to compile the applications.
Note that we don't create a jni directory for libosip and libexosip; their is placed directly under the libosip and libexosip directories. This is because they are depended on by the application modules. We don't need to compile them directly; instead, the build system builds them automatically while building the applications. To help the build system find libosip and libexosip, we must set the NDK_MODULE_PATH environment variable to the directory directly containing libosip and libexosip. The build system searches for them by directory name, so their names matter.

To port exosip to android, the essential task is to create the file, which specifies the source files, c flags, ld flags, and dependent libraries. We define HAVE_TIME_H and HAVE_SYS_SELECT_H to compile exosip successfully, and we define ENABLE_TRACE and OSIP_MT to enable logging and multi-threading. In the last line, $(call import-module,libosip) tells the build system that exosip depends on osip, and all header files exported by osip (LOCAL_EXPORT_C_INCLUDES := $(LOCAL_C_INCLUDES)) will be included by exosip automatically.
import-module is a feature that isn't available in the build system in the android source tree. It lets us organize our projects in an arbitrary manner. For example, we can place the libosip and libexosip directories in another directory; the build system can still find them as long as NDK_MODULE_PATH is set to the directory containing them. It's much more flexible than specifying dependency relationships with relative paths.
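A sketch of what such an could look like is below. The module name and the defines match the text above, but the source list, include path, and library layout are illustrative, not the exact file from the sample project:

```makefile
LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)
LOCAL_MODULE := libexosip
# gather sources; the actual file list comes from the exosip tree
MY_SRC_FILES := $(wildcard $(LOCAL_PATH)/src/*.c)
LOCAL_SRC_FILES := $(MY_SRC_FILES:$(LOCAL_PATH)/%=%)
LOCAL_C_INCLUDES := $(LOCAL_PATH)/include
# defines mentioned in the post
LOCAL_CFLAGS := -DHAVE_TIME_H -DHAVE_SYS_SELECT_H -DENABLE_TRACE -DOSIP_MT
# export our headers to any module that imports us
LOCAL_EXPORT_C_INCLUDES := $(LOCAL_C_INCLUDES)
include $(BUILD_STATIC_LIBRARY)

# declare the dependency; the build system searches NDK_MODULE_PATH for 'libosip'
$(call import-module,libosip)
```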

The sample for this post is available at:
It shows how to:
  1. use ndk build system
  2. use stl in c++ code
  3. create a very basic exosip application
  4. call native code from java
  5. call java code from native code
ndk document
exosip user manual

Tuesday, December 14, 2010

tools for working with android jni

When we use jni on android, it's an error-prone task to manually write the native function name for a given java native method. Though there are rules we could follow, no one wants to memorize them. javah to the rescue.
For example, consider the following java class with a native method.
package com.rmd.jni;

import android.os.Bundle;

public class Main extends Activity
{
    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState)
    {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
    }

    public native boolean JniMethod(int a);
}

We can follow the steps below to generate a native header file:
  1. cd to the project root folder
  2. Compile the project, e.g. with the ant debug command
  3. run javah -jni -classpath bin/classes -d jni com.rmd.jni.Main (this command generates a native header file in the jni directory for the com.rmd.jni.Main class)

And we get a header file com_rmd_jni_Main.h with content below:

/* DO NOT EDIT THIS FILE - it is machine generated */
#include <jni.h>
/* Header for class com_rmd_jni_Main */

#ifndef _Included_com_rmd_jni_Main
#define _Included_com_rmd_jni_Main
#ifdef __cplusplus
extern "C" {
#endif
/*
 * Class:     com_rmd_jni_Main
 * Method:    JniMethod
 * Signature: (I)Z
 */
JNIEXPORT jboolean JNICALL Java_com_rmd_jni_Main_JniMethod
  (JNIEnv *, jobject, jint);

#ifdef __cplusplus
}
#endif
#endif

There are cases where we need to provide the method signature of a java function to the GetMethodID function in native code. javap can help us.
Consider the java class below, which contains a JniCallback method meant to be called from native code.

package com.rmd.jni;

import android.os.Bundle;

public class Main extends Activity
{
    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState)
    {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
    }

    public boolean JniCallback(int a)
    {
        return true;
    }
}

After it's compiled, we can use the javap -classpath bin/classes -s -p com.rmd.jni.Main command to find out the signature of the JniCallback method, shown below.
Compiled from ""
public class com.rmd.jni.Main extends {
public com.rmd.jni.Main();
  Signature: ()V
public void onCreate(android.os.Bundle);
  Signature: (Landroid/os/Bundle;)V
public boolean JniCallback(int);
  Signature: (I)Z
}

Java Native Interface Programming

Wednesday, December 8, 2010

learning opencore through unit testing

Android 2.3 gingerbread has been officially released. A big improvement in this version is the media framework, as stated in the platform highlights:

Media Framework

  • New media framework fully replaces OpenCore, maintaining all previous codec/container support for encoding and decoding.
  • Integrated support for the VP8 open video compression format and the WebM open container format
  • Adds AAC encoding and AMR wideband encoding

I haven't been able to find out what the replacement for opencore is yet. But before opencore becomes obsolete, I'd still like to share my way of learning it. This might be helpful for those who still have to work on legacy android platforms.

Opencore is a complicated multimedia framework. An effective and easier way to learn it is to read its unit testing code and debug the unit testing application. The advantages include:
  1. The code can be compiled as an x86 executable, so we can run it directly on our pc, rather than on an android device or emulator.
  2. The application can be debugged with gdb.
  3. Unit tests demonstrate the simplest usage of opencore. That's particularly useful for a fresh learner getting started.
  4. Each test case has its own focus and is easier to follow.
So, if we use the unit testing code as a means of learning opencore, instead of trying to understand the full-featured mediaserver, things become much easier. We can find the simplest way to initialize opencore and play a file. For any specific requirement we need to implement, we can always find a corresponding test case in the unit test guide and refer to its source code.

How to debug opencore with gdb
Before we use gdb to debug the unit testing application, we should be aware of a helpful gdb feature: showing the real type of an object through a pointer.
The opencore framework always manipulates a concrete object through an interface or base-class pointer, so during debugging we often get lost about the actual type of the object a pointer points to. Gdb can print the real type of an object if the pointer points to an object with a virtual table. This feature can be turned on with the "set print object" command.
For example, if we break at the line that returns a pointer in the PVPlayerNodeRegistry::CreateNode function and issue the "p nodeInterface" command without print object turned on, we see:
$1 = (class PVMFNodeInterface *) 0x83c65c0
The type of the pointer is shown as PVMFNodeInterface, which isn't very informative because it's the base type. If we turn print object on and run the command again, we see:
$2 = (PVMFMP4FFParserNode *) 0x83c65c0
The real type of the object is clearly printed. A very useful feature for debugging complex frameworks.
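What "set print object" does can be reproduced in miniature: gdb follows the object's vtable to recover its dynamic type, the same information C++ exposes through RTTI. A small sketch, with made-up class names rather than opencore's real ones:

```cpp
#include <cassert>
#include <typeinfo>

// Stand-ins for opencore's interfaces; the names are invented.
struct NodeInterface {
    virtual ~NodeInterface() {}   // a virtual function gives the object a vtable
};

struct MP4ParserNode : NodeInterface {};

// gdb's "set print object on" follows the vtable to recover the dynamic
// type, just as typeid on a dereferenced base pointer does here:
static bool reports_dynamic_type()
{
    MP4ParserNode node;
    NodeInterface *p = &node;                    // static type: NodeInterface*
    return typeid(*p) == typeid(MP4ParserNode);  // dynamic type wins
}
```

On code like this, gdb would print the pointer as (NodeInterface *) by default, and as the derived type once print object is turned on.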
We can follow the steps below to compile and debug the unit testing application:
  1. Compile the app by following the instructions in quick_start.txt
  2. cd to {open_core_root}/build_config/opencore_dynamic/build/pe_test. We need to start debugging here; otherwise the application may fail to find necessary libraries and media files.
  3. export LD_LIBRARY_PATH=../installed_lib/linux
  4. gdb ../bin/linux/pvplayer_engine_test
  5. set desired breakpoints
  6. run -source test.mp4 -test 1 1. The source and test case number arguments can be changed accordingly.

Sunday, December 5, 2010

python decorators

The decorator is a very expressive language feature in python. It helps developers write cleaner, more modular code that is easier to extend and maintain. It also helps implement AOP and the decorator pattern in python.

Decorator syntax
To declare a decorator, we define a function that takes another function as an argument and returns a function. Using a decorator is very simple: just add a line beginning with the '@' symbol and the decorator's name before the function to be decorated.

def decorator(func):
    def new_func():
        print "decorator message"
        func()
    return new_func

@decorator
def foo():
    print "hello world"

foo()

The effect is that when we call the foo function, besides the "hello world" message, the message "decorator message" is also printed. That is, the decorator function extends foo's behavior.
The code above is equivalent to:

def decorator(func):
    def new_func():
        print "decorator message"
        func()
    return new_func

def foo():
    print "hello world"

foo = decorator(foo)
foo()

A decorator can also accept arguments, as long as it returns a decorator function that takes a function as its argument.
def decorator_with_arg(arg):
    def decorator(func):
        def new_func():
            print arg
            func()
        return new_func
    return decorator

@decorator_with_arg("arg for decorator")
def foo():
    print "hello world"

foo()

The example above is equivalent to :

def decorator_with_arg(arg):
    def decorator(func):
        def new_func():
            print arg
            func()
        return new_func
    return decorator

def foo():
    print "hello world"

foo = decorator_with_arg("arg for decorator")(foo)
foo()
The decorator_with_arg function creates a closure, so the arg argument can still be used after decorator_with_arg returns. Since a decorator can accept arguments, its behavior can be changed based on the arguments passed in, so it's possible to write more flexible code.
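One detail the equivalences above gloss over: because foo is replaced by new_func, the decorated function loses its original __name__ and docstring. The standard library's functools.wraps fixes that; a minimal sketch (using a return value instead of print so it runs unchanged on python 2 and 3):

```python
import functools

def decorator(func):
    # functools.wraps copies func.__name__, __doc__, etc. onto the
    # wrapper, so the decorated function still introspects like the
    # original instead of showing up as "new_func".
    @functools.wraps(func)
    def new_func(*args, **kwargs):
        return func(*args, **kwargs)
    return new_func

@decorator
def foo():
    """say hello"""
    return "hello world"
```

After this, foo.__name__ is still "foo" and foo.__doc__ is still "say hello", which matters a lot for debugging utilities like the tracing example below that key on function names.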

A more practical example
Here is a more practical example. We create a tracable decorator as a debugging utility: it keeps a record of the number of times each function decorated with it is invoked.
trace_log = {}

def tracable(func):
    def decorated_func(*arg, **kwarg):
        if func.__name__ not in trace_log:
            trace_log[func.__name__] = 1
        else:
            trace_log[func.__name__] += 1
        func(*arg, **kwarg)

    return decorated_func

def print_trace_log():
    for key, value in trace_log.items():
        print "%s called %d times" % (key, value)

@tracable
def foo1():
    print "foo1"

@tracable
def foo2(arg, kwd="keyword arg"):
    print "foo2"
    print arg
    print kwd

@tracable
def foo3():
    print "foo3"

foo1()
foo1()
foo1()
foo2(12)
foo2(13)

print_trace_log()

If we run the code, it prints that the foo1 function was called three times and the foo2 function twice.
As the example shows, the code implementing tracing is separated from the business logic contained in foo1 and foo2. The maintainability of the program is much higher than if the tracing code were mixed with the business logic.

We get several benefits from decorators.
First, they help achieve a better separation of business logic and auxiliary code.
Second, they make it easy to find where the auxiliary code is used, because decorators employ a very distinctive syntax.
Third, we can extend or change our business logic without having to change existing code; the new code can be implemented as a decorator.

Charming Python: Decorators make magic easy
Decorators for Functions and Methods

Saturday, December 4, 2010

install fcitx input method on ubuntu

The default scim input method doesn't work well with freemind on my computer, so I'd like to switch to fcitx. The fcitx package in the ubuntu repository is an old version, and installing fcitx manually isn't very straightforward, so I'm making this post about how I managed to get it working.

1. Download fcitx source code and extract.
2. Make sure all dependent libraries are installed:
sudo apt-get install libpango1.0-dev libcairo2-dev xorg-dev libtool gettext intltool
3. Run configure && make && make install
4. At this point, fcitx failed to run and gave the following error: fcitx: error while loading shared libraries: cannot open shared object file: No such file or directory. To fix it, create a symbolic link in /usr/lib with this command:

sudo ln -s /usr/local/lib/ /usr/lib/
5. Create the /etc/X11/xinit/xinput.d/fcitx file with the content below. Alternatively, we can place the settings in ~/.profile.
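The original content of the file wasn't preserved in this post. A typical xinput.d entry for fcitx sets the input-method variables like this (the fcitx path may differ on your machine):

```shell
XIM=fcitx
XIM_PROGRAM=/usr/local/bin/fcitx
XIM_ARGS=""
GTK_IM_MODULE=xim
QT_IM_MODULE=xim
DEPENDS="fcitx"
```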
6. Edit the /usr/lib/gtk-2.0/2.10.0/immodule-files.d/libgtk2.0-0.immodules file, changing the line
"xim" "X Input Method" "gtk20" "/usr/share/locale" "ko:ja:th:zh"
to
"xim" "X Input Method" "gtk20" "/usr/share/locale" "en:ko:ja:th:zh"
7. Restart

To make fcitx more convenient to use, I ran fcitx-config and changed the configurations below.
# Candidate Word Number
# Main Window Hide Mode
# Show Input Window After Trigger Input Mode
# Show Input Speed


Friday, November 26, 2010

update to latest metauml

The metauml package contained in the default texlive installation on ubuntu is the old version 0.2.3, which doesn't support component diagrams. I had to update to the latest version manually. Here is how:

  1. backup /usr/share/texmf-texlive/metapost/metauml
  2. download metauml here
  3. extract and copy everything under metauml/inputs to /usr/share/texmf-texlive/metapost/metauml
  4. run "sudo texhash" to refresh texmf database, so that new version will be recognized

Now try the following diagram; it should compile fine.
input metauml;
beginfig(1);
  Component.C("ComponentAAA")();
  drawObject(C);
endfig;
end

Tuesday, November 23, 2010

draw uml with latex&metauml

I've used and enjoyed the benefits of revision control systems for several years. RCS makes my life a lot easier, and it pushes me to highly prefer text files over binary files, because text files can be managed by RCS more easily. metauml is a metapost library for creating uml diagrams in text format.

Here are the steps to use it on ubuntu:
  1. install texlive with: sudo apt-get install texlive
  2. install metauml, contained in the metapost package for tex: sudo apt-get install texlive-metapost
Texlive is also available for windows.
After all these tools have been installed, we can start our first uml diagram in latex.
Suppose we have the following classes and want to draw a class diagram for them.
class Point
{
public:
    int x;
    int y;
};

class Shape
{
public:
    virtual int get_circumference() = 0;
    virtual ~Shape();
};

class Circle : public Shape
{
private:
    Point center;
    int radius;
public:
    int get_circumference();
};

We create the metapost file below, save it as, and use the "mpost" command to generate a postscript file, class_diagram.1.
input metauml;
beginfig(1);
    %define classes
    Class.Point("Point")("+x: int", "+y: int")();
    Class.Shape("Shape")()("+get_circumference(): int");
    Class.Circle("Circle")("-center: Point", "-radius: int")("+get_circumference(): int");

    %layout classes
    topToBottom(50)(Point, Circle);
    leftToRight(50)(Circle, Shape);

    %draw classes
    drawObjects(Point, Shape, Circle);

    %link classes
    link(inheritance)(Circle.e -- Shape.w);
    link(composition)(Point.s -- Circle.n);
endfig;
end

Then we create the tex file below as a container document for the postscript, and use "pdflatex uml.tex" to generate the final pdf file.
\documentclass{article}

% The following is needed in order to make the code compatible
% with both latex/dvips and pdflatex.
\ifx\pdftexversion\undefined
\usepackage[dvips]{graphicx}
\else
\usepackage[pdftex]{graphicx}
\DeclareGraphicsRule{*}{mps}{*}{}
\fi

\title{MetaUML example}
\author{Raymond Wen}

\begin{document}

\maketitle

\section{Example}
\includegraphics{class_diagram.1}

\end{document}

One thing worth noticing is that the latex document is A4 size by default, which cannot contain a complex uml diagram. We can use the "\paperwidth = 1024pt", "\paperheight = 1024pt", and "\textheight = 800pt" commands to create a document of arbitrary size. Here is more information about latex page layout.
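Putting it together, the size overrides go in the preamble of the container document, for example:

```latex
\documentclass{article}
\usepackage[pdftex]{graphicx}
\DeclareGraphicsRule{*}{mps}{*}{}

% enlarge the page so a complex diagram fits
\paperwidth  = 1024pt
\paperheight = 1024pt
\textheight  = 800pt

\begin{document}
\includegraphics{class_diagram.1}
\end{document}
```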
The final diagram is shown below:

The advantages of using a tex file to draw uml diagrams include:
  1. We can concentrate on the content of the uml, rather than its layout
  2. The diagram can be easily version controlled and compared
  3. It's easier to make modifications
  4. It's totally free
  5. The output file format can be easily changed
  6. It's possible to use/write tools to generate uml diagram automatically

Monday, November 15, 2010

avoid memory leak in osip

I was debugging memory leaks recently. The bug was caused by incorrect usage of the osip library. It's not uncommon to run into problems when we rely on a library or framework we don't fully understand.

Symptom and debugging
The symptom was that our application ran more and more slowly, and eventually crashed. This looked very much like a resource leak, and after running performance monitor against the application, it was further confirmed that memory was leaking: the virtual bytes and private bytes of the process were continuously increasing.
With the help of umdh.exe, we found the exact lines of code that were leaking memory. It shows all stack traces of currently allocated memory blocks at the moment of dumping (including blocks that are still in use as well as leaked ones, so we must identify which blocks are which).

The memory leak was mainly caused by not understanding the items below well.

  • transaction isn't destroyed automatically
Osip doesn't take full responsibility for managing the lifetime of transactions. Though osip invokes the callbacks registered with osip_set_kill_transaction_callback when a transaction is about to be terminated, the transaction isn't freed automatically; that is left to osip users.
My first thought was to call osip_transaction_free inside the kill_transaction callback, but that was wrong, because the transaction is still accessed after the kill_transaction callback returns. So, a possible point to free transactions is at the end of an iteration of the main loop, after all events have been handled.
I just don't get why this important point isn't mentioned in the official document.
  • inconsistent resource management model
In osip, there are inconsistencies between APIs in how memory is managed. For example, we call osip_message_set_body to set the body of a sip message; this function internally duplicates the string we pass to it, so we can (and need to) free the string after the function returns. But when we call osip_message_set_method, be cautious! This function doesn't duplicate the string passed in; it simply keeps a reference to it. So we can't free that string, which is now owned by the sip message.
Such inconsistency makes it extremely easy to get confused and write code that either crashes or leaks.

Wednesday, October 20, 2010

blogporter v1.0.0 released

The first version of blogporter has been released. It's a utility for synchronizing posts between different blog service providers.
I wrote this tool because I need to maintain two blogs, and it's annoying to copy and paste my posts to the other blog. The main reason I have two blogs is that blogger's service is not available in mainland china, due to a well-known reason, but I don't want to give up blogger's tight integration with my gmail account. So I mainly write posts here, then use blogporter to synchronize the other blog with this one.

Refer to for more information about this utility.

The immediate use of this tool is to sync posts from a LiveSpace blog to other blog sites, because LiveSpace is soon to be closed. To do this, you can:

  1. download LiveSpace blog data through http://{your_name}
  2. extract the downloaded zip file
  3. install python and the BeautifulSoup module
  4. run "python --list-blogs" to find out the supported blog providers
  5. run "python --src-type=4 -p{path_to_unzipped_folder} --dst-type={number_stand_for_dst_blog_type} --dst-account={dst_blog_account} --dst-password={dst_blog_password} --startdate=2000-01-01 --enddate=2010-12-31 -v"

This tool still has limitations, mainly:
  1. it doesn't sync comments
  2. it doesn't sync category information from LiveSpace
I've tried the ways below to work with LiveSpace, but failed to find a perfect solution.

  1. metaweblog api: it's limited to retrieving 20 posts at most.
  2. rss: it returns even fewer posts than the metaweblog api. Maybe the Google Reader API can help us get more data through rss, since google caches a lot of historical rss data on their own servers.
  3. livespace backup file: it doesn't contain category information.

I finally chose the livespace backup file to implement LiveSpaceProvider; I hope this limitation doesn't bother you too much.

Saturday, October 16, 2010

nook vs kindle 3

I finally had a chance to play with kindle 3, and compared it with my nook.

  • The greatest advantage kindle 3 has over nook is its speed. It's extremely fast at turning pages, at least twice as fast as nook.
  • The second point where kindle 3 clearly beats nook is battery life, where nook performs very poorly. Kindle is said to work for at least half a month on one charge, but my nook only works slightly longer than half a week.
  • Kindle beats nook on size and weight as well. Though their screens are the same size, kindle is thinner and lighter than nook; by my feel, kindle weighs only half of what nook does.
  • The display of kindle 3 is a little better. It's whiter, and its contrast is higher, so the text is clearer and easier to read. But the difference is slight; you might not notice it unless you have both on hand to compare.

  • Kindle doesn't support the epub format, a popular format with a lot of resources on the web.
  • Kindle doesn't handle the pdf format as well as nook does. I opened the same pdf book on both devices; nook wisely cuts the white margins of the pdf to make full use of its screen, and it can also re-layout the content correctly when I choose a larger font. Kindle doesn't change the layout; it can only zoom into the page, which means I have to scroll rather than turn pages to see more. But kindle supports two viewports, landscape and portrait, which gains it some points.
  • Nook is based on android, and you may extend its capabilities by installing new software. Thanks to nookdevs.

android touch event summary

In "implement drag and drop on android", I wrote about the rules for how touch events can be passed to the touched view's parent, and how a parent can intercept the touched view's events. On a device without a mouse, Click and LongClick events are also strongly related to touch events, so I'll summarize how they are related.

  1. The Button widget doesn't bubble events. If a button is touched, it doesn't pass the event to its parent in any case. (There should be more widgets with this behavior.)
  2. A touch event doesn't bubble up to the parent if the widget also registers a Click or LongClick event handler.
  3. If a widget registers both Touch and Click event handlers, the Click event fires only when the Touch event handler returns false. In other words, the Click event fires only when the touch event handler declares it's not interested in the touch event.
  4. If a widget registers both Click and LongClick event handlers and the LongClick event fires, the Click event will fire only if the LongClick event handler returns false.

Testing Code:

Monday, October 11, 2010

repair mac login failure due to invalid home directory

My macbook failed to start and gave me "the password you entered is invalid" on login. It happened after my user directory was changed.
By default, mac os saves a user's personal data in the /Users/user_name directory. To make the system easier to maintain and keep my personal data separated from system data, I created two partitions during installation: one partition is used for the system installation, and the second is used as my home directory.
To do this, after the mac was installed successfully, I copied all data from the original /Users/user_name directory to the /Volumes/user/user_name directory. Then I removed the /Users/user_name directory and created a symbolic link /Users/user_name pointing to /Volumes/user/user_name. Everything worked perfectly.
The problem occurred after I decided /Volumes/user was a poor name and changed it to a better one, /Volumes/idiot. After I restarted the machine, I found I couldn't log into the system. Mac simply complained that my password wasn't correct, though I'm pretty sure it was.
The reason is that I forgot to point the /Users/user_name symbolic link to the new location. I guess mac stores my credentials in my home directory, so if that directory isn't accessible, all my login attempts fail.
To repair this, I tried mounting the mac hard disk on an ubuntu box and changing the symbolic link. But because the file system (HFS+, Hierarchical File System) is journaled, it's read-only on ubuntu.
I finally found another mac machine and fixed the symbolic link there. In case I encounter this problem again, I used the "sudo diskutil disableJournal /Volumes/user" command to turn off journaling on the file system so that it'll be writable under ubuntu. The cost is that the mac may take much longer to scan the file system if it isn't shut down properly. It's worth it; at least I can log into the system.

Repair / Fix Mac HFS+ partition using Ubuntu CD
How to Move the Home Folder in OS X – and Why

Tuesday, September 28, 2010

How to Build a Nook emulator

Because nookdevs isn't accessible, I've made a copy of their document about how to build a nook emulator here. I've also shared my system.img for the nook at the end of this post.

Android Emulator for the Nook

It is very possible to run the Barnes and Noble Nook firmware in the Android Emulator. It is time to start developing some Google Android Apps for it.
Instructions for Unix/Linux

In order to do this, you will need to:
  1. Download the Android SDK and install it. Install the Platform 1.5 SDK using tools/android in the Android SDK.
  2. Grab the original 1.0.0 image from (mirrored here: multiupload).
  3. Run dd if=signed_bravo_update.1.0.0.dat of=signed-bravo-update.1.0.0.tar.gz bs=1 skip=152 (On Windows use this tool. At the command line run gzip-extract signed-bravo-update.1.0.0.tar.gz bravo_update.dat. NOTE: gzip-extract requires the .net framework. Afterwards rename bravo_update.dat to signed-bravo-update.1.0.0.tar.gz)
  4. Extract signed-bravo-update.1.0.0.tar.gz.
  5. Rename bravo_update.dat to bravo_update.tar.gz and extract it.
  6. Extract root.tgz.
  7. Extract root/system/framework/services.jar with your favorite unzip utility.
  8. Download and install smali. You need at least baksmali-1.1.jar and smali-1.1.jar. For your sanity, grab the wrapper scripts as well.
  9. Run baksmali classes.dex on the classes.dex from services.jar to disassemble services.jar
  10. Edit out/com/android/server/ServerThread.smali and remove the line if-lt v0, v1, :cond_483 This should be line 966. (In version 1.0.0)
  11. Run smali out/ to re-assemble classes.dex with our fixes.
  12. Rename out.dex to classes.dex and copy this classes.dex to services directory, overwriting the original classes.dex.
  13. Delete the out directory and re-jar the services directory.
  14. Make an Android Device (AVD entry) in the Android Emulator with target platform 1.5 and skin/screen size of 549 by 924 (real resolution is more like 600x944 but the emulator won't start at that size), name it nook (case sensitive).
    1. If you are unable to create the AVDs (something like Error: Ignoring platform 'google_apis-3-r03': build.prop is missing. ) then install Eclipse, install the Android plugins for Eclipse and you will be able to create the AVDs from there.
  15. grab lib/ from stock system.img supplied with Android SDK using unix utility unyaffs to extract the file
  16. overwrite the from the system/lib directory in the nook firmware with the stock Android SDK one.
  17. use mkyaffs2image to make a system.img of the system/ of the nook firmware.
  18. rename system.img in the 1.5 firmware platform folder to system.good and copy in the replacement system.img file you just created (it will be bigger than the SDK Android system.img, approximately 108Mb)
  19. Run the emulator by using command line emulator @nook -shell -show-kernel -verbose The emulator will take a few minutes to boot.
  20. You will NOT be able to register your emulated Nook with but you can sideload epub books by placing them in system/media/guides when you create your replacement system.img file mentioned in step 17.
References: (in detailed steps) (nookdevs project repository)  (image for downloading)

how to soft-root your nook

I don't know for what reason the nookdevs site isn't accessible. So I made a copy of their document about how to soft-root the nook here:

How to soft-root your Nook

Thanks to for rooting the device. Guideline provided by

What is soft-rooting?
The softroot is a way to enable ADB, without disassembling the nook, to gain root shell access on both 3g-equipped (original) and WiFi-only nooks (thanks to muchadoaboutnoth from IRC for confirming that). If you don't know what that means, you probably DO NOT want to run this updater on your nook. As of today, the nook's adb runs over WiFi; you can, however, enable ADB over USB. Read more on the benefits and disadvantages of rooting and why rooting matters.

While we've taken great pains to make sure that this script won't damage your expensive new eBook reader, it comes with ABSOLUTELY NO WARRANTY. To date, it has been tested on nooks running 1.0.0, 1.1.0, 1.1.1, 1.2, 1.3 and 1.4 of the nook firmware on both the 3g-equipped (original) and WiFi-only nooks (thanks to muchadoaboutnoth from IRC for confirming that). If it breaks, you get to keep both pieces. In the unlikely event that you run this script and end up with a paperweight, please join us on #nookdevs (alternatively thru webchat) on and we'll see what we can do.

Once you run a third-party software updater on your nook, Barnes & Noble may consider your warranty null and void. It's sort of like strapping a rocket engine to your Civic -- If you hit the side of a mountain at 300mph, it's just not Honda's fault.

If you run this updater and your nook appears to have become a (very expensive) paperweight, DO NOT CALL BARNES & NOBLE. Join us on IRC and we'll do our best to help sort you out.

Barnes & Noble have built a really fantastic Android tablet for us. So far, it looks like it's going to be an amazing platform for third-party experimentation and development. To make sure that that stays true, there are some things you should keep in mind:
  • 3G is for B&N resources only; if you truly think you spent $259 (now $199) for unlimited 3G for life, you're delusional. You can, however, use your own SIM card for browsing over 3G on a softrooted nook.
  • A number of nook owners have asked us about the nook's DRM. Don't steal books.

If we haven't scared you off, it's time to get your new rocket engine set up.

A word about 1.4 update
The instructions below will help you downgrade to 1.0.0 (which will wipe your nook settings clean) and then upgrade to the 1.4 update with the pre-installed softRoot + nookLauncher + nookLibrary + nookWifiLocker + Trook + VNC + busybox (update file courtesy of perfinion and poutine; the apps are by kbs, hari and hazymind). This streamlined rooted update retains the turboboot/uboot as is, so you will not have to downgrade prior to upgrading to rooted images in the future.

If you have already updated to the stock 1.4 (either manually or via OTA update), you can still softroot your nook following the same steps below (you will however lose your settings when you downgrade to 1.0.0).

If you currently have a nook with 1.1.1 B&N software or 1.2/1.3 softRoot software, you can skip the section about downgrading to 1.0 again and go directly to the How do I do it? section to upgrade to the rooted 1.4. You should not lose any settings during the update.

IMPORTANT: You will need to REBOOT after you get to the nook "home screen" on the first initial boot!

Known issues with softRooted 1.4 update:
  • WiFi Lock might not work

Pre-requisites

  • You need a B&N nook.
  • You need a 128MB-or-higher microSD card with a single, FAT32-formatted partition.
  • You need to download the rooted 1.4 update (do not forget to rename file to bravo_update.dat):
  • Full softRoot (root/ADB + apps) download (SHA-1 hash: 5a62a2a3ad4ffa5ea6fe2c93ebe241ead5376cd0)
  • You'll also need the 1.0.0 image download (you won't need this if you have softrooted/streamlined 1.2 or 1.3 on your nook).
  • MD5 hash: c752fa57f7253d4c499398630c27bdab
  • SHA-1 hash: 84287d73b70e98da6a6af9f362b31e96d4e6eea4
  • SHA-256 hash: a22bbbf1cc61a81fd812abc5b75f5c713cab3471be36219897aeb91b26405b35
  • You won't need any tools or a clean work surface. Unlike our initial efforts, this tool is a simple software updater. There's no need to crack your nook open.
  • Should you want to use older version of the nook software, you can download one of the obsolete versions.
  • Attention Chrome users: if instead of the bravo_update.dat and signed_bravo_update.dat you end up with and accordingly, DO NOT UNZIP these files, you will just need to rename them to bravo_update.dat and signed_bravo_update.dat.

How do I do it?
Here is an overview of the process you'll go through. These are not step-by-step instructions. This is only an overview. Follow the instructions in the following sections.

  1. Back up any files on your nook, as it WILL be erased during this process.
  2. Manually install nook software 1.0.0 (downgrade)
  3. Manually install modified nook 1.4 software (upgrade & root)
  4. Use the Android Debug Bridge to access your nook's root shell over Wi-Fi.
  5. If you already have softrooted 1.2 or 1.3 update on your nook, you just need to apply softrooted 1.4 update, no need to downgrade to 1.0.0 first.

Step 1: Prepping your nook (downgrade to 1.0.0)
You only need to do this if you have applied the B&N 1.2, 1.3 or 1.4 update. If you are currently running softrooted/streamlined 1.2 or 1.3 by poutine, skip this section.
  1. Make sure your nook has sufficient battery to complete the procedure without turning off (at least 20%).
  2. BACK UP ANY FILES ON YOUR NOOK. This bears repeating: you WILL lose your data if you do not.
  3. Download the original nook 1.0.0 software image (see the Pre-requisites section).
  4. Rename the file you downloaded, if necessary, to signed_bravo_update.dat.
  5. If you haven't already, connect your nook to your computer via USB. The "nook" drive should appear—this is your nook's internal microSD card.
  6. Copy the 'signed_bravo_update.dat' file to the "nook" drive.
  7. Eject/unmount the "nook" drive. Remove USB cable. (Note: The B&N Home screen should show some indication that it's unpacking and checking the update after this step)
  8. The update procedure should begin automatically—look at the lower-right corner of your nook's e-ink screen and there should be a small box that says "Preparing update" with a percent-complete indicator.
  9. DO NOT turn off the power during this procedure. The nook will reboot itself when it is done.

Step 2: Rooting your nook (upgrade to modified 1.4)
Warning: This process requires that you are running the original bootloader (nook 1.0.0 software). Odds are that unless you followed the directions in Step 1: Prepping your nook above, you probably aren't. In order to get the original bootloader installed, downgrade to the full 1.0.0 image via sideloading as explained above before proceeding with the softroot.
  1. Make sure your nook has sufficient battery to complete the procedure without turning off (at least 20%).
  2. Download the rooted nook 1.4 software (see the Pre-requisites section).
  3. If you haven't already, insert a microSD card in your nook (see Inserting extra storage in your nook if you need help with that).
  4. Plug your nook into your computer via USB. You should see two drives appear: a "nook" drive (this is your nook's pre-installed, internal drive) and one more.
  5. Unlike in Step 1, you are going to use the second drive, not your nook's internal drive (the one named "nook").
  6. Copy the file you downloaded, which should be named "bravo_update.dat", to the nook's external drive identified in the previous step.
  7. Eject/unmount both drives: the one for your external card, and the internal "nook" drive.
  8. Unplug your nook from your computer.
  9. Turn off your nook by holding in the sleep/power button on top until the screen turns blank.
  10. Press and hold the upper page-flip button on the right-hand side of your nook (the one marked with a < pointing towards the middle of the e-ink screen).
  11. While continuing to hold the page-flip button, press and release sleep/power button on top. Don't let go of the page-flip button.
  12. Continue holding the page-flip button until the e-ink screen displays a "checking for update" message. Release the page-flip button within a couple seconds of when you see this message.
  13. Timing is key. If your nook displays the typical "Starting Up" screen, you've missed it—wait until it starts up, then turn it off and try again.
  14. Wait for your nook to finish running the updater (the touchscreen will show the progress).
  15. Wait for your nook to fully start up after successful update, and then manually reboot it (hold the power button for about 5 seconds so that both screens shut down and then start your nook again).
  16. It is very important, so I repeat again: Wait for your nook to fully start up after successful update, and then manually reboot it (hold the power button for about 5 seconds so that both screens shut down and then start your nook again). It is very important to reboot after an update!
  17. That's it! You've rooted your nook.
You can delete the bravo_update.dat file from your nook's external card; you no longer need it.

Step 3: Getting a root shell with the Android Debug Bridge
How to use your newly-rooted nook:
  1. Download and install the Android SDK.
  2. Find the IP address of your nook (instructions are here: How to find nook's IP address).
  3. In a terminal (command line) window, navigate to the "tools" directory in the Android SDK.
  4. With the NOOK_IP being the IP address of your nook, enter the following:
  5. adb connect NOOK_IP:5555
  6. Nothing should appear to happen and you will be returned to your command prompt. This is normal.
  7. You can now use the ADB tools to talk to your nook. To get a root shell on the nook, enter:
  8. adb shell
  9. A # should appear — you now have root access to your nook's command line interface!
  10. You can also enable ADB over USB. Complete documentation for the Android Debug Bridge is available here.

Things you may want to do with your newly-liberated nook:

Install native Android applications. See application directory for more information. The modified 1.2 and up full softRoot updates already come with several nifty applications pre-installed.
Develop nook-optimized Android applications. Installing the nook emulator on your computer may then be of great use to you.

Have fun!

Friday, September 24, 2010

recovering GRUB bootloader on windows/ubuntu dual boot machine

I installed both windows and ubuntu on my laptop. Dual booting was easy because I installed ubuntu after windows. But for some reason, I had to reinstall windows.
After windows was reinstalled, dual boot no longer worked because the windows installer overwrote the MBR. I didn't want to install ubuntu again, and luckily, I found this: Dual Boot Ubuntu and Windows.
In short, we can fix grub with the following steps:
  1. Boot with ubuntu live cd
  2. Mount the ubuntu installation file system
  3. Reinstall grub with "grub-setup -d ubuntu_installation_path/boot/grub" command

understanding drawBitmapMesh on android

The Canvas.drawBitmapMesh method on android was like a mystery to me when I looked at the BitmapMesh sample. Though I tried to get some hints from the official document, it didn't help much. After playing with it in several tests, I got some idea of how it works.

A metaphor

The effect of drawBitmapMesh can be thought of as pinching a point on an elastic canvas and pulling it to another point. The distorted image is very similar to what we get through drawBitmapMesh, as the figures below show.

How the mesh affects the bitmap
The bitmap to be drawn is divided into equal-size blocks, and the division is defined by the mesh, which is a float array. The array defines the grid lines that divide the bitmap. A division into W*H blocks is formed by (W+1) vertical and (H+1) horizontal lines, which intersect at (W+1)*(H+1) points. Every two consecutive elements in the array correspond to the x and y coordinates of an intersection, so the mesh array consists of 2*(W+1)*(H+1) elements. Given the bitmap's size, the drawing engine can find the default x and y coordinates of the intersections by dividing the width and height of the image by W and H, respectively. So, if the x and y coordinates of an intersection supplied in the mesh don't equal their original values, the drawing engine will "pinch and pull" the intersection from its original location to the location we define.
Keep in mind that for an intersection that deviates from its original location, only the four blocks around it will be affected. All other blocks, which are more than one block away from the intersection, remain intact.
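To make the array layout concrete, here is a plain-java sketch of building such a mesh. MeshDemo, buildMesh and pull are names I invented for illustration; they are not part of the Android SDK.

```java
// Sketch of how the mesh array for Canvas.drawBitmapMesh is laid out.
class MeshDemo {
    // Identity mesh for a w*h-block division: 2*(w+1)*(h+1) floats,
    // stored as consecutive (x, y) pairs, one pair per grid intersection.
    static float[] buildMesh(int w, int h, float bmpW, float bmpH) {
        float[] mesh = new float[2 * (w + 1) * (h + 1)];
        int i = 0;
        for (int row = 0; row <= h; row++) {
            for (int col = 0; col <= w; col++) {
                mesh[i++] = bmpW * col / w; // default x of this intersection
                mesh[i++] = bmpH * row / h; // default y of this intersection
            }
        }
        return mesh;
    }

    // "Pinch" the intersection at (col, row) and pull it to (newX, newY).
    static void pull(float[] mesh, int w, int col, int row,
                     float newX, float newY) {
        int idx = 2 * (row * (w + 1) + col);
        mesh[idx] = newX;
        mesh[idx + 1] = newY;
    }
}
```

Passing the untouched identity array to Canvas.drawBitmapMesh(bitmap, w, h, mesh, 0, null, 0, null) should draw the bitmap unchanged; after pull, only the four blocks sharing the moved intersection get distorted.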

Sunday, August 29, 2010

avoid designing by coding

I found an obvious, stupid mistake in my c++ code. The mistake itself is trivial and isn't worth mentioning. What I'm interested in is what led me to make a mistake that even a fresh c++ learner could easily recognize.
The main reason is that I was designing by coding (implementing). Designing and coding are very separate tasks in a programming project. The goal of designing is to find an effective way of organizing our code (classes, modules, layers, etc.) so that the code is easy to understand, maintain and change. The goal of coding is to implement the required features based on the design. Because designing and coding have different goals, they call for different ways of working. But I chose to do them simultaneously, that is, I thought about the design and wrote code immediately, even before I had a clear idea of the whole design. This way, I never had a single goal in mind, but had to think about multiple things at once. It's not natural for the human brain to work this way. Especially considering the complexity of the c++ language, if we can't fully focus on the implementation details, we are very likely to be bitten by them. The case might be less painful with other languages like c# or python, which have far fewer details to worry about. But it's generally wise to choose a more suitable tool than code to do the design, for example UML or other diagrams. And code review is a good way to catch such mistakes.

Sunday, August 22, 2010

looper and handler in android

It's widely known that it's illegal to update UI components directly from threads other than the main thread in android. This android document (Handling Expensive Operations in the UI Thread) suggests the steps to follow if we need to start a separate thread to do some expensive work and update the UI after it's done. The idea is to create a Handler object associated with the main thread, and post a Runnable to it at the appropriate time. This Runnable will be invoked on the main thread. The mechanism is implemented with the Looper and Handler classes.

The Looper class maintains a MessageQueue, which contains a list of messages. An important characteristic of Looper is that it's associated with the thread within which it is created. This association is kept forever and can't be broken or changed. Also note that a thread can't be associated with more than one Looper. To guarantee this association, the Looper is stored in thread-local storage, and it can't be created via its constructor directly. The only way to create one is to call the static prepare method on Looper. The prepare method first examines the ThreadLocal of the current thread to make sure there isn't already a Looper associated with the thread. After the examination, a new Looper is created and saved in the ThreadLocal. Having prepared the Looper, we can call the loop method on it to check for new messages and have a Handler deal with them.
As the name indicates, the Handler class is mainly responsible for handling (adding, removing, dispatching) messages in the current thread's MessageQueue. A Handler instance is also bound to a thread. The binding between Handler and thread is achieved via the Looper and MessageQueue: a Handler is always bound to a Looper, and thus to the thread associated with that Looper. Unlike Looper, multiple Handler instances can be bound to the same thread. Whenever we call post or a similar method on the Handler, a new message is added to the associated MessageQueue, with the message's target field set to the current Handler instance. When the Looper receives this message, it invokes dispatchMessage on the message's target field, so the message routes back to the Handler instance to be handled, but on the correct thread.
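To illustrate the cycle, here is a stripped-down, single-threaded model in plain java. The class and method names deliberately mirror the Android ones, but this is only a sketch of the idea, not the real SDK implementation:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy model of the android Looper/Handler dispatch cycle (not real SDK code).
class ToyMessage {
    ToyHandler target;   // set by the posting Handler, read by the Looper
    Runnable callback;
}

class ToyLooper {
    // One Looper per thread, as Looper.prepare() stores into a ThreadLocal.
    static final ThreadLocal<ToyLooper> sThreadLocal = new ThreadLocal<>();
    final Queue<ToyMessage> queue = new ArrayDeque<>();

    static void prepare() {
        if (sThreadLocal.get() != null)
            throw new IllegalStateException("Only one Looper per thread");
        sThreadLocal.set(new ToyLooper());
    }

    static ToyLooper myLooper() { return sThreadLocal.get(); }

    // Drain the queue, routing each message back to its target Handler.
    void loop() {
        ToyMessage msg;
        while ((msg = queue.poll()) != null)
            msg.target.dispatchMessage(msg);
    }
}

class ToyHandler {
    // Bind this Handler to the current thread's Looper at construction time.
    final ToyLooper looper = ToyLooper.myLooper();

    void post(Runnable r) {
        ToyMessage msg = new ToyMessage();
        msg.target = this;        // so the Looper can route it back to us
        msg.callback = r;
        looper.queue.add(msg);
    }

    void dispatchMessage(ToyMessage msg) { msg.callback.run(); }
}
```

Calling ToyLooper.prepare() once on a thread, posting Runnables through any number of handlers, and then running ToyLooper.myLooper().loop() executes every posted Runnable on the looping thread, which is the essence of how Handler.post marshals work onto the main thread.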
The relationships between Looper, Handler and MessageQueue are shown below:

This design is very similar to win32's message loop. The benefit is that we no longer need to worry about concurrency issues while manipulating UI elements, because they are guaranteed to be manipulated on the same thread. Without this simplicity, our code might bloat heavily, because we would have to lock access to UI elements wherever they could be accessed concurrently.

The example at the end of this post shows how to send messages to different handlers associated with the main Looper (the main Looper is created via prepareMainLooper in ActivityThread.main).


Thursday, July 29, 2010

implement drag and drop on android

Nowadays, drag and drop is a frequently seen feature on touch screen devices. This post introduces the basic idea of how to implement drag and drop on android.

On android, a touch gesture is composed of a series of events.
First, the user puts his finger on an element, and the element receives an ACTION_DOWN event.
Then, while holding his finger on the screen, the user moves it to a new location. The element receives a series of ACTION_MOVE events.
Finally, the user raises his finger. At this point, the element receives an ACTION_UP event.
Consider the figure below: the parent element has a child element inside it. The user can drag the child element anywhere within the parent.

There are two important traits of touch event on android.
First, touch events are propagated. That is, if a child chooses to ignore the first event (ACTION_DOWN) by returning false from its onTouchEvent handler, the parent's onTouchEvent handler will receive the event. The event keeps propagating until one of the ancestors agrees to handle it or the root is reached.
Second, the parent can intercept a touch event before its child's onTouchEvent handler is fired. This is achieved by overriding the onInterceptTouchEvent method in the parent and returning true from it. As a consequence, the child's onTouchEvent handler will be bypassed, and the parent's onTouchEvent handler will fire.
The work flow is shown in below diagram:

We need to set up the onTouchEvent handler for both the child and the parent. In the child's handler, we save the child element as the item being dragged, and return false so that subsequent events will be delivered to the parent's handler. In the parent's handler, we change the child's margins to match the position of the finger, so the child follows our finger.
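This flow can be modeled in plain java. The names below (ToyEvent, Child, Parent, dispatchTouchEvent) are invented for illustration and greatly simplify the real View/ViewGroup/MotionEvent machinery, so treat this as a sketch of the idea rather than working android code:

```java
// Toy model of android touch-event propagation for drag and drop.
class ToyEvent {
    // The numeric values happen to match android's MotionEvent constants.
    static final int ACTION_DOWN = 0, ACTION_UP = 1, ACTION_MOVE = 2;
    final int action;
    final float x, y;
    ToyEvent(int action, float x, float y) {
        this.action = action; this.x = x; this.y = y;
    }
}

class Child {
    float left, top;               // position inside the parent ("margins")
    boolean onTouchEvent(ToyEvent e) {
        // Decline the event so it propagates up to the parent,
        // which will handle the subsequent ACTION_MOVE events.
        return false;
    }
}

class Parent {
    final Child child = new Child();
    Child dragged;                 // the item currently being dragged

    boolean onTouchEvent(ToyEvent e) {
        switch (e.action) {
            case ToyEvent.ACTION_DOWN:
                dragged = child;   // remember what we are dragging
                return true;
            case ToyEvent.ACTION_MOVE:
                if (dragged != null) {  // move the child under the finger
                    dragged.left = e.x;
                    dragged.top = e.y;
                }
                return true;
            case ToyEvent.ACTION_UP:
                dragged = null;    // drop the item
                return true;
        }
        return false;
    }

    // Simplified dispatch: offer the event to the child first; if the
    // child declines (returns false), the parent handles it.
    boolean dispatchTouchEvent(ToyEvent e) {
        return child.onTouchEvent(e) || onTouchEvent(e);
    }
}
```

In a real implementation the parent would update the child's layout margins (e.g. via LayoutParams) instead of plain fields, and could also use onInterceptTouchEvent, but the routing logic is the same.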

Sample code:


Wednesday, July 28, 2010

nook's battery sucks

nook's battery is advertised to run for around 10 days if users turn off the wifi. But my nook can only run for 4 days at most, even though I always keep airplane mode on. On average, I used it for two or three hours per day. That is to say, its battery can only support 12 hours of normal usage! It seems the culprit is that even in sleep mode, the nook still consumes a lot of power. In this review of the nook, it's said the nook consumes battery even while turned off!! Ridiculous!

Sunday, July 18, 2010

why offsetof can use null pointer

offsetof is a widely used facility in c and c++ to find the offset of a member variable within its struct or class. The most common way to implement it is via the following macro:
#define offsetof(st, m) ((size_t) ( (char *)&((st *)(0))->m - (char *)0 ))
The idea is very simple. We declare a pointer to the target type pointing at address 0, then we retrieve the address of its member variable. Because the start address is 0, the address of the member is its offset.
Everything is clear and simple. But wait: look at how we retrieve the address of the member, it's dereferencing a pointer to address 0. Why doesn't it give us a segmentation fault?
To understand this, let's check the c code below, which retrieves the offset of member c in struct foo.
#include <stdio.h>

struct foo
{
    int a;
    char b;
    int c;
};

int main(int argc, char *argv[])
{
    struct foo *fp = (struct foo *)0;
    unsigned int offset = (unsigned int)&fp->c;
    printf("offset of c is %u\n", offset);
    return 0;
}
Then compile it with microsoft's c++ compiler, and dump the relevant assembly code generated by the compiler. We get this:

_main:
  00401010: 55                 push        ebp
  00401011: 8B EC              mov         ebp,esp
  00401013: 83 EC 08           sub         esp,8
  00401016: C7 45 FC 00 00 00  mov         dword ptr [ebp-4],0
            00
  0040101D: 8B 45 FC           mov         eax,dword ptr [ebp-4]
  00401020: 83 C0 08           add         eax,8
  00401023: 89 45 F8           mov         dword ptr [ebp-8],eax
  00401026: 8B 4D F8           mov         ecx,dword ptr [ebp-8]
  00401029: 51                 push        ecx
  0040102A: 68 5C DC 41 00     push        41DC5Ch
  0040102F: E8 14 00 00 00     call        _printf
  00401034: 83 C4 08           add         esp,8
  00401037: 33 C0              xor         eax,eax
  00401039: 8B E5              mov         esp,ebp
  0040103B: 5D                 pop         ebp
  0040103C: C3                 ret

It's clear that the compiler doesn't blindly follow the null pointer to get its member variable's address. Instead, because the compiler knows the structure and layout of struct foo, it adds the offset (which is already known to the compiler) of member c to the starting address of struct foo to find out the address of c.
offsetof doesn't access the memory pointed to by the null pointer. That's why we don't get an invalid memory access error while using a null pointer. (Strictly speaking, dereferencing a null pointer like this is undefined behavior in standard c, which is why portable code should use the offsetof macro provided by stddef.h rather than rolling its own.)