Install tensorflow on Ubuntu 18.04 with CUDA 9.2


This is a guide on installing the latest tensorflow with the latest CUDA library on the latest Ubuntu LTS. The installation is mostly straightforward, but there are still a few traps that I stepped into.

The tensorflow homepage only provides prebuilt binaries supporting CUDA 9.0, but Nvidia phased out 9.0 quite some time ago; one has to click into the legacy link to download it. I hate legacy stuff, so I will build tensorflow from source.

Step 1: Install latest nvidia kernel driver 396.x

There are two ways to install it: from the Ubuntu PPA, or from Nvidia. I chose the Ubuntu PPA because it’s better integrated into the package management system. As of now, the default repository only has drivers up to version 390, which is not compatible with CUDA 9.2. I ran into the following error:

2018-07-27 21:56:39.036138: I tensorflow/core/common_runtime/gpu/] Adding visible gpu devices: 0
2018-07-27 21:56:39.036223: E tensorflow/core/common_runtime/] Internal: cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version

I did the following to install driver 396:

$ sudo add-apt-repository ppa:graphics-drivers
$ sudo apt-get update
$ sudo apt-get purge 'nvidia*'
$ sudo apt-get install nvidia-driver-396

Note that the purge command is necessary; otherwise apt refuses to install 396.

After reboot, one should be able to get the following:

$ cat /proc/driver/nvidia/version
NVRM version: NVIDIA UNIX x86_64 Kernel Module  396.45  Thu Jul 12 20:49:29 PDT 2018
GCC version:  gcc version 7.3.0 (Ubuntu 7.3.0-16ubuntu3) 

Step 2: Install latest CUDA and cuDNN SDK

Nvidia only provides CUDA for Ubuntu 17.10 and 16.04, not 18.04, but I found that CUDA for 17.10 works on 18.04 as well. Installation is quite straightforward: download and run it as root. It will be installed into /usr/local/cuda-9.2/. To install cuDNN, I installed the following 3 debs:


Since the library installation directory is not a standard one, the dynamic linker doesn’t know where to find the libraries. We can add the following environment variable:

export LD_LIBRARY_PATH=/usr/local/cuda-9.2/extras/CUPTI/lib64:/usr/local/cuda-9.2/lib64

But I prefer not to mess with LD_LIBRARY_PATH. I’d rather put it into /etc/ i.e. create a file named cuda.conf which looks like this:

$ cat /etc/

After that, run sudo ldconfig.

To test whether CUDA and the kernel driver are good:

$ /usr/local/cuda-9.2/extras/demo_suite/deviceQuery 
/usr/local/cuda-9.2/extras/demo_suite/deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GTX 1070 Ti"
  CUDA Driver Version / Runtime Version          9.2 / 9.2
  CUDA Capability Major/Minor version number:    6.1
  Total amount of global memory:                 8117 MBytes (8510832640 bytes)
  (19) Multiprocessors, (128) CUDA Cores/MP:     2432 CUDA Cores
  GPU Max Clock rate:                            1683 MHz (1.68 GHz)
  Memory Clock rate:                             4004 Mhz
  Memory Bus Width:                              256-bit
  L2 Cache Size:                                 2097152 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.2, CUDA Runtime Version = 9.2, NumDevs = 1, Device0 = GeForce GTX 1070 Ti
Result = PASS

Step 3: Install tensorflow dependencies

Following the official tensorflow doc, install the Python packages. I only have python3.

sudo apt-get install python3-numpy python3-dev python3-pip python3-wheel

To install bazel, download and run the installer without root. The bazel homepage says only 16.04 and 14.04 are supported, but I found it runs fine on my 18.04.

I read from many sources that building tensorflow with the default gcc 7 is problematic and that gcc 6 should be used, so I installed gcc 6. There is no need to set the default gcc to 6, because the tensorflow build tool will ask which gcc to use. One command will do: sudo apt install gcc-6 g++-6

Step 4: Compile tensorflow

Git clone, git checkout r1.9, then run configure. I pretty much chose “no” for all the options except CUDA. After configure, a file “.tf_configure.bazelrc” is generated. Mine looks like the following:

$ cat .tf_configure.bazelrc
build --action_env PYTHON_BIN_PATH="/usr/bin/python3"
build --action_env PYTHON_LIB_PATH="/usr/lib/python3/dist-packages"
build --python_path="/usr/bin/python3"
build --define with_jemalloc=true
build:gcp --define with_gcp_support=true
build:hdfs --define with_hdfs_support=true
build:s3 --define with_s3_support=true
build:kafka --define with_kafka_support=true
build:xla --define with_xla_support=true
build:gdr --define with_gdr_support=true
build:verbs --define with_verbs_support=true
build --action_env TF_NEED_OPENCL_SYCL="0"
build --action_env TF_NEED_CUDA="1"
build --action_env CUDA_TOOLKIT_PATH="/usr/local/cuda-9.2"
build --action_env TF_CUDA_VERSION="9.2"
build --action_env CUDNN_INSTALL_PATH="/usr/lib/x86_64-linux-gnu"
build --action_env TF_CUDNN_VERSION="7"
build --action_env TF_NCCL_VERSION="1"
build --action_env TF_CUDA_COMPUTE_CAPABILITIES="6.1"
build --action_env LD_LIBRARY_PATH="/usr/local/cuda-9.2/extras/CUPTI/lib64:/usr/local/cuda-9.2/lib64"
build --action_env TF_CUDA_CLANG="0"
build --action_env GCC_HOST_COMPILER_PATH="/usr/bin/gcc-6"
build --config=cuda
test --config=cuda
build --define grpc_no_ares=true
build:opt --copt=-march=native
build:opt --host_copt=-march=native
build:opt --define with_default_optimizations=true
build --strip=always

After that, build and install:

bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip3 install ./tensorflow-1.9.0-cp36-cp36m-linux_x86_64.whl

Note that the tensorflow webpage asks you to run sudo pip install. This is wrong. Never sudo pip.

Finally, test it with the tensorflow hello world program.

$ python3
Python 3.6.5 (default, Apr  1 2018, 05:46:30) 
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
2018-07-27 23:53:22.182147: I tensorflow/stream_executor/cuda/] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-07-27 23:53:22.182524: I tensorflow/core/common_runtime/gpu/] Found device 0 with properties: 
name: GeForce GTX 1070 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.683
pciBusID: 0000:01:00.0
totalMemory: 7.93GiB freeMemory: 7.43GiB
2018-07-27 23:53:22.182537: I tensorflow/core/common_runtime/gpu/] Adding visible gpu devices: 0
2018-07-27 23:53:22.348349: I tensorflow/core/common_runtime/gpu/] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-07-27 23:53:22.348378: I tensorflow/core/common_runtime/gpu/]      0 
2018-07-27 23:53:22.348383: I tensorflow/core/common_runtime/gpu/] 0:   N 
2018-07-27 23:53:22.348530: I tensorflow/core/common_runtime/gpu/] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7174 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
>>> print(
b'Hello, TensorFlow!'

Happy machine learning. You are welcome.


Build Android Clang Toolchain


This document describes how to build the clang toolchain included in the Android NDK from source. The official documentation doesn’t work for me.

Step 1: Download NDK binary

Go to Google’s website and download the NDK binary for Linux. We only need the repo.prop file, which records the exact commit of each source repo. It’s in NDK_DIR/toolchains/llvm/prebuilt/linux-x86_64/repo.prop.
This applies to both NDK r15c and r16.

Step 2: Download toolchain source from AOSP repo

repo init -u -b llvm
repo sync

Step 3: Checkout the exact version as in NDK

You can manually go to each project directory and do `git checkout <commit>`, where “<commit>” is listed in repo.prop.
You can also use the following script. To run, go to the root dir and run `bash /path/to/repo.prop`


if [ $# -ne 1 ] ; then
	echo "usage: bash repo.prop"
	exit 1
fi
prop="$1"
aosp=`pwd`
if [ '!' -f "$prop" ] ; then
	echo "$prop doesn't exist"
	exit 1
fi
if [ '!' -f .repo/project.list ] ; then
	echo "not in aosp dir"
	exit 1
fi

repo list > tmp1

for i in `cut -d ' ' -f 1 "$prop"` ; do
	commit=`grep "^$i " "$prop" | cut -d ' ' -f 2`
	path=`grep " : $i$" tmp1 | sed -e 's/ .*//'`
	if [ -z "$path" ] ; then
		echo "Warning: $i not found"
		continue
	fi
	if [ '!' -d "$path" ] ; then
		echo "Warning: $path does not exist"
		continue
	fi
	cd "$path"
	git checkout -q "$commit"
	cd "$aosp"
done

Step 4: symlink bootstrap.bash

ln -s build/soong/bootstrap.bash ./

Step 5: Patch prebuilts/sdk

There are some version mismatches between prebuilts/sdk and soong.

diff --git a/tools/Android.bp b/tools/Android.bp
index 6f2d1ac..6ed1c8f 100644
--- a/tools/Android.bp
+++ b/tools/Android.bp
@@ -2,7 +2,7 @@ cc_prebuilt_library_shared {
     name: "libLLVM",
     host_supported: true,
     target: {
-        linux_glibc_x86_64: {
+        linux_x86_64: {
             srcs: ["linux/lib64/"],
         darwin_x86_64: {
@@ -21,7 +21,7 @@ cc_prebuilt_library_shared {
     name: "libclang",
     host_supported: true,
     target: {
-        linux_glibc_x86_64: {
+        linux_x86_64: {
             srcs: ["linux/lib64/"],
         darwin_x86_64: {
@@ -35,8 +35,3 @@ cc_prebuilt_library_shared {
-java_import {
-    name: "sdk-core-lambda-stubs",
-    jars: ["core-lambda-stubs.jar"],

Step 6: Patch build/soong

The soong version listed in repo.prop seems to be too old for the header-abi-linker tool. We need to apply the following patch:

diff --git a/cc/builder.go b/cc/builder.go
index 51c4ce9..2f95006 100644
--- a/cc/builder.go
+++ b/cc/builder.go
@@ -170,12 +170,12 @@ var (
 	sAbiLink = pctx.AndroidStaticRule("sAbiLink",
-			Command:        "$sAbiLinker -o ${out} $symbolFile -arch $arch -api $api $exportedHeaderFlags @${out}.rsp ",
+			Command:        "$sAbiLinker -o ${out} $symbolFilter -arch $arch -api $api $exportedHeaderFlags @${out}.rsp ",
 			CommandDeps:    []string{"$sAbiLinker"},
 			Rspfile:        "${out}.rsp",
 			RspfileContent: "${in}",
-		"symbolFile", "arch", "api", "exportedHeaderFlags")
+		"symbolFilter", "arch", "api", "exportedHeaderFlags")
 	_ = pctx.SourcePathVariable("sAbiDiffer", "prebuilts/build-tools/${config.HostPrebuiltTag}/bin/header-abi-diff")
@@ -613,14 +613,17 @@ func TransformObjToDynamicBinary(ctx android.ModuleContext,
 // Generate a rule to combine .dump sAbi dump files from multiple source files
 // into a single .ldump sAbi dump file
-func TransformDumpToLinkedDump(ctx android.ModuleContext, sAbiDumps android.Paths,
+func TransformDumpToLinkedDump(ctx android.ModuleContext, sAbiDumps android.Paths, soFile android.Path,
 	symbolFile android.OptionalPath, apiLevel, baseName, exportedHeaderFlags string) android.OptionalPath {
 	outputFile := android.PathForModuleOut(ctx, baseName+".lsdump")
-	var symbolFileStr string
+	var symbolFilterStr string
 	var linkedDumpDep android.Path
 	if symbolFile.Valid() {
-		symbolFileStr = "-v " + symbolFile.Path().String()
+		symbolFilterStr = "-v " + symbolFile.Path().String()
 		linkedDumpDep = symbolFile.Path()
+	} else {
+		linkedDumpDep = soFile
+		symbolFilterStr = "-so " + soFile.String()
 	ctx.ModuleBuild(pctx, android.ModuleBuildParams{
 		Rule:        sAbiLink,
@@ -629,9 +632,9 @@ func TransformDumpToLinkedDump(ctx android.ModuleContext, sAbiDumps android.Path
 		Inputs:      sAbiDumps,
 		Implicit:    linkedDumpDep,
 		Args: map[string]string{
-			"symbolFile": symbolFileStr,
-			"arch":       ctx.Arch().ArchType.Name,
-			"api":        apiLevel,
+			"symbolFilter": symbolFilterStr,
+			"arch":         ctx.Arch().ArchType.Name,
+			"api":          apiLevel,
 			"exportedHeaderFlags": exportedHeaderFlags,
diff --git a/cc/library.go b/cc/library.go
index 997344c..0164221 100644
--- a/cc/library.go
+++ b/cc/library.go
@@ -589,12 +589,12 @@ func (library *libraryDecorator) linkShared(ctx ModuleContext,
 	objs.sAbiDumpFiles = append(objs.sAbiDumpFiles, deps.WholeStaticLibObjs.sAbiDumpFiles...)
 	library.coverageOutputFile = TransformCoverageFilesToLib(ctx, objs, builderFlags, library.getLibName(ctx))
-	library.linkSAbiDumpFiles(ctx, objs, fileName)
+	library.linkSAbiDumpFiles(ctx, objs, fileName, ret)
 	return ret
-func (library *libraryDecorator) linkSAbiDumpFiles(ctx ModuleContext, objs Objects, fileName string) {
+func (library *libraryDecorator) linkSAbiDumpFiles(ctx ModuleContext, objs Objects, fileName string, soFile android.Path) {
 	//Also take into account object re-use.
 	if len(objs.sAbiDumpFiles) > 0 && ctx.createVndkSourceAbiDump() && !ctx.Vendor() {
 		refSourceDumpFile := android.PathForVndkRefAbiDump(ctx, "current", fileName, vndkVsNdk(ctx), true)
@@ -612,7 +612,7 @@ func (library *libraryDecorator) linkSAbiDumpFiles(ctx ModuleContext, objs Objec
 			SourceAbiFlags = append(SourceAbiFlags, reexportedInclude)
 		exportedHeaderFlags := strings.Join(SourceAbiFlags, " ")
-		library.sAbiOutputFile = TransformDumpToLinkedDump(ctx, objs.sAbiDumpFiles, symbolFile, "current", fileName, exportedHeaderFlags)
+		library.sAbiOutputFile = TransformDumpToLinkedDump(ctx, objs.sAbiDumpFiles, soFile, symbolFile, "current", fileName, exportedHeaderFlags)
 		if refSourceDumpFile.Valid() {
 			unzippedRefDump := UnzipRefDump(ctx, refSourceDumpFile.Path(), fileName)
 			library.sAbiDiff = SourceAbiDiff(ctx, library.sAbiOutputFile.Path(), unzippedRefDump, fileName)

Step 7: Build

Run `python external/clang/`

Performance Cost for Reducing the Number of Registers in X86-64


When compiling a C program, the C compiler allocates CPU registers to program variables. For example, in the following compilation, the compiler assigns register rdi to variable a and register r15 to variable b.

int a = 1;
int b = 2;
b = a + b;
mov   $0x1,%rdi
mov   $0x2,%r15
add   %rdi,%r15

In the X86-64 architecture, there are 16 general purpose registers: rax, rbx, rcx, rdx, rbp, rsp, rsi, rdi, r8, r9, r10, r11, r12, r13, r14 and r15. When there are more variables than registers, the compiler has to use memory instead of registers. When a function calls another function, the caller has to save some of its registers to memory before the call and restore them from memory after the call. These two cases are called spilling, which is bad for performance because memory is much slower than registers. Having more registers reduces spilling.

There are papers which modify the compiler to dedicate one or more registers to some specific purpose. For example, in order to track information flow, LIFT uses a dedicated register to store information-flow metadata. For another example, CPI hides a secret address in a dedicated register so that in an arbitrary-memory-read attack, the attacker cannot read the address. Some SFI implementations use a dedicated register to store the jump target address, so that the original code cannot mess with it.

Besides spilling, another problem with having fewer registers is data dependencies. Modern CPUs can execute instructions in parallel or out of order as long as there are no dependencies between the instructions. In our previous example, the CPU can execute mov $0x1,%rdi and mov $0x2,%r15 in parallel. However, in the following example, the four instructions have to be executed in sequence because they all use rax, so every instruction depends on the previous one.

mov   $0x1,%rax      # rax = 1
lea   0x2(%rax),%r10 # r10 = rax + 2
mov   $0x3,%rax      # rax = 3
lea   0x4(%rax),%r11 # r11 = rax + 4

With more registers available, the compiler can generate faster code by renaming the rax register in the last two instructions to an unused register. After renaming, the first two and the last two instructions can be executed in parallel. In modern CPUs, there are more physical registers than named registers. During execution, a named register, say %rdi, is mapped to an underlying physical register. The Haswell microarchitecture has 168 physical registers. So register renaming is performed by both the compiler and the CPU, i.e. there is some redundant work. In terms of data dependencies, maybe reducing the number of named registers isn’t so bad, because the CPU can still do renaming.
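The dependency argument can be made concrete with a tiny model: treat each instruction as a (destination, sources) pair and compute the longest hazard chain. Below is a toy sketch (my own simplification — it ignores instruction latencies and memory, and models only RAW/WAR/WAW hazards on architectural registers):

```python
from collections import defaultdict

def critical_path(instrs):
    """Longest dependency chain (in instructions), modeling RAW, WAR and
    WAW hazards on architectural registers with no hardware renaming."""
    writer = defaultdict(int)   # reg -> depth of its last write
    readers = defaultdict(int)  # reg -> max depth of reads since that write
    longest = 0
    for dst, srcs in instrs:
        d = 1 + max([writer[s] for s in srcs] + [writer[dst], readers[dst]])
        writer[dst], readers[dst] = d, 0
        for s in srcs:
            readers[s] = max(readers[s], d)
        longest = max(longest, d)
    return longest

# The rax example from the text: all four instructions chain through rax.
serial = [("rax", []), ("r10", ["rax"]), ("rax", []), ("r11", ["rax"])]
# With rax renamed to rbx in the last two, two independent 2-deep chains remain.
renamed = [("rax", []), ("r10", ["rax"]), ("rbx", []), ("r11", ["rbx"])]
print(critical_path(serial), critical_path(renamed))  # 4 2
```

The chain length drops from 4 to 2 after renaming, which is exactly the parallelism the CPU’s renamer recovers at runtime.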

I think we have enough theoretical speculation. Let’s put it to some real tests. I have modified the LLVM/Clang compiler so that it uses fewer registers. In our evaluation, we have 4 versions of the compiler:

  • C0: The original compiler: LLVM 4.0
  • C1: It doesn’t use r14.
  • C2: It doesn’t use r13, r14 or r15.
  • C3: It doesn’t use r11. The purpose of this is to see the difference between a callee-saved register (r14) and a caller-saved register (r11).

Let’s first look at the difference between the binaries produced by C0 and C1, by compiling the following C code with both compilers.

for (i = 0; i < 100000000; i++) {
    n0 = n1 + n4 * 8 + 46;
    n1 = n2 + n5 * 8 + 95;
    n2 = n3 + n6 * 1 + 55;
    n3 = n4 + n7 * 2 + 90;
    n4 = n5 + n8 * 1 + 58;
    n5 = n6 + n0 * 2 + 1 ;
    n6 = n7 + n1 * 4 + 59;
    n7 = n8 + n2 * 1 + 92;
    n8 = n0 + n3 * 4 + 64;

The following binary is produced by C0. Note that there are no memory operations at all.

  4004d0:       49 89 ff                mov    %rdi,%r15
  4004d3:       4c 89 f3                mov    %r14,%rbx
  4004d6:       49 8d 0c df             lea    (%r15,%rbx,8),%rcx
  4004da:       4d 8d 2c f0             lea    (%r8,%rsi,8),%r13
  4004de:       49 8d 7c f0 5f          lea    0x5f(%r8,%rsi,8),%rdi
  4004e3:       4d 8d 44 13 37          lea    0x37(%r11,%rdx,1),%r8
  4004e8:       4c 89 dd                mov    %r11,%rbp
  4004eb:       48 01 d5                add    %rdx,%rbp
  4004ee:       4e 8d 14 63             lea    (%rbx,%r12,2),%r10
  4004f2:       4e 8d 5c 63 5a          lea    0x5a(%rbx,%r12,2),%r11
  4004f7:       4e 8d 74 0e 3a          lea    0x3a(%rsi,%r9,1),%r14
  4004fc:       48 8d 74 4a 5d          lea    0x5d(%rdx,%rcx,2),%rsi
  400501:       4b 8d 94 ac b7 01 00    lea    0x1b7(%r12,%r13,4),%rdx
  400508:       00
  400509:       4d 8d a4 29 93 00 00    lea    0x93(%r9,%rbp,1),%r12
  400510:       00
  400511:       4e 8d 8c 91 d6 01 00    lea    0x1d6(%rcx,%r10,4),%r9
  400518:       00
  400519:       ff c8                   dec    %eax
  40051b:       75 b3                   jne    4004d0 <compute+0x40>

The following is produced by C1. Note that r14 is absent, because the compiler doesn’t know of its existence. There are two memory operations: a store at 4004e3 and a load at 400516.

  4004d0:       49 89 ec                mov    %rbp,%r12
  4004d3:       4c 89 fb                mov    %r15,%rbx
  4004d6:       49 8d 0c dc             lea    (%r12,%rbx,8),%rcx
  4004da:       49 8d 2c f0             lea    (%r8,%rsi,8),%rbp
  4004de:       49 8d 7c f0 5f          lea    0x5f(%r8,%rsi,8),%rdi
  4004e3:       48 89 7c 24 f8          mov    %rdi,-0x8(%rsp)
  4004e8:       4d 8d 44 12 37          lea    0x37(%r10,%rdx,1),%r8
  4004ed:       4d 89 d3                mov    %r10,%r11
  4004f0:       49 01 d3                add    %rdx,%r11
  4004f3:       4a 8d 3c 6b             lea    (%rbx,%r13,2),%rdi
  4004f7:       4e 8d 54 6b 5a          lea    0x5a(%rbx,%r13,2),%r10
  4004fc:       4e 8d 7c 0e 3a          lea    0x3a(%rsi,%r9,1),%r15
  400501:       48 8d 74 4a 5d          lea    0x5d(%rdx,%rcx,2),%rsi
  400506:       49 8d 94 ad b7 01 00    lea    0x1b7(%r13,%rbp,4),%rdx
  40050d:       00
  40050e:       4f 8d ac 19 93 00 00    lea    0x93(%r9,%r11,1),%r13
  400515:       00
  400516:       48 8b 6c 24 f8          mov    -0x8(%rsp),%rbp
  40051b:       4c 8d 8c b9 d6 01 00    lea    0x1d6(%rcx,%rdi,4),%r9
  400522:       00
  400523:       ff c8                   dec    %eax
  400525:       75 a9                   jne    4004d0 <compute+0x40>

To measure the performance, we use the SPEC CPU 2006 benchmark suite. Below is the result of the test cases.


Let’s look at the results.

  • For most cases, C0 is the fastest, except sjeng and libquantum. Assuming a bug-free compiler, there is no way to explain C0 being slower; my guess is experimental error, since the variance for those two cases is quite large.
  • C2 has the fewest registers available, and is the slowest in 6 out of 12 cases.
  • C1 is slower than C3 in most cases. This suggests that a callee-saved register is more performance-critical than a caller-saved register.
  • The overall overhead for C1 is XXX, C2 is XXX, C3 is XXX. (TODO)

Singapore Primary School Phase 2C Vacancy Visualization


Singapore parents are very serious about education. Despite the slogan “Every School A Good School”, parents are willing to do 40 hours of voluntary service to have a better chance of getting their child into a better primary school.

Normally a good primary school has more potential students than capacity each year. In this case, the school conducts a ballot to randomly select students for enrolment. This process is completely open, including the number of students registered and the number of vacancies. I have made a data visualization of the ratio of students registered to vacancies for Phase 2C on Google Maps. The link to the interactive map is here.

The region is divided according to the Voronoi diagram of the primary school locations. A reddish region means there are more registering students than the school’s vacancies; a bluish region means the opposite.
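The Voronoi partition simply assigns every location to its nearest school. A minimal sketch of the idea (the school names, coordinates and ratios below are made up for illustration; the real map uses actual school data):

```python
import math

# Hypothetical schools: (longitude, latitude) and registered/vacancy ratio.
schools = {
    "School A": ((103.80, 1.30), 2.1),   # oversubscribed -> reddish
    "School B": ((103.85, 1.35), 0.7),   # undersubscribed -> bluish
    "School C": ((103.90, 1.32), 1.0),
}

def nearest_school(point):
    """Voronoi assignment: a point belongs to the region of the
    school closest to it."""
    return min(schools, key=lambda s: math.dist(point, schools[s][0]))

home = (103.84, 1.33)
school = nearest_school(home)
print(school, schools[school][1])  # School B 0.7
```

Coloring each region by its school’s ratio then gives exactly the red/blue map described above.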

Screenshot from 2016-08-04 22-13-08

Modifying Android APKs which do self certificate checking


When I modify an Android app which is not mine, I use apktool to unpack the APK, modify the smali, use apktool to repack it, and finally sign it with my own key. This works for most apps. However, some apps explicitly perform certificate checking in their code. The following code snippet illustrates such a check:

PackageManager pm = myContext.getPackageManager();
PackageInfo info = pm.getPackageInfo("", PackageManager.GET_SIGNATURES);
String expectedSig = "308203a5...";
if (!expectedSig.equals(info.signatures[0].toCharsString())) {
  // the APK has been re-signed; bail out
}

One way to fix this is to remove the “if” block, but it’s difficult to locate all such blocks, especially in smali. A more convenient way is to hook the PackageManager.getPackageInfo() API and change info.signatures to the original APK’s signature. The following code snippet illustrates the hook:

PackageManager pm = myContext.getPackageManager();
PackageInfo info = PatchSignature.getPackageInfo(pm, "", PackageManager.GET_SIGNATURES);
public class PatchSignature {
  public static PackageInfo getPackageInfo(PackageManager pm, String packageName, int flags)
      throws PackageManager.NameNotFoundException {
    PackageInfo info = pm.getPackageInfo(packageName, flags);
    if ("".equals(packageName) &&
        info.signatures != null && info.signatures.length > 0 &&
        info.signatures[0] != null)
      info.signatures[0] = new Signature("308203b6...");  // the original APK's signature
    return info;
  }
}

Three things need to be done here:

  1. Change all PackageManager.getPackageInfo() to PatchSignature.getPackageInfo().
  2. Add class PatchSignature.
  3. Find out the correct signature.

It is very easy to locate and replace all PackageManager.getPackageInfo() calls in the smali code. All we need to do is replace:

invoke-virtual {PARAM1, PARAM2, PARAM3}, Landroid/content/pm/PackageManager;->getPackageInfo(Ljava/lang/String;I)Landroid/content/pm/PackageInfo;

with:

invoke-static {PARAM1, PARAM2, PARAM3}, LPatchSignature;->getPackageInfo(Landroid/content/pm/PackageManager;Ljava/lang/String;I)Landroid/content/pm/PackageInfo;

The smali code for PatchSignature can easily be obtained by writing a simple app with the Java code and disassembling it. Now only one problem remains: how do we obtain the original certificate, i.e. what should we put in new Signature("...")?

Here’s the most convenient way I found:

openssl pkcs7 -inform DER -print_certs -in unpacked-app-using-apktool/original/META-INF/CERT.RSA | openssl x509 -inform PEM -outform DER | xxd -p
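Equivalently, once the first two openssl stages have produced the DER certificate bytes, the string Signature expects is just their hex dump — which, as far as I can tell, is also what Signature.toCharsString() returns. A small sketch:

```python
import binascii

def signature_hex(der_bytes):
    """Hex-encode DER certificate bytes; this should match what
    Signature.toCharsString() returns for the same certificate."""
    return binascii.hexlify(der_bytes).decode("ascii")

# A DER certificate begins with a SEQUENCE tag (0x30) and a length field,
# hence the "3082..." prefixes of the signature strings above.
print(signature_hex(b"\x30\x82\x03\xb6"))  # 308203b6
```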

Maximal Shutter Speed for Night Sky Photography without Tracking


When taking night sky photos, we need to grab as much light as possible by increasing the ISO, using a larger aperture and/or using a longer shutter speed. However, higher ISO means more noise, and there is a limit to the aperture size. If the shutter speed is too long, star trails become noticeable.


The star trails can be avoided by attaching the camera to an equatorial mount. However, not everyone can afford one.

What’s the longest shutter speed that doesn’t leave a noticeable trail? Experienced astrophotographers have good estimates from trial and error. Let’s instead look at the mathematics behind it.

Different stars move at different speeds.


Stars at the celestial poles, e.g. Polaris, do not move. Stars on the Earth’s equatorial plane, e.g. the Orion constellation, move the fastest, i.e. about 360^{\circ} per day. In astronomy, declination (-90^{\circ} \le \delta \le 90^{\circ}) measures the angle of a star above the equatorial plane.

During time t, a star at declination \delta moves through an angle of \frac{360^{\circ} \times t \times cos \delta}{24 hours}. If (i) an f mm lens focuses at infinity, (ii) a star moves through an angle of \alpha and (iii) it’s at the center of the picture, then its image on the CCD sensor will move f \times \alpha \times \pi / 180^{\circ} mm. Polaris’ declination is about 90^{\circ} and the Orion constellation’s declination is around [-10^{\circ}, 10^{\circ}].


Putting these together, the star’s image on the CCD will move t \times cos \delta \times f \times 2\pi / 24 hours mm. For instance, if we use a 30s shutter speed and a standard 50mm lens to shoot Orion, the trail will be about 0.1mm long on the camera CCD.

Now, the question is whether a 0.1mm trail on the CCD is noticeable in the final photo. It’s hard to answer, as it depends on several factors: size and resolution of the print, resolution of the sensor and resolving power of the lens. Fortunately, this has been well studied in the optics community. The circle of confusion (CoC) measures the largest blur spot that is indistinguishable from a point. This can be rephrased in our flavour by substituting “largest blur spot” with “longest trail”. A widely used CoC is d/1500, where d is the diagonal of the sensor. For example, for full frame (d = 43mm), CoC = 0.029mm, so 0.1mm is quite noticeable. I find d/1500 tends to be too large for today’s high resolution cameras. Moreover, I usually crop a bit. I’m more comfortable with d/2000.

We now can compute the longest shutter speed t:

\frac{t \times cos \delta \times f \times 2\pi}{24 hours} = \frac{d}{2000}

t = \frac{d \times 24 hours}{2000 \times cos \delta \times f \times 2\pi}
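The formula can be sanity-checked numerically; a small sketch using d/2000 as the CoC:

```python
import math

DAY = 24 * 3600  # seconds

def trail_mm(t, f_mm, decl_deg):
    """Length of the star trail on the sensor after t seconds."""
    return t * math.cos(math.radians(decl_deg)) * f_mm * 2 * math.pi / DAY

def max_shutter(f_mm, decl_deg, d_mm, coc_divisor=2000):
    """Longest shutter speed keeping the trail within d/coc_divisor."""
    return d_mm * DAY / (coc_divisor * math.cos(math.radians(decl_deg))
                         * f_mm * 2 * math.pi)

print(round(trail_mm(30, 50, 0), 2))     # 30s at 50mm on Orion -> ~0.1mm
print(round(max_shutter(50, 0, 43), 1))  # 50mm, full frame -> ~5.9s
```

The outputs agree with the 0.1mm example above and with the table below (5.9s rounds to the 6s entry for 50mm full frame).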

Here is a table for some configurations:

f     | \delta     | d                 | max t
14mm  | 0^{\circ}  | 43mm (full frame) | 21s
50mm  | 0^{\circ}  | 43mm (full frame) | 6s
500mm | 0^{\circ}  | 43mm (full frame) | 0.6s
50mm  | 0^{\circ}  | 28mm (APS-C)      | 4s
4.1mm | 0^{\circ}  | 5mm (iPhone 5S)   | 6s
50mm  | 90^{\circ} | 43mm (full frame) | \infty

A general rule of thumb is 300/F sec where F is the 35 mm equivalent focal length.

Nerd’s Birthday


Do you want to celebrate your 10000th day since birth? How about your 6765th day (the 20th Fibonacci number)? I wrote a Javascript program to compute the days to celebrate in the near future. Unfortunately doesn’t support user Javascript, so it is hosted on my github page: Nerd’s Birthday
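The underlying computation can be sketched in a few lines (the actual page is Javascript). Here “interesting” day counts are just multiples of 1000 and Fibonacci numbers — an assumption for illustration; the real page may check more number families:

```python
from datetime import date, timedelta

def fibs(limit):
    """Fibonacci numbers up to limit (1, 1, 2, 3, ..., 6765, ...)."""
    a, b = 1, 1
    out = set()
    while a <= limit:
        out.add(a)
        a, b = b, a + b
    return out

def nerdy_days(birth, horizon_days=40000):
    """Next 5 celebration-worthy day counts after today, with their dates."""
    interesting = set(range(1000, horizon_days, 1000)) | fibs(horizon_days)
    lived = (date.today() - birth).days
    return [(n, birth + timedelta(days=n))
            for n in sorted(interesting) if n > lived][:5]

for n, day in nerdy_days(date(1990, 1, 1)):
    print(n, day)
```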

Ideas? Bug report? Please leave comments here.

Why is NUS-to-MyRepublic latency so high?


I have switched from StarHub cable to MyRepublic fibre. I noticed lag when SSHing from home to NUS, so I tried to figure it out.

Here is a comparison of the round trip time (RTT) from MyRepublic to various places I could think of. The measurement spans about one and a half days; 5 ping packets are sent to each target every 20 minutes. Sorry for the limited coloring of the plot. The legend is sorted from low latency at the top to high latency at the bottom, so singnet is the fastest and berkeley is the slowest. NUS, with an average RTT of 189ms, ranks among the slowest three.


Here is a summary in the following table. I got the geographic locations from

host       | geo. location | RTT (ms)
           | Singapore     | 4.4
           | Singapore     | 5.1
           | Singapore     | 6.4
google dns | California    | 11.7
           | Guangdong     | 42.4
           | Beijing       | 81.4
           | Shanghai      | 144.6
           | Beijing       | 159.6
           | Singapore     | 188.8
           | Massachusetts | 191.3
           | California    | 223.5

Then I ran traceroute from MyRepublic to NUS.

[myrepublic]$ traceroute
traceroute to (, 30 hops max, 60 byte packets
 1 (  0.212 ms  0.412 ms  0.495 ms
 2 (  2.737 ms  2.736 ms  2.910 ms
 3 (  6.822 ms  6.837 ms  6.771 ms
 4 (  4.172 ms  4.296 ms  4.154 ms
 5 (  4.809 ms  4.861 ms  4.764 ms
 6 (  189.093 ms  188.329 ms  188.602 ms
 7 (  177.459 ms  178.062 ms  178.173 ms
 8 (  178.442 ms  178.512 ms  178.423 ms
 9  * * *

The route seems OK in the sense that every intermediate host is in Singapore; it’s just the RTT that isn’t right. Could the return route be the problem? So I ran traceroute from NUS to MyRepublic. Note that is in MyRepublic.

[nus]$ traceroute
traceroute to (, 30 hops max, 60 byte packets
 1 (  0.339 ms  0.370 ms  0.458 ms
 2 (  0.315 ms  0.376 ms  0.437 ms
 3 (  1.216 ms  1.115 ms  1.114 ms
 4 (  1.231 ms  1.294 ms  1.142 ms
 5 (  1.234 ms  1.330 ms  1.580 ms
 6 (  1.763 ms  1.420 ms  1.421 ms
 7 (  1.934 ms  1.985 ms  2.174 ms
 8 (  2.553 ms  3.706 ms  4.776 ms
 9 (  4.627 ms  4.120 ms  4.155 ms
10 (  212.824 ms  212.802 ms  233.477 ms
11 (  194.115 ms  214.906 ms  194.099 ms
12 (  192.053 ms  191.509 ms  199.569 ms
13 (  191.899 ms  193.382 ms  193.344 ms
14 (  188.949 ms  190.592 ms  189.575 ms
15 (  204.286 ms  193.958 ms  197.660 ms
16  * (  205.614 ms  204.824 ms
17 (  191.739 ms  191.265 ms  193.485 ms
18 (  227.445 ms  188.560 ms  227.428 ms
19 (  240.193 ms  240.195 ms  240.204 ms
20  * * *
21 (  192.013 ms  193.590 ms  193.314 ms
22 (  203.320 ms  195.278 ms  191.638 ms

Notice that at hops 10-11, the route goes from NUS to, which is in California. It then goes to, which is in Tokyo, before coming back to Singapore. Why it routes like that is beyond my knowledge.

Besides this, another interesting question is why google dns is so fast. The distance between Singapore and San Francisco is 13572.66 km. The RTT at the speed of light is 90.5ms, so nothing can do better than that. But google does 11.7ms!
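That 90.5ms figure is just speed-of-light arithmetic over the great-circle distance; a quick check (using the in-vacuum speed — light in fibre is roughly a third slower, so the practical floor is even higher):

```python
C_KM_PER_S = 299_792.458   # speed of light in vacuum
distance_km = 13_572.66    # Singapore <-> San Francisco

# Round trip: light travels the distance twice.
rtt_ms = 2 * distance_km / C_KM_PER_S * 1000
print(round(rtt_ms, 1))  # 90.5
```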

It turns out that is an anycast address, which means it routes to one of many google DNS servers distributed across the world. Try a traceroute yourself!

Softstar’s 《仙剑奇侠传1 DOS怀旧版》 (Chinese Paladin 1, DOS nostalgia edition) on the App Store actually uses the fan-cracked PAL.EXE


Softstar ported Chinese Paladin 1 to iOS and sells it on the App Store (. I downloaded a copy to study it, and found that the bundled main executable PAL.EXE is actually a cracked version made by fans. It contains the string “This files cracked by”:

$ hexdump -n 128 -C PAL_img/PAL.EXE
00000000 4d 5a af 01 c9 00 01 00 08 00 45 19 ff ff 1b 19 |MZ........E.....|
00000010 00 02 00 00 00 01 f0 ff 52 00 00 00 54 68 69 73 |........R...This|
00000020 20 66 69 6c 65 73 20 63 72 61 63 6b 65 64 20 62 | files cracked b|
00000030 79 20 57 69 64 65 53 6f 66 74 2e 45 6d 61 69 6c |y WideSoft.Email|
00000040 3a 73 6f 66 74 77 69 64 65 40 32 31 63 6e 2e 63 |:softwide@21cn.c|
00000050 6f 6d 07 00 00 00 22 00 79 01 00 00 20 00 5d 03 |om....".y... .].|
00000060 ff ff 38 32 80 00 00 00 10 00 a4 2d 1e 00 00 00 |..82.......-....|
00000070 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|

Although my analysis was based on a pirated ipa file downloaded from vShare (, I believe the legitimate version on the App Store should be identical.


Track02.m4a 情愿 CD音源版
Track03.m4a 蝶舞春园 CD音源版
Track04.m4a 雨 CD音源版
Track05.m4a 白河寒秋 CD音源版
Track06.m4a 桃花幻梦 CD音源版
Track07.m4a 云谷鹤峰 CD音源版
Track08.m4a 蝶恋 CD音源版
Track09.m4a 比武招亲 CD音源版
1.m4a 配乐1号
2.m4a 升级
3.m4a 战斗胜利
4.m4a 云谷鹤峰2
5.m4a 云谷鹤峰1
6.m4a 蝶恋1
7.m4a 蝶恋2
8.m4a 余杭春日
9.m4a 蝶恋变奏
10.m4a 忧
11.m4a 惊
12.m4a 小桥流水
13.m4a 来世再续未了缘
14.m4a 比武招亲
15.m4a 蝶恋3
16.m4a 情怨1
17.m4a 情怨2
18.m4a 逆天而行
19.m4a 鬼阴山
20.m4a 兵凶战危
21.m4a 救黎民
22.m4a 魂萦梦牵
23.m4a 蒙难
24.m4a 盟誓
25.m4a 遇袭
26.m4a 十面埋伏
27.m4a 蝶恋4
28.m4a 红尘路渺
31.m4a 乐逍遥
32.m4a 宿命
34.m4a 危机
35.m4a 神木林
37.m4a 风起云涌
38.m4a 势如破竹
39.m4a 心急如焚
40.m4a 战意昂
42.m4a 侠客行
43.m4a 罗汉阵
44.m4a 腥风血雨
46.m4a 兵凶战危变奏
47.m4a 御剑伏魔2
48.m4a 御剑伏魔1
49.m4a 晨光
50.m4a 风光
51.m4a 富甲一方
52.m4a 蝶舞春园1
53.m4a 蝶舞春园2
54.m4a 大开眼界
55.m4a 白河寒秋
56.m4a 桃花幻梦
57.m4a 心忐忑
58.m4a 颓城
59.m4a 回梦
60.m4a 历险
61.m4a 窥春
63.m4a 终曲
64.m4a 醉仙驱魔
65.m4a 戏仙
66.m4a 春色无边
67.m4a 神佑
68.m4a 雨2
69.m4a 看尽前尘
70.m4a 灵山
71.m4a 繁华看尽
72.m4a 凌云壮志
73.m4a 春风恋牡丹
74.m4a 美景
75.m4a 情牵
76.m4a 今生情不悔
77.m4a 云谷鹤峰3
78.m4a 步步为营
79.m4a 灵怨
80.m4a 救佳人
81.m4a 险境
82.m4a 血海余生
83.m4a 鬼影幢幢
84.m4a 雨1
85.m4a 梦醒
86.m4a 酒剑仙
87.m4a 孤雀无栖
30.m4a 配乐30号
33.m4a 配乐33号
36.m4a 配乐36号
41.m4a 配乐41号
45.m4a 配乐45号
62.m4a 配乐62号

Self-referential message with hash


The size of this message is 219 bytes. The sum of all the digits in this message is 125. There are 2 0s, 5 1s, 7 2s, 4 3s, 3 4s, 3 5s, 2 6s, 3 7s, 2 8s and 2 9s. The SHA1 sum of this message has 86 0-bits and 74 1-bits.
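Checking such a message mechanically is straightforward; this sketch measures the properties the message talks about for any candidate string (finding a message where every claim actually holds is the hard part, typically done by iterating toward a fixed point):

```python
import hashlib
from collections import Counter

def properties(msg: bytes):
    """Measure the self-referential properties the message describes."""
    digits = [c for c in msg.decode() if c.isdigit()]
    sha1 = hashlib.sha1(msg).digest()
    one_bits = sum(bin(b).count("1") for b in sha1)
    return {
        "size": len(msg),
        "digit_sum": sum(int(d) for d in digits),
        "digit_counts": Counter(digits),
        "sha1_one_bits": one_bits,
        "sha1_zero_bits": 160 - one_bits,
    }

p = properties(b"test message 123")
print(p["size"], p["digit_sum"])  # 16 6
```

A candidate message is a solution exactly when the numbers it states equal the numbers this function reports for it.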
