Performance Cost of Reducing the Number of Registers in X86-64


When compiling a C program, the C compiler allocates CPU registers to program variables. For example, in the following compilation, the compiler assigns register rdi to variable a and register r15 to variable b.

int a = 1;
int b = 2;
b = a + b;
mov   $0x1,%rdi
mov   $0x2,%r15
add   %rdi,%r15

In the X86-64 architecture, there are 16 general-purpose registers: rax, rbx, rcx, rdx, rbp, rsp, rsi, rdi, r8, r9, r10, r11, r12, r13, r14 and r15. When there are more live variables than registers, the compiler has to use memory instead of registers. Also, when a function calls another function, the caller has to save some of its registers to memory before the call and restore them from memory after the call. Both cases are called spilling, which is bad for performance because memory is much slower than registers. Having more registers reduces spilling.

There are papers which modify the compiler to dedicate one or more registers to specific purposes. For example, to track information flow, LIFT uses a dedicated register to store information-flow metadata. As another example, CPI hides a secret address in a dedicated register so that an attacker with an arbitrary-memory-read primitive cannot read the address. Some SFI implementations use a dedicated register to store the jump target address, so that the original code cannot tamper with it.

Besides spilling, another problem with having fewer registers is data dependencies. Modern CPUs can execute instructions in parallel or out of order as long as there are no dependencies between the instructions. In our previous example, the CPU can execute mov $0x1,%rdi and mov $0x2,%r15 in parallel. However, in the following example, the four instructions have to be executed in sequence because they all use rax, so every instruction depends on the previous one.

mov   $0x1,%rax      # rax = 1
lea   0x2(%rax),%r10 # r10 = rax + 2
mov   $0x3,%rax      # rax = 3
lea   0x4(%rax),%r11 # r11 = rax + 4

With more registers available, the compiler can generate faster code by renaming rax to another, unused register in the last two instructions. After renaming, the first two and the last two instructions can be executed in parallel. Modern CPUs have more physical registers than named (architectural) registers. During execution, a named register, say %rdi, is mapped to an underlying physical register. The Haswell microarchitecture has 168 physical registers. So register renaming is performed by both the compiler and the CPU, i.e. there is some redundant work. In terms of data dependencies, then, reducing the number of named registers may not be so bad, because the CPU can still do renaming.
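For the four-instruction sequence above, a renamed version might look like this (rbx is an arbitrary choice of unused register):

```
mov   $0x1,%rax      # rax = 1
lea   0x2(%rax),%r10 # r10 = rax + 2
mov   $0x3,%rbx      # rbx = 3 (renamed from rax)
lea   0x4(%rbx),%r11 # r11 = rbx + 4 (renamed from rax)
```

Now the first pair and the second pair share no registers, so they can execute in parallel.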

I think we have had enough theoretical speculation. Let's run some real tests. I have modified the LLVM/Clang compiler so that it uses fewer registers. In our evaluation, we have 4 compiler variants:

  • C0: The original compiler: LLVM 4.0
  • C1: It doesn’t use r14.
  • C2: It doesn’t use r13, r14 or r15.
  • C3: It doesn’t use r11. The purpose of this is to see the difference between a callee-saved register (r14) and a caller-saved register (r11).

Let’s first look at the difference between the binaries produced by C0 and C1, by compiling the following C code with both compilers.

for (i = 0; i < 100000000; i++) {
    n0 = n1 + n4 * 8 + 46;
    n1 = n2 + n5 * 8 + 95;
    n2 = n3 + n6 * 1 + 55;
    n3 = n4 + n7 * 2 + 90;
    n4 = n5 + n8 * 1 + 58;
    n5 = n6 + n0 * 2 + 1 ;
    n6 = n7 + n1 * 4 + 59;
    n7 = n8 + n2 * 1 + 92;
    n8 = n0 + n3 * 4 + 64;
}

The following binary is produced by C0. Note that there are no memory operations at all.

  4004d0:       49 89 ff                mov    %rdi,%r15
  4004d3:       4c 89 f3                mov    %r14,%rbx
  4004d6:       49 8d 0c df             lea    (%r15,%rbx,8),%rcx
  4004da:       4d 8d 2c f0             lea    (%r8,%rsi,8),%r13
  4004de:       49 8d 7c f0 5f          lea    0x5f(%r8,%rsi,8),%rdi
  4004e3:       4d 8d 44 13 37          lea    0x37(%r11,%rdx,1),%r8
  4004e8:       4c 89 dd                mov    %r11,%rbp
  4004eb:       48 01 d5                add    %rdx,%rbp
  4004ee:       4e 8d 14 63             lea    (%rbx,%r12,2),%r10
  4004f2:       4e 8d 5c 63 5a          lea    0x5a(%rbx,%r12,2),%r11
  4004f7:       4e 8d 74 0e 3a          lea    0x3a(%rsi,%r9,1),%r14
  4004fc:       48 8d 74 4a 5d          lea    0x5d(%rdx,%rcx,2),%rsi
  400501:       4b 8d 94 ac b7 01 00    lea    0x1b7(%r12,%r13,4),%rdx
  400508:       00
  400509:       4d 8d a4 29 93 00 00    lea    0x93(%r9,%rbp,1),%r12
  400510:       00
  400511:       4e 8d 8c 91 d6 01 00    lea    0x1d6(%rcx,%r10,4),%r9
  400518:       00
  400519:       ff c8                   dec    %eax
  40051b:       75 b3                   jne    4004d0 <compute+0x40>

The following is produced by C1. Note that r14 is absent, because the compiler doesn’t know of r14's existence. There are two memory operations: a store at 4004e3 and a load at 400516.

  4004d0:       49 89 ec                mov    %rbp,%r12
  4004d3:       4c 89 fb                mov    %r15,%rbx
  4004d6:       49 8d 0c dc             lea    (%r12,%rbx,8),%rcx
  4004da:       49 8d 2c f0             lea    (%r8,%rsi,8),%rbp
  4004de:       49 8d 7c f0 5f          lea    0x5f(%r8,%rsi,8),%rdi
  4004e3:       48 89 7c 24 f8          mov    %rdi,-0x8(%rsp)
  4004e8:       4d 8d 44 12 37          lea    0x37(%r10,%rdx,1),%r8
  4004ed:       4d 89 d3                mov    %r10,%r11
  4004f0:       49 01 d3                add    %rdx,%r11
  4004f3:       4a 8d 3c 6b             lea    (%rbx,%r13,2),%rdi
  4004f7:       4e 8d 54 6b 5a          lea    0x5a(%rbx,%r13,2),%r10
  4004fc:       4e 8d 7c 0e 3a          lea    0x3a(%rsi,%r9,1),%r15
  400501:       48 8d 74 4a 5d          lea    0x5d(%rdx,%rcx,2),%rsi
  400506:       49 8d 94 ad b7 01 00    lea    0x1b7(%r13,%rbp,4),%rdx
  40050d:       00
  40050e:       4f 8d ac 19 93 00 00    lea    0x93(%r9,%r11,1),%r13
  400515:       00
  400516:       48 8b 6c 24 f8          mov    -0x8(%rsp),%rbp
  40051b:       4c 8d 8c b9 d6 01 00    lea    0x1d6(%rcx,%rdi,4),%r9
  400522:       00
  400523:       ff c8                   dec    %eax
  400525:       75 a9                   jne    4004d0 <compute+0x40>

To measure the performance, we use the SPEC CPU 2006 benchmark suite. Below are the results of the test cases.


Let’s look at the result.

  • For most cases, C0 is the fastest, except sjeng and libquantum. Assuming a bug-free compiler, there is no way to explain C0 being slower. My guess is experimental error, since the variance for those two cases is quite large.
  • C2 has the fewest registers available, and is the slowest in 6 out of 12 cases.
  • C1 is slower than C3 in most cases. This suggests a callee-saved register is more performance-critical than a caller-saved one.
  • The overall overhead for C1 is XXX, C2 is XXX, C3 is XXX. (TODO)

Singapore Primary School Phase 2C Vacancy Visualization


Singapore parents are very serious about education. Despite the slogan “Every School A Good School”, parents are willing to do 40 hours of voluntary service just to improve their child's chances of entering a better primary school.

Normally, a good primary school has more prospective students than capacity each year. In that case, the school conducts a ballot to randomly select the students to be enrolled. This process is completely open, including the number of students registered and the number of vacancies. I have made a visualization of the ratio of students registered to vacancies for Phase 2C on Google Maps. The link to the interactive map is here.

The map is divided according to the Voronoi diagram of the primary schools' locations. Reddish regions have more registered students than vacancies; bluish regions have the opposite.

Screenshot from 2016-08-04 22-13-08

Modifying Android APKs That Do Self Certificate Checking


When I modify an Android app that is not mine, I use apktool to unpack the APK, modify the smali, repack it with apktool, and finally sign it with my own key. This works for most apps. However, some apps explicitly perform certificate checking in their code. The following code snippet illustrates such a check:

PackageManager pm = myContext.getPackageManager();
PackageInfo info = pm.getPackageInfo("", PackageManager.GET_SIGNATURES);
String expectedSig = "308203a5...";
if (!expectedSig.equals(info.signatures[0].toCharsString())) {
    // refuse to run
}

One way to defeat this is to remove the “if” block, but it’s difficult to locate all such blocks, especially in smali. A more convenient way is to hook the PackageManager.getPackageInfo() API and replace info.signatures with the original APK’s signature. The following code snippet illustrates such a hook:

PackageManager pm = myContext.getPackageManager();
PackageInfo info = PatchSignature.getPackageInfo(pm, "", PackageManager.GET_SIGNATURES);
public class PatchSignature {
  public static PackageInfo getPackageInfo (PackageManager pm, String packageName, int flags) throws PackageManager.NameNotFoundException {
    PackageInfo info = pm.getPackageInfo(packageName, flags);
    if ("".equals(packageName) &&
        info.signatures != null && info.signatures.length > 0 &&
        info.signatures[0] != null)
      info.signatures[0] = new Signature("308203b6...");
    return info;
  }
}

Three things need to be done here:

  1. Change all PackageManager.getPackageInfo() to PatchSignature.getPackageInfo().
  2. Add class PatchSignature.
  3. Find out the correct signature.

It is very easy to locate and replace all PackageManager.getPackageInfo() calls in the smali code. All we need to do is replace:

invoke-virtual {PARAM1, PARAM2, PARAM3}, Landroid/content/pm/PackageManager;->getPackageInfo(Ljava/lang/String;I)Landroid/content/pm/PackageInfo;

with:

invoke-static {PARAM1, PARAM2, PARAM3}, LPatchSignature;->getPackageInfo(Landroid/content/pm/PackageManager;Ljava/lang/String;I)Landroid/content/pm/PackageInfo;

The smali code for PatchSignature can be easily obtained by writing a simple app containing the Java code and disassembling it. Now only one problem remains: how do we obtain the original certificate, i.e. what should we put in new Signature("...")?

Here’s the most convenient way I found:

openssl pkcs7 -inform DER -print_certs -in unpacked-app-using-apktool/original/META-INF/CERT.RSA | openssl x509 -inform PEM -outform DER | xxd -p

Maximal Shutter Speed for Night Sky Photography without Tracking


When shooting the night sky, we need to gather as much light as possible by increasing the ISO, using a larger aperture and/or using a longer shutter speed. However, a higher ISO means more noise, and there is a limit to aperture size. If the shutter speed is too long, star trails become noticeable.


The star trails can be avoided by attaching the camera to an equatorial mount. However, not everyone can afford one.

What’s the longest shutter speed that leaves no noticeable trail? Experienced astrophotographers have good estimates from trial and error. Let’s instead look at the mathematics behind it.

Different stars move at different speeds.


Stars at the celestial poles, e.g. Polaris, do not move. Stars on the equatorial plane, e.g. the Orion constellation, move the fastest, sweeping about 360^{\circ} per day. In astronomy, declination (-90^{\circ} \le \delta \le 90^{\circ}) measures the angle of a star above the equatorial plane.

During time t, a star at declination \delta moves through an angle of \frac{360^{\circ} \times t \times \cos \delta}{24 hours}. If (i) an f mm lens is focused at infinity, (ii) a star moves through an angle \alpha and (iii) it’s at the center of the picture, then its image on the CCD sensor will move f \times \alpha \times \pi / 180^{\circ} mm. Polaris’ declination is about 90^{\circ} and the Orion constellation’s declination is within [-10^{\circ}, 10^{\circ}].


Putting these together, the star’s image on the CCD moves t \times \cos \delta \times f \times 2\pi / 24 hours mm. For instance, if we use a 30s shutter speed and a standard 50mm lens to shoot Orion, the trail will be about 0.1mm long on the camera CCD.

Now, the question is: is a 0.1mm trail on the CCD noticeable in the final photo? It’s hard to answer, as it depends on several factors: the size and resolution of the print, the resolution of the sensor and the resolving power of the lens. Fortunately, this has been well studied in the optics community. The circle of confusion (CoC) measures the largest blur spot that is indistinguishable from a point. This can be rephrased in our flavour by substituting “longest trail” for “largest blur spot”. A widely used CoC is d/1500, where d is the diagonal of the sensor. For example, for full frame (d = 43mm), CoC = 0.029mm, so a 0.1mm trail is quite noticeable. I find d/1500 tends to be too large for today’s high-resolution cameras. Moreover, I usually crop a bit. I’m more comfortable with d/2000.

We can now compute the longest shutter speed t:

\frac{t \times \cos \delta \times f \times 2\pi}{24 hours} = \frac{d}{2000}

t = \frac{d \times 24 hours}{2000 \times \cos \delta \times f \times 2\pi}

Here is a table for some configurations:

f       \delta       d                   max t
14mm    0^{\circ}    43mm (full frame)   21s
50mm    0^{\circ}    43mm (full frame)   6s
500mm   0^{\circ}    43mm (full frame)   0.6s
50mm    0^{\circ}    28mm (APS-C)        4s
4.1mm   0^{\circ}    5mm (iPhone 5S)     6s
50mm    90^{\circ}   43mm (full frame)   \infty

A general rule of thumb is 300/F seconds, where F is the 35mm-equivalent focal length.

Nerd’s Birthday


Do you want to celebrate your 10000th day since birth? How about your 6765th (the 20th Fibonacci number) day? I wrote a Javascript program to compute the days to celebrate in the near future. Unfortunately this blog doesn’t support user Javascript, so it is hosted on my github page: Nerd’s Birthday

Ideas? Bug report? Please leave comments here.

Why Is NUS-to-MyRepublic Latency So High?


I have switched from StarHub cable to MyRepublic fibre. I noticed lag when SSHing from home to NUS, so I tried to figure out why.

Here is a comparison of the round trip time (RTT) from MyRepublic to various places I could think of. The measurement spans about one and a half days; 5 ping packets are sent to each target every 20 minutes. Sorry for the limited coloring of the plot. The legend is sorted from low latency at the top to high latency at the bottom, so singnet is the fastest and berkeley is the slowest. NUS, with an average RTT of 189ms, ranks among the slowest three.


Here is a summary in the following table. I got geographic location from

host         geo. location   RTT (ms)
             Singapore       4.4
             Singapore       5.1
             Singapore       6.4
google dns   California      11.7
             Guangdong       42.4
             Beijing         81.4
             Shanghai        144.6
             Beijing         159.6
             Singapore       188.8
             Massachusetts   191.3
             California      223.5

Then I ran traceroute from MyRepublic to NUS.

[myrepublic]$ traceroute
traceroute to (, 30 hops max, 60 byte packets
 1 (  0.212 ms  0.412 ms  0.495 ms
 2 (  2.737 ms  2.736 ms  2.910 ms
 3 (  6.822 ms  6.837 ms  6.771 ms
 4 (  4.172 ms  4.296 ms  4.154 ms
 5 (  4.809 ms  4.861 ms  4.764 ms
 6 (  189.093 ms  188.329 ms  188.602 ms
 7 (  177.459 ms  178.062 ms  178.173 ms
 8 (  178.442 ms  178.512 ms  178.423 ms
 9  * * *

The route seems OK in the sense that every intermediate host is in Singapore; it’s just the RTT that isn’t right. Could the return route be the problem? So I ran traceroute from NUS to MyRepublic. Note that is in MyRepublic.

[nus]$ traceroute
traceroute to (, 30 hops max, 60 byte packets
 1 (  0.339 ms  0.370 ms  0.458 ms
 2 (  0.315 ms  0.376 ms  0.437 ms
 3 (  1.216 ms  1.115 ms  1.114 ms
 4 (  1.231 ms  1.294 ms  1.142 ms
 5 (  1.234 ms  1.330 ms  1.580 ms
 6 (  1.763 ms  1.420 ms  1.421 ms
 7 (  1.934 ms  1.985 ms  2.174 ms
 8 (  2.553 ms  3.706 ms  4.776 ms
 9 (  4.627 ms  4.120 ms  4.155 ms
10 (  212.824 ms  212.802 ms  233.477 ms
11 (  194.115 ms  214.906 ms  194.099 ms
12 (  192.053 ms  191.509 ms  199.569 ms
13 (  191.899 ms  193.382 ms  193.344 ms
14 (  188.949 ms  190.592 ms  189.575 ms
15 (  204.286 ms  193.958 ms  197.660 ms
16  * (  205.614 ms  204.824 ms
17 (  191.739 ms  191.265 ms  193.485 ms
18 (  227.445 ms  188.560 ms  227.428 ms
19 (  240.193 ms  240.195 ms  240.204 ms
20  * * *
21 (  192.013 ms  193.590 ms  193.314 ms
22 (  203.320 ms  195.278 ms  191.638 ms

Notice that in hops 10-11, it goes from NUS to, which is in California. It then goes to, which is in Tokyo. Then it comes back to Singapore. Why it routes like that is beyond my knowledge.

Besides this, another interesting question is why google dns is so fast. The distance between Singapore and San Francisco is 13572.66 km, so the RTT of light is 90.5ms; no one can do better than that. But google does 11.7ms!

It turns out that is an anycast address, which means it routes to one of the Google DNS servers distributed across the world. Try a traceroute to it yourself!

Softstar’s “Sword and Fairy 1 DOS Nostalgia Edition” (仙剑奇侠传1 DOS怀旧版) on the App Store Actually Uses a Fan-Cracked PAL.EXE


Softstar (大宇) ported Sword and Fairy 1 to iOS and sells it on the App Store (. I downloaded a copy to study it, and found that the bundled main program PAL.EXE is actually a cracked version released by fans: it contains the string "This files cracked by".

$ hexdump -n 128 -C PAL_img/PAL.EXE
00000000 4d 5a af 01 c9 00 01 00 08 00 45 19 ff ff 1b 19 |MZ........E.....|
00000010 00 02 00 00 00 01 f0 ff 52 00 00 00 54 68 69 73 |........R...This|
00000020 20 66 69 6c 65 73 20 63 72 61 63 6b 65 64 20 62 | files cracked b|
00000030 79 20 57 69 64 65 53 6f 66 74 2e 45 6d 61 69 6c |y WideSoft.Email|
00000040 3a 73 6f 66 74 77 69 64 65 40 32 31 63 6e 2e 63 |:softwide@21cn.c|
00000050 6f 6d 07 00 00 00 22 00 79 01 00 00 20 00 5d 03 |om....".y... .].|
00000060 ff ff 38 32 80 00 00 00 10 00 a4 2d 1e 00 00 00 |..82.......-....|
00000070 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|

Although my analysis is based on a pirated ipa file downloaded from vShare (, I believe the legitimate version on the App Store is identical.


Track02.m4a 情愿 CD音源版
Track03.m4a 蝶舞春园 CD音源版
Track04.m4a 雨 CD音源版
Track05.m4a 白河寒秋 CD音源版
Track06.m4a 桃花幻梦 CD音源版
Track07.m4a 云谷鹤峰 CD音源版
Track08.m4a 蝶恋 CD音源版
Track09.m4a 比武招亲 CD音源版
1.m4a 配乐1号
2.m4a 升级
3.m4a 战斗胜利
4.m4a 云谷鹤峰2
5.m4a 云谷鹤峰1
6.m4a 蝶恋1
7.m4a 蝶恋2
8.m4a 余杭春日
9.m4a 蝶恋变奏
10.m4a 忧
11.m4a 惊
12.m4a 小桥流水
13.m4a 来世再续未了缘
14.m4a 比武招亲
15.m4a 蝶恋3
16.m4a 情怨1
17.m4a 情怨2
18.m4a 逆天而行
19.m4a 鬼阴山
20.m4a 兵凶战危
21.m4a 救黎民
22.m4a 魂萦梦牵
23.m4a 蒙难
24.m4a 盟誓
25.m4a 遇袭
26.m4a 十面埋伏
27.m4a 蝶恋4
28.m4a 红尘路渺
31.m4a 乐逍遥
32.m4a 宿命
34.m4a 危机
35.m4a 神木林
37.m4a 风起云涌
38.m4a 势如破竹
39.m4a 心急如焚
40.m4a 战意昂
42.m4a 侠客行
43.m4a 罗汉阵
44.m4a 腥风血雨
46.m4a 兵凶战危变奏
47.m4a 御剑伏魔2
48.m4a 御剑伏魔1
49.m4a 晨光
50.m4a 风光
51.m4a 富甲一方
52.m4a 蝶舞春园1
53.m4a 蝶舞春园2
54.m4a 大开眼界
55.m4a 白河寒秋
56.m4a 桃花幻梦
57.m4a 心忐忑
58.m4a 颓城
59.m4a 回梦
60.m4a 历险
61.m4a 窥春
63.m4a 终曲
64.m4a 醉仙驱魔
65.m4a 戏仙
66.m4a 春色无边
67.m4a 神佑
68.m4a 雨2
69.m4a 看尽前尘
70.m4a 灵山
71.m4a 繁华看尽
72.m4a 凌云壮志
73.m4a 春风恋牡丹
74.m4a 美景
75.m4a 情牵
76.m4a 今生情不悔
77.m4a 云谷鹤峰3
78.m4a 步步为营
79.m4a 灵怨
80.m4a 救佳人
81.m4a 险境
82.m4a 血海余生
83.m4a 鬼影幢幢
84.m4a 雨1
85.m4a 梦醒
86.m4a 酒剑仙
87.m4a 孤雀无栖
30.m4a 配乐30号
33.m4a 配乐33号
36.m4a 配乐36号
41.m4a 配乐41号
45.m4a 配乐45号
62.m4a 配乐62号

Self-referential message with hash


The size of this message is 219 bytes. The sum of all the digits in this message is 125. There are 2 0s, 5 1s, 7 2s, 4 3s, 3 4s, 3 5s, 2 6s, 3 7s, 2 8s and 2 9s. The SHA1 sum of this message has 86 0-bits and 74 1-bits.

For more things to read, see here and here.

4 Years of Suiyang


I had the opportunity to take two panoramas of Suiyang, in February 2009 and February 2013. They were taken from the same location, so I could stack them perfectly. They show how fast a typical county in China develops. The interactive image comparison is hosted here, because this blog doesn’t allow user javascript.



Experiment on TCP Hole Punching


I recently needed a way to connect to a subversion server behind a NAT. I used to tunnel through an SSH server with a public IP. It worked perfectly, but recently I lost access to that server. So I wanted to try TCP hole punching.

It’s not hard to find related resources online. I followed the approach described in the paper “Peer-to-Peer Communication Across Network Address Translators”. The basic idea is to let both peers connect and listen on the same port. If the internet gateway sees an outgoing SYN packet to X, the gateway will allow subsequent packets from X. As a result, at least one of the SYN packets should punch through the NAT.

Before this, we need to know the external IP and port of both peers. Fortunately, most NAT implementations always map the same internal IP/port to the same external IP/port; this is known as “independent mapping”. Even better, most NATs use the same external port as the internal port if it’s not occupied; this is known as “port preserving”. To learn the external IP/port, we can connect to a third server and let it tell us, just like STUN.

So I implemented the idea in Ford’s paper.

#include <stdio.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <errno.h>
#include <sys/types.h>
#include <netinet/in.h>
#include <netdb.h>

#define DIE(format,...) do {perror(NULL); printf(format, ##__VA_ARGS__); exit(1);} while(0)

int say_something (int sock)
{
	char buff[256];
	int len, flags;

	flags = fcntl(sock, F_GETFL);
	flags = flags & (~ O_NONBLOCK);
	if (fcntl(sock, F_SETFL, flags))
		DIE("fcntl() failed\n");

	snprintf(buff, sizeof(buff), "Hello. I'm %d", getpid());
	printf("sending %s\n", buff);
	if (send(sock, buff, strlen(buff) + 1, 0) != strlen(buff) + 1)
		DIE("send() failed\n");

	len = recv(sock, buff, sizeof(buff), 0);
	if (len <= 0)
		DIE("recv() failed\n");
	printf("received %s\n", buff);

	return 0;
}

// TODO address type, length...
int getaddr (struct sockaddr *addr, const char *host, const char *port)
{
	struct addrinfo hints, *res;

	memset(&hints, 0, sizeof(hints));
	hints.ai_family = AF_INET;
	hints.ai_socktype = SOCK_STREAM;
	hints.ai_protocol = 0;
	hints.ai_flags = AI_PASSIVE;

	if (getaddrinfo(host, port, &hints, &res))
		return -1;

	if (res == NULL)
		return -1;

	memcpy(addr, res->ai_addr, res->ai_addrlen);
	return 0;
}

int main (int argc, char *argv[])
{
	int ssock, csock;
	struct sockaddr_in local_addr, remote_addr;
	int i;
	socklen_t len;

	if (argc != 4) {
		printf("Usage: %s localport remotehost remoteport\n", argv[0]);
		exit(1);
	}

	if (getaddr((struct sockaddr *)&local_addr, NULL, argv[1]))
		DIE("getaddr() failed\n");
	if (getaddr((struct sockaddr *)&remote_addr, argv[2], argv[3]))
		DIE("getaddr() failed\n");

	if ((ssock = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP)) < 0)
		DIE("socket() failed\n");
	if ((csock = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP)) < 0)
		DIE("socket() failed\n");

	i = 1;
	if (setsockopt(ssock, SOL_SOCKET, SO_REUSEADDR, &i, sizeof(int)))
		DIE("setsockopt() failed\n");
	if (setsockopt(csock, SOL_SOCKET, SO_REUSEADDR, &i, sizeof(i)))
		DIE("setsockopt() failed\n");

	if (bind(ssock, (const struct sockaddr *)&local_addr, sizeof(local_addr)))
		DIE("bind() failed\n");
	if (bind(csock, (const struct sockaddr *)&local_addr, sizeof(local_addr)))
		DIE("bind() failed\n");

	if (fork()) {

		if (listen(ssock, 1))
			DIE("listen() failed\n");
		while (1) {
			len = sizeof(remote_addr);
			i = accept(ssock, (struct sockaddr *)&remote_addr, &len);
			if (i < 0) {
				perror("accept() failed.");
			} else {
				printf("accept() succeed.");
				return say_something(i);
			}
		}
	} else {

		for (i = 0; i < 3; i ++) {
			if (connect(csock, (const struct sockaddr *)&remote_addr, sizeof(remote_addr))) {
				int sleeptime = random() * 1000000.0 / RAND_MAX + 1000000.0;
				sleeptime = sleeptime << i;
				perror("connect() failed");
				if (i < 2) {
					printf("sleeping for %.2f sec to retry\n", sleeptime / 1000000.0);
					usleep(sleeptime);
				}
			} else {
				printf("connect() succeed");
				return say_something(csock);
			}
		}
		return 1;
	}
}

It worked. host1 and host2 have external IP and respectively. Both NATs preserve ports, so if host1 binds on port 30000, the external port is also 30000.

host1$ ./biconn 30000 20000
connect() failed: Connection timed out
sleeping for 1.13 sec to retry
connect() succeed: Connection timed out
sending Hello. I'm 8151
received Hello. I'm 6629
host2$ ./biconn 20000 30000
connect() failed: Connection refused
sleeping for 1.68 sec to retry
connect() succeed: Connection refused
sending Hello. I'm 6629
received Hello. I'm 8151

I noticed an unexpected behaviour: accept() never succeeded on either peer, while connect() succeeded on both.

Is it possible for two peers to symmetrically connect() to each other? This question is not related to NAT, and the answer is yes. Find any computer networks textbook and look at the TCP state diagram: it’s possible to go from the SYN_SENT state to the SYN_RECV state by receiving a SYN packet (a “simultaneous open”). Someone has asked the question before.

So I wondered whether I could remove the listen() part of the code and use only one socket on each peer. A problem with the previous approach (as mentioned here) is that it’s not possible to bind additional sockets to the port after listen().

So I did the second experiment. It’s much cleaner.

#include <stdio.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <errno.h>
#include <sys/select.h>
#include <netinet/in.h>

void die (const char *msg)
{
	perror(msg);
	exit(1);
}

int main (int argc, char *argv[])
{
	int sock;
	struct sockaddr_in addr;
	char buff[256];

	if (argc != 4) {
		printf("Usage: %s localport remotehost remoteport\n", argv[0]);
		exit(1);
	}

	if ((sock = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP)) < 0)
		die("socket() failed");

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_ANY);
	addr.sin_port = htons(atoi(argv[1]));
	if (bind(sock, (const struct sockaddr *)&addr, sizeof(addr)))
		die("bind() failed\n");

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = inet_addr(argv[2]);
	addr.sin_port = htons(atoi(argv[3]));

	while (connect(sock, (const struct sockaddr *)&addr, sizeof(addr))) {
		if (errno != ETIMEDOUT) {
			perror("connect() failed. retry in 2 sec.");
			sleep(2);
		} else {
			perror("connect() failed.");
		}
	}

	snprintf(buff, sizeof(buff), "Hi, I'm %d.", getpid());
	printf("sending \"%s\"\n", buff);
	if (send(sock, buff, strlen(buff) + 1, 0) != strlen(buff) + 1)
		die("send() failed.");

	if (recv(sock, buff, sizeof(buff), 0) <= 0)
		die("recv() failed.");
	printf("received \"%s\"\n", buff);

	return 0;
}

It works. I wonder what the reason for doing listen() is. Is it related to the way connection tracking is implemented in different types of NAT? Or to the way TCP is implemented in different OSes?

host1$ ./biconn1 20000 30000
connect() failed. retry in 2 sec.: Connection refused
sending "Hi, I'm 6566."
received "Hi, I'm 7600."
host2$ ./biconn1 30000 20000
connect() failed. retry in 2 sec.: Connection refused
connect() failed.: Connection timed out
connect() failed.: Connection timed out
sending "Hi, I'm 7600."
received "Hi, I'm 6566."

My objective is to connect to my subversion server behind a NAT. I still need a publicly accessible server to coordinate the hole punching. It basically works like this: on the subversion server I run a program that keeps a persistent connection to the public server. When I want to connect from outside, I contact the public server, which notifies my program on the subversion server. Then I can launch the TCP hole punching and get a TCP connection, which can then be used to tunnel the subversion connection.

Without a publicly accessible server, other mechanisms can be used. I can think of the following:

  • Online forum: Post the client’s external IP/port in a forum and have a program running in the subversion server to periodically check the forum.
  • DHT, e.g. the mainline bittorrent DHT: The server randomly generates an infohash and “announces” itself as downloading it. The server then periodically queries for peers on that infohash. To do hole punching, the client also announces itself as downloading it. The server sees a new peer joining, and both parties can start hole punching. The limitation is that the two peers cannot exchange port information, so they need to agree on a particular port beforehand.
  • IRC bot
  • Public SIP registrar: It’s a bit overkill, but quite related, and well supported (plenty public servers and libraries).

I’m not sure whether any existing tool serves this purpose. Until IPv6 is well established, there are going to be more and more servers behind NAT, so such a tool is going to be handy. Please leave a comment if you know of any.