Today Godbolt (Compiler Explorer) released a new compiler comparison feature, so I decided to quickly compare C++ versus C style register manipulation and see what kind of optimizations the different compilers apply.
Source code
So I wrote a naive C++ class that represents a 32-bit register.
Note that I am trying to test the compilers’ optimization of class member manipulation here, rather than making the case for embedded C++ (where the bits in registers would be volatile, since the order in which bits are set matters most of the time).
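For context only, here is roughly what real embedded code would look like: a memory-mapped register is accessed through a volatile-qualified pointer, so the compiler has to emit every write, in program order, and cannot merge them the way it does in the experiments below. A minimal sketch, with a made-up address and register name:

// Hypothetical memory-mapped I/O register; the address and the name
// GPIO_ODR are invented purely for illustration.
volatile unsigned int* const GPIO_ODR =
    reinterpret_cast<volatile unsigned int*>(0x40021000);

void setPinsInOrder()
{
    *GPIO_ODR |= (1u << 0); // each volatile access is a separate read-modify-write
    *GPIO_ODR |= (1u << 1); // and must stay in this order
}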
struct Register
{
    unsigned int b0:1;
    unsigned int b1:1;
    unsigned int b2:1;
    unsigned int b3:1;
    unsigned int b4:1;
    unsigned int b5:1;
    unsigned int b6:1;
    unsigned int b7:1;
    unsigned int b8:1;
    unsigned int b9:1;
    unsigned int b10:1;
    unsigned int b11:1;
    unsigned int b12:1;
    unsigned int b13:1;
    unsigned int b14:1;
    unsigned int b15:1;
    unsigned int b16:1;
    unsigned int b17:1;
    unsigned int b18:1;
    unsigned int b19:1;
    unsigned int b20:1;
    unsigned int b21:1;
    unsigned int b22:1;
    unsigned int b23:1;
    unsigned int b24:1;
    unsigned int b25:1;
    unsigned int b26:1;
    unsigned int b27:1;
    unsigned int b28:1;
    unsigned int b29:1;
    unsigned int b30:1;
    unsigned int b31:1;

    inline void Set(unsigned char bitNumber, bool value)
    {
        switch(bitNumber)
        {
            case 0:  b0  = value; break;
            case 1:  b1  = value; break;
            case 2:  b2  = value; break;
            case 3:  b3  = value; break;
            case 4:  b4  = value; break;
            case 5:  b5  = value; break;
            case 6:  b6  = value; break;
            case 7:  b7  = value; break;
            case 8:  b8  = value; break;
            case 9:  b9  = value; break;
            case 10: b10 = value; break;
            case 11: b11 = value; break;
            case 12: b12 = value; break;
            case 13: b13 = value; break;
            case 14: b14 = value; break;
            case 15: b15 = value; break;
            case 16: b16 = value; break;
            case 17: b17 = value; break;
            case 18: b18 = value; break;
            case 19: b19 = value; break;
            case 20: b20 = value; break;
            case 21: b21 = value; break;
            case 22: b22 = value; break;
            case 23: b23 = value; break;
            case 24: b24 = value; break;
            case 25: b25 = value; break;
            case 26: b26 = value; break;
            case 27: b27 = value; break;
            case 28: b28 = value; break;
            case 29: b29 = value; break;
            case 30: b30 = value; break;
            case 31: b31 = value; break;
        }
    }
};
After that you can see I am using it to set bits 0, 1, 3, 5, 13 and 30 of the register:
void cppFunction(Register& reg)
{
    reg.Set(0, true);
    reg.Set(1, true);
    reg.Set(3, true);
    reg.Set(5, true);
    reg.Set(13, true);
    reg.Set(30, true);
}
The corresponding C implementation is:
void cFunction(int* reg)
{
    *reg |= (1 << 0) | (1 << 1) | (1 << 3) | (1 << 5) | (1 << 13) | (1 << 30);
}
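As a side note, the six shifted masks fold into the single constant 0x4000202B (1073750059 in decimal), which is the value you will see in the assembly listings below. A quick static_assert, added here only to double-check the arithmetic and not part of the compared code, confirms it:

// Verifies that bits 0, 1, 3, 5, 13 and 30 combined
// really are 0x4000202B (1073750059).
static_assert(((1 << 0) | (1 << 1) | (1 << 3) | (1 << 5) |
               (1 << 13) | (1 << 30)) == 0x4000202B,
              "combined bit mask");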
I am also calling the above functions from a third one to see if the compilers will inline them:
void Caller(Register& reg1, int* reg2)
{
    cppFunction(reg1);
    cFunction(reg2);
}
The compilers
I am comparing three compilers:
- gcc 6.2
- clang 3.9.0
- icc 17
All are using the following options: -O3 -mtune=native -march=native
I would expect the compilers to “see through” the C++ code and optimize it down to a single instruction.
Let’s see the assembly of GCC:
cppFunction(Register&):
        or      BYTE PTR [rdi], 43
        or      BYTE PTR [rdi+1], 32
        or      BYTE PTR [rdi+3], 64
        ret
cFunction(int*):
        or      DWORD PTR [rdi], 1073750059
        ret
Caller(Register&, int*):
        or      BYTE PTR [rdi], 43
        or      BYTE PTR [rdi+1], 32
        or      BYTE PTR [rdi+3], 64
        or      DWORD PTR [rsi], 1073750059
        ret
We see that cFunction() was correctly optimized: the register pointed to by rdi is set with a single ‘or‘ of the combined constant 1073750059.
cppFunction(), on the other hand, is less than ideal – it uses BYTE PTR addressing three times (the 43 covers bits 0, 1, 3 and 5; the 32 at offset 1 is bit 13; the 64 at offset 3 is bit 30) instead of a single DWORD-wide ‘or‘.
Both functions have been correctly inlined into the Caller() function, whose body does nothing but call them.
Now let’s see Clang:
cppFunction(Register&):                 # @cppFunction(Register&)
        or      dword ptr [rdi], 1073750059
        ret
cFunction(int*):                        # @cFunction(int*)
        or      dword ptr [rdi], 1073750059
        ret
Caller(Register&, int*):                # @Caller(Register&, int*)
        or      dword ptr [rdi], 1073750059
        or      dword ptr [rsi], 1073750059
        ret
Wow, we see the ideal optimization in all three functions there… nothing could be done better.
What about the Intel Compiler:
cppFunction(Register&):
        push rbp                                   #80.1
        xor esi, esi                               #81.7
        mov edx, 1                                 #81.7
        mov rbp, rdi                               #80.1
        call Register::Set(unsigned char, bool)    #81.7
        mov esi, 1                                 #82.7
        mov rdi, rbp                               #82.7
        mov edx, esi                               #82.7
        call Register::Set(unsigned char, bool)    #82.7
        mov rdi, rbp                               #83.7
        mov esi, 3                                 #83.7
        mov edx, 1                                 #83.7
        call Register::Set(unsigned char, bool)    #83.7
        or BYTE PTR [rbp], 32                      #48.15
        or BYTE PTR [1+rbp], 32                    #56.16
        or BYTE PTR [3+rbp], 64                    #73.16
        pop rbp                                    #87.1
        ret                                        #87.1
Register::Set(unsigned char, bool):
        movzx esi, sil                             #40.3
        cmp esi, 31                                #41.5
        ja ..B2.67              # Prob 50%         #41.5
        jmp QWORD PTR [.2.10_2.switchtab.3+rsi*8]  #41.5
..1.1_0.TAG.31.0.1:
        movzx eax, BYTE PTR [3+rdi]                #74.16
        shl edx, 7                                 #74.16
        and eax, 127                               #74.16
        or eax, edx                                #74.16
        mov BYTE PTR [3+rdi], al                   #74.16
        jmp ..B2.67             # Prob 100%        #74.16
..1.1_0.TAG.30.0.1:
        and edx, 1                                 #73.16
        movzx eax, BYTE PTR [3+rdi]                #73.16
        shl edx, 6                                 #73.16
        and eax, -65                               #73.16
        or eax, edx                                #73.16
        mov BYTE PTR [3+rdi], al                   #73.16
        jmp ..B2.67             # Prob 100%        #73.16
..1.1_0.TAG.29.0.1:
        and edx, 1                                 #72.16
        movzx eax, BYTE PTR [3+rdi]                #72.16
        shl edx, 5                                 #72.16
        and eax, -33                               #72.16
        or eax, edx                                #72.16
        mov BYTE PTR [3+rdi], al                   #72.16
        jmp ..B2.67             # Prob 100%        #72.16
..1.1_0.TAG.28.0.1:
        and edx, 1                                 #71.16
        movzx eax, BYTE PTR [3+rdi]                #71.16
        shl edx, 4                                 #71.16
        and eax, -17                               #71.16
        or eax, edx                                #71.16
        mov BYTE PTR [3+rdi], al                   #71.16
        jmp ..B2.67             # Prob 100%        #71.16
..1.1_0.TAG.27.0.1:
..B2.12:                        # Preds ..B2.2
        and edx, 1                                 #70.16
        movzx eax, BYTE PTR [3+rdi]                #70.16
        shl edx, 3                                 #70.16
        and eax, -9                                #70.16
        or eax, edx                                #70.16
        mov BYTE PTR [3+rdi], al                   #70.16
        jmp ..B2.67             # Prob 100%        #70.16
..1.1_0.TAG.26.0.1:
..B2.14:                        # Preds ..B2.2
        and edx, 1                                 #69.16
        movzx eax, BYTE PTR [3+rdi]                #69.16
        shl edx, 2                                 #69.16
        and eax, -5                                #69.16
        or eax, edx                                #69.16
        mov BYTE PTR [3+rdi], al                   #69.16
        jmp ..B2.67             # Prob 100%        #69.16
..1.1_0.TAG.25.0.1:
..B2.16:                        # Preds ..B2.2
        and edx, 1                                 #68.16
        movzx eax, BYTE PTR [3+rdi]                #68.16
        add edx, edx                               #68.16
        and eax, -3                                #68.16
        or eax, edx                                #68.16
        mov BYTE PTR [3+rdi], al                   #68.16
        jmp ..B2.67             # Prob 100%        #68.16
..1.1_0.TAG.24.0.1:
..B2.18:                        # Preds ..B2.2
        movzx eax, BYTE PTR [3+rdi]                #67.16
        and edx, 1                                 #67.16
        and eax, -2                                #67.16
        or eax, edx                                #67.16
        mov BYTE PTR [3+rdi], al                   #67.16
        jmp ..B2.67             # Prob 100%        #67.16
..1.1_0.TAG.23.0.1:
..B2.20:                        # Preds ..B2.2
        movzx eax, BYTE PTR [2+rdi]                #66.16
        shl edx, 7                                 #66.16
        and eax, 127                               #66.16
        or eax, edx                                #66.16
        mov BYTE PTR [2+rdi], al                   #66.16
        jmp ..B2.67             # Prob 100%        #66.16
..1.1_0.TAG.22.0.1:
..B2.22:                        # Preds ..B2.2
        and edx, 1                                 #65.16
        movzx eax, BYTE PTR [2+rdi]                #65.16
        shl edx, 6                                 #65.16
        and eax, -65                               #65.16
        or eax, edx                                #65.16
        mov BYTE PTR [2+rdi], al                   #65.16
        jmp ..B2.67             # Prob 100%        #65.16
..1.1_0.TAG.21.0.1:
..B2.24:                        # Preds ..B2.2
        and edx, 1                                 #64.16
        movzx eax, BYTE PTR [2+rdi]                #64.16
        shl edx, 5                                 #64.16
        and eax, -33                               #64.16
        or eax, edx                                #64.16
        mov BYTE PTR [2+rdi], al                   #64.16
        jmp ..B2.67             # Prob 100%        #64.16
..1.1_0.TAG.20.0.1:
..B2.26:                        # Preds ..B2.2
        and edx, 1                                 #63.16
        movzx eax, BYTE PTR [2+rdi]                #63.16
        shl edx, 4                                 #63.16
        and eax, -17                               #63.16
        or eax, edx                                #63.16
        mov BYTE PTR [2+rdi], al                   #63.16
        jmp ..B2.67             # Prob 100%        #63.16
..1.1_0.TAG.19.0.1:
..B2.28:                        # Preds ..B2.2
        and edx, 1                                 #62.16
        movzx eax, BYTE PTR [2+rdi]                #62.16
        shl edx, 3                                 #62.16
        and eax, -9                                #62.16
        or eax, edx                                #62.16
        mov BYTE PTR [2+rdi], al                   #62.16
        jmp ..B2.67             # Prob 100%        #62.16
..1.1_0.TAG.18.0.1:
..B2.30:                        # Preds ..B2.2
        and edx, 1                                 #61.16
        movzx eax, BYTE PTR [2+rdi]                #61.16
        shl edx, 2                                 #61.16
        and eax, -5                                #61.16
        or eax, edx                                #61.16
        mov BYTE PTR [2+rdi], al                   #61.16
        jmp ..B2.67             # Prob 100%        #61.16
..1.1_0.TAG.17.0.1:
..B2.32:                        # Preds ..B2.2
        and edx, 1                                 #60.16
        movzx eax, BYTE PTR [2+rdi]                #60.16
        add edx, edx                               #60.16
        and eax, -3                                #60.16
        or eax, edx                                #60.16
        mov BYTE PTR [2+rdi], al                   #60.16
        jmp ..B2.67             # Prob 100%        #60.16
..1.1_0.TAG.16.0.1:
..B2.34:                        # Preds ..B2.2
        movzx eax, BYTE PTR [2+rdi]                #59.16
        and edx, 1                                 #59.16
        and eax, -2                                #59.16
        or eax, edx                                #59.16
        mov BYTE PTR [2+rdi], al                   #59.16
        jmp ..B2.67             # Prob 100%        #59.16
..1.1_0.TAG.15.0.1:
..B2.36:                        # Preds ..B2.2
        movzx eax, BYTE PTR [1+rdi]                #58.16
        shl edx, 7                                 #58.16
        and eax, 127                               #58.16
        or eax, edx                                #58.16
        mov BYTE PTR [1+rdi], al                   #58.16
        jmp ..B2.67             # Prob 100%        #58.16
..1.1_0.TAG.14.0.1:
..B2.38:                        # Preds ..B2.2
        and edx, 1                                 #57.16
        movzx eax, BYTE PTR [1+rdi]                #57.16
        shl edx, 6                                 #57.16
        and eax, -65                               #57.16
        or eax, edx                                #57.16
        mov BYTE PTR [1+rdi], al                   #57.16
        jmp ..B2.67             # Prob 100%        #57.16
..1.1_0.TAG.13.0.1:
..B2.40:                        # Preds ..B2.2
        and edx, 1                                 #56.16
        movzx eax, BYTE PTR [1+rdi]                #56.16
        shl edx, 5                                 #56.16
        and eax, -33                               #56.16
        or eax, edx                                #56.16
        mov BYTE PTR [1+rdi], al                   #56.16
        jmp ..B2.67             # Prob 100%        #56.16
..1.1_0.TAG.12.0.1:
..B2.42:                        # Preds ..B2.2
        and edx, 1                                 #55.16
        movzx eax, BYTE PTR [1+rdi]                #55.16
        shl edx, 4                                 #55.16
        and eax, -17                               #55.16
        or eax, edx                                #55.16
        mov BYTE PTR [1+rdi], al                   #55.16
        jmp ..B2.67             # Prob 100%        #55.16
..1.1_0.TAG.11.0.1:
..B2.44:                        # Preds ..B2.2
        and edx, 1                                 #54.16
        movzx eax, BYTE PTR [1+rdi]                #54.16
        shl edx, 3                                 #54.16
        and eax, -9                                #54.16
        or eax, edx                                #54.16
        mov BYTE PTR [1+rdi], al                   #54.16
        jmp ..B2.67             # Prob 100%        #54.16
..1.1_0.TAG.10.0.1:
..B2.46:                        # Preds ..B2.2
        and edx, 1                                 #53.16
        movzx eax, BYTE PTR [1+rdi]                #53.16
        shl edx, 2                                 #53.16
        and eax, -5                                #53.16
        or eax, edx                                #53.16
        mov BYTE PTR [1+rdi], al                   #53.16
        jmp ..B2.67             # Prob 100%        #53.16
..1.1_0.TAG.9.0.1:
..B2.48:                        # Preds ..B2.2
        and edx, 1                                 #52.15
        movzx eax, BYTE PTR [1+rdi]                #52.15
        add edx, edx                               #52.15
        and eax, -3                                #52.15
        or eax, edx                                #52.15
        mov BYTE PTR [1+rdi], al                   #52.15
        jmp ..B2.67             # Prob 100%        #52.15
..1.1_0.TAG.8.0.1:
..B2.50:                        # Preds ..B2.2
        movzx eax, BYTE PTR [1+rdi]                #51.15
        and edx, 1                                 #51.15
        and eax, -2                                #51.15
        or eax, edx                                #51.15
        mov BYTE PTR [1+rdi], al                   #51.15
        jmp ..B2.67             # Prob 100%        #51.15
..1.1_0.TAG.7.0.1:
..B2.52:                        # Preds ..B2.2
        movzx eax, BYTE PTR [rdi]                  #50.15
        shl edx, 7                                 #50.15
        and eax, 127                               #50.15
        or eax, edx                                #50.15
        mov BYTE PTR [rdi], al                     #50.15
        jmp ..B2.67             # Prob 100%        #50.15
..1.1_0.TAG.6.0.1:
..B2.54:                        # Preds ..B2.2
        and edx, 1                                 #49.15
        movzx eax, BYTE PTR [rdi]                  #49.15
        shl edx, 6                                 #49.15
        and eax, -65                               #49.15
        or eax, edx                                #49.15
        mov BYTE PTR [rdi], al                     #49.15
        jmp ..B2.67             # Prob 100%        #49.15
..1.1_0.TAG.5.0.1:
..B2.56:                        # Preds ..B2.2
        and edx, 1                                 #48.15
        movzx eax, BYTE PTR [rdi]                  #48.15
        shl edx, 5                                 #48.15
        and eax, -33                               #48.15
        or eax, edx                                #48.15
        mov BYTE PTR [rdi], al                     #48.15
        jmp ..B2.67             # Prob 100%        #48.15
..1.1_0.TAG.4.0.1:
..B2.58:                        # Preds ..B2.2
        and edx, 1                                 #47.15
        movzx eax, BYTE PTR [rdi]                  #47.15
        shl edx, 4                                 #47.15
        and eax, -17                               #47.15
        or eax, edx                                #47.15
        mov BYTE PTR [rdi], al                     #47.15
        jmp ..B2.67             # Prob 100%        #47.15
..1.1_0.TAG.3.0.1:
..B2.60:                        # Preds ..B2.2
        and edx, 1                                 #46.15
        movzx eax, BYTE PTR [rdi]                  #46.15
        shl edx, 3                                 #46.15
        and eax, -9                                #46.15
        or eax, edx                                #46.15
        mov BYTE PTR [rdi], al                     #46.15
        jmp ..B2.67             # Prob 100%        #46.15
..1.1_0.TAG.2.0.1:
..B2.62:                        # Preds ..B2.2
        and edx, 1                                 #45.15
        movzx eax, BYTE PTR [rdi]                  #45.15
        shl edx, 2                                 #45.15
        and eax, -5                                #45.15
        or eax, edx                                #45.15
        mov BYTE PTR [rdi], al                     #45.15
        jmp ..B2.67             # Prob 100%        #45.15
..1.1_0.TAG.1.0.1:
..B2.64:                        # Preds ..B2.2
        and edx, 1                                 #44.15
        movzx eax, BYTE PTR [rdi]                  #44.15
        add edx, edx                               #44.15
        and eax, -3                                #44.15
        or eax, edx                                #44.15
        mov BYTE PTR [rdi], al                     #44.15
        jmp ..B2.67             # Prob 100%        #44.15
..1.1_0.TAG.0.0.1:
..B2.66:                        # Preds ..B2.2
        movzx eax, BYTE PTR [rdi]                  #43.15
        and edx, 1                                 #43.15
        and eax, -2                                #43.15
        or eax, edx                                #43.15
        mov BYTE PTR [rdi], al                     #43.15
..B2.67:                        # Preds ..B2.1 ..B2.4 ..B2.6 ..B2.8 ..B2.10
        ret                                        #76.3
.2.10_2.switchtab.3:
        .quad ..1.1_0.TAG.0.0.1
        .quad ..1.1_0.TAG.1.0.1
        .quad ..1.1_0.TAG.2.0.1
        .quad ..1.1_0.TAG.3.0.1
        .quad ..1.1_0.TAG.4.0.1
        .quad ..1.1_0.TAG.5.0.1
        .quad ..1.1_0.TAG.6.0.1
        .quad ..1.1_0.TAG.7.0.1
        .quad ..1.1_0.TAG.8.0.1
        .quad ..1.1_0.TAG.9.0.1
        .quad ..1.1_0.TAG.10.0.1
        .quad ..1.1_0.TAG.11.0.1
        .quad ..1.1_0.TAG.12.0.1
        .quad ..1.1_0.TAG.13.0.1
        .quad ..1.1_0.TAG.14.0.1
        .quad ..1.1_0.TAG.15.0.1
        .quad ..1.1_0.TAG.16.0.1
        .quad ..1.1_0.TAG.17.0.1
        .quad ..1.1_0.TAG.18.0.1
        .quad ..1.1_0.TAG.19.0.1
        .quad ..1.1_0.TAG.20.0.1
        .quad ..1.1_0.TAG.21.0.1
        .quad ..1.1_0.TAG.22.0.1
        .quad ..1.1_0.TAG.23.0.1
        .quad ..1.1_0.TAG.24.0.1
        .quad ..1.1_0.TAG.25.0.1
        .quad ..1.1_0.TAG.26.0.1
        .quad ..1.1_0.TAG.27.0.1
        .quad ..1.1_0.TAG.28.0.1
        .quad ..1.1_0.TAG.29.0.1
        .quad ..1.1_0.TAG.30.0.1
        .quad ..1.1_0.TAG.31.0.1
cFunction(int*):
        or DWORD PTR [rdi], 1073750059             #92.4
        ret                                        #93.1
Caller(Register&, int*):
        push r13                                   #97.1
        push r14                                   #97.1
        push rsi                                   #97.1
        mov r14, rsi                               #97.1
        xor esi, esi                               #81.7
        mov edx, 1                                 #81.7
        mov r13, rdi                               #97.1
        call Register::Set(unsigned char, bool)    #81.7
        mov esi, 1                                 #82.7
        mov rdi, r13                               #82.7
        mov edx, esi                               #82.7
        call Register::Set(unsigned char, bool)    #82.7
        mov rdi, r13                               #83.7
        mov esi, 3                                 #83.7
        mov edx, 1                                 #83.7
        call Register::Set(unsigned char, bool)    #83.7
        or BYTE PTR [r13], 32                      #48.15
        or BYTE PTR [1+r13], 32                    #56.16
        or BYTE PTR [3+r13], 64                    #73.16
        or DWORD PTR [r14], 1073750059             #92.4
        pop rcx                                    #100.1
        pop r14                                    #100.1
        pop r13                                    #100.1
        ret                                        #100.1
The conclusion
So it seems that GCC does a good job on the C code, but not so much on the C++ one.
Clang produces the optimal code for both the C and the C++ versions.
And Intel… well that’s just embarrassing.
You can play with the godbolt comparison here.