In “De-multiplexing of E1 stream: converting to C” we tried different ways to de-multiplex E1 streams using SSE and AVX. This problem can be described as the transposition of a matrix. In our case the dimensions of the source matrix are 32×64 (32 columns and 64 rows, where 32 is the number of timeslots and 64 is the size of each timeslot’s output). The matrix is assumed to be stored in memory row by row.
The usual way to transpose this matrix is to divide it into small blocks that fit into available registers, and transpose each block separately. We tried this using blocks of size 1×4, 1×8, 4×4, 4×16, 8×16, 4×32 and 8×32. The best results were achieved using 8×16 blocks with SSE (120 ms for 1,000,000 iterations), and 8×32 blocks with AVX (109 ms). The times were measured on the Intel® Xeon® CPU E5-2670 @ 2.60GHz.
In the same article I briefly mentioned an attempt to use 16×16 blocks. I didn’t go into the details then, to keep the article short. Now I’ve changed my mind: looking at the code later, I found it rather beautiful. So today we’ll be transposing a 16×16 byte matrix.
We’ll assume that the matrix is represented as 16 SSE variables (of type __m128i), where each variable keeps one row (16 bytes). The result must be stored the same way.
While designing high-performance matrix code using SSE, we developed two utility functions. The first one treats a single SSE value as a 4×4 byte matrix and transposes it:
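The original listing is not reproduced here; below is a sketch of what such a function can look like, assuming SSSE3 is available (the byte shuffle intrinsic _mm_shuffle_epi8 is the natural tool for this):

```cpp
// A sketch only; requires SSSE3 (compile with -mssse3 on GCC/Clang).
#if defined(__GNUC__) && !defined(__SSSE3__)
#pragma GCC target ("ssse3")
#endif
#include <emmintrin.h>
#include <tmmintrin.h>

// View the 16 bytes of m as a row-major 4x4 matrix and transpose it:
// byte (r,c) of the result is byte (c,r) of the source.
static inline __m128i transpose_4x4 (__m128i m)
{
    return _mm_shuffle_epi8 (m, _mm_setr_epi8 (0, 4, 8, 12,
                                               1, 5, 9, 13,
                                               2, 6, 10, 14,
                                               3, 7, 11, 15));
}
```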
The second one takes four SSE variables, interprets them as a 4×4 matrix of double-words (4-byte values) and transposes this matrix:
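Again, a sketch rather than the original code: the same transposition can be built from the standard unpack intrinsics (the article’s version is written with its _128i_shuffle macro instead):

```cpp
#include <emmintrin.h>

// Transpose a 4x4 matrix of double-words kept in four SSE registers:
// r0..r3 receive the 0th..3rd double-word of each of w0..w3.
static inline void transpose_4x4_dwords (__m128i w0, __m128i w1, __m128i w2, __m128i w3,
                                         __m128i &r0, __m128i &r1, __m128i &r2, __m128i &r3)
{
    __m128i a = _mm_unpacklo_epi32 (w0, w1); // w0[0] w1[0] w0[1] w1[1]
    __m128i b = _mm_unpacklo_epi32 (w2, w3); // w2[0] w3[0] w2[1] w3[1]
    __m128i c = _mm_unpackhi_epi32 (w0, w1); // w0[2] w1[2] w0[3] w1[3]
    __m128i d = _mm_unpackhi_epi32 (w2, w3); // w2[2] w3[2] w2[3] w3[3]
    r0 = _mm_unpacklo_epi64 (a, b);
    r1 = _mm_unpackhi_epi64 (a, b);
    r2 = _mm_unpacklo_epi64 (c, d);
    r3 = _mm_unpackhi_epi64 (c, d);
}
```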
_128i_shuffle is our own macro defined in sse.h. It is a more convenient version of the standard SSE shuffle intrinsic.
Now here is the plan: we’ll apply the second procedure to four groups of four variables, each group representing a 16×4 strip; then apply the first procedure to all 16 resulting variables, each holding a 4×4 block; and then apply the second procedure again to four groups of four registers, this time collected along the columns. And that’s it, the job is done.
It is easy to see why this works. Let’s assume that the matrix contains hex values from 00 to FF, stored row by row. Look at the first four rows of the matrix, stored in four SSE registers. We want to collect the leftmost 4×4 submatrix (bytes 00 to 33) into one SSE variable, the next one (bytes 04 to 37) into the next one, and so on. The first sub-matrix consists of the first double-words of all the source values, the second one of the second double-words, and so on. In short, this procedure is in fact a transposition of a 4×4 double-word matrix, and a single call to transpose_4x4_dwords does the job. As a result, each output variable holds one 4×4 block of the strip, stored row by row.
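Since the diagrams from the original article are not reproduced here, a small self-contained check of this claim may help. The function below is my own illustration, built on unpack intrinsics rather than the article’s _128i_shuffle macro:

```cpp
#include <emmintrin.h>

// Interpret the four 16-byte rows of a 16x4 strip as a 4x4 double-word matrix
// and transpose it. out receives the four 4x4 sub-matrices of the strip,
// each stored row by row.
void collect_4x4_blocks (const unsigned char rows[4][16], unsigned char out[64])
{
    __m128i x0 = _mm_loadu_si128 ((const __m128i*) rows[0]);
    __m128i x1 = _mm_loadu_si128 ((const __m128i*) rows[1]);
    __m128i x2 = _mm_loadu_si128 ((const __m128i*) rows[2]);
    __m128i x3 = _mm_loadu_si128 ((const __m128i*) rows[3]);

    __m128i a = _mm_unpacklo_epi32 (x0, x1); // low dwords of x0, x1 interleaved
    __m128i b = _mm_unpacklo_epi32 (x2, x3);
    __m128i c = _mm_unpackhi_epi32 (x0, x1); // high dwords of x0, x1 interleaved
    __m128i d = _mm_unpackhi_epi32 (x2, x3);

    _mm_storeu_si128 ((__m128i*) (out + 0),  _mm_unpacklo_epi64 (a, b)); // bytes 00..33
    _mm_storeu_si128 ((__m128i*) (out + 16), _mm_unpackhi_epi64 (a, b)); // bytes 04..37
    _mm_storeu_si128 ((__m128i*) (out + 32), _mm_unpacklo_epi64 (c, d)); // bytes 08..3B
    _mm_storeu_si128 ((__m128i*) (out + 48), _mm_unpackhi_epi64 (c, d)); // bytes 0C..3F
}
```

Feeding it the first four rows of the example matrix (values 00 to 3F), the first 16 output bytes come out as 00 01 02 03 10 11 12 13 20 21 22 23 30 31 32 33 – exactly the leftmost 4×4 sub-matrix, row by row.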
After we run this procedure for all four 16×4 strips, we’ll have the entire source matrix stored in 16 SSE variables, one 4×4 block in each. We named the new variables w00 to w33 according to the positions of their blocks in the matrix: the first index is the block row, the second the block column.
The next step is to transpose each of the variables w00 to w33 as a 4×4 byte matrix. The transpose_4x4 procedure will do the job.
Now we are ready to collect the results. We want to put the columns of the original 16×16 matrix back into the variables x0 to x15. Before the transposition of w00 to w33, the first column of the source matrix consisted of the first columns of w00, w10, w20 and w30, the second one of their second columns, and so on.
After the transposition we must use their rows instead, which are exactly the double-word components of these variables. This means that we need to transpose a double-word matrix again.
Before this operation w00, w10, w20 and w30 contain the four transposed blocks of the first block column; transposing them as a double-word matrix produces four variables that contain the first four columns of the original matrix. If we perform this operation on the other groups of four variables, we’ll get the entire matrix.
Here is the code:
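What follows is a reconstruction from the description above rather than the original listing: the 4×4 byte transposition is done with _mm_shuffle_epi8, and the double-word transposition with unpack intrinsics in place of the _128i_shuffle macro. It requires SSSE3 (-mssse3 on GCC/Clang).

```cpp
#if defined(__GNUC__) && !defined(__SSSE3__)
#pragma GCC target ("ssse3")
#endif
#include <emmintrin.h>
#include <tmmintrin.h>

// Transpose one SSE register viewed as a row-major 4x4 byte matrix.
static inline __m128i transpose_4x4 (__m128i m)
{
    return _mm_shuffle_epi8 (m, _mm_setr_epi8 (0, 4, 8, 12, 1, 5, 9, 13,
                                               2, 6, 10, 14, 3, 7, 11, 15));
}

// Transpose a 4x4 double-word matrix kept in four SSE registers.
static inline void transpose_4x4_dwords (__m128i w0, __m128i w1, __m128i w2, __m128i w3,
                                         __m128i &r0, __m128i &r1, __m128i &r2, __m128i &r3)
{
    __m128i a = _mm_unpacklo_epi32 (w0, w1);
    __m128i b = _mm_unpacklo_epi32 (w2, w3);
    __m128i c = _mm_unpackhi_epi32 (w0, w1);
    __m128i d = _mm_unpackhi_epi32 (w2, w3);
    r0 = _mm_unpacklo_epi64 (a, b);
    r1 = _mm_unpackhi_epi64 (a, b);
    r2 = _mm_unpacklo_epi64 (c, d);
    r3 = _mm_unpackhi_epi64 (c, d);
}

// Transpose a 16x16 byte matrix kept in 16 SSE registers, in place.
inline void transpose_16x16 (
    __m128i &x0,  __m128i &x1,  __m128i &x2,  __m128i &x3,
    __m128i &x4,  __m128i &x5,  __m128i &x6,  __m128i &x7,
    __m128i &x8,  __m128i &x9,  __m128i &x10, __m128i &x11,
    __m128i &x12, __m128i &x13, __m128i &x14, __m128i &x15)
{
    __m128i w00, w01, w02, w03, w10, w11, w12, w13;
    __m128i w20, w21, w22, w23, w30, w31, w32, w33;

    // Step 1: collect the 4x4 blocks; wRC holds the block at
    // block-row R, block-column C, stored row by row.
    transpose_4x4_dwords (x0,  x1,  x2,  x3,  w00, w01, w02, w03);
    transpose_4x4_dwords (x4,  x5,  x6,  x7,  w10, w11, w12, w13);
    transpose_4x4_dwords (x8,  x9,  x10, x11, w20, w21, w22, w23);
    transpose_4x4_dwords (x12, x13, x14, x15, w30, w31, w32, w33);

    // Step 2: transpose each block in its register.
    w00 = transpose_4x4 (w00); w01 = transpose_4x4 (w01);
    w02 = transpose_4x4 (w02); w03 = transpose_4x4 (w03);
    w10 = transpose_4x4 (w10); w11 = transpose_4x4 (w11);
    w12 = transpose_4x4 (w12); w13 = transpose_4x4 (w13);
    w20 = transpose_4x4 (w20); w21 = transpose_4x4 (w21);
    w22 = transpose_4x4 (w22); w23 = transpose_4x4 (w23);
    w30 = transpose_4x4 (w30); w31 = transpose_4x4 (w31);
    w32 = transpose_4x4 (w32); w33 = transpose_4x4 (w33);

    // Step 3: collect the block columns back into rows of the result.
    transpose_4x4_dwords (w00, w10, w20, w30, x0,  x1,  x2,  x3);
    transpose_4x4_dwords (w01, w11, w21, w31, x4,  x5,  x6,  x7);
    transpose_4x4_dwords (w02, w12, w22, w32, x8,  x9,  x10, x11);
    transpose_4x4_dwords (w03, w13, w23, w33, x12, x13, x14, x15);
}
```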
De-multiplexing of the E1 stream using the 16×16 transposition
This is what the de-multiplexing procedure looks like:
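The original procedure is not reproduced here; the sketch below shows how it might look under stated assumptions: the layout is the 32×64 matrix from the beginning of the article (source stored frame by frame, one 64-byte output buffer per timeslot), the constant names NUM_TIMESLOTS and DST_SIZE are my own, and transpose_16x16 is written in an array-based form for brevity (the article’s function takes 16 individual reference parameters instead).

```cpp
// A sketch only; requires SSSE3 (compile with -mssse3 on GCC/Clang).
#if defined(__GNUC__) && !defined(__SSSE3__)
#pragma GCC target ("ssse3")
#endif
#include <emmintrin.h>
#include <tmmintrin.h>
#include <cstddef>

enum { NUM_TIMESLOTS = 32, DST_SIZE = 64 };

// Transpose one SSE register viewed as a row-major 4x4 byte matrix.
static inline __m128i transpose_4x4 (__m128i m)
{
    return _mm_shuffle_epi8 (m, _mm_setr_epi8 (0, 4, 8, 12, 1, 5, 9, 13,
                                               2, 6, 10, 14, 3, 7, 11, 15));
}

// Transpose a 4x4 double-word matrix kept in four SSE registers.
static inline void transpose_4x4_dwords (__m128i w0, __m128i w1, __m128i w2, __m128i w3,
                                         __m128i &r0, __m128i &r1, __m128i &r2, __m128i &r3)
{
    __m128i a = _mm_unpacklo_epi32 (w0, w1);
    __m128i b = _mm_unpacklo_epi32 (w2, w3);
    __m128i c = _mm_unpackhi_epi32 (w0, w1);
    __m128i d = _mm_unpackhi_epi32 (w2, w3);
    r0 = _mm_unpacklo_epi64 (a, b);
    r1 = _mm_unpackhi_epi64 (a, b);
    r2 = _mm_unpacklo_epi64 (c, d);
    r3 = _mm_unpackhi_epi64 (c, d);
}

// Array-based 16x16 transposition; w[4*r + c] holds the block at
// block-row r, block-column c.
static inline void transpose_16x16 (__m128i x[16])
{
    __m128i w[16];
    for (int r = 0; r < 4; r++)
        transpose_4x4_dwords (x[4*r], x[4*r+1], x[4*r+2], x[4*r+3],
                              w[4*r], w[4*r+1], w[4*r+2], w[4*r+3]);
    for (int i = 0; i < 16; i++)
        w[i] = transpose_4x4 (w[i]);
    for (int c = 0; c < 4; c++)
        transpose_4x4_dwords (w[c], w[4+c], w[8+c], w[12+c],
                              x[4*c], x[4*c+1], x[4*c+2], x[4*c+3]);
}

// De-multiplex DST_SIZE frames of NUM_TIMESLOTS bytes each:
// dst[t] receives the DST_SIZE bytes of timeslot t.
void demux_16x16 (const unsigned char *src, unsigned char **dst)
{
    for (size_t j = 0; j < NUM_TIMESLOTS; j += 16)   // block column (timeslots)
        for (size_t i = 0; i < DST_SIZE; i += 16) {  // block row (frames)
            __m128i x[16];
            for (size_t k = 0; k < 16; k++)
                x[k] = _mm_loadu_si128 ((const __m128i*) (src + (i + k) * NUM_TIMESLOTS + j));
            transpose_16x16 (x);
            for (size_t k = 0; k < 16; k++)
                _mm_storeu_si128 ((__m128i*) (dst[j + k] + i), x[k]);
        }
}
```

Note that the sketch uses unaligned loads and stores (_mm_loadu_si128, _mm_storeu_si128); with buffers guaranteed to be 16-byte aligned, the aligned variants could be used instead.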
At first sight this code may look very inefficient. The transpose_16x16 function is called with 16 parameters passed by reference. In the general case this requires all the variables to be stored in memory and their addresses to be passed into the function. The same applies to the calls to transpose_4x4_dwords, which returns its results via the reference parameters. If those calls aren’t inlined, this is exactly what will happen, so for this code to be efficient, inlining is absolutely essential.
If all the calls are inlined, the functions act as safe macros, and all the references are optimised out. There is no need to store all the variables in memory then. The code may end up relatively efficient, although some performance penalties are inevitable.
First of all, no matter how the compiler reorders the instructions, there is always a place in the code with at least 16 live SSE variables. In addition, some registers are needed for temporary values. Since SSE only has 16 registers, there is a definite register shortage here, and some values will end up in memory.
The same applies to the destination pointers d0 to d15, which are all live inside the inner loop. The processor has only 16 general-purpose registers, so there is a shortage here as well. The impact of this shortage may depend on the compiler and the processor in use.
On our test machine this method takes 176 ms, which is a good result, but not the best one: the best time (109 ms) was achieved using 8×32 blocks and AVX instructions.
I also tried it on my notebook (Intel(R) Core(TM) i7-4800MQ CPU @2.70 GHz), using Microsoft Visual C++ 2013 in 64-bit mode. This is what I got:
>e1-new.exe
class Reference: 1991
class Write4: 578
class Write8: 545
class Read4_Write4: 685
class Read4_Write4_Unroll: 649
class Read4_Write4_SSE: 317
class Read4_Write16_SSE: 235
class Read8_Write16_SSE: 236
class Read8_Write16_SSE_Unroll: 236
class Read16_Write16_SSE: 208
class Read16_Write16_SSE_Unroll: 204
class Read4_Write32_AVX: 275
class Read8_Write32_AVX: 260
class Read8_Write32_AVX_Unroll: 256
We can see that on Windows our code is the fastest.
This is why we mustn’t discard it – it can still be useful.
Besides, it allows easy adaptation for AVX2 once it becomes available for testing. With AVX2 we can keep two rows of the matrix in one AVX register, so the entire matrix will fit into 8 registers. We need AVX2 for that, because AVX does not have byte-manipulation instructions for 256-bit registers (the 256-bit byte shuffle only appeared in AVX2).
The code for this method is quite beautiful (perhaps, the most beautiful of all).
However, the most beautiful isn’t always the fastest. But sometimes it is.
Different methods may be the best for different execution environments. Perhaps, the correct thing to do is to keep several versions and choose the best one at run time – either by means of static configuration, or by automatic discovery.
This method is still not unrolled; I’ll do that in the next article.