I'm solving a problem and it involves sorting 10 numbers (int32) very quickly. My application needs to sort 10 numbers millions of times as fast as possible. I'm sampling a data set of billions of elements and every time I need to pick 10 numbers out of it (simplified) and sort them (and make conclusions from the sorted 10 element list).
Currently I'm using insertion sort, but I imagine I could implement a very fast custom sorting algorithm for my specific problem of 10 numbers which would beat insertion sort.
How can I approach this problem?
Plain if statements should work best. Avoid loops.
(Following up on the suggestion of @HelloWorld to look into sorting networks.)
It seems that a 29-comparison/swap network is the fastest way to do a 10-input sort. I used the network discovered by Waksman in 1969 for this example in JavaScript, which should translate directly into C, as it's just a list of if statements, comparisons, and swaps.
function sortNet10(data) { // ten-input sorting network by Waksman, 1969
  var swap;
  if (data[0] > data[5]) { swap = data[0]; data[0] = data[5]; data[5] = swap; }
  if (data[1] > data[6]) { swap = data[1]; data[1] = data[6]; data[6] = swap; }
  if (data[2] > data[7]) { swap = data[2]; data[2] = data[7]; data[7] = swap; }
  if (data[3] > data[8]) { swap = data[3]; data[3] = data[8]; data[8] = swap; }
  if (data[4] > data[9]) { swap = data[4]; data[4] = data[9]; data[9] = swap; }
  if (data[0] > data[3]) { swap = data[0]; data[0] = data[3]; data[3] = swap; }
  if (data[5] > data[8]) { swap = data[5]; data[5] = data[8]; data[8] = swap; }
  if (data[1] > data[4]) { swap = data[1]; data[1] = data[4]; data[4] = swap; }
  if (data[6] > data[9]) { swap = data[6]; data[6] = data[9]; data[9] = swap; }
  if (data[0] > data[2]) { swap = data[0]; data[0] = data[2]; data[2] = swap; }
  if (data[3] > data[6]) { swap = data[3]; data[3] = data[6]; data[6] = swap; }
  if (data[7] > data[9]) { swap = data[7]; data[7] = data[9]; data[9] = swap; }
  if (data[0] > data[1]) { swap = data[0]; data[0] = data[1]; data[1] = swap; }
  if (data[2] > data[4]) { swap = data[2]; data[2] = data[4]; data[4] = swap; }
  if (data[5] > data[7]) { swap = data[5]; data[5] = data[7]; data[7] = swap; }
  if (data[8] > data[9]) { swap = data[8]; data[8] = data[9]; data[9] = swap; }
  if (data[1] > data[2]) { swap = data[1]; data[1] = data[2]; data[2] = swap; }
  if (data[3] > data[5]) { swap = data[3]; data[3] = data[5]; data[5] = swap; }
  if (data[4] > data[6]) { swap = data[4]; data[4] = data[6]; data[6] = swap; }
  if (data[7] > data[8]) { swap = data[7]; data[7] = data[8]; data[8] = swap; }
  if (data[1] > data[3]) { swap = data[1]; data[1] = data[3]; data[3] = swap; }
  if (data[4] > data[7]) { swap = data[4]; data[4] = data[7]; data[7] = swap; }
  if (data[2] > data[5]) { swap = data[2]; data[2] = data[5]; data[5] = swap; }
  if (data[6] > data[8]) { swap = data[6]; data[6] = data[8]; data[8] = swap; }
  if (data[2] > data[3]) { swap = data[2]; data[2] = data[3]; data[3] = swap; }
  if (data[4] > data[5]) { swap = data[4]; data[4] = data[5]; data[5] = swap; }
  if (data[6] > data[7]) { swap = data[6]; data[6] = data[7]; data[7] = swap; }
  if (data[3] > data[4]) { swap = data[3]; data[3] = data[4]; data[4] = swap; }
  if (data[5] > data[6]) { swap = data[5]; data[5] = data[6]; data[6] = swap; }
  return data;
}
document.write(sortNet10([5,7,1,8,4,3,6,9,2,0]));
Here's a graphical representation of the network, divided into independent phases.
https://i.stack.imgur.com/pp7EG.png
To take advantage of parallel processing, the 5-4-3-4-4-4-3-2 grouping can be changed into a 4-4-4-4-4-4-3-2 grouping.
https://i.stack.imgur.com/IcV0u.png
Since you're dealing with a fixed size, take a look at sorting networks. These algorithms have a fixed runtime that is independent of their input, so for your use case they avoid the bookkeeping overhead that general-purpose sorting algorithms carry.
Bitonic sort is one such network. It works best for n <= 32 on a CPU; for larger inputs you could consider moving to a GPU.
By the way, a good page for comparing sorting algorithms is this one (though it's missing bitonic sort):
Use a sorting network that has comparisons in groups of 4, so you can do it in SIMD registers. A pair of packed min/max instructions implements a packed comparator function. Sorry I don't have time right now to look for a page I remember seeing about this, but hopefully searching on SIMD or SSE sorting networks will turn something up.
x86 (since SSE4.1) has packed min and max instructions for vectors of four 32-bit ints. AVX2 (Haswell and later) has the same for 256-bit vectors of eight ints. There are also efficient shuffle instructions.
If you have a lot of independent small sorts, it might be possible to do 4 or 8 sorts in parallel using vectors. Especially if you're choosing elements randomly (so the data to be sorted isn't contiguous in memory anyway), you can avoid shuffles and simply compare in the order you need. Ten registers holding all the data from 4 (with AVX2: 8) lists of 10 ints still leave 6 registers for scratch space.
Vector sorting networks are less efficient if you also need to sort associated data. In that case, the most efficient way seems to be to use a packed-compare to get a mask of which elements changed, and use that mask to blend vectors of (references to) associated data.
What about an unrolled, branch-less selection sort?
#include <iostream>
#include <algorithm>
#include <random>
//return the index of the minimum element in array a
int min(const int * const a) {
int m = a[0];
int indx = 0;
#define TEST(i) (m > a[i]) && (m = a[i], indx = i );
//see http://stackoverflow.com/a/7074042/2140449
TEST(1);
TEST(2);
TEST(3);
TEST(4);
TEST(5);
TEST(6);
TEST(7);
TEST(8);
TEST(9);
#undef TEST
return indx;
}
void sort( int * const a ){
int work[10];
int indx;
#define GET(i) indx = min(a); work[i] = a[indx]; a[indx] = 2147483647;
//get the minimum, copy it to work and set it at max_int in a
GET(0);
GET(1);
GET(2);
GET(3);
GET(4);
GET(5);
GET(6);
GET(7);
GET(8);
GET(9);
#undef GET
#define COPY(i) a[i] = work[i];
//copy back to a
COPY(0);
COPY(1);
COPY(2);
COPY(3);
COPY(4);
COPY(5);
COPY(6);
COPY(7);
COPY(8);
COPY(9);
#undef COPY
}
int main() {
//generating and printing a random array
int a[10] = { 1,2,3,4,5,6,7,8,9,10 };
std::random_device rd;
std::mt19937 g(rd());
std::shuffle( a, a+10, g);
for (int i = 0; i < 10; i++) {
std::cout << a[i] << ' ';
}
std::cout << std::endl;
//sorting and printing again
sort(a);
for (int i = 0; i < 10; i++) {
std::cout << a[i] << ' ';
}
return 0;
}
http://coliru.stacked-crooked.com/a/71e18bc4f7fa18c6
The only relevant lines are the first two #defines.
It uses two arrays and fully rescans the first one ten times, which would normally be a badly implemented selection sort. However, it avoids branches and variable-length loops, which may compensate on modern processors with such a small data set.
Benchmark
I benchmarked it against the sorting network, and my code seems to be slower. However, I then tried removing the unrolling and the copy. Running this code:
#include <iostream>
#include <algorithm>
#include <random>
#include <chrono>
int min(const int * const a, int i) {
int m = a[i];
int indx = i++;
for ( ; i<10; i++)
//see http://stackoverflow.com/a/7074042/2140449
(m > a[i]) && (m = a[i], indx = i );
return indx;
}
void sort( int * const a ){
for (int i = 0; i<9; i++)
std::swap(a[i], a[min(a,i)]); //search only forward
}
void sortNet10(int * const data) { // ten-input sorting network by Waksman, 1969
int swap;
if (data[0] > data[5]) { swap = data[0]; data[0] = data[5]; data[5] = swap; }
if (data[1] > data[6]) { swap = data[1]; data[1] = data[6]; data[6] = swap; }
if (data[2] > data[7]) { swap = data[2]; data[2] = data[7]; data[7] = swap; }
if (data[3] > data[8]) { swap = data[3]; data[3] = data[8]; data[8] = swap; }
if (data[4] > data[9]) { swap = data[4]; data[4] = data[9]; data[9] = swap; }
if (data[0] > data[3]) { swap = data[0]; data[0] = data[3]; data[3] = swap; }
if (data[5] > data[8]) { swap = data[5]; data[5] = data[8]; data[8] = swap; }
if (data[1] > data[4]) { swap = data[1]; data[1] = data[4]; data[4] = swap; }
if (data[6] > data[9]) { swap = data[6]; data[6] = data[9]; data[9] = swap; }
if (data[0] > data[2]) { swap = data[0]; data[0] = data[2]; data[2] = swap; }
if (data[3] > data[6]) { swap = data[3]; data[3] = data[6]; data[6] = swap; }
if (data[7] > data[9]) { swap = data[7]; data[7] = data[9]; data[9] = swap; }
if (data[0] > data[1]) { swap = data[0]; data[0] = data[1]; data[1] = swap; }
if (data[2] > data[4]) { swap = data[2]; data[2] = data[4]; data[4] = swap; }
if (data[5] > data[7]) { swap = data[5]; data[5] = data[7]; data[7] = swap; }
if (data[8] > data[9]) { swap = data[8]; data[8] = data[9]; data[9] = swap; }
if (data[1] > data[2]) { swap = data[1]; data[1] = data[2]; data[2] = swap; }
if (data[3] > data[5]) { swap = data[3]; data[3] = data[5]; data[5] = swap; }
if (data[4] > data[6]) { swap = data[4]; data[4] = data[6]; data[6] = swap; }
if (data[7] > data[8]) { swap = data[7]; data[7] = data[8]; data[8] = swap; }
if (data[1] > data[3]) { swap = data[1]; data[1] = data[3]; data[3] = swap; }
if (data[4] > data[7]) { swap = data[4]; data[4] = data[7]; data[7] = swap; }
if (data[2] > data[5]) { swap = data[2]; data[2] = data[5]; data[5] = swap; }
if (data[6] > data[8]) { swap = data[6]; data[6] = data[8]; data[8] = swap; }
if (data[2] > data[3]) { swap = data[2]; data[2] = data[3]; data[3] = swap; }
if (data[4] > data[5]) { swap = data[4]; data[4] = data[5]; data[5] = swap; }
if (data[6] > data[7]) { swap = data[6]; data[6] = data[7]; data[7] = swap; }
if (data[3] > data[4]) { swap = data[3]; data[3] = data[4]; data[4] = swap; }
if (data[5] > data[6]) { swap = data[5]; data[5] = data[6]; data[6] = swap; }
}
std::chrono::duration<double> benchmark( void(*func)(int * const), const int seed ) {
std::mt19937 g(seed);
int a[10] = {10,11,12,13,14,15,16,17,18,19};
std::chrono::high_resolution_clock::time_point t1, t2;
t1 = std::chrono::high_resolution_clock::now();
for (long i = 0; i < 1e7; i++) {
std::shuffle( a, a+10, g);
func(a);
}
t2 = std::chrono::high_resolution_clock::now();
return std::chrono::duration_cast<std::chrono::duration<double>>(t2 - t1);
}
int main() {
std::random_device rd;
for (int i = 0; i < 10; i++) {
const int seed = rd();
std::cout << "seed = " << seed << std::endl;
std::cout << "sortNet10: " << benchmark(sortNet10, seed).count() << std::endl;
std::cout << "sort: " << benchmark(sort, seed).count() << std::endl;
}
return 0;
}
I am consistently getting better result for the branch-less selection sort compared to the sorting network.
$ gcc -v
gcc version 5.2.0 (GCC)
$ g++ -std=c++11 -Ofast sort.cpp && ./a.out
seed = -1727396418
sortNet10: 2.24137
sort: 2.21828
seed = 2003959850
sortNet10: 2.23914
sort: 2.21641
seed = 1994540383
sortNet10: 2.23782
sort: 2.21778
seed = 1258259982
sortNet10: 2.25199
sort: 2.21801
seed = 1821086932
sortNet10: 2.25535
sort: 2.2173
seed = 412262735
sortNet10: 2.24489
sort: 2.21776
seed = 1059795817
sortNet10: 2.29226
sort: 2.21777
seed = -188551272
sortNet10: 2.23803
sort: 2.22996
seed = 1043757247
sortNet10: 2.2503
sort: 2.23604
seed = -268332483
sortNet10: 2.24455
sort: 2.24304
It seems that for ( ; i<10; i++) (m > a[i]) && (m = a[i], indx = i ); is exceptionally well optimized (short-circuiting is usually a form of branching).
Replacing std::shuffle with for (int n = 0; n < 10; n++) a[n] = g(); halves the execution time, and the network is faster now.
What about std::sort? I tried std::sort as well, but it was performing so badly that I didn't even include it in the benchmark. I guess that with such tiny data sets there is considerable overhead.
The question doesn't say that this is some kind of a web-based application. The one thing that caught my eye was:
I'm sampling a data set of billions of elements and every time I need to pick 10 numbers out of it (simplified) and sort them (and make conclusions from the sorted 10 element list).
As a software and hardware engineer this absolutely screams FPGA to me. I don't know what kind of conclusions you need to draw from the sorted set of numbers or where the data comes from, but I know it would be almost trivial to process somewhere between one hundred million and a billion of these "sort-and-analyze" operations per second. I've done FPGA-assisted DNA sequencing work in the past. It is nearly impossible to beat the massive processing power of FPGAs when the problem is well suited for that type of a solution.
At some level, the only limiting factor becomes how quickly you can shovel data into an FPGA and how quickly you can get it out.
As a point of reference, I designed a high performance real-time image processor that received 32 bit RGB image data at a rate of about 300 million pixels per second. The data streamed through FIR filters, matrix multipliers, lookup tables, spatial edge detection blocks and a number of other operations before coming out the other end. All of this on a relatively small Xilinx Virtex2 FPGA with internal clocking spanning from about 33 MHz to, if I remember correctly, 400 MHz. Oh, yes, it also had a DDR2 controller implementation and ran two banks of DDR2 memory.
An FPGA can output a sorted set of ten 32-bit numbers on every clock transition while operating at hundreds of MHz. There would be a short delay at the start of the operation as the data fills the processing pipeline(s). After that, you should be able to get one result per clock, or more if the processing can be parallelized by replicating the sort-and-analyze pipeline. The solution, in principle, is almost trivial.
The point is: If the application isn't PC-bound and the data stream and processing is "compatible" with an FPGA solution (either stand-alone or as a co-processor card in the machine) there is no way you are going to be able to beat the attainable level of performance with software written in any language, regardless of the algorithm.
I just ran a quick search and found a paper that might be of use to you. It looks like it dates back to 2012; you can do a lot better in performance today (and could even back then). Here it is:
I recently wrote a little class that uses the Bose-Nelson algorithm to generate a sorting network at compile time.
It can be used to create a very fast sort for 10 numbers.
/**
* A Functor class to create a sort for fixed sized arrays/containers with a
* compile time generated Bose-Nelson sorting network.
* \tparam NumElements The number of elements in the array or container to sort.
* \tparam T The element type.
* \tparam Compare A comparator functor class that returns true if lhs < rhs.
*/
template <unsigned NumElements, class Compare = void> class StaticSort
{
template <class A, class C> struct Swap
{
template <class T> inline void s(T &v0, T &v1)
{
T t = Compare()(v0, v1) ? v0 : v1; // Min
v1 = Compare()(v0, v1) ? v1 : v0; // Max
v0 = t;
}
inline Swap(A &a, const int &i0, const int &i1) { s(a[i0], a[i1]); }
};
template <class A> struct Swap <A, void>
{
template <class T> inline void s(T &v0, T &v1)
{
// Explicitly code out the Min and Max to nudge the compiler
// to generate branchless code.
T t = v0 < v1 ? v0 : v1; // Min
v1 = v0 < v1 ? v1 : v0; // Max
v0 = t;
}
inline Swap(A &a, const int &i0, const int &i1) { s(a[i0], a[i1]); }
};
template <class A, class C, int I, int J, int X, int Y> struct PB
{
inline PB(A &a)
{
enum { L = X >> 1, M = (X & 1 ? Y : Y + 1) >> 1, IAddL = I + L, XSubL = X - L };
PB<A, C, I, J, L, M> p0(a);
PB<A, C, IAddL, J + M, XSubL, Y - M> p1(a);
PB<A, C, IAddL, J, XSubL, M> p2(a);
}
};
template <class A, class C, int I, int J> struct PB <A, C, I, J, 1, 1>
{
inline PB(A &a) { Swap<A, C> s(a, I - 1, J - 1); }
};
template <class A, class C, int I, int J> struct PB <A, C, I, J, 1, 2>
{
inline PB(A &a) { Swap<A, C> s0(a, I - 1, J); Swap<A, C> s1(a, I - 1, J - 1); }
};
template <class A, class C, int I, int J> struct PB <A, C, I, J, 2, 1>
{
inline PB(A &a) { Swap<A, C> s0(a, I - 1, J - 1); Swap<A, C> s1(a, I, J - 1); }
};
template <class A, class C, int I, int M, bool Stop = false> struct PS
{
inline PS(A &a)
{
enum { L = M >> 1, IAddL = I + L, MSubL = M - L};
PS<A, C, I, L, (L <= 1)> ps0(a);
PS<A, C, IAddL, MSubL, (MSubL <= 1)> ps1(a);
PB<A, C, I, IAddL, L, MSubL> pb(a);
}
};
template <class A, class C, int I, int M> struct PS <A, C, I, M, true>
{
inline PS(A &a) {}
};
public:
/**
* Sorts the array/container arr.
* \param arr The array/container to be sorted.
*/
template <class Container> inline void operator() (Container &arr) const
{
PS<Container, Compare, 1, NumElements, (NumElements <= 1)> ps(arr);
};
/**
* Sorts the array arr.
* \param arr The array to be sorted.
*/
template <class T> inline void operator() (T *arr) const
{
PS<T*, Compare, 1, NumElements, (NumElements <= 1)> ps(arr);
};
};
#include <iostream>
#include <vector>
int main(int argc, const char * argv[])
{
enum { NumValues = 10 };
// Arrays
{
int rands[NumValues];
for (int i = 0; i < NumValues; ++i) rands[i] = rand() % 100;
std::cout << "Before Sort: \t";
for (int i = 0; i < NumValues; ++i) std::cout << rands[i] << " ";
std::cout << "\n";
StaticSort<NumValues> staticSort;
staticSort(rands);
std::cout << "After Sort: \t";
for (int i = 0; i < NumValues; ++i) std::cout << rands[i] << " ";
std::cout << "\n";
}
std::cout << "\n";
// STL Vector
{
std::vector<int> rands(NumValues);
for (int i = 0; i < NumValues; ++i) rands[i] = rand() % 100;
std::cout << "Before Sort: \t";
for (int i = 0; i < NumValues; ++i) std::cout << rands[i] << " ";
std::cout << "\n";
StaticSort<NumValues> staticSort;
staticSort(rands);
std::cout << "After Sort: \t";
for (int i = 0; i < NumValues; ++i) std::cout << rands[i] << " ";
std::cout << "\n";
}
return 0;
}
Note that instead of an if (compare) swap statement, we explicitly code out ternary operators for min and max. This helps nudge the compiler into generating branchless code.
## Benchmarks
The following benchmarks were compiled with clang -O3 and run on my mid-2012 MacBook Air.
### Sorting random data
Comparing it with DarioP's code, here is the time taken to sort one million 32-bit int arrays of size 10:

Hardcoded Sort Net 10 : 88.774 ms
Templated Bose-Nelson sort 10 : 27.815 ms
Using this templated approach, we can also generate sorting networks at compile time for other numbers of elements.
Time (in milliseconds) to sort 1 million arrays of various sizes. For arrays of size 2, 4, and 8, the times are 1.943 ms, 8.655 ms, and 20.246 ms respectively.
https://i.stack.imgur.com/pR6Jo.png
Credits to Glenn Teitelbaum for the unrolled insertion sort.
Here are the average clocks per sort for small arrays of 6 elements. The benchmark code and examples can be found at this question:
Fastest sort of fixed length 6 int array
Direct call to qsort library function : 326.81
Naive implementation (insertion sort) : 132.98
Insertion Sort (Daniel Stutzbach) : 104.04
Insertion Sort Unrolled : 99.64
Insertion Sort Unrolled (Glenn Teitelbaum) : 81.55
Rank Order : 44.01
Rank Order with registers : 42.40
Sorting Networks (Daniel Stutzbach) : 88.06
Sorting Networks (Paul R) : 31.64
Sorting Networks 12 with Fast Swap : 29.68
Sorting Networks 12 reordered Swap : 28.61
Reordered Sorting Network w/ fast swap : 24.63
Templated Sorting Network (this class) : 25.37
It performs as fast as the fastest example in the question for 6 elements.
### Performance for sorting sorted data
Often, the input arrays may already be sorted or mostly sorted. In such cases, insertion sort can be a better choice.
https://i.stack.imgur.com/yWouL.png
You may want to choose an appropriate sorting algorithm depending on the data.
The code used for the benchmarks can be found here.
The line v1 = v0 < v1 ? v1 : v0; // Max may still branch. In that case it can be replaced with v1 += v0 - t, because if t is v0 then v1 + v0 - t == v1 + v0 - v0 == v1, else t is v1 and v1 + v0 - t == v1 + v0 - v1 == v0.

In practice, the ternaries usually compile to a maxss or minss instruction on modern compilers. But in cases where that does not work, other ways of swapping can be used. :)
Although a network sort has good odds of being fast on small arrays, sometimes you can't beat a properly optimized insertion sort. For example, batch inserting 2 elements at a time:
{
final int a=in[0]<in[1]?in[0]:in[1];
final int b=in[0]<in[1]?in[1]:in[0];
in[0]=a;
in[1]=b;
}
for(int x=2;x<10;x+=2)
{
final int a=in[x]<in[x+1]?in[x]:in[x+1];
final int b=in[x]<in[x+1]?in[x+1]:in[x];
int y= x-1;
while(y>=0&&in[y]>b)
{
in[y+2]= in[y];
--y;
}
in[y+2]=b;
while(y>=0&&in[y]>a)
{
in[y+1]= in[y];
--y;
}
in[y+1]=a;
}
You can fully unroll insertion sort.
To make that easier, recursive templates can be used with no function-call overhead. Since it is already a template, int can be a template parameter as well. This also makes it trivial to code array sizes other than 10.
Note that to sort int x[10] the call is insert_sort<int, 9>::sort(x); since the class uses the index of the last item. This could be wrapped, but that would be more code to read through.
template <class T, int NUM>
class insert_sort;
template <class T>
class insert_sort<T,0>
// Stop template recursion
// Sorting one item is a no operation
{
public:
static void place(T *x) {}
static void sort(T * x) {}
};
template <class T, int NUM>
class insert_sort
// Use template recursion to do insertion sort.
// NUM is the index of the last item, e.g. for x[10] call <9>
{
public:
static void place(T *x)
{
T t1=x[NUM-1];
T t2=x[NUM];
if (t1 > t2)
{
x[NUM-1]=t2;
x[NUM]=t1;
insert_sort<T,NUM-1>::place(x);
}
}
static void sort(T * x)
{
insert_sort<T,NUM-1>::sort(x); // Sort everything before
place(x); // Put this item in
}
};
In my testing this was faster than the sorting network examples.
For reasons similar to those that I described here, the following style of sorting function, such as sort10_iterator() below, should perform well, where the sorting network was taken from here:
template<class IterType>
inline void sort10_iterator(IterType it)
{
#define SORT2(x,y) {if(data##x>data##y)std::swap(data##x,data##y);}
#define DD1(a) auto data##a=*(it+a);
#define DD2(a,b) auto data##a=*(it+a), data##b=*(it+b);
#define CB1(a) *(it+a)=data##a;
#define CB2(a,b) *(it+a)=data##a;*(it+b)=data##b;
DD2(1,4) SORT2(1,4) DD2(7,8) SORT2(7,8) DD2(2,3) SORT2(2,3) DD2(5,6) SORT2(5,6) DD2(0,9) SORT2(0,9)
SORT2(2,5) SORT2(0,7) SORT2(8,9) SORT2(3,6)
SORT2(4,9) SORT2(0,1)
SORT2(0,2) CB1(0) SORT2(6,9) CB1(9) SORT2(3,5) SORT2(4,7) SORT2(1,8)
SORT2(3,4) SORT2(5,8) SORT2(6,7) SORT2(1,2)
SORT2(7,8) CB1(8) SORT2(1,3) CB1(1) SORT2(2,5) SORT2(4,6)
SORT2(2,3) CB1(2) SORT2(6,7) CB1(7) SORT2(4,5)
SORT2(3,4) CB2(3,4) SORT2(5,6) CB2(5,6)
#undef CB1
#undef CB2
#undef DD1
#undef DD2
#undef SORT2
}
To call this function, I passed it a std::vector iterator.
Why swap when you can move? One x86 cache line has enough extra memory for you to do a merge sort.
I would probably insertion sort indexes 0-1, 2-4, 5-6, 7-9 separately, or even better keep those little groups sorted as I do the inserts, so that each insert requires at most one or two shifts.
Then merge 5,6 and 7-9 -> 10-14, merge 0-1 and 2-4 -> 5-9, and finally merge 5-9 and 10-14 -> 0-9
An insertion sort requires on average 29.6 comparisons to sort 10 inputs, with a best case of 9 and a worst case of 45 (given input in reverse order).
A {9,6,1} shellsort will require on average 25.5 comparisons to sort 10 inputs. Best case is 14 comparisons, worst is 34 and sorting a reversed input requires 22.
So using shellsort instead of insertion sort reduces the average case by 14%. Although the best case is increased by 56% the worst case is reduced by 24% which is significant in applications where keeping the worst case performance in check is important. The reverse case is reduced by 51%.
Since you seem to be familiar with insertion sort you can implement the algorithm as a sorting network for {9,6} and then tack on the insertion sort ({1}) after that:
i[0] with i[9] // {9}
i[0] with i[6] // {6}
i[1] with i[7] // {6}
i[2] with i[8] // {6}
i[3] with i[9] // {6}
i[0 ... 9] // insertion sort
The repeated compare-and-swap blocks in the network code could be shortened with a macro along the lines of #define SORTPAIR(data, i1, i2) if (data[i1] > data[i2]) { int swap = data[i1]... }