I have an algorithm written in Java that I would like to make more efficient. One part that I think could be made more efficient is finding the smallest of 3 numbers. Currently I'm using the Math.min method as below:
double smallest = Math.min(a, Math.min(b, c));
How efficient is this? Would it be more efficient to replace it with if statements like below:
double smallest;
if (a <= b && a <= c) {
    smallest = a;
} else if (b <= c && b <= a) {
    smallest = b;
} else {
    smallest = c;
}
Or is there any other way that is more efficient? I'm wondering whether it is worth changing what I'm currently using; any speed increase would be greatly helpful.
Beware of premature optimization, and then profile your code.
For a lot of utility-type methods, the apache commons libraries have solid implementations that you can either leverage or get additional insight from. In this case, there is a method for finding the smallest of three doubles available in org.apache.commons.lang.math.NumberUtils. Their implementation is actually nearly identical to your initial thought:
public static double min(double a, double b, double c) {
    return Math.min(Math.min(a, b), c);
}
No, it's seriously not worth changing. The sort of improvements you're going to get from fiddling with micro-optimisations like this will not be worth it. Even the method call cost will be removed if the min function is called enough.
If you have a problem with your algorithm, your best bet is to look into macro-optimisations ("big picture" stuff like algorithm selection or tuning) - you'll generally get much better performance improvements there.
And your comment that removing Math.pow gave improvements may well be correct, but that's because it's a relatively expensive operation. Math.min will not even be close to that in terms of cost.
double smallest = a;
if (smallest > b) smallest = b;
if (smallest > c) smallest = c;
Not necessarily faster than your code.
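One subtlety worth flagging (a small sketch of my own, not a claim about speed): because this running-minimum style uses >, it does not treat negative zero the way Math.min does, as discussed in more detail in another answer below:

```java
public class RunningMinDemo {
    public static void main(String[] args) {
        double a = 0.0, b = -0.0, c = 1.0;

        double smallest = a;
        if (smallest > b) smallest = b;   // 0.0 > -0.0 is false, so -0.0 is never taken
        if (smallest > c) smallest = c;

        System.out.println(smallest);                        // 0.0
        System.out.println(Math.min(a, Math.min(b, c)));     // -0.0
    }
}
```

For ordinary values the two approaches agree; the difference only shows up for -0.0 (and NaN, which this style also handles differently).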
Let me first repeat what others already said, by quoting from the article "Structured Programming with go to Statements" by Donald Knuth:
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified.
(emphasis by me)
So if you have identified that a seemingly trivial operation like the computation of the minimum of three numbers is the actual bottleneck (that is, the "critical 3%") in your application, then you may consider optimizing it.
And in this case, this is actually possible: the Math#min(double,double) method in Java has very special semantics:
Returns the smaller of two double values. That is, the result is the value closer to negative infinity. If the arguments have the same value, the result is that same value. If either value is NaN, then the result is NaN. Unlike the numerical comparison operators, this method considers negative zero to be strictly smaller than positive zero. If one argument is positive zero and the other is negative zero, the result is negative zero.
One can have a look at the implementation, and see that it's actually rather complex:
public static double min(double a, double b) {
    if (a != a)
        return a;   // a is NaN
    // negativeZeroDoubleBits is Double.doubleToRawLongBits(-0.0d)
    if ((a == 0.0d) &&
        (b == 0.0d) &&
        (Double.doubleToRawLongBits(b) == negativeZeroDoubleBits)) {
        // Raw conversion ok since NaN can't map to -0.0.
        return b;
    }
    return (a <= b) ? a : b;
}
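The role of doubleToRawLongBits here is that 0.0 and -0.0 compare equal with the numeric operators, so the bit pattern is the only way to tell them apart. A quick sketch (names of my own choosing) to illustrate:

```java
public class MinZeroDemo {
    public static void main(String[] args) {
        // 0.0 and -0.0 compare equal numerically...
        System.out.println(0.0 == -0.0);                                        // true
        // ...but their bit patterns differ, which is what the raw-bits check detects
        System.out.println(Double.doubleToRawLongBits(0.0));                    // 0
        System.out.println(Double.doubleToRawLongBits(-0.0) == Long.MIN_VALUE); // true (only the sign bit is set)
        // Math.min therefore returns -0.0, while a plain ternary keeps 0.0
        System.out.println(Math.min(0.0, -0.0));                                // -0.0
        System.out.println(0.0 <= -0.0 ? 0.0 : -0.0);                           // 0.0
    }
}
```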
Now, it may be important to point out that this behavior is different from a simple comparison. This can easily be examined with the following example:
public class MinExample
{
    public static void main(String[] args)
    {
        test(0.0, 1.0);
        test(1.0, 0.0);
        test(-0.0, 0.0);
        test(Double.NaN, 1.0);
        test(1.0, Double.NaN);
    }

    private static void test(double a, double b)
    {
        double minA = Math.min(a, b);
        double minB = a < b ? a : b;
        System.out.println("a: " + a);
        System.out.println("b: " + b);
        System.out.println("minA " + minA);
        System.out.println("minB " + minB);
        if (Double.doubleToRawLongBits(minA) !=
            Double.doubleToRawLongBits(minB))
        {
            System.out.println(" -> Different results!");
        }
        System.out.println();
    }
}
However: If the treatment of NaN and positive/negative zero is not relevant for your application, you can replace the solution that is based on Math.min with a solution that is based on a simple comparison, and see whether it makes a difference.
This will, of course, be application dependent. Here is a simple, artificial microbenchmark (to be taken with a grain of salt!)
import java.util.Random;
public class MinPerformance
{
    public static void main(String[] args)
    {
        bench();
    }

    private static void bench()
    {
        int runs = 1000;
        for (int size = 10000; size <= 100000; size += 10000)
        {
            Random random = new Random(0);
            double data[] = new double[size];
            for (int i = 0; i < size; i++)
            {
                data[i] = random.nextDouble();
            }
            benchA(data, runs);
            benchB(data, runs);
        }
    }

    private static void benchA(double data[], int runs)
    {
        long before = System.nanoTime();
        double sum = 0;
        for (int r = 0; r < runs; r++)
        {
            for (int i = 0; i < data.length - 3; i++)
            {
                sum += minA(data[i], data[i + 1], data[i + 2]);
            }
        }
        long after = System.nanoTime();
        System.out.println("A: length " + data.length + ", time " + (after - before) / 1e6 + ", result " + sum);
    }

    private static void benchB(double data[], int runs)
    {
        long before = System.nanoTime();
        double sum = 0;
        for (int r = 0; r < runs; r++)
        {
            for (int i = 0; i < data.length - 3; i++)
            {
                sum += minB(data[i], data[i + 1], data[i + 2]);
            }
        }
        long after = System.nanoTime();
        System.out.println("B: length " + data.length + ", time " + (after - before) / 1e6 + ", result " + sum);
    }

    private static double minA(double a, double b, double c)
    {
        return Math.min(a, Math.min(b, c));
    }

    private static double minB(double a, double b, double c)
    {
        if (a < b)
        {
            if (a < c)
            {
                return a;
            }
            return c;
        }
        if (b < c)
        {
            return b;
        }
        return c;
    }
}
(Disclaimer: Microbenchmarking in Java is an art, and for more reliable results, one should consider using JMH or Caliper).
Running this with JRE 1.8.0_31 may result in something like
....
A: length 90000, time 545.929078, result 2.247805342620906E7
B: length 90000, time 441.999193, result 2.247805342620906E7
A: length 100000, time 608.046928, result 2.5032781001456387E7
B: length 100000, time 493.747898, result 2.5032781001456387E7
This at least suggests that it might be possible to squeeze out a few percent here (again, in a very artificial example).
Analyzing this further, by looking at the hotspot disassembly output created with
java -server -XX:+UnlockDiagnosticVMOptions -XX:+TraceClassLoading -XX:+LogCompilation -XX:+PrintAssembly MinPerformance
one can see the optimized versions of both methods, minA and minB.

First, the output for the method that uses Math.min:
Decoding compiled method 0x0000000002992310:
Code:
[Entry Point]
[Verified Entry Point]
[Constants]
# {method} {0x000000001c010910} 'minA' '(DDD)D' in 'MinPerformance'
# parm0: xmm0:xmm0 = double
# parm1: xmm1:xmm1 = double
# parm2: xmm2:xmm2 = double
# [sp+0x60] (sp of caller)
0x0000000002992480: mov %eax,-0x6000(%rsp)
0x0000000002992487: push %rbp
0x0000000002992488: sub $0x50,%rsp
0x000000000299248c: movabs $0x1c010cd0,%rsi
0x0000000002992496: mov 0x8(%rsi),%edi
0x0000000002992499: add $0x8,%edi
0x000000000299249c: mov %edi,0x8(%rsi)
0x000000000299249f: movabs $0x1c010908,%rsi ; {metadata({method} {0x000000001c010910} 'minA' '(DDD)D' in 'MinPerformance')}
0x00000000029924a9: and $0x3ff8,%edi
0x00000000029924af: cmp $0x0,%edi
0x00000000029924b2: je 0x00000000029924e8 ;*dload_0
; - MinPerformance::minA@0 (line 58)
0x00000000029924b8: vmovsd %xmm0,0x38(%rsp)
0x00000000029924be: vmovapd %xmm1,%xmm0
0x00000000029924c2: vmovapd %xmm2,%xmm1 ;*invokestatic min
; - MinPerformance::minA@4 (line 58)
0x00000000029924c6: nop
0x00000000029924c7: callq 0x00000000028c6360 ; OopMap{off=76}
;*invokestatic min
; - MinPerformance::minA@4 (line 58)
; {static_call}
0x00000000029924cc: vmovapd %xmm0,%xmm1 ;*invokestatic min
; - MinPerformance::minA@4 (line 58)
0x00000000029924d0: vmovsd 0x38(%rsp),%xmm0 ;*invokestatic min
; - MinPerformance::minA@7 (line 58)
0x00000000029924d6: nop
0x00000000029924d7: callq 0x00000000028c6360 ; OopMap{off=92}
;*invokestatic min
; - MinPerformance::minA@7 (line 58)
; {static_call}
0x00000000029924dc: add $0x50,%rsp
0x00000000029924e0: pop %rbp
0x00000000029924e1: test %eax,-0x27623e7(%rip) # 0x0000000000230100
; {poll_return}
0x00000000029924e7: retq
0x00000000029924e8: mov %rsi,0x8(%rsp)
0x00000000029924ed: movq $0xffffffffffffffff,(%rsp)
0x00000000029924f5: callq 0x000000000297e260 ; OopMap{off=122}
;*synchronization entry
; - MinPerformance::minA@-1 (line 58)
; {runtime_call}
0x00000000029924fa: jmp 0x00000000029924b8
0x00000000029924fc: nop
0x00000000029924fd: nop
0x00000000029924fe: mov 0x298(%r15),%rax
0x0000000002992505: movabs $0x0,%r10
0x000000000299250f: mov %r10,0x298(%r15)
0x0000000002992516: movabs $0x0,%r10
0x0000000002992520: mov %r10,0x2a0(%r15)
0x0000000002992527: add $0x50,%rsp
0x000000000299252b: pop %rbp
0x000000000299252c: jmpq 0x00000000028ec620 ; {runtime_call}
0x0000000002992531: hlt
0x0000000002992532: hlt
0x0000000002992533: hlt
0x0000000002992534: hlt
0x0000000002992535: hlt
0x0000000002992536: hlt
0x0000000002992537: hlt
0x0000000002992538: hlt
0x0000000002992539: hlt
0x000000000299253a: hlt
0x000000000299253b: hlt
0x000000000299253c: hlt
0x000000000299253d: hlt
0x000000000299253e: hlt
0x000000000299253f: hlt
[Stub Code]
0x0000000002992540: nop ; {no_reloc}
0x0000000002992541: nop
0x0000000002992542: nop
0x0000000002992543: nop
0x0000000002992544: nop
0x0000000002992545: movabs $0x0,%rbx ; {static_stub}
0x000000000299254f: jmpq 0x000000000299254f ; {runtime_call}
0x0000000002992554: nop
0x0000000002992555: movabs $0x0,%rbx ; {static_stub}
0x000000000299255f: jmpq 0x000000000299255f ; {runtime_call}
[Exception Handler]
0x0000000002992564: callq 0x000000000297b9e0 ; {runtime_call}
0x0000000002992569: mov %rsp,-0x28(%rsp)
0x000000000299256e: sub $0x80,%rsp
0x0000000002992575: mov %rax,0x78(%rsp)
0x000000000299257a: mov %rcx,0x70(%rsp)
0x000000000299257f: mov %rdx,0x68(%rsp)
0x0000000002992584: mov %rbx,0x60(%rsp)
0x0000000002992589: mov %rbp,0x50(%rsp)
0x000000000299258e: mov %rsi,0x48(%rsp)
0x0000000002992593: mov %rdi,0x40(%rsp)
0x0000000002992598: mov %r8,0x38(%rsp)
0x000000000299259d: mov %r9,0x30(%rsp)
0x00000000029925a2: mov %r10,0x28(%rsp)
0x00000000029925a7: mov %r11,0x20(%rsp)
0x00000000029925ac: mov %r12,0x18(%rsp)
0x00000000029925b1: mov %r13,0x10(%rsp)
0x00000000029925b6: mov %r14,0x8(%rsp)
0x00000000029925bb: mov %r15,(%rsp)
0x00000000029925bf: movabs $0x515db148,%rcx ; {external_word}
0x00000000029925c9: movabs $0x2992569,%rdx ; {internal_word}
0x00000000029925d3: mov %rsp,%r8
0x00000000029925d6: and $0xfffffffffffffff0,%rsp
0x00000000029925da: callq 0x00000000512a9020 ; {runtime_call}
0x00000000029925df: hlt
[Deopt Handler Code]
0x00000000029925e0: movabs $0x29925e0,%r10 ; {section_word}
0x00000000029925ea: push %r10
0x00000000029925ec: jmpq 0x00000000028c7340 ; {runtime_call}
0x00000000029925f1: hlt
0x00000000029925f2: hlt
0x00000000029925f3: hlt
0x00000000029925f4: hlt
0x00000000029925f5: hlt
0x00000000029925f6: hlt
0x00000000029925f7: hlt
One can see that the treatment of special cases involves some effort - compared to the output that uses simple comparisons, which is rather straightforward:
Decoding compiled method 0x0000000002998790:
Code:
[Entry Point]
[Verified Entry Point]
[Constants]
# {method} {0x000000001c0109c0} 'minB' '(DDD)D' in 'MinPerformance'
# parm0: xmm0:xmm0 = double
# parm1: xmm1:xmm1 = double
# parm2: xmm2:xmm2 = double
# [sp+0x20] (sp of caller)
0x00000000029988c0: sub $0x18,%rsp
0x00000000029988c7: mov %rbp,0x10(%rsp) ;*synchronization entry
; - MinPerformance::minB@-1 (line 63)
0x00000000029988cc: vucomisd %xmm0,%xmm1
0x00000000029988d0: ja 0x00000000029988ee ;*ifge
; - MinPerformance::minB@3 (line 63)
0x00000000029988d2: vucomisd %xmm1,%xmm2
0x00000000029988d6: ja 0x00000000029988de ;*ifge
; - MinPerformance::minB@22 (line 71)
0x00000000029988d8: vmovapd %xmm2,%xmm0
0x00000000029988dc: jmp 0x00000000029988e2
0x00000000029988de: vmovapd %xmm1,%xmm0 ;*synchronization entry
; - MinPerformance::minB@-1 (line 63)
0x00000000029988e2: add $0x10,%rsp
0x00000000029988e6: pop %rbp
0x00000000029988e7: test %eax,-0x27688ed(%rip) # 0x0000000000230000
; {poll_return}
0x00000000029988ed: retq
0x00000000029988ee: vucomisd %xmm0,%xmm2
0x00000000029988f2: ja 0x00000000029988e2 ;*ifge
; - MinPerformance::minB@10 (line 65)
0x00000000029988f4: vmovapd %xmm2,%xmm0
0x00000000029988f8: jmp 0x00000000029988e2
0x00000000029988fa: hlt
0x00000000029988fb: hlt
0x00000000029988fc: hlt
0x00000000029988fd: hlt
0x00000000029988fe: hlt
0x00000000029988ff: hlt
[Exception Handler]
[Stub Code]
0x0000000002998900: jmpq 0x00000000028ec920 ; {no_reloc}
[Deopt Handler Code]
0x0000000002998905: callq 0x000000000299890a
0x000000000299890a: subq $0x5,(%rsp)
0x000000000299890f: jmpq 0x00000000028c7340 ; {runtime_call}
0x0000000002998914: hlt
0x0000000002998915: hlt
0x0000000002998916: hlt
0x0000000002998917: hlt
Whether or not there are cases where such an optimization really makes a difference in an application is hard to tell. But at least, the bottom line is:
The Math#min(double,double) method is not the same as a simple comparison, and the treatment of the special cases does not come for free
There are cases where the special case treatment that is done by Math#min is not necessary, and then a comparison-based approach may be more efficient
As already pointed out in other answers: In most cases, the performance difference will not matter. However, for this particular example, one should probably create a utility method min(double,double,double) anyhow, for better convenience and readability, and then it would be easy to do two runs with the different implementations, and see whether it really affects the performance.
(Side note: The integer-type methods, like Math.min(int,int), actually are a simple comparison, so I would expect no difference for these.)
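That expectation is easy to sanity-check: for int there is no NaN and no signed zero, so Math.min(int, int) cannot differ from a plain comparison. A small sketch (class name is mine):

```java
public class IntMinDemo {
    public static void main(String[] args) {
        // For ints, Math.min is documented as just (a <= b) ? a : b;
        // verify the two agree over a small range of pairs
        for (int a = -3; a <= 3; a++) {
            for (int b = -3; b <= 3; b++) {
                if (Math.min(a, b) != ((a <= b) ? a : b)) {
                    throw new AssertionError(a + ", " + b);
                }
            }
        }
        System.out.println("identical for all tested pairs");
    }
}
```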
You can use the ternary operator as follows:
smallest = (a < b) ? ((a < c) ? a : c) : ((b < c) ? b : c);
This takes only one assignment and exactly two comparisons. But I think these statements will not have any measurable effect on execution time; your initial logic will take the same time as mine and everyone else's.
OP's efficient code has a bug when strict < comparisons are used (as in some of the answers below): when a == b, and a (or b) < c, the code will pick c instead of a or b.
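The tie case is easy to reproduce with a small sketch of the strict-< variant (values of my own choosing):

```java
public class MinTieBug {
    public static void main(String[] args) {
        double a = 1.0, b = 1.0, c = 2.0;   // a == b, and both are smaller than c
        double smallest;
        if (a < b && a < c) {               // false: a < b fails on the tie
            smallest = a;
        } else if (b < c && b < a) {        // false: b < a fails on the tie
            smallest = b;
        } else {
            smallest = c;                   // falls through to c
        }
        System.out.println(smallest);       // prints 2.0, not 1.0
    }
}
```

Using <= in both conditions, as in the question, avoids this.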
For those who find this topic much later:
If you have just three values to compare, there is no significant difference. But if you have to find the min of, say, thirty or sixty values, "min" could be easier for anyone to read in the code next year:
int smallest;
smallest = Math.min(a1, a2);
smallest = Math.min(smallest, a3);
smallest = Math.min(smallest, a4);
...
smallest = Math.min(smallest, a37);
But if you are thinking of speed, maybe a better way would be to put the values into a list, and then find the min of that:
List<Integer> mySet = Arrays.asList(a1, a2, a3, ..., a37);
int smallest = Collections.min(mySet);
Would you agree?
I don't see how putting the values into a list and calling Collections.min() is going to help speed. Java's stab at "implicit parallelism/concurrency" is stream.
Math.min uses a simple comparison to do its thing. The only advantage of not using Math.min is to save the extra function calls, but that is a negligible saving.
If you have more than just three numbers, having a minimum method for any number of doubles might be valuable and would look something like:
public static double min(double ... numbers) {
    double min = numbers[0];
    for (int i = 1; i < numbers.length; i++) {
        min = (min <= numbers[i]) ? min : numbers[i];
    }
    return min;
}
For three numbers this is the functional equivalent of Math.min(a, Math.min(b, c)); but you save one method invocation.
Math.min only uses a simple comparison for the integral data types. But for double and float, it also takes care of special cases like NaN.
double smallest;
if (a < b && a < c) {
    smallest = a;
} else if (b < c && b < a) {
    smallest = b;
} else {
    smallest = c;
}
can be improved to:
double smallest;
if (a < b && a < c) {
    smallest = a;
} else if (b < c) {
    smallest = b;
} else {
    smallest = c;
}
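It is not obvious at first glance that dropping the b < a check is safe, so here is a quick brute-force sketch (my own, with arbitrary sample values) comparing the shortened version against the Math.min chain. It deliberately ignores NaN and -0.0, where Math.min behaves specially:

```java
public class MinImprovedCheck {
    // the shortened version from the answer above
    static double min3(double a, double b, double c) {
        if (a < b && a < c) {
            return a;
        } else if (b < c) {
            return b;
        } else {
            return c;
        }
    }

    public static void main(String[] args) {
        double[] values = {-2.0, -1.0, 0.0, 1.0, 2.0};
        for (double a : values)
            for (double b : values)
                for (double c : values)
                    if (min3(a, b, c) != Math.min(a, Math.min(b, c)))
                        throw new AssertionError(a + " " + b + " " + c);
        System.out.println("all 125 triples agree");
    }
}
```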
It all looks OK; your code will be fine, unless you're doing this in a tight loop. I would also consider:
double min;
min = (a < b) ? a : b;
min = (min < c) ? min : c;
If you will call min() around a million times with different a, b and c, then use my method:
There are only two comparisons here. There is no way to calculate it faster :P
public static double min(double a, double b, double c) {
    if (a > b) {          // a is not the min, so it is b or c
        if (b > c) {
            return c;
        } else {
            return b;
        }
    }                     // else the min is a or c
    if (a > c) {
        return c;
    } else {
        return a;
    }
}
For pure characters-of-code efficiency, I can't find anything better than
smallest = a<b&&a<c?a:b<c?b:c;
Use the Arrays.sort() method; the lowest value will be element 0.
In terms of performance, this should not be expensive, since the sort operation is already optimised. It also has the advantage of being concise.
private int min(int ... value) {
    Arrays.sort(value);
    return value[0];
}
Proof of concept
int[] intArr = {12, 5, 6, 9, 44, 28, 1, 4, 18, 2, 66, 13, 1, 33, 74, 12,
                5, 6, 9, 44, 28, 1, 4, 18, 2, 66, 13};

// Sorting approach
long startTime = System.currentTimeMillis();
int minVal = min(intArr);
long endTime = System.currentTimeMillis();
System.out.println("Sorting: Min => " + minVal + " took => " + (endTime - startTime));
System.out.println(startTime + " " + endTime);
System.out.println(" ");

// Scanning approach
minVal = 100;
startTime = System.currentTimeMillis();
for (int val : intArr) {
    if (val < minVal)
        minVal = val;
}
endTime = System.currentTimeMillis();
System.out.println("Iterating: Min => " + minVal + " took => " + (endTime - startTime));
System.out.println(startTime + " " + endTime);
(Screenshot of the benchmark output: https://i.stack.imgur.com/PfDYY.png)
I would use min/max (and not worry otherwise) ... however, here is another "long hand" approach which may or may not be easier for some people to understand. (I would not expect it to be faster or slower than the code in the post.)
int smallest;
if (a < b) {
    if (a > c) {
        smallest = c;
    } else { // a <= c
        smallest = a;
    }
} else { // a >= b
    if (b > c) {
        smallest = c;
    } else { // b <= c
        smallest = b;
    }
}
Just throwing it into the mix.
Note that this is just the side-effecting variant of Abhishek's answer.
Works with an arbitrary number of input values:
public static double min(double... doubles) {
    // Start from positive infinity (rather than Double.MAX_VALUE)
    // so that infinite inputs are also handled correctly
    double m = Double.POSITIVE_INFINITY;
    for (double d : doubles) {
        m = Math.min(m, d);
    }
    return m;
}
Simply use this math function:
System.out.println(Math.min(a, b, c));
You will get the answer in a single line.
There is no three-argument Math.min function in Java. Read the javadocs.
I wouldn't bother with a three-argument min or doing it three numbers at a time - I'd run through the list of hundreds of thousands of numbers once to get the minimum. And, as I said, if you do call it that often, the JIT compiler will almost certainly compile/inline it. That also appears to be the consensus view, given all the other answers as well.