Mathematician 3
Max(x, y) = floor(ln(e^x + e^y))
So 0.3 ~= 1 - ln(2) = max(1-ln(2), 1-ln(2)), but floor(ln(e^(1-ln(2)) + e^(1-ln(2)))) = floor(ln(2*e^(1-ln(2)))) = floor(ln(2) + (1-ln(2))) = 1?
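For what it's worth, a quick numeric check of that counterexample (a throwaway Python sketch of my own, not something from the thread):

import math

# non-integer counterexample: x = y = 1 - ln(2) ~ 0.307
x = y = 1 - math.log(2)
lse = math.log(math.exp(x) + math.exp(y))  # analytically ln(2) + x = 1 exactly
print(max(x, y), lse)  # true max ~0.307, but the formula lands on ~1.0 before the floor is even taken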
That would be Engineer 2, not Mathematician 3 xD.
Just out of curiosity, what was your idea behind that?
I guess it only works with integers, especially because of the floor function, which is going to give you an integer at the end every time.
Not my idea; I learned it somewhere in college in a statistics class. The idea is that the exponential function grows really fast, so small differences between the variables become extreme differences after exponentiating. The log function then reverses the exponential, but because it grew most for the biggest variable, the result comes back to the max variable, with the other variables contributing only to the decimal part (this is why you need the floor function). I think it is cool because it works for any number of variables, unlike Mathematician 2's, which only works for 2 variables (maybe it can be generalized to more variables, but I don't think it can be done).
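Roughly what I mean, as a small Python sketch (my own naming, just to illustrate the integer case):

import math

def lse_max(values):
    # floor(ln(sum of e^x_i)): recovers the max when the inputs are integers (and there are few enough of them)
    return math.floor(math.log(sum(math.exp(v) for v in values)))

print(lse_max([3, 7, 2]))     # 7: the biggest exponent dominates the sum
print(lse_max([-5, -2, -9]))  # -2: negative integers work too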
For a min function you can use ceiling(-ln(e^-x + e^-y)).
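Same kind of sketch for the min version (again my own naming, keeping the two-input form as written above):

import math

def lse_min(x, y):
    # ceiling(-ln(e^-x + e^-y)): the mirrored trick for the minimum of two integers
    return math.ceil(-math.log(math.exp(-x) + math.exp(-y)))

print(lse_min(3, 7))    # 3
print(lse_min(-5, -2))  # -5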
To be fair, it does seem to work for any two numbers where one is > ln(2), since ln(e^x + e^y) <= ln(2*e^max(x,y)) = max(x,y) + ln(2) for any x, y.
Using the same proof as before we can see that ln(sum_{i in I} e^(x_i)) <= ln(n*e^(max_i x_i)) = max_i(x_i) + ln(n), where n is the number of variables.
So it is only guaranteed to work when ln(n) < 1, i.e. for fewer variables than the base of your log (so at most 2 for ln, since e < 3).
After searching a little, I found the name of the function and its proof: https://en.wikipedia.org/wiki/LogSumExp
Thanks for looking it up :).
I do think the upper bound on that page is wrong though. Incidentally, in the article itself only the lower bound is proven, but in its sources this paper proves what I did in my comment before as well:
For the upper bound it has max + log(n) (Section 2, eq. 4). This lets us construct an example (see the reply to your other comment) to disprove the notion of being able to calculate the max for many integers.
I just remembered where I learned about that function: in this course on convex optimization, which unfortunately I never had the opportunity to finish, but it is really good.
I don't have a mathematical proof, but doing some experimental tests in Excel, using multiple (more than 3) numbers and using negative numbers (including only negative numbers), it works perfectly every time.
Try (100, 100, 100, 100, 100, 101) or 50 ones and a two; those should result in 102 and 4 as the max respectively (instead of the real 101 and 2). I tried using fewer numbers, but the fewer numbers you use, the higher the values have to be (to be exact, the smaller the deviation (%-difference) between the values, the higher the numbers have to be), and WolframAlpha does not like 10^100 values, so I stopped trying.
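If anyone wants to double-check those without Excel or WolframAlpha, here is a throwaway Python check (these values are still small enough for plain floats):

import math

def lse_max(values):
    return math.floor(math.log(sum(math.exp(v) for v in values)))

print(lse_max([100] * 5 + [101]))  # 102, even though the real max is 101
print(lse_max([1] * 50 + [2]))     # 4, even though the real max is 2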