- cross-posted to:
  - science_memes@mander.xyz
Stolen/cross-posted from here: https://fosstodon.org/@foo/113731569632505985
Okay. I really laughed out loud at that one
The downvote is from someone who doesn’t understand floating point notation
Technically, floating point imitates irrational and whole numbers as well. Not all numbers, though; you’d need a more, uhm… elaborate structure to represent complex numbers, surreal numbers, vectors, matrices, and so on.
It does not even imitate all rationals. For example, 1/3.
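For anyone curious, here’s a quick check in Python (assuming ordinary double-precision floats; `Fraction` just reads back the exact value the float actually stores):

```python
from fractions import Fraction

x = 1 / 3
# A binary float can only hold values of the form m / 2**e,
# so 1/3 gets rounded to the nearest such fraction.
print(Fraction(x))    # 6004799503160661/18014398509481984
print(f"{x:.20f}")    # 0.33333333333333331483
```

The stored value is the closest binary fraction to 1/3, not 1/3 itself.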
… Surreal numbers?!?
IEEE 754 is my favourite IEEE standard. 754 gang
Has anyone ever come across 8- or 16-bit floats? What were they used for?
Neural net evaluation mainly, but FP16 is used in graphics too.
I found this: https://en.wikipedia.org/wiki/Minifloat
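If you want to poke at one, NumPy exposes IEEE 754 binary16 as `np.float16`; here’s a small sketch (the values are just picked to show how coarse it is):

```python
import numpy as np

# 0.1 gets rounded to the nearest half-precision value.
print(float(np.float16(0.1)))            # 0.0999755859375

# Above 2048, consecutive half-precision values are 2 apart,
# so adding 1 does nothing.
print(np.float16(2048) + np.float16(1))  # 2048.0

# Machine epsilon: roughly 3 decimal digits of precision.
print(np.finfo(np.float16).eps)          # 0.000977
```

With only 10 mantissa bits, that is plenty for neural-net weights or colour data, but not much else.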
Actually, you can consider RGB values to be (triplets of) floats, too.
Typically, one pixel takes up to 32 bits of space, encoding Red, Green, Blue, and sometimes Alpha (opacity) values. That works out to 8 bits per color channel.
Since each color can be a value between 0.0 (color is off) and 1.0 (color is on), that means every color channel is effectively an 8-bit float.
Pretty sure what you’re describing isn’t floating-point numbers but fixed-point numbers… which would also work just as well or better in most cases where floats are used.
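A sketch of the difference, with made-up example values: an 8-bit channel is just an integer in 0..255 mapped linearly onto [0.0, 1.0], i.e. fixed steps of 1/255 and no exponent anywhere.

```python
# Hypothetical 8-bit colour channel: an unsigned integer in 0..255.
channel = 200

# Fixed-point reading: evenly spaced steps of 1/255 across [0.0, 1.0].
intensity = channel / 255.0
print(intensity)   # 0.7843137254901961

# A float, by contrast, spends bits on an exponent, so its values are
# packed densely near 0.0 and spread out near 1.0, which is wasted
# flexibility for a channel that only ever needs uniform steps in [0, 1].
```

That uniform spacing is exactly why fixed point is such a good fit for colour data.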
Aren’t they fractions rather than floating point decimals?