In 2024, with GNOME 45, Wayland, and 1.25 fractional scaling, regular DPI displays still look better than HiDPI displays. This is a photo of Discord on two laptops side by side.
The blurry one is the HiDPI display of a Framework 13; the sharp one is the regular-DPI display of a Dell XPS 13. Both are laptops.
The difference is even more stark in person.
Even the screenshots from the Framework are blurrier than the screenshots from the Dell.
Even “real” fractional scaling in Plasma with Qt 6 is not much better. Text looks slightly sharper, but icons are still blurry. There is no way for them to look sharp at 1.25 scaling, since they are drawn with a pixel grid in mind, unless you invent some way to stretch SVGs so that their individual elements and the spaces between them keep their integer dimensions while the whole image scales fractionally.
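To make the pixel-grid point concrete, here is a tiny sketch of my own (an illustration, not code from Qt or GNOME) of where a hairline that is one logical pixel wide lands in device pixels at different scale factors. At 1× and 2× every device pixel is either fully covered or untouched, so the edge stays sharp; at 1.25× one device pixel ends up only partially covered and has to be anti-aliased, which reads as blur.

```python
# Sketch: per-device-pixel coverage of a 1-logical-pixel-wide hairline.
# This is a toy model of scaling, not the actual compositor/toolkit code.
import math

def coverage(logical_start, logical_width, scale):
    """Return {device_pixel: covered fraction} for a 1-D logical span."""
    dev_start = logical_start * scale
    dev_end = (logical_start + logical_width) * scale
    cov = {}
    for px in range(math.floor(dev_start), math.ceil(dev_end)):
        overlap = min(dev_end, px + 1) - max(dev_start, px)
        cov[px] = round(overlap, 2)
    return cov

for scale in (1.0, 2.0, 1.25):
    # a hairline one logical pixel wide, starting at logical x = 3
    print(scale, coverage(3, 1, scale))
# 1.0  -> {3: 1.0}             fully covered pixel: sharp
# 2.0  -> {6: 1.0, 7: 1.0}     still integer-aligned: sharp
# 1.25 -> {3: 0.25, 4: 1.0}    partial coverage -> anti-aliasing -> blur
```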
The only other solution is monitors with 300+ PPI where blurriness is simply not noticeable (that’s the way Apple went).
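For scale, here is a quick PPI calculation. The panel numbers are my assumptions for illustration (the stock Framework 13 panel and a common XPS 13 FHD+ option), not figures from the post; substitute your own panel's specs.

```python
# PPI = diagonal resolution in pixels / diagonal size in inches.
# Panel specs below are assumptions for illustration only.
from math import hypot

def ppi(width_px, height_px, diagonal_in):
    return hypot(width_px, height_px) / diagonal_in

print(round(ppi(2256, 1504, 13.5)))  # ~201 PPI -- HiDPI, but well short of 300
print(round(ppi(1920, 1200, 13.4)))  # ~169 PPI -- "regular" DPI
```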
What bugs me is that we have FSR and DLSS and all these cutting-edge scaling techniques for 3D games, but we’re stuck fighting pixels on the desktop, I guess.
FSR and DLSS work well if you have a lot of pixels to work with, but they get drastically worse the fewer pixels you have.
Both also struggle with text.
They’d be completely unusable for a lot of typical computing.
I’m not sure I understand this; I use FSR to scale from 480p to 1080p, which I thought was the intent: render a small image and then fill in information to make it closer to native resolution?
But yes, it definitely struggles with text. I wouldn’t expect to apply existing solutions and have it all just work; it would be more like something specialized for text and desktops, using tensor cores or whatever.
I’m ultimately just frustrated that we live in a time with the tech to generate an image of a potato bug juggling flaming swords, while simultaneously failing to deliver a good UI experience on HiDPI displays, which are becoming more and more common.