Thread with 10 posts


looping over every pixel one by one is expensive. for relatively simple images used in the GUI (for example, two outlined intersecting rectangles), it is cheaper to compress the images to something akin to run-length encoding, do the operations in compressed form, then uncompress
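a minimal sketch of the idea in C, assuming an invented Run layout (not the format QuickDraw or any real driver used): each row of a 1-bit image becomes a list of runs, and ORing two rows walks the run lists in lockstep instead of visiting every pixel

```c
#include <stdio.h>

/* hypothetical run layout for illustration, not any real system's format */
typedef struct { unsigned char value; int length; } Run;

/* compress one row of pixels into runs; returns the run count */
int compress_row(const unsigned char *pixels, int width, Run *runs) {
    int n = 0;
    for (int x = 0; x < width; x++) {
        if (n > 0 && runs[n - 1].value == pixels[x])
            runs[n - 1].length++;
        else
            runs[n++] = (Run){ pixels[x], 1 };
    }
    return n;
}

/* OR two compressed rows of equal width without decompressing:
 * cost is O(runs), not O(pixels) */
int or_rows(const Run *a, int na, const Run *b, int nb, Run *out) {
    int ia = 0, ib = 0, n = 0;
    int ra = a[0].length, rb = b[0].length;  /* pixels left in current runs */
    while (ia < na && ib < nb) {
        int step = ra < rb ? ra : rb;        /* advance by the shorter remainder */
        unsigned char v = a[ia].value | b[ib].value;
        if (n > 0 && out[n - 1].value == v)
            out[n - 1].length += step;       /* coalesce equal neighbours */
        else
            out[n++] = (Run){ v, step };
        if ((ra -= step) == 0 && ++ia < na) ra = a[ia].length;
        if ((rb -= step) == 0 && ++ib < nb) rb = b[ib].length;
    }
    return n;
}

int main(void) {
    /* two rows that each cross one rectangle: long runs, little detail */
    unsigned char r1[16] = {0,0,1,1,1,1,1,0,0,0,0,0,0,0,0,0};
    unsigned char r2[16] = {0,0,0,0,0,0,0,0,0,1,1,1,1,1,0,0};
    Run a[16], b[16], c[32];
    int na = compress_row(r1, 16, a);
    int nb = compress_row(r2, 16, b);
    int nc = or_rows(a, na, b, nb, c);
    for (int i = 0; i < nc; i++)
        printf("%dx%d ", c[i].length, c[i].value);
    printf("\n");  /* prints: 2x0 5x1 2x0 5x1 2x0 */
    return 0;
}
```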


there are all sorts of interesting implications to this idea. for example, processing time (excepting the compression/decompression stage) would scale not with image size, but with the amount of detail in the image. you would also presumably try to emit compressed images directly
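to illustrate the "directly emit compressed images" part, here is a hedged sketch reusing the invented Run struct from above: the runs for one row of an outlined rectangle are produced straight from its geometry, so the per-row cost stays at a handful of runs however wide the image is (degenerate rectangles with right <= left + 1 are not handled)

```c
#include <stdio.h>

/* same hypothetical run layout as the previous sketch */
typedef struct { unsigned char value; int length; } Run;

/* emit the runs for row y of an image 'width' pixels wide containing a
 * 1-pixel outlined rectangle over columns [left, right], rows [top, bottom];
 * returns the run count -- at most 5, regardless of image size */
int emit_outline_row(int y, int width,
                     int left, int right, int top, int bottom, Run *out) {
    int n = 0;
    if (y < top || y > bottom) {              /* empty row: a single run */
        out[n++] = (Run){ 0, width };
    } else if (y == top || y == bottom) {     /* solid top/bottom edge */
        if (left > 0)          out[n++] = (Run){ 0, left };
        out[n++] = (Run){ 1, right - left + 1 };
        if (right < width - 1) out[n++] = (Run){ 0, width - 1 - right };
    } else {                                  /* two 1-pixel side walls */
        if (left > 0)          out[n++] = (Run){ 0, left };
        out[n++] = (Run){ 1, 1 };
        out[n++] = (Run){ 0, right - left - 1 };
        out[n++] = (Run){ 1, 1 };
        if (right < width - 1) out[n++] = (Run){ 0, width - 1 - right };
    }
    return n;
}

int main(void) {
    Run row[5];
    /* a 10000-pixel-wide row costs the same handful of runs as a 16-pixel one */
    int n = emit_outline_row(5, 10000, 100, 9000, 2, 8, row);
    for (int i = 0; i < n; i++)
        printf("%dx%d ", row[i].length, row[i].value);
    printf("\n");  /* prints: 100x0 1x1 8899x0 1x1 999x0 */
    return 0;
}
```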


i happen to be aware that, historically speaking, the routines in “accelerated” 2D graphics drivers for Windows were also very complex. i don't know what they were doing, but perhaps they did similar tricks. today, of course, 2D graphics routines of this kind are obsolete


with the coming of the 21st century, the “embarrassingly parallel” power of the GPU, a processor designed exclusively for 3D applications, became too big to ignore, and it completely obsoleted this kind of optimised CPU routine, and dedicated hardware, for handling 2D graphics in GUIs


GPUs are funny because, if “operations executed” is your metric of efficiency, they're very inefficient compared to what QuickDraw and the like were doing. and in fact they are “bad” at 2D generally! but they have such absurd throughput that it doesn't matter. the GPU is king now


a funny thing about the GPU being king now is that all modern operating systems need a GPU to render their GUI. there's a software (CPU) simulation of a GPU (WARP) that kicks in on Windows if you don't have a hardware one available. and it is less efficient than the old routines :)
