WAIT
sdl2's opengl context sharing maybe actually works, i was just using it wrong?
oh my god 😭
i need to verify that i can actually do something useful with this, not just that sdl2 says the context was created successfully, but… 🤞🤞🤞
this is of course about touchHLE, which really needs the ability to share framebuffers between opengl contexts in some form, but which i've never been able to get working until now, so i've had to resort to incredibly horrible workarounds that'll only get worse in future
Core Animation here we come. i hope
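for anyone trying the same thing: the part you can easily get wrong (and probably what I was getting wrong) is that SDL_GL_SHARE_WITH_CURRENT_CONTEXT has to be set, and the first context has to be current, *before* the second context is created. a minimal sketch, not the actual touchHLE code:

```c
/* minimal sketch of SDL2 GL context sharing (illustration only):
   set SDL_GL_SHARE_WITH_CURRENT_CONTEXT while the first context is
   current, then create the second context. */
#include <SDL.h>

int main(void) {
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *window = SDL_CreateWindow("share test",
        SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
        640, 480, SDL_WINDOW_OPENGL);

    /* first context: created and made current as usual */
    SDL_GLContext first = SDL_GL_CreateContext(window);
    SDL_GL_MakeCurrent(window, first);

    /* ask SDL to share the *currently current* context with the next one */
    SDL_GL_SetAttribute(SDL_GL_SHARE_WITH_CURRENT_CONTEXT, 1);
    SDL_GLContext second = SDL_GL_CreateContext(window);

    /* textures (and other objects) created in either context should now
       be visible from the other */

    SDL_GL_DeleteContext(second);
    SDL_GL_DeleteContext(first);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}
```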
hell yeah I can share a texture between contexts!!
wait… wait why is the texture fucked up the first time it's rendered… in both contexts… what the fuck
i didn't realise the drivers were this buggy 😱
- first frame, drawn with first context, which creates the “GL” texture
- second frame, drawn with the second context, which uses the same texture
- third frame, drawn with the first context, same texture again
- fourth frame, drawn with the second context, same texture again
hmm, could this just be a bug related to GL_QUADS and not actually a problem with the shared texture? 🤔
(yes I know that's legacy, but so is my whole stack and this is a prototype in any case)
no wait lol it's my fault, I was calling glVertex() before glTexCoord() rather than after, phew
(yes I know, I'd never use glBegin()/glEnd() normally but they're extremely convenient for prototyping)
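i.e. in immediate mode, glVertex*() latches whatever the current texcoord is at that moment, so the texcoord has to be set first. not my actual experiment code, just the shape of the fix (assumes a texture is already bound):

```c
/* glVertex*() captures the *current* attribute state, so glTexCoord*()
   must come before it; my bug was the other way round */
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
glEnd();
```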
for the morbidly curious, here's my experiment code: https://gist.github.com/hikari-no-yume/d454c8c4cdc0b6e460238238f53aec07
anyway, yay, seems like this might be viable for touchHLE. I'll simply pray this works on Windows and Android too…
well, I prototyped it in touchHLE, and GL context sharing does work for presenting frames…
…but I have to call glFinish(), so I may as well use glReadPixels(), the thing I had been dreading…
I mean. maybe using glReadPixels() is fine. these are ancient games, surely they can finish rendering in well under 16ms…? I'm not doing major rendering after that, just compositing. OTOH I shudder at the thought of sending a 4K framebuffer from VRAM to system RAM and back…
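for scale: a 4K framebuffer at RGBA8 is 3840 × 2160 × 4 bytes ≈ 33 MB, and glReadPixels() is synchronous, so it stalls until every pending draw has finished and then copies all of that to system RAM. the readback path would look something like this (a sketch of the idea, not what touchHLE actually does; width/height/pixels/texture are assumed to come from the caller):

```c
/* sketch of the glReadPixels fallback: read the finished frame out of the
   game's context, then hand the pixels to whatever does the compositing */
#include <SDL_opengl.h>

void read_back_frame(int width, int height, unsigned char *pixels) {
    /* implies a full pipeline flush: the driver must finish rendering
       before it can copy the framebuffer into system RAM */
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}

void upload_frame(GLuint texture, int width, int height,
                  const unsigned char *pixels) {
    /* ...and then the same ~33 MB goes straight back to VRAM as a texture
       for compositing in the presenting context */
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}
```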
- Event handling: <1 ms
- ARMv6 emulation: 5 ms
- Emulation overhead: 5 ms
- Waiting for vsync: 5 ms
- Unavoidable OpenGL pipeline flush: 15 ms
someone who is good at the graphics please help me budget this. my emulator is dying
(not real numbers but they're what I'm afraid of)
what are the chances I end up trying to create virtual opengl contexts… god, every solution sucks
ok so my options are
- require opengl context sharing and introduce at least 16.67ms of latency to frame presentation
- use glReadPixels() and cause a pipeline flush every single frame, and potentially send a 4K framebuffer from VRAM to system RAM and back
the secret third option is I reimplement OpenGL ES 1.1… on top of WebGPU or something
fuck I hate everything
@hikari does ANGLE or something do GLES 1.1?
@leftpaddotpy I think ANGLE has some support for OpenGL ES 1.x? not sure how complete it is. unfortunately ANGLE is probably a nightmare to integrate into my project because it's so huge and uses Google's weird build system. I really should try it though.
@hikari yeah, my understanding is that people tend to get so mad at the ANGLE build system in particular that there may be forks that convert it to cmake or literally anything else
@leftpaddotpy a friend of mine ported it to cmake! but alas they ported a pretty old version of it, from before gles1 support was added