Thread with 20 posts
WAIT
sdl2's opengl context sharing maybe actually works, i was just using it wrong?
oh my god 😭
i need to verify that i can actually do something useful with this, not just that sdl2 says the context was created successfully, but… 🤞🤞🤞
this is of course about touchHLE, which really needs the ability to share framebuffers between opengl contexts in some form, but which i've never been able to get working until now, so i've had to resort to incredibly horrible workarounds that'll only get worse in future
Core Animation here we come. i hope
hell yeah I can share a texture between contexts!!
wait… wait why is the texture fucked up the first time it's rendered… in both contexts… what the fuck
i didn't realise the drivers were this buggy 😱
- first frame, drawn with first context, which creates the “GL” texture
- second frame, drawn with the second context, which uses the same texture
- third frame, drawn with the first context, same texture again
- fourth frame, drawn with the second context, same texture again
hmm, could this just be a bug related to GL_QUADS and not actually a problem with the shared texture? 🤔
(yes I know that's legacy, but so is my whole stack and this is a prototype in any case)
no wait lol it's my fault, I was calling glVertex() before glTexCoord() rather than after, phew
(yes I know, I'd never use glBegin()/glEnd() normally but they're extremely convenient for prototyping)
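since this tripped me up: in glBegin()/glEnd() mode, glVertex*() latches whatever the current texcoord is, so glTexCoord*() has to come first. rough sketch of the corrected ordering (texture name and coordinates are placeholders, not my actual code):

```c
/* assumes <GL/gl.h> and an already-created GLuint `shared_texture` */
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, shared_texture);
glBegin(GL_QUADS);
/* texcoord first, then the vertex it belongs to: glVertex*() captures the
 * current texcoord, so the old (reversed) order gave each vertex stale data */
glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
glEnd();
```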
for the morbidly curious, here's my experiment code: https://gist.github.com/hikari-no-yume/d454c8c4cdc0b6e460238238f53aec07
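(for context, and not necessarily exactly what the gist does: a minimal sketch of how SDL2 context sharing is usually set up, i.e. the sharing attribute has to be set, and the first context has to be current, before the second context is created)

```c
#include <SDL.h>

/* Sketch only: SDL2 shares a newly created GL context with whichever
 * context is current at SDL_GL_CreateContext() time, but only if
 * SDL_GL_SHARE_WITH_CURRENT_CONTEXT is set at that point. */
static void create_shared_contexts(void) {
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *window = SDL_CreateWindow(
        "share test", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
        640, 480, SDL_WINDOW_OPENGL);

    SDL_GLContext first = SDL_GL_CreateContext(window); /* becomes current */

    SDL_GL_SetAttribute(SDL_GL_SHARE_WITH_CURRENT_CONTEXT, 1);
    SDL_GLContext second = SDL_GL_CreateContext(window); /* shares objects with `first` */

    /* textures, buffers etc. created in `first` are now visible in `second` */
    (void)second;
}
```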
anyway, yay, seems like this might be viable for touchHLE. I'll simply pray this works on Windows and Android too…
well, I prototyped it in touchHLE, and GL context sharing does work for presenting frames…
…but I have to call glFinish(), so I may as well use glReadPixels(), the thing I had been dreading…
I mean. maybe using glReadPixels() is fine. these are ancient games, surely they can finish rendering in well under 16ms…? I'm not doing major rendering after that, just compositing. OTOH I shudder at the thought of sending a 4K framebuffer from VRAM to system RAM and back…
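for concreteness, the readback path I'm dreading looks roughly like this (sketch only; the FBO, texture, size and buffer names are placeholders):

```c
/* glReadPixels() makes the driver finish rendering to the bound framebuffer,
 * then copies the result into system RAM. */
glBindFramebuffer(GL_FRAMEBUFFER, offscreen_fbo);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

/* ...switch to the presenting context, then re-upload for compositing: */
glBindTexture(GL_TEXTURE_2D, composite_texture);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels);
```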
- Event handling: <1ms
- ARMv6 emulation: 5ms
- Emulation overhead: 5ms
- Waiting for vsync: 5ms
- Unavoidable OpenGL pipeline flush: 15ms
someone who is good at the graphics please help me budget this. my emulator is dying
(not real numbers but they're what I'm afraid of)
what are the chances I end up trying to create virtual opengl contexts… god, every solution sucks
ok so my options are
- require opengl context sharing and introduce at least 16.67ms of latency to frame presentation
- use glReadPixels() and cause a pipeline flush every single frame, and potentially send a 4K framebuffer from VRAM to system RAM and back
the secret third option is I reimplement OpenGL ES 1.1… on top of WebGPU or something
fuck I hate everything
or maybe I can do some bullshit with EGL/WGL/etc???? haven't looked into that but oh god I don't want to have to stop using SDL
@hikari IIUC you are trying to use the result from rendering in one context in another one, right? And this is all in the same process. Does the process doing the rendering need a window surface, or does it do it somehow offscreen? I think you may be able to get away with taking care of buffer allocation yourself and using that buffer as the target texture for an FBO, and the same buffer should be usable in the other context as well. The main thing to look for is glEGLImageTargetTexture2DOES
@hikari the caveats are:
- Either you glFinish on the context that paints before using the buffer as texture elsewhere for rendering (potentially slow), or arrange using fence objects for syncing (potentially gnarly).
- What can be used as a buffer to bind as render target for the FBO depends on the EGL platform: GBM buffers on Linux for most drivers, AHardwareBuffer on Android, IOSurface on Apple (I think, don't take my word for it) and so on.
- You probably need two (front/back) buffers
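a rough sketch of what that EGLImage route could look like, if I understand it right (placeholder names like `dpy` and `client_buffer`, platform-specific buffer import elided, and the *KHR/*OES entry points normally loaded via eglGetProcAddress()):

```c
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

/* Placeholders: `dpy` (EGLDisplay), `client_buffer` (platform buffer, e.g. a
 * GBM bo on Linux or an AHardwareBuffer on Android). */

/* 1. Wrap the shared buffer in an EGLImage (target/attribs are platform-dependent). */
EGLImageKHR image = eglCreateImageKHR(dpy, EGL_NO_CONTEXT,
                                      EGL_NATIVE_BUFFER_ANDROID, /* e.g. on Android */
                                      client_buffer, NULL);

/* 2. Rendering context: back an FBO colour attachment with the image. */
GLuint fbo, render_tex;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glGenTextures(1, &render_tex);
glBindTexture(GL_TEXTURE_2D, render_tex);
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, (GLeglImageOES)image);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, render_tex, 0);
/* ...draw the frame... */

/* 3. Sync before the other context samples it: glFinish(), or an EGL fence. */
EGLSyncKHR fence = eglCreateSyncKHR(dpy, EGL_SYNC_FENCE_KHR, NULL);

/* 4. Compositing context: import the same EGLImage into its own texture. */
GLuint composite_tex;
glGenTextures(1, &composite_tex);
glBindTexture(GL_TEXTURE_2D, composite_tex);
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, (GLeglImageOES)image);
eglClientWaitSyncKHR(dpy, fence, EGL_SYNC_FLUSH_COMMANDS_BIT_KHR, EGL_FOREVER_KHR);
/* now safe to sample `composite_tex` when compositing */
```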
> IIUC you are trying to use the result from rendering in one context in another one, right? And this is all in the same process.
Yes, that's right.
> Does the process doing the rendering need a window surface, or does it do it somehow offscreen?
The first rendering is offscreen, resulting in an OpenGL renderbuffer. Then I want to use that renderbuffer's content in another context so I can composite it with other stuff and output it to the window.
Thanks for the suggestions related to EGL, that seems like a workable solution.
@hikari you're welcome, and at any rate I hope you manage to get things working in a way that feels satisfactory to you, regardless of whether my suggestion ends up being useful or not. While I haven't been much into the iOS ecosystem, I think it's great that there are folks out there poking at it in ways that would enable software preservation—happy hacking!