

you know, it's a shame that C is designed by ISO/IEC JTC1/SC22/WG14 and not the OpenGL Architecture Review Board. i'm sure we'd all rather write:

glDisable(GL_MASKING);
glEnableClientState(GL_SOURCE_DATA_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glSourceDataPointer(1, GL_UNSIGNED_BYTE, 1, src_ptr);
glEnableClientState(GL_DESTINATION_DATA_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glDestinationDataPointer(1, GL_UNSIGNED_BYTE, 1, dst_ptr);
glCopyBytes(0, num_bytes);
if (glGetError() != 0) {
   // fuck
}

rather than

memcpy(dst_ptr, src_ptr, num_bytes);

“client state? in 2023?” i hear you cry. ok ok. surely what we want to do then is:

glDisable(GL_MASKING);
glEnableSourceDataArray();
glBindBuffer(GL_ARRAY_BUFFER, src_buf);
glSourceDataPointer(1, GL_UNSIGNED_BYTE, 1, (const GLvoid*)(uintptr_t)src_offset);
glEnableDestinationDataArray();
glBindBuffer(GL_ARRAY_BUFFER, dst_buf);
glDestinationDataPointer(1, GL_UNSIGNED_BYTE, 1, (const GLvoid*)(uintptr_t)dst_offset);
glCopyBytes(0, num_bytes);
if (glGetError() != 0) {
   // fuck
}
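
(to be fair, real GL has had an actual buffer-to-buffer copy since 3.1. a rough sketch of what the imaginary glCopyBytes above collapses to, assuming GL 3.1+ and reusing the same src_buf / dst_buf / offsets / num_bytes from this post:)

// bind the two buffers to the dedicated copy targets
glBindBuffer(GL_COPY_READ_BUFFER, src_buf);
glBindBuffer(GL_COPY_WRITE_BUFFER, dst_buf);
// glCopyBufferSubData(readTarget, writeTarget, readOffset, writeOffset, size)
glCopyBufferSubData(GL_COPY_READ_BUFFER, GL_COPY_WRITE_BUFFER,
                    (GLintptr)src_offset, (GLintptr)dst_offset,
                    (GLsizeiptr)num_bytes);
if (glGetError() != 0) {
   // still fuck
}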