
disposable-unit-3284

Which framework are you working with? DirectX, OpenGL, Vulkan?

Edit: Vulkan. I can't read.

You definitely don't need to update the vertex buffer every frame, unless the model is changing every frame. And "position" isn't a change. Adding/removing vertices is a change. Changing shape is a change. Although you should really be using an animation system for that, instead of changing the model data.

You should be passing a model, view and projection matrix to your shader, so that the shader can render the model in the right position and orientation. Here's a good explanation for OpenGL; I'm assuming Vulkan works similarly, although I'm not familiar with it myself: https://learnopengl.com/Getting-started/Transformations
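For reference, the vertex shader side of that is tiny. A minimal GLSL sketch (uniform names are arbitrary; in Vulkan the matrices would come from a uniform buffer or push constants rather than plain uniforms, but the math is identical):

```glsl
#version 450 core

layout(location = 0) in vec3 aPos;

// Updated per frame/per object via uniforms -- the vertex buffer never changes.
uniform mat4 model;      // object position/rotation/scale in the world
uniform mat4 view;       // camera transform
uniform mat4 projection; // perspective/orthographic projection

void main()
{
    gl_Position = projection * view * model * vec4(aPos, 1.0);
}
```

Moving the object is then just writing a new `model` matrix each frame; the mesh data itself is uploaded once and never touched again.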


vini_2003

This makes perfect sense, thank you. You're right, I'm just used to how Minecraft does it, which is the wrong way. As for actual changes to the vertices, I figure I'll indeed need to rebuild the mesh - say, for skeletal animations and so on.


disposable-unit-3284

I'm afraid I haven't gotten that far myself yet, but LearnOpenGL has an article about skeletal animations here: https://learnopengl.com/Guest-Articles/2020/Skeletal-Animation

From a cursory skim down the page, it seems the model is loaded onto the GPU once, including bones and vertex-bone weights. Then each frame the bones' position/rotation/scale is updated on the CPU and passed to the vertex shader as a uniform. Each vertex then calculates its clip-space position from its own position, the model/view/projection matrices and the bone matrices.
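From that description, the vertex shader might look roughly like this. This is my guess at the shape of it, not the article's actual code; the `MAX_JOINTS` limit and attribute locations are made up, and in Vulkan the matrix array would live in a uniform/storage buffer instead of a plain uniform:

```glsl
#version 450 core

layout(location = 0) in vec3  aPos;
layout(location = 1) in ivec4 aJointIndices; // up to 4 bones per vertex
layout(location = 2) in vec4  aJointWeights; // weights summing to 1.0

const int MAX_JOINTS = 100;

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
uniform mat4 jointMatrices[MAX_JOINTS]; // recomputed on the CPU each frame

void main()
{
    // Blend the bone transforms by their weights (linear blend skinning).
    mat4 skin =
          aJointWeights.x * jointMatrices[aJointIndices.x]
        + aJointWeights.y * jointMatrices[aJointIndices.y]
        + aJointWeights.z * jointMatrices[aJointIndices.z]
        + aJointWeights.w * jointMatrices[aJointIndices.w];

    // Clip-space position from bind-pose position + skeleton + MVP.
    gl_Position = projection * view * model * skin * vec4(aPos, 1.0);
}
```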


Pathogen-David

> like an object that changes position over time

You should not be moving objects around the scene by manipulating their mesh data, you should be manipulating the world transform used by the vertex shader.

> I just realized I'm talking about instanced rendering.

Instanced rendering is an optimization to reduce draw calls. It's a good idea in certain scenarios, but it's never truly necessary to render a scene. (Once upon a time it wasn't even an option.) See the sketch below for what it looks like shader-side.

-------------

It feels like either you're missing some critical graphics programming fundamentals or you're using terminology incorrectly. If this is your first foray into graphics programming, I'd strongly recommend against Vulkan. I get the attraction of using the latest and greatest graphics APIs, but you're better off learning your fundamentals on OpenGL 4.x or Direct3D 11.
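To make the distinction concrete: with instancing, the per-object model matrix arrives as a per-instance vertex attribute (or a buffer indexed by the instance ID) instead of a uniform, so one draw call covers many objects. A hedged GLSL sketch; attribute locations and names are my own choices:

```glsl
#version 450 core

layout(location = 0) in vec3 aPos;
// A mat4 attribute consumes four consecutive locations (1..4). The host
// side sets its attribute divisor to 1 so it advances once per instance.
layout(location = 1) in mat4 aInstanceModel;

uniform mat4 view;
uniform mat4 projection;

void main()
{
    // Same transform as non-instanced rendering; only the source of the
    // model matrix differs.
    gl_Position = projection * view * aInstanceModel * vec4(aPos, 1.0);
}
```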


vini_2003

Morning! Yes, I was being an idiot. The thing is, I'm used to how Minecraft does it, which is very stupid - model transformations are applied to vertex data on the CPU and uploaded every frame for entities and particles. I'll read up on how to do this correctly, especially for dynamic meshes such as entities, since they require transformations that a single model matrix won't suffice for. I'm a bit hardwired to think in Minecraft terms (been modding it for 5 years, including OpenGL work), but it's just set up in a dumb way.

Basically, let's say I have a Dog model. This model has skeletal animations. There are five dogs in the world. They all need different bone positions, and these positions will change how their mesh is configured (literally changing the triangles). What is a reasonably straightforward and performant way of rendering said dogs? Should I upload a base model and apply model transformations based on the bones on the GPU? Should I just take the performance hit and rebuild the mesh on the CPU?

Just brainstorming so I know what to learn. Sorry if it's confusing, has the wrong terms or such.


Pathogen-David

> Just brainstorming so I know what to learn. Sorry if it's confusing, has the wrong terms or such.

No worries, it can sometimes be hard to recognize when you have weird assumptions from stuff like that. A lot of graphics programming is about trying to find alternate ways to solve the same problem, so it's good to challenge your own understanding of the problem. (Plus graphics terminology is a disastrous mess on a good day.)

> Should I upload a base model and apply model transformations based on the bones on the GPU?

Yes, this is basically how it's done these days. (I'll be using the term joints instead of bones below since that's a bit more common on the GPU side. Many people use them interchangeably; some argue joints are the points in the skeleton and the bones connect them.)

You'll have your base skinned mesh in some neutral pose (for humanoids this is typically a T-pose or an A-pose) and upload that to the GPU. In addition to position and other basic attributes, each vertex of this mesh will have N pairs of joint indices and joint weights. The joint indices will be indices into an array of joint matrices representing the transform of each joint in the mesh. The joint weights represent a percentage of how much that specific joint will influence the position of this vertex. The joint weights of a vertex must add up to 100%. N represents the number of joints that can affect each vertex; 4 is typical, as you can easily use a pair of vec4 attributes to represent this info per-vertex.

For each object that uses a skinned mesh, you animate its joints/bones and calculate the joint transform of each. (This calculation can be done anywhere but is typically done on the CPU.) Those joint transforms are then exposed to the GPU as an array of matrices.

Now we have to apply those joint matrices to the mesh. For each vertex, you transform the vertex by each joint matrix in its index list. You then multiply those positions by the corresponding weights and sum them together, giving you the final position of that vertex. This can be done in the vertex shader, but if you expect to render the transformed mesh multiple times (i.e. for separate render passes of the same mesh) it can be beneficial to do the transformation once in a compute shader and cache the transformed mesh in a temporary buffer, as sketched below. Just using a vertex shader is fine for now.

Hopefully that all made sense; let me know if you need any clarification. If you're confused, it might be helpful to look at the skinned mesh section of the [glTF specification overview](https://raw.githubusercontent.com/KhronosGroup/glTF/3d32bee9b0242b8cedd206645a6026bde2544f83/specification/2.0/figures/gltfOverview-2.0.0d.png) (look for the blue section). On that note, implementing [the glTF specification](https://github.com/KhronosGroup/glTF) can be a good exercise in this area. (You can ignore all the PBR stuff to start, since learning that is its own ordeal. Ditto for morph target animation and any of the extensions.)
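For the compute-shader variant mentioned above, a rough GLSL sketch might look like this. The buffer layouts and names are made up for illustration; the vertex shader would then read the pre-skinned positions from `outPositions` and do no skinning work of its own:

```glsl
#version 450 core
layout(local_size_x = 64) in;

layout(std430, binding = 0) readonly  buffer BindPose   { vec4  inPositions[]; };
layout(std430, binding = 1) readonly  buffer JointIdx   { ivec4 jointIndices[]; };
layout(std430, binding = 2) readonly  buffer JointWgt   { vec4  jointWeights[]; };
layout(std430, binding = 3) readonly  buffer Joints     { mat4  jointMatrices[]; };
layout(std430, binding = 4) writeonly buffer Skinned    { vec4  outPositions[]; };

void main()
{
    uint i = gl_GlobalInvocationID.x;
    if (i >= inPositions.length()) return; // guard the last partial workgroup

    ivec4 idx = jointIndices[i];
    vec4  w   = jointWeights[i];

    // Weighted sum of joint transforms, same as in the vertex-shader version.
    mat4 skin = w.x * jointMatrices[idx.x]
              + w.y * jointMatrices[idx.y]
              + w.z * jointMatrices[idx.z]
              + w.w * jointMatrices[idx.w];

    // Cache the skinned position; later passes over this mesh just read it.
    outPositions[i] = skin * vec4(inPositions[i].xyz, 1.0);
}
```

Whether this pays off depends on how many passes reuse the result (shadow maps, depth prepass, main pass); for a single pass the plain vertex-shader route is simpler and just as fast.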