Hey all, I need some help solidifying my understanding of peer replication.
In a networked game setup where the player is ahead in time and peers are behind, I learned that the process for replicating peers is:
- The server sends you their state (position, velocity, facing, and inputs).
- Wait until you have at least 2 states for that peer.
- Lerp between the latest 2 states for that peer.
- If you run out of data, keep predicting with their latest inputs. When their data finally comes through, jump them back to the oldest authoritative point and process multiple states until they're caught up.
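To make sure I'm describing the first approach right, here's a minimal sketch of the buffered-interpolation idea. All names are mine, the "state" is just a 1-D position, and the extrapolation/replay part of step 4 is only marked, not implemented:

```python
import bisect

class SnapshotInterpolator:
    """Sketch of buffered interpolation: store timestamped server
    snapshots, render a fixed delay in the past, and lerp between
    the two snapshots that bracket the render time."""

    def __init__(self, delay=0.1):
        self.delay = delay   # render this far behind the newest data
        self.times = []      # sorted snapshot timestamps
        self.states = []     # matching states (toy: just a position)

    def on_snapshot(self, t, pos):
        # step 1: server sends us this peer's state
        i = bisect.bisect(self.times, t)
        self.times.insert(i, t)
        self.states.insert(i, pos)

    def sample(self, now):
        t = now - self.delay
        # step 2: need at least two states before we can lerp
        if len(self.times) < 2:
            return self.states[-1] if self.states else None
        i = bisect.bisect(self.times, t)
        if i == 0:
            return self.states[0]
        if i == len(self.times):
            # ran out of data: step 4 (predict with latest inputs) goes here
            return self.states[-1]
        # step 3: lerp between the two bracketing states
        t0, t1 = self.times[i - 1], self.times[i]
        a = (t - t0) / (t1 - t0)
        return self.states[i - 1] + a * (self.states[i] - self.states[i - 1])
```

So with a 100 ms delay, `sample(now)` is usually sitting between two known snapshots and never has to guess.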
I'm wondering about steps 2 and 3, though: why not look at one state at a time and use your prediction code? If you need the prediction/replay code for step 4 anyway, it seems like you'd just use that. Something like:
- The server sends you their state (position, velocity, facing, and inputs).
- Jump them to that state, or verify that your last prediction was correct.
- Sim them to their next predicted state (deterministic sim, should be perfect).
- If you run out of data, keep predicting with their latest inputs. When their data finally comes through, jump them back to the oldest authoritative point and process multiple states until they're caught up.
You would add some buffer time to try to make sure you can always base your prediction off real data.
Is there something I'm missing that makes the first approach better? It could be that wherever I learned this approach was just being unclear about the difference between the sim update and the render update (where you're constantly lerping between two known states), so it'd be good to get my understanding straightened out.