From 8a760283129e8313839f6fa60e7275446201e46a Mon Sep 17 00:00:00 2001
From: froobynooby <froobynooby@froobworld.com>
Date: Thu, 20 Feb 2020 15:50:49 +0930
Subject: [PATCH] Reduce entity tracker updates on move

With this patch, for each player we keep track of a set of
entities that the player is tracking. This is used to split
the entity tracker update logic in the movePlayer method in
PlayerChunkMap into two parts:
* Full update: Run through all entity trackers and update them
* Partial update: Run through only the entity trackers for
  entities the player is already tracking and update them
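
As the PlayerChunkMap hunk below shows, that per-player set is
kept as a map from player id to the trackers of the entities
the player currently sees, together with a per-player removal
queue. A standalone Java sketch of the bookkeeping follows; it
is illustrative only and uses plain java.util maps rather than
the fastutil Int2ObjectMap types the patch itself uses:

import java.util.HashMap;
import java.util.Map;

// Illustration of the per-player tracking index this patch introduces.
class TrackedEntityIndex<T> {
    // player id -> (entity id -> tracker) for entities the player already tracks
    private final Map<Integer, Map<Integer, T>> playerTrackedEntities = new HashMap<>();

    Map<Integer, T> getPlayerTrackedEntityMap(int playerId) {
        return playerTrackedEntities.computeIfAbsent(playerId, i -> new HashMap<>());
    }

    void startTracking(int playerId, int entityId, T tracker) {
        getPlayerTrackedEntityMap(playerId).put(entityId, tracker);
    }

    void stopTracking(int playerId, int entityId) {
        getPlayerTrackedEntityMap(playerId).remove(entityId);
    }
}

In the patch, entries are added when EntityTracker.updatePlayer
starts tracking a player, and are removed either directly (in
EntityTracker.clear and on entity removal) or through the
per-player remove queue, which flushRemoveQueue and
flushRemoveQueues drain once the tracker loops finish iterating.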

Partial updates always take less time than full updates,
usually by a considerable amount when players and entities
are spread out over the map. Assuming they are spread evenly
and there are x players online, a partial update would be
expected to take roughly 1/x of the time of a full update.

Full updates are only run when either of the following
conditions is met:
* It has been at least 20 ticks since the last full update
* The player has moved more than a set distance since the
  last full update (the distance is configurable)
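
Condensed, the check that the movePlayer hunk below performs
before choosing an iterator looks roughly like the following
sketch; the real code compares MinecraftServer.currentTick and
the player's lastTrackedTick/lastTrackedPosition against the
squared configured distance:

// Sketch of the full-vs-partial decision made in PlayerChunkMap.movePlayer.
final class TrackerUpdateDecision {
    static boolean needsFullUpdate(long currentTick, long lastFullUpdateTick,
                                   double movedDistanceSquared, double thresholdSquared) {
        return currentTick - lastFullUpdateTick >= 20     // at least one full update per second
            || movedDistanceSquared >= thresholdSquared;  // or the player moved far enough
    }
}

When this returns false, movePlayer iterates only the player's
own tracked-entity map instead of every tracker in
trackedEntities.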

The motivation for the first condition is that the client
sends the server its position once a second, which calls
movePlayer, so at a minimum we want to send the player an
updated set of the entities it can see every second.

The motivation for the second condition is that looping
through every entity in the world to check whether it has
come within tracking range after the player has moved only
0.1 blocks is largely unnecessary. Checking only after the
player has moved 1 or 2 blocks is far better for performance
and very unlikely to cause any noticeable side effects.

In testing, this has reduced the time taken for movement
packet processing by up to 4x. Packet processing for movement
packets often shows up as a major contributor to TPS loss on
servers with large player counts.

diff --git a/src/main/java/com/destroystokyo/paper/PaperWorldConfig.java b/src/main/java/com/destroystokyo/paper/PaperWorldConfig.java
index 7ca67a4aa5..e93176d8f2 100644
--- a/src/main/java/com/destroystokyo/paper/PaperWorldConfig.java
+++ b/src/main/java/com/destroystokyo/paper/PaperWorldConfig.java
@@ -668,4 +668,9 @@ public class PaperWorldConfig {
     private void zombieVillagerInfectionChance() {
         zombieVillagerInfectionChance = getDouble("zombie-villager-infection-chance", zombieVillagerInfectionChance);
     }
+
+    public double trackerUpdateDistance = 1;
+    private void trackerUpdateDistance() {
+        trackerUpdateDistance = getDouble("tracker-update-distance", trackerUpdateDistance);
+    }
 }
diff --git a/src/main/java/net/minecraft/server/EntityPlayer.java b/src/main/java/net/minecraft/server/EntityPlayer.java
index cf837bdb3b..9da6ed85fa 100644
--- a/src/main/java/net/minecraft/server/EntityPlayer.java
+++ b/src/main/java/net/minecraft/server/EntityPlayer.java
@@ -86,6 +86,10 @@ public class EntityPlayer extends EntityHuman implements ICrafting {
     public final int[] mobCounts = new int[ENUMCREATURETYPE_TOTAL_ENUMS]; // Paper
     public final com.destroystokyo.paper.util.PooledHashSets.PooledObjectLinkedOpenHashSet<EntityPlayer> cachedSingleMobDistanceMap;
     // Paper end
+    // Paper start - Reduce entity tracker updates on move
+    public Vec3D lastTrackedPosition = new Vec3D(0, 0, 0);
+    public long lastTrackedTick;
+    // Paper end
 
     // CraftBukkit start
     public String displayName;
diff --git a/src/main/java/net/minecraft/server/PlayerChunkMap.java b/src/main/java/net/minecraft/server/PlayerChunkMap.java
index 043ba702d7..8f477767b9 100644
--- a/src/main/java/net/minecraft/server/PlayerChunkMap.java
+++ b/src/main/java/net/minecraft/server/PlayerChunkMap.java
@@ -133,6 +133,39 @@ public class PlayerChunkMap extends IChunkLoader implements PlayerChunk.d {
     }
 
 
+    // Paper end
+
+    // Paper start - Reduce entity tracker updates on move
+    private double trackerUpdateDistanceSquared;
+    private final Int2ObjectMap<Int2ObjectMap<PlayerChunkMap.EntityTracker>> playerTrackedEntities = new Int2ObjectOpenHashMap<>();
+    private final Int2ObjectMap<Queue<Integer>> playerTrackedEntitiesRemoveQueue = new Int2ObjectOpenHashMap<>();
+
+    void flushRemoveQueue(EntityPlayer entityplayer) {
+        Queue<Integer> removeQueue = getPlayerTrackedEntityMapRemoveQueue(entityplayer.getId());
+        Int2ObjectMap<PlayerChunkMap.EntityTracker> entityMap = getPlayerTrackedEntityMap(entityplayer.getId());
+        for (Integer id = removeQueue.poll(); id != null; id = removeQueue.poll()) {
+            entityMap.remove(id);
+        }
+    }
+
+    void flushRemoveQueues() {
+        for (Int2ObjectMap.Entry<Queue<Integer>> entry : playerTrackedEntitiesRemoveQueue.int2ObjectEntrySet()) {
+            Int2ObjectMap<EntityTracker> entityMap = getPlayerTrackedEntityMap(entry.getKey());
+            Queue<Integer> removeQueue = entry.getValue();
+            for (Integer id = removeQueue.poll(); id != null; id = removeQueue.poll()) {
+                entityMap.remove(id);
+            }
+        }
+    }
+
+    Int2ObjectMap<EntityTracker> getPlayerTrackedEntityMap(int id) {
+        return playerTrackedEntities.computeIfAbsent(id, i -> new Int2ObjectOpenHashMap<>());
+    }
+
+    Queue<Integer> getPlayerTrackedEntityMapRemoveQueue(int id) {
+        return playerTrackedEntitiesRemoveQueue.computeIfAbsent(id, i -> new java.util.ArrayDeque<>());
+    }
+
     // Paper end
 
     public PlayerChunkMap(WorldServer worldserver, File file, DataFixer datafixer, DefinedStructureManager definedstructuremanager, Executor executor, IAsyncTaskHandler<Runnable> iasynctaskhandler, ILightAccess ilightaccess, ChunkGenerator<?> chunkgenerator, WorldLoadListener worldloadlistener, Supplier<WorldPersistentData> supplier, int i) {
@@ -167,6 +200,7 @@ public class PlayerChunkMap extends IChunkLoader implements PlayerChunk.d {
         this.m = new VillagePlace(new File(this.w, "poi"), datafixer, this.world); // Paper
         this.setViewDistance(i);
         this.playerMobDistanceMap = this.world.paperConfig.perPlayerMobSpawns ? new com.destroystokyo.paper.util.PlayerMobDistanceMap() : null; // Paper
+        this.trackerUpdateDistanceSquared = Math.pow(this.world.paperConfig.trackerUpdateDistance, 2); // Paper
     }
 
     public void updatePlayerMobTypeMap(Entity entity) {
@@ -1339,8 +1373,19 @@ public class PlayerChunkMap extends IChunkLoader implements PlayerChunk.d {
     }
 
     public void movePlayer(EntityPlayer entityplayer) {
-        ObjectIterator objectiterator = this.trackedEntities.values().iterator();
+        // Paper start
+        // ObjectIterator objectiterator = this.trackedEntities.values().iterator();
+        ObjectIterator objectiterator;
 
+        if (MinecraftServer.currentTick - entityplayer.lastTrackedTick >= 20
+                || entityplayer.lastTrackedPosition.distanceSquared(entityplayer.getPositionVector()) >= trackerUpdateDistanceSquared) {
+            entityplayer.lastTrackedPosition = entityplayer.getPositionVector();
+            entityplayer.lastTrackedTick = MinecraftServer.currentTick;
+            objectiterator = this.trackedEntities.values().iterator(); // Update all entity trackers
+        } else {
+            objectiterator = getPlayerTrackedEntityMap(entityplayer.getId()).values().iterator(); // Only update entity trackers for already tracked entities
+        }
+        // Paper end
         while (objectiterator.hasNext()) {
             PlayerChunkMap.EntityTracker playerchunkmap_entitytracker = (PlayerChunkMap.EntityTracker) objectiterator.next();
 
@@ -1350,6 +1395,7 @@ public class PlayerChunkMap extends IChunkLoader implements PlayerChunk.d {
                 playerchunkmap_entitytracker.updatePlayer(entityplayer);
             }
         }
+        flushRemoveQueues(); // Paper
 
         int i = MathHelper.floor(entityplayer.locX()) >> 4;
         int j = MathHelper.floor(entityplayer.locZ()) >> 4;
@@ -1491,12 +1537,21 @@ public class PlayerChunkMap extends IChunkLoader implements PlayerChunk.d {
 
                 playerchunkmap_entitytracker.clear(entityplayer);
             }
+            // Paper start
+            playerTrackedEntities.remove(entityplayer.getId());
+            playerTrackedEntitiesRemoveQueue.remove(entityplayer.getId());
+            // Paper end
         }
 
         PlayerChunkMap.EntityTracker playerchunkmap_entitytracker1 = (PlayerChunkMap.EntityTracker) this.trackedEntities.remove(entity.getId());
 
         if (playerchunkmap_entitytracker1 != null) {
             playerchunkmap_entitytracker1.a();
+            // Paper start
+            for (EntityPlayer player : playerchunkmap_entitytracker1.trackedPlayers) {
+                getPlayerTrackedEntityMap(player.getId()).remove(playerchunkmap_entitytracker1.tracker.getId());
+            }
+            // Paper end
         }
         entity.tracker = null; // Paper - We're no longer tracked
     }
@@ -1537,7 +1592,7 @@ public class PlayerChunkMap extends IChunkLoader implements PlayerChunk.d {
             }
             world.timings.tracker2.stopTiming(); // Paper
         }
-
+        flushRemoveQueues(); // Paper
 
     }
@@ -1586,6 +1641,7 @@ public class PlayerChunkMap extends IChunkLoader implements PlayerChunk.d {
                 }
             }
         }
+        flushRemoveQueue(entityplayer); // Paper
 
         Iterator iterator;
         Entity entity1;
@@ -1682,6 +1738,7 @@ public class PlayerChunkMap extends IChunkLoader implements PlayerChunk.d {
             org.spigotmc.AsyncCatcher.catchOp("player tracker clear"); // Spigot
             if (this.trackedPlayers.remove(entityplayer)) {
                 this.trackerEntry.a(entityplayer);
+                getPlayerTrackedEntityMap(entityplayer.getId()).remove(this.tracker.getId()); // Paper
             }
 
         }
@@ -1718,9 +1775,11 @@ public class PlayerChunkMap extends IChunkLoader implements PlayerChunk.d {
 
                     if (flag1 && this.trackedPlayerMap.putIfAbsent(entityplayer, true) == null) { // Paper
                         this.trackerEntry.b(entityplayer);
+                        getPlayerTrackedEntityMap(entityplayer.getId()).put(this.tracker.getId(), this); // Paper
                     }
                 } else if (this.trackedPlayers.remove(entityplayer)) {
                     this.trackerEntry.a(entityplayer);
+                    getPlayerTrackedEntityMapRemoveQueue(entityplayer.getId()).add(this.tracker.getId()); // Paper
                 }
 
             }
--
2.26.2