Increases the risk of deadlock if a plugin using ProtocolLib sends a packet
async, a listener then reads world state, and the main thread is then
blocked waiting for the queue to flush.
This will break out of the synchronized block when it jumps to the netty event loop.
See: https://gist.github.com/aikar/e7abb2ba7059149d0a91f7a226e98590
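For illustration only, a minimal sketch of the pattern in plain Netty terms (the PacketDispatch class and sendAsync method are made up for this example; the real fix lives in the server's internal packet send path):

    import io.netty.channel.Channel;

    final class PacketDispatch {
        // Instead of writing the packet while still holding a lock that a main
        // thread listener may be waiting on, hand the write to the channel's
        // event loop so the caller can release its lock immediately.
        static void sendAsync(Channel channel, Object packet) {
            if (channel.eventLoop().inEventLoop()) {
                channel.writeAndFlush(packet);
            } else {
                channel.eventLoop().execute(() -> channel.writeAndFlush(packet));
            }
        }
    }
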
Java 9+ doesn't allow using the exposed cleanup method, but added
a new method on Unsafe to do it.
So we have to detect the Java version and use the appropriate strategy.
See: https://www.evanjones.ca/java-bytebuffer-leak.html
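A rough sketch of that strategy selection (not Paper's actual code; here the "version detection" is done implicitly by probing for the Java 9+ Unsafe.invokeCleaner method first and falling back to the Java 8 reflective cleaner):

    import java.lang.reflect.Field;
    import java.lang.reflect.Method;
    import java.nio.ByteBuffer;

    final class DirectBufferCleaner {
        static void clean(ByteBuffer buffer) {
            if (!buffer.isDirect()) {
                return;
            }
            try {
                // Java 9+ path: Unsafe.invokeCleaner(ByteBuffer)
                Class<?> unsafeClass = Class.forName("sun.misc.Unsafe");
                Field theUnsafe = unsafeClass.getDeclaredField("theUnsafe");
                theUnsafe.setAccessible(true);
                Object unsafe = theUnsafe.get(null);
                Method invokeCleaner = unsafeClass.getMethod("invokeCleaner", ByteBuffer.class);
                invokeCleaner.invoke(unsafe, buffer);
            } catch (Throwable java9Unavailable) {
                try {
                    // Java 8 path: ((DirectBuffer) buffer).cleaner().clean()
                    Method cleanerMethod = buffer.getClass().getMethod("cleaner");
                    cleanerMethod.setAccessible(true);
                    Object cleaner = cleanerMethod.invoke(buffer);
                    cleaner.getClass().getMethod("clean").invoke(cleaner);
                } catch (Throwable ignored) {
                    // No known strategy; fall back to letting GC/finalizers handle it.
                }
            }
        }
    }
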
This is potentially a source of lots of native memory usage.
We are clearly seeing native usage upwards of 1-4GB, which doesn't make sense.
The Region File usage fixed in the previous patch should technically only have been
somewhat temporary until GC finally got to it some time later, but between all the various
plugins doing IO on various threads, this hidden detail of the JDK could be
keeping long-lived large direct buffers in cache.
Set the system property at server startup, if not already set, to help protect from this.
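A minimal sketch of that startup guard, assuming the relevant property is jdk.nio.maxCachedBufferSize (the cap value shown is illustrative, not necessarily what Paper ships):

    public final class StartupProperties {
        // Cap the JDK's per-thread temporary direct buffer cache before any IO
        // happens, but only if the admin hasn't already set it themselves.
        public static void capDirectBufferCache() {
            if (System.getProperty("jdk.nio.maxCachedBufferSize") == null) {
                System.setProperty("jdk.nio.maxCachedBufferSize", "262144"); // 256 KiB, illustrative
            }
        }
    }
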
Mojang was semi leaking native memory here by relying on finalizers
to clean up the direct memory.
Finalizers have no guarantee on when they will be run, and since this is
old generation memory, it might be a while before it's called.
This method shows up as super hot in profiler, and also a high "self" time.
Upon analysis, it appears most usages of this method fall through to the final
else statement of the nasty ternary.
Upon even further analysis, it appears the majority of those have a
consistent list 1: one with an Infinity head and tail.
First optimization is to detect these infinite states and immediately return that
VoxelShapeMergerList so we can avoid testing the rest for most cases.
Break the method into 2 to help the JVM promote inlining of this fast path.
Then it was also noticed that VoxelShapeMergerList constructor is also a hotspot
with a high self time...
Well, knowing that in most cases our list 1 is actually the same value, it allows
us to know that with an infinite list 1, the result of the merger is essentially
list 2 as the final values.
This let us analyze the 2 potential states (Infinite with 2 sources or 4 sources)
and compute a deterministic result for the MergerList values.
Additionally, this lets us avoid even allocating new objects for this too, further
reducing memory usage.
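As a hypothetical sketch of the shape of that optimization (names and types here are illustrative, not the actual Minecraft VoxelShape classes; the slow path is a simplified stand-in):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.TreeSet;

    final class MergerFastPath {
        // The "infinite" coordinate list: NEGATIVE_INFINITY head, POSITIVE_INFINITY tail.
        static boolean isInfiniteList(List<Double> coords) {
            return coords.size() == 2
                    && coords.get(0) == Double.NEGATIVE_INFINITY
                    && coords.get(1) == Double.POSITIVE_INFINITY;
        }

        // Small fast-path method, kept tiny so the JIT can inline it at call sites.
        static List<Double> merge(List<Double> list1, List<Double> list2) {
            if (isInfiniteList(list1)) {
                // With an infinite list1, the merged result is essentially list2,
                // so return it directly instead of allocating and computing a merger.
                return list2;
            }
            return mergeSlow(list1, list2);
        }

        // The rarely taken general case lives in its own, larger method.
        private static List<Double> mergeSlow(List<Double> list1, List<Double> list2) {
            TreeSet<Double> merged = new TreeSet<>(list1); // simplified stand-in for the real merge
            merged.addAll(list2);
            return new ArrayList<>(merged);
        }
    }
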
We've seen many cases where the "last good" x/y/z is desynced from
the x/y/z that is checked for moving too fast.
The theory is that when you have multiple movement packets queued up,
and the player is teleported after the first, then the 2nd and 3rd come in,
it triggers a massive movement velocity.
This will ensure that the server's position is synchronized any time the player is teleported.
Fixes #3258
It still technically read correctly for what it was doing, but
all our Player events begin with Player.
Nothing uses this event yet so safe to rename.
If you are some rapid adopter of this event, sorry :P
If a server enables Anti Xray, packet sending can be delayed until the
chunk has been obfuscated, blocking the entire queue from going out.
On a busy server, considering Anti Xray can only operate on a single
thread, it is quite possible for the obfuscation backlog to fall quite far behind,
resulting in a delay in sending packets.
And logging in is a clear area where lots of chunks are going to be queued
for obfuscation....
We should probably special case a few more than this (such as chat),
but this will hopefully help the keep alive issues some people run into.
Now has separate configs to control Villager immunities a bit:
whether or not they wake up due to panic situations (raids),
when they should wake up when work is available after being
inactive for so long, and for how long.
This work config may make the 'wake up inactive' feature for villagers
useless in most scenarios, but if there is a situation where the villager
does go without needing to work for a long period of time, it would kick
in then.
This also removes movement based immunities, so now villagers should only move
if they trigger a work immunity, panic immunity, or inactive wake up immunity.
Fixes #3263
Upstream has released updates that appear to apply and compile correctly.
This update has not been tested by PaperMC and as with ANY update, please do your own testing
Bukkit Changes:
b999860d SPIGOT-2304: Add LootGenerateEvent
CraftBukkit Changes:
77fd87e4 SPIGOT-2304: Implement LootGenerateEvent
a1a705ee SPIGOT-5566: Doused campfires & fires should call EntityChangeBlockEvent
41712edd SPIGOT-5707: PersistentDataHolder not Persistent on API dropped Item
If a sync load was triggered, it would process pending join events,
causing them to be added to the world in the middle of the entity ticking
process.
This caused their add to be queued instead of immediate, causing
"Illegal Tracking" errors.
This schedules it to fire at the player's next Connection Tick, which
is exactly where this entire process used to run anyways.
Also added missing tab complete and syntax for syncloadinfo debug command
Upstream has released updates that appear to apply and compile correctly.
This update has not been tested by PaperMC and as with ANY update, please do your own testing
Bukkit Changes:
220bc594 #486: Add method to get player's attack cooldown
21853d39 SPIGOT-5681: Increase max plugin channel size
5b972adc Improve build process
b55e58d9 Note which custom generator is missing required method
CraftBukkit Changes:
893ad93b #650: Add method to get player's attack cooldown
ef706b06 #655: Added support for the VM tag jansi.passthrough when processing messages sent to a ColouredConsoleSender.
e0cfb347 SPIGOT-5689: Fireball.setDirection increases velocity too much
94cb030f SPIGOT-5673: swingHand API does not show to self
b331a055 SPIGOT-5680: isChunkGenerated creates empty region files
e1335932 Improve build process
a8ec1d60 Add a couple of method null checks to CraftWorld
ce66f693 Misc checkstyle fixes
8bd0e9ab SPIGOT-5669: Fix Beehive.isSedated
Spigot Changes:
2040c4c4 SPIGOT-5677, MC-114796: Fix portals generating outside world border
ab8f6b5a Rebuild patches
e7dc2f53 Rebuild patches
This is the start of a new module for Paper to add support for APIs
that interface with Mojang APIs directly.
This allows us to version properly by MC version in case Mojang makes any major breaking changes.
It also lets us separate Mojang APIs from Paper-API so our downstream friends at Glowstone
will not have to worry about Mojang code.
Adds AsyncPlayerSendCommandsEvent
- Allows modifying on a per command basis what command data they see.
Adds CommandRegisteredEvent
- Allows manipulating the CommandNode to add more children/metadata for the client
Calling this 2.0 as it's a pretty major improvement with more knobs to twist.
This update fixes many things. The goal here is to restore vanilla behavior to some degree.
Instead of permanent inactive pools of animals, let them show some signs of life now and then....
Yes this may reduce performance compared to before, but I hope it is minimal. Got to find a balance.
Previous EAR logic really compromised vanilla behavior of mobs. This tries to restore it.
Changes:
1) All monsters are now classed as Monster. Mojang has an interface, we should use it.
- This now includes Shulker, Slimes, see #2 for Phantom and Ghast
2) Villagers and Flying Monsters now have their own separate activation range configs.
- Villagers will default to your Animals config
3) Added a bunch of more immunities
- Brand new entities are immune for a few seconds
- Entities that recently traveled by portal are immune for few seconds
- Entities that are leashed to a player are immune
- Ender Signals are immune
- Entities that are jumping, climbing, dying (lol) are immune
- Minecarts are now always immune to the movement restriction
4) Villager immunity received a major overhaul...
- Now has many immunities for Villager activities to let them
do their work then go back inactive
- Such as interacting with doors and workstations should be more normal now
- Raids will trigger immunities, in that villagers will run and hide when the bell rings.
- Raids should keep the entire village immune during the raid to keep gameplay mechanics working.
You can disable raids by game rule if you don't want raids.
Then the big one.....
Wake Up Inactive Entities:
One issue plaguing "farms" is that we no longer even let entities move now.
Entities become lifeless.
A new system has been introduced to wake up inactive entities every so often, to let
them stretch their legs, eat some food, play with each other and experience the good entity life.
Animals, Villagers, Monsters (includes Pillagers), and Flying Monsters will now wake up every
so often after staying inactive for a very long time. This grants them a temporary immunity;
the goal is that they will then find "stuff to do" by having a longer activity window.
How many to wake up, how often they wake up, and for how long they wake up are all configurable.
Current EAR Immunities really don't give some entities enough of a window to find work
to then keep them immune for the work to even start. This system should help that.
We will only wake up a few entities per tick on the first wave, with the budget restoring 1 per type per world per tick.
So say you have 10 monsters that qualify for inactive wake up and a budget of 8: those 8 will wake up on the first eligible tick,
then the 9th will wake up on the next tick, and the 10th on the tick after that.
If for 5 ticks no more inactive wake ups are needed, the buffer will have built back up to 5, and those 5 can go on the next tick that needs them.
This basically incrementally wakes them up, preventing too many from waking up in a single tick, to reduce impact to TPS.
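A hypothetical sketch of that per-type, per-world budget (numbers and names here are illustrative, not Paper's actual implementation or config values):

    final class WakeUpBudget {
        private final int maxBudget;
        private int available;

        WakeUpBudget(int maxBudget) {
            this.maxBudget = maxBudget;
            this.available = maxBudget;
        }

        // Called once at the start of each world tick: refill 1 per tick up to the cap.
        void tick() {
            if (available < maxBudget) {
                available++;
            }
        }

        // Called when an inactive entity of this group qualifies for a wake-up.
        boolean tryWakeUp() {
            if (available > 0) {
                available--;
                return true; // grant the temporary immunity this tick
            }
            return false; // budget spent; this entity waits for a later tick
        }
    }
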
This was missing Entity Tracking Range support, creating different
values in this section vs normal section.
Concerned this might have caused some carnage on the tracker if this code says
"Yes you should track this player 500 blocks away from you on a horse" and then
the other check uses the normal value.
Set:
settings:
- use-optimized-ticklist: false
If you are having issues with block updates and want to see if this fixes it.
Please report confirmations on #3145 ticket
This is friendlier to plugins: as far as the plugin is concerned,
the inventory did open and immediately closed.
We avoid sending the packet to client so they don't see the window
flash either.
If a plugin wants to avoid wasteful fake opens, they should check
that the player is not sleeping before opening the inventory.
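From the plugin side, that check is a one-liner against the standard Bukkit API (the class and method names around it are made up for the example):

    import org.bukkit.entity.Player;
    import org.bukkit.inventory.Inventory;

    final class InventoryOpener {
        // Skip the open entirely if the player is in a bed, so the server
        // never has to fake-open and immediately close it.
        static void openIfAwake(Player player, Inventory inventory) {
            if (!player.isSleeping()) {
                player.openInventory(inventory);
            }
        }
    }
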
Renames a bunch of timings to be more appropriate for the new environment.
Many things dealt with sync loads, which isn't correct anymore.
Adjusted timings to be a little bit more accurate here.
Also cleaned up old 1.13 async chunks configs so people won't keep
thinking they can change some of those configs when they can't.
Process loads outside of any canSleep check. Original intent was to
only apply those restrictions to generations but realized I had some
checks higher up the call chain.
Reworked the back off strategy to just run every 1 millisecond per world,
and to apply the per tick limit to generations only.
This guarantees that your chunk will load with at most around 1ms delay.
Additionally, fire midTick processing in a few more places, notably the
oversleep section, so we can keep processing loads there too, which has
a large window of up to 50ms...
Speaking of oversleep, we had a bug in our implementation changes for
Timings that caused oversleep to not sleep the correct amount.
Because we now moved it into the NEXT tick instead of THIS tick, the
value of nextTick had already been increased to +50ms, resulting in
the risk of sleeping more than it should, but, more importantly, this
caused every task that was trying to NOT run during oversleep to actually
run during oversleep.
This is now fixed.
Another small tweak is to the /tps command, to no longer show the star when
TPS is right at 20.
Due to inefficiencies in the sleep precision, TPS is commonly 20.02.
This causes the star to show up almost constantly, so now only show it if
we actually hit a real "catchup".
This commit also improves the changes to the CallbackExecutor, in that
it now is also recursion safe.
It was possible that the executor could run tasks out of desired order
if the executor task scheduled more executor tasks.
We solve this by ensuring new additions do not enter the currently iterated queue.
Each depth level will have its own queue.
Fixes #3220
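A rough sketch of that recursion-safety idea (this is not CraftBukkit's actual CallbackExecutor and is not thread-safe; it only illustrates giving each drain depth its own queue):

    import java.util.ArrayDeque;
    import java.util.concurrent.Executor;

    final class RecursionSafeCallbackExecutor implements Executor, Runnable {
        // One queue per drain depth; the last queue is where new tasks land.
        private final ArrayDeque<ArrayDeque<Runnable>> queues = new ArrayDeque<>();

        RecursionSafeCallbackExecutor() {
            queues.addLast(new ArrayDeque<>());
        }

        @Override
        public void execute(Runnable task) {
            // New additions never enter a queue that is currently being iterated.
            queues.peekLast().add(task);
        }

        @Override
        public void run() {
            ArrayDeque<Runnable> current = queues.peekLast();
            queues.addLast(new ArrayDeque<>()); // additions made during this drain go here
            Runnable task;
            while ((task = current.poll()) != null) {
                task.run();
            }
            ArrayDeque<Runnable> deferred = queues.removeLast();
            if (!deferred.isEmpty()) {
                // Tasks scheduled mid-drain run next, in order, at their own depth.
                current.addAll(deferred);
                run();
            }
        }
    }
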
This notably fixes the newest "Donkey Dupe", but also fixes a lot
of dupe bugs in general around nether portals and entity world transfer
We also fix item duplication generically: any time we clone an item
to drop it on the ground, we destroy the source item.
This avoids an ItemStack ever existing twice in the world state pre
clean up stage.
So even if something NEW comes up, it would be impossible to drop the
same item twice because the source was destroyed.
This should make us more forward proof on preventing dupes.
These dupes have been in for years at this point, they aren't new...
Everyone knows about them and is mitigating with plugins atm, breaking gameplay,
so better to make it clear it's fixed in the messaging.
I am submitting this to Mojang.
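As a plugin-level illustration of the same generic principle (the server-side patch lives in internal code; class and method names here are made up), using only standard Bukkit API:

    import org.bukkit.Location;
    import org.bukkit.inventory.ItemStack;

    final class SafeDropper {
        // Clone the stack for the drop, then zero out the source so the same
        // ItemStack can never exist twice in the world state.
        static void dropAndDestroySource(Location where, ItemStack source) {
            ItemStack drop = source.clone();
            source.setAmount(0); // destroy the original before the copy hits the ground
            where.getWorld().dropItemNaturally(where, drop);
        }
    }
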
We still keep the vanilla process of waiting for the existing session to be removed before logging in,
by storing a separate map of pending logins.
Also fire the callback using the executor in case further recursion causes any trouble.
* Don't check for Entities with Inventories if the block above us is also occluding (not just Inventoried)
* Remove Streams from Item Suck In and restore 1.12 AABB checks, which are simpler and avoid voxel allocations (was doing TWO Item Suck Ins)
* Restore missing application of previous optimization to getEntities for Inventoried Entities from CullanP
* Use getChunkIfLoadedImmediately for getting loaded entities (faster/simpler, no risk of sync loads)
I feel sorry for those who need to do this, and now feel even more sorry
since you're back to slow startups again.
There is keep-spawn-loaded-range in paper.yml to reduce the range to
mitigate this if you must keep async chunks off.
Bump chunk priority to ensure chunks load fast
Handle case where client disconnects before they even fire PlayerJoinEvent
- no longer call PlayerQuitEvent or print quit message.
- don't save the player data file if never joined. Nothing has changed.
CraftBukkit has a bug here that if you do save it, you will lose
any horse that the player logged off on because the horse hasn't
been resummoned yet.
ChunkMapDistance polls multiple entries for pendingChunkUpdates.
Each of these has the potential to move a chunk in and out of
"Loaded" state, which will result in multiple callbacks being
needed within a single tick of ChunkMapDistance
Use an ArrayDeque to store this Queue
This event is called when processing a player's attack on an entity,
right before their attack strength cooldown is reset. There are no existing
events that fire within this period of time, so it was impossible to
capture the player's attack strength via the API prior to this commit.
The event is cancellable, which will just skip over the normal reset of
the attack strength cooldown.
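A usage sketch from the plugin side, assuming the event class added here is PlayerAttackEntityCooldownResetEvent in com.destroystokyo.paper.event.player (the permission string is purely illustrative):

    import com.destroystokyo.paper.event.player.PlayerAttackEntityCooldownResetEvent;
    import org.bukkit.event.EventHandler;
    import org.bukkit.event.Listener;

    public final class CooldownListener implements Listener {
        @EventHandler
        public void onCooldownReset(PlayerAttackEntityCooldownResetEvent event) {
            // Cancelling keeps this hit from resetting the attack strength cooldown.
            if (event.getPlayer().hasPermission("example.keep-cooldown")) {
                event.setCancelled(true);
            }
        }
    }
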
This change lets players who are in their bed have a position which is above
ground for a longer period of time. This is because of the server not setting
their position to the ground/exit location when entering the bed, resulting in
the server believing they're still in the air.
Because we moved entity registration to occur before the PlayerJoinEvent occurs,
We started tracking the entity too early before it was registered to the client.
So delay tracking until after list packets have been sent.
No longer will trigger Synchronous Chunk Loads when a player logs
in to the server.
Will delay PlayerJoinEvent until the chunk has been loaded.
Should have massive performance benefits for larger servers with
lots of players logging in and out.
Confused on this one, as commit history says Spigot's version is older
than our version, so I'm not sure how we ended up duplicating this when
the 2 events are 100% identical.
Subclass Spigot's event and rely on the inheritance system, and clean up
the duplicate event fires.
Fix Spigot's setPosition to use setPositionRaw to avoid a premature chunk load.
For years, plugin developers have had to delay many things they do
inside of the PlayerJoinEvent by 1 tick to make it actually work.
This all boiled down to 1 reason why: The event fired before the
player was fully ready and joined to the world!
Additionally, if that player logged out on a vehicle, the event
fired before the vehicle was even loaded, so that plugins had no
access to the vehicle during this event either.
This change finally fixes this issue, fully preparing the player
into the world as a fully ready entity, vehicle included.
There should be no plugins that break because of this change, but it might
improve consistency with other plugins instead.
For example, if 2 plugins listen to this event, and the first one
teleported the player in the event, then the 2nd plugin actually
would be getting a valid player!
This was very non-deterministic. This change will ensure every plugin
receives a deterministic result, and 1 tick delays should no longer
be required.
Appending to the tail of the chunk tasks leaves a
window for the chunk to be moved to a
non-ticking status.
Additionally, use CB's callback executor so we
can ensure that we are not incorrectly
scheduling.
See: https://gist.github.com/aikar/dd22bbd2a3d78a2fd3d92e95e9f28dc6
As part of post processing a chunk, we can call ChunkConverter.
ChunkConverter then kicks off major physics updates, and when blocks
that have connections across chunk boundaries occur, a recursive risk
can occur where A updates a block that triggers a physics request.
That physics request may trigger a chunk request, that then enqueues
a task into the Mailbox ChunkTaskQueueSorter.
If anything requests that same chunk that is in the middle of conversion,
its mailbox queue is going to be held up, so the subsequent chunk request
will be unable to proceed.
We delay post processing of Chunk.A() 1 "pass" by re-stuffing it back into
the executor so that the mailbox ChunkQueue is now considered empty.
This successfully fixed a recurring and highly reproducible crash
for heightmaps.
If the request to shut down the server is received while we are in
a watchdog hang, immediately treat it as a crash and begin the shutdown
process. Shutdown process is now improved to also shutdown cleanly when
not using restart scripts either.
If a server is deadlocked, a server owner can send SIGHUP (or any other signal
the JVM understands to shut down as it currently does) and the watchdog
will no longer need to wait until the full timeout, allowing you to trigger
a close process and try to shut the server down gracefully, saving player and
world data.
Previously there was no way to trigger this outside of waiting for a full watchdog
timeout, which may be set to a really long time...
Additionally, fix everything to do with shutting the server down asynchronously.
Previously, nearly everything about the process was fragile and unsafe. Main might
not have actually been frozen, and might still be manipulating state.
Or, some request might ask main to do something in the shutdown but main is dead.
Or worse, other things might start closing down items such as the Console or Thread Pool
before we are fully shutdown.
This change tries to resolve all of these issues by moving everything into the stop
method and guaranteeing only one thread is stopping the server.
We then issue Thread Death to the main thread if another thread initiates the stop process.
We have to ensure Thread Death propagates correctly though to stop main completely.
This is to ensure that if main isn't truly stuck, it's not manipulating state we are trying to save.
Also check the class loader cache before locking to speed up cached hits and avoid the lock.
Wasn't gonna make a unique build just for that but can lump it in here.
Very few entities actually hard collide, so store them in their own
entity slices and provide a special getEntities-type call just for them.
This reduces entity collision checking impact (in my testing) by 25%
for crammed entities (shove 130 cows into an 8x6 area in one chunk).
Less crammed entities are likely to show significantly less benefit.
Effectively, this patch optimises crammed entity situations.
A player's previous block break location is held onto permanently, and if
an interact event is cancelled, the client sends a stop breaking block packet.
This then tries to update the client about that old location.
This old location might then be in a now-unloaded chunk, and this caused it to load.
We now also clear the reference to it once abort destroy block is run, to stop trying
to send updates about the old block anyways.
I had done a few of the operations myself, which would have prevented chunkCheck
from doing it itself, which would leave some state behind in the original chunk,
and that's not good....
If the request to shut down the server is received while we are in
a watchdog hang, immediately treat it as a crash and begin the shutdown
process. Shutdown process is now improved to also shutdown cleanly when
not using restart scripts either.
If a server is deadlocked, a server owner can send SIGHUP (or any other signal
the JVM understands to shut down as it currently does) and the watchdog
will no longer need to wait until the full timeout, allowing you to trigger
a close process and try to shut the server down gracefully, saving player and
world data.
Previously there was no way to trigger this outside of waiting for a full watchdog
timeout, which may be set to a really long time...
Leaf informed me this could cause ordering issues.
So, the risk of this occurring is lowered now anyways, but if an
entity causes a sync chunk load, it could process an unload...
We will tackle the problem better in a future commit
Also fixed another async-chunks=false issue
This will help prevent many cases of unregistering entities during entity ticking
Currently delays Chunk Unloads and Async Chunk load callbacks
Also dropped mid ticking chunk tasks during entity ticking to reduce this risk
The previous method only worked for a normal shutdown, and didn't include
when the server enters a closing state due to watchdog crashes.
This is the correct variable to detect that the server is in the middle of the shutdown process.
The streams hurt performance and allocate tons of garbage, so
replace them with the standard iterator.
Also optimise the stream.anyMatch statement to move to a bitset
where we can replace the call with a single bitwise operation.
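An illustrative sketch of the anyMatch-to-bitset idea, using a made-up Flag enum (not the actual Minecraft types): track which flags are present as bits, so the old stream().anyMatch(...) check collapses to one bitwise AND.

    final class FlagIndex {
        enum Flag { A, B, C, D }

        // Flags the old anyMatch predicate was looking for, precomputed once.
        static final long WANTED = maskOf(Flag.B, Flag.D);

        private long presentMask = 0;

        static long maskOf(Flag... flags) {
            long mask = 0;
            for (Flag flag : flags) {
                mask |= 1L << flag.ordinal();
            }
            return mask;
        }

        // Keep the mask up to date as elements are added to the collection.
        void add(Flag flag) {
            presentMask |= 1L << flag.ordinal();
        }

        // Replaces: elements.stream().anyMatch(e -> e.flag() == Flag.B || e.flag() == Flag.D)
        boolean anyWanted() {
            return (presentMask & WANTED) != 0; // single bitwise operation
        }
    }
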
This fix is for the few people who are using such low end systems that
asynchronous chunk loading hurts them rather than helping.
The previous build made paper crash if you turned off async chunks, and
this fixes that issue.
Mark chunks that are blocking main thread for world generation as urgent
Implements a general priority system so that chunks that are sorted in
the generator queues can prioritize certain chunks over others.
Urgent chunks will jump to the front of the line, ensuring that a
sync chunk load on an ungenerated chunk does not lag the server for
a long period of time if the server's generator queues are filled with
lots of chunks already.
This massively reduces the lag spikes from sync chunk gens.
This is also a precursor to my next improvement to prioritize chunks
in front of the player (Frustum Prioritization).
In most cases, this change won't benefit much. However, there
exists the possibility that your Chunk Task threads are all busy
doing super slow work such as converting chunks.
If this occurs, the main thread blocking tasks, even at highest priority,
has to wait for some thread to become available.
This change gives us a waiting thread used only for main thread blocking
tasks, as well as an increased thread priority level, so that the OS
will give priority to this thread over the other threads.
This is more about guarantees, and won't be any real performance boost
to anyone who has low or fast activity on their chunk tasks anyways.
But not all of us force upgrade our worlds, and this can be a life saver.
Also reordered some patches because multiple PRs were merged.
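A minimal sketch of that dedicated worker idea (names, priority value, and executor setup are illustrative, not Paper's actual code):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    final class UrgentChunkWorker {
        // One extra single-purpose worker, kept free for main-thread-blocking
        // chunk work only, created with an elevated thread priority so the OS
        // favors it when every other chunk worker is busy.
        static ExecutorService create() {
            return Executors.newSingleThreadExecutor(runnable -> {
                Thread thread = new Thread(runnable, "Urgent Chunk Worker");
                thread.setPriority(Thread.NORM_PRIORITY + 2); // above the regular workers
                thread.setDaemon(true);
                return thread;
            });
        }
    }
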
Forgot to flip the pending boolean back to false, causing it to copy
empty data on the next tick if nothing else triggered a load.
Haven't managed to actually reproduce the crash others got, but did
verify that the bad copy was occurring, erasing the data.
also fixed a bug with chunk load callback not executing before
another one was scheduled.
Upstream has released updates that appear to apply and compile correctly.
This update has not been tested by PaperMC and as with ANY update, please do your own testing
CraftBukkit Changes:
183139d4 SPIGOT-5665: Improve loading spawn egg NBT
dec5df26 SPIGOT-5667: Can't add recipe without (vanilla) datapack
Spigot Changes:
ae72bf43 SPIGOT-5666: Customizable End City Seed
This can cause nasty server lag if the spawn chunks are not kept loaded,
or they aren't finished loading yet, or if the world spawn radius is
larger than the keep-loaded range.
By skipping this, we avoid potential for a large spike on server start.
Credit to Spotted for the idea
A lot of the new chunk system requires constant back and forth with the main thread
to handle priority scheduling and ensuring conflicting tasks do not run at the
same time.
The issue is, these queues are only checked at either:
A) Sync Chunk Loads
B) End of Tick while sleeping
This results in generating chunks sitting and waiting for a full tick to
complete before they will even start the next unit of work.
Additionally, this also delays loading of chunks until this same timing.
We will now periodically poll the chunk task queues throughout the tick,
looking for work to do.
We do this in a fair method that considers all worlds, not just the one being
ticked, so that each world can get 1 task processed before the next pass.
We also cap the throughput of these task processes to 1 per world per 0.1ms or
200 max per tick, to ensure that a high volume of tasks does not overload the current
tick time.
In a view distance of 15, chunk loading performance was visually faster on the client.
Flying at high speed in spectator mode was able to keep up with chunk loading (as long as they are already generated)
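A hypothetical sketch of that mid-tick polling: round-robin one task per world per pass, limited by both a global per-tick cap and a minimum interval per world. The 0.1ms and 200 numbers come from the text above; the structure and names are illustrative, not Paper's code.

    import java.util.List;
    import java.util.function.BooleanSupplier;

    final class MidTickChunkTasks {
        private static final long MIN_INTERVAL_NANOS = 100_000; // 0.1 ms per world
        private static final int MAX_TASKS_PER_TICK = 200;

        private int tasksThisTick = 0;

        void startTick() {
            tasksThisTick = 0;
        }

        // Each "world" is just something that can run one pending chunk task,
        // returning true if it actually had work to do.
        void pollDuringTick(List<WorldQueue> worlds) {
            boolean didWork = true;
            while (didWork && tasksThisTick < MAX_TASKS_PER_TICK) {
                didWork = false;
                long now = System.nanoTime();
                for (WorldQueue world : worlds) {
                    if (tasksThisTick >= MAX_TASKS_PER_TICK) {
                        break;
                    }
                    if (now - world.lastTaskNanos < MIN_INTERVAL_NANOS) {
                        continue; // this world ran a task too recently
                    }
                    if (world.runOneTask.getAsBoolean()) {
                        world.lastTaskNanos = now;
                        tasksThisTick++;
                        didWork = true;
                    }
                }
            }
        }

        static final class WorldQueue {
            final BooleanSupplier runOneTask;
            long lastTaskNanos = 0;

            WorldQueue(BooleanSupplier runOneTask) {
                this.runOneTask = runOneTask;
            }
        }
    }
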
Wiz mentioned that large WorldEdit operations cause light to run on the
main thread. The queue was small, set to 5... this bumps it to 20
but makes it configurable per-world.
The main risk of increasing this is that during shutdown, some
queued light updates may be lost because Mojang did not flush the
light engine on shutdown...
The queue size only puts a cap on the max loss; it doesn't solve that problem.
Don't touch this unless you know you have a problem and are ok with the risk.
Upstream has released updates that appear to apply and compile correctly.
This update has not been tested by PaperMC and as with ANY update, please do your own testing
Bukkit Changes:
7361a62e SPIGOT-5641: Add Block.getDrops(ItemStack, Entity)
1dc91b15 Add specific notes about what is not API
2b05ef88 #484: Allow statistics to be accessed for offline players
CraftBukkit Changes:
f7d6ad53 SPIGOT-5603: Use LootContext#lootingModifier in CraftLootTable
5838285d SPIGOT-5657: BlockPlaceEvent not cancelling for tripwire hooks
f325b9be SPIGOT-5641: Add Block.getDrops(ItemStack, Entity)
e25a2272 Fix some formatting in CraftHumanEntity
498540e0 Add Merchant slot delegate
b2de47d5 SPIGOT-5621: Add missing container types for opening InventoryView
aa3a2f27 #645: Allow statistics to be accessed for offline players
2122c0b1 #649: CraftBell should implement Bell
No longer clones visible chunks, which was causing massive memory
allocation issues and is likely the source of Humongous Objects on large servers.
Instead we just synchronize, clear and rebuild, reusing the same object buffers
as before, with only 2 small objects created (FastIterator/MapEntry).
This should result in a significant memory use reduction and improved GC behavior.
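A rough sketch of the idea in generic terms (not the actual chunk map code): keep one long-lived copy of the "visible" map and, under a lock, clear and refill it in place instead of cloning a fresh map every update.

    import java.util.HashMap;
    import java.util.Map;

    final class VisibleSnapshot<K, V> {
        private final Map<K, V> visible = new HashMap<>();

        // Called whenever the updating side publishes a new state; HashMap.clear()
        // keeps the backing table, so the refill reuses existing buffers.
        synchronized void rebuildFrom(Map<K, V> updating) {
            visible.clear();
            visible.putAll(updating);
        }

        // Readers take the same lock briefly instead of receiving a cloned map.
        synchronized V get(K key) {
            return visible.get(key);
        }
    }
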
Upstream has released updates that appear to apply and compile correctly.
This update has not been tested by PaperMC and as with ANY update, please do your own testing
Bukkit Changes:
122289ff Add FaceAttachable interface to handle Grindstone facing in common with Switches
a6db750e SPIGOT-5647: ZombieVillager entity should have getVillagerType()
CraftBukkit Changes:
bbe3d58e SPIGOT-5650: Lectern.setPage(int) causes a NullPointerException
3075579f Add FaceAttachable interface to handle Grindstone facing in common with Switches
95bd4238 SPIGOT-5647: ZombieVillager entity should have getVillagerType()
4d975ac3 SPIGOT-5617: setBlockData does not work when NotPlayEvent is called by redstone current
Try to use a faster chunk lookup for collision detection, and only
fall back to the original for nearby chunks.
The collision code takes an AABB and generates a cuboid of checks rather
than a cylinder, so at high velocity this can generate a lot of chunk checks.
Where I blocked movement, I did not consider velocity buildup, which I assume
then "unleashes" if something was really trying to push that entity, and moves
it a very large distance.
Additionally, this method was completely misnamed, as movementTick
is more "doLotsOfTickThings", and blocking it ended up breaking AI too, when the whole
point of temporary wake ups was to let AI run to trigger new immunities.
Also fixed numerous behavioral rules for Immunity to improve vanilla gameplay,
such as bees that are angry or moving towards a flower or hive, any insentient
that is targeting any enemy (accidentally made it any player), and included flying
mobs such as phantoms by reducing the type check to insentient instead of Creature.
Also improved inWater immunity to consider if the mob is movable by water or not.
The entire reason the if statement exists is to only flush and print when done if the flag is true.
This avoids /save-all hurting as much as it did before, such as when triggered by backup plugins.
CraftBukkit caused a regression here by making unloading chunks not
have a ticket added, and by returning the unloaded future.
This caused entities that were killed in the same tick their chunk was unloading
to not be able to be removed from the chunk.
This then results in dead entities lingering in the Chunk.
Combine that with a buggy detail of the previous implementation of
the Dupe UUID patch, and this was the likely source of the "Ghost entities".
If something calls register twice, and the world is ticking, it could be
enqueued to be added twice, versus the non-ticking behavior of just overwriting state.
We will now simply log a warning when this happens instead of crashing the server.
This was not applied correctly, and would completely blow up chunk entity
registration if this feature was turned off....
Additionally, change how the entities are removed to be more consistent with other code.
Surface some of the logs indicating there is a problem, as we are having so many issues with
entities that we don't need to be suppressing logs like that.
Faster Entity iteration using the chunk's full entity list and array access.
Faster chunk lookups skipping the cache, as the pattern of access was not suitable
for cache usage (each request will likely blow the cache).
This reduces the cost of Entity Activation Range's initial marking.
1) Immunity no longer gives 20 tick immunity, each immunity check can
give its own tick value on how long it lasts, drastically cutting down on most to 0-1 ticks.
2) Fixed Villager Immunity to use proper 1.15 check for Breeding.
3) Fixed Water Mobs being 100% immune due to the inWater check...
4) Fixed flying mobs being 100% immune due to the !onGround check...
5) Made Insentient mobs only check hasTasks during the immunity check window, not every single tick; checking every tick made them way more active than desired.
- this puts behavior closer to in line with my original behavior in Spigot, but still does some checks to allow them temporary immunity, just not as much as before.
6) Inactive Entities would "inch" while trying to move, effectively getting nowhere. Now while an entity is inactive, it just won't even try to move.
- this saves us from the expense of Entity movement 1 out of every 20 ticks. Now they will only move while either active or having triggered a true immunity.
Upstream has released updates that appear to apply and compile correctly.
This update has not been tested by PaperMC and as with ANY update, please do your own testing
Bukkit Changes:
564ed152 #482: Add a DragonBattle API to manipulate respawn phases etc
9f2fd967 #474: Add ability to set other plugin names as provided API so others can still depend on it
CraftBukkit Changes:
fc318cc1 #642: Add a DragonBattle API to manipulate respawn phases etc
796eb15a #644: Fix ChunkMapDistance#removeAllTicketsFor not propagating ticket level updates
a6f80937 SPIGOT-5606: call BlockRedstoneEvent for fence gates
Spigot Changes:
a03b1fdb Rebuild patches
Only occurred when entries were scheduled with huge tick delays
Add two flags to debug excessive tick delays:
-Dpaper.ticklist-warn-on-excessive-delay=true (false by default)
and -Dpaper.ticklist-excessive-delay-threshold=ticks which
sets the excessive tick delay to the specified ticks (defaults to
60 * 20 ticks, aka 60 seconds)
Removing the try catch and generally reducing ops should make it
faster on its own; however, removing the try catch also makes it
easier to inline due to code size.
The previous solution could still block the network thread (while addPending is executing). This window is small, but removing it completely is better. This should probably also speed up concurrent adds, because no locking will be performed anymore.
The only possible downside is that adding elements one by one to a synchronized list might be slower (but it's done while already locked, so maybe the JVM will avoid additional locking?),