There is some vanilla level bug where this tracking state appears
to get messed up and the player doesn't exist in the chunk it's trying to untrack.
We returned early to prevent crashing, but I suspect that if there was a level being
tracked for the chunk, it got leaked due to the early return.
So we now ensure we clean up the level tracker when this state occurs.
This may help with any leaked chunk issues.
This now supports async chunk access, even though doing that is bad
and shouldn't be done anyways; since we force you back to main, it'll
now just delay the ticket add to main the same way.
Now we only add the ticket if the plugin CAUSED the chunk load, so a ticket
is no longer added if the chunk was already loaded.
Additionally, cap chunk ticket expiry at 1 second (effectively ignoring the chunk-gc config
unless the config is lower than 20 ticks).
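A hedged usage sketch of the API-facing effect (Bukkit plugin code; the plugin class and coordinates are just examples, and the exact ticket lifetime is as described above, not verified here):

    import org.bukkit.World;
    import org.bukkit.plugin.java.JavaPlugin;

    public final class ChunkLoadExample extends JavaPlugin {
        @Override
        public void onEnable() {
            World world = getServer().getWorlds().get(0);
            boolean alreadyLoaded = world.isChunkLoaded(10, 10);
            world.loadChunk(10, 10);
            // If alreadyLoaded is true, this call adds no extra ticket.
            // If the call caused the load, the ticket expires after roughly
            // 20 ticks (1 second), even with a longer chunk-gc period.
            getLogger().info("Chunk (10,10) was already loaded: " + alreadyLoaded);
        }
    }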
Fixes #3533
Obfuscate multiple chunks at a time over the server thread pool.
This will speed up chunk processing when anti-xray is enabled.
Co-authored-by: Aikar <aikar@aikar.co>
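A minimal conceptual sketch of the parallel idea (plain Java, not Paper's actual anti-xray internals; obfuscate is a placeholder for the real packet transform):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    final class ParallelObfuscationSketch {
        // Shared worker pool standing in for the server thread pool.
        private final ExecutorService workers =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

        // Submit each chunk's packet data as its own task so several chunks
        // are obfuscated at the same time instead of one after another.
        List<Future<byte[]>> obfuscateAll(List<byte[]> chunkPackets) {
            List<Future<byte[]>> futures = new ArrayList<>();
            for (byte[] data : chunkPackets) {
                futures.add(workers.submit(() -> obfuscate(data)));
            }
            return futures;
        }

        // Placeholder: the real code rewrites hidden blocks in the serialized sections.
        private byte[] obfuscate(byte[] data) {
            return data.clone();
        }
    }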
As in previous versions, plugins loading chunks kept them loaded until
they were garbage collected, to avoid constant spamming of chunk loads.
This adds tickets in a few more places so that those chunks can be unloaded.
Additionally, this drops their ticket level to BORDER so they won't tick
and will just sit inactive instead.
Using .loadChunk to keep a chunk ticking was a horrible idea for upstream
when we already have TWO methods in the API that can do that.
Not adding it to .getType() though, to keep behavior consistent with vanilla.
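A hedged usage sketch of the two API routes plugins should be using instead (assuming the two methods meant here are plugin chunk tickets and the force-loaded flag):

    import org.bukkit.World;
    import org.bukkit.plugin.java.JavaPlugin;

    public final class KeepChunkLoadedExample extends JavaPlugin {
        @Override
        public void onEnable() {
            World world = getServer().getWorlds().get(0);
            // Route 1: a plugin chunk ticket, tied to this plugin instance.
            world.getChunkAt(0, 0).addPluginChunkTicket(this);
            // Route 2: the persistent force-loaded flag.
            world.setChunkForceLoaded(4, 4, true);
        }

        @Override
        public void onDisable() {
            World world = getServer().getWorlds().get(0);
            // Release both holds so the chunks can unload normally.
            world.getChunkAt(0, 0).removePluginChunkTicket(this);
            world.setChunkForceLoaded(4, 4, false);
        }
    }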
In previous MC versions, we had a rather simple internal scheduler
for delayed tasks that would just keep pushing the task back until the desired
tick was reached.
The method it called to schedule the task changed behavior in 1.14, and now
this scheduler works nowhere near how it was supposed to.
This was causing long-delayed tasks to eat up CPU (in Oversleep, for example).
Rewrite this to just use the CraftScheduler for scheduling delayed tasks.
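A small sketch of the scheduling pattern this moves to (Bukkit scheduler API; the plugin reference and delay are placeholders):

    import org.bukkit.Bukkit;
    import org.bukkit.plugin.Plugin;

    final class DelayedTaskSketch {
        // Instead of re-queueing the task every tick until the target tick is
        // reached, hand it to the scheduler once with the full delay.
        static void runAfterTicks(Plugin plugin, long delayTicks, Runnable task) {
            Bukkit.getScheduler().runTaskLater(plugin, task, delayTicks);
        }
    }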
Once this was fixed, it became quite clear that the code delaying ticket
additions for chunks based on distance was not right either, as it had been
tested against the previous broken logic.
So the ticket delay process has been vastly revamped to be even smarter.
Chunks behind the player can load more slowly than the chunks in front of the player.
We can also delay adding a ticket until one of the chunk's neighbors has loaded, as
this gives us a smoother spiral out for the chunks (minus frustum intent).
Additionally, on frustum: the previous commit inadvertently broke frustum loading while
trying to fix an issue whose real fix lay elsewhere, so chunk priority is restored and
it works again.
When players are moving in the world, doing things such as building or exploring,
they commonly go back and forth within a small area. This causes a ton of chunk load
and unload activity on the edge chunks of their view distance.
A simple back-and-forth movement of 6 blocks could thrash a chunk through a
load and unload cycle over and over again.
This is very wasteful. This system introduces a delay of inactivity on a chunk
before it actually unloads, which is handled by the ticket expiry process.
This allows servers with smaller worlds, where players do less long-distance exploring,
to stop wasting CPU cycles on saving, unloading and reloading chunks repeatedly.
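A minimal conceptual sketch of the grace-period idea (plain Java with hypothetical names; the real system rides on the ticket expiry machinery, and the 10-second value is just an example):

    import java.util.HashMap;
    import java.util.Iterator;
    import java.util.Map;
    import java.util.function.LongConsumer;

    final class DelayedUnloadSketch {
        private static final long GRACE_TICKS = 20L * 10; // example: 10 seconds
        private final Map<Long, Long> expiryByChunkKey = new HashMap<>();

        void onChunkLeftViewDistance(long chunkKey, long currentTick) {
            // Instead of unloading now, record when the hold expires.
            expiryByChunkKey.put(chunkKey, currentTick + GRACE_TICKS);
        }

        void onChunkEnteredViewDistance(long chunkKey) {
            // Re-entering cancels the pending unload, avoiding a reload cycle.
            expiryByChunkKey.remove(chunkKey);
        }

        void tick(long currentTick, LongConsumer unloadChunk) {
            for (Iterator<Map.Entry<Long, Long>> it = expiryByChunkKey.entrySet().iterator(); it.hasNext();) {
                Map.Entry<Long, Long> entry = it.next();
                if (entry.getValue() <= currentTick) {
                    it.remove();
                    unloadChunk.accept(entry.getKey());
                }
            }
        }
    }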
With further knowledge of the system, it is now known that region files
are closing properly, and this didn't help native memory use anyways.
This patch also caused issues compiling on a newer JDK while still being able to
release the jar to Java 8 users.
Priority tickets being added at level 33 was hurting sync EMPTY and lesser requests;
this was likely the source of recent treasure map issues.
It further hurt nether portal travel too. Lots of oddness around.
This also avoids scheduling a level change on ticket removal when the level
is unchanged, and ditches CB's horrible change that stopped you from
accessing an unloading chunk, which should be valid in order to cancel the unload.
I'm going to make a class, and in that class I'm going to
make a method. And in that method, I'm going to make a local class.
And then in that local class, I'm going to make another inner class.
I heard you like complex class trees.
Massive update to light to improve performance and chunk loading/generation.
1) Massive bit packing/unpacking optimizations and inlining.
A lot of the cost is in the constant packing and unpacking of bits.
We now inline most bit operations and re-use base x/y/z bits in many places (see the packing sketch after this list).
This lets the CPU do all the math at once instead of having to jump
in and out of function calls.
This much logic is also likely over the JVM's inline limit for the JIT.
2) Applied a few of JellySquid's Phosphor mod optimizations, such as:
- ensuring we don't notify a neighbor chunk when it doesn't need to be notified
- reducing hasLight checks when initializing light, and probably some more; they are tagged JellySquid where Phosphor influence was used.
3) Optimize hot-path accesses for getting the updating chunk to have less branching.
4) Optimize getBlock accesses to have less branching and less unpacking.
5) Add a separate urgent bucket for chunk light tasks. These tasks will always cut in line over non-blocking light tasks.
6) Retain chunk priority while light tasks are enqueued. If a task comes in at high priority but the queue is already full
of lower-priority tasks, it used to simply be added to the end; now it can cut in line to the front.
This applies to both urgent and non-urgent tasks.
7) Buffer non-urgent tasks even if queueUpdate is called multiple times, to improve efficiency.
8) Fix an NPE risk that crashes the server when getting nibble data.
Fixes #3489 Fixes #3363
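Referring back to point 1 above, an illustrative sketch of the bit packing idea (the field widths and layout here are illustrative only, not Mojang's exact encoding):

    final class PackedPosSketch {
        private static final int Y_BITS = 12;
        private static final int XZ_BITS = 26;
        private static final long XZ_MASK = (1L << XZ_BITS) - 1;
        private static final long Y_MASK = (1L << Y_BITS) - 1;

        // x occupies the top 26 bits, z the middle 26, y the low 12.
        // Packing a position into one long lets hot code pass and compare
        // positions without allocating, and inlining the unpack math avoids
        // bouncing in and out of small helper calls.
        static long pack(int x, int y, int z) {
            return (((long) x & XZ_MASK) << (Y_BITS + XZ_BITS))
                    | (((long) z & XZ_MASK) << Y_BITS)
                    | ((long) y & Y_MASK);
        }

        static int unpackX(long packed) {
            return (int) (packed >> (Y_BITS + XZ_BITS)); // arithmetic shift keeps the sign
        }

        static int unpackZ(long packed) {
            return (int) (packed << (64 - Y_BITS - XZ_BITS) >> (64 - XZ_BITS));
        }

        static int unpackY(long packed) {
            return (int) (packed << (64 - Y_BITS) >> (64 - Y_BITS));
        }
    }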
Previously maps would load all chunks in a certain radius depending on
their scale when trying to update their content. This would result in
main thread chunk loads when they weren't really necessary, especially
on low view distances or "slow" async chunk loads after teleports or
other prioritisation.
This changes it to only try to render already loaded chunks based on
the assumption that the chunks around the player will get loaded
eventually anyways.
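The API-level version of that check looks roughly like this (Bukkit API; the map renderer hook itself is internal and not shown):

    import org.bukkit.World;

    final class MapRenderSketch {
        // Only consider chunks that are already in memory; skipped chunks are
        // simply picked up by a later map update once they load on their own.
        static boolean shouldRender(World world, int chunkX, int chunkZ) {
            return world.isChunkLoaded(chunkX, chunkZ);
        }
    }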
In rare cases, this class could potentially be loaded from
the chunk threads, causing it to initialize async and throw errors.
This would then break the server and chunk saving.
So ensure it's loaded at server start to avoid this.
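A minimal sketch of that kind of eager class initialization at startup (the class name below is a stand-in, not the actual class from this patch):

    final class PreloadSketch {
        static void preloadAtStartup() {
            try {
                // Class.forName with initialize=true runs static init now, on
                // the main thread, instead of lazily on a chunk thread later.
                Class.forName("com.example.SomeChunkSerializationClass", true,
                        PreloadSketch.class.getClassLoader());
            } catch (ClassNotFoundException e) {
                throw new IllegalStateException("Failed to preload class", e);
            }
        }
    }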
Still needs front-end changes before you can see it, though.
1) Adds Game Rules per world
2) Adds View distances per world
3) Removes extra garbage on lambda task names
4) Adds more memory information such as native load
5) Adds load average for non crap operating systems.
6) Fixes online mode showing false when privacy=true
7) Adds Data packs loaded
Switch to a standard fixed-size ThreadPoolExecutor, as we don't use the
advanced capabilities of a ForkJoinPool.
ForkJoinPool does not allow single-thread pools, and we'd really rather not use
2 different executor types based on core count.
Also, change thread priorities so that the main thread is prioritized by
the OS above the other threads. It may not help too much,
but it at least gives the OS the information to know main is more important.
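A rough sketch of that executor setup (plain Java; the pool size, thread names and exact priorities are illustrative):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicInteger;

    final class WorkerPoolSketch {
        static ExecutorService createWorkerPool(int threads) {
            AtomicInteger id = new AtomicInteger();
            return new ThreadPoolExecutor(threads, threads, 0L, TimeUnit.MILLISECONDS,
                    new LinkedBlockingQueue<>(), runnable -> {
                Thread thread = new Thread(runnable, "Worker-" + id.incrementAndGet());
                // Keep workers below the main thread's priority so the OS favors main.
                thread.setPriority(Thread.NORM_PRIORITY - 1);
                thread.setDaemon(true);
                return thread;
            });
        }

        static void boostMainThread() {
            // Called from the main thread during startup.
            Thread.currentThread().setPriority(Thread.NORM_PRIORITY + 2);
        }
    }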
Locks the dimension manager to the first world it's used with.
WE creates a temp world, and the world ref on that manager
was getting changed to the temp world.
This would also have caused a memory leak of that temp world.
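A tiny sketch of the locking idea (hypothetical field and method names; the real class is a Minecraft internal, so Object stands in for the world type):

    final class DimensionManagerSketch {
        private Object world; // the Minecraft world type in the real code

        // Only the first world to claim this manager is kept; later attempts
        // (e.g. a temporary world created by a plugin) are ignored.
        synchronized void bindWorld(Object candidateWorld) {
            if (this.world == null) {
                this.world = candidateWorld;
            }
        }
    }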
Upstream has released updates that appear to apply and compile correctly.
This update has not been tested by PaperMC and, as with ANY update, please do your own testing.
My recent work on serialization is now in CraftBukkit, so I was able to drop the patch, and Paper
is now consistent with upstream.
Bukkit Changes:
e2699636 Move API notes to more obvious location
CraftBukkit Changes:
1b2830a3 SPIGOT-4441: Fix serializing Components to and from Legacy
This should now complete legacy serialization so that the
output content never changes.
This removes the concept of "Default Color" from the method, as
that entire concept was flawed and broke the intent of chat components.
Going to actually PR this patch to Spigot soon.
This now puts us back at a point where any data saved before Spigot
broke things will still save back exactly the same way as before,
but new component -> legacy conversion will now be fixed to not insert
undesirable default colors (such as black) into the legacy string, and will
instead use the proper reset code.
This means you can now safely take the text from a book and
put it in chat or an entity display name without worrying about black
color codes or other undesired color codes leaking into the new
context where that color doesn't make sense.
This brings chat component serialization to 100% accuracy, so
that any text input in the legacy format, converted to components and
then back to legacy, will produce identical results.
If the user explicitly sets a color as a prefix to a string, it is retained,
even if that color matches the default.
This also helps improve handling of the empty-string wrappers Bukkit creates.
A unit test has been added to verify this behavior.
This patch fixes the serialization of display names, item lores and
other things which use strings with color codes. The old implementation
deleted the color code at the beginning of the resulting string if it
matched the default color passed to the conversion function. This
resulted in items with a black display name losing the black color
code at the beginning of the text when the item was serialized (e.g.
saving an ItemStack in a Yaml config).
Spigot has now made the issue worse and expanded the scope to more places.
1) Improve frustum loading to look more at the near chunks and frontal chunks only, instead of one large single lookup.
2) Delay adding level 33 tickets based on view distance and lower their task priority. This will roll the spiral out more slowly.
3) Chunks behind the player get an additional delay on loading, favoring chunks in front of the player (see the sketch after this list).
This has the benefit that, when traveling fast, some of those chunks will be cancelled / not loaded.
This should reduce pressure on chunk loading, as well as reduce loading/unloading of unnecessary chunks while moving.
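A small sketch of one way to decide "in front of" versus "behind" for point 3 (a simple dot-product test; the real priority code weighs more factors than this):

    final class ChunkDirectionSketch {
        // Returns true when the chunk lies roughly in front of the player,
        // based on the angle between the look direction and the chunk offset.
        static boolean isInFront(double playerX, double playerZ,
                                 double lookX, double lookZ,
                                 int chunkX, int chunkZ) {
            double toChunkX = (chunkX * 16 + 8) - playerX; // chunk center
            double toChunkZ = (chunkZ * 16 + 8) - playerZ;
            // Positive dot product = less than 90 degrees off the look direction.
            return lookX * toChunkX + lookZ * toChunkZ > 0;
        }
    }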
When a chunk that has already been generated is loaded from disk,
the server has to promote the chunk through the system to reach
its current desired status level.
This results in every single status transition going from the main thread
to the world gen threads, only to discover it has no work it actually
needs to do... and then it returns back to main.
This back and forth costs a lot of time and can really delay chunk loads
when the server is at high TPS, due to there being a lot of time
between chunk load steps, and it also hogs the chunk threads, keeping them
from doing actual generation and light work.
Additionally, the whole task system uses a lot of CPU on the server threads anyways.
So by optimizing status transitions for statuses that are already complete,
we can run them to the desired level while on the main thread (where it has
to happen anyways) instead of ever jumping to a world gen thread.
This will improve chunk loading efficiency, reducing it down to the following
scenario / path:
1) MAIN: Chunk Requested, Load Request sent to ChunkTaskManager / IO Queue
2) IO: Once position in queue comes, submit read IO data and schedule to chunk task thread
3) CHUNK: Once IO is loaded and position in queue comes, deserialize the chunk data, process conversions, submit to main queue
4) MAIN: next Chunk Task process (Mid Tick or End Of Tick), load chunk data into world (POI, main thread tasks)
5) MAIN: process status transitions all the way to LIGHT, light schedules Threaded task
6) SERVER: Light tasks register light enablement for chunk and any lighting needing to be done
7) MAIN: Task returns to main, finish processing to FULL/TICKING status
Previously it would have hopped to SERVER around 12+ extra times.
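A conceptual sketch of the fast path described above (hypothetical types and names; the real logic lives in the chunk status scheduler and uses real status objects rather than ints):

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.Executor;

    final class StatusFastPathSketch {
        interface ChunkHolder {
            int highestCompletedStatus();
            void applyStatusTransition(int status); // cheap when already complete
        }

        // If the chunk already reached the target status on disk, walk the
        // transitions right here on main instead of bouncing to a worker thread.
        static CompletableFuture<Void> promote(ChunkHolder chunk, int targetStatus,
                                               Executor worldGenExecutor) {
            if (chunk.highestCompletedStatus() >= targetStatus) {
                for (int status = 0; status <= targetStatus; status++) {
                    chunk.applyStatusTransition(status);
                }
                return CompletableFuture.completedFuture(null);
            }
            // Otherwise real generation work is needed, so go to the gen threads.
            return CompletableFuture.runAsync(
                    () -> chunk.applyStatusTransition(targetStatus), worldGenExecutor);
        }
    }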
Mojang has flaws in their logic around chunks being concurrently
written to, so we constantly see crashes from multiple threads writing.
Additionally, Java has optimized synchronization so well that it's
often faster than trying to manage read/write locks for low-contention
situations.
And this is an extremely low contention situation.
Fixes #3293 Fixes #2493
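A minimal sketch of the locking style this moves to (plain Java; writeToRegionFile is a placeholder for the real region write):

    final class RegionWriteSketch {
        private final Object writeLock = new Object();

        // Plain synchronized is cheap under low contention and guarantees only
        // one thread touches the region data at a time.
        void save(long chunkKey, byte[] data) {
            synchronized (writeLock) {
                writeToRegionFile(chunkKey, data);
            }
        }

        private void writeToRegionFile(long chunkKey, byte[] data) {
            // Placeholder for the actual region file write.
        }
    }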
I'm hoping the other fix in 324 for the level map getting corrupted
fixes the real issue and this isn't needed anymore, but I suspect it is needed.
Will wait until more study can be done though.
Fixes #3469
We must check the level tracker as ticket levels add "virtual"
tickets to neighbors.
Also added neighbor tracking during generation to be extra safe.
Fixes #3465 Fixes #3451 Fixes #3459
Mojang implemented a cache like chunks have, but this cache
is accessed by multiple threads and is totally not safe.
So just remove it
Fixes #3466
Also missed a pooled nibble release, so slid that in there too.
This change reimplements the entire BehaviorFindPosition method to
get rid of all of the streams and implement the logic in a more sane way.
We keep vanilla behavior 100% the same with this change, just written more
optimally, as we can abort iterating POIs as soon as we find a match...
One slight change is that Minecraft adds a random delay before a POI is
attempted again. I've increased that delay based on the distance
to the POI, so farther POIs will not be attempted as often.
Additionally, we spiral out, so we favor local POIs before we ever favor farther ones.
We also try to pathfind to 1 POI at a time instead of collecting multiple POIs and then tossing them
all to the pathfinder, so that once we get a match we can return before even looking at other
POIs.
This benefits us in that, ideally, a villager will consistently find the near POIs and
not even try to pathfind to the farther ones. Trying to pathfind to distant POIs is
what causes significant lag.
Other improvements here include no longer spamming the POI manager with empty nullables.
Vanilla used them to represent whether POI data still needed to be loaded off disk or not.
Well, we load POI data async on chunk load, so we already have it, and we surely never
want to load POI data sync for unloaded chunks either!
So this massively reduces the object count in the POI hashmaps, resulting in fewer hash collisions
and also less memory use.
Additionally, unemployed villagers were using significant time due to a major inefficiency in
the code: data that is static was being rebuilt on every single invocation for every POI type...
So we cache that and only rebuild it if professions change, which should be never unless
a plugin manipulates and adds custom professions, which is handled by rebuilding.
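A self-contained sketch of the spiral-out, first-match search pattern described above (plain Java on a grid; the real code operates on POI records and the pathfinder, not raw coordinates):

    import java.util.function.BiPredicate;

    final class SpiralSearchSketch {
        // Walk outward ring by ring from (centerX, centerZ) and stop at the
        // first position the predicate accepts, so nearby matches are found
        // without ever scanning the full radius.
        static long findNearest(int centerX, int centerZ, int maxRadius,
                                BiPredicate<Integer, Integer> isMatch) {
            if (isMatch.test(centerX, centerZ)) {
                return pack(centerX, centerZ);
            }
            for (int radius = 1; radius <= maxRadius; radius++) {
                for (int dx = -radius; dx <= radius; dx++) {
                    for (int dz = -radius; dz <= radius; dz++) {
                        if (Math.max(Math.abs(dx), Math.abs(dz)) != radius) {
                            continue; // only the outer ring of this radius
                        }
                        if (isMatch.test(centerX + dx, centerZ + dz)) {
                            return pack(centerX + dx, centerZ + dz);
                        }
                    }
                }
            }
            return Long.MIN_VALUE; // nothing found within maxRadius
        }

        private static long pack(int x, int z) {
            return ((long) x << 32) | (z & 0xFFFFFFFFL);
        }
    }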
Some plugins are doing really, really bad things to worlds, breaking the
ability to send sounds to some users.
So we create another reference to the player chunk map that plugins won't be breaking, and
print a stack trace at world creation if we ever see an unexpected world state, to identify
who is doing it!
If we encounter this illegal state, we fall back to the old method of sending sounds, so
sending sounds will still work, just less efficiently.
Spigot made structure start not load chunks, but forgot to null check
the result...
This likely never blew up before due to the chunk leak issue, but now
that leaky chunks are cleaned up, it was identified.
While the last attempt was mostly there, it still had some slight risk of unloading
before the chunk was fully finished being used.
So just going to bump the delay to 3 minutes to be safe. Better than
forever, at least.
Was really hoping we could unload them as soon as they were done, to avoid
any memory prematurely promoting to the old generation, but I guess we can't.
A chunk was loaded but not yet finished being used, and it was unloaded too early.
This caused it to be reloaded again or caused crashes.
Now, when a chunk pops out of the unload queue, we also check that it
no longer has a ticket.
Due to some complexity in Mojang's complicated chain of juggling
whether or not a chunk should be unloaded when the last ticket is
removed, many chunks are remaining around in the cache.
These chunks are never being targeted for unload because they are
vastly out of view distance range and have no reason to be looked at.
This is a huge issue for performance because we have to iterate these
chunks EVERY TICK... This is what's been leading to high SELF time in
Ticking Chunks timings/profiler results.
We will now detect these chunks in that iteration, and automatically
add them to the unload queue when a chunk is found without any tickets.
Spigot inserted their Slack Activity Accountant in the wrong location,
resulting in a chunk being removed from the unload queue and inserted into
the unload map, but the function to finish the removal never being called...
This caused the chunk to become stuck in the unload map if this was ever hit, because
the unload map was meant to be a TEMPORARY location while the chunk was saving.
Fix this by aborting iteration AFTER the current chunk has finished processing.
Also, improve how aggressively we unload chunks, targeting 10% per tick instead.
These saves are asynchronous, so there should be less of a hit here.
This is for 2 reasons:
1) Ensuring our log4j is mostly loaded at OUR version.
I've seen stack traces with line numbers that do not match our version. This means that some
plugin has shaded in log4j and their loaded version is mixing with ours....
So by at least trying to load a bunch of log4j classes before we load plugins, we can be
more sure mixed versions are not loading.
2) If the jar file is replaced while the server is running, you get class-not-found errors galore.
This will preload a bunch of classes commonly seen to error during shutdown because of this.
The goal here is to help the server shut down as gracefully as possible. Some plugins will
still blow up here if they access a class that hasn't been loaded yet, but the goal is to at least
stop freezing the shutdown process the way JLine and Log4j errors do, requiring an external kill.
Ideally you should not replace jars while the server is running, but it is something that happens in
development for testing.
Updated the test server to do a copy, though, to avoid this happening during Paper development.