waiting to be processed at the second stage (after initial UDP processing).
If this number consistently increases, there is a problem: the simulator is receiving more requests than it can distribute to other parts of the code.
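
Such a stat can be exposed by sampling the inbound queue's length on demand. A minimal sketch, assuming a hypothetical m_packetQueue field holding packets awaiting second-stage processing (names are illustrative, not OpenSim's actual fields):

```csharp
using System.Collections.Concurrent;

class InboundStats
{
    // Hypothetical queue of packets awaiting second-stage processing.
    private readonly ConcurrentQueue<byte[]> m_packetQueue =
        new ConcurrentQueue<byte[]>();

    // Sampled on demand; a steadily growing value means packets are
    // arriving faster than the second stage can drain them.
    public int IncomingPacketsWaiting
    {
        get { return m_packetQueue.Count; }
    }
}
```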
|
packets sent by a region per second

taking on the initial processing of a UDP packet.
If we're not receiving packets with multiple threads (m_asyncPacketHandling), then this is critical, since it limits the number of incoming UDP requests that the region can handle and affects packet loss.
If m_asyncPacketHandling is on, this is less critical, though a long processing time increases the scope for threads to race.
This is an experimental stat and may be changed.
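
One way to collect such a stat is to time the initial handling with a Stopwatch and keep a smoothed average. A sketch under that assumption; the handler and field names are hypothetical:

```csharp
using System.Diagnostics;

class ReceiveTimer
{
    // Smoothed average of time spent in initial packet handling (ms).
    private double m_avgProcessingMs;

    public void TimedReceive(byte[] buffer)
    {
        Stopwatch sw = Stopwatch.StartNew();
        HandleInitialProcessing(buffer);   // the work being measured
        sw.Stop();

        // Exponentially weighted moving average to damp jitter.
        m_avgProcessingMs = 0.9 * m_avgProcessingMs
                          + 0.1 * sw.Elapsed.TotalMilliseconds;
    }

    private void HandleInitialProcessing(byte[] buffer)
    {
        // decode, ack, enqueue for the second stage
    }
}
```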
|
AgentUpdate packet. This fixes the problem of vehicles not moving forward after the first up-arrow.
Also includes code to fix a potential exception when using different IClientAPIs.

observations about AgentUpdates.

significant much earlier in UDP processing (i.e. before we pointlessly place such packets on internal queues, etc.).
Appears to have some impact on CPU but needs testing.

AgentUpdate in packets to be discarded at a very early stage.
Enabling this stops anybody from moving on a sim, though all other updates should be unaffected.
Appears to make some CPU difference in very basic testing with a static standing avatar (though not all that much).
Need to see the results with much higher avatar numbers.
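
In outline, the discard happens at the point where the packet type first becomes known, before any queueing. A hedged sketch; the flag and types are stand-ins, not OpenSim's actual code:

```csharp
enum PacketType { AgentUpdate, ChatFromViewer, Other }

class EarlyFilter
{
    // Debug switch corresponding to the option described above.
    public bool DiscardAgentUpdates;

    // True when the packet should be dropped before it touches any
    // internal queue or handler thread.
    public bool ShouldDiscard(PacketType type)
    {
        return DiscardAgentUpdates && type == PacketType.AgentUpdate;
    }
}
```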
|
stop out" from actually doing anything

always performing these on a separately fired thread.
This appears to improve CPU usage, since launching a new thread is more expensive than performing a small amount of inline logic.
However, this needs testing at scale.
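
The trade-off reads roughly like the following sketch: handle known-cheap packets inline on the receive path and only dispatch heavier ones to the thread pool. All names here are illustrative:

```csharp
using System.Threading;

class PacketDispatcher
{
    public void Dispatch(byte[] packet, bool isCheap)
    {
        if (isCheap)
            Handle(packet);   // inline: avoids thread start/dispatch cost
        else
            ThreadPool.QueueUserWorkItem(_ => Handle(packet));
    }

    private void Handle(byte[] packet)
    {
        // decode and process
    }
}
```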
|
a continuous loop with sleeps.
Does appear to have a CPU impact, but may need further tweaking.
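
The pattern described is a dedicated worker loop that drains outgoing work and sleeps briefly when idle, rather than waking on timers. A minimal sketch; the interval and names are assumptions:

```csharp
using System.Threading;

class OutgoingLoop
{
    private volatile bool m_running = true;

    public void Run()
    {
        while (m_running)
        {
            bool didWork = DrainQueueOnce();
            if (!didWork)
                Thread.Sleep(20);   // back off when idle to avoid spinning the CPU
        }
    }

    public void Stop() { m_running = false; }

    private bool DrainQueueOnce()
    {
        // send any queued packets; return true if something was sent
        return false;
    }
}
```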
|
Revert "Trying to hunt the CPU spikes recently experienced."
This reverts commit ac73e702935dd4607c13aaec3095940fba7932ca.

Revert "Comment out old inbound UDP throttling hack. This would cause the UDP"
This reverts commit 38e6da5522a53c7f65eac64ae7b0af929afb1ae6.

TriggerOnMakeRootAgent to the end of CompleteMovement.
Justin, if you read this, there's a long story here. Some time ago you placed SendInitialDataToMe at the very beginning of client creation (in LLUDPServer). That is problematic, as we discovered relatively recently: on TPs, as soon as the client starts getting data from child agents, it starts requesting resources back *from the simulator where its root agent is*. We found this to be the problem behind meshes missing on HG TPs (because the viewer was requesting the meshes of the receiving sim from the departing grid). But this affects much more than meshes and HG TPs. It may also explain cloud avatars after a local TP: baked textures are only stored in the simulator, so if a child agent receives the UUID of a baked texture in the destination sim and requests that texture from the departing sim where the root agent is, it will fail to get it.
Bottom line: we need to delay sending the new simulator's data to the viewer until we are absolutely sure the viewer knows that its main agent is in a new sim. Hence, moving it to CompleteMovement.
Now I am trying to tune the initial rez delay that we all experience in the CC. I think that when I fixed the issue described above, I may have moved SendInitialDataToMe to much later than it should be, so now I'm moving it to earlier in CompleteMovement.

reception thread to sleep for 30ms if the number of available user worker threads got low.
It doesn't look like any of the UDP packet types are marked async, so this check is (1) unnecessary and (2) really crazy, since it stalls the reception thread under heavy load without any indication.
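
A hedged reconstruction of the kind of check being removed; the threshold is illustrative, though ThreadPool.GetAvailableThreads is the real BCL call:

```csharp
using System.Threading;

class ReceptionLoop
{
    // Called from the UDP reception loop on every iteration.
    private void ThrottleIfPoolBusy()
    {
        int workerThreads, iocpThreads;
        ThreadPool.GetAvailableThreads(out workerThreads, out iocpThreads);

        // Under heavy load this silently stalls ALL inbound UDP for
        // 30ms per iteration, which is the behaviour being removed.
        if (workerThreads < 15)
            Thread.Sleep(30);
    }
}
```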
|
to be performed immediately from client start

"debug lludp" options
Also moves the implementing code into LLUDPServer.cs, along with other debug commands from OpenSim.cs.
Gets all "debug lludp" commands to activate only for the set scene, if it is not root.

that are coming via TP (root agents)

departing sims for a little while. This was also true for local TPs. But for local TPs the assets are on the same server, so it doesn't matter. For HGTPs, it matters. This potential fix moves sending the initial data to later, after the client has completed the movement into the region. Fingers crossed that it doesn't mess other things up!"
This reverts commit f32a21d96707f87ecbdaf42c0059f8494a119d31.

for a little while. This was also true for local TPs. But for local TPs the assets are on the same server, so it doesn't matter. For HGTPs, it matters. This potential fix moves sending the initial data to later, after the client has completed the movement into the region. Fingers crossed that it doesn't mess other things up!

Stop().
This was an undocumented interface which I think was for long-defunct region load balancing experiments.
Also adds method doc for some IClientNetworkServer methods.

Clean up some parameter code in Statistics.Binary.

delta over time.
The chief motivation for this is to be able to tell whether there's any impact on incoming packet processing from enabling extra packet pooling.
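
A delta-over-time stat can be derived from a monotonically increasing counter by sampling it periodically. A sketch of that approach, with names assumed for illustration:

```csharp
using System;

class DeltaStat
{
    private long m_lastCount;
    private DateTime m_lastSample = DateTime.UtcNow;

    // Call periodically with the cumulative packets-processed count to
    // get the rate (packets/sec) since the previous sample.
    public double Sample(long currentCount)
    {
        DateTime now = DateTime.UtcNow;
        double seconds = (now - m_lastSample).TotalSeconds;
        double rate = seconds > 0 ? (currentCount - m_lastCount) / seconds : 0;

        m_lastCount = currentCount;
        m_lastSample = now;
        return rate;
    }
}
```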
|
lifted up into LLUDPServer and be distinguished by scene name

Also puts some packet processing counts in a container named after the scene, so that stats can be collected from more than one scene.

running via the "debug lludp pool <on|off>" console command. For debug purposes.
This does not currently apply to the higher-level LLUDP packet pool.

they are enabled. Add count stats for the existing LLUDP pool.
This introduces a pull stat type in addition to the push stat type.
A pull stat takes a method on construction which knows how to update the stat on request.
In this way, special interfaces for pull stat collection are not necessary.
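
The pull-stat idea can be sketched as a stat handed a sampling delegate at construction, so the collector polls it instead of the measured code pushing updates. A sketch of the pattern, not the actual OpenSim stats classes:

```csharp
using System;

class PullStat
{
    private readonly Func<double> m_sampler;

    public string Name { get; private set; }

    public PullStat(string name, Func<double> sampler)
    {
        Name = name;
        m_sampler = sampler;
    }

    // The value is computed on demand, so the measured object needs no
    // special stat-collection interface.
    public double Value
    {
        get { return m_sampler(); }
    }
}

// Usage (illustrative): new PullStat("UDPBuffersPooled", () => pool.Count);
```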
|
one we pool at the moment, rather than attempting to return all incoming packets.

churn

UDP data.
Even when an avatar is standing still, it sends a constant stream of AgentUpdate packets, and new UDPPacketBuffer objects are created to handle them.
This option pools those objects, which reduces memory churn.
Currently off by default. Works, but the scope can be expanded.
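
A minimal object-pool sketch of the idea; OpenSim's actual pool classes differ, and the names here are illustrative:

```csharp
using System.Collections.Generic;

class SimplePool<T> where T : new()
{
    private readonly Stack<T> m_items = new Stack<T>();

    // Reuse a pooled instance when available instead of allocating.
    public T Get()
    {
        lock (m_items)
            return m_items.Count > 0 ? m_items.Pop() : new T();
    }

    // Return the instance for reuse once the packet has been handled.
    public void Return(T item)
    {
        lock (m_items)
            m_items.Push(item);
    }
}
```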
|
console for debug purposes.
This is controlled via the "debug lludp start <in|out|all>" and "debug lludp stop <in|out|all>" region console commands.
The command "debug lludp status" shows the current status.

viewers.

These were neither being returned nor, in many places, reused.
Getting packets from a pool rather than deallocating and reallocating reduces memory churn, which in turn reduces garbage collection time and frequency.

prevent an inactive connection from being left behind if the user closes the viewer whilst the connection is being established.
This should remove the need to run the console command "kick user --force" when these connections are left around.

in Scene.
This is to resolve the previous build break.
This unnecessarily but harmlessly reads and sets the parameter multiple times; Scene was doing the same thing.

This better reflects the long-term purpose of that project and matches the Monitoring modules.

simultaneously (e.g. ack timeout and an attempt to reconnect)

This can always be retrieved via the LLUDPClient, and this is already done in various places.

to reflect what it actually is

since two commits ago (b099f26)

rather than using the IsLoggingOut flag.
IsActive is more appropriate, since an unack timeout is not due to a voluntary logout.
This is in line with operations such as manual kick that do not set the IsLoggingOut flag.
It's also slightly better race-wise, since it reduces the chance of this operation clashing with another reason for client deactivation (e.g. a manual kick).
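
One common way to make deactivation robust against two close paths racing (e.g. unack timeout versus manual kick) is to flip the active flag atomically, so exactly one path proceeds. This is a sketch of that technique, not the actual LLClientView code:

```csharp
using System.Threading;

class ClientConnection
{
    private int m_isActive = 1;   // 1 = active, 0 = deactivated

    public bool IsActive
    {
        get { return m_isActive == 1; }
    }

    // Returns true for exactly one caller; a concurrent second attempt
    // (e.g. a manual kick racing an unack timeout) becomes a no-op.
    public bool TryDeactivate()
    {
        return Interlocked.Exchange(ref m_isActive, 0) == 1;
    }
}
```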
|
client a kick message with that reason, in case it is somehow still listening.

check the IsLoggingOut flag instead.
This is slightly better thread-race-wise.

than synchronously on the outgoing packet loop.
This is the same async behaviour as normal logouts.
This is necessary because the event queue will sleep the thread for 5 seconds on an ack timeout logout, as the client isn't around to pick up the final event queue messages.
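
In sketch form, the close is fired on a background thread so the outgoing packet loop never blocks on the event queue's wait. The types and names here are illustrative:

```csharp
using System.Threading;

interface ICloseableClient { void Close(); }

class AckTimeoutHandler
{
    // Called from the outgoing packet loop when a client times out.
    public void DeactivateAsync(ICloseableClient client)
    {
        // Fire-and-forget: Close() may block ~5 seconds waiting on the
        // event queue, so it must not run on the packet loop thread.
        Thread t = new Thread(() => client.Close());
        t.IsBackground = true;
        t.Start();
    }
}
```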
|
nor client are ever null.

rather than doing another retrieve on dequeue.
Instead of checking whether the client still exists by trying to retrieve it again from the client manager, this patch gets it back from the IncomingPacket and checks the IClientAPI.IsActive state.
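
The shape of that change, sketched with trimmed-down types that carry only the members mentioned above:

```csharp
interface IClientAPI
{
    bool IsActive { get; }
}

class IncomingPacket
{
    public readonly IClientAPI Client;   // stored at enqueue time
    public readonly object Packet;

    public IncomingPacket(IClientAPI client, object packet)
    {
        Client = client;
        Packet = packet;
    }
}

class PacketProcessor
{
    // On dequeue there is no second lookup in the client manager, just
    // a liveness check on the reference carried with the packet.
    public void Process(IncomingPacket incoming)
    {
        if (!incoming.Client.IsActive)
            return;   // client went away while the packet was queued

        // ... handle incoming.Packet ...
    }
}
```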