- delta over time.
The chief motivation is to be able to tell whether enabling extra packet pooling has any impact on incoming packet processing.
- lifted up into LLUDPServer and be distinguished by scene name
- Also puts some packet processing counts in a container named after the scene so that stats can be collected from more than one scene.
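Schematically, this amounts to keying counters by scene name so more than one scene can report its own numbers. A minimal sketch, using illustrative names rather than the actual stats classes:

    using System;
    using System.Collections.Generic;

    public static class PerSceneStatsExample
    {
        // stat name -> value, grouped in a container named after the scene
        static readonly Dictionary<string, Dictionary<string, long>> s_statsByScene
            = new Dictionary<string, Dictionary<string, long>>();

        public static void Increment(string sceneName, string statName)
        {
            Dictionary<string, long> container;
            if (!s_statsByScene.TryGetValue(sceneName, out container))
            {
                container = new Dictionary<string, long>();
                s_statsByScene[sceneName] = container;
            }

            long value;
            container.TryGetValue(statName, out value); // leaves 0 if missing
            container[statName] = value + 1;
        }

        public static void Main()
        {
            Increment("test region", "IncomingPacketsProcessed");
            Increment("other region", "IncomingPacketsProcessed");
            Console.WriteLine(s_statsByScene["test region"]["IncomingPacketsProcessed"]);
        }
    }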
- running via the "debug lludp pool <on|off>" console command, for debug purposes.
This does not currently apply to the higher-level LLUDP packet pool.
- they are enabled. Add count stats for the existing LLUDP pool.
This introduces a pull stat type in addition to the push stat type.
A pull stat takes a method on construction which knows how to update the stat on request.
In this way, special interfaces for pull stat collection are not necessary.
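The pull-stat idea can be sketched as follows; the class and member names here are illustrative, not the actual OpenSim stats types. The stat is handed a delegate at construction time and calls it whenever the value is requested, so the thing being measured needs no special collection interface:

    using System;

    public class PullStatSketch
    {
        private readonly Func<double> m_pullValue;

        public string Name { get; private set; }

        // The delegate supplied here knows how to fetch the current value on request.
        public PullStatSketch(string name, Func<double> pullValue)
        {
            Name = name;
            m_pullValue = pullValue;
        }

        public double Value { get { return m_pullValue(); } }
    }

    public static class PullStatExample
    {
        public static void Main()
        {
            int pooledBuffers = 42; // stands in for e.g. a packet pool's current count

            // No interface on the pool itself; the stat simply closes over it.
            PullStatSketch stat =
                new PullStatSketch("UDPPacketBufferPoolCount", () => pooledBuffers);

            Console.WriteLine("{0} = {1}", stat.Name, stat.Value);
        }
    }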
- one we pool at the moment, rather than attempting to return all incoming packets.
- churn
- UDP data.
Even when an avatar is standing still, the viewer sends in a constant stream of AgentUpdate packets, and new UDPPacketBuffer objects are created to handle them.
This option pools those objects, which reduces memory churn.
Currently off by default. It works, but the scope can be expanded.
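The pooling described here follows the usual object-pool pattern. A minimal sketch, assuming an illustrative SimplePool class rather than OpenSim's actual pooling code:

    using System;
    using System.Collections.Generic;

    // Illustrative object pool: receive buffers are reused instead of being
    // newly allocated for every incoming packet, reducing memory churn.
    public class SimplePool<T> where T : class
    {
        private readonly Stack<T> m_objects = new Stack<T>();
        private readonly Func<T> m_createFunction;

        public SimplePool(Func<T> createFunction)
        {
            m_createFunction = createFunction;
        }

        public T GetObject()
        {
            lock (m_objects)
            {
                // Reuse a pooled object if one is available, otherwise allocate.
                return m_objects.Count > 0 ? m_objects.Pop() : m_createFunction();
            }
        }

        public void ReturnObject(T obj)
        {
            lock (m_objects)
            {
                m_objects.Push(obj);
            }
        }
    }

    public static class PoolExample
    {
        public static void Main()
        {
            SimplePool<byte[]> bufferPool = new SimplePool<byte[]>(() => new byte[4096]);

            byte[] buffer = bufferPool.GetObject();  // receive a packet into this
            bufferPool.ReturnObject(buffer);         // hand it back once processed
        }
    }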
- console for debug purposes.
This is controlled via the "debug lludp start <in|out|all>" and "debug lludp stop <in|out|all>" region console commands.
The command "debug lludp status" will show the current status.
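For example, one might type the following at the region console to capture a burst of incoming packets; the commands are as given above, while the output location and format depend on how the logging is wired up:

    debug lludp start in
    debug lludp status
    debug lludp stop all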
- viewers.
- These were neither being returned nor, in many places, reused.
Getting packets from a pool rather than deallocating and reallocating them reduces memory churn, which in turn reduces garbage collection time and frequency.
- prevent an inactive connection from being left behind if the user closes the viewer whilst the connection is being established.
This should remove the need to run the console command "kick user --force" when these connections are left around.
- in Scene.
This is to resolve the previous build break.
This unnecessarily but harmlessly reads and sets the parameter multiple times; Scene was doing the same thing.
- This better reflects the long-term purpose of that project and matches the Monitoring modules.
- simultaneously (e.g. ack timeout and an attempt to reconnect)
- This can always be retrieved via the LLUDPClient, as is already done in various places.
- to reflect what it actually is
- since two commits ago (b099f26)
- rather than using the IsLoggingOut flag.
IsActive is more appropriate since an unack timeout is not due to a voluntary logout.
This is in line with operations such as manual kick that do not set the IsLoggingOut flag.
It's also slightly better race-wise, since it reduces the chance of this operation clashing with another reason for client deactivation (e.g. manual kick).
- client a kick message with that reason, in case it is somehow still listening.
- check the IsLoggingOut flag instead.
This is slightly better thread-race-wise.
- than synchronously on the outgoing packet loop.
This is the same async behaviour as normal logouts.
This is necessary because the event queue will sleep the thread for 5 seconds on an ack timeout logout, as the client isn't around to pick up the final event queue messages.
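The pattern is simply to hand the close off to another thread rather than running it inline on the packet loop. A rough sketch using the standard .NET thread pool, with a hypothetical CloseClient method standing in for the real client-close path:

    using System;
    using System.Threading;

    public static class AsyncCloseExample
    {
        // Hypothetical stand-in for the real close logic, which can block for
        // several seconds (e.g. waiting on event queue messages the departed
        // client will never collect).
        static void CloseClient(object state)
        {
            Thread.Sleep(5000);
            Console.WriteLine("client {0} closed", state);
        }

        public static void Main()
        {
            // Instead of calling CloseClient() synchronously on the outgoing
            // packet loop (stalling sends to every other connection), queue it
            // and let the loop carry on immediately.
            ThreadPool.QueueUserWorkItem(CloseClient, "timed-out-client");

            Console.WriteLine("outgoing packet loop continues without waiting");
            Thread.Sleep(6000); // keep the demo process alive until the close finishes
        }
    }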
- nor client are ever null.
- rather than doing another retrieve on dequeue.
Instead of checking whether the client still exists by trying to retrieve again from the client manager, this patch gets it back from IncomingPacket and checks the IClientAPI.IsActive state.
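In outline, the queued packet now carries the client reference itself, so the dequeue side only has to check a flag. The types below are simplified stand-ins for the real IncomingPacket and IClientAPI:

    using System;

    public interface ISimpleClient
    {
        bool IsActive { get; }
    }

    // Simplified: the packet stores the client it belongs to when it is queued.
    public class QueuedIncomingPacket
    {
        public ISimpleClient Client { get; private set; }
        public byte[] Data { get; private set; }

        public QueuedIncomingPacket(ISimpleClient client, byte[] data)
        {
            Client = client;
            Data = data;
        }
    }

    public static class DequeueExample
    {
        // No second lookup against a client manager on dequeue; just check
        // whether the stored client is still active before processing.
        public static void Process(QueuedIncomingPacket packet)
        {
            if (packet.Client == null || !packet.Client.IsActive)
                return; // client has gone away since the packet was queued

            Console.WriteLine("processing {0} bytes", packet.Data.Length);
        }
    }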
- timeout is breached.
The alarm can then invoke this method to log extra information.
This is used in LLUDPServer to show which client was being processed when incoming and outgoing UDP watchdog alarms are triggered.
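Schematically, the watchdog holds an optional callback and invokes it when a monitored thread's last-update time exceeds the timeout. The names below are made up for illustration and are not the actual Watchdog API:

    using System;
    using System.Threading;

    public static class WatchdogSketch
    {
        static long s_lastTick = Environment.TickCount;
        static Action s_alarmCallback;

        public static void SetAlarmCallback(Action callback) { s_alarmCallback = callback; }

        // Called regularly by the monitored loop to show it is still alive.
        public static void UpdateThread() { s_lastTick = Environment.TickCount; }

        // Called by the watchdog; fires the alarm if the loop has gone quiet.
        public static void Check(int timeoutMs)
        {
            if (Environment.TickCount - s_lastTick > timeoutMs && s_alarmCallback != null)
                s_alarmCallback(); // e.g. log which client was being processed
        }
    }

    public static class AlarmExample
    {
        public static void Main()
        {
            string currentClient = "(none)";

            // The callback closes over loop state so the alarm can report it.
            WatchdogSketch.SetAlarmCallback(
                () => Console.WriteLine("Watchdog alarm: busy with client {0}", currentClient));

            currentClient = "some avatar";
            WatchdogSketch.UpdateThread();

            Thread.Sleep(200);
            WatchdogSketch.Check(100); // timeout exceeded -> alarm callback fires
        }
    }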
- The packet was actually being handled but not acted on.
This change extends the default timeout for paused clients to 5 minutes and makes both the paused and non-paused timeout periods configurable.
- problem resolution.
- On the first frame, all startup scene objects are added to the physics scene.
This can cause a considerable delay, so we don't start raising the alarm on scene loop timeouts until the second frame.
This commit also slightly changes the behaviour of timeout reporting.
Previously, a report was made only for the very first timed-out thread, ignoring all others until the next watchdog check.
Instead, we now report every timed-out thread, though we still only do this once no matter how long the timeout persists.
- grid call would try to contact the wrong URI. Also fixes the build from df960d5.
- during LLUDPServer.HandleUseCircuitCode()
- STATISTICS to count the number of times clients are disconnected due to ack timeouts.
This has been broken for a long time and would only ever show 0.
- This checks that the initial UseCircuitCode packet is handled correctly for a normal client login.
- The only caller is the LLUDP stack, and this has to validate the UDP circuit itself, so we know that it exists.
This allows us to eliminate another null check elsewhere and simplifies the method contract.
- it's live before sending other data.
This means that avatar/appearance data of other avatars and scene objects for a client will be sent after the ack rather than possibly before.
This may stop some avatars appearing grey on login.
This introduces a new OpenSim.Framework.ISceneAgent to accompany the existing OpenSim.Framework.ISceneObject and ISceneEntity.
This allows IClientAPI to handle this, as it can't reference OpenSim.Region.Framework.Interfaces.
- This appears to be code clutter, since the code that used it is long gone.