(milliseconds) and TimeSpan.TicksPerXXX (10000 x milliseconds)
|
This is controlled via the console command "debug lludp client set process-unacked-sends true [<avatar-first-name> <avatar-last-name>]"
For debugging purposes, to see whether this processing for very bad connections is causing general outbound UDP processing delays.
Relates to http://opensimulator.org/mantis/view.php?id=7393
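As a rough illustration only (class and member names here are invented, not the actual OpenSimulator code), a per-client debug option like this amounts to a flag that the outbound loop checks before doing any unacked-send handling:

    using System;
    using System.Collections.Generic;

    // Hypothetical sketch: a per-client flag consulted by the outbound loop
    // before doing unacked-send handling, so that work can be switched off
    // while diagnosing general send delays.
    class UdpClientStateSketch
    {
        public string Name;
        public bool ProcessUnackedSends = true;                       // toggled by the console command
        public Queue<int> UnackedSequenceNumbers = new Queue<int>();
    }

    class OutboundResendSketch
    {
        // Called from the outbound UDP loop for each client.
        static void ServiceResends(UdpClientStateSketch client)
        {
            if (!client.ProcessUnackedSends)
                return;                                               // skip resend work entirely

            while (client.UnackedSequenceNumbers.Count > 0)
                Console.WriteLine(
                    $"Resending packet {client.UnackedSequenceNumbers.Dequeue()} to {client.Name}");
        }

        static void Main()
        {
            var client = new UdpClientStateSketch { Name = "Test Avatar" };
            client.UnackedSequenceNumbers.Enqueue(42);

            client.ProcessUnackedSends = false;   // as set via the console command
            ServiceResends(client);               // no resend work done

            client.ProcessUnackedSends = true;
            ServiceResends(client);               // resend of packet 42
        }
    }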
|
would fail with a casting exception for scenes with NPCs
Present since 51eb8fa (Oct 2 2014)
|
conference code use a generic JobEngine class rather than 4 slightly different copy/pasted versions.
|
thread and run work in the jobengine from Watchdog to a WorkManager class.
This is to achieve a clean separation of concerns - the watchdog is an inappropriate place for work management.
Also adds a WorkManager.RunInThreadPool() method which feeds through to Util.FireAndForget.
Also switches around the name and obj arguments to the new RunInThread() and RunJob() methods so that the callback obj comes after the callback, as seen in the SDK and elsewhere.
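A minimal sketch of the argument ordering being described, assuming simplified signatures (the real WorkManager methods and Util.FireAndForget are not reproduced here):

    using System;
    using System.Threading;

    // Sketch of the ordering described above: callback first, state object
    // second, then the name - mirroring SDK patterns such as
    // ThreadPool.QueueUserWorkItem(WaitCallback, object).
    static class WorkManagerSketch
    {
        public static Thread RunInThread(Action<object> callback, object obj, string name)
        {
            var t = new Thread(() => callback(obj)) { Name = name, IsBackground = true };
            t.Start();
            return t;
        }

        public static void RunInThreadPool(Action<object> callback, object obj, string name)
        {
            // In the real code this would feed through to Util.FireAndForget;
            // here the plain .NET thread pool stands in for it.
            ThreadPool.QueueUserWorkItem(_ => callback(obj));
        }
    }

A caller would then write something like WorkManagerSketch.RunInThread(o => DoWork(o), state, "MyWorker"), keeping the state object next to the callback it belongs to.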
|
requests.
This is to reduce the potential for overload of the threadpool if there are many simultaneous requests in high concurrency situations.
Currently only applied to AvatarProperties and GenericMessage requests.
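Sketch of the routing pattern implied above, under the assumption of a job-engine queue delegate (names are hypothetical; see the refill-engine sketch further down for the underlying queue-plus-thread idea):

    using System;
    using System.Threading;

    // Hypothetical sketch: selected incoming requests are queued to a single
    // job-engine thread instead of one thread-pool work item per request.
    class IncomingRequestRouterSketch
    {
        private readonly Action<string, Action> _queueJob;   // e.g. a JobEngine.QueueJob delegate, may be null

        public IncomingRequestRouterSketch(Action<string, Action> queueJob) => _queueJob = queueJob;

        public void HandleAvatarPropertiesRequest(object packet)
        {
            if (_queueJob != null)
                _queueJob("ProcessAvatarProperties", () => Process(packet));   // serialized on one thread
            else
                ThreadPool.QueueUserWorkItem(_ => Process(packet));            // previous behaviour
        }

        private void Process(object packet) =>
            Console.WriteLine("Processed AvatarProperties request");           // placeholder handling
    }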
|
problem diagnosis.
"show threadpool calls" now also returns named (labelled), anonymous (unlabelled) and total call stats.
|
client throttles properly.
In "show throttles", also renames 'total' column to 'actual' to reflect that it is not necessarily the throttles requested for/by the client.
Also fills out 'target' in non-adaptive mode to the actual throttle requested for/by the client.
|
LLUDP client stack rather than queueing internally within LLClientView.
When an HG avatar enters a scene, it delays processing of entity updates. Could be crowding out by other updates or something else.
This delay in one's own avatar movement updates results in movement lag experienced on the client. Avoiding the internal LLClientView queueing for these packets appears to resolve this issue.
Appears most noticeably for avatars with attachments, though has also been seen on those without sometimes. Hasn't been observed for non-HG avatars in general.
Will be investigating exactly what the problem is, at which point there will be a more permanent solution.
|
throttles would cause client throttles to be lower than expected when total requests exceeded the scene limit.
This was because specifying a max client throttle would always request the max from the parent server throttle, no matter the actual total requests on the client throttle.
This would lead to a lower server multiplier than expected.
This change also adds a 'target' column to the "show throttles" output that shows the target rate (as set by client) if adaptive throttles is active.
This commit also re-adds the functionality lost in recent 5c1a1458 to set a max client throttle when adaptive is active.
This commit also adds TestClientThrottlePerClientAndRegionLimited and TestClientThrottleAdaptiveNoLimit regression tests
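A toy worked example of the multiplier problem, with invented numbers: if every capped client asks the scene throttle for its full cap rather than what it actually requested, the scene believes it is oversubscribed and scales everyone down.

    using System;

    // Toy model with invented numbers: asking the scene throttle for each
    // client's *cap* instead of its actual request makes the scene appear
    // oversubscribed and drives the multiplier down.
    class ThrottleMultiplierExample
    {
        static void Main()
        {
            double sceneLimit = 3000;                               // kbit/s the scene will grant in total
            double clientCap = 1500;                                // configured per-client maximum
            double[] clientRequests = { 500, 500, 500, 500 };       // what the clients actually ask for

            // Buggy behaviour: every capped client requests its full cap from the scene.
            double buggyTotal = clientRequests.Length * clientCap;             // 6000
            double buggyMultiplier = Math.Min(1.0, sceneLimit / buggyTotal);   // 0.5

            // Fixed behaviour: request only what each client actually asked for.
            double fixedTotal = 0;
            foreach (double r in clientRequests)
                fixedTotal += Math.Min(r, clientCap);                          // 2000
            double fixedMultiplier = Math.Min(1.0, sceneLimit / fixedTotal);   // 1.0

            Console.WriteLine($"buggy: each client is granted {500 * buggyMultiplier} kbit/s");
            Console.WriteLine($"fixed: each client is granted {500 * fixedMultiplier} kbit/s");
        }
    }

Here the buggy path grants each client only 250 kbit/s even though the scene has capacity for the full 500 kbit/s requests.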
|
a max client total limit is enforced server-side
|
to potentially set the scene max throttle on the fly.
|
about throttles
This is separate from the user-oriented "show throttles" command since one will often only want to know about varying client throttle settings.
Currently displays max scene throttle and adaptive throttles config if set.
|
This is the total of queued outgoing packets across all connections, as also seen in the "show queues" command.
Gives some early indication of whether the simulator can't send all outgoing packets fast enough.
Though then one would want to check that this isn't due to a few bad client connections.
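Conceptually (names invented for illustration), the stat is just a periodic sum of the per-connection outgoing queue lengths that "show queues" prints:

    using System.Collections.Generic;
    using System.Linq;

    // Illustrative only: the stat is conceptually a periodic sum of the
    // per-connection outgoing packet queue lengths reported by "show queues".
    class OutgoingQueuedStatSketch
    {
        private readonly List<Queue<byte[]>> _clientOutgoingQueues = new List<Queue<byte[]>>();

        public int SampleOutgoingPacketsQueuedCount()
            => _clientOutgoingQueues.Sum(q => q.Count);
    }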
|
to match set command
|
Can currently only set adaptive true|false, where adaptive = false
|
Will only have any effect under Windows or Mono with a patch such as https://gist.github.com/justincc/31e52218d098529b4696 (not recommended) applied.
For assessment purposes.
|
a client's throttle (currently just whether adaptive is enabled).
|
<avatar-last-name>" to control extra throttle related debug logging.
|
before it's put on the wire.
Unlike "debug lludp packet" which logs at the point where OpenSim first asks the clientstack to send a certain outgoing packet, this logs immediately before the actual send.
For low-level debugging purposes.
|
have shown that this has better scalability.
For testing, previous behaviour can be restored with the console command "debug lludp oqre stop" at runtime.
|
inbound packets.
For test/debug purposes.
|
debug/test purposes.
This drops all outbound packets that match a given packet name.
Can currently only be applied to all connections in a scene.
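A rough sketch of what such a scene-wide drop filter can look like; the class, method names and the commented usage are hypothetical:

    using System;
    using System.Collections.Generic;

    // Hypothetical sketch of a debug drop list: outgoing packets whose type
    // name is in the set are silently discarded instead of being sent.
    class OutboundDropFilterSketch
    {
        private readonly HashSet<string> _dropNames =
            new HashSet<string>(StringComparer.OrdinalIgnoreCase);

        public void AddDrop(string packetName) => _dropNames.Add(packetName);
        public void RemoveDrop(string packetName) => _dropNames.Remove(packetName);

        // Returns true if the packet should be dropped rather than sent.
        public bool ShouldDrop(string packetName) => _dropNames.Contains(packetName);
    }

    // Pseudo-usage in a send path (SendToWire is hypothetical):
    //     if (dropFilter.ShouldDrop(packetName))
    //         return;                  // dropped for debug/test purposes
    //     SendToWire(packet);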
|
on a controlled number of threads rather than the threadpool.
Disabled by default. Currently it can only be enabled with the console command "debug lludp oqre start", though the engine can be started and stopped whilst the simulator is running.
When a connection requires packet queue refill processing (used to populate queues with entity updates, entity prop updates and image queue updates), this is done via Threadpool requests.
However, with a very high number of connections (e.g. 100 root + 300 child) a very large number of simultaneous requests may be causing performance issues.
This commit adds an experimental engine for processing these requests from a queue with a persistent thread instead.
Unlike inbound processing, there are no network requests in this processing that might hold the thread up for a long time.
Early implementation - currently only one thread which may (or may not) get overloaded with requests. Added for testing purposes.
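A sketch of the queue-plus-persistent-thread pattern being described, assuming a simplified engine (this is not the real OutgoingQueueRefillEngine; names and the start/stop wiring are illustrative):

    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    // Sketch only: refill requests go onto a queue serviced by one persistent
    // thread, instead of a thread-pool work item per connection, so several
    // hundred connections cannot generate that many simultaneous pool requests.
    class RefillEngineSketch
    {
        private readonly BlockingCollection<Action> _requests = new BlockingCollection<Action>();
        private Thread _thread;

        public bool IsRunning { get; private set; }

        public void Start()                        // e.g. triggered by "debug lludp oqre start"
        {
            IsRunning = true;
            _thread = new Thread(() =>
            {
                foreach (Action refill in _requests.GetConsumingEnumerable())
                    refill();                      // populate entity/property/image update queues
            }) { IsBackground = true, Name = "OutgoingQueueRefill" };
            _thread.Start();
        }

        public void Stop()                         // e.g. triggered by "debug lludp oqre stop"
        {
            IsRunning = false;
            _requests.CompleteAdding();            // lets the thread drain and exit
        }

        // Called where the code previously fired a thread-pool work item.
        public void QueueRequest(Action refill) => _requests.Add(refill);
    }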
|
protected so that other logging code in the clientstack can record more useful information
Adds some commented out logging for use again in the future.
No functional change.
|
closed on teleport, don't unnecessarily resend all avatar and object data about that region.
|
neighbour) don't resend all the initial avatar and object data again.
This is unnecessary since it has been received (and data continues to be received) in the existing child connection.
|
actually meant to get an ack (because it's reliable).
|
for both current and future clients
The existing "--default" option only changes the logging level for future clients.
|
This records how many packets were indicated to be resends by clients
Not 100% reliable since clients can lie about resends, but usually would indicate if clients are not receiving UDP acks at all or in a manner they consider timely.
|
first added a few commits ago
|
This allows one to monitor the total number of messages resent to clients over time.
A constantly increasing stat may indicate a general server network or overloading issue if it is a fairly high proportion of packets sent.
A smaller, constantly increasing stat may indicate a problem with a particular client-server connection; one would need to check "show queues" in this case.
|
Appears to be a never used method.
|
if we sent immediately)
|
the code that this is symmetric with CloseAgent()
|
it clear that all non-clientstack callers should be using this rather than RemoveClient() in order to step through the ScenePresence state machine properly.
Adds IScene.CloseAgent() to replace RemoveClient()
|
LLUDPServer.HandleCompleteMovementIntoRegion() to fix race condition regression in commit 7dbc93c (Wed Sep 18 21:41:51 2013 +0100)
This check is necessary to close a race condition where the CompleteAgentMovement processing could proceed when the UseCircuitCode thread had added the client to the client manager but before the ScenePresence had registered to process the CompleteAgentMovement message.
This is most probably why the message appeared to get lost on a proportion of entity transfers.
A better long term solution may be to set the IClientAPI.SceneAgent property before the client is added to the manager.
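The shape of the guard being re-added can be sketched as follows, with simplified, hypothetical names; a real implementation would also need proper synchronization around the shared field:

    using System;
    using System.Threading;

    // Simplified, hypothetical names: before CompleteAgentMovement is processed,
    // wait until the UseCircuitCode path has attached the scene presence,
    // otherwise the message is effectively dropped.
    // (A real implementation would need proper synchronization, e.g. volatile
    // or an event, rather than a plain field poll.)
    class CompleteMovementGuardSketch
    {
        public object SceneAgent;                  // set by the UseCircuitCode thread when ready

        public bool WaitForSceneAgent(int timeoutMs)
        {
            int waited = 0;
            while (SceneAgent == null && waited < timeoutMs)
            {
                Thread.Sleep(200);                 // poll until the presence is registered
                waited += 200;
            }
            return SceneAgent != null;             // only then handle CompleteAgentMovement
        }
    }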
|
21:41:51 2013 +0100)
|
LLUDPServer.HandleCompleteMovementIntoRegion()
This is to deal with one aspect of http://opensimulator.org/mantis/view.php?id=6755
With the V2 teleport arrangements, viewers appear to send the single UseCircuitCode and CompleteAgentMovement packets immediately after each other
Possibly, on occasion a poor network might drop the initial UseCircuitCode packet, and by the time it retries, the CompleteAgentMovement has timed out and the teleport fails.
There's no apparent harm in doubling the wait time (most times only one wait will be performed), so trying this.
|
LLUDPServer.HandleCompleteMovementIntoRegion()
Add more information on which endpoint sent the packet when we have to wait and if we end up dropping the packet
Only check if the client is active - other checks are redundant since they can only fail if IsActive = false.
|
well-formed packets that were not initial connection packets and could not be associated with a connected viewer.
|
a malformed packet. Record this as stat clientstack.<scene>.IncomingPacketsMalformedCount
Used to detect if a simulator is receiving significant junk UDP
Decimates the number of packets between which a warning is logged and prints the IP source of the last malformed packet when logging
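Illustrative sketch only (interval and message format invented): count every malformed packet but only log every Nth one, including the source endpoint, so junk UDP cannot flood the log:

    using System;
    using System.Net;

    // Illustrative only: count malformed packets, log every Nth occurrence with
    // the source endpoint, and expose the running total as a stat.
    class MalformedPacketStatSketch
    {
        private const int LogEveryN = 10000;       // invented interval
        public int IncomingPacketsMalformedCount { get; private set; }

        public void RecordMalformed(IPEndPoint source)
        {
            IncomingPacketsMalformedCount++;
            if (IncomingPacketsMalformedCount % LogEveryN == 0)
                Console.WriteLine(
                    $"Received {IncomingPacketsMalformedCount} malformed packets so far, last from {source}");
        }
    }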
|
a line from A->B->C would not close region A when reaching C
The root cause was that v2 was only closing neighbour agents if the root connection also needed a close.
However, fixing this requires the neighbour regions also detect when they should not close due to re-teleports re-establishing the child connection.
This involves restructuring the code to introduce a scene presence state machine that can serialize the different add and remove client calls that are now possible with the late close of the
This commit appears to fix these issues and improve teleport, but still has holes on at least quick reteleporting (and possibly occasionally on ordinary teleports).
Also, has not been completely tested yet in scenarios where regions are running on different simulators
|
immediately if any data was sent, rather than waiting.
What I believe is happening is that on initial terrain send, this is done one packet at a time.
With WaitOne, the outbound loop has enough time to loop and wait again after the first packet before the second, leading to a slower send.
This approach does not wait if a packet was just sent but instead loops again, which appears to lead to a quicker send without losing the CPU benefit of not continually looping when there is no outbound data.
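A sketch of the loop change being described, assuming a simplified outbound loop (names and the timeout value are illustrative):

    using System.Threading;

    // Sketch of the change described above: only block on the wait handle when
    // the previous iteration sent nothing, so a burst (e.g. initial terrain)
    // goes out back-to-back instead of one packet per wait period.
    class OutboundLoopSketch
    {
        private readonly AutoResetEvent _dataReady = new AutoResetEvent(false);
        private volatile bool _running = true;

        public void OutboundLoop()
        {
            while (_running)
            {
                bool sentAnything = SendQueuedPackets();

                // Old behaviour: always WaitOne here, adding a delay between
                // consecutive packets. New behaviour: skip the wait if we just
                // sent something, but still sleep when idle to save CPU.
                if (!sentAnything)
                    _dataReady.WaitOne(20);   // timeout value is illustrative
            }
        }

        public void SignalData() => _dataReady.Set();

        private bool SendQueuedPackets()
        {
            // Placeholder: dequeue and send any queued UDP packets; return true
            // if at least one packet was sent (always false in this sketch).
            return false;
        }
    }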
|
d9d995914c5fba00d4ccaf66b899384c8ea3d5eb (r/23185) -- the WaitOne on the UDPServer. Putting it back to how it was done solves the issue. But this may impact CPU usage, so I'm pushing it to test if it does."
This reverts commit 59b461ac0eaae1cc34bb82431106fdf0476037f3.
|
d9d995914c5fba00d4ccaf66b899384c8ea3d5eb (r/23185) -- the WaitOne on the UDPServer. Putting it back to how it was done solves the issue. But this may impact CPU usage, so I'm pushing it to test if it does.
|
reflect it is a timeout due to no data received rather than an ack issue.