path: root/OpenSim/Region/ClientStack/Linden/UDP/LLUDPServer.cs
Commit message [Author, Date, Files, Lines deleted/added]
* Fixed mistakes related to confusion between Environment.TickCount (milliseconds) and TimeSpan.TicksPerXXX (10000 x milliseconds). [Oren Hurvitz, 2015-08-11, 1 file, -2/+2]
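  For context, Environment.TickCount is a millisecond counter, while the TimeSpan.TicksPerXXX constants are expressed in 100-nanosecond ticks, so TimeSpan.TicksPerMillisecond is 10,000. A minimal sketch of the conversion (illustrative code, not the LLUDPServer change itself):

```csharp
using System;

class TickUnitsExample
{
    static void Main()
    {
        int startMs = Environment.TickCount;   // milliseconds since system start (wraps after ~24.9 days)

        // ... do some work ...

        int elapsedMs = Environment.TickCount - startMs;

        // Converting milliseconds to TimeSpan ticks requires the 10,000x factor:
        long elapsedTicks = elapsedMs * TimeSpan.TicksPerMillisecond;   // TicksPerMillisecond == 10000

        // Comparing elapsedMs directly against a TimeSpan tick value would be off by a factor of 10,000.
        Console.WriteLine($"{elapsedMs} ms == {elapsedTicks} ticks");
    }
}
```

  Mixing the two units without this conversion makes an interval check 10,000 times too long or too short, which is the kind of mistake the commit describes.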
* Add debug ability to ignore reliably sent packets that are not acknowledged. [Justin Clark-Casey (justincc), 2015-01-21, 1 file, -20/+27]
  This is controlled via the console command "debug lludp client set process-unacked-sends true [<avatar-first-name> <avatar-last-name>]". For debug purposes, to see whether this processing for very bad connections is causing general outbound UDP processing delays. Relates to http://opensimulator.org/mantis/view.php?id=7393
* Fix bug where gathering the clientstack..OutgoingPacketsQueuedCount stat would fail with a casting exception for scenes with NPCs present since 51eb8fa (Oct 2 2014). [Justin Clark-Casey (justincc), 2015-01-13, 1 file, -2/+6]
* Make the performance-controlling job processing threads introduced in conference code use a generic JobEngine class rather than four slightly different copy/pasted versions. [Justin Clark-Casey (justincc), 2015-01-12, 1 file, -8/+44]
* refactor: Move the methods to start a monitored thread, start work in its own thread and run work in the job engine from Watchdog to a WorkManager class. [Justin Clark-Casey (justincc), 2014-11-25, 1 file, -2/+2]
  This achieves a clean separation of concerns; the watchdog is an inappropriate place for work management. Also adds a WorkManager.RunInThreadPool() method which feeds through to Util.FireAndForget, and switches around the name and obj arguments to the new RunInThread() and RunJob() methods so that the callback obj comes after the callback, as seen in the SDK and elsewhere.
* Add an incoming packet async handling engine to queue some inbound UDP async requests. [Justin Clark-Casey (justincc), 2014-11-25, 1 file, -0/+9]
  This is to reduce the potential for threadpool overload when there are many simultaneous requests in high-concurrency situations. Currently only applied to AvatarProperties and GenericMessage requests.
* Label all threadpool calls being made in core OpenSimulator, to aid problem diagnosis. [Justin Clark-Casey (justincc), 2014-11-25, 1 file, -3/+5]
  "show threadpool calls" now also returns named (labelled), anonymous (unlabelled) and total call stats.
* Fix setting of the max scene throttle so that setting it restricts the child client throttles properly. [Justin Clark-Casey (justincc), 2014-11-25, 1 file, -1/+1]
  In "show throttles", also renames the 'total' column to 'actual' to reflect that it is not necessarily the throttle requested for/by the client, and fills out 'target' in non-adaptive mode with the actual throttle requested for/by the client.
* For now, send all non-full terse updates for one's own avatar directly to the LLUDP client stack rather than queueing internally within LLClientView. [Justin Clark-Casey (justincc), 2014-11-25, 1 file, -0/+9]
  When an HG avatar enters a scene, it delays processing of entity updates; this could be due to crowding out by other updates or something else. This delay in one's own avatar movement updates results in movement lag experienced on the client. Avoiding the internal LLClientView queues for these packets appears to resolve the issue. It appears most noticeably for avatars with attachments, though it has also sometimes been seen on those without, and hasn't been observed for non-HG avatars in general. Will be investigating exactly what the problem is, at which point there will be a more permanent solution.
* Fix an issue where specifying both max client and server outgoing UDP throttles would cause client throttles to be lower than expected when total requests exceeded the scene limit. [Justin Clark-Casey (justincc), 2014-11-25, 1 file, -6/+1]
  This was because specifying a max client throttle would always request the max from the parent server throttle, no matter the actual total requests on the client throttle, leading to a lower server multiplier than expected. This change also adds a 'target' column to the "show throttles" output that shows the target rate (as set by the client) if adaptive throttles are active, re-adds the functionality lost in recent 5c1a1458 to set a max client throttle when adaptive is active, and adds the TestClientThrottlePerClientAndRegionLimited and TestClientThrottleAdaptiveNoLimit regression tests.
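  To illustrate the multiplier effect described in that fix, here is a hypothetical sketch of proportional allocation under a scene-wide cap. The method, field names and numbers are invented for illustration and are not OpenSimulator's TokenBucket code; the point is simply that a client which requests its configured maximum instead of its actual total inflates the denominator and lowers every client's share.

```csharp
using System;
using System.Linq;

class ThrottleShareExample
{
    // Scale each client's request so the total never exceeds the scene-wide cap.
    static long[] Allocate(long sceneMax, long[] requested)
    {
        long total = requested.Sum();
        double multiplier = total <= sceneMax ? 1.0 : (double)sceneMax / total;
        return requested.Select(r => (long)(r * multiplier)).ToArray();
    }

    static void Main()
    {
        long sceneMax = 1_000_000;   // bytes/sec scene cap (illustrative)

        // Client A actually needs 200_000 bytes/sec but has a 600_000 max configured.
        long[] actualRequests   = { 200_000, 500_000, 400_000 };  // clients request their actual totals
        long[] inflatedRequests = { 600_000, 500_000, 400_000 };  // client A requests its max instead

        Console.WriteLine(string.Join(", ", Allocate(sceneMax, actualRequests)));
        // multiplier = 1_000_000 / 1_100_000 ≈ 0.909 -> everyone keeps ~91% of what they asked for

        Console.WriteLine(string.Join(", ", Allocate(sceneMax, inflatedRequests)));
        // multiplier = 1_000_000 / 1_500_000 ≈ 0.667 -> clients B and C are throttled harder than necessary
    }
}
```

  With the fix, a capped client requests only what it actually needs (up to its max), so the multiplier applied to the other clients stays higher.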
* Add regression test TestClientThrottleLimited() for throttle behaviour when a max client total limit is enforced server-side. [Justin Clark-Casey (justincc), 2014-11-25, 1 file, -1/+6]
* Add "debug lludp set scene-throttle-max <value>" console command to allow the scene max throttle to be set on the fly. [Justin Clark-Casey (justincc), 2014-11-25, 1 file, -6/+4]
* Add "show server throttles" command for showing server-specific information about throttles. [Justin Clark-Casey (justincc), 2014-11-25, 1 file, -0/+5]
  This is separate from the user-oriented "show throttles" command, since one will often only want to know about varying client throttle settings. Currently displays the max scene throttle and the adaptive throttles config, if set.
* Add OutgoingPacketsQueuedCount clientstack stat. [Justin Clark-Casey (justincc), 2014-11-25, 1 file, -0/+26]
  This is the total of queued outgoing packets across all connections, as also seen in the "show queues" command. It gives some early indication of whether the simulator can't send all outgoing packets fast enough, though one would then want to check that this isn't due to a few bad client connections.
* refactor: Move LLUDPServer console commands into their own class. [Justin Clark-Casey (justincc), 2014-11-25, 1 file, -474/+15]
* refactor: Rename "debug lludp throttle status" to "debug lludp throttle get" to match the set command. [Justin Clark-Casey (justincc), 2014-11-25, 1 file, -6/+6]
* Add "debug lludp throttle set" command to allow setting of parameters at runtime. [Justin Clark-Casey (justincc), 2014-11-25, 1 file, -1/+50]
  Can currently only set adaptive true|false.
* Make the outbound and packet inbox handling threads highest priority. [Justin Clark-Casey (justincc), 2014-11-25, 1 file, -0/+4]
  Will only have any effect under Windows, or under Mono with a patch such as https://gist.github.com/justincc/31e52218d098529b4696 (not recommended) applied. For assessment purposes.
* Add "debug lludp throttle status" command to return status information about ↵Justin Clark-Casey (justincc)2014-10-021-0/+36
| | | | a client's throttle (currently just whether adaptive is enabled).
* Add "debug lludp throttle log <level> <avatar-first-name> ↵Justin Clark-Casey (justincc)2014-10-021-1/+48
| | | | <avatar-last-name>" to control extra throttle related debug logging.
* Add "debug lludp data out" console command for logging outgoing data just ↵Justin Clark-Casey (justincc)2014-09-241-1/+46
| | | | | | | before it's put on the wire. Unlike "debug lludp packet" which logs at the point where OpenSim first asks the clientstack to send a certain outgoing packet, this logs immediately before the actual send. For low-level debugging purposes.
* Make LLUDP output queue refill thread active by default, since load tests ↵Justin Clark-Casey (justincc)2014-09-041-0/+2
| | | | | | have shown that this has better scalability. For testing, previous behaviour can be restored with the console command "debug lludp oqre stop" at runtime.
* Extend the drop command to "debug lludp drop <in|out>..." to allow dropping of inbound packets. [Justin Clark-Casey (justincc), 2014-08-19, 1 file, -8/+17]
  For test/debug purposes.
* Add "debug lludp drop out <add|remove> <packet-name>" console command for debug/test purposes. [Justin Clark-Casey (justincc), 2014-08-19, 1 file, -0/+49]
  This drops all outbound packets that match a given packet name. Can currently only be applied to all connections in a scene.
* Add experimental OutgoingQueueRefillEngine to handle queue refill processing on a controlled number of threads rather than the threadpool. Disabled by default. [Justin Clark-Casey (justincc), 2014-08-19, 1 file, -2/+11]
  Currently it can only be enabled with the console "debug lludp oqre start" command, though it can be started and stopped whilst the simulator is running. When a connection requires packet queue refill processing (used to populate queues with entity updates, entity property updates and image queue updates), this is done via threadpool requests. However, with a very high number of connections (e.g. 100 root + 300 child), a very large number of simultaneous requests may be causing performance issues. This commit adds an experimental engine for processing these requests from a queue with a persistent thread instead. Unlike inbound processing, there are no network requests in this processing that might hold the thread up for a long time. Early implementation: currently only one thread, which may (or may not) get overloaded with requests. Added for testing purposes.
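  A minimal sketch of the pattern described above: callers enqueue small work items (e.g. per-connection queue refills) onto a blocking queue drained by one persistent thread, instead of dispatching each one to the threadpool. The class and member names here are illustrative, not the actual OutgoingQueueRefillEngine API.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// Illustrative single-threaded job engine: producers add jobs, one persistent thread drains them in order.
class SingleThreadJobEngine : IDisposable
{
    private readonly BlockingCollection<Action> m_jobs = new BlockingCollection<Action>();
    private readonly Thread m_worker;

    public SingleThreadJobEngine(string name)
    {
        m_worker = new Thread(ProcessLoop) { Name = name, IsBackground = true };
        m_worker.Start();
    }

    public void QueueJob(Action job) => m_jobs.Add(job);

    private void ProcessLoop()
    {
        // GetConsumingEnumerable blocks while the queue is empty and exits once CompleteAdding is called.
        foreach (Action job in m_jobs.GetConsumingEnumerable())
        {
            try { job(); }
            catch (Exception e) { Console.Error.WriteLine($"Job failed: {e}"); }
        }
    }

    public void Dispose()
    {
        m_jobs.CompleteAdding();
        m_worker.Join();
        m_jobs.Dispose();
    }
}
```

  A caller would then hand each refill to engine.QueueJob(...) rather than firing it at the threadpool, so bursts from hundreds of connections drain in order on one thread instead of fanning out into hundreds of simultaneous threadpool work items.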
* Make LLUDPServer.Scene publicly gettable/privately settable instead of protected, so that other logging code in the clientstack can record more useful information. [Justin Clark-Casey (justincc), 2014-08-19, 1 file, -49/+49]
  Adds some commented-out logging for use again in the future. No functional change.
* If a user moves back in sight of a child region before the agent has been closed on teleport, don't unnecessarily resend all avatar and object data about that region. [Justin Clark-Casey (justincc), 2014-08-15, 1 file, -1/+1]
* On teleport to a region that already has a child agent established (e.g. a neighbour), don't resend all the initial avatar and object data again. [Justin Clark-Casey (justincc), 2014-08-15, 1 file, -1/+1]
  This is unnecessary since it has been received (and data continues to be received) on the existing child connection.
* Only set up the UnackedMethod for an outgoing message if that message is actually meant to get an ack (because it's reliable). [Justin Clark-Casey (justincc), 2014-08-13, 1 file, -1/+3]
* Added "debug packet --all" option, which changes the packet logging level ↵Oren Hurvitz2014-07-211-8/+26
| | | | | | for both current and future clients The existing "--default" option only changes the logging level for future clients.
* Fixed the logic that decides if a packet was queued (it was reversed)Oren Hurvitz2014-07-211-5/+6
|
* Add IncomingPacketsResentCount clientstack statisticsJustin Clark-Casey (justincc)2013-11-061-0/+23
| | | | | This records how many packets were indicated to be resends by clients Not 100% reliable since clients can lie about resends, but usually would indicate if clients are not receiving UDP acks at all or in a manner they consider timely.
* Start counting resent packets in the places that I missed when the stat was ↵Justin Clark-Casey (justincc)2013-10-311-0/+4
| | | | first added a few commits ago
* Add OutgoingPacketsResentCount clientstack stat.Justin Clark-Casey (justincc)2013-10-311-0/+27
| | | | | | This allows one to monitor the total number of messages resent to clients over time. A constantly increasing stat may indicate a general server network or overloading issue if a fairly high proportion of packets sent A smaller constantly increasing stat may indicate a problem with a particular client-server connection, would need to check "show queues" in this case.
* Comment out LLUDPServer.BroadcastPacket() to reduce code complexity. It appears to be a never-used method. [Justin Clark-Casey (justincc), 2013-10-24, 1 file, -38/+38]
* Only set the data present event if we actually queued an outgoing packet (not if we sent immediately). [Justin Clark-Casey (justincc), 2013-10-24, 1 file, -4/+17]
* refactor: Rename Scene.AddNewClient() to AddNewAgent() to make it obvious in the code that this is symmetric with CloseAgent(). [Justin Clark-Casey (justincc), 2013-09-27, 1 file, -1/+1]
* refactor: Rename Scene.IncomingCloseAgent() to CloseAgent() in order to make it clear that all non-clientstack callers should be using this rather than RemoveClient(), so as to step through the ScenePresence state machine properly. [Justin Clark-Casey (justincc), 2013-09-27, 1 file, -2/+2]
  Adds IScene.CloseAgent() to replace RemoveClient().
* Reinsert client.SceneAgent checks into LLUDPServer.HandleCompleteMovementIntoRegion() to fix a race condition regression in commit 7dbc93c (Wed Sep 18 21:41:51 2013 +0100). [Justin Clark-Casey (justincc), 2013-09-25, 1 file, -11/+26]
  This check is necessary to close a race condition where the CompleteAgentMovement processing could proceed when the UseCircuitCode thread had added the client to the client manager, but before the ScenePresence had registered to process the CompleteAgentMovement message. This is most probably why the message appeared to get lost on a proportion of entity transfers. A better long-term solution may be to set the IClientAPI.SceneAgent property before the client is added to the manager.
* Reinsert the 200ms sleep accidentally removed in commit 7dbc93c (Wed Sep 18 21:41:51 2013 +0100). [Justin Clark-Casey (justincc), 2013-09-25, 1 file, -2/+2]
* Double the time spent waiting for a UseCircuitCode packet in LLUDPServer.HandleCompleteMovementIntoRegion(). [Justin Clark-Casey (justincc), 2013-09-18, 1 file, -1/+1]
  This is to deal with one aspect of http://opensimulator.org/mantis/view.php?id=6755. With the V2 teleport arrangements, viewers appear to send the single UseCircuitCode and CompleteAgentMovement packets immediately after each other. Possibly, on occasion, a poor network might drop the initial UseCircuitCode packet, and by the time it is retried the CompleteAgentMovement has timed out and the teleport fails. There's no apparent harm in doubling the wait time (most of the time only one wait will be performed), so trying this.
* Change logging to provide more information on LLUDPServer.HandleCompleteMovementIntoRegion(). [Justin Clark-Casey (justincc), 2013-09-18, 1 file, -10/+39]
  Adds more information on which endpoint sent the packet when we have to wait, and if we end up dropping the packet. Only checks whether the client is active; the other checks are redundant since they can only fail if IsActive = false.
* Add stat clientstack.<scene>.IncomingPacketsOrphanedCount to record well-formed packets that were not initial connection packets and could not be associated with a connected viewer. [Justin Clark-Casey (justincc), 2013-08-14, 1 file, -4/+29]
* Count any incoming packet that could not be recognized as an LLUDP packet as a malformed packet, and record this as stat clientstack.<scene>.IncomingPacketsMalformedCount. [Justin Clark-Casey (justincc), 2013-08-14, 1 file, -21/+44]
  Used to detect whether a simulator is receiving significant junk UDP. Decimates the number of packets between which a warning is logged, and prints the IP source of the last malformed packet when logging.
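  The logging pattern described (count every unparseable datagram, but only warn periodically, naming the last offending source) looks roughly like the sketch below; the class, field names and threshold are illustrative, not the actual LLUDPServer members.

```csharp
using System;
using System.Net;

class MalformedPacketStats
{
    private int m_incomingMalformedPacketCount;
    private const int WarnEvery = 10000;   // illustrative interval between warnings

    // Called whenever an incoming datagram cannot be parsed as an LLUDP packet.
    public void RecordMalformedPacket(IPEndPoint source)
    {
        m_incomingMalformedPacketCount++;

        // Warn only periodically so junk traffic cannot flood the log,
        // but include the most recent source so the operator can trace it.
        if (m_incomingMalformedPacketCount % WarnEvery == 0)
            Console.WriteLine(
                $"[LLUDPSERVER]: Received {m_incomingMalformedPacketCount} malformed packets so far, last from {source}");
    }
}
```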
* Fix an issue where, under the teleport v2 protocol, teleporting from regions in a line A->B->C would not close region A when reaching C. [Justin Clark-Casey (justincc), 2013-08-08, 1 file, -3/+3]
  The root cause was that v2 was only closing neighbour agents if the root connection also needed a close. However, fixing this requires the neighbour regions to also detect when they should not close due to re-teleports re-establishing the child connection. This involves restructuring the code to introduce a scene presence state machine that can serialize the different add and remove client calls that are now possible with the late close of agents. This commit appears to fix these issues and improve teleport, but still has holes on at least quick re-teleporting (and possibly occasionally on ordinary teleports). It also has not been completely tested yet in scenarios where regions are running on different simulators.
* minor: Add name to debug lludp packet level feedback on console. [Justin Clark-Casey (justincc), 2013-08-01, 1 file, -1/+1]
* Try a different approach to slow terrain update by always cycling the loop immediately if any data was sent, rather than waiting. [Justin Clark-Casey (justincc), 2013-08-01, 1 file, -1/+2]
  What I believe is happening is that on initial terrain send, this is done one packet at a time. With WaitOne, the outbound loop has enough time to loop and wait again after the first packet before the second, leading to a slower send. This approach instead does not wait if a packet was just sent but loops again immediately, which appears to lead to a quicker send without losing the CPU benefit of not continually looping when there is no outbound data.
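  A sketch of the loop shape being described, assuming an event that is signalled whenever new outbound data is queued; the names and timeout value are illustrative rather than the actual LLUDPServer outgoing loop.

```csharp
using System.Threading;

class OutgoingLoopSketch
{
    private readonly AutoResetEvent m_dataPresent = new AutoResetEvent(false);
    private volatile bool m_running = true;

    // Returns true if at least one packet was actually sent on this pass.
    private bool SendQueuedPackets() { /* drain per-client queues here */ return false; }

    public void OutgoingLoop()
    {
        while (m_running)
        {
            bool anythingSent = SendQueuedPackets();

            // Key point from the commit: if we just sent data, loop again immediately
            // so a multi-packet burst (e.g. initial terrain) is not paced by the wait.
            // Only block when nothing was sent, to avoid spinning while idle.
            if (!anythingSent)
                m_dataPresent.WaitOne(20);   // timeout keeps the loop responsive (value illustrative)
        }
    }

    public void SignalDataPresent() => m_dataPresent.Set();
}
```

  Waiting on every iteration instead would insert a pause between each terrain packet, which matches the slow initial terrain send discussed in the two entries below.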
* Revert "Issue: painfully slow terrain loading. The cause is commit ↵Justin Clark-Casey (justincc)2013-08-011-5/+5
| | | | | | d9d995914c5fba00d4ccaf66b899384c8ea3d5eb (r/23185) -- the WaitOne on the UDPServer. Putting it back to how it was done solves the issue. But this may impact CPU usage, so I'm pushing it to test if it does." This reverts commit 59b461ac0eaae1cc34bb82431106fdf0476037f3.
* Issue: painfully slow terrain loading. The cause is commit ↵Diva Canto2013-08-011-5/+5
| | | | d9d995914c5fba00d4ccaf66b899384c8ea3d5eb (r/23185) -- the WaitOne on the UDPServer. Putting it back to how it was done solves the issue. But this may impact CPU usage, so I'm pushing it to test if it does.
* minor: Add timeout secs to connection timeout message. Change message to ↵Justin Clark-Casey (justincc)2013-07-291-8/+9
| | | | reflect it is a timeout due to no data received rather than an ack issue.