Commit messages
priority to chat
do it only where it's supposed to be done.
This reverts commit 05a2feba5d780c57c252891a20071800fd9f2e3e.
change on significant AgentUpdate check.
Conflicts:
    .gitignore
    OpenSim/Region/CoreModules/Scripting/HttpRequest/ScriptsHttpRequests.cs
    OpenSim/Region/CoreModules/World/Land/LandManagementModule.cs
    prebuild.xml
    runprebuild.bat
This records how many incoming packets were flagged as resends by clients.
Not 100% reliable, since clients can lie about resends, but a rising count usually indicates that clients are not receiving UDP acks at all or not receiving them in what they consider a timely manner.
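A minimal sketch of this kind of counter, assuming a hypothetical stats holder and a per-packet "resent" flag (the names below are illustrative, not OpenSim's actual types):

    using System.Threading;

    // Hypothetical counter for incoming packets that the client marked as resends.
    public class IncomingResendStatSketch
    {
        private long m_incomingPacketsResent;

        public long IncomingPacketsResentCount
        {
            get { return Interlocked.Read(ref m_incomingPacketsResent); }
        }

        public void RecordIncomingPacket(bool resentFlagSet)
        {
            // The flag is set by the viewer, so it can be wrong, but a steadily
            // climbing count suggests our acks are not reaching it in time.
            if (resentFlagSet)
                Interlocked.Increment(ref m_incomingPacketsResent);
        }
    }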
Conflicts:
    OpenSim/Region/ClientStack/Linden/UDP/LLUDPServer.cs
first added a few commits ago
This allows one to monitor the total number of messages resent to clients over time.
A constantly increasing stat may indicate a general server network or overloading issue if it represents a fairly high proportion of the packets sent.
A smaller but constantly increasing stat may indicate a problem with a particular client-server connection; check "show queues" in that case.
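As a rough illustration of how such a stat might be interpreted, a hypothetical monitor could compare resends against total packets sent; the threshold and names below are assumptions, not values used by OpenSim:

    // Hypothetical classification helper for an outgoing-resends stat.
    public static class ResendMonitorSketch
    {
        public static string Classify(long packetsSent, long packetsResent)
        {
            if (packetsSent == 0)
                return "no traffic yet";

            double ratio = (double)packetsResent / packetsSent;

            if (ratio > 0.05)       // assumed threshold: >5% resends looks server-wide
                return "possible general network or overloading issue";

            if (packetsResent > 0)  // low but climbing: look at individual connections
                return "check individual connections with 'show queues'";

            return "healthy";
        }
    }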
Appears to be a method that was never used.
if we sent immediately)
Conflicts:
    OpenSim/Data/MySQL/MySQLSimulationData.cs
    OpenSim/Data/MySQL/Resources/RegionStore.migrations
    OpenSim/Region/ClientStack/Linden/Caps/WebFetchInvDescModule.cs
    OpenSim/Region/CoreModules/Avatar/Attachments/AttachmentsModule.cs
    OpenSim/Region/CoreModules/Avatar/Inventory/Transfer/InventoryTransferModule.cs
    OpenSim/Region/CoreModules/World/LightShare/LightShareModule.cs
    OpenSim/Region/Framework/Scenes/Scene.cs
    OpenSim/Region/Framework/Scenes/ScenePresence.cs
    OpenSim/Region/Framework/Scenes/Tests/ScenePresenceCapabilityTests.cs
    OpenSim/Region/OptionalModules/World/NPC/NPCModule.cs
    OpenSim/Region/ScriptEngine/Shared/Api/Implementation/LSL_Api.cs
the code that this is symmetric with CloseAgent()
it clear that all non-clientstack callers should be using this rather than RemoveClient() in order to step through the ScenePresence state machine properly.
Adds IScene.CloseAgent() to replace RemoveClient()
LLUDPServer.HandleCompleteMovementIntoRegion() to fix race condition regression in commit 7dbc93c (Wed Sep 18 21:41:51 2013 +0100)
This check is necessary to close a race condition where the CompleteAgentMovement processing could proceed when the UseCircuitCode thread had added the client to the client manager but before the ScenePresence had registered to process the CompleteAgentMovement message.
This is most probably why the message appeared to get lost on a proportion of entity transfers.
A better long-term solution may be to set the IClientAPI.SceneAgent property before the client is added to the manager.
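A minimal sketch of the kind of guard described here, assuming a hypothetical client interface whose SceneAgent property is set once the ScenePresence exists (attempt count and sleep interval are illustrative):

    using System.Threading;

    public interface IClientSketch
    {
        // Set by the scene once the ScenePresence has been created and registered.
        object SceneAgent { get; }
    }

    public static class CompleteMovementGuardSketch
    {
        // Poll briefly so the UseCircuitCode thread has time to finish setting up
        // the presence before we hand it the CompleteAgentMovement message.
        public static bool WaitForSceneAgent(IClientSketch client, int attempts = 20, int sleepMs = 100)
        {
            for (int i = 0; i < attempts; i++)
            {
                if (client.SceneAgent != null)
                    return true;

                Thread.Sleep(sleepMs);
            }

            return false; // caller drops the packet and logs the sending endpoint
        }
    }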
21:41:51 2013 +0100)
LLUDPServer.HandleCompleteMovementIntoRegion()
This is to deal with one aspect of http://opensimulator.org/mantis/view.php?id=6755
With the V2 teleport arrangements, viewers appear to send the single UseCircuitCode and CompleteAgentMovement packets immediately after each other
Possibly, on occasion, a poor network might drop the initial UseCircuitCode packet, and by the time it is retried the CompleteAgentMovement has already timed out and the teleport fails.
There's no apparent harm in doubling the wait time (most of the time only one wait will be performed), so trying this.
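For context, the effect of doubling can be sketched as a simple wait budget; the numbers below are assumptions for illustration, not the actual values used in LLUDPServer:

    // Hypothetical wait budget for a HandleCompleteMovementIntoRegion-style retry loop.
    public static class MovementWaitBudgetSketch
    {
        public const int AttemptCount = 40;       // doubled from an assumed 20
        public const int SleepPerAttemptMs = 100;

        // With these numbers the handler waits up to 4 seconds, which should
        // comfortably cover a viewer retrying a dropped UseCircuitCode packet.
        public static int TotalWaitMs { get { return AttemptCount * SleepPerAttemptMs; } }
    }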
LLUDPServer.HandleCompleteMovementIntoRegion()
Add more information on which endpoint sent the packet when we have to wait and if we end up dropping the packet
Only check whether the client is active - the other checks are redundant since they can only fail if IsActive = false.
Conflicts:
    OpenSim/Framework/Servers/HttpServer/PollServiceRequestManager.cs
    OpenSim/Region/CoreModules/Avatar/Attachments/AttachmentsModule.cs
    OpenSim/Services/Connectors/Neighbour/NeighbourServicesConnector.cs
well-formed packets that were not initial connection packets and could not be associated with a connected viewer.
a malformed packet. Record this as stat clientstack.<scene>.IncomingPacketsMalformedCount
Used to detect whether a simulator is receiving significant amounts of junk UDP.
Decimates the number of packets between which a warning is logged, and prints the IP source of the last malformed packet when logging.
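A small sketch of this style of accounting, assuming a hypothetical stat holder; the logging interval and names are illustrative only:

    using System;
    using System.Net;
    using System.Threading;

    // Hypothetical malformed-packet counter with throttled warning output.
    public class MalformedPacketStatSketch
    {
        private const int LogEveryN = 10000;  // assumed interval between warnings

        private long m_malformedCount;
        private EndPoint m_lastMalformedSource;

        public long IncomingPacketsMalformedCount
        {
            get { return Interlocked.Read(ref m_malformedCount); }
        }

        public void RecordMalformedPacket(EndPoint source)
        {
            m_lastMalformedSource = source;
            long count = Interlocked.Increment(ref m_malformedCount);

            // Only log every Nth bad datagram so junk traffic cannot flood the console.
            if (count % LogEveryN == 0)
                Console.WriteLine(
                    "[UDP]: Received {0} malformed packets so far, last from {1}",
                    count, m_lastMalformedSource);
        }
    }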
Conflicts:
    OpenSim/Region/Application/OpenSimBase.cs
    OpenSim/Region/ClientStack/Linden/UDP/LLClientView.cs
    OpenSim/Region/ClientStack/Linden/UDP/LLUDPServer.cs
    OpenSim/Region/Framework/Scenes/Scene.cs
    OpenSim/Region/Framework/Scenes/ScenePresence.cs
a line from A->B->C would not close region A when reaching C.
The root cause was that v2 was only closing neighbour agents if the root connection also needed a close.
However, fixing this requires that the neighbour regions also detect when they should not close because a re-teleport has re-established the child connection.
This involves restructuring the code to introduce a scene presence state machine that can serialize the different add and remove client calls that are now possible with the late close of the agent.
This commit appears to fix these issues and improve teleport, but still has holes on at least quick re-teleporting (and possibly occasionally on ordinary teleports).
Also, this has not been completely tested yet in scenarios where regions are running on different simulators.
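A very reduced sketch of what such a state machine might look like; the states and method names are illustrative, not the ones actually added to OpenSim:

    public enum PresenceStateSketch { Running, Removing, Removed }

    // Hypothetical serializer for overlapping add/close requests on one agent.
    public class PresenceStateMachineSketch
    {
        private readonly object m_sync = new object();
        private PresenceStateSketch m_state = PresenceStateSketch.Running;

        // Returns false if a close is already in progress or done, so a late or
        // duplicate close (e.g. after a quick re-teleport) is simply ignored.
        public bool TryBeginClose()
        {
            lock (m_sync)
            {
                if (m_state != PresenceStateSketch.Running)
                    return false;

                m_state = PresenceStateSketch.Removing;
                return true;
            }
        }

        public void CompleteClose()
        {
            lock (m_sync)
                m_state = PresenceStateSketch.Removed;
        }
    }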
Conflicts:
    OpenSim/Region/CoreModules/Avatar/Attachments/AttachmentsModule.cs
immediately if any data was sent, rather than waiting.
What I believe is happening is that on initial terrain send, this is done one packet at a time.
With WaitOne, the outbound loop has enough time to loop and wait again after the first packet before the second, leading to a slower send.
This approach instead does not wait if a packet was just sent but instead loops again, which appears to lead to a quicker send without losing the cpu benefit of not continually looping when there is no outbound data.
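A minimal sketch of that loop shape, assuming a hypothetical send routine and a 20ms wait; none of this is the actual LLUDPServer outbound code:

    using System.Threading;

    // Hypothetical outbound loop: only block on the wait handle when the previous
    // pass sent nothing, so bursts such as the initial terrain send go out back to back.
    public class OutgoingLoopSketch
    {
        private readonly AutoResetEvent m_dataPresent = new AutoResetEvent(false);
        private volatile bool m_running = true;

        public void Run()
        {
            while (m_running)
            {
                bool sentData = SendOutgoingPackets();

                // Sleep on the handle only when the queues were empty; otherwise
                // loop again immediately because more packets are probably queued.
                if (!sentData)
                    m_dataPresent.WaitOne(20);
            }
        }

        public void Stop()
        {
            m_running = false;
            m_dataPresent.Set();
        }

        private bool SendOutgoingPackets()
        {
            // Placeholder: dequeue and send pending packets, return true if any went out.
            return false;
        }
    }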
d9d995914c5fba00d4ccaf66b899384c8ea3d5eb (r/23185) -- the WaitOne on the UDPServer. Putting it back to how it was done solves the issue. But this may impact CPU usage, so I'm pushing it to test if it does."
This reverts commit 59b461ac0eaae1cc34bb82431106fdf0476037f3.
d9d995914c5fba00d4ccaf66b899384c8ea3d5eb (r/23185) -- the WaitOne on the UDPServer. Putting it back to how it was done solves the issue. But this may impact CPU usage, so I'm pushing it to test if it does.
Conflicts:
    OpenSim/Region/ClientStack/Linden/Caps/GetTextureModule.cs
reflect it is a timeout due to no data received rather than an ack issue.
I accidentally left in a test line to force very quick client unack
add this to the StatsManager
This reflects the actual use of this stat - it hasn't recorded general exceptions for some time.
Make the sim extra stats collector draw the data from the stats manager rather than maintaining this data itself.
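The direction described here can be sketched with a toy central registry that a reporting collector reads from; this is not the actual StatsManager API, and the stat name shown is a placeholder:

    using System.Collections.Concurrent;

    // Toy stat registry: the stat lives in one place and reporters read it from here
    // instead of keeping their own copies.
    public static class StatRegistrySketch
    {
        private static readonly ConcurrentDictionary<string, long> s_stats =
            new ConcurrentDictionary<string, long>();

        public static void Increment(string name)
        {
            s_stats.AddOrUpdate(name, 1, (k, v) => v + 1);
        }

        public static long Get(string name)
        {
            long value;
            return s_stats.TryGetValue(name, out value) ? value : 0;
        }
    }

    // A stats collector would then pull the value when building its report, e.g.:
    //   long n = StatRegistrySketch.Get("clientstack.ExampleStat");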
Conflicts:
    OpenSim/Region/CoreModules/Framework/EntityTransfer/EntityTransferModule.cs
    OpenSim/Region/Framework/Scenes/Scene.cs
    OpenSim/Region/Framework/Scenes/ScenePresence.cs
    bin/OpenSimDefaults.ini
the destination is being checked)
In this new protocol, and as committed before, the viewer is not sent EnableSimulator/EstablishChildCommunication for the destination. Instead, it is sent TeleportFinish directly. TeleportFinish, in turn, makes the viewer send a UseCircuitCode packet followed by a CompleteMovementIntoRegion packet. These two packets tend to occur one after the other almost immediately, to the point that when CMIR arrives the client is not even connected yet and that packet is ignored (there might have been some race conditions here before); then the viewer sends CMIR again within 5-8 secs. But the delay between them may be higher in busier regions, which may lead to race conditions.
This commit improves the process so there are no race conditions at the destination. CompleteMovement (triggered by the viewer) waits until Update has been sent from the origin. Update, in turn, waits until there is a *root* scene presence -- thus making sure CompleteMovement has run MakeRoot. In other words, there are two threadlets at the destination, one from the viewer and one from the origin region, waiting for each other to do the right thing. That makes it safe to close the agent at the origin upon return of the Update call without having to wait for the callback, because we are absolutely sure that the viewer knows it is in the new region.
Note also that in the V1 protocol, the destination was getting UseCircuitCode from the viewer twice -- once on EstablishAgentCommunication and then again on TeleportFinish. The second UCC was being ignored, but it shows how we were not following the expected steps...
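The two-threadlet handshake can be sketched with a pair of wait handles; the class, method names and timeouts below are illustrative, not the actual Scene/ScenePresence code:

    using System.Threading;

    // Hypothetical destination-side handshake: CompleteMovement waits for the
    // origin's Update to have arrived, Update waits until the presence is root.
    public class TeleportHandshakeSketch
    {
        private readonly ManualResetEventSlim m_updateArrived = new ManualResetEventSlim(false);
        private readonly ManualResetEventSlim m_becameRoot = new ManualResetEventSlim(false);

        // Runs on the thread servicing the viewer's CompleteMovementIntoRegion packet.
        public bool CompleteMovement(int timeoutMs)
        {
            if (!m_updateArrived.Wait(timeoutMs))
                return false;              // origin never sent Update: give up

            // ... MakeRoot-style promotion of the presence would happen here ...
            m_becameRoot.Set();
            return true;
        }

        // Runs on the thread servicing the origin region's Update call.
        public bool Update(int timeoutMs)
        {
            m_updateArrived.Set();
            return m_becameRoot.Wait(timeoutMs);  // only return once the presence is root
        }
    }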
Conflicts:
    OpenSim/Framework/Servers/HttpServer/PollServiceRequestManager.cs
    OpenSim/Region/ClientStack/Linden/UDP/LLClientView.cs
    OpenSim/Region/ClientStack/Linden/UDP/LLUDPServer.cs
    OpenSim/Region/Framework/Scenes/Scene.PacketHandlers.cs
    OpenSim/Region/Framework/Scenes/ScenePresence.cs
    OpenSim/Region/Physics/Manager/PhysicsActor.cs
    OpenSim/Region/Physics/Manager/PhysicsScene.cs
waiting to be processed at the second stage (after initial UDP processing)
If this consistently increases then this is a problem since it means the simulator is receiving more requests than it can distribute to other parts of the code.
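A minimal sketch of such a gauge, assuming hypothetical enqueue/dequeue hooks around the second-stage queue; the names are illustrative:

    using System.Threading;

    // Hypothetical gauge of packets sitting between raw UDP receive and the
    // second-stage handler.
    public class AwaitingProcessingGaugeSketch
    {
        private long m_awaiting;

        public long AwaitingProcessingCount
        {
            get { return Interlocked.Read(ref m_awaiting); }
        }

        public void PacketQueued()   { Interlocked.Increment(ref m_awaiting); }
        public void PacketDequeued() { Interlocked.Decrement(ref m_awaiting); }
    }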
packets sent by a region per second
taking on the initial processing of a UDP packet.
If we're not receiving packets with multiple threads (m_asyncPacketHandling) then this is critical since it will limit the number of incoming UDP requests that the region can handle and affects packet loss.
If m_asyncPacketHandling is enabled then this is less critical, though a long process will increase the scope for threads to race.
This is an experimental stat which may be changed.
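One way such a timing stat could be taken is sketched below; the averaging scheme and names are assumptions, not the stat actually added:

    using System;
    using System.Diagnostics;
    using System.Threading;

    // Hypothetical running average of how long the synchronous, pre-queue part of
    // handling one inbound datagram takes.
    public class InitialProcessingTimerSketch
    {
        private long m_totalTicks;
        private long m_samples;

        public double AverageMilliseconds
        {
            get
            {
                long samples = Interlocked.Read(ref m_samples);
                if (samples == 0)
                    return 0;

                return (double)Interlocked.Read(ref m_totalTicks) / samples / TimeSpan.TicksPerMillisecond;
            }
        }

        public void ProcessInbound(Action handlePacket)
        {
            Stopwatch sw = Stopwatch.StartNew();
            handlePacket();                  // the initial processing under measurement
            sw.Stop();

            Interlocked.Add(ref m_totalTicks, sw.Elapsed.Ticks);
            Interlocked.Increment(ref m_samples);
        }
    }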
AgentUpdate packet. This fixes the problem with vehicles not moving forward after the first up-arrow.
Code to fix a potential exception when using different IClientAPIs.
observations about AgentUpdates.
significant much earlier in UDP processing (i.e. before we pointlessly place such packets on internal queues, etc.)
Appears to have some impact on cpu but needs testing.
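A sketch of an early significance test of this kind; the compared fields and tolerance are illustrative, not the full AgentUpdate layout:

    using System;

    // Hypothetical snapshot of the AgentUpdate fields that matter for significance.
    public struct AgentUpdateSnapshotSketch
    {
        public uint ControlFlags;
        public float RotX, RotY, RotZ, RotW;
    }

    public static class AgentUpdateFilterSketch
    {
        private const float RotTolerance = 0.001f;  // assumed tolerance

        // Returns false when nothing meaningful changed, so the packet can be
        // dropped before it is placed on any internal queue.
        public static bool IsSignificant(AgentUpdateSnapshotSketch prev, AgentUpdateSnapshotSketch cur)
        {
            if (cur.ControlFlags != prev.ControlFlags)
                return true;

            return Math.Abs(cur.RotX - prev.RotX) > RotTolerance
                || Math.Abs(cur.RotY - prev.RotY) > RotTolerance
                || Math.Abs(cur.RotZ - prev.RotZ) > RotTolerance
                || Math.Abs(cur.RotW - prev.RotW) > RotTolerance;
        }
    }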