PacketPool.GetType()
Instead, adjusts the code to match that of Packet.BuildHeader(byte[], ref int, ref int):Header in libomv.
This stops packet decoding failures for agent UUIDs that contain 00 bytes in their earlier parts (e.g. b0b0b0b0-0000-0000-0000-000000000211).
Thanks to lkalif for pointing this out.
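
A minimal sketch of the frequency-based message ID decode that Packet.BuildHeader performs, showing why 00 bytes elsewhere in the packet (e.g. inside an agent UUID) must be treated as ordinary payload: only the 0xFF escape bytes determine the ID width. Names and offsets below are illustrative, not the actual libomv source.

    public enum PacketFrequency { High, Medium, Low }

    public static class LLUDPHeaderSketch
    {
        // Reads the frequency-encoded message ID from an LLUDP packet:
        // 1 flag byte + 4 sequence bytes + 1 extra-header-length byte,
        // then the ID itself.
        public static (PacketFrequency freq, int id) ReadMessageId(byte[] data)
        {
            int pos = 6 + data[5]; // skip any extra header bytes

            if (data[pos] != 0xFF)
                return (PacketFrequency.High, data[pos]); // one-byte ID

            if (data[pos + 1] != 0xFF)
                return (PacketFrequency.Medium, data[pos + 1]); // 0xFF + one byte

            // Low frequency: 0xFF 0xFF + big-endian two-byte ID
            return (PacketFrequency.Low, (data[pos + 2] << 8) | data[pos + 3]);
        }
    }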
occurring on every login and entity transfer
well-formed packets that were not initial connection packets and could not be associated with a connected viewer.
a malformed packet. Record this as stat clientstack.<scene>.IncomingPacketsMalformedCount.
This is used to detect whether a simulator is receiving significant junk UDP.
Also decimates the number of packets between which a warning is logged, and prints the IP source of the last malformed packet when logging.
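
A rough sketch of the accounting described, assuming a simple modulo scheme for the decimated warning; the real counter and stat registration live in OpenSim's LLUDPServer/StatsManager and differ in detail.

    using System;
    using System.Net;

    public class MalformedPacketStatSketch
    {
        private int m_malformedCount; // exposed as IncomingPacketsMalformedCount
        private const int LogInterval = 10000; // warn only every N malformed packets

        public void RecordMalformed(IPEndPoint sender)
        {
            m_malformedCount++;
            if (m_malformedCount % LogInterval == 0)
                Console.WriteLine(
                    "[LLUDPSERVER]: {0} malformed packets so far, last from {1}",
                    m_malformedCount, sender);
        }
    }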
client without a queue to include the event message name.
a line from A->B->C would not close region A when reaching C.
The root cause was that v2 was only closing neighbour agents if the root connection also needed a close.
However, fixing this requires the neighbour regions to also detect when they should not close because a re-teleport has re-established the child connection.
This involves restructuring the code to introduce a scene presence state machine that can serialize the different add and remove client calls that are now possible with the late close of the agent.
This commit appears to fix these issues and improve teleport, but still has holes on at least quick re-teleporting (and possibly occasionally on ordinary teleports).
It has also not been completely tested yet in scenarios where regions are running on different simulators.
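
A minimal sketch of the kind of state machine meant here, assuming four states and lock-serialized transitions; the actual OpenSim implementation has its own states and rules.

    public enum PresenceState { Running, PreRemove, Removing, Removed }

    public class PresenceStateMachineSketch
    {
        private readonly object m_lock = new object();
        private PresenceState m_state = PresenceState.Running;

        // A close may only begin from Running, serializing add/remove calls.
        public bool TryBeginClose()
        {
            lock (m_lock)
            {
                if (m_state != PresenceState.Running)
                    return false;
                m_state = PresenceState.PreRemove;
                return true;
            }
        }

        // A re-teleport that re-establishes the child connection cancels a
        // pending close instead of letting the region shut the agent down.
        public bool TryCancelClose()
        {
            lock (m_lock)
            {
                if (m_state != PresenceState.PreRemove)
                    return false;
                m_state = PresenceState.Running;
                return true;
            }
        }
    }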
change only affects the editing user's experience. Non-editing users will see nothing different from the current 'slow' result. See comments for the thought process and how the issues surrounding terrain editing, cache, bandwidth, threading, terrain patch reliability and throttling were balanced.
forget conditions.
I generally prefer this approach for regression tests because of the complexity of accounting for different threading conditions.
immediately if any data was sent, rather than waiting.
What I believe is happening is that on the initial terrain send, this is done one packet at a time.
With WaitOne, the outbound loop has enough time to loop and wait again after the first packet before the second, leading to a slower send.
This approach instead does not wait if a packet was just sent, but loops again, which appears to lead to a quicker send without losing the CPU benefit of not continually looping when there is no outbound data.
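
A sketch of the loop shape being described, assuming an AutoResetEvent as the wait primitive; the real outgoing packet handler in OpenSim differs in detail.

    using System.Threading;

    public class OutboundLoopSketch
    {
        private readonly AutoResetEvent m_dataReady = new AutoResetEvent(false);
        private volatile bool m_running = true;

        public void OutgoingPacketHandlerLoop()
        {
            while (m_running)
            {
                bool sentAnything = SendQueuedPackets();

                // Only block when nothing was sent this pass: back-to-back
                // packets (e.g. initial terrain) then go out without a wait
                // between them, while an idle loop still sleeps.
                if (!sentAnything)
                    m_dataReady.WaitOne(20);
            }
        }

        private bool SendQueuedPackets()
        {
            return false; // placeholder for the real dequeue-and-send logic
        }

        public void SignalDataReady() { m_dataReady.Set(); }
    }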
d9d995914c5fba00d4ccaf66b899384c8ea3d5eb (r/23185) -- the WaitOne on the UDPServer. Putting it back to how it was done solves the issue. But this may impact CPU usage, so I'm pushing it to test if it does."
This reverts commit 59b461ac0eaae1cc34bb82431106fdf0476037f3.
d9d995914c5fba00d4ccaf66b899384c8ea3d5eb (r/23185) -- the WaitOne on the UDPServer. Putting it back to how it was done solves the issue. But this may impact CPU usage, so I'm pushing it to test if it does.
cap is something other than "localhost". A new interface for handling
external caps is supported, with an example implemented for Simian. The
only Linden cap supporting this interface right now is the GetTexture
cap.
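
A guess at the shape of such an interface, for illustration only; the real module interface in OpenSim differs, and Guid here stands in for OpenMetaverse.UUID.

    using System;

    public interface IExternalCapsHandler
    {
        // Ask an external service (e.g. Simian) to handle a named cap for
        // this agent; return false to fall back to the local "localhost"
        // handler, so only selected caps (like GetTexture) are delegated.
        bool TryDelegateCap(Guid agentId, string capName, out string capUrl);
    }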
HGMapModule, hooking onto the SimulatorFeatures.OnSimulatorFeaturesRequest event (similar to what the DynamicMenuModule does).
Only HG visitors get this var, to avoid spamming local users.
The config var is now called MapTileURL, to be consistent with the login one, and it's picked up from either [LoginService], [HGWorldMap] or [SimulatorFeatures], just because I have a bad memory.
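
A sketch of the hook being described, assuming OpenMetaverse's OSDMap and a handler signature like OpenSim's SimulatorFeatures delegate; the key name and the visitor check are illustrative assumptions.

    using OpenMetaverse;
    using OpenMetaverse.StructuredData;

    public class HGMapFeatureSketch
    {
        private readonly string m_mapTileURL;

        public HGMapFeatureSketch(string mapTileURL)
        {
            m_mapTileURL = mapTileURL;
        }

        // Runs when a viewer requests the SimulatorFeatures capability.
        public void OnSimulatorFeaturesRequest(UUID agentID, ref OSDMap features)
        {
            if (!IsHypergridVisitor(agentID))
                return; // local users keep their own grid's map service

            if (!features.ContainsKey("OpenSimExtras"))
                features["OpenSimExtras"] = new OSDMap();

            ((OSDMap)features["OpenSimExtras"])["map-server-url"] = m_mapTileURL;
        }

        private bool IsHypergridVisitor(UUID agentID)
        {
            return true; // placeholder: the real check asks the user management module
        }
    }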
reflect it is a timeout due to no data received rather than an ack issue.
I accidentally left in a test line to force very quick client unack
add this to the StatsManager.
This reflects the actual use of this stat - it hasn't recorded general exceptions for some time.
Make the sim extra stats collector draw the data from the stats manager rather than maintaining this data itself.
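
A toy version of the pattern, assuming a pull-based registry: producers register a stat once and collectors read through the registry instead of keeping their own copy. Names are illustrative, not OpenSim's StatsManager API.

    using System;
    using System.Collections.Concurrent;

    public static class StatsRegistrySketch
    {
        private static readonly ConcurrentDictionary<string, Func<double>> s_stats =
            new ConcurrentDictionary<string, Func<double>>();

        public static void RegisterStat(string name, Func<double> pull)
        {
            s_stats[name] = pull;
        }

        // Collectors (e.g. the sim extra stats collector) pull values here.
        public static double GetStat(string name)
        {
            Func<double> pull;
            return s_stats.TryGetValue(name, out pull) ? pull() : 0;
        }
    }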
OSDMap GridServices to OpenSimExtras, normalized the URL keys under it, and moved ExportEnabled under it too. Melanie: change your viewer code accordingly.
Documentation at http://opensimulator.org/wiki/SimulatorFeatures_Extras
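
A sketch of the resulting layout, using OpenMetaverse's OSDMap; the key names are taken from memory of the wiki page cited above and the values are made up, so check the documentation rather than trusting this block.

    using OpenMetaverse.StructuredData;

    public static class OpenSimExtrasSketch
    {
        public static OSDMap Build()
        {
            var extras = new OSDMap();
            extras["map-server-url"] = "http://example.org/map";       // illustrative
            extras["search-server-url"] = "http://example.org/search"; // illustrative
            extras["ExportEnabled"] = true;

            var features = new OSDMap();
            features["OpenSimExtras"] = extras; // everything grouped under one key
            return features;
        }
    }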
join/drop appropriately, invitechatboxes.
The major departure from flotsam is to send only one message per destination region, as opposed to one message per group member. This reduces messaging considerably in large groups that have clusters of members in certain regions.
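
A compact sketch of that fan-out, with tuples standing in for OpenSim's group/IM types: group online members by region and emit one relay message per region.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class GroupMessageFanoutSketch
    {
        public static void SendToGroup(
            IEnumerable<(Guid agentId, Guid regionId)> onlineMembers,
            Action<Guid, Guid[]> sendToRegion) // (regionId, recipients)
        {
            foreach (var byRegion in onlineMembers.GroupBy(m => m.regionId))
            {
                // One message per destination region; the receiving region
                // delivers locally to each listed recipient.
                sendToRegion(byRegion.Key,
                             byRegion.Select(m => m.agentId).ToArray());
            }
        }
    }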
because DisableSimulator is now being sent via UDP
as needed. (Although sometimes Justin's state machine kicks in and doesn't let you.) The EventQueues are a hairy mess, and it's very easy to mess things up. But it looks like this commit makes them work right. Here's what's going on:
- Child and root agents are only closed after 15 sec, maybe
- If the user comes back, they aren't closed, and everything is reused
- On the receiving side, clients and scene presences are reused if they already exist
- Caps are always recreated (this is where I spent most of my time!). It turns out that, because the agents carry the seeds around, the seed gets the same URL, except for the root agent coming back to a far-away region, which gets a new seed (because we don't know what its seed was in the departing region, and we can't send it back to the client when the agent returns there).
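
A small sketch of the delayed close with cancellation, assuming a plain Timer; the real code paths (and the 15 sec figure above) live in OpenSim's agent/EQ handling.

    using System;
    using System.Threading;

    public class DelayedAgentCloseSketch
    {
        private readonly object m_lock = new object();
        private Timer m_closeTimer;

        // Schedule the close ~15s out instead of closing immediately.
        public void ScheduleClose(Action doClose, TimeSpan delay)
        {
            lock (m_lock)
            {
                if (m_closeTimer != null)
                    m_closeTimer.Dispose();
                m_closeTimer = new Timer(_ => doClose(), null, delay,
                                         Timeout.InfiniteTimeSpan);
            }
        }

        // If the user comes back first, cancel so the client, scene presence
        // and caps can be reused.
        public void CancelClose()
        {
            lock (m_lock)
            {
                if (m_closeTimer != null)
                {
                    m_closeTimer.Dispose();
                    m_closeTimer = null;
                }
            }
        }
    }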
with high priority and hopefully gets to the client before AgentMovementComplete
the destination is being checked)
In this new protocol, and as committed before, the viewer is not sent EnableSimulator/EstablishChildCommunication for the destination. Instead, it is sent TeleportFinish directly. TeleportFinish, in turn, makes the viewer send a UseCircuitCode packet followed by a CompleteMovementIntoRegion packet. These 2 packets tend to occur one after the other almost immediately, to the point that when CMIR arrives the client is not even connected yet and that packet is ignored (there might have been some race conditions here before); the viewer then sends CMIR again within 5-8 secs. But the delay between them may be higher in busier regions, which may lead to race conditions.
This commit improves the process so there are no race conditions at the destination. CompleteMovement (triggered by the viewer) waits until Update has been sent from the origin. Update, in turn, waits until there is a *root* scene presence -- thus making sure CompleteMovement has run MakeRoot. In other words, there are two threadlets at the destination, one from the viewer and one from the origin region, waiting for each other to do the right thing. That makes it safe to close the agent at the origin upon return of the Update call without having to wait for the callback, because we are absolutely sure that the viewer knows it is in the new region.
Note also that in the V1 protocol, the destination was getting UseCircuitCode from the viewer twice -- once on EstablishAgentCommunication and then again on TeleportFinish. The second UCC was being ignored, but it shows how we were not following the expected steps...
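
A schematic of the two waiting threadlets, assuming ManualResetEventSlim for signalling; illustrative names, not the committed code.

    using System.Threading;

    public class TeleportHandshakeSketch
    {
        private readonly ManualResetEventSlim m_updateArrived =
            new ManualResetEventSlim(false);
        private readonly ManualResetEventSlim m_rootPresenceReady =
            new ManualResetEventSlim(false);

        // Threadlet from the origin region: Update only returns (letting the
        // origin close the agent) once the presence here has been made root.
        public bool OnUpdateFromOrigin(int timeoutMs)
        {
            m_updateArrived.Set();
            return m_rootPresenceReady.Wait(timeoutMs);
        }

        // Threadlet from the viewer: CompleteMovement waits for the origin's
        // Update before running MakeRoot.
        public bool OnCompleteMovement(int timeoutMs)
        {
            if (!m_updateArrived.Wait(timeoutMs))
                return false;
            MakeRoot();
            m_rootPresenceReady.Set();
            return true;
        }

        private void MakeRoot() { /* promote the child presence to root */ }
    }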
waiting to be processed at the second stage (after initial UDP processing)
If this consistently increases, then this is a problem, since it means the simulator is receiving more requests than it can distribute to other parts of the code.
packets sent by a region per second
taking on the initial processing of a UDP packet.
If we're not receiving packets with multiple threads (m_asyncPacketHandling), then this is critical, since it will limit the number of incoming UDP requests that the region can handle and will affect packet loss.
If m_asyncPacketHandling is on, then this is less critical, though a long process will increase the scope for threads to race.
This is an experimental stat which may be changed.
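
A sketch of how such a stat could be sampled, assuming a Stopwatch around the first-stage handler; the real stat plumbing and names differ.

    using System.Diagnostics;

    public class UdpProcessingTimeSketch
    {
        public double LastProcessingMs { get; private set; }

        public void ProcessIncoming(byte[] buffer)
        {
            Stopwatch sw = Stopwatch.StartNew();
            try
            {
                HandlePacket(buffer); // the initial decode/dispatch step
            }
            finally
            {
                sw.Stop();
                // Without m_asyncPacketHandling this time directly bounds
                // how many packets per second the region can accept.
                LastProcessingMs = sw.Elapsed.TotalMilliseconds;
            }
        }

        private void HandlePacket(byte[] buffer) { }
    }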
AgentUpdate packet. This fixes the problem with vehicles not moving forward
after the first up-arrow.
Code to fix a potential exception when using different IClientAPIs.
the equation. (head rotation was the problematic one)
lkalif for telling me how to route the information. The viewer effect is under the distance filter, so only avatars with cameras < 10m away see the beams.
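
A sketch of a camera-distance filter like the one described, with System.Numerics.Vector3 standing in for the OpenMetaverse type and the 10m cutoff taken from the message above.

    using System.Collections.Generic;
    using System.Numerics;

    public static class ViewerEffectFilterSketch
    {
        private const float MaxDistance = 10f;

        // Yield only the clients whose camera is within range of the effect.
        public static IEnumerable<T> FilterRecipients<T>(
            IEnumerable<(T client, Vector3 cameraPos)> clients,
            Vector3 effectPos)
        {
            foreach (var (client, cameraPos) in clients)
                if (Vector3.Distance(cameraPos, effectPos) < MaxDistance)
                    yield return client;
        }
    }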
- The existing event to scene has been split into 2: OnAgentUpdate and OnAgentCameraUpdate, to better reflect the two types of updates that the viewer sends. We can run one without the other, which is what happens when the avie is still but the user is camming around
- Added thresholds (as opposed to equality) to determine whether the update is significant or not. I think these thresholds are OK, but we can play with them later
- Ignore updates of HeadRotation, which were problematic and aren't being used upstream
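
A sketch of threshold-based significance along the lines described, with System.Numerics types standing in for the OpenMetaverse ones and made-up threshold values.

    using System;
    using System.Numerics;

    public class AgentUpdateSignificanceSketch
    {
        private const float RotThreshold = 0.01f;  // illustrative values
        private const float CamThreshold = 0.05f;

        private uint m_lastControlFlags;
        private Quaternion m_lastBodyRot = Quaternion.Identity;
        private Vector3 m_lastCameraAt;

        // Movement (OnAgentUpdate): new control flags or a body rotation
        // change beyond the threshold count as significant; HeadRotation is
        // deliberately ignored.
        public bool CheckMovementSignificant(uint controlFlags, Quaternion bodyRot)
        {
            bool significant =
                controlFlags != m_lastControlFlags ||
                1f - Math.Abs(Quaternion.Dot(bodyRot, m_lastBodyRot)) > RotThreshold;

            if (significant)
            {
                m_lastControlFlags = controlFlags;
                m_lastBodyRot = bodyRot;
            }
            return significant;
        }

        // Camera (OnAgentCameraUpdate): fires even when the avatar is still.
        public bool CheckCameraSignificant(Vector3 cameraAt)
        {
            bool significant =
                Vector3.Distance(cameraAt, m_lastCameraAt) > CamThreshold;
            if (significant)
                m_lastCameraAt = cameraAt;
            return significant;
        }
    }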
observations about AgentUpdates.
significant much earlier in UDP processing (i.e. before we pointlessly place such packets on internal queues, etc.)
Appears to have some impact on CPU but needs testing.
to "show client stats" (i.e. sent on for further processing instead of being discarded)
Added here since it was the most convenient place
Number is in the last column, "Sig. AgentUpdates" along with percentage of all AgentUpdates
Percentage largely falls over time, most cpu for processing AgentUpdates may be in UDP processing as turning this off even earlier (with "debug lludp toggle agentupdate" results in a big cpu fall
Also tidies up display.
AgentUpdate in packets to be discarded at a very early stage.
Enabling this will stop anybody from moving on a sim, though all other updates should be unaffected.
Appears to make some CPU difference in very basic testing with a static standing avatar (though not all that much).
Need to see the results with much higher avatar numbers.
threads are started/stopped
stop out" from actually doing anything
always performing these on a separately fired thread.
This appears to improve CPU usage, since launching a new thread is more expensive than performing a small amount of inline logic.
However, this needs testing at scale.
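
A sketch of the tradeoff, assuming ThreadPool in place of OpenSim's fire-and-forget helper: cheap work runs inline on the receive path, expensive work still goes to a worker.

    using System;
    using System.Threading;

    public static class InlinePacketHandlingSketch
    {
        public static void Handle(Action work, bool cheap)
        {
            if (cheap)
                work(); // a small amount of inline logic beats a thread launch
            else
                ThreadPool.QueueUserWorkItem(_ => work());
        }
    }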
requests time out in 60 secs.
There's plenty of room for improvement in handling the EQs. Some other time...