lkalif for telling me how to route the information. The viewer effect is under the distance filter, so only avatars with cameras < 10m away see the beams.
- The existing event to the scene has been split into two: OnAgentUpdate and OnAgentCameraUpdate, to better reflect the two types of updates that the viewer sends. We can run one without the other, which is what happens when the avatar is still but the user is camming around.
- Added thresholds (as opposed to equality checks) to determine whether the update is significant or not (a sketch follows below). I think these thresholds are OK, but we can play with them later.
- Ignore updates of HeadRotation, which were problematic and aren't being used upstream.
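As a rough illustration of the threshold idea, here is a minimal, self-contained C# sketch; the class, field names and tolerance values are assumptions for illustration, not the actual OpenSim code.

    // Hypothetical sketch of threshold-based (rather than equality-based)
    // significance checks for incoming AgentUpdate data. Class, field and
    // tolerance names are assumptions, not the actual OpenSim code.
    using System.Numerics;

    public class AgentUpdateSignificance
    {
        // Tolerances instead of strict equality; these can be tuned later.
        private const float RotationTolerance = 0.01f;
        private const float PositionTolerance = 0.05f;

        private Quaternion m_lastBodyRotation;
        private Vector3 m_lastCameraCenter;
        private uint m_lastControlFlags;

        public bool IsSignificant(Quaternion bodyRotation, Vector3 cameraCenter, uint controlFlags)
        {
            // HeadRotation is deliberately not checked, per the commit above.
            bool significant =
                controlFlags != m_lastControlFlags
                || (bodyRotation - m_lastBodyRotation).Length() > RotationTolerance
                || Vector3.Distance(cameraCenter, m_lastCameraCenter) > PositionTolerance;

            if (significant)
            {
                m_lastBodyRotation = bodyRotation;
                m_lastCameraCenter = cameraCenter;
                m_lastControlFlags = controlFlags;
            }

            return significant;
        }
    }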
observations about AgentUpdates.
significant much earlier in UDP processing (i.e. before we pointlessly place such packets on internal queues, etc.).
Appears to have some impact on CPU but needs testing.
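A hypothetical sketch of where such an early check could sit in an inbound receive loop; the packet and queue types are stand-ins, not the real LLUDPServer code.

    // Hypothetical sketch of the early significance check: insignificant
    // AgentUpdates are dropped in the receive path, before they are placed
    // on any internal queue. All types here are stand-ins.
    using System.Collections.Concurrent;

    public class InboundPacket
    {
        public string Type;                 // e.g. "AgentUpdate"
        public bool CarriesNewInformation;  // result of the significance check
    }

    public class InboundLoop
    {
        private readonly ConcurrentQueue<InboundPacket> m_processingQueue =
            new ConcurrentQueue<InboundPacket>();

        public void OnPacketReceived(InboundPacket packet)
        {
            // Decide here, in the UDP receive path, rather than after the
            // packet has already been queued, dequeued and dispatched.
            if (packet.Type == "AgentUpdate" && !packet.CarriesNewInformation)
                return; // drop: nothing downstream needs to see it

            m_processingQueue.Enqueue(packet);
        }
    }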
to "show client stats" (i.e. sent on for further processing instead of being discarded)
Added here since it was the most convenient place
Number is in the last column, "Sig. AgentUpdates" along with percentage of all AgentUpdates
Percentage largely falls over time, most cpu for processing AgentUpdates may be in UDP processing as turning this off even earlier (with "debug lludp toggle agentupdate" results in a big cpu fall
Also tidies up display.
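A minimal sketch of the kind of counters that could back such a column, with hypothetical names; the real implementation may differ.

    // Hypothetical counters behind a "Sig. AgentUpdates" style column:
    // total AgentUpdates seen versus those considered significant,
    // reported together with a percentage. Names are illustrative only.
    using System.Threading;

    public class AgentUpdateStats
    {
        private long m_total;
        private long m_significant;

        public void RecordAgentUpdate(bool wasSignificant)
        {
            Interlocked.Increment(ref m_total);
            if (wasSignificant)
                Interlocked.Increment(ref m_significant);
        }

        public string Report()
        {
            long total = Interlocked.Read(ref m_total);
            long sig = Interlocked.Read(ref m_significant);
            double pct = total == 0 ? 0.0 : 100.0 * sig / total;
            return string.Format("{0} ({1:0.0}%)", sig, pct);
        }
    }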
AgentUpdate in packets to be discarded at a very early stage.
Enabling this will stop anybody from moving on a sim, though all other updates should be unaffected.
Appears to make some CPU difference in very basic testing with a static standing avatar (though not all that much).
Need to see the results with much higher avatar numbers.
threads are started/stopped
stop out" from actually doing anything
always performing these on a separately fired thread.
This appears to improve CPU usage, since launching a new thread is more expensive than performing a small amount of inline logic.
However, needs testing at scale.
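A small illustrative sketch of the dispatch decision being described; ThreadPool.QueueUserWorkItem stands in here for OpenSim's fire-and-forget helper, and the rest of the names are assumptions.

    // Illustrative sketch: cheap handlers run inline, and only potentially
    // blocking ones are pushed to a pool thread.
    using System;
    using System.Threading;

    public class PacketDispatcher
    {
        public void Dispatch(Action handler, bool mayBlock)
        {
            if (mayBlock)
            {
                // Potentially blocking work still leaves the inbound thread.
                ThreadPool.QueueUserWorkItem(_ => handler());
            }
            else
            {
                // For a small amount of logic, dispatching to another thread
                // costs more than just running the handler here.
                handler();
            }
        }
    }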
requests time out in 60 secs.
There's plenty of room for improvement in handling the EQs. Some other time...
a continuous loop with sleeps.
Does appear to have a CPU impact, but may need further tweaking.
successfully tested, and I'm merging back those changes, which proved to
be good.
Revert "Revert "Cleared up much confusion in PollServiceRequestManager. Here's the history:""
This reverts commit fa2370b32ee57a07f27501152c3c705a883b13d8.
history:"
This reverts commit e46459ef21e1ee5ceaeca70365a7c881d33b09ce.
When Melanie added the web fetch inventory throttle to core, she effectively made the long poll requests (EQs) be handled in an active loop. All those requests, if they existed, were being constantly dequeued, checked for events (which most often they didn't have), and requeued again. This was an active loop thread on a 100ms cycle!
This fixes the issue. Now the inventory requests, if they aren't ready to be served, are placed directly back in the queue, but the long poll requests aren't placed there until there are events ready to be sent or the timeout has been reached.
This puts the LongPollServiceWatcherThread back to a 1 sec cycle, as it was before.
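A hypothetical sketch of that requeue policy, assuming illustrative type and member names rather than the actual PollServiceRequestManager code: ordinary requests that are not ready go straight back on the queue, while long poll requests are only queued for servicing once they have events or have timed out, checked on a roughly one second cycle.

    using System;
    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using System.Threading;

    public class PollRequest
    {
        public bool IsLongPoll;
        public DateTime Expiry;
        public Func<bool> HasEvents;   // true when there is something to send
    }

    public class PollServiceWatcher
    {
        private readonly ConcurrentQueue<PollRequest> m_retryQueue = new ConcurrentQueue<PollRequest>();
        private readonly ConcurrentQueue<PollRequest> m_readyToServe = new ConcurrentQueue<PollRequest>();
        private readonly List<PollRequest> m_longPollRequests = new List<PollRequest>();

        public void Enqueue(PollRequest req)
        {
            if (req.IsLongPoll)
                lock (m_longPollRequests) m_longPollRequests.Add(req);
            else
                m_retryQueue.Enqueue(req);   // ordinary request: straight back in the queue
        }

        // Watcher runs on a ~1 second cycle instead of a hot 100 ms loop.
        public void WatcherLoop(CancellationToken token)
        {
            while (!token.IsCancellationRequested)
            {
                lock (m_longPollRequests)
                {
                    m_longPollRequests.RemoveAll(req =>
                    {
                        if (req.HasEvents() || DateTime.UtcNow > req.Expiry)
                        {
                            // Only now does a long poll request re-enter a queue.
                            m_readyToServe.Enqueue(req);
                            return true;
                        }
                        return false;
                    });
                }

                Thread.Sleep(1000);
            }
        }
    }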
random crash might be in DoubleQueue instead. See http://pastebin.com/XhNBNqsc
a random crash in a load test yesterday
Curiously, the number of requests received is always one greater than that shown as handled - needs investigation
This reverts commit b060ce96d93a33298b59392210af4d336e0d171b.
Revert "Trying to hunt the CPU spikes recently experienced."
This reverts commit ac73e702935dd4607c13aaec3095940fba7932ca.
Revert "Comment out old inbound UDP throttling hack. This would cause the UDP"
This reverts commit 38e6da5522a53c7f65eac64ae7b0af929afb1ae6.
TriggerOnMakeRootAgent to the end of CompleteMovement.
Justin, if you read this, there's a long story here. Some time ago you placed SendInitialDataToMe at the very beginning of client creation (in LLUDPServer). That is problematic, as we discovered relatively recently: on TPs, as soon as the client starts getting data from child agents, it starts requesting resources back *from the simulator where its root agent is*. We found this to be the problem behind meshes missing on HG TPs (because the viewer was requesting the meshes of the receiving sim from the departing grid). But this affects much more than meshes and HG TPs. It may also explain cloud avatars after a local TP: baked textures are only stored in the simulator, so if a child agent receives a UUID of a baked texture in the destination sim and requests that texture from the departing sim where the root agent is, it will fail to get that texture.
Bottom line: we need to delay sending the new simulator data to the viewer until we are absolutely sure that the viewer knows that its main agent is in a new sim. Hence, moving it to CompleteMovement.
Now I am trying to tune the initial rez delay that we all experience in the CC. I think that when I fixed the issue described above, I may have moved SendInitialDataToMe to much later than it should be, so now I'm moving to earlier in CompleteMovement.
handlers.
This adds explicit cap poll handler support to the Caps classes rather than relying on callers to do the complicated coding.
Other refactoring was required to get logic into the right places to support this.
others, in preparation for experiments to direct baked texture uploads to a robust instance. No functional or configuration changes -- should work exactly as before.
are also non-blocking handlers.
checks may block, so they get a FireAndForget. Everything else is non-blocking.
This _shouldn't_ screw things up, given that all this does is dump the request in a queue.
requesting.
reception thread to sleep for 30ms if the number of available user worker
threads got low. It doesn't look like any of the UDP packet types are
marked async so this check is 1) unnecessary and 2) really crazy since
it stops up the reception thread under heavy load without any indication.
main inbound UDP processing loop, to avoid any chance that this is delaying the main inbound UDP loop.
The potential impact of this should be lower now that these requests are being placed on a queue.
not set
to print various counts of capability invocation by user and by cap.
This currently prints caps requests received and handled, so that an overload of received compared to handled, or a deadlock, can be detected.
This involves making BaseStreamHandler and BaseOutputStream record the counts, which means inheritors should override ProcessRequest() instead of Handle().
However, existing inheriting classes that override Handle() will still work, albeit without stats recording.
"show caps" becomes "show caps list" to disambiguate between the show caps commands.
to be performed immediately from client start
"debug lludp" options
Also moves the implementing code into LLUDPServer.cs, along with other debug commands from OpenSim.cs.
Makes all "debug lludp" commands activate only for the currently set scene, unless root is selected.
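A small, hypothetical sketch of that scene-scoping rule; the console plumbing here is a stand-in, not the actual OpenSim command registration API.

    // When a specific region is selected on the console, the command only
    // touches that region; at root scope it touches all of them.
    using System;
    using System.Collections.Generic;

    public class DebugLludpCommandScope
    {
        private readonly List<string> m_allRegions;
        private string m_selectedRegion;   // null means root, i.e. all regions

        public DebugLludpCommandScope(List<string> allRegions)
        {
            m_allRegions = allRegions;
        }

        public void SelectRegion(string name) { m_selectedRegion = name; }

        public void Run(Action<string> applyToRegion)
        {
            if (m_selectedRegion != null)
            {
                applyToRegion(m_selectedRegion);       // only the set scene
            }
            else
            {
                foreach (string region in m_allRegions)
                    applyToRegion(region);             // root: all scenes
            }
        }
    }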
resulted in never-ending asset requests
enabled.
This also moves the abort to RemoveRegion() rather than a destructor.
force selected and turn down to debug level